Whenever Professor Daniel Guetta hears about artificial intelligence, he says, it's always in the context of anecdotes: stories about incidents that happened to someone else, such as a deepfake used to con an individual or a person denied credit because of a decision made with AI.
While these tales make for good headlines and spicy conversation starters, they can crowd out the critical elements of the discussion: How concerned should we really be about the risks and opportunities this technology presents? What are the consequences of unethical AI usage?
Guetta, Associate Professor of Professional Practice at Columbia Business School, moderated this panel as part of the annual KPMG Peat Marwick/Stanley R. Klion Forum. Hosted by the Bernstein Center for Leadership and Ethics and co-sponsored by the Digital Future Initiative, the Forum featured experts from media, law, and industry who answered some of the burning questions we all share about the ethics of generative AI.
At the outset, Pulitzer Prize-winning tech journalist and New York Times bestselling author Julia Angwin ’00 highlighted an underlying issue in our collective discussions around AI, something she terms “the denominator problem.” “[AI] is not a monolith. And I’m obsessed with this idea that we need to establish a baseline,” she said.
Talia Gillis, a Columbia Law School professor who studies the law and economics of consumer markets and the role AI can play there, wholeheartedly agreed. “We’ve really avoided the difficult questions,” she said. Because we are so exposed to the kind of anecdotal evidence Guetta referred to, stories of something going awry because of AI, we tend to over-index and give too much weight to the bad, she added.
Jiahao Chen, the founder of Responsible AI, noted that one of the greatest problems we have when evaluating AI is that we have no clear definition of what success looks like. “We’re just not agreeing on the semantics of what we’re studying,” he said. “And so we throw around terms like bias as if we understand what that means. What we are calling AI is actually this diffuse, poorly defined blob of a system that encompasses many different things,” he added.
One of the major concerns Angwin shared relates to how generative AI actually works: these are models trained on enormous amounts of content scooped up from the Internet, which they use to create new output. Her concern is rooted in the fact that the models draw on publicly available information. “I’m really worried not just about the bias and the accuracy [of generative AI], but actually, what happens to our public Internet when it’s being overgrazed,” she said.
Speaking from the perspective of a legal expert, Gillis echoed Angwin’s concerns. “[Generative AI] is blindsiding us and presenting us with the larger question of what our copyright laws should be,” she said. “What does it mean that we rely on people to produce content that they’re not actually producing? What does it mean that people don’t get attribution? What does it mean that we have this whole labor force that is uncompensated for the labor they put in?”
In response to all of this, Gillis added, the challenge will be to understand whether copyright laws need to change to accommodate this new reality, and also whether we should be considering frameworks beyond copyright that are better suited to the policy issues now arising.
Finally, the conversation turned to the content produced by generative AI. Much news coverage has underscored the extent to which AI has the potential to exacerbate entrenched bias, sexism, racism, and other forms of discrimination. But, Professor Guetta asked the panelists, is there a chance that this effect has a counterintuitive upside, in that it raises awareness of issues that were previously underappreciated or not acknowledged at all?
Chen was not convinced, warning that the failure to acknowledge retrospective bias in data remains a major problem. We almost need a disclaimer, he joked, like the one attached to investments: “past performance is not indicative of future performance.”
Angwin, on the other hand, ended the conversation on a slightly more optimistic note. “I still think we’re better off for knowing,” she said of the bias highlighted by AI. “Indeed, as a journalist, I have devoted my life to this premise. I am constantly trying to investigate and bring facts to light in the hopes that they will be solved. So, this is the optimism that keeps me going.”
Generative AI is a constantly evolving technology that will inevitably permeate everyday life, much as the Internet has over the past thirty years, which makes it all the more important to keep ethics and ethical decision-making at the forefront. Amid much uncertainty and unpredictability, perhaps the one thing we can reliably forecast is that the pace of innovation won’t slacken any time soon. With that in mind, our priority should be to learn about the technology, leverage it, and embed ethics in our adoption of it at every stage.