Misinformation is nothing new: people and organizations have been publishing claims that contradict and distort well-verified facts for centuries. Long before the US political climate of the 2010s gave rise to terms like "fake news" and "alternative facts," misinformation and disinformation were used by everyone from rulers in ancient Rome to 20th-century satirists.
The misinformation ecosystem of the past decade, however, is new, thanks in large part to the rise of social media and, more recently, artificial intelligence. The ease with which content can now be created and shared, along with algorithms optimized for engagement, means misinformation can spread quickly, especially in an environment that rarely cues people to fact-check. In the past, misinformation was spread by a select few who wielded influence; new platforms coupled with AI tools have democratized the practice.
It's a tangled web that Gita Johar, the Meyer Feldberg Professor of Business at Columbia Business School, captures in a framework she calls the Three Ps: publishers, people, and platforms. Publishers, intentionally or not, may create false and sensationalized content, such as misinformation about climate change. People then consume and share that content, often through social media platforms, in ways that can be deeply problematic.
Like a three-legged stool, Johar says, none of the three can stand without the others. Understanding this is the key to preventing the spread of misinformation, an increasingly potent hazard that affects not only people but also private businesses, which stand to lose their reputations, their partnerships, and ultimately their revenue.
In a conversation with CBS, Johar shared more about the rise of AI-fueled misinformation, how it can be prevented, and what exactly is at stake for businesses caught in the mix.
CBS: Why is AI-based misinformation such an important battle for businesses?
Gita Johar: When businesses pay for ad placement on social platforms, it's often done through very opaque auction systems, so they never know where their ads are going to show up. Ads can appear on misinformation sites without the brand's knowledge, because social media companies don't tell brands every single website their ads were placed on. They just tell them how many people they reached, how many impressions they got, and so on.
As AI enters the picture, the amount of misinformation created on these sites is going to multiply, so these ads will have an even greater probability of showing up on websites that can negatively affect a brand's reputation. AI can also be used by competitors and disgruntled consumers to very quickly create fake news about your brand, leading consumers to see it less favorably.
CBS: How else is AI compounding the risks of misinformation?
Johar: People have started realizing that AI is behind a lot of this misinformation. Over time, they're not going to know what to trust anymore, plus there's such a deficit of trust in society as it is. As AI does more and more, even if you have disclaimers saying such and such was produced by AI, what you're going to see is consumers becoming more skeptical of information.
This is where you need trusted sources of information. Groups like Media Bias/FactCheck rate information on its legitimacy and flag publishers of fake news. The problem is, given the polarization in society, people don't believe those labels either. When you start mistrusting information, you should be able to go to a corner where you know the information can be trusted. If you don't know what to trust, I think that leads to very, very bad outcomes.
CBS: Can we trust AI to help us fact check or at least identify publishers of misinformation?
Johar: We know that AI hallucinates and cannot yet be fully trusted. That's the issue with using AI to fact-check. You need, and I am working on, a kind of machine learning-based fact check with clear parameters on how much to trust it, such as the uncertainty around its true/false estimates. We also must involve people in fact checking so that, in the end, everyone feels involved in this ecosystem and begins to trust it more. Long term, it would be like a Wikipedia model for fact checking.
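As a rough illustration of that idea, and not Johar's actual system, one way to attach uncertainty to a classifier's true/false verdict is a bootstrap ensemble: train many copies of a simple claim classifier on resampled data and report the spread of their predictions alongside the mean. The toy claims, labels, and the fact_check helper in this sketch are all hypothetical.

```python
# Minimal, hypothetical sketch of an uncertainty-aware fact check.
# An ensemble of simple text classifiers votes on a claim; the spread
# of the votes serves as the uncertainty around the true/false estimate.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

# Toy labeled claims (illustrative only): 1 = true, 0 = false.
claims = [
    "Smoking increases cancer risk",
    "The earth is flat",
    "Water boils at 100 C at sea level",
    "Vaccines cause autism",
]
labels = np.array([1, 0, 1, 0])

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(claims)

# Bootstrap ensemble: each model trains on a resampled (stratified)
# copy of the data, so disagreement reflects estimation uncertainty.
models = []
for seed in range(25):
    X_b, y_b = resample(X, labels, random_state=seed, stratify=labels)
    models.append(LogisticRegression().fit(X_b, y_b))

def fact_check(claim: str):
    """Return a verdict plus mean and std of p(true) across the ensemble."""
    x = vectorizer.transform([claim])
    probs = np.array([m.predict_proba(x)[0, 1] for m in models])
    verdict = "likely true" if probs.mean() > 0.5 else "likely false"
    return verdict, probs.mean(), probs.std()

verdict, p_true, sigma = fact_check("Boiling water reaches 100 degrees")
print(f"{verdict}: p(true) = {p_true:.2f} +/- {sigma:.2f}")
```

A wide spread across the ensemble signals a verdict that should not be trusted blindly, which is the "clear parameters on how much to trust it" point in practice.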
CBS: What role should regulators be playing here, if any?
Johar: This is exactly the kind of problem where you need regulation and government intervention. You have that in the EU with the Digital Services Act, where they're trying to fight misinformation by holding platforms responsible. In the United States, current law is based on the Telecommunications Act of 1996, which treats platforms like internet service providers: because they're seen as not responsible for the content on their sites, the United States basically doesn't regulate them.
The minute you start regulating, there are lots of avenues. The EU is leading the way with the Digital Services Act, which says platforms can be fined up to 6 percent of their global revenue if they're found to propagate misinformation. That's a heavy stick, but enforcement and implementation are an issue.
Now, if you start regulating in the United States, you run into the First Amendment. I think that is the problem with regulation. You really need a policy that makes sense and prevents the wide and fast spread of misinformation without running into these First Amendment issues. Of course, such regulation also depends on the definition of misinformation being very clear and widely accepted.
CBS: Government regulation aside, do advertisers and consumers have the ability to effect change?
Johar: I think the big solution is actually advertisers. Advertisers can form a kind of trade association, like the National Advertising Review Board of the Better Business Bureau. If AI is going to start creating all kinds of false information about your brand, you have to be very careful to protect it. So it's in the interest of all advertisers to band together and withhold advertising dollars from any platform that is not seriously monitoring misinformation. A lot of work is needed, but businesses can lead the way here. They have all the power, but they haven't used it because they act as individual advertisers.
So advertisers need to work together to make this happen. It's good for them, and it's good for society. It's really a win-win. Then they can actually force platforms to abide by some kind of rules and procedures and make sure — especially as we go into the elections — that they're actually monitoring and trying to prevent the spread of misinformation.
Watch disinformation expert Renee DiResta discuss AI and state propaganda during a recent visit to Columbia Business School: