On social media, mitigating the spread of misinformation presents an ongoing challenge, as platforms struggle to fact-check a constant deluge of posts.
New research from Columbia Business School could help. Researchers plumbed a trove of users’ existing posts on Twitter (now X) for language patterns that, alongside socio-demographics, social media activity, emotions, and personality traits, predict which users are most likely to share fake news, offering criteria that could inform fact-checking efforts.
Key Takeaways:
- Social media users who share fake news display distinctive linguistic patterns and emotions across their posts.
- They display higher usage of words associated with anger compared to other users, except those who share fact-checks.
- They also display higher usage of power-related, religion-related, and death-related words compared to most other users, including fact-check sharers.
- Incorporating textual cues from the posts of fake-news sharers into predictive models improved the models’ accuracy in identifying users with the potential to share misinformation.
- The findings point to promising interventions for reducing fake-news sharing: One experiment found that using empowering language in advertisements for fact-checking tools could effectively encourage social media users to adopt them.
The threat of online misinformation garnered widespread attention in the wake of the 2016 US presidential election, when multiple federal investigations found that Russian operatives had waged a disinformation campaign to influence the election, denigrating Hillary Clinton as a candidate and bolstering the campaign of Donald Trump.
The events caught the attention of Gita Johar, the Meyer Feldberg Professor of Business in the Marketing Division at Columbia Business School. Johar and her colleagues became interested in why platforms can’t control misinformation. One of the main issues they discovered was that human fact-checkers hired by platforms to verify posts were quickly overwhelmed by the amount of content. “The platforms throw up their hands and say, ‘Look, there are so many posts. It’s impossible to really check or flag them all,’” Johar says.
Building on prior research in the area of misinformation, she and her colleagues set out to help address the challenge by investigating who is most likely to share misinformation in the form of fake news. Johar notes that users may not be intentionally sharing misinformation, since they may not know an article they repost is false. Still, she says, the spread of such information can be harmful, regardless of the user’s intent.
“Our simple starting point was, given that there’s this fire hose of information that goes out on social media, if you’re Meta or if you’re — at the time — Twitter, how do you know what, or who, to fact-check?” Johar says.
While prior research in the field had focused on finding markers of fake news based on the language used in the articles being shared, Johar and her colleagues were interested in uncovering markers for who shares fake news. Prior work had used surveys to define socio-demographics of fake-news sharers, but the researchers felt they could get more specific.
“Past research has said it’s mostly men, or it’s mostly conservative people, and that, we felt, was a very broad generalization,” Johar says. “We wanted to go beyond that and think about what motivates people to post this kind of information.”
Insights in Tweet Histories
For clues, the team turned to a resource Johar says had been overlooked: users’ existing posts. In 2017, the researchers pulled individual users’ last 3,200 tweets, the maximum allowed by Twitter at the time. (The study refers to the platform as Twitter, since it had not yet been rebranded as X when the research was conducted.) They searched these histories for posts that had been flagged as fake news by Snopes or identified as coming from fake-news publishers by the third-party tool Hoaxy. After establishing that an account had shared fake news, they examined the characteristics of the sharer, as indicated by the emotions and personality reflected in their language.
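To illustrate the screening step described above, here is a minimal Python sketch of how a pulled tweet history might be checked against a list of flagged fake-news publishers. The domain list and tweet format are placeholders, not the study’s actual data, which drew its labels from Snopes and Hoaxy.

```python
# Illustrative sketch: scan a user's tweet history for links to flagged publishers.
from urllib.parse import urlparse

# Hypothetical domains; the study's labels came from Snopes and Hoaxy.
FLAGGED_DOMAINS = {"example-fake-news.com", "another-hoax-site.net"}

def shared_fake_news(tweets):
    """Return True if any tweet in the history links to a flagged publisher."""
    for tweet in tweets:
        for url in tweet.get("urls", []):
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            if domain in FLAGGED_DOMAINS:
                return True
    return False

timeline = [{"text": "Shocking story!", "urls": ["https://example-fake-news.com/article"]}]
print(shared_fake_news(timeline))  # True -> label this account as a fake-news sharer
```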
Using the Linguistic Inquiry and Word Count text analysis tool, which quantifies word use in terms of characteristics, emotions, and traits, the researchers were able to differentiate fake-news sharers from others. Fake-news sharers used more words related to power, like powerful and control, and to high-arousal emotions, like anger and anxiety. They also posted more words associated with existential concerns like death and religion, as well as money. They used fewer words related to friends and family and to low-arousal emotions, whether positive or negative.
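For readers curious about the mechanics, the sketch below shows dictionary-based word counting in the spirit of LIWC. The real LIWC dictionaries are proprietary and far larger; these tiny category lists are placeholders that simply demonstrate how per-category word rates can be computed from a user’s text.

```python
# Illustrative LIWC-style word counting with placeholder category dictionaries.
import re

CATEGORIES = {
    "power":  {"powerful", "control", "dominate", "authority"},
    "anger":  {"angry", "furious", "outrage", "hate"},
    "death":  {"death", "dead", "die", "kill"},
    "family": {"family", "mother", "father", "friend"},
}

def category_rates(text):
    """Share of tokens falling in each category, per 100 words."""
    tokens = re.findall(r"[a-z']+", text.lower())
    total = max(len(tokens), 1)
    return {cat: 100 * sum(t in words for t in tokens) / total
            for cat, words in CATEGORIES.items()}

print(category_rates("You have no control; they want power and they hate us."))
```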
The researchers then used the trait prediction engine Magic Sauce to estimate users’ personality traits, specifically those defined in psychology as the Big Five: openness, conscientiousness, extraversion, agreeableness, and neuroticism. The research showed fake-news sharers scored higher on openness and neuroticism but were less agreeable, extroverted, and conscientious than randomly selected Twitter users.
Interestingly, some of these characteristics overlapped with users who shared fact-checks of fake news, suggesting some traits could be an indicator of people who share posts related to misinformation, whether it’s the fake news or a correction to it. “That was quite a revelation,” Johar says. “Fake-news sharers and fact-check sharers are pretty similar in many ways. Both are motivated by anger.”
She hopes to further study this surprising alignment between the two seemingly different types of users. “You don’t want to misclassify ‘the good guys,’ the people who are sharing fact-checks, as fake-news sharers,” she says. “That’s why the predictive study results showing we can improve prediction of fake-news sharers are very important. It’s critical to properly differentiate between them and not misclassify fact-check sharers as fake-news sharers, and, maybe in a future study, even try to see how you can create more fact-check sharers.”
Who Shares Fake News?
In a second study, the researchers found that drawing from these users’ distinctive word choices to add textual cues to predictive models significantly enhanced those models’ accuracy in identifying who shares fake news. This finding could have real implications for platforms as they attempt to better target their fact-checking efforts. A simple predictive model could make a big difference once platforms know how to leverage it, according to Johar.
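A rough sketch of that idea, with made-up feature names and simulated data rather than the paper’s actual models, is to train one classifier on baseline user features alone and another with word-category rates added, then compare accuracy. With random placeholder data both models hover near chance; the point is only the pipeline.

```python
# Illustrative comparison: baseline user features vs. baseline + textual cues.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

baseline = rng.normal(size=(n, 3))    # placeholder socio-demographic/activity features
text_cues = rng.normal(size=(n, 4))   # placeholder power/anger/death/religion word rates
y = rng.integers(0, 2, size=n)        # 1 = fake-news sharer, 0 = comparison user

for name, X in [("baseline only", baseline),
                ("baseline + textual cues", np.hstack([baseline, text_cues]))]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```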
“Going forward, instead of trying to keep up with examining the content of each flagged post — and missing a lot of misinformation that may not be flagged — I could observe the behavior of a new person on the platform, draw a profile of them from the language of their posts and other information, and see whether or not they need to be prioritized for fact-checking their future posts,” she says.
Johar notes that the current discourse on online speech in the United States is encouraging a move away from both algorithmic and third-party fact-checking, as X and now Meta have done. However, the European Union’s Digital Services Act requires that social platforms have robust policies to prevent the spread of misinformation, and the insights developed in this research could help platforms comply with those requirements.
Reducing the Spread of Fake News
In additional exploratory experiments, the researchers tested ideas for practically reducing the spread of fake news, a goal with no simple answer. For instance, they tried manipulating people’s anger levels in the moment but found no impact on their tendency to spread fake news. Sharing fake news seemed linked more to “trait anger,” or “chronic anger,” as Johar put it, than to a heat-of-the-moment reaction. “Angry people just tended to share more fake news,” she says. “In our study, we didn’t find that mitigating anger in the moment helped to reduce their sharing of fake news, at least the way we did it. Maybe there are stronger ways to do it that could help.”
Helping users feel more powerful in the moment proved more effective. The researchers found that advertisements for a fact-checking plugin that used terms related to power, like telling users, “You have the power” to stop fake news, and, “You are in control,” made users more likely to download the tool. “There is an intervention that actually works,” Johar says. “Making people feel powerful in the moment can reduce users’ sharing of fake news.”
This research offers new insights for marketers and policymakers who aim to mitigate fake news. In addition to the opportunity to predict which users are most likely to share fake news, based not just on their demographics but also on how they word their posts, Johar believes there could be implications for how tech companies manage emotions on their platforms. If platforms worked to make people feel empowered and in control through, for example, the colors and language they use, she says, it might reduce the motivation users feel to share spurious articles.
“If they can get users to be calm and feel powerful in who they are, that might be a way to actually reduce the temperature on these platforms,” Johar says. “But of course, that’s something you don’t always control.”

Adapted from “Who Shares Fake News? Uncovering Insights from Social Media Users’ Post Histories” by Verena Schoenmueller of ESADE University, Simon J. Blanchard of Georgetown University, and Gita V. Johar of Columbia Business School.