AI is constantly transforming the way businesses operate, compete, and innovate. From generating detailed digital twin personas and examining group dynamics to building robust benchmarking datasets, AI is particularly useful for understanding and replicating human behavior.
Columbia Business School faculty are at the forefront of researching how best to harness AI for this purpose, along with how the technology can drive decision-making and shape strategy.
As part of the Columbia University AI Summit, CBS’s Digital Future Initiative hosted an expert session, “AI in Business: Cutting-Edge Research and Real-World Impact,” showcasing research from CBS and Columbia University faculty.
“AI cuts pretty much through everything that we do here at CBS, specifically around research,” said Oded Netzer, Vice Dean for Research and Arthur J. Samberg Professor of Business, who introduced the session.
Together, the speakers painted a comprehensive picture of how AI is redefining our understanding of human behavior, business innovation, and efficiency.
Generating ‘Digital Twins’
Tianyi Peng, an assistant professor of business in CBS’s Decision, Risk, and Operations Division, demonstrated how large language models (LLMs) can be used to generate AI agents that mimic human behavior – in other words, digital twins.
Through his research, Peng tasked LLMs with generating consumer personas based on product descriptions. For instance, when given a randomly chosen Amazon product, the model generated a plausible shopper persona, complete with name, age, occupation, and shopping preferences.

Peng explained that when simulating a persona for a high-end blender, for example, the LLM produced a profile that resonated with the expected attributes of a tech-savvy product manager, illustrating the model’s capability to align personality traits with specific product types.
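As a rough illustration of the idea, and not Peng's actual pipeline, a persona-generation step can be sketched as a single structured prompt to an LLM. The client library, model name, prompt wording, and JSON fields below are assumptions for illustration only.

```python
# Illustrative sketch of LLM-based persona generation -- not the
# researchers' actual pipeline. Assumes the `openai` Python client
# and an OPENAI_API_KEY environment variable; the model name, prompt
# wording, and JSON fields are placeholders.
import json
from openai import OpenAI

client = OpenAI()

def generate_persona(product_description: str) -> dict:
    """Ask the model to invent a plausible shopper persona as JSON."""
    prompt = (
        "Given this product description, invent a plausible shopper persona. "
        "Return JSON with keys: name, age, occupation, shopping_preferences.\n\n"
        f"Product: {product_description}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

persona = generate_persona("High-end blender with 10 speed settings and app control")
print(persona)  # e.g. a tech-savvy professional with a taste for premium kitchen gear
```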
However, as with many AI-generated insights, bias was consistently present in these personas. As more detailed information was added to the persona profiles, the bias became increasingly pronounced, at times to an astonishing degree. This bias is distinct from the "persona simulation bias" identified in earlier literature: it arises from the persona generation process itself, hence the term "persona generation bias." LLMs often produce stereotypical and overly positive personas, sometimes to the point where the results deviate significantly from reality.
Read more: Tracking AI’s Impact on Creativity, Leadership, and Innovation
The finding highlights a critical challenge for sales and marketing professionals who utilize AI: while digital twin personas hold significant promise for product testing, market research, and even political forecasting, they require careful calibration.
“Always remember to calibrate, ask whether the simulation and the persona profile itself match your applications. That’s important,” Peng said.
Watch Prof. Peng's presentation:
Beyond Static Models
Research from Olivier Toubia, the Glaubinger Professor of Business in CBS’s Marketing Division, examines how digital twins can best replicate human behavior.
He noted that there are two main approaches to AI agents: building a single "super agent" with the best possible skills, or assembling a diverse panel of "imperfect" agents that reflects human heterogeneity. For applications like opinion surveys, market research, and creativity, the latter approach of capturing human diversity is more important, according to Toubia.
Toubia also highlighted the importance of moving beyond static, individual models to explore how interactions among AI-generated personas can influence collective behavior. To research this further, he and his co-authors created a panel of digital twins of approximately 2,000 real individuals. Participants completed a series of surveys covering personality assessments, cognitive tests, economic preference evaluations, and behavioral experiments.
By administering these surveys over multiple waves, Toubia and his colleagues were able to establish a test-retest reliability benchmark, which is crucial for validating the consistency and predictive power of the digital twins. The data provides a baseline — consistency in self-reported behavior — that researchers can use to gauge the performance of their predictive models.
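As a rough illustration of what such a baseline looks like in practice (the measure, column names, and numbers below are assumed for illustration, not taken from the study), test-retest reliability can be computed as the correlation between the same participants' responses in two survey waves:

```python
# Illustrative test-retest reliability computation on hypothetical data;
# this is not the study's dataset or code.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical long-format survey data: one row per participant per wave.
df = pd.DataFrame({
    "participant_id":  [1, 2, 3, 4, 1, 2, 3, 4],
    "wave":            [1, 1, 1, 1, 2, 2, 2, 2],
    "risk_preference": [0.62, 0.35, 0.80, 0.51, 0.58, 0.41, 0.77, 0.49],
})

# Put each participant's wave-1 and wave-2 scores side by side.
wide = df.pivot(index="participant_id", columns="wave", values="risk_preference")

# Test-retest reliability: how consistent people are with themselves.
# A digital twin cannot reasonably be expected to predict wave-2 answers
# better than the participants' own wave-1 answers do.
r, _ = pearsonr(wide[1], wide[2])
print(f"Test-retest reliability: r = {r:.2f}")
```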
“We want to replicate a panel that is going to be representative of imperfect humans, to predict how they would behave,” Toubia said.
In the future, Toubia and his co-authors plan to make the data publicly available and create a common benchmark dataset that researchers and companies can use to test and improve digital twin approaches.
Watch Prof. Toubia's presentation:
Welcome to the Real World
Real-world applications require AI agents to simulate emergent behavior dynamically in complex scenarios. Research from Lydia Chilton, an assistant professor of computer science in the Columbia University School of Engineering and Applied Science, showcased how AI agents, when placed in simulated environments, can exhibit behavior that mirrors the nuanced and often unpredictable nature of human interactions.
Take the “shopping cart theory,” for example, which asks why shoppers at big-box stores may or may not return their carts to the designated corral in the store’s parking lot. In her research, Chilton used AI agents to simulate the behavior of shoppers in the parking lot. The simulation demonstrated that when the agents faced different stressors, such as having a child in the car or a long walk to the cart return, their behavior shifted in ways that closely paralleled real human actions.
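A toy sketch of how such a simulation might be wired up (an illustration under assumed prompts, stressors, and model names, not Chilton's actual system): each agent is given a situation, and an LLM decides, in character, whether the cart gets returned.

```python
# Toy LLM-agent simulation of the shopping cart scenario -- illustrative only;
# the prompts, stressors, and model name are assumptions, not the actual system.
from openai import OpenAI

client = OpenAI()

SCENARIOS = [
    {"agent": "parent",   "stressors": "a toddler waiting in the car, corral 10 meters away"},
    {"agent": "commuter", "stressors": "running late for work, corral 80 meters away in the rain"},
    {"agent": "retiree",  "stressors": "no time pressure, corral right next to the parking spot"},
]

def simulate(agent: str, stressors: str) -> str:
    """Ask the model, in character, whether this shopper returns the cart."""
    prompt = (
        f"You are a shopper ({agent}) who just loaded groceries into your car. "
        f"Situation: {stressors}. "
        "Do you walk the cart back to the corral? Answer RETURN or LEAVE, "
        "then give a one-sentence reason."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for s in SCENARIOS:
    print(s["agent"], "->", simulate(s["agent"], s["stressors"]))
```

Sweeping over many such stressor combinations costs only API calls, which is what makes agent-based testing so much cheaper than recruiting human participants.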
She noted the value of using simulated human agents to thoroughly test systems before deploying them in the real world. AI agents allow testing of complex human coordination and interaction scenarios at little to no cost.
"Testing that with humans is expensive. Testing with agents is nearly free,” Chilton said.
Chilton emphasized that simulations such as these must account for human psychological factors and motivations, since their deployment can significantly impact how people behave and interact.
Watch Prof. Chilton's presentation:
A Human-Centric Approach to AI
As AI becomes ubiquitous in workplaces, Stephan Meier, the James P. Gorman Professor of Business at CBS and chair of the School’s Management Division, argues that it is ultimately human workers and leaders – in tandem with AI – who will define the future of work.
He highlighted the fact that despite widespread AI adoption, human productivity has yet to increase drastically. As with the Information Age before it, productivity gains cannot be fully realized until AI is used to enhance, rather than diminish, the human experience.
“You see the AI age everywhere but in productivity statistics,” said Meier, adapting a famous quote by Nobel laureate Robert Solow about the IT revolution and its lagged effect on productivity. He continued: “There is no increase in productivity yet, or at least not that much.”
Drawing on insights from his book “The Employee Advantage,” Meier argues for a human-centric approach that creates value by centering human motivation. He points out that in many industries AI has been implemented to enhance the customer experience, and in doing so has increased companies’ margins. When it comes to the employee experience, however, AI is consistently underutilized. Rather than focusing on cost reduction, managers should use AI to augment employee motivation, for example by automating routine tasks so employees can spend more time on higher-value activities.
Meier also highlighted a case study involving Morgan Stanley, in which he and CBS Professor Jeffrey Schwartz found that setting up employees for success while working with AI also requires strong leadership and extensive employee engagement. Ahead of introducing a new AI-based wealth management system, Morgan Stanley held hundreds of meetings with advisors to understand their concerns and incorporate their feedback.
The company’s employees have readily embraced the technology. The reason, according to Meier, is that managers communicated clearly that the goal of the new system was to augment the advisors’ work, not replace them.
“Focus on creating more value and not just cost or efficiency. Not doing what we always did, but cheaper, but doing new stuff that we couldn't do before. As a result companies will gain an AI-enabled Employee Advantage,” Meier said.
Watch Prof. Meier's presentation:
Explore more insights from Columbia University's AI Summit.
Watch all five faculty members discuss their AI-related research and how the technology is driving decision-making, shaping strategy, and redefining industries: