NEW YORK CITY – AI tools are reshaping how hiring managers evaluate candidates, enabling them to expand talent pools, identify biases, and make more informed, data-driven decisions. But in the job application process, AI also complicates things. Before AI, candidates relied on their own expertise to write resumes and cover letters. Now, candidates increasingly use tools like ChatGPT to craft polished applications, making it harder for hiring managers to determine whether a candidate truly has the expertise they claim. Using experiments with global candidates, entrepreneurs, and evaluators, a new Columbia Business School study found that generative AI reduces evaluators’ ability to distinguish skilled candidates from less-qualified ones by 4–9%, exposing a growing dilemma at the intersection of human expertise and AI assistance.
In the paper, “Does AI Cheapen Talk? Theory and Evidence from Global Entrepreneurship and Hiring,” Columbia Business School Professors Bo Cowgill and Nataliya Langburd Wright, along with Yeshiva University Professor Pablo Hernandez-Lagos, examine the effect of generative AI on evaluators’ ability to assess expertise in job applications and startup pitches. To explore this, the researchers conducted experiments with over 1,100 participants from around the world, asking each job candidate and entrepreneur to produce four submissions—two in their area of expertise (one with ChatGPT and one without) and two outside their expertise (again, one with AI and one without). Over 800 evaluators, including experienced hiring managers and investors, reviewed eight submissions each, allowing the researchers to measure whether AI-assisted materials made it harder to distinguish between experts and non-experts and to track how often evaluators sought additional information beyond the submissions.

The study revealed that while AI improved the overall quality of submissions, it also narrowed the quality gap between documents submitted by skilled and less-skilled candidates, making it harder for evaluators to identify true expertise. Evaluators reviewing AI-assisted materials were more likely to seek additional information — such as background checks or references — to verify candidates’ qualifications. But this was not the case everywhere. When candidates came from non-English-speaking countries, the research found that AI primarily helped experts communicate more effectively, actually improving evaluators’ ability to identify true expertise. In contrast, in English-speaking contexts, AI disproportionately benefited less skilled candidates, making it harder to distinguish between levels of expertise.
These findings shed light on the nuanced impact of generative AI on hiring and pitching processes and underscore the need for organizations to adapt their screening methods in an AI-driven world. As written materials become increasingly uniform, companies may need to update their hiring processes and seek additional signals through live assessments or task-based exercises. Future research could explore how AI affects real-time evaluations, such as job interviews and interactive assessments, and whether different AI models produce varying effects on expertise perception.
To learn more about the cutting-edge research being conducted at Columbia Business School, please visit the School’s website.
###