Generative AI and large language models are among the most transformational technologies to emerge in decades, promising increased productivity, efficiency, and new opportunities for innovation.
Yet, even before the mainstream rise of ChatGPT and its cohort in late 2022, financial services stood at the forefront of the AI revolution: major institutional players such as JPMorgan Chase, Vanguard, and Morgan Stanley had already been using AI to assist with asset management for years.
Now, both big and small companies are looking to use these technologies for everything from managing funds to hiring employees. Venture capitalists and startup founders alike rely on AI tools to build better products and make critical investment decisions. The market for AI tools in asset management alone is projected to hit nearly $12 billion by 2030, according to data from research firm MarketDigits. That means it is imperative for today's business leaders to be aware of AI's influence, its potential regulation, and how to leverage the technology to get ahead—and stay ahead—of competitors.
As AI's influence grows, Columbia Business School's faculty are at the forefront of research on how the technology will disrupt the financial services industry and how business leaders are using it to get ahead.
Industry participants have access to a wealth of information—financial data, news headlines, and company reporting, for example—yet they often fail to pay attention to much of it, even though it may be crucial, says Harry Mamaysky, professor of professional practice in CBS's Finance Division.
“Using AI, we can gain an edge from this greater volume of information to understand the things that we should be worried about,” he adds.
Mamaysky also serves as faculty director for the School's Program for Financial Studies, which promotes research in financial economics and quantitative finance. In addition, Mamaysky and his team manage the School's Master of Science in Financial Economics program.
CBS faculty members who specialize in researching the finance industry have analyzed the ethical issues and regulatory impacts of AI to understand how companies in the space can better leverage the technology and why it is certain to change the industry forever.
David vs. Goliath
As AI and finance become increasingly intertwined, both institutional players and smaller startups have invested heavily in making the technology work to their advantage. In February 2024, Bloomberg reported that industry giant Vanguard has experimented with using machine learning to manage several of its active stock funds to the tune of $13 billion. Two funds—the $7.8 billion Vanguard Strategic Equity Fund and the $1.5 billion Vanguard Strategic Small-Cap Equity Fund—beat peer funds and exceeded benchmarks in 2023.
Similarly, smaller financial firms, such as Live Oak and Five Star Bank, have adopted machine learning to help guide their credit and auto lending decisions. However, larger firms benefit the most, according to research by Assistant Professor of Business Tania Babina, who works in CBS's Finance Division.
In a paper forthcoming in the Journal of Financial Economics, Babina, along with fellow researchers Anastassia Fedyk, Alex Xi He, and James Hodson, found that larger financial firms in particular can capitalize on AI because they are able to generate more customer data than their smaller competitors. This creates a snowball effect, where already large firms that invest in AI grow larger, gaining more sales, employment, and market share in comparison to their smaller competitors. Additionally, larger firms can further expand through product innovation and increased product offerings.
“The benefits from AI depend to a large extent on who owns big data—the key input to AI technologies,” according to the paper.
Still, smaller firms shouldn't be counted out from leveraging AI technology to great success. A case in point is Maverick Real Estate Partners, a private equity fund manager that invests in distressed debt. David Aviram '06 and Ted Martell '06 began cultivating the firm's data infrastructure a decade ago. By collecting property data at the Manhattan County Clerk's Office, the two were able to develop a proprietary algorithm to detect promising leads. Today, the firm has invested nearly $1 billion in New York's commercial real estate industry, thanks in part to the quality of its data.
Effective AI users should be focused on collecting a large amount of high-quality data and dedicating resources to data organization. In doing so, these companies will have a better chance at monetization down the road, according to Michael Johannes, the Ann F. Kaplan Professor of Business at CBS and chair of the School's Finance Division.
“It's about getting your data and getting it organized,” he says, adding that the role of data and its use can vary significantly depending on where you work. For example, the data-driven approaches used by a managing director at an investment bank would likely differ from those employed by someone running a private equity fund.
Still, he adds, the good news is that companies across the financial industry are increasingly leveraging data analytics to generate revenue streams and make more informed, data-backed decisions, regardless of their specific business model.
Ethical Issues Remain
Countless headlines have revealed the ingrained biases of AI models in action, and according to Mamaysky, the financial services industry is by no means immune to the ethical concerns raised by AI technology. A single model can have billions of parameters, creating complications and emergent properties that aren't always obvious.
“The models make hiring decisions; they make credit extension decisions. It doesn't really understand causality in the way that we think about causality. It understands correlations but doesn't know that there are mitigating circumstances,” Mamaysky explains.
One of the most common concerns among industry players is hallucinations: errors in AI-based predictions caused by insufficient or erroneous data that, in the case of the finance industry, can result in flawed models. Even when the amount of data is sufficient, training an AI on a large dataset can mean the model absorbs biases. That is because datasets are collected and created by humans, who consciously or unconsciously allow bias to creep in.
Biases can also reflect historical inequities. For example, before the Equal Credit Opportunity Act of 1974, it was not unlawful for US creditors to discriminate against applicants on the basis of characteristics such as race, gender, and marital status. This often led credit managers to make unfair assumptions about marginalized groups' ability to manage their finances and their overall creditworthiness, according to a report by the New York State Department of Financial Services.
Though AI alone “has no innate sense of right or wrong,” Mamaysky says, adept AI users should be vigilant when it comes to biases in a dataset, lest they undermine the validity of any resulting models.
Incoming Regulation
Because consumer data is needed to produce AI modeling, regulators have started taking notice. In October 2023, the US Consumer Financial Protection Bureau (CFPB) proposed the Personal Financial Data Rights rule. This omnibus regulation would shift financial service providers toward a policy known as open banking.
Open banking would grant new protections to consumers who have had their data misused by financial institutions. These protections would forbid financial institutions from hoarding a person's data and require companies to, at a customer's request, share data with other companies offering better products.
“With the right consumer protections in place, a shift toward open and decentralized banking can supercharge competition, improve financial products and services, and discourage junk fees,” CFPB Director Rohit Chopra said in a statement announcing the rule.
Currently, consumers' access to their own financial data is inconsistent across financial institutions. A lack of norms in the market allows incumbent finance companies to hide or obscure important data points, like prices. This, in turn, undercuts the ability of upstart financial institutions to compete, according to the CFPB.
Perhaps most critically, the rule would allow people to break up with banks that provide bad service and forbid companies that receive sensitive personal financial data from misusing or wrongfully monetizing it.
Babina analyzed early evidence of the impact of open banking policy using a hand-collected dataset of policies from 49 countries. She and her fellow researchers found that the implementation of open banking policies is associated with significant increases in venture capital fintech activity across many different financial products as well as greater financial knowledge among consumers.
“From a government policy perspective, it is so important to think about who owns customer data—you or the firm you are buying from. That is where open banking policies that reallocate the ownership of data from firm to customer are coming into play,” Babina says.
The Personal Financial Data Rights rule will likely be finalized by fall 2024.
Using AI to Further Finance Research
In 2018, Mamaysky and Charles Calomiris, the School's Henry Kaufman Professor Emeritus of Financial Institutions and professor emeritus of international and public affairs, began researching how regulation can impact companies' growth, leverage, profitability, and equity returns. Assisted by then-CBS PhD student Ruoke Yang MPhil '17 PhD '19, the researchers developed language models to analyze when companies talk about regulations on corporate earnings calls.
The researchers began with a manual textual analysis of all quarterly earnings calls of publicly traded companies from S&P Global's Transcripts Data from 2009 to 2018. They then merged the conference call data with pricing and accounting data from the Center for Research in Security Prices and Compustat.
Initially, Mamaysky and Calomiris used a language model to analyze the data, counting how many times companies' leaders mentioned regulation in the earnings calls. However, their methodology soon evolved: they fed references to regulation into ChatGPT's interface and asked whether each sentence indicated increasing or decreasing regulation. The researchers eventually discovered that higher regulatory exposure resulted in slower sales, reduced profitability, and higher post-call equity returns for smaller companies.
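To make that two-step approach concrete, here is a minimal sketch, not the researchers' actual pipeline, of how such an analysis might look in Python: first pull the regulation mentions out of a call transcript, then ask a ChatGPT model whether each mention points to increasing or decreasing regulation. The sample transcript, prompt wording, and model name are illustrative assumptions.

```python
# Minimal sketch of the two-step approach described above (illustrative only):
# (1) find sentences in an earnings-call transcript that mention regulation,
# (2) ask a ChatGPT model whether each one suggests increasing or decreasing
#     regulation. Transcript text, prompt, and model name are assumptions.
import re
from openai import OpenAI  # official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REGULATION_TERMS = re.compile(r"\bregulat\w*\b", flags=re.IGNORECASE)

def regulation_sentences(transcript: str) -> list[str]:
    """Split a transcript into sentences and keep those mentioning regulation."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    return [s for s in sentences if REGULATION_TERMS.search(s)]

def classify_direction(sentence: str) -> str:
    """Ask the model whether the sentence points to more or less regulation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer with one word: INCREASING, DECREASING, or UNCLEAR."},
            {"role": "user",
             "content": "Does this earnings-call sentence indicate increasing or "
                        "decreasing regulation?\n\n" + sentence},
        ],
    )
    return response.choices[0].message.content.strip()

transcript = ("We expect new capital rules to take effect next year. "
              "Compliance costs related to regulation keep rising.")
hits = regulation_sentences(transcript)
print(f"{len(hits)} regulation mention(s) found")
for s in hits:
    print(classify_direction(s), "->", s)
```

In the actual study, such sentence-level readings were aggregated across thousands of transcripts and merged with the pricing and accounting data described above; the sketch only illustrates the per-sentence step.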
Mamaysky notes that while the use of ChatGPT worked well enough, it was no replacement for experienced researchers, since it hasn't yet collected enough data. “GenAI is not that good at giving practical financial advice because it's too unstructured and too easy to overfit the data when you're making forecasts,” Mamaysky says.
While GenAI's relationship with finance is still in its infancy, practitioners already know one thing for sure: AI is here to stay. That means savvy finance leaders must stay abreast of the technology's ethical and regulatory issues if they want to leverage it for success—and avoid getting left behind.