Business leaders have long cautioned against overregulating AI technology, urging policymakers to target specific outcomes lest they stifle innovation. Federal regulators in the United States, unlike their European Union counterparts, seem to be listening: Aside from a select few bills, they have yet to pass sweeping AI legislation, leaving AI-related regulation largely to the states.
The healthcare industry has often been at the forefront of conversations around such AI-related regulatory frameworks. AI has accelerated the development of new lifesaving medicines, for example, but it has also created new concerns around privacy, bias, and job displacement. Properly balancing AI-driven innovation with government oversight will be crucial to unlocking the technology’s full potential, according to US Senator Bill Cassidy, who represents Louisiana.
As the ranking member of the Senate Committee on Health, Education, Labor, and Pensions, as well as an experienced physician, Cassidy is focused on proactively regulating a rapidly evolving technology while supporting innovation with appropriate safeguards.
“What I'm trying to do is get the universe of what we have to consider with regulation as small as it needs to be. You want that incredible innovation to occur, but you want to make sure that it doesn't deny people agency. And I think that regulation should be there to preserve the individual's agency,” Cassidy says.
During a fireside chat as part of Columbia Business School’s Digital Future Initiative and the Healthcare and Pharmaceutical Management Program, Cassidy shared three key insights into AI in healthcare, and how regulators and business leaders should navigate AI’s role in the future.
Good Regulation Is Not Prescriptive
Cassidy noted that good regulation of AI should focus on high-risk activities that have the potential to deny people agency or control over their lives without their consent. This can relate to AI systems that have the potential to infringe on individual privacy and civil liberties, such as facial recognition, surveillance, and predictive policing technologies.
He suggested using broad principles as guardrails for the courts to interpret and apply, rather than enacting overly prescriptive regulations such as those put in place by the EU’s AI Act earlier this year. Cassidy argued that the act regulates a wide range of AI applications rather than focusing on those that are particularly harmful. Additionally, the EU’s approach may have a chilling effect on innovation, leading top AI companies to exit the market and develop their businesses in countries with more lax regulation.
“When companies seek to develop, they typically seek to develop in a lower regulatory state, which is the United States. And we benefit from the economics of that,” Cassidy said.
Patient Privacy Comes First
Cassidy emphasized that personal data and privacy should be the focus of any regulation of AI in healthcare.
“I think there's a lot of damage being done to people's privacy now that people don't realize. And if they knew about it, they would be very upset,” Cassidy said.
He noted that AI-based health apps that track individuals’ biometrics, for example, are not bound by patient privacy laws such as HIPAA, so companies can sell users’ health data at will. In that case, he believes the country’s regulatory regime, specifically the US Food and Drug Administration, has a role to play.
“The implications for our personal privacy are tremendous,” Cassidy said.
The Risk of Overreliance on AI
While AI can be a powerful tool in the right hands, overreliance on the technology can cripple critical thinking, according to Cassidy.
“If you've got a bunch of radiologists, some specialize in mammography reading and the others do not, it can make the non-specialist almost as good as a specialist,” Cassidy noted. “It can make the specialist a little bit better, but the rise is not as dramatic. AI clearly has a role there.”
He added, “But you also want to make sure you don't create scenarios in which people are just routinely doing things even if they are routinely wrong.”
The issue raises liability questions about who bears responsibility when poor decisions are made: the AI or the overly reliant individual? Cassidy stressed the importance of doctors exercising their own clinical judgment rather than deferring to AI recommendations.
At the same time, he warned that clinicians could lose their direct connection to patients by becoming overreliant on AI tools, and he emphasized the importance of preserving empathy in patient-provider interactions. He also urged all stakeholders in the healthcare system never to stop questioning and verifying AI outputs that lack human oversight.