Investor and LinkedIn co-founder Reid Hoffman recently visited Columbia Business School to share insights on the future of AI and the digital economy at an event hosted by the School’s Digital Future Initiative.
Held as part of CBS’s new Distinguished Leadership Series, the conversation with CBS Dean Costis Maglaras covered various topics, including the challenges of integrating AI into the global economy and advice for managing the technology’s explosive growth.
During the discussion, Hoffman emphasized the importance of policymakers defining specific outcomes rather than regulating AI with broad strokes, which could risk stifling innovation. The event gave attendees a closer look at the fast-evolving AI landscape and its implications for business and society.
"What we are doing is creating the cognitive industrial revolution — the steam engine of the mind," Hoffman said. "AI greatly amplifies what we are capable of. The steam engine allowed you transport, logistics, and manufacturing. AI is now that but in cognitive and language characteristics.”
Last year, Hoffman published Impromptu: Amplifying Our Humanity Through AI, a book he wrote using OpenAI's GPT-4 that details how AI can serve as humanity's partner in education, business, and creativity.
During the conversation at CBS, Hoffman drew on his long experience watching AI's growth. He was an early investor in the AI research lab OpenAI and served on its board until 2023. In 2022, he co-founded Inflection AI, a studio dedicated to developing generative AI applications, such as the personal assistant Pi. He is also a partner at Greylock, a venture capital firm that has invested heavily in AI applications, infrastructure, and foundation models.
Below are highlights from Hoffman and Maglaras' discussion, including how to put AI's rapid growth into context, Hoffman's concerns about open sourcing AI software, and how regulators can best balance the technology's enormous upsides with the need to protect users.
Contextualizing AI's Massive Growth
To understand just how deeply the rise of AI has affected human innovation, Hoffman suggested looking to the dramatic innovations of the Industrial Revolution as a parallel. Just as the steam engine vastly improved manufacturing and logistics, Hoffman believes AI is amplifying — though notably not replacing — human creativity.
"I think what we're doing is creating the cognitive industrial revolution, a steam engine of the mind," Hoffman said, making the case that just as the steam engine paved the way for innovations in logistics, manufacturing, and transport, AI has set the stage for a revolution in cognition and language.
He also noted that while the algorithms behind AI models have existed for decades, it is only in recent years that the technology has been able to achieve such a rapid rate of growth, giving users what Hoffman refers to as "cognitive superpowers."
The Risk of Open Sourcing
Hoffman noted that while there is value in open sourcing AI software to accelerate growth — something he became familiar with while serving on the board of Mozilla — bad actors can easily exploit its capabilities. While open sourcing databases and web browsers is often benign, open source AI software could aid election interference, cybercriminals, and rogue states.
He added that the 2024 US election season will likely see the use of open source AI models to spread misinformation. A US intelligence report published in February warned that foreign governments could use AI to develop computer viruses and even new chemical weapons. The report also stated that China's government used generative AI-based TikTok videos to target US candidates during the 2022 midterm election cycle.
If open sourcing were somehow limited to academic institutions or well-meaning entrepreneurs, Hoffman noted, he would support the practice. "But the problem with open sourcing is, once the model gets out of the barn, it's out there infinitely," he said.
He added that it is better to get ahead of the problem and regulate the riskier aspects of AI technology before open sourcing it. However, even if AI is given safety guardrails — being trained not to teach users how to make anthrax, for example — open sourcing can allow bad actors to untrain those guardrails, according to Hoffman.
Adding Checks and Balances as AI Grows
Increased regulation of AI technology is inevitable, both in the United States and abroad. In October 2022, the Biden administration published the document “Blueprint for an AI Bill of Rights,” which laid out five guiding principles for the design and deployment of automated systems. More recently, in October 2023, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which notes that unlocking AI's "myriad benefits requires mitigating its substantial risks."
While Hoffman sees regulation as a necessity, he cautioned policymakers to clearly define legislative outcomes, lest they risk suppressing innovation: "You have to do the hard work of thinking about the outcomes that you're trying to steer away from, as opposed to saying just stop until you're perfect."
To understand the implications of aggressive regulation, Hoffman argued, one can look to innovations we now take for granted, such as modern medicine and automobiles. Regulating AI means iterating as the technology grows and adding guardrails only when absolutely necessary, he added.