The introduction of ChatGPT late last year sent shock waves around the world, raising substantial public interest in the capabilities of the AI-powered chatbot.
Developed by research company OpenAI, ChatGPT represents a significant advancement in AI language modeling. Instead of having to input specific commands or keywords, users of the chatbot speak or type questions using natural language, making the technology more user-friendly and allowing for numerous practical uses, from content creation to search and more intuitive computer interfaces.
But while ChatGPT and other emerging AI tools have innovative capabilities that are poised to revolutionize workplaces and entire industries, we should brace ourselves for even greater disruptive changes in the fields of AI, robotics, and associated technologies, says Hod Lipson, professor of mechanical engineering and data science at Columbia Engineering.
A world-renowned roboticist and AI expert, Professor Lipson is the founder and director of the Creative Machines Lab, housed at Columbia's Engineering School, where he and his team push the boundaries of robotics and AI with groundbreaking research. He is also a guest lecturer in a new CBS class, Technology Breakthroughs, taught by CBS Dean Costis Maglaras and Columbia Engineering Dean Shih-Fu Chang. The class examines the impact new technologies have had on business and society.
The strong response to ChatGPT could be merely the beginning of a far larger trend, Lipson notes. While progress to date has been slow and steady, newer technologies driven by more data and processing power are accelerating the use of AI and robotics—at a pace that's surprising even those working in the field, he says.
“I think the future is going to be nothing but amazement,” he adds.
Lipson recently spoke with us about the factors behind the latest developments in robotics and AI, the societal implications of technological advancement, and the role of business and industry in pushing the boundaries of innovation.
CBS: What are the most recent breakthroughs in the development of AI?
Hod Lipson: AI software used to focus on rules-based automation. For example, you could program software to detect fraudulent transactions by noticing if somebody spends more money in one day than they spent in the entire previous month. That's probably a fraudulent transaction, and the AI flags it for review. You can apply these rules automatically to millions of transactions a second, so it's efficient. But when you want to improve the system, you're stuck because you have to go back and create new rules. This is where we were stuck for decades, until around 2012, when we figured out how to program computers not by telling them what to do but by showing them what to do. In other words, instead of telling the computer how to find fraudulent transactions, we give the computer examples of fraudulent transactions, and it can study them, find their statistical signatures, and then look for more. And when it finds more, it can study those and get even better at finding what it's looking for.
These kinds of data-driven systems are self-improving, and this is the key thing. It's why we are seeing exponential growth, because modern AI is based on machine learning and machine learning keeps improving. The more examples and data it collects, the better it gets.
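To make the contrast concrete, here is a minimal Python sketch of the two approaches Lipson describes: a hand-written rule versus a classifier that learns the pattern from labeled examples. The transaction figures, the features, and the choice of a scikit-learn logistic regression are illustrative assumptions, not a description of any real fraud system.

```python
# A minimal sketch: a hand-coded rule vs. a model trained on examples.
# All data, thresholds, and feature choices here are hypothetical.
from sklearn.linear_model import LogisticRegression

# --- Rule-based approach: the programmer encodes the rule explicitly ---
def flag_by_rule(day_spend, prior_month_spend):
    """Flag a day whose spending exceeds the entire prior month's total."""
    return day_spend > prior_month_spend

# --- Example-driven approach: the model infers the pattern from data ---
# Each row: [day_spend, prior_month_spend]; label 1 = fraud, 0 = legitimate.
X = [[5000, 1200], [40, 900], [3000, 150], [75, 2000], [9000, 800], [20, 60]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                      # "showing" the computer examples

print(flag_by_rule(5000, 1200))      # True: the fixed rule fires
print(model.predict([[4500, 700]]))  # the learned model scores a new case
```

The learned model embodies the self-improving property described above: retraining on more labeled examples sharpens it, with no new rules written by hand.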
It's difficult to predict where AI is going to go next because the improvements are accelerating. We are seeing these improvements touching many industries and sectors, from medicine to agriculture to security and retail. We're seeing the technology power driverless cars and factory automation. It's affecting every market segment that you can think of.
CBS: You've talked about “compounding exponentials” that are driving change at an accelerating rate. Can you share more?
Lipson: Lots of people have the misconception that the reason AI is moving forward at an exponential rate is simply because computing power is continually improving. Moore's Law states that the amount of computing power available for a given cost will increase by a factor of two every 18 months or so, allowing us to build computers that are faster, cheaper, and better at an exponential rate. And while it seems like AI is riding this curve, and is therefore accelerating, there's a lot more going on here than just Moore's Law.
The creation of data, which is the fuel of modern AI, is also accelerating. Some people say that the amount of data that we have is doubling every 12 months. That's an incredible rate of acceleration. And on top of that, we've seen the growth of AI systems themselves—how much data they can store, the size of the “brains” of AI systems, if you like—doubling even faster. And on top of these factors, you also have the fact that AI systems are now able to teach other AI systems, creating a compounding effect. All these factors are self-amplifying and creating an incredible rate of acceleration.
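As a back-of-the-envelope illustration of why these curves compound, here is a short Python calculation using the rough doubling periods quoted above. The six-year horizon is an arbitrary choice for the example.

```python
# Illustrative arithmetic for the "compounding exponentials" idea: compute
# each factor's growth over a horizon, then multiply them together.
# The doubling periods are the rough figures quoted above, not measurements.
YEARS = 6

compute_growth = 2 ** (YEARS * 12 / 18)  # compute: doubles every ~18 months
data_growth    = 2 ** (YEARS * 12 / 12)  # data: doubles every ~12 months

combined = compute_growth * data_growth  # exponentials multiply, not add

print(f"Compute: {compute_growth:.0f}x, Data: {data_growth:.0f}x, "
      f"Combined: {combined:.0f}x over {YEARS} years")
```

Because the exponentials multiply, the combined growth over six years comes to roughly 1,000x, far beyond the roughly 16x that Moore's Law alone would predict.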
CBS: Why is data so important to the development of AI?
Lipson: When you think about the economics of AI, certain things are a commodity and others are assets. Programming code, for example, is open source now and it's mostly free. Computing power is a commodity, too—a penny a core hour. And talent is ubiquitous. You have high school kids these days who can put together a system that would have earned them a PhD just a few years ago. People all over the planet (and AI itself) are learning quickly to create these systems.
But two things are not a commodity, and they're important to understand. One is data. The other is understanding what problems need to be solved. And this is where I think people with business backgrounds have an advantage. If you are in a particular market segment, you understand what the challenges are, and you understand what data are available to you. And you understand how data can help you solve a problem. Everything else you need is a commodity. You can put together a solution and lead your industry. This is what will differentiate the leaders from the followers in this world of AI.
CBS: How can generative AI help us achieve business goals or create new markets?
Lipson: The current wave of creative AI, or generative AI, is really fascinating. It's a different kind of intelligence. Up until recently, most AI focused on decision-making. AI would ingest a lot of data and then make a decision. Is it a cat or a dog? Should I buy or sell? Should I turn left or turn right? Now, we are seeing a different kind of intelligence, which is creativity. You start from a goal, a seed, a very small thing, and then generate a lot of new things. We used to think computers could only make decisions and that creativity is uniquely human, but it turns out, creativity is exactly what AI is good at. It's actually very good at generating new ideas.
You can see that with software like ChatGPT, Stable Diffusion, or DALL·E. They can create not only poems but also music, scientific reports, engineering designs, molecules, and art. In class, I see students struggling to generate a new design for a robot, but AI can sketch out eight different designs in 25 seconds. It's amazing to see how creative AI can be. And this is important because a lot of our ability to innovate has to do with creativity. It's particularly difficult in areas where we don't have a lot of intuition. We are very good at designing chairs or bridges—the things we understand—but we are not as good at designing proteins, antennas, or nanomaterials—things that we don't have a lot of intuition about.
AI can design all these things for us, and it expends the same amount of effort to design a bridge as to design a protein. So I think this is an incredibly powerful aspect of AI. We've been stuck in a corner for centuries because of our limited intuition about the world, and now AI can allow us to create amazing new things.
CBS: Should we be concerned about the potential power of AI?
Lipson: People think AI is competing with humans, but the right way to look at it is that finally we're going to collaborate with another creative species that's going to think about problems in a different way. It's going to allow us to look at things differently and create new solutions. And it's not just going to be one AI that's going to be creative. We're going to have a whole ecosystem of AIs that are very good at finding creative solutions to a lot of challenges, such as designing spaceships or finding a solution for climate change, probably in ways we can't comprehend. When we put humans and AI together, who knows where we can go and what we can achieve.
CBS: Your specialization is robotics. How does that fit in with AI?
Lipson: Robotics is taking AI and giving it a physical body. By itself, AI is an abstraction that works on a computer, but it's detached from the physical world. When you take AI and put it in a body, it becomes a robot. And robotics turns out to be particularly difficult. It's one of these things that we humans take for granted, but it actually takes a lot of computing power to do trivial things like walk or manipulate things with our hands. It's a little bit like telling the difference between a cat and a dog. We take it for granted, we can do it easily—we don't even think about it—but until recently, it was a very difficult challenge for computers.
Right now, I would say physical AI robotics is way behind compared to virtual AI. There's a lot more to do there. But it also means that, for example, jobs and activities that have to do with unstructured physical action are not going to be automated as quickly as decision-making or operations that are much more abstract. For example, AI can drive your car tomorrow, but when your car breaks down, it's going to take a human to fix it. We are very far from having a robot that can fix a broken car. So plumbers, electricians, hairdressers, nurses, and anyone else who works with their hands are safe for now. And that's a reversal of how people have tended to view how automation is going to affect jobs.
CBS: How can we deal with the ethical challenges presented by AI?
Lipson: Some of these challenges are immediate and some of them are long term. One of the immediate questions is, how do we train these AI systems? How do we make sure they are less biased than the data they're being trained on? How can they do a better job at managing the world than we humans do? These questions were not really at the forefront for many years because AI wasn't very good. It's only in the last couple of years that AI has become so good that it can do things that suddenly are life and death—everything from driving a car to security—so these questions are now very, very important.
So there's a lot of effort going into understanding how to balance AI, how to understand its weaknesses, and how to manage multiple AIs working together. I can't say this problem is solved, but there's a lot of attention on it. For example, if you look at ChatGPT today versus a few months ago, you'll see a marked difference in the raw answers it used to give compared with its ability now to speak in a much more appropriate way.
But the long-term question that's still unanswered is not what AI will do to people, but what people will use AI to do to other people. Will it be used for warfare or hacking? That's something that of course we need to figure out how to handle. Personally, when it comes to AI, I think the benefits far outweigh the risks, but the risks are there and we need to be aware of them and then work to mitigate them.