'The godfather of AI' sounds alarm about potential dangers of AI
ERIC DEGGANS, HOST:
Geoffrey Hinton is known as the godfather of artificial intelligence. He helped create some of the most significant tools in the field. But now he's warning loudly and passionately that the technology may be getting out of hand. NPR's Bobby Allyn spoke to him about what's driving his crusade.
BOBBY ALLYN, BYLINE: You know a computer scientist is a big deal when Snoop Dogg is talking about him. Here he is at a conference in Beverly Hills earlier this month discussing AI.
(SOUNDBITE OF ARCHIVED RECORDING)
SNOOP DOGG: What is going on? Then I heard the dude that - the old dude that created AI, something like, this is not safe because the AI's got their own minds, and these [expletive] are going to start doing their own [expletive]. I'm like, is we in a [expletive] movie right now or what?
ALLYN: The old dude, of course, is Geoffrey Hinton, a 75-year-old British academic living in Toronto who has spent 50 years developing cutting-edge AI, most recently for Google.
GEOFFREY HINTON: OK. Can you hear me now?
ALLYN: In 2012, Hinton and two of his students at the University of Toronto built what's called a neural network. It's called that because it's a geeky computer system that kind of operates the way a brain works, like the way neurons work. You could feed it tons and tons of data, like photos, and it would learn how to identify, say, a flower from a dog. This breakthrough is the foundation of so many AI tools used in everything from analyzing MRI scans in hospitals to helping farmers understand crop yields and, of course, used in the hit service ChatGPT. But now Hinton has left Google and is sounding the alarm.
HINTON: These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening.
ALLYN: He came to this position recently after two things happened - first, when he was testing out a chatbot at Google and it appeared to understand a joke he told it, that unsettled him; second, when he realized AI that can outperform humans is actually way closer than he previously thought.
HINTON: I thought for a long time that we were, like, 30 to 50 years away from that. So I call that far away from something that's got greater general intelligence than a person. Now, I think we may be much closer, maybe only five years away from that.
ALLYN: Last month, more than 30,000 AI researchers and other academics signed a letter calling for a pause on AI research until the risks to society are better understood. Hinton refused to sign the letter because it didn't make sense to him.
HINTON: The research will happen in China if it doesn't happen here because there's so many benefits of these things, such huge increases in productivity.
ALLYN: Now, what would those controls look like? How exactly should AI be regulated? Those are tricky questions that even Hinton doesn't have answers to. But he thinks politicians need to put equal time and money into developing guardrails. Some of his warnings do sound a little bit like doomsday for mankind.
HINTON: There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control.
ALLYN: Hinton isn't talking about a robot invasion of the White House, but more like the ability to create and deploy sophisticated disinformation campaigns that could interfere with elections.
HINTON: This isn't just a science fiction problem. This is a serious problem that's probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.
ALLYN: He says he got a laugh out of the clip of Snoop Dogg talking about his AI warnings. Snoop seems to get it. Hinton hopes that Washington will, too.
Bobby Allyn, NPR News. Transcript provided by NPR, Copyright NPR.