© 2024 Wyoming Public Media
Wyoming Public Media is a service of the University of Wyoming

Leading experts warn of a risk of extinction from AI

The welcome screen for the OpenAI ChatGPT app is displayed on a laptop screen in February in London.
Leon Neal
Getty Images

AI experts issued a dire warning on Tuesday: Artificial intelligence models could soon be smarter and more powerful than humans, and it is time to impose limits to ensure they don't take control or destroy the world.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a group of scientists and tech industry leaders said in a statement that was posted on the Center for AI Safety's website.

Sam Altman, CEO of OpenAI, the Microsoft-backed AI research lab behind ChatGPT, and Geoffrey Hinton, the so-called godfather of AI who recently left Google, were among the hundreds of leading figures who signed the we're-on-the-brink-of-crisis statement.

The call for guardrails on AI systems has intensified in recent months as public institutions and profit-driven enterprises embrace new generations of the programs.

In a separate statement published in March and now signed by more than 30,000 people, tech executives and researchers called for a six-month pause on the training of AI systems more powerful than GPT-4, the model behind the latest version of the ChatGPT chatbot.

That open letter warned: "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."

In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."

He added that society should endeavor to address all of the risks posed by AI simultaneously. "Societies can manage multiple risks at once; it's not 'either/or' but 'yes/and,'" he said. "From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them as well."

NPR's Bobby Allyn contributed to this story.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Vanessa Romo is a reporter for NPR's News Desk. She covers breaking news on a wide range of topics, weighing in daily on everything from immigration and the treatment of migrant children, to a war-crimes trial where a witness claimed he was the actual killer, to an alleged sex cult. She has also covered the occasional cat-clinging-to-the-hood-of-a-car story.
