
AI Poses Major Societal Risks, Say Industry Leaders

In a signed statement, they say mitigating the risks of AI should be a global priority.

Nina Raemont, Writer
A recent graduate of the University of Minnesota, Nina started at CNET writing breaking news stories before shifting to covering Social Security and other government benefit programs. In her spare time, she's in her kitchen, trying a new baking recipe.
[Image: A cardboard craft-style open laptop with a chatty robot on the screen. Caption: Nearly every major tech company has released some type of generative AI tool in recent months. Credit: Carol Yepes / Getty Images]

Artificial intelligence industry leaders say they're concerned about the potential threats advanced AI systems pose to humanity. On Tuesday, several of them, including OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, along with other scientists and notable figures, signed a statement warning of the risks of AI. 

The curt, sentence-long statement was posted on the website of the nonprofit Center for AI Safety. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads. 

Nearly every major tech company has released an AI chatbot or other generative AI tool in recent months, following the launch of OpenAI's ChatGPT and Dall-E last year. The technology has begun to seep into everyday life and could change everything from how you search for information on the web to how you create a fitness routine. The rapid release of AI tools has also spurred scientists and industry experts to voice concerns about the technology's risks if development continues without regulation.

The statement is the latest in a series of recent warnings about the potential threats of the advanced technology. Last week, Microsoft, an industry leader in AI and an investor in OpenAI, released a 40-page report saying AI regulation is needed to stay ahead of bad actors and potential risks. In March, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and more than a thousand other tech industry figures signed an open letter demanding that companies halt work on advanced AI projects for at least six months, or until industry standards and protocols catch up.

"Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders" reads the letter, which was published March 22. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Some critics have noted that the attention tech leaders are giving to the technology's future risks fails to address current problems, like AI's tendency to "hallucinate," the unclear ways an AI chatbot arrives at an answer to a prompt, and data privacy and plagiarism concerns. There's also the possibility that some of these tech leaders are requesting a halt on their competitors' products simply to buy time to build an AI product of their own. 

Editors' note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.