
OpenAI’s Altman and other AI titans warn of advanced AI’s “extinction” risk

Another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs, and public figures—including OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, veteran AI computer scientist Geoffrey Hinton, MIT’s Max Tegmark, Skype co-founder Jaan Tallinn, musician Grimes, and podcaster Sam Harris—have signed a statement calling for global attention on existential AI risk.

The statement, hosted on the website of San Francisco-based, privately funded not-for-profit Center for AI Safety (CAIS), compares AI risk to nuclear apocalypse and urges policymakers to mitigate “doomsday” extinction-level AI risk.

Full statement:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

According to a short explainer on CAIS’ website, the statement was kept “succinct” so that its message about “some of advanced AI’s most severe risks” would not be drowned out by discussion of other “important and urgent risks from AI,” which, the explainer implies, is crowding out discussion of extinction-level AI risk.

However, AI hype has surged off the back of expanded access to generative AI tools like OpenAI’s ChatGPT and DALL-E, leading to a glut of headline-grabbing discussion about “superintelligent” killer AIs, such as one from earlier this month, in which statement signatory Hinton warned of the “existential threat” of AI taking control. Last week, Altman called for regulation to prevent AI from destroying humanity.

Elon Musk and scores of others signed an open letter in March calling for a six-month pause on the development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be developed and applied to advanced AI, warning of the risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

So, in recent months, there has been a barrage of heavily publicized warnings about AI risks that don’t exist.

This drumbeat of hysterical headlines may have diverted attention from harms that already exist: the use of copyrighted data to train AI systems without permission or consent (or payment); the systematic scraping of online personal data in violation of people’s privacy; AI giants’ lack of transparency about the data used to train these tools; built-in flaws like disinformation (“hallucination”) and bias (automated discrimination); AI-generated spam; and the environmental cost of training these AI monsters.

It’s notable that after a meeting last week between the UK prime minister and several major AI executives, including Altman and Hassabis, the government appears to be shifting tack on AI regulation with a sudden keen interest in existential risk, according to the Guardian.

In a recent Columbia Journalism Review article reviewing media coverage of ChatGPT, Jenna Burrell, director of research at Data & Society, argued that we should stop focusing on red herrings like AI’s “sentience” and instead cover how AI is further concentrating wealth and power.

So of course AI giants want to route regulatory attention into the far-flung theoretical future with talk of an AI-driven doomsday to distract lawmakers from more fundamental competition and antitrust considerations in the here and now. (And data exploitation to concentrate market power isn’t new.)

It speaks volumes about existing AI power structures that tech execs at AI giants like OpenAI, DeepMind, Stability AI, and Anthropic are so happy to band together and chatter about existential AI risk, and how much more reluctant they are to discuss the harms their tools are causing right now.

OpenAI was notably absent from the Musk-signed open letter, but several of its employees support the CAIS-hosted statement. The latest statement appears to be OpenAI’s (unofficial) commercially self-serving response to Musk’s earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of calling for a development pause, which could stall its lead in generative AI, OpenAI lobbies policymakers to focus on risk mitigation while crowdfunding “democratic processes for steering AI,” as Altman put it. Along with in-person lobbying of international regulators, the company is positioning itself (and using its investors’ wealth) to shape future mitigation guardrails. Altman also recently threatened to pull OpenAI’s tools out of Europe if draft EU AI rules weren’t watered down to exclude them.

Some signatories of the earlier letter have signed both (hi, Tristan Harris!).

Who’s CAIS? The organization hosting this message isn’t especially well known, and it acknowledges lobbying policymakers. Its website states that its mission is “to reduce societal-scale risks from AI,” which it pursues by encouraging research and field-building (including funding research) and by advocating on policy.

The website’s FAQ states that CAIS is funded by private donations, and it answers the question “Is CAIS an independent organization?” as follows:

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We have put questions to CAIS.

In a Twitter thread, CAIS director Dan Hendrycks expands on the statement explainer, naming “systemic bias, misinformation, malicious use, cyberattacks, and weaponization” as examples of “important and urgent risks from AI… not just the risk of extinction.”

“These are all important risks that need to be addressed,” he adds, downplaying concerns that policymakers have limited bandwidth to address AI harms. “Societies can manage multiple risks at once; it’s not ‘either/or’ but ‘yes/and.’ From a risk management perspective, exclusively prioritizing present harms would be reckless, but so would ignoring them.”

The thread also credits David Krueger, an assistant professor of computer science at Cambridge University, with proposing the single-sentence statement on AI risk and with “jointly” developing it.

 
