
OpenAI’s Altman and other AI titans warn of advanced AI’s “extinction” risk

Another headline-grabbing AI policy intervention: hundreds of AI scientists, academics, tech CEOs, and public figures have signed a statement calling for global attention to existential AI risk. Signatories range from OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, and veteran AI researcher Geoffrey Hinton to MIT's Max Tegmark, Skype co-founder Jaan Tallinn, the musician Grimes, and podcaster Sam Harris.

The statement, hosted on the website of the Center for AI Safety (CAIS), a San Francisco-based, privately funded not-for-profit, compares AI risk to nuclear apocalypse and urges policymakers to focus on mitigating "doomsday" extinction-level AI risk.

Full statement:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

According to a brief explanation on CAIS' website, the statement was kept "succinct" to prevent its warning about "some of advanced AI's most severe risks" from being drowned out by discussion of other "important and urgent risks from AI," which the organization implies is crowding out debate about extinction-level risk.
However, as AI hype has surged on the back of expanded access to generative AI tools such as OpenAI's ChatGPT and DALL-E, headline-grabbing talk of "superintelligent" killer AIs has also increased (such as Hinton's warning earlier this month about the "existential threat" of AI taking over). Last week, Altman called for regulation to prevent AI from harming humanity.
In March, Elon Musk and scores of others signed an open letter calling for a six-month pause on the development of AI models more powerful than OpenAI’s GPT-4 to allow time for shared safety protocols to be developed and applied to advanced AI, warning of the risks posed by “ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control”.

In recent months, hypothetical AI harms that do not yet exist have been promoted aggressively.

Hysterical headlines may have diverted attention from harms that already exist. These include the tools' free use of copyrighted data to train AI systems without permission or consent (or payment); the systematic scraping of personal data online in violation of people's privacy; and AI giants' lack of transparency about the data used to train these tools. Baked-in problems such as disinformation ("hallucination") and bias (automated discrimination) are another, as is AI-driven spam.

After a meeting last week between the UK prime minister and several top AI executives, including Altman and Hassabis, the government appears to be changing tack on AI regulation, with a sudden interest in existential risk, the Guardian reported.

As Jenna Burrell, director of research at Data & Society, noted in a recent Columbia Journalism Review article reviewing media coverage of ChatGPT, talking about existential AI risk distracts from issues of market structure and dominance.

Naturally, AI giants want to steer regulatory attention toward a far-flung theoretical future with talk of an AI-driven doomsday, distracting lawmakers from more pressing competition and antitrust issues in the present. And data exploitation as a route to market power is nothing new.

That tech execs from AI heavyweights like OpenAI, DeepMind, Stability AI, and Anthropic are glad to band together and warn about existential AI risk, while being far less willing to discuss the harms their technologies are inflicting right now, speaks volumes about existing AI power structures.

OpenAI did not sign that open letter, but several of its employees back the CAIS-hosted statement. The latest statement therefore looks like an unofficial, commercially self-serving response to Musk's earlier attempt to hijack the existential AI risk narrative in his own interests (which no longer favor OpenAI leading the AI charge).

Instead of a development pause, which would risk freezing OpenAI's lead in generative AI, the company lobbies policymakers to focus on risk mitigation while also funding "democratic processes for steering AI," as Altman put it. The corporation is thus positioning itself (and deploying its investors' capital) to help shape any future mitigation guardrails, alongside lobbying foreign governments in person.

Some signatories of the earlier letter have signed both (hello Tristan Harris!).

Who is CAIS? The host of this statement is not especially well known, and it acknowledges that it lobbies policymakers. Its website says its mission is "to reduce societal-scale risks from AI" and that it encourages research and field-building to that end, including funding research, as well as advocating on policy.

A FAQ on its website states that it is funded by private donations. In answer to the question "Is CAIS an independent organization?", it describes itself as "serving the public interest":

CAIS is a nonprofit organization entirely supported by private contributions. Our policies and research directions are not determined by individual donors, ensuring that our focus remains on serving the public interest.

We have reached out to CAIS with questions.

In a Twitter thread, CAIS director Dan Hendrycks expands on the statement's explainer, naming "systemic bias, misinformation, malicious use, cyberattacks, and weaponization" as examples of "important and urgent risks from AI… not just the risk of extinction."

"These are all important risks that need to be addressed," he adds, playing down concerns that governments have too little bandwidth to tackle AI harms on multiple fronts. "Societies can manage multiple risks at once; it's not 'either/or' but 'yes/and.' From a risk management standpoint, exclusively prioritizing present harms would be reckless, but so would ignoring them."

The thread also credits David Krueger, an assistant professor of computer science at the University of Cambridge, with proposing the idea of a single-sentence statement about AI risk and "jointly" helping to develop it.

 
