
OpenAI, DeepMind, and Anthropic to give UK early access to foundation models for AI safety research

Prime minister Rishi Sunak announced at London Tech Week that OpenAI, Google DeepMind, and Anthropic will provide “early or priority access” to their AI models to support evaluation and safety research.

After several AI giants warned about the existential and even extinction-level risks of unregulated AI, Sunak became more interested in AI safety.

Today, Sunak pledged that the UK will carry out cutting-edge AI safety research, pointing to the £100 million earmarked for the government’s expert taskforce as more funding for AI safety than any other government has committed.

The government said this AI safety taskforce will focus on AI foundation models.

“We’re working with the frontier labs—Google DeepMind, OpenAI, and Anthropic,” Sunak said. “And I’m pleased to announce they’ve committed to giving early or priority access to models for research and safety purposes to help build better evaluations and better understand the opportunities and risks of these systems.”

The PM also reiterated his announcement of an AI safety summit, likening it to the COP climate conferences, which seek global agreement on tackling climate change.

“Just as we unite through COP to tackle climate change, the UK will host the first ever summit on global AI safety later this year,” he said. “I want to make the UK not just the intellectual home but the geographical home of global AI safety regulation.”

The emphasis on AI safety marks a notable shift in position for Sunak’s government.

As recently as March, its white paper backed “a pro-innovation approach to AI regulation.” The paper’s “flexible principles” framing downplayed safety concerns, arguing there was no need for AI-specific laws or a dedicated watchdog; instead, the government suggested existing regulators, such as the antitrust watchdog and the data protection authority, oversee applications of AI.

A few months on, Sunak now wants the UK to host a global AI safety watchdog, or at the least to lead AI safety research by evaluating the outputs of learning algorithms.

Rapid advances in generative AI and warnings from tech giants and AI industry figures that the technology could spiral out of control appear to have prompted a Downing Street strategy rethink.

The PM met with the CEOs of OpenAI, DeepMind, and Anthropic shortly before the government’s tone on AI changed.

If this trio of AI giants sticks to their commitments to provide the UK with advanced access to their models, the country could lead research into effective evaluation and audit techniques before any legislative oversight regimes mandating algorithmic transparency have spun up elsewhere (the EU’s draft AI Act isn’t expected to be in legal force until 2026, although the EU’s Digital Services Act is already in force).

The UK’s early AI safety efforts may be vulnerable to industry capture. If AI giants dominate AI safety research by providing selective access to their systems, they could shape future UK AI rules that apply to their businesses.

With no legally binding AI safety framework yet applied to them, the AI giants’ close involvement in publicly funded research into the safety of their own commercial technologies suggests they will at least have scope to frame how AI safety is looked at and which components, topics, and themes are prioritized (and which are downplayed), by shaping research through the access they grant.

AI ethicists have long warned that headline-grabbing fears about “superintelligent” AIs’ risks to humans are drowning out discussion of real-world harms like bias, discrimination, privacy abuse, copyright infringement, and environmental resource exploitation.

So while the UK government may view the AI giants’ buy-in as a PR coup, if the AI summit and wider AI safety efforts are to produce robust and credible results, it must ensure the involvement of independent researchers, civil society groups, and the groups disproportionately at risk of harm from automation, not just trumpet its plan for a partnership between “brilliant AI companies” and local academics, given that academic research is already often dependent on funding from tech giants.

 
