
MPs warn policymakers aren’t keeping pace with AI, urge UK government not to delay its rulebook

A UK parliamentary committee investigating artificial intelligence’s opportunities and challenges has urged the government to reconsider its decision not to introduce legislation to regulate the technology in the short term and to make an AI bill a ministerial priority.

In a statement today accompanying the publication of an interim report, which warns that the government’s approach “is already risking falling behind the pace of development of AI”, committee chair Greg Clark writes that ministers should move with “greater urgency” to legislate AI governance rules if they want to make the UK an AI safety hub.

The government has not confirmed whether the November King’s Speech will include AI-specific legislation. “This new session of Parliament will be the last opportunity before the General Election for the UK to legislate on AI governance,” the committee adds, calling for “a tightly-focussed AI Bill” to be introduced this autumn.

The report states that this would support the prime minister’s goal of making the UK an AI governance leader. “We see a danger that if the UK does not bring in any new statutory regulation for three years, other legislation — like the EU AI Act — could become the de facto standard and be hard to displace.”

It’s not the first warning about the government delaying AI legislation. The independent research-focused Ada Lovelace Institute released a report last month highlighting a contradiction in ministers’ position: the government wants to pitch the UK as a global hub for AI safety research while proposing no new AI governance laws and actively pushing to deregulate existing data protection rules, a combination the Institute believes threatens the government’s AI safety agenda.

In March, the government outlined a “pro-innovation” approach to regulating artificial intelligence that relies on flexible “principles” rather than new legislation. The plan asks existing UK regulators to oversee AI activity in their areas without granting them new powers or resources.

MPs examining the risks and opportunities of adopting automation technology are concerned that AI governance is being dumped on the UK’s already overburdened regulators without any new powers or duties.

The Science, Innovation, and Technology Committee’s interim report lists twelve AI governance challenges that policymakers must address, including bias, privacy, misrepresentation, explainability, IP and copyright, liability for harms, data access, compute access, and the open source vs. proprietary code debate.

The report also highlights employment issues, since automation tools in the workplace may disrupt jobs, as well as the need for international coordination on AI governance. It even references “existential” concerns raised by several high-profile technologists who have claimed that AI “superintelligence” could threaten humanity’s existence. (“Some people think that AI is a major threat to human life,” the committee notes in its 12th bullet point. “If that is possible, governance must protect national security.”)

The committee’s list suggests a comprehensive examination of the challenges AI raises. Its members are less convinced, however, that the UK government is fully on top of the topic. Per the report:

The UK government’s AI governance plan relies on our regulatory system and promised central support functions. The time needed to establish new regulatory bodies makes a sectoral approach a good starting point. We know that many regulators are already considering AI’s effects on their work, both individually and through the Digital Regulation Cooperation Forum. However, it is already clear that a better-developed central coordinating function may be needed to resolve all of the challenges this report identifies.

The report recommends the government establish “‘due regard’ duties for existing regulators” in the AI bill as a priority.

The Ada Lovelace Institute’s report also identified under-resourced regulators as a threat to the government’s AI governance strategy, recommending that ministers conduct a “gap analysis” of UK regulators to assess not only resourcing and capacity but also whether any regulators require new powers to implement and enforce the principles outlined in the AI white paper.

In its conclusion, the report argues that the UK’s depth of expertise in AI and the disciplines that contribute to it, its vibrant and competitive developer and content industry, and its longstanding reputation for developing trustworthy and innovative regulation provide a major opportunity for the UK to be a global leader in AI development and deployment. But it warns that the opportunity is time-limited: without a serious, rapid, and effective effort to establish the right governance frameworks and to secure a leading role in international initiatives, other jurisdictions will steal a march, and their frameworks may become the default even if they are less effective than the UK’s.

“We urge the government to accelerate, not pause, the establishment of an AI governance regime, including any necessary statutory measures.”

Prime Minister Rishi Sunak visited Washington this summer to rally support for the AI safety summit his government has announced for the autumn. The initiative came a few months after the government’s AI white paper downplayed risks and emphasized the technology’s economic potential; a handful of meetings this summer with AI industry CEOs, including OpenAI’s Sam Altman, Google DeepMind’s Demis Hassabis, and Anthropic’s Dario Amodei, appears to have piqued Sunak’s interest in AI safety.

The US AI giants’ talking points on regulation and governance have mostly focused on theoretical future risks from artificial superintelligence rather than encouraging policymakers to address current AI harms, whether bias, privacy, copyright, or the digital market concentration that risks AI advancements entrenching another generation of US tech giants as our overlords.

Critics say the AI giants lobby for self-serving regulation that creates a competitive moat for their businesses by artificially restricting access to AI models and/or dampening others’ ability to build rival tech, while distracting policymakers from passing (or enforcing) legislation that addresses real-world AI harms.

In its conclusion, the committee appears to share this concern, noting that while some observers have suggested pausing the development of certain AI models and tools to allow global regulatory and governance frameworks to catch up, it doubts such a pause is possible. AI leaders’ calls for new regulation cannot be ignored, the report adds, but those in a privileged market position may seek to defend it against insurgents through regulation.

We’ve asked the Department for Science, Innovation, and Technology to respond to the committee’s request for an AI bill in the new parliament.

Update: A spokesperson for the department sent us this statement:

AI has enormous potential to change every aspect of our lives, and we owe it to our children and our grandchildren to harness that potential safely and responsibly.

That’s why the UK is bringing together global leaders and experts for the world’s first major global summit on AI safety in November — driving targeted, rapid international action on the guardrails needed to support innovation while tackling risks and avoiding harms.

Our AI Regulation White Paper sets out a proportionate and adaptable approach to regulation in the UK, while our Foundation Model Taskforce is focused on ensuring the safe development of AI models with an initial investment of £100 million — more funding dedicated to AI safety than any other government in the world.

The government also suggested it may go further, describing the AI regulation white paper as a first step in addressing the technology’s risks and opportunities, and saying it plans to review and adapt its approach in response to rapid developments in the field.
