
OpenAI leaders suggest international AI regulation

OpenAI’s leadership believes the world needs an international regulatory body, like the one governing nuclear power, and needs it quickly, because AI is developing rapidly and poses clear dangers. But not so quickly that regulation is rushed.

In a blog post, OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever explain that artificial intelligence innovation is moving too fast for existing authorities to control.

The tech, most visible in OpenAI’s ChatGPT conversational agent, is both a threat and an asset.

The post admits AI won’t manage itself:

We need some degree of coordination among the leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us to both maintain safety and help smooth integration of these systems with society.

We are likely to eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.

The IAEA is the UN’s official body for nuclear power collaboration, but like other such organizations, it can lack enforcement power. An AI-governing body built on this model couldn’t flip the switch on a bad actor, but it could set and track international standards and agreements.

OpenAI notes that tracking compute power and energy usage devoted to AI research is one of the few objective measures that can and should be reported and tracked. As in other industries, these resources should be monitored and audited. The company suggested exempting smaller companies to avoid stifling innovation.

Today, leading AI researcher and critic Timnit Gebru told the Guardian, “Unless there is external pressure to do something different, companies are not just going to self-regulate.” Regulation, and motives beyond profit, are needed.

OpenAI has visibly embraced the latter, to the dismay of many who hoped it would live up to its name, but as a market leader, it is also calling for real governance action beyond hearings where senators line up to give reelection speeches that end in question marks.

The proposal amounts to little more than “maybe we should, like, do something,” but it starts a conversation in the industry and carries the backing of the largest AI brand and provider in the world. As the post itself concedes, “We don’t yet know how to design such a mechanism,” but public oversight is vital.

Although the company’s leaders support tapping the brakes, they don’t want to let go of the enormous potential “to improve our societies” (not to mention bottom lines), and they warn that bad actors may have their foot on the gas.
