
Gary Marcus is happy to help oversee AI for the U.S.: “I’m interested”

On Tuesday, cognitive scientist, founder, and author Gary Marcus sat between OpenAI CEO Sam Altman and IBM’s chief privacy and trust officer Christina Montgomery as the three testified before the Senate Judiciary Committee for nearly three hours. Because Altman runs one of the world’s most powerful AI firms and has repeatedly asked lawmakers to regulate his industry, the senators focused on him. (Most CEOs want Congress to ignore their industry.)

Marcus has been known in academic circles for years, but his newsletter (“The Road to A.I. We Can Trust”), his podcast (“Humans vs. Machines”), and his genuine unease over the unregulated expansion of AI have raised his profile. Beyond this week’s hearing, he appeared this month on Bloomberg TV and in the New York Times Sunday Magazine and Wired.

Because this week’s hearing felt historic—Senator Josh Hawley called AI “one of the most significant technological innovations in human history,” and Senator John Kennedy was so charmed by Altman that he asked Altman to pick his own regulators—we wanted to talk with Marcus about the experience and what happens next.

Still in Washington?

Yes, I’m staying in Washington, meeting with lawmakers, their staff, and other interesting people to see whether we can put my ideas into practice.

You taught at NYU and co-founded two AI firms, one with the famed roboticist Rodney Brooks. I interviewed Brooks on stage in 2017, and he said he didn’t think Elon Musk understood AI and that Musk was wrong to call AI an existential threat.

Rod and I agree that today’s AI is not artificial general intelligence. But we should disentangle two questions: how dangerous is current AI, and how close are we to AGI? Current AI is harmful but not an existential threat. It threatens democracy in numerous ways; that is not harmless. Humanity will survive, but the danger is real.

You recently debated Meta’s chief AI scientist, Yann LeCun. Is that dispute about deep-learning neural networks?

LeCun and I have debated many subjects over the years. The philosopher David Chalmers moderated a public debate between us in 2017. Since then, LeCun has declined to debate me again. I respond to his subtweets on Twitter because he’s influential.

LeCun believes large language models are harmless; I disagree. The ability to subtly shape people’s political beliefs using training data the public knows nothing about is a threat to democracy—like social media, but more pernicious. These tools can also be used to manipulate people, and they are being scaled up substantially. The risks are real.

On Tuesday, you told senators that Sam Altman hadn’t shared his worst fear, which you called “germane,” and you redirected them to him. He still didn’t mention autonomous weapons, which I raised with him a few years ago as a primary concern. I thought it was interesting that weapons didn’t come up.

We covered a lot of ground, but there were plenty of vital topics we didn’t get to, including enforcement, national security, and autonomous weapons. There will be more conversations.

Did open source vs. closed systems come up?

It hardly surfaced. It’s a fascinating and hard question, and the answer isn’t clear. You want independent scientists to have access. You may want to license large-scale deployments that pose security risks, and you may not want every bad actor to have access to arbitrarily powerful tools. There are pros and cons, and maybe the best answer is to allow some open source while limiting what can be done with it and how it can be deployed.

Thoughts on Meta’s plan to release its language model for public use?

I don’t think Meta handled the release of LLaMA, its language model, well. It was sloppy, and that genie truly is out of the bottle. There was no legal apparatus requiring them to consult anyone, so I don’t know whether they did. Maybe they did, but with LLaMA, or with Bing, the decision ultimately just comes down to a company.

But some company decisions may cause harm in the near future. So I think governments and scientists should increasingly have a role in deciding what goes out there [through something like an FDA for AI]: you first run a trial before wide deployment, you measure the costs and benefits, and you try again if needed. If the advantages outweigh the risks, you release at scale. Today, any firm can push something out to 100 million users without any government or scientific oversight. You need an unbiased authority.

Where would unbiased auditors come from? Who actually understands how these things work?

Me, for one. The Canadian computer scientist Yoshua Bengio, for another. Many scientists don’t work for these companies. How to attract and motivate enough auditors is a real concern, but there are 100,000 computer scientists with some facet of this expertise, and not all of them are under contract to Google or Microsoft.

Would you join this AI agency?

I’m interested, and I think I have a good, neutral voice that could help get us to a good place.

How was appearing before the Senate Judiciary Committee? Will you be invited back?

I don’t know if I’ll be invited back. I was deeply moved by it, and by the room—which is probably smaller than it looks on TV. But everyone seemed to be doing their best for the U.S. and for humanity. The senators brought their A game because everyone understood the stakes, and we tried our hardest, because we were there for a reason.
