In a Senate Judiciary Committee hearing today, OpenAI’s Sam Altman advocated for the establishment of a capacity-based regulatory licensing regime for AI models that, ideally, larger companies like Google and OpenAI would be subject to, but smaller, open-source ones would not. “Where I think the licensing scheme comes in is as we head toward AGI — I think we need to treat that as seriously as we do other technologies,” Altman said.
When Sen. Lindsey Graham (R-SC) suggested a government agency “that issues licenses and takes them away,” Altman agreed.
In addition to Graham and Sen. Richard Blumenthal (D-CT), the hearing included questioning from Sens. Josh Hawley (R-MO), Mazie Hirono (D-HI), Cory Booker (D-NJ), Dick Durbin (D-IL), and several others. Christina Montgomery, IBM’s VP and Chief Privacy & Trust Officer, and NYU professor Gary Marcus also provided testimony. All speakers, witnesses and senators alike, agreed that AI should be regulated, though Altman, Montgomery, and Marcus differed on the specifics, and senators like Hirono questioned whether effective licensing was even possible.
Of the witnesses, IBM’s Montgomery seemed to have the most cautious position. “IBM urges Congress to adopt a precision regulation approach to AI,” she testified. “This means establishing rules to govern the deployment of AI in specific use cases, not regulating the technology itself. Such an approach would involve four things: First, different rules for different risks. The strongest regulation should be applied to use cases with the greatest risk to people and society. Second, clearly defining risks. There must be clear guidance on certain uses or categories of AI-supported activity that are inherently high risk. Third, be transparent. AI shouldn’t be hidden. Consumers should know when they’re interacting with an AI system. Finally, showing the impact. For higher-risk use cases, companies should be required to conduct impact assessments that show how their systems perform against tests for bias and other ways that they could potentially impact the public.”
“AI should be regulated at the point of risk,” Montgomery said. She later indicated that, in her view, some uses of AI should be licensed, but not all.
OpenAI’s Altman suggested more aggressive regulation. “Number one, I would form an agency that issues licensing of models above a certain scale of capabilities, and can take that license away to ensure certain safety compliance standards. Number two, I would create a set of safety standards, specific tests that a model has to pass before it can be deployed into the world. And third, I would require independent audits by experts who can say the model is or isn’t in compliance with safety thresholds.”
Marcus repeatedly suggested an international governing body and an FDA-like agency that would weigh a model’s benefits against its harms before allowing deployment. When asked for specifics on the global agency, Marcus said he was still “feeling [his] way” toward the right body for the task. “The UN should be involved,” as should the OECD, he suggested. “Ultimately we may need something like CERN — global, international, and neutral — but focused on AI safety.”
Marcus also recommended three rules for AI: “Number one, a safety review process, like we use [with] the FDA, before widespread deployment. Number two, a nimble monitoring agency to follow what’s going on, with authority to call things back. Number three, funding toward AI safety research.”
Blumenthal, who chaired the hearing, introduced it as “the first in a series intended to write the rules of AI.” The hearing also touched on Section 230, copyright, privacy, and AI firms’ accountability for varying degrees of harm.
-Brandon Gorrell