This is a follow-up to my first post, “To Everything There is a Season,” which advocated for federal regulation of Artificial General Intelligence (AGI).
The Senate held hearings this week on the subject of AI regulation. What makes this particular hearing so unusual is that it came about as the result of mutual concern in government and industry. Rarely do industries petition government to regulate their affairs.
You can find the hearing on C-Span, but I found the clip below of Sen. John Kennedy (R-LA) most fruitful. Kennedy is nothing if not entertaining, but he is also incisive, revealing the perfidy of so many appointees and witnesses who come before his committees. He has a talent for getting to the core of issues before him.
Here, he sets forth a few stipulations about AI which will be familiar to those of you who’ve followed the topic, then asks each witness for specific laws or regulations that Congress should enact. “This is your chance, folks, to tell us how to get this right. Please use it.”
Headline: He elicited some constructive and useful suggestions.
The witnesses were heavy-hitters in the industry and each had specific ideas.
Cristina Montgomery, Vice President and Chief Privacy & Trust Officer for IBM and member of the U.S. Chamber of Commerce AI Commission, and a member of the United States’ National AI Advisory Committee (NAIAC):
Disclosure and protection of the data used to train AI
Disclosure of the models and how they perform
Similar to the EU’s AI Act, identify and prevent development in highest risk use cases
Requiring impact assessments (“asking companies to show their work”) prior to deployment of AI models
Gary Marcus, professor emeritus of psychology and neural science at New York University and founder of Geometric Intelligence, a machine learning company. He is a psychologist, cognitive scientist and author, known for his research on the intersection of cognitive psychology, neuroscience, and AI:
“A safety review like we use with the FDA prior to widespread deployment”
“A nimble monitoring agency to follow what going on pre- and post-deployment with the authority to call things back”
Funding for AI safety research. He cited the lack of a common code of ethics and standards for safety.
Sam Altman, the entrepreneur and co-founder (with Elon Musk) of OpenAI, which created ChatGPT:
A new federal agency that licenses any effort above a certain scale of capabilities with the ability to take that license away [note from author: please don’t model it after the FCC]
A set of safety standards focused on “dangerous capability evaluations” of autonomous AI, such as “self-replication and self extrication into the wild.”
Frequent audits of models from experts to check safety compliance
In practice, these types of reviews and assessments are similar to the penetration and vulnerability tests that Information Security departments in industry and government currently perform with applications they deploy on the worldwide web to prevent data breaches by examining code, encryption and software configuration.
In the case of conventional software, every company has a strong incentive to prevent data breaches, because they are legally liable for damages and because breaches damage the company’s brand and can cause customers to flee.
I think experience has shown that accountability is a more effective regulator and ethical teacher than government agency processes alone, which can be captured by those they’re tasked with regulating. So to the above ideas, I would add two for consideration that are aimed at making humans accountable for the behavior of AGI:
Criminal liability for violating the safety standards suggested above, similar to those imposed for violating Sarbanes-Oxley (S/Ox) requirements for financial reporting.
Civil and criminal liability for damages resulting from AI deployments that result in physical or intellectual property theft, property loss, physical injury and death, and libel or fraud.
Not a bad start, guys.
-30-