The Biden administration said Friday it struck a deal with some of the biggest U.S. technology companies to manage risks posed by artificial intelligence. However, the agreement didn’t directly address how AI systems are trained, a crucial issue as AI companies face lawsuits over alleged copyright violations.
The White House said the commitments were being made by seven major AI companies that met with President Joe Biden on Friday: Amazon.com (ticker: AMZN), Meta Platforms (META), Microsoft (MSFT) and its investee company OpenAI, Alphabet’s (GOOGL) Google, and the privately held firms Inflection and Anthropic.
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI—safety, security, and trust—and mark a critical step toward developing responsible AI,” the White House said in a statement.
The most striking commitment was to develop mechanisms, such as a watermarking system, that would let people know when content is AI-generated. The measure could reduce the risk of deepfakes, AI-generated content that can be hard to distinguish from authentic videos, images, and audio and that can fuel disinformation. It could also come as a massive relief to artists and authors who fear a tidal wave of AI-generated content flooding their industries.
However, the commitments didn’t include requiring the companies to disclose the data used to train their AI systems. That was the key issue highlighted earlier this month when comedian Sarah Silverman and two other authors filed a pair of proposed class-action lawsuits over the alleged use of unlicensed copyrighted material in training AI models.
On the issue of safety, the companies pledged to undertake internal and external security testing of AI systems before their release and to share information on potential AI risks.
The commitments on security included investing in cybersecurity and protecting proprietary, unreleased model weights, as well as facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
While the White House is eager to give the impression it is on top of AI development, the question is how closely the companies will—or can—adhere to these voluntary pledges.
“The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped up research on the wide range of risks posed by generative AI,” said Paul Barrett, deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business, in a statement.
The agreement brokered by the White House is separate from efforts that might be pursued in Congress to make laws to regulate AI. The White House said it was also developing an executive order and will pursue bipartisan legislation on the topic.
Microsoft President Brad Smith said in a blog post Friday that the company supported the commitments, which could contribute to talks over an international code of conduct on AI.
One notable person not featured on the guest list was Elon Musk, the chief executive of Tesla (TSLA) and founder of xAI. Musk wasn’t invited to a similar meeting earlier this year.
Twitter and Tesla didn’t immediately respond to a request for comment from Barron’s on whether Musk or other representatives of either company had been invited to the White House gathering.
Write to Adam Clark at [email protected]