The Biden administration announced on Friday a voluntary agreement with seven leading AI companies, including Amazon and Microsoft.
At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of rules that stifle growth and governments eager to avoid missteps. By sidestepping direct command-and-control regulation, the administration avoids the pitfalls of imposing excessively burdensome rules. This is precisely the mistake the European Union has made over the years, with the end result of choking off innovation on the continent.
However, a closer examination of the voluntary agreement reveals some caveats. Notably, companies might feel pressured to participate, given the implicit threat of regulation. The line between a voluntary commitment and a mandatory obligation is, as is always the case with governments, blurry.
Furthermore, the commitments lack specificity and seem to be broadly aligned with what most AI companies are already doing: ensuring the safety of their products, prioritizing cybersecurity, and aiming for transparency. Although the president touts these commitments as groundbreaking steps, it might be more accurate to view them as the formalization of existing industry practices. This raises the question: Is the administration's move about optics, or is it a substantive policy action?
Despite its rhetoric, the Biden administration hasn't taken much in the way of action to regulate AI. To be clear, this may well be the right approach. But it suggests this agreement might be primarily a symbolic gesture aimed at placating the so-called nervous ninnies, the vocal critics concerned about the impact of AI, rather than a move toward aggressive regulation.
While managing risks and maintaining safety are laudable goals, the administration's short press release doesn't offer much detail either. The agreement does not spell out what specific outcomes it aims to achieve or what concrete steps the companies involved will take.
So, what does this all mean for the future of AI? The short answer: probably not much. This agreement seems to be largely a public relations exercise, both for the government, which wants to show it is taking concrete steps, and for the AI companies, which are keen to showcase their commitment to responsible AI development.
That said, it’s not an entirely hollow gesture. It does emphasize important principles of safety, security, and trust in AI, and it reinforces the notion that companies should take responsibility for the potential societal impact of their technologies. Moreover, the administration’s focus on a cooperative approach, involving a broad range of stakeholders, hints at a potentially promising direction for future AI governance. However, we should also not forget the risk of government growing too cozy with industry.
Still, let’s not mistake this announcement for a seismic shift in AI regulation. We should consider this a not-very-significant step on the path to responsible AI. At the end of the day, what the government and these companies have done is put out a press release.