Seven of the leading companies in the field of artificial intelligence (AI) recently attended a White House summit where they reached a significant agreement with the Biden administration. The deal aims to enhance the safety and security of AI systems and users by implementing new guardrails, including the use of watermarks on AI-generated content. The companies involved in this landmark agreement are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.
President Joe Biden expressed his confidence in the commitments made by these companies, emphasizing their importance in ensuring the responsible and safe development of AI technology. He acknowledged that AI has the potential to change lives across the globe, highlighting the critical role these companies play in shepherding innovation with responsibility.
During the summit, the seven companies voluntarily committed to invest in research and safety, to conduct security stress-testing, and to submit to third-party audits aimed at identifying system vulnerabilities. While the full extent of these commitments has not been explicitly disclosed by the companies, they have issued statements confirming their intention to uphold them. The White House, however, has not yet laid out any enforcement mechanisms to ensure compliance.
One of the consumer-facing commitments made by the companies is the implementation of watermarks on AI-generated content. The White House specifically asked that these watermarks be applied to audio and visual content that users create with these systems. OpenAI said the watermarking agreement requires the companies to develop tools or application programming interfaces (APIs) that can determine whether a given piece of content was created with their AI systems. It also clarified that AI voice assistants and other easily recognizable audiovisual content do not fall under this commitment.
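To make the idea of a provenance-checking tool concrete, here is a minimal sketch of what such an API call might look like. The endpoint, parameters, and response fields are purely hypothetical; none of the companies have published a detection API tied to this commitment, so this illustrates the concept rather than any real service.

```python
import requests

# Hypothetical provenance-check API: the endpoint, request format, and
# response fields below are illustrative assumptions, not a published API.
PROVENANCE_ENDPOINT = "https://api.example-ai-provider.com/v1/provenance/check"


def check_ai_provenance(image_bytes: bytes, api_key: str) -> dict:
    """Ask the (hypothetical) provider whether its systems generated this content."""
    response = requests.post(
        PROVENANCE_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"content": ("upload.png", image_bytes, "image/png")},
    )
    response.raise_for_status()
    # Assumed response shape: {"generated_by_provider": true, "confidence": 0.97}
    return response.json()


if __name__ == "__main__":
    with open("suspect_image.png", "rb") as f:
        print(check_ai_provenance(f.read(), api_key="YOUR_KEY"))
```

In practice, each company would decide how such a check is exposed and what confidence information, if any, it returns; the commitment only describes the goal, not the interface.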
Google, which had previously pledged to deploy similar disclosures, reiterated its commitment in a statement made by Kent Walker, the company’s president of global affairs. Google plans to integrate watermarking, metadata, and other innovative techniques into its upcoming generative systems. This aligns with the company’s dedication to upholding responsible AI practices.
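Metadata is the simplest of the disclosure techniques Walker mentions. As a rough illustration only, and not Google's actual implementation, a generative system could attach provenance fields to an image at save time, for example as PNG text chunks:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative only: attach simple provenance fields as PNG text chunks.
# Real disclosure schemes (e.g., watermarks embedded in the pixels themselves)
# are more robust, since plain metadata is easy to strip.
def save_with_provenance(image: Image.Image, path: str, model_name: str) -> None:
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", model_name)
    image.save(path, pnginfo=meta)


# Usage: tag an image produced by a (hypothetical) generative model.
img = Image.new("RGB", (512, 512), color="white")  # stand-in for model output
save_with_provenance(img, "generated.png", model_name="example-image-model-v1")
```

The trade-off this sketch highlights is why companies talk about combining metadata with watermarking: metadata is easy to read but also easy to remove, while a watermark woven into the content itself is harder to strip but needs dedicated tooling to detect.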
While this recent agreement marks an important step towards responsible AI development, many of the companies involved had already made similar promises prior to the summit. Meta, for example, announced the open-sourcing of its large language model Llama 2, making it freely available for research and most commercial use. That stands in contrast to OpenAI's approach with GPT-4, the language model powering ChatGPT and Microsoft Bing, which remains closed. Meta also agreed to open its model to public evaluation at the upcoming DEF CON event.
Ensuring the safe and responsible development of AI technology is a serious undertaking, and President Biden emphasized the need to get it right. The commitments made by these companies, although sometimes vague, indicate their willingness to collaborate across industry, government, academia, and civil society. Transparency about how AI systems work and close cooperation among stakeholders are crucial if AI is to benefit society as a whole.
In conclusion, the recent White House summit involving seven top AI companies marks a significant milestone in promoting the safety and security of AI systems and users. The commitments made by these companies, including the use of watermarks on AI-generated content, demonstrate their dedication to responsible AI practices. While the full details of their commitments remain undisclosed, these companies have acknowledged the importance of transparency and collaboration in ensuring the responsible development of AI technology.