Just days after President Joe Biden unveiled a sweeping executive order redirecting the federal government's approach to AI development, Vice President Kamala Harris announced at the UK AI Safety Summit on Tuesday a half dozen more machine learning initiatives the administration is undertaking. Among the highlights: the establishment of the United States AI Safety Institute, the first release of draft policy guidance on the federal government's use of AI, and a declaration on responsible military applications of the emerging technology.
In her prepared remarks, Harris emphasized the importance of adopting and advancing AI in a way that protects the public from potential harm while ensuring that everyone can enjoy its benefits. She acknowledged that while AI has the potential to do profound good, it also has the potential to cause profound harm, from AI-enabled cyber-attacks to AI-formulated bioweapons. These existential threats were a central theme of the summit.
To address these dangers, Harris announced the establishment of the United States AI Safety Institute (US AISI) within the National Institute of Standards and Technology (NIST). The institute will be responsible for creating and publishing guidelines, benchmark tests, and best practices for testing and evaluating potentially dangerous AI systems. It will also provide technical guidance to lawmakers and law enforcement on a range of AI-related topics, such as identifying generated content, mitigating AI-driven discrimination, and ensuring transparency in AI use.
The Office of Management and Budget (OMB) is set to release the administration’s first draft policy guidance on government AI use later this week. The draft guidance aims to advance responsible AI innovation while maintaining transparency and protecting federal workers from increased surveillance and job displacement. It will establish safeguards for the use of AI in transportation, immigration, health, education, and other public sector applications. Public comments on the draft guidance are encouraged and can be submitted through the ai.gov/input platform.
In addition to these initiatives, Harris highlighted the US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which has already garnered 30 signatories. The declaration sets norms for the responsible development and deployment of military AI systems. The administration is also launching a virtual hackathon to combat AI-empowered phone and internet scams, particularly those targeting the elderly. Participants will work to build AI models that can counter robocalls and robotexts.
Content authentication is another focus of the Biden-Harris administration. The Commerce Department, in collaboration with industry groups like the C2PA (Coalition for Content Provenance and Authenticity), will work to validate content produced by the White House and establish industry norms. The administration is also calling for international support in developing global standards for authenticating government-produced content.
Harris emphasized that the voluntary commitments made by AI companies are only an initial step toward a safer AI future, and she called for legislation that strengthens AI safety without stifling innovation. She acknowledged that, in the absence of regulation and strong government oversight, some technology companies prioritize profit over the well-being of customers, the security of communities, and the stability of democracies.
Overall, the Biden-Harris administration is taking significant steps to ensure the responsible development and use of AI in the United States. By establishing the United States AI Safety Institute, releasing draft policy guidance, and encouraging international cooperation, they aim to manage the potential dangers of AI while reaping its benefits. Legislation to strengthen AI safety will be a crucial next step in this process, striking a balance between innovation and protection.