The United States National Institute of Standards and Technology (NIST) and the Department of Commerce have established the Artificial Intelligence (AI) Safety Institute Consortium and are now seeking members to join. This consortium aims to evaluate AI systems and enhance the safety and trustworthiness of this emerging technology.
According to an official notice published on November 2, 2023, by NIST in the Federal Registry, the purpose of this collaboration is to address the challenges associated with the development and deployment of AI. The consortium will collaborate with non-profit organizations, universities, other government agencies, and technology companies to achieve its goals.
In order to ensure a human-centered approach to AI safety and governance, the consortium will develop and implement specific policies and measurements. They will also focus on various functions, including the development of measurement and benchmarking tools, policy recommendations, red-teaming efforts, psychoanalysis, and environmental analysis.
The establishment of this consortium is a response to a recent executive order issued by US president Joseph Biden. The executive order introduced six new standards for AI safety and security, although these standards have not yet been legally enacted. This initiative by the US government reflects the growing importance of AI safety and the need to address potential risks associated with its development and deployment.
While some European and Asian countries have already started implementing policies to govern AI systems in terms of user privacy, citizen security, and mitigating unintended consequences, the US has lagged behind in this area. President Biden’s executive order and the creation of the AI Safety Institute Consortium mark progress towards the establishment of specific policies for AI in the US. However, there is still a lack of a concrete timeline for implementing laws governing AI development and deployment beyond existing regulations that apply to businesses and technology.
Many experts believe that current laws are insufficient when it comes to regulating the rapidly evolving AI sector. As AI technology continues to advance and become embedded in various aspects of society, there is a growing need for comprehensive and tailored regulations to address the unique challenges and potential risks associated with AI.
The AI Safety Institute Consortium aims to bridge this gap by bringing together various stakeholders to collaborate on addressing these challenges. By leveraging the expertise and perspectives of non-profit organizations, universities, government agencies, and technology companies, the consortium can develop comprehensive solutions and guidelines to enhance AI safety and trustworthiness.
The development of measurement and benchmarking tools will enable the assessment of AI systems based on predefined criteria and standards. This will help identify areas that require improvement and ensure that AI systems are developed and deployed with safety in mind.
Policy recommendations will provide guidance to lawmakers and regulators on how to create a regulatory framework that balances innovation and safety. By taking a human-centered approach, the consortium can ensure that AI systems are designed to benefit society as a whole while minimizing potential risks.
Red-teaming efforts involve simulating potential attacks or scenarios to identify vulnerabilities in AI systems. By conducting these exercises, the consortium can test the robustness and security of AI systems, leading to improvements in their design and implementation.
Psychoanalysis and environmental analysis will assess the impact of AI systems on individuals and their surroundings. This includes considerations such as the ethical implications of AI algorithms and their potential biases, as well as the environmental impact of AI infrastructure.
Overall, the AI Safety Institute Consortium is an important initiative that brings together a diverse range of stakeholders to address the challenges associated with AI development and deployment. By working together, these organizations can contribute to the establishment of comprehensive policies and guidelines that promote the safety and trustworthiness of AI systems in the United States.