In a recent development, a group of prominent organizations, including GitHub, Hugging Face, Creative Commons, EleutherAI, LAION, and Open Future, has jointly submitted a paper to EU policymakers, urging them to provide more support for open-source AI development as they finalize the AI Act. The paper presents a set of recommendations to the European Parliament addressing key aspects of AI regulation.
One of the main suggestions put forward by these companies is the need for clearer definitions of AI components. By clarifying specific terms and concepts related to AI, policymakers can establish a more solid foundation for the regulation. Furthermore, the paper emphasizes the importance of distinguishing between hobbyists and researchers working on open-source models, ensuring that they are not unfairly treated as commercial entities benefiting from AI. This distinction enables the preservation of an open, collaborative environment for AI development.
Another significant recommendation highlighted in the paper pertains to the allowance of limited real-world testing for AI projects. The companies argue that stringent regulations prohibiting real-world testing would impede research and development efforts significantly. By enabling open testing, developers can gather valuable insights and improve the functionality of their AI models, ultimately leading to safer and more reliable applications.
Additionally, the group suggests setting proportional requirements for different foundation models. This approach recognizes that not all AI models pose the same level of risk and should be subject to different regulatory measures accordingly. By tailoring requirements based on the nature of the model, the EU can strike a balance between addressing risks and fostering innovation in the AI landscape.
The companies express their optimism about the potential of the AI Act to set a global precedent in regulating AI effectively while also encouraging innovation. They applaud the EU for taking a proactive approach and seek to further this goal by supporting the open ecosystem approach to AI. This approach, characterized by the open-source sharing of AI tools and knowledge, has proven beneficial for collaboration and progress in the field.
Peter Cihon, GitHub’s senior policy manager, describes the paper’s objective as guiding lawmakers on how best to support AI development. The companies involved recognize the importance of having their voices heard in shaping AI regulation, not only within the EU but globally. By sharing their insights and recommendations, they aim to encourage policymakers to adopt approaches that facilitate responsible AI development and regulation.
It is worth noting that the EU’s AI Act has faced criticism for being too broad in its definitions while also being too narrowly focused on the application layer of AI technologies. These concerns highlight the challenge of finding the right balance between comprehensive regulation and flexibility to accommodate diverse AI applications.
Open-source development of AI models has been a subject of debate among developers and companies. Advocates of open-source AI emphasize the benefits of accessibility, transparency, and community collaboration. They argue that AI development works best when everyone has free access to models and can contribute to their improvement. However, challenges arise when companies like OpenAI limit access to their models or stop sharing research due to competition and safety concerns.
The paper raises concerns about proposed requirements for high-risk models, arguing that they may disproportionately burden smaller developers with limited financial resources. The companies contend that costly third-party audits may not be necessary to mitigate the risks associated with foundation models, and that regulatory measures should be proportionate to the level of risk involved.
Finally, the paper notes that AI models can currently be tested only in closed experiments, a practice meant to mitigate potential legal issues arising from untested products. The companies contend that this constraint limits the insights developers can gain into the functionality and safety of AI applications, reinforcing their call for allowing limited real-world testing.
The recommendations put forward in this paper reflect the perspectives and concerns of prominent AI organizations, which clearly view the EU’s AI Act as an opportunity to shape the global landscape of AI regulation and promote responsible AI development. By advocating for clearer definitions, proportional requirements, open-source principles, and the allowance of real-world testing, they aim to balance risk mitigation with innovation in the AI field. Ultimately, their goal is to support the development of robust and trustworthy AI technologies while fostering an environment of collaboration and progress.