The use of illegally obtained information to train artificial intelligence (AI) is raising concerns in Japan. Experts and researchers are urging caution, warning that the practice could lead to numerous copyright infringement cases, job losses, the spread of false information and the leaking of confidential information. The government’s AI strategy council drafted a report on May 26 pointing out the lack of regulation around AI, including the risk of copyright infringement posed by the technology.
Japan has yet to introduce laws prohibiting AI from being trained on copyrighted material and illegally acquired information. The issue was raised on April 24 by Japanese lawmaker Takashi Kii, who also asked about guidelines for the use of AI chatbots such as ChatGPT in schools, another area of potential uncertainty. The chatbot is reportedly set to be adopted by the education system in March 2024.
Keiko Nagaoka, Japan’s Minister of Education, Culture, Sports, Science and Technology, stated that works can be used for information analysis regardless of the method used or the nature of the content. Kii pressed for more specific information, but no clear guidelines were given at the time of the discussion.
Because legislation does not yet recognize machines or robots as capable of authorship, AI technology lies in a gray area and is prone to copyright infringement disputes. The law protects only the way ideas are expressed, not the ideas themselves; the input comes from individuals, but the actual expression comes from the AI system itself. The current situation raises several open questions that will need to be resolved through legal proceedings and regulation.
Andrew Petale, a lawyer and trademarks attorney at Melbourne-based Y Intellectual Property, pointed out that the question remains a gray area. From the perspective of AI companies, however, their models do not infringe on copyright because their bots transform original work into something new, which they argue qualifies as fair use under U.S. law, where most of these disputes are unfolding.
In the United States, Google and Microsoft have made significant progress in creating AI systems said to pass the Creative Turing Test: machines that can compose music, write poetry and produce artwork nearly indistinguishable from that created by a human. However, the legality of training AI on copyrighted material remains a contentious issue as the technology continues to evolve.
AI technology has grown at an exponential rate in recent years, but regulatory authorities have struggled to keep up. A report by the European Union (EU) found that current AI technology could lead to discrimination, breaches of individuals’ privacy rights, and impactful decisions made solely by algorithms. In the report, the EU introduced a plan to create a common regulatory framework on AI, noting that such a framework is necessary to ensure safety and respect for fundamental rights.
Japanese experts have also urged the government to introduce guidelines regulating the creation and use of AI technology, covering aspects such as access to data, transparency, accountability and explainability. The experts believe such guidelines would help ensure that AI technology is used appropriately and legally, and would prevent misuse such as copyright infringement. In addition, AI developers are calling for the establishment of an international regulatory body to oversee AI.
In conclusion, the issue of training AI on copyrighted material and illegally obtained information is a complex one that poses legal, ethical and social challenges. The potential benefits of AI technology are vast, but regulatory authorities must address these challenges to ensure the technology can be used safely and legally while protecting individuals’ fundamental rights.