Artificial intelligence (AI) researchers find themselves in a seemingly never-ending race to develop more powerful AI systems. Who, exactly, is racing: the US and China, or a handful of mostly US-based labs? The identity of the contenders may matter less than the existence of the race itself. One lesson from the development of the atomic bomb is that perceived competition can be just as motivating as an actual race. When an AI lab suddenly goes silent, is it struggling, or is it closing in on a breakthrough?
OpenAI's release of ChatGPT in November 2022 illustrates the power of competition in the AI field. Google's management responded by declaring a "code red" for its AI strategy, and other labs rushed to bring their own AI products to the public. The attention OpenAI garnered created a sense that a race was underway, and competitors intensified their work accordingly.
To mitigate the negative aspects of this race, increased transparency between companies could help. The Manhattan Project offers a parallel: the US kept its atomic bomb development secret from the USSR, informing its wartime ally of the weapon's existence only a week after the Trinity test. At the Potsdam Conference in July 1945, President Truman personally told the Soviet premier, Joseph Stalin, about the atomic bomb. Stalin appeared unimpressed, expressing the hope that the US would use the weapon against the Japanese. In hindsight, the moment can be seen as a missed opportunity to head off the deadly nuclear arms race that followed World War II.
In July 2023, the White House secured voluntary commitments from several AI labs, including OpenAI, Google, and Meta, to promote transparency. The companies agreed to subject their AI systems to testing by internal and external experts before release and to share information on managing AI risks with governments, civil society, and academia. This is a step in the right direction, but governments still need to articulate the specific dangers they aim to address through such transparency.
A historical analogy can also be drawn between the destructive potential of nuclear weapons during World War II and the AI risks of today. The first atomic bombs were undeniably devastating, but the citywide destruction they caused was not entirely unprecedented during the war. In a single firebombing raid on Tokyo in March 1945, American bombers dropped over 2,000 tons of incendiary bombs and killed more than 100,000 residents, roughly the number killed in the Hiroshima bombing. Hiroshima and Nagasaki were chosen as atomic bomb targets precisely because they were among the few Japanese cities not already largely destroyed by conventional bombing: the US military believed it would be impossible to gauge the destructive power of the new weapons accurately if they were dropped on cities already in ruins.
After the war, US scientists who visited Hiroshima and Nagasaki observed that the cities did not look markedly different from others that had been firebombed with conventional weapons. This led to the realization that, the concept of deterrence notwithstanding, nuclear weapons would have to be used in large numbers to be decisive in warfare. The fusion weapons developed during the Cold War were thousands of times more powerful than the fission bombs dropped on Japan, and the sheer gap in scale made the magnitude of the destruction they could unleash difficult to comprehend.
Similarly, AI presents an order-of-magnitude problem. Biased algorithms and poorly implemented AI systems already threaten livelihoods and liberty, particularly for marginalized communities, but the worst risks associated with AI are still on the horizon. The true magnitude of the risks we are preparing for remains unclear, which makes it all the more important to determine now what can be done to address and mitigate them.
In conclusion, the race for more powerful AI systems is real, whether it is run between countries or between individual labs. History shows that competition, and even the mere perception of a race, can be a powerful motivator. Greater transparency between AI companies could be a crucial step in managing the race's worst dynamics, but governments must also specify the dangers that transparency is meant to address. The history of nuclear weapons offers a way to grasp the magnitude of the risks AI may pose, and addressing those risks now is essential to a safer and more secure future.