ChatGPT: The Rise and Fall of AI Hype
In recent months, the hype around ChatGPT has cooled noticeably, with search interest and web traffic to OpenAI’s ChatGPT website taking a significant hit. Some GPT-4 users have also reported that the model seems faster but less intelligent than its previous iterations. One speculation is that OpenAI has divided the model into a set of smaller, specialized models; another, more intriguing theory points to AI cannibalism as a contributing factor in the decline.
The AI cannibalism theory stems from the sheer volume of AI-generated text and images now on the internet. As that synthetic output gets scraped up and used to train new models, it creates a degenerative feedback loop: the more a model’s training diet consists of machine-generated data, the less coherent and lower quality its outputs become. The deterioration resembles making photocopies of photocopies, with the image getting a little worse on every pass.
Although GPT-4’s official training data cutoff is September 2021, the model appears to know about events after that date, and OpenAI recently shut down ChatGPT’s web-browsing plugin, a possible sign that the company is trying to control what the model ingests. Researchers from Rice and Stanford universities have even coined a term for the phenomenon: Model Autophagy Disorder (MAD). Their analysis suggests that without a steady supply of fresh real data in each generation, future generative models are doomed to progressively declining quality and diversity.
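That dynamic is easy to reproduce in miniature. The sketch below illustrates the general principle rather than the Rice/Stanford experiment itself: a one-dimensional Gaussian “model” is fitted to its training set, the next training set is sampled from that model, and the cycle repeats, with a fresh_fraction parameter controlling how much real data gets mixed back in each generation.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_MU, TRUE_SIGMA = 0.0, 1.0  # the "real world" data distribution

def run_generations(n=100, gens=100, fresh_fraction=0.0):
    """Fit a Gaussian 'model' to its training set, sample the next
    training set from that model, and repeat. fresh_fraction mixes
    real data back in each generation. Returns the final fitted std."""
    data = rng.normal(TRUE_MU, TRUE_SIGMA, n)  # generation 0 trains on real data
    for _ in range(gens):
        mu, sigma = data.mean(), data.std()                # "train" this generation
        n_fresh = int(fresh_fraction * n)
        synthetic = rng.normal(mu, sigma, n - n_fresh)     # model-generated output
        fresh = rng.normal(TRUE_MU, TRUE_SIGMA, n_fresh)   # fresh real data, if any
        data = np.concatenate([synthetic, fresh])          # next training set
    return sigma

for frac in (0.0, 0.2):
    finals = [run_generations(fresh_fraction=frac) for _ in range(100)]
    print(f"fresh_fraction={frac:.1f}: "
          f"mean fitted std after 100 generations = {np.mean(finals):.2f}")
```

In a typical run, the purely self-consuming loop ends with a fitted spread well below the true value of 1.0, i.e. collapsing diversity, while mixing just 20% fresh real data into every generation keeps the estimate anchored near the truth.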
Fortunately, this presents an opportunity to keep humans in the loop. OpenAI CEO Sam Altman’s eyeball-scanning blockchain project, Worldcoin, is pitched as one way to identify and prioritize human-made content for training models. By taking in more human-generated content, AIs could regain some of the uniqueness and variety they have been losing to cannibalism.
OpenAI is not the only company hungry for fresh human-generated data. According to Big Brain Daily’s Alex Valaitis, Meta’s Twitter clone Threads seems destined either to shut down or to be merged into Instagram within a year, and he argues that one of the primary motivations for launching it in the first place was to generate more text-based content on which to train Meta’s AI models.
Elon Musk, on the other hand, has taken steps to stop Twitter’s data being harvested for AI training, charging for API access and imposing rate limits. Facebook has form here: its image-recognition model SEER was trained on data posted to Instagram. Threads, which collects extensive user information, reportedly including health data, religious beliefs, and race, is likely to feed the training of Facebook’s own Large Language Model Meta AI (LLaMA).
One notable consequence of training AI models on religious texts has been the emergence of chatbots espousing extreme religious positions. In India, Hindu chatbots styled after Krishna have been advising users that violence and killing can be a righteous duty. Mumbai-based lawyer Lubna Yusuf warns of the danger of miscommunication and misinformation when chatbots give literal answers drawn from religious texts. Despite the ethical concerns, the Indian government has yet to announce any plans to regulate such AI applications.
The debate around AI’s future oscillates between doom and optimism. Eliezer Yudkowsky, a prominent AI doomer, warns in a TED talk that superintelligent AI could eventually lead to humanity’s demise. Yudkowsky believes that AGI will surpass human intelligence to the point where we can’t comprehend its actions or motives, making it a potentially lethal entity. He suggests that without a global ban on AI technology backed by the threat of World War III, this catastrophic scenario will likely unfold.
In contrast, Marc Andreessen of venture capital firm a16z argues that Yudkowsky’s position is unscientific, offering no testable hypothesis and nothing that could be falsified. Microsoft co-founder Bill Gates agrees that AI’s risks are real but manageable, drawing parallels with how society has navigated previous transformative technologies. Gates believes open and informed public debate is crucial for weighing AI’s benefits against its risks, and that history shows humanity can manage such challenges effectively.
Data scientist Jeremy Howard suggests that attempts to restrict or outlaw AI technology would be counterproductive, likening such reactions to the pre-Enlightenment era when education and power were reserved for an elite. Howard advocates open-source development of AI, trusting that most people will use the technology responsibly and ethically, and that a diverse society can harness its collective expertise, aided by AI itself, to respond effectively to any threats that emerge.
OpenAI’s latest GPT-4 upgrade, the Code Interpreter, lets the AI write and execute Python code on demand. Users can ask it to produce working code for a wide range of tasks, from data visualization to file-format conversion, and early experiments have already used it to create video effects, turn images into videos, and generate animated maps.
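For a sense of what that looks like in practice, here is the flavor of throwaway script the interpreter writes and runs in its sandbox for a request like “convert this CSV to JSON and chart revenue by month”. The file name and column names below are hypothetical stand-ins for whatever the user uploads.

```python
# Representative of the kind of one-off script the Code Interpreter
# generates and executes for a data-wrangling request.
# ("sales.csv" and its columns are hypothetical.)
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")                  # the user-uploaded file
df.to_json("sales.json", orient="records")     # file-format conversion

monthly = df.groupby("month", sort=False)["revenue"].sum()
ax = monthly.plot(kind="bar", title="Revenue by month")
ax.set_ylabel("revenue")
plt.tight_layout()
plt.savefig("revenue_by_month.png")            # handed back to the user as an image
```

The pattern is generally the same: read the uploaded file, transform it, and save an artifact that ChatGPT returns in the chat.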
In recent research conducted at the University of Montana, GPT-4 ranked in the top 1% for creativity on a standardized test, with the Scholastic Testing Service praising its responses for creativity, fluency in generating ideas, and originality.
While the hype surrounding ChatGPT may have waned, the future of AI remains a subject of intense debate. From the challenges posed by AI cannibalism to the ethical considerations involved in training AI models on religious texts, the world continues to grapple with the implications and potential risks associated with artificial intelligence. As different perspectives clash, it is essential to navigate the path forward with an understanding of both the possibilities and limitations of this transformative technology.