OpenAI, the renowned artificial intelligence research lab, has once again highlighted the potential risks associated with the development of superintelligent AI. However, some critics argue that this focus on regulation and hypothetical scenarios distracts from addressing the more immediate harms caused by existing AI systems.
James Vincent, in a previous article, argued that OpenAI’s push to regulate superintelligence, often discussed alongside artificial general intelligence (AGI), can be read as a rhetorical strategy. Critics suggest this approach lets OpenAI CEO Sam Altman divert attention from the current negative impacts of AI systems while keeping lawmakers and the public engrossed in science-fiction scenarios.
While OpenAI continues to emphasize the need for robust governance and regulation of future superintelligent AI, recent data points to declining use of some of its existing applications. According to CNBC, citing Sensor Tower figures, installations of the ChatGPT and Bing apps dropped 38% in June. SimilarWeb data likewise shows a 9.7% decrease in worldwide traffic to the OpenAI website, along with a decline in time spent on the site.
The sharp drop in ChatGPT and Bing app installs raises the question of what is driving the trend. One explanation is seasonal: with schools out for summer vacation, students no longer need these tools for homework or other academic work. Another is that the AI boom has, for now, reached a saturation point, dampening both usage and interest.
These developments carry implications for OpenAI and the broader AI landscape. The decline in engagement and downloads suggests a shift in user preferences or a decreased reliance on AI-based tools, which could reflect both the limitations of existing AI technologies and the evolving needs and expectations of users.
Critics argue that instead of focusing primarily on the hypothetical risks of superintelligent AI, OpenAI should redirect attention to the immediate challenges posed by AI systems already in use. These technologies have proven susceptible to bias, ethical lapses, and misuse, causing tangible harm in domains such as facial recognition, automated decision-making, and content moderation.
By prioritizing the immediate harms and shortcomings of AI systems, OpenAI can play a more active role in shaping responsible and beneficial AI deployment. This includes addressing issues such as data biases, algorithmic transparency, and the potential for discriminatory outcomes. Additionally, efforts should be made to involve diverse perspectives and expertise in the development and governance of AI systems to ensure fairness, inclusivity, and accountability.
Moreover, the decline in user engagement should serve as a wake-up call for AI developers and researchers. It highlights the need to continuously improve and innovate AI technologies to meet users’ evolving demands and overcome the limitations of current systems. OpenAI and similar organizations should invest in research and development to enhance the capabilities, reliability, and usefulness of AI applications to regain and retain user trust.
While focusing on hypothetical risks and potential regulations for superintelligent AI is essential for long-term planning and preparedness, OpenAI should balance its efforts by actively addressing the immediate concerns associated with AI systems in use today. This comprehensive approach will contribute to a more responsible and beneficial development of AI technologies.
In conclusion, OpenAI’s ongoing discussions of the regulation and risks of superintelligent AI have attracted criticism, with some arguing that they divert attention from the present harms of existing AI systems. The recent decline in user engagement and app installations for ChatGPT and Bing raises questions about the reasons behind the trend. OpenAI should take note of these developments, prioritize immediate concerns, and continue to innovate to meet users’ evolving needs. By striking a balance between long-term planning and present-day challenges, OpenAI can contribute to a more responsible and beneficial AI landscape.