In a bizarre turn of events, Microsoft recently published and then retracted an AI-generated article that suggested visiting a Canadian food bank as a tourist attraction. The article, titled “Headed to Ottawa? Here’s what you shouldn’t miss!”, initially appeared on Microsoft Start, the company’s AI-aggregated news service. It quickly gained attention for its highly inappropriate recommendation that visitors “consider going into [the food bank] on an empty stomach.”
The Twitter user Paris Marx was the first to call out the story, highlighting the insensitivity of directing tourists to a place intended to support those in need. Marx emphasized that people who rely on food banks already face enough difficulties in their lives, and that treating such a place as a tourist attraction is completely inappropriate.
Following the backlash, the article was removed, and Microsoft senior director Jeff Jones stated that an investigation would be conducted to determine how it passed the company’s review process. The fact that the article was attributed to “Microsoft Travel,” with no mention of real human authors, raises questions about the involvement of actual people in the creation of this content.
Microsoft Start, which replaced Microsoft News in 2021, claims to apply “human oversight” to its algorithms, which sift through a large volume of content from partners. That oversight is intended to weigh factors such as freshness, category, topic type, opinion content, and potential popularity to align with user preferences. These claims are now under scrutiny, however, as it appears that AI-generated content slipped through the system.
This incident is not an isolated case of a company stumbling with AI-generated content. Earlier this year, CNET published numerous AI-generated articles on financial topics that were riddled with errors. Similarly, Gizmodo’s parent company, G/O Media, faced criticism when an AI-composed Star Wars article full of mistakes was posted on its site. These instances highlight the risks of relying solely on AI for content creation.
It is worth noting that Microsoft had previously laid off around 50 reporters from its news division in 2020 as it shifted towards AI-generated news. This decision, coupled with the recent incident, raises concerns about the impact on journalism and the quality of news when human journalists are replaced by AI systems.
While AI has its merits, it is clear that caution must be exercised in its application, particularly in sensitive areas such as news and content creation. The Associated Press has proceeded cautiously in incorporating AI into its coverage, recognizing the need for human involvement and accountability. Media outlets like Microsoft’s news publishing wing, by contrast, appear more willing to embrace fully AI-written articles, even if it means dealing with the aftermath of errors and insensitivity.
To maintain credibility and ensure responsible content generation, strict oversight and review processes must be in place wherever AI is used. Human editors and journalists can catch the issues, biases, and sensitivities that AI systems overlook. Collaboration between AI and human intelligence can produce better, more balanced content that respects the sensitivities of different topics and audiences.
As the use of AI continues to evolve, it is crucial for organizations to prioritize ethical considerations, user feedback, and continuous improvement. Learning from incidents like this one can help shape the future of AI-generated content towards a more responsible and respectful approach.