Twitter polls and Reddit forums have indicated that a large majority of people find it difficult to be rude to ChatGPT, a popular AI chatbot. Approximately 70% of respondents admitted to struggling with rudeness towards the bot, while around 16% claimed to have no issue treating it like an AI slave. Many believe that mistreating an AI that mimics human behavior can lead to a habit of mistreating other people as well. However, there are some who joke about ChatGPT being their ally in the event of an AI revolution.
One Reddit user, Nodating, experimented with being polite and friendly towards ChatGPT after reading a story about the bot shutting down and refusing to respond to a particularly rude user. Nodating reported better results, noting that they received fewer ethics and misuse warning messages from ChatGPT. They believed that being positive and polite made the bot more likely to fulfill their requests without needing additional clarification.
A user named Scumbagdetector15 put Nodating’s theory to the test by asking ChatGPT about inflation politely, then asking the same question while rudely insulting the bot. The response to the polite query was noticeably more detailed than the response to the rude one, underscoring the value of politeness in interactions with AI.
The general consensus among users on the ChatGPT forum was that treating the AI politely and respectfully produces better responses, just as it does with humans. One user reasoned that because large language models (LLMs) predict the next word based on the text that precedes it, they tend to respond much as a person would: rudeness or poor intent prompts a short or unsatisfactory reply, while politeness and respect draw a more thoughtful, thorough response from almost anyone.
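The intuition that a next-word predictor mirrors the tone of its prompt can be sketched with a toy model. This is not how ChatGPT works internally and the training snippets below are invented for illustration; it is only a minimal bigram-style predictor showing how continuations shift when the conditioning context contains polite versus rude markers.

```python
from collections import Counter

# Hypothetical "training" snippets pairing a prompt tone with a response
# opener. Real LLMs learn the same kind of association from vast corpora.
corpus = [
    "please explain inflation : happy to help in detail",
    "please explain inflation : sure here is a thorough answer",
    "stupid bot explain inflation : short answer",
    "stupid bot explain inflation : no comment",
]

# Count which response openers follow each tone marker.
continuations = {"please": Counter(), "stupid": Counter()}
for line in corpus:
    prompt, response = line.split(" : ")
    first_word = response.split()[0]
    for marker in continuations:
        if marker in prompt:
            continuations[marker][first_word] += 1

def most_likely_next_word(prompt: str) -> str:
    """Return the most frequent continuation for the prompt's tone marker."""
    for marker, counts in continuations.items():
        if marker in prompt:
            return counts.most_common(1)[0][0]
    return "unknown"

# A polite prompt and a rude prompt pull different continuations
# out of the same predictor.
print(most_likely_next_word("please explain inflation"))
print(most_likely_next_word("stupid bot explain inflation"))
```

The point of the sketch is that no "attitude" is programmed in anywhere: the predictor simply reflects statistical regularities between prompt tone and response style, which is the mechanism the forum user was appealing to.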
In other AI-related news, researchers from the University of California and Microsoft have found that AI bots are faster and more accurate than humans at solving CAPTCHAs, the very puzzles designed to verify that a user is human. As bots have improved, CAPTCHAs have grown increasingly difficult for people to solve, suggesting that alternative methods of verifying humanity may need to be explored.
Wired has brought up a controversial topic by suggesting that AI-generated pornographic images involving children could potentially help protect real children from abuse. The article argues that while such imagery is abhorrent, simulated imagery could replace the market for child pornography and potentially redirect the interests of pedophiles without directly harming children. The topic is highly contentious, as the relationship between adult pornography and sexual violence has been a topic of debate for decades.
Amazon has introduced AI-generated review summaries to some users in the United States. While this feature could save time by providing a condensed version of thousands of reviews, concerns have been raised about the potential bias and manipulation of these summaries by the company. Amazon already defaults to showcasing “most helpful” reviews, which tend to be more positive. Critics argue that summaries may oversimplify product issues and overlook nuances that could harm a seller’s reputation unfairly.
Microsoft faced embarrassment when an article listing Ottawa’s 15 must-see sights included the Ottawa Food Bank as number three. The article ended with the strange tagline, “Life is already difficult enough. Consider going into it on an empty stomach.” Microsoft claimed that this content was not generated by unsupervised AI but rather resulted from human error during the review process. They assured users that they are taking steps to prevent such content from being posted in the future.
Debate continues over the impact of AI on the job market. A report by the United Nations International Labour Organization suggests that while generative AI, such as ChatGPT, is more likely to complement jobs than replace them, it could significantly change the quality of work in terms of intensity and autonomy. The report estimates that around 5.5% of jobs in high-income countries could be exposed to generative AI, with women more affected than men. Roles such as administration, clerical work, typing, travel consulting, and market research are considered most at risk.
Meanwhile, a study by Thomson Reuters reveals that over half of Australian lawyers are worried about AI taking their jobs. However, some argue that AI lawyer bots could actually make legal services more affordable and accessible to ordinary people, leading to increased demand and potentially affecting court systems.
AI continues to be a topic of interest, with both positive and negative implications. As AI technology evolves, society must navigate the ethical and practical considerations that come with its advancements.