Platformer, a newsletter on the intersection of Silicon Valley and democracy, looks at an important advance in Bard, Google’s answer to ChatGPT, and how it aims to solve a critical problem with chatbots: their tendency to make things up.
Ever since chatbots were introduced, their creators have warned against blindly trusting the information they provide. ChatGPT and similar tools generate text by predicting likely words rather than drawing from a database of established facts. This means that chatbots often make confident but incorrect guesses based on the massive collection of text they were trained on. As a result, even highly educated people can be fooled by their responses, as a lawyer demonstrated when he unknowingly used citations generated by ChatGPT that were completely fabricated.
This lack of reliability has made chatbots largely useless as research assistants. They can provide quick answers, but they often fail to cite their sources, leaving users to spend additional time verifying the accuracy of what they were told. Recognizing this, Google has introduced a new feature in Bard to address the problem.
Previously, Bard had a “Google It” button that allowed users to submit their query to Google search for a second opinion. However, the responsibility of determining whether the chatbot’s response was truthful still fell on the user. With the new update, Bard takes on more of that burden. After the chatbot answers a query, clicking the Google button triggers a double-check of the response: Bard reads the answer and evaluates whether there is content across the web to substantiate it. Sentences within the response are then highlighted in green or brown. Green-highlighted sentences link to the web pages that support them, giving users the source of the information. Brown-highlighted sentences indicate that Bard could not find a source, suggesting a likely mistake.
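To make the idea concrete, here is a minimal, hypothetical sketch of sentence-level substantiation. This is not Google’s implementation; the `search` callback, the word-overlap heuristic, and the green/brown labels are illustrative assumptions only.

```python
# Toy sketch of a "double-check" pass, NOT Google's actual implementation.
# Each sentence of a chatbot answer is labeled "green" (a page appears to
# support it) or "brown" (no supporting page found).

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CheckedSentence:
    text: str
    label: str             # "green" or "brown"
    source: Optional[str]  # URL of a supporting page, if any

def double_check(answer: str,
                 search: Callable[[str], list[tuple[str, str]]]) -> list[CheckedSentence]:
    """Split the answer into sentences and look for corroborating pages.

    `search` is a caller-supplied (hypothetical) function that takes a query
    and returns (url, snippet) pairs, e.g. from a web-search API.
    """
    checked = []
    for sentence in (s.strip() for s in answer.split(".") if s.strip()):
        hits = search(sentence)
        if hits:
            url, _snippet = hits[0]
            checked.append(CheckedSentence(sentence, "green", url))
        else:
            checked.append(CheckedSentence(sentence, "brown", None))
    return checked

# Stubbed usage: match sentences against a tiny local "web" by word overlap.
if __name__ == "__main__":
    corpus = {
        "https://example.com/fish-odor":
            "Lemon and baking soda help remove fish odors from a kitchen.",
    }

    def fake_search(query: str) -> list[tuple[str, str]]:
        words = [w.lower() for w in query.split() if len(w) > 3]
        return [(url, text) for url, text in corpus.items()
                if sum(w in text.lower() for w in words) >= 2]

    answer = "Lemon helps with fish odors. Swordfish glow in the dark."
    for item in double_check(answer, fake_search):
        print(item.label, "|", item.text, "|", item.source)
```

In this toy version the first sentence comes back “green” with a source URL and the second comes back “brown,” mirroring the highlighting behavior the article describes; a real system would rely on live search and far more robust matching.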
The ability to double-check Bard’s responses is a step forward in ensuring the accuracy of information provided by AI language models. It lets users verify the chatbot’s statements by following the cited sources, and it flags passages that may be wrong. While some manual verification is still required, it is a welcome development in holding AI models accountable for their mistakes.
In an interview, Jack Krawczyk, a senior director of product at Google, shared a personal experience in which he used Bard to look up ways to get rid of the smell of swordfish in his house. Bard initially provided incorrect information, but Krawczyk was able to double-check the response and find an accurate answer. The example highlights why double-checking is needed: chatbots like Bard, despite their advancements, can still make mistakes.
Another major update to Bard is its ability to connect with Google products such as Gmail, Docs, Drive, YouTube, and Maps. These extensions let users search, summarize, and ask questions about documents stored in their Google accounts in real time. While the feature is currently available only for personal accounts, it points toward richer ways of browsing and accomplishing tasks through a conversational interface. The extensions are not perfect, however, and can produce inaccurate results, such as surfacing the wrong emails or suggesting spam messages.
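As a rough illustration of the extension pattern (again, not Google’s actual API), a chatbot can route a request to whichever connected service seems able to handle it and fold the result into its reply. The connector names and keyword routing below are invented for the sketch.

```python
# Toy sketch of the "extensions" pattern, NOT Google's actual API: the chatbot
# dispatches a request to a registered connector, then summarizes its output.

from typing import Callable

# Hypothetical connectors; real ones would call Gmail, Drive, Maps, etc.
def search_mail(query: str) -> str:
    return f"(pretend Gmail results for '{query}')"

def search_docs(query: str) -> str:
    return f"(pretend Drive/Docs results for '{query}')"

EXTENSIONS: dict[str, Callable[[str], str]] = {
    "mail": search_mail,
    "docs": search_docs,
}

def answer_with_extensions(user_request: str) -> str:
    """Pick an extension by a crude keyword match and use its output.

    A real assistant would let the model itself decide which tool to call;
    the keyword routing here is purely illustrative.
    """
    lowered = user_request.lower()
    if "email" in lowered or "mail" in lowered:
        tool = EXTENSIONS["mail"]
    elif "doc" in lowered or "file" in lowered:
        tool = EXTENSIONS["docs"]
    else:
        return "No connected service needed; answering from the model alone."
    return f"Based on {tool(user_request)}, here is a summary..."

if __name__ == "__main__":
    print(answer_with_extensions("Summarize the emails about my flight"))
    print(answer_with_extensions("What's in the doc about Q3 planning?"))
```

The article’s caveat applies here too: whatever the routing mechanism, the connected data can be retrieved or summarized incorrectly, which is why the results still need a skeptical read.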
Looking ahead, there is a question of whether AI will be able to check its own work effectively in the long term. For now, the responsibility of steering chatbots toward accurate answers lies with the user, and tools that push AI models to cite their sources are crucial. The hope is that future advancements will let models verify the accuracy of their responses on their own, without the user having to ask.
As AI language models continue to evolve, ensuring their reliability and accountability becomes a high priority. Google’s efforts to improve Bard by incorporating double-checking and integration with other Google products are significant steps in the right direction. With further advancements, users can expect more accurate and trustworthy interactions with chatbots, ultimately enhancing their usability as research assistants and more.