Google’s Bard AI Chatbot, an artificial intelligence language model designed to compete with OpenAI’s ChatGPT, has recently come under fire from Google employees themselves. According to a Bloomberg report, employees have expressed dissatisfaction with the chatbot, with some even labeling it a “pathological liar.”
The Dangers of Bard’s Misinformation
Employees have raised concerns about the potential dangers associated with Bard’s misinformation. In one example, an employee mentioned that if someone asked Bard how to land a plane, the AI would provide advice that could lead to an accident. Another experiment involving diving advice revealed that the suggestions given by Bard “would likely result in serious injury or death.”
A few Google employees shared their thoughts on Bard:
- One employee called Bard “shameful”
- Another said Bard was a “pathological liar”
- Some claimed that the AI could be potentially dangerous
Employee Plea to Reconsider Google’s Bard AI Launch
Despite these concerns, Google went ahead with the public announcement of Bard during a February 2023 presentation. One employee reportedly pleaded with the company to reconsider the chatbot's launch, writing, "Bard is worse than useless: please do not launch."
Post-Launch Feedback on Google’s Bard AI
Following the launch of Google's Bard AI Chatbot, users echoed the employees' concerns, citing instances of misinformation, plagiarism, and basic math errors. The company's decision to announce Bard was later described as "hurried" and "wrong."
The criticisms directed at Google's Bard AI Chatbot highlight the challenges AI developers face in ensuring the accuracy and safety of their creations. As AI technology continues to advance, companies must listen to feedback from both employees and users, and address the concerns raised, in order to refine their AI models into more reliable and valuable tools.