Google’s AI chatbot, Bard, has recently been updated with numerous features to compete with OpenAI’s ChatGPT. However, doubts about its reliability have emerged. Debbie Weinstein, the managing director of Google UK, expressed concerns during an interview with the BBC, stating that Bard may struggle to provide trustworthy information. She emphasized that people should rely on Google’s search engine for accurate answers.
Google Bard’s homepage currently acknowledges these limitations, noting that the chatbot may not always provide accurate responses. It does not, however, explicitly advise users to verify its answers through a traditional search engine. Like other AI chatbots, Bard is prone to hallucination, a failure mode in which it confidently presents inaccurate information as fact. Even OpenAI’s more powerful GPT-4 language model is not immune to this problem.
The launch of Bard in February was marred by incorrect responses during a demonstration, leading to a decline in Google’s share price. This incident highlighted the common problem of fabricated and inaccurate answers across various AI systems.
Initially built on the LaMDA large language model, Bard was later moved to the more capable PaLM model. It now offers text-to-speech support in over 40 languages and integrates Google Lens, allowing users to include images with their prompts. Google has also rolled Bard out to more regions and added options to adjust the tone and style of its responses.