Google chief executive Sundar Pichai has warned against blindly trusting the information produced by AI systems. As he explained in an interview with the BBC, even the most advanced AI models are prone to errors.
His comments highlight a central tension in today's technology landscape: AI offers powerful capabilities, yet remains subject to fundamental flaws.
Pichai urged users to treat AI as one analysis tool among many, not as an infallible source of truth. He stressed the importance of a healthy information ecosystem, reaffirming the continued role of Google Search for obtaining verifiable data.
Pichai emphasized AI's usefulness in creative work while acknowledging its shortcomings on factual accuracy. People, he said, should learn to use AI properly and keep its limitations in mind.
This lesson is especially timely as Google prepares to roll out its next AI system, Gemini 3.0, a project that is already helping the company win back users from rival platform ChatGPT.
Google has also introduced an AI Mode in its search engine, integrating the Gemini chatbot into search results with the aim of creating an interactive search experience akin to consulting a subject-matter expert.
Nevertheless, as BBC research earlier this year showed, many AI chatbots still generate inaccurate summaries of news content. The study examined tools from OpenAI, Microsoft, Google, and Perplexity, and found various errors in their responses.
Pichai attributed this difficulty to the pace of technological change, which far outstrips the time needed to put comprehensive safety and accuracy procedures in place. He reiterated that Google tries to balance bold innovation with responsible stewardship when developing new tools.
To mitigate these risks, Google is increasing its investment in AI safety research. According to Pichai, the company is also deploying technology that can identify whether an image was generated by AI.
Asked about concerns long voiced by Elon Musk that a single entity could gain undue control over the governance of AI, Pichai agreed that no one company should control a technology of this magnitude.
He added that the AI ecosystem now includes many players, which makes such a scenario unlikely.