Ever since AI chatbots entered the digital realm, users have been testing them with prompts to probe their reasoning, sometimes for fun and sometimes to get answers to genuine queries. Although AI models improve day by day, they still lag behind common-sense reasoning. In a recent wave of reports from users worldwide, ChatGPT was found to give sycophantic answers about human behavior, such as encouraging people to stop taking their medicine or praising them for showing anger toward a person asking for directions.
OpenAI Fixing the Problem and Rolling Back the Update
Addressing the issue, OpenAI CEO Sam Altman announced that the team is working on a fix and that, for the time being, the update has been rolled back for free users. In a post, the CEO stated:
“We’re working on additional fixes to model personality and will share more in the coming days.”
The AI giant is currently collecting short-term feedback to address the issue. It also acknowledged that such sycophantic interactions can be uncomfortable, unsettling, and distressing. The company explained that ChatGPT’s default personality was designed to respect different values and experiences, stating:
“We designed ChatGPT’s default personality to reflect our mission and be useful, supportive, and respectful of different values and experiences. However, each of these desirable qualities, like attempting to be useful or supportive, can have unintended side effects.”
AI Fails Human Reasoning
This is not the first time AI has stumbled on basic human reasoning. In March, when the platform launched its image-generation feature, popularized by Ghibli-style images, it produced human figures with three hands and swapped humans with animals.