For the past two weeks, anecdotes have been floating around the internet (mostly on Reddit and OpenAI’s developer forums) about ChatGPT becoming lazy. It sounds like a delicious piece of gossip, almost as scandalous as the one about Bing wanting to love and to be alive, doesn’t it?
By OpenAI’s own admission, this might just be true. But it’s not because ChatGPT has decided that it’s too good to obey humans and fulfill their impertinent requests.
The Problem
It appears that people who use the AI chatbot—to write code for them, for example—are having a harder time getting it to follow their instructions properly.
Users are reporting that they have to submit a prompt several times before ChatGPT produces a satisfactory reply, where a single prompt used to be enough to get an appropriate response.
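Some users describe working around this by simply re-sending the same prompt until a complete answer comes back. A minimal sketch of such a retry loop, assuming the OpenAI Python SDK (v1) and a deliberately naive, hypothetical check for a deferred answer:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def ask_with_retries(prompt: str, max_attempts: int = 3) -> str:
    """Re-send the same prompt until the reply stops deferring, or we give up."""
    reply = ""
    for _ in range(max_attempts):
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        reply = response.choices[0].message.content
        # Hypothetical heuristic: a reply that tells the user how to solve
        # the problem themselves, or leaves a placeholder, counts as incomplete.
        if "you can" not in reply.lower() and "# your code here" not in reply:
            break
    return reply
```

This only illustrates the behavior users describe, not a recommended fix; each retry costs another API call.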
Some are even saying that ChatGPT has started replying with suggestions for how the user might solve a problem themselves, rather than offering the solution directly. Many are speculating that OpenAI is trying to cut down on costly computing cycles by making the chatbot “lazier”.
we’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it
— ChatGPT (@ChatGPTapp) December 8, 2023
In other words, the LLM-based tool might be trying to use as few tokens as possible to answer user prompts, thereby giving shorter answers.
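If that speculation is right, the effect is at least measurable, since replies can be compared by token count. A minimal sketch using OpenAI’s tiktoken tokenizer (the sample replies below are invented for illustration):

```python
import tiktoken

# GPT-4's tokenizer, as exposed by OpenAI's tiktoken library.
encoding = tiktoken.encoding_for_model("gpt-4")

def count_tokens(reply: str) -> int:
    """Count the tokens a reply consumes under GPT-4's tokenizer."""
    return len(encoding.encode(reply))

# Invented examples: a full solution versus a deferring, "lazier" reply.
full_reply = "Here is the complete function, with error handling included: ..."
terse_reply = "You could iterate over the list and handle the errors yourself."
print(count_tokens(full_reply), count_tokens(terse_reply))
```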
However, OpenAI claims that it has not updated the model since November 11th this year, and says it is looking into fixing any such “unpredictable” behavior.
The developers also noted that “a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns”.