
ChatGPT users are grumbling that the popular chatbot has gotten lazier by refusing to complete some complex requests, and its creator OpenAI is investigating the strange behavior.

Social media platforms such as X, Reddit and even OpenAI's developer forum are riddled with accounts that ChatGPT, a large language model trained on massive troves of internet data, is resisting labor-intensive prompts, such as requests to help the user write code or transcribe blocks of text.

The strange trend has emerged as Microsoft-backed OpenAI faces stiff competition from other firms pursuing generative artificial intelligence products, including Google, which recently released its own Gemini chatbot tool.

Late last month, X user and startup executive Matt Wensing posted screenshots of an exchange in which he asked ChatGPT to list all the weeks between then and May 5, 2024.

The chatbot initially refused, claiming that it couldn't generate an exhaustive list of each individual week, but eventually complied when Wensing insisted.

"GPT has definitely gotten more resistant to doing tedious work," Wensing wrote on Nov. 27. "Essentially giving you part of the answer and then telling you to do the rest."

In a second example, Wensing asked ChatGPT to generate a few dozen lines of code. Instead, the chatbot provided him a template and told him to follow the pattern to complete the task himself.

"Another example where I asked it to extend some code. It would have had to generate perhaps 50 lines. It told me to do it instead," Wensing wrote.

In another viral instance, ChatGPT refused a user's request to transcribe text included in a photo from a page of a book.

OpenAI confirmed last Thursday that it was examining the situation.

"We've heard all your feedback about GPT4 getting lazier! we haven't updated the model since Nov 11th, and this certainly isn't intentional. model behavior can be unpredictable, and we're looking into fixing it," the company posted on X.

The company later clarified its point, stating that "the idea is not that the model has somehow changed itself since Nov 11th."

"It's just that differences in model behavior can be subtle — only a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns," the company said.

ChatGPT and other chatbots have swelled in popularity since last year. However, experts have raised concerns about their tendency to hallucinate, or spit out false and inaccurate information, such as a recent incident in which ChatGPT and Google's Bard chatbot each falsely claimed that Israel and Hamas had reached a ceasefire agreement when no deal had occurred at the time.

A recent study performed by researchers at Long Island University found that ChatGPT incorrectly responded to some 75% of questions about prescription drug usage and gave some responses that would have caused harm to patients.

OpenAI has contended with internal turmoil following the surprise firing and rehiring of Sam Altman as its CEO. A report last week said senior OpenAI employees had raised concerns that Altman was psychologically abusive to staffers before his initial firing.

ChatGPT remains the most popular chatbot of its kind, reaching more than 100 million active weekly users as of November, according to Altman.
