
Unsafe at any seed — Why ChatGPT and Bing Chat are so good at making things up

A look inside the hallucinating artificial minds of the famous text prediction bots.

Benj Edwards – Apr 6, 2023 3:58 pm UTC

Over the past few months, AI chatbots like ChatGPT have captured the world’s attention due to their ability to converse in a human-like way on just about any subject. But they come with a serious drawback: They can present convincing false information easily, making them unreliable sources of factual information and potential sources of defamation.

Why do AI chatbots make things up, and will we ever be able to fully trust their output? We asked several experts and dug into how these AI models work to find the answers.

"Hallucinations," a loaded term in AI

AI chatbots such as OpenAI’s ChatGPT rely on a type of AI called a “large language model” (LLM) to generate their responses. An LLM is a computer program trained on millions of text sources that can read and generate “natural language” text: language as humans would naturally write or talk. Unfortunately, LLMs can also make mistakes.
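To make the prediction idea concrete, here is a minimal sketch using the open source Hugging Face transformers library and the small GPT-2 model as stand-ins for a commercial LLM (an assumption for illustration; it is not what ChatGPT actually runs on). The model simply continues a prompt with whatever tokens it predicts are most plausible.

```python
# Minimal sketch of LLM-style text prediction. GPT-2 is used here only
# as a stand-in for a much larger commercial model (illustrative assumption).
from transformers import pipeline

# Load a small, publicly available language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "The James Webb Space Telescope is"

# The model extends the prompt by repeatedly predicting the next token,
# whether or not the resulting statement is factually true.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

Nothing in that process checks the output against reality; plausibility is the only criterion, which is where the mistakes described below come from.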

In academic literature, AI researchers often call these mistakes “hallucinations.” But that label has grown controversial as the topic becomes mainstream because some people feel it anthropomorphizes AI models (suggesting they have human-like features) or gives them agency (suggesting they can make their own choices) in situations where that should not be implied. The creators of commercial LLMs may also use hallucinations as an excuse to blame the AI model for faulty outputs instead of taking responsibility for the outputs themselves.

Still, generative AI is so new that we need metaphors borrowed from existing ideas to explain these highly technical concepts to the broader public. In this vein, we feel the term “confabulation,” although similarly imperfect, is a better metaphor than “hallucination.” In human psychology, a “confabulation” occurs when someone’s memory has a gap and the brain convincingly fills in the rest without intending to deceive others. ChatGPT does not work like the human brain, but the term “confabulation” arguably serves as a better metaphor because there’s a creative gap-filling principle at work, as we’ll explore below.

The confabulation problem

It’s a big problem when an AI bot generates false information that can potentially mislead, misinform, or defame. Recently, The Washington Post reported on a law professor who discovered that ChatGPT had placed him on a list of legal scholars who had sexually harassed someone. But it never happened; ChatGPT made it up. The same day, Ars reported on an Australian mayor who allegedly found that ChatGPT claimed he had been convicted of bribery and sentenced to prison, a complete fabrication.

Shortly after ChatGPT’s launch, people began proclaiming the end of the search engine. At the same time, though, many examples of ChatGPT’s confabulations began to circulate on social media. The AI bot has invented books and studies that don’t exist, publications that professors didn’t write, fake academic papers, false legal citations, non-existent Linux system features, unreal retail mascots, and technical details that don’t make sense.

Curious how GPT will replace Google if it gives wrong answers with high confidence.

For example, I asked ChatGPT to give a list of top books on Social Cognitive Theory. Out of the 10 books on the answer, 4 books don’t exist and 3 books were written by different people. pic.twitter.com/b2jN9VNCFv

— Herman Saksono (@hermansaksono), January 16, 2023

And yet despite ChatGPT’s predilection for casually fibbing, counter-intuitively, its resistance to confabulation is why we’re even talking about it today. Some experts note that ChatGPT was technically an improvement over vanilla GPT-3 (its predecessor model) because it could refuse to answer some questions or let you know when its answers might not be accurate.

“A major factor in Chat’s success is that it manages to suppress confabulation enough to make it unnoticeable for many common questions,” said Riley Goodside, an expert in large language models who serves as staff prompt engineer at Scale AI. “Compared to its predecessors, ChatGPT is notably less prone to making things up.”

If used as a brainstorming tool, ChatGPT’s logical leaps and confabulations might lead to creative breakthroughs. But when used as a factual reference, ChatGPT could cause real harm, and OpenAI knows it.

Not long after the model’s launch, OpenAI CEO Sam Altman tweeted, “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. It’s a preview of progress; we have lots of work to do on robustness and truthfulness.” In a later tweet, he wrote, “It does know a lot, but the danger is that it is confident and wrong a significant fraction of the time.”

What’s going on here?
