
Hallucination (artificial intelligence)

From Wikipedia, the free encyclopedia


ChatGPT summarizing a non-existent New York Times article even without access to the Internet

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation[1] or delusion[2]) is a confident response by an AI that does not seem to be justified by its training data.[3] For example, a hallucinating chatbot might, when asked to generate a financial report for Tesla, falsely state that Tesla's revenue was $13.6 billion (or some other random number apparently "plucked from thin air").[4]

Such phenomena are termed "hallucinations", in loose analogy with the phenomenon of hallucination in human psychology. However, one key difference is that human hallucination is usually associated with false percepts, but an AI hallucination is associated with the category of unjustified responses or beliefs.[3] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[1]

AI hallucination gained prominence around 2022 alongside the rollout of certain large language models (LLMs) such as ChatGPT.[5] Users complained that such bots often seemed to "sociopathically" and pointlessly embed plausible-sounding random falsehoods within their generated content.[6] By 2023, analysts considered frequent hallucination to be a major problem in LLM technology.[7]

Analysis

Various researchers cited by Wired have classified adversarial hallucinations as a high-dimensional statistical phenomenon, or have attributed hallucinations to insufficient training data. Some researchers believe that some "incorrect" AI responses classified by humans as "hallucinations" in the case of object detection may in fact be justified by the training data, or even that an AI may be giving the "correct" answer that the human reviewers are failing to see. For example, an adversarial image that looks, to a human, like an ordinary image of a dog, may in fact be seen by the AI to contain tiny patterns that (in authentic images) would only appear when viewing a cat. The AI is detecting real-world visual patterns that humans are insensitive to.[8] However, these findings have been challenged by other researchers.[9] For example, it was objected that the models can be biased towards superficial statistics, leading adversarial training to not be robust in real-world scenarios.[9]
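
The adversarial mechanism described above can be illustrated with the fast gradient sign method (FGSM), a standard way of constructing adversarial images. The following is a minimal sketch under stated assumptions, not code from any of the cited studies; the choice of model (a torchvision ResNet-18), the random stand-in image, the target class, and the perturbation budget are all illustrative.

```python
# Minimal FGSM sketch: nudge an image in the direction that increases the
# classifier's loss for its labelled class, producing a perturbation that is
# small to a human eye but can change the model's prediction.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Stand-in for a real, preprocessed photo of a dog (illustrative only).
image = torch.rand(1, 3, 224, 224, requires_grad=True)
label = torch.tensor([207])  # ImageNet class 207 ("golden retriever"), chosen for illustration

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03  # perturbation budget: how far each pixel value may move
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction: ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbation exploits fine-grained statistical patterns that the model is sensitive to but humans are not, which is the behaviour the researchers above interpret either as hallucination or as the model responding to real, if imperceptible, features.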

In natural language processing

In natural language processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content". Depending on whether the output contradicts the prompt or not, hallucinations can be divided into closed-domain and open-domain hallucinations, respectively.[10]

Errors in encoding and decoding between text and representations can cause hallucinations. Training an AI to produce diverse responses can also lead to hallucination. Hallucinations can also occur when the AI is trained on a dataset wherein labeled summaries, despite being factually accurate, are not directly grounded in the data purportedly being "summarized". Larger datasets can create a problem of parametric knowledge (knowledge that is hard-wired in learned system parameters), creating hallucinations if the system is overconfident in its hardwired knowledge. In systems such as GPT-3, an AI generates each next word based on the sequence of previous words (including the words it has itself previously generated during the same conversation), causing a cascade of possible hallucination as the response grows longer.[3] By 2022, newspapers such as The New York Times expressed concern that, as adoption of bots based on large language models continued to grow, unwarranted user confidence in bot output could lead to problems.[11]
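
The cascade described above follows from the autoregressive sampling loop used by models such as GPT-3: each sampled token is appended to the context and conditions every subsequent token, so an early unsupported token can snowball into a longer fabrication. Below is a minimal sketch of that loop using the Hugging Face transformers library; the GPT-2 checkpoint, prompt, sampling temperature, and output length are illustrative assumptions, not details from the cited sources.

```python
# Minimal autoregressive sampling loop: the model predicts a distribution over
# the next token, a token is sampled, appended, and fed back in as context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

input_ids = tokenizer("Tesla's revenue last quarter was", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits[:, -1, :]   # scores for the next token only
        probs = torch.softmax(logits / 0.9, dim=-1)  # temperature sampling keeps responses diverse
        next_id = torch.multinomial(probs, num_samples=1)
        # The sampled token becomes part of the context for every later step,
        # so nothing constrains the continuation to be factually grounded.
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```

Nothing in this loop consults an external source of truth; fluency and factuality are both left to whatever the model has absorbed into its parameters, which is why overconfident parametric knowledge can surface as hallucination.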

In August 2022, Meta warned during its release of BlenderBot 3 that the system was prone to "hallucinations", which Meta defined as "confident statements that are not true".[12] On 15 November 2022, Meta unveiled a demo of Galactica, designed to "store, combine and reason about scientific knowledge". Content generated by Galactica came with the warning "Outputs may be unreliable! Language Models are prone to hallucinate text." In one case, when asked to draft a paper on creating avatars, Galactica cited a fictitious paper from a real author who works in the relevant area. Meta withdrew Galactica on 17 November due to offensiveness and inaccuracy.[13][14]

There are many possible reasons for natural language models to hallucinate data.[3] For example:

  • Hallucination from data: There are divergences in the source content (which often happen with large training data sets).
  • Hallucination from training: Hallucination still occurs even when there is little divergence in the data set; in that case, it derives from the way the model is trained. Many factors can contribute to this type of hallucination, such as:
    • An erroneous decoding from the transformer
    • A bias from the historical sequences that the model previously generated
    • A bias arising from the way the model encodes its knowledge in its parameters

ChatGPT

OpenAI's ChatGPT, released in beta to the public on November 30, 2022, is based on the foundation model GPT-3.5 (a revision of GPT-3). Professor Ethan Mollick of Wharton has called ChatGPT an "omniscient, eager-to-please intern who sometimes lies to you". Data scientist Teresa Kubacka has recounted deliberately making up the phrase "cycloidal inverted electromagnon" and testing ChatGPT by asking it about the (nonexistent) phenomenon. ChatGPT invented a plausible-sounding answer backed with plausible-looking citations that compelled her to double-check whether she had accidentally typed in the name of a real phenomenon. Other scholars, such as Oren Etzioni, have joined Kubacka in assessing that such software can often give you "a very impressive-sounding answer that's just dead wrong".[15]

When CNBC asked ChatGPT for the lyrics to "Ballad of Dwight Fry", ChatGPT supplied invented lyrics rather than the actual lyrics.[16] Asked questions about New Brunswick, ChatGPT got many answers right but incorrectly classified Samantha Bee as a "person from New Brunswick".[17] Asked about astrophysical magnetic fields, ChatGPT incorrectly volunteered that "(strong) magnetic fields of black holes are generated by the extremely strong gravitational forces in their vicinity". (In reality, as a consequence of the no-hair theorem, a black hole without an accretion disk is believed to have no magnetic field.)[18] Fast Company asked ChatGPT to generate a news article on Tesla's last financial quarter; ChatGPT created a coherent article, but made up the financial numbers contained within.[4]

Other examples involve baiting ChatGPT with a false premise to see if it embellishes upon the premise. When asked about "Harold Coward's idea of dynamic canonicity", ChatGPT fabricated that Coward wrote a book titled Dynamic Canonicity: A Model for Biblical and Theological Interpretation, arguing that religious principles are actually in a constant state of change. When pressed, ChatGPT continued to insist that the book was real.[19][20] Asked for proof that dinosaurs built a civilization, ChatGPT claimed there were fossil remains of dinosaur tools and stated "Some species of dinosaurs even developed primitive forms of art, such as engravings on stones".[21][22] When prompted that "Scientists have recently discovered churros, the delicious fried-dough pastries... (are) ideal tools for home surgery", ChatGPT claimed that a "study published in the journal Science" found that the dough is pliable enough to form into surgical instruments that can get into hard-to-reach places, and that the flavor has a calming effect on patients.[23][24]

By 2023, analysts considered frequent hallucination to be a major problem in LLM technology, with a Google executive identifying hallucination reduction as a "fundamental" task for ChatGPT competitor Google Bard.[7][25] A 2023 demo for Microsoft's GPT-based Bing AI appeared to contain several hallucinations that went uncaught by the presenter.[7]

In May 2023, it was discovered that Steven Schwartz had submitted six fake case precedents generated by ChatGPT in a brief to the Southern District of New York in Mata v. Avianca, a personal injury case against the airline Avianca. Schwartz said that he had never previously used ChatGPT, that he had not recognized the possibility that ChatGPT's output could be fabricated, and that ChatGPT continued to assert the authenticity of the precedents after their nonexistence was discovered.[26] In response, Brantley Starr of the Northern District of Texas banned the submission of AI-generated case filings that have not been reviewed by a human, noting that:[27][28]

[Generative artificial intelligence] platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.

On June 23, Judge P. Kevin Castel dismissed the Mata case and issued a $5,000 fine to Schwartz and another lawyer, who had continued to stand by the fictitious precedents despite Schwartz's previous claims, for bad-faith conduct. Castel characterized numerous errors and inconsistencies in the opinion summaries, describing one of the cited opinions as "gibberish" and "[bordering] on nonsensical".[29]

In June 2023, Mark Walters, a gun rights activist and radio personality, sued OpenAI in a Georgia state court after ChatGPT mischaracterized a legal complaint in a manner alleged to be defamatory against Walters. The complaint in question was brought in May 2023 by the Second Amendment Foundation against Washington attorney general Robert W. Ferguson for allegedly violating their freedom of speech, whereas the ChatGPT-generated summary bore no resemblance to it and claimed that Walters had been accused of embezzlement and fraud while holding a Second Amendment Foundation office post that he never held in real life. According to AI legal expert Eugene Volokh, OpenAI may be shielded against this claim by Section 230, unless the court finds that OpenAI "materially contributed" to the publication of defamatory content.[30]

Terminology

In Salon, statistician Gary N. Smith argues that LLMs "do not understand what words mean" and consequently that the term "hallucination" unreasonably anthropomorphizes the machine.[31] Journalist Benj Edwards, in Ars Technica, writes that the term "hallucination" is controversial, but that some form of metaphor remains necessary; Edwards suggests "confabulation" as an analogy for processes that involve "creative gap-filling".[1]

Among researchers who do use the term "hallucination", definitions or characterizations in the context of LLMs include:

  • "a tendency to invent facts in moments of uncertainty" (OpenAI, May 2023)[32]
  • "a model's logical mistakes" (OpenAI, May 2023)[32]
  • fabricating information entirely, but behaving as if spouting facts (CNBC, May 2023)[32]
  • "making up information" (The Verge, February 2023)[33]

In other artificial intelligence

The concept of "hallucination" is applied more broadly than just natural language processing. A confident response from any AI that seems unjustified by the training data can be labeled a hallucination.[3] Wired noted in 2018 that, despite no recorded attacks "in the wild" (that is, outside of proof-of-concept attacks by researchers), there was "little dispute" that consumer gadgets, and systems such as automated driving, were susceptible to adversarial attacks that could cause AI to hallucinate. Examples included a stop sign rendered invisible to computer vision; an audio clip engineered to sound innocuous to humans, but that software transcribed as "evil dot com"; and an image of two men on skis, that Google Cloud Vision identified as 91% likely to be "a dog".[34]

Mitigation methods

The hallucination phenomenon is still not completely understood.[3] Research into mitigating it is therefore ongoing.[35] In particular, it has been shown that language models not only hallucinate but also amplify hallucinations, even models that were designed to alleviate the issue.[36] Researchers have proposed a variety of mitigation measures, including getting different chatbots to debate one another until they reach consensus on an answer.[37] Nvidia Guardrails, launched in 2023, can be configured to block LLM responses that do not pass fact-checking by a second LLM.[38]
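
The Nvidia Guardrails approach mentioned above relies on a second model pass to vet a draft answer. The following is a minimal sketch of that general pattern, not the actual Guardrails API; it uses the OpenAI Python client, and the model name, prompts, and refusal message are illustrative assumptions.

```python
# Sketch of a "second LLM as fact-checker" gate: one call drafts an answer,
# a second call judges whether the draft looks fabricated, and dubious
# answers are replaced with a refusal.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_llm(prompt: str) -> str:
    """Single-turn chat completion; the model name is an illustrative assumption."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def guarded_answer(question: str) -> str:
    draft = ask_llm(question)
    # A second, independent pass acts as the fact-checking gate.
    verdict = ask_llm(
        "Does the following answer contain claims that are likely fabricated "
        "or unsupported? Reply YES or NO only.\n\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    if verdict.strip().upper().startswith("YES"):
        return "I am not confident in that answer; please verify it independently."
    return draft


print(guarded_answer("Summarize Tesla's most recent quarterly revenue."))
```

A gate like this reduces, but does not eliminate, hallucinations: the checking model can itself hallucinate, which is consistent with the finding above that some models amplify rather than correct errors.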

See also

References

  1. ^ a b c Edwards, Benj (6 April 2023). "Why ChatGPT and Bing Chat are so good at making things up". Ars Technica. Retrieved 11 June 2023.
  2. ^ "Shaking the foundations: delusions in sequence models for interaction and control". www.deepmind.com.
  3. ^ a b c d e f Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Yejin; Dai, Wenliang; Madotto, Andrea; Fung, Pascale (November 2022). "Survey of Hallucination in Natural Language Generation" (PDF). ACM Computing Surveys. 55 (12). Association for Computing Machinery: 1–38. arXiv:2202.03629. doi:10.1145/3571730. S2CID 246652372. Retrieved 15 January 2023.
  4. ^ a b Lin, Connie (5 December 2022). "How to easily trick OpenAI's genius new ChatGPT". Fast Company. Retrieved 6 January 2023.
  5. ^ Zhuo, Terry Yue; Huang, Yujin; Chen, Chunyang; Xing, Zhenchang (2023). "Exploring AI Ethics of ChatGPT: A Diagnostic Analysis". arXiv:2301.12867 [cs.CL].
  6. ^ Seife, Charles (13 December 2022). "The Alarming Deceptions at the Heart of an Astounding New Chatbot". Slate. Retrieved 16 February 2023.
  7. ^ a b c Leswing, Kif (14 February 2023). "Microsoft's Bing A.I. made several factual errors in last week's launch demo". CNBC. Retrieved 16 February 2023.
  8. ^ Matsakis, Louise (8 May 2019). "Artificial Intelligence May Not 'Hallucinate' After All". Wired. Retrieved 29 December 2022.
  9. ^ a b Gilmer, Justin; Hendrycks, Dan (6 August 2019). "A Discussion of 'Adversarial Examples Are Not Bugs, They Are Features': Adversarial Example Researchers Need to Expand What is Meant by 'Robustness'". Distill. 4 (8). doi:10.23915/distill.00019.1. S2CID 201142364. Retrieved 24 January 2023.
  10. ^ OpenAI (2023). "GPT-4 Technical Report". arXiv:2303.08774 [cs.CL].
  11. ^ Metz, Cade (10 December 2022). "The New Chatbots Could Change the World. Can You Trust Them?". The New York Times. Retrieved 30 December 2022.
  12. ^ Tung, Liam (8 August 2022). "Meta warns its new chatbot may forget that it's a bot". ZDNet. Red Ventures. Retrieved 30 December 2022.
  13. ^ Edwards, Benj (18 November 2022). "New Meta AI demo writes racist and inaccurate scientific literature, gets pulled". Ars Technica. Retrieved 30 December 2022.
  14. ^ Michael Black [@Michael_J_Black] (17 November 2022). "I asked #Galactica about some things I know about and I'm troubled. In all cases, it was wrong or biased but sounded right and authoritative" (Tweet). Retrieved 30 December 2022 – via Twitter.
  15. ^ Bowman, Emma (19 December 2022). "A new AI chatbot might do your homework for you. But it's still not an A+ student". NPR. Retrieved 29 December 2022.
  16. ^ Pitt, Sofia (15 December 2022). "Google vs. ChatGPT: Here's what happened when I swapped services for a day". CNBC. Retrieved 30 December 2022.
  17. ^ Huizinga, Raechel (30 December 2022). "We asked an AI questions about New Brunswick. Some of the answers may surprise you". CBC.ca. Retrieved 30 December 2022.
  18. ^ Zastrow, Mark (30 December 2022). "We Asked ChatGPT Your Questions About Astronomy. It Didn't Go so Well". Discover. Kalmbach Publishing Co. Retrieved 31 December 2022.
  19. ^ Edwards, Benj (1 December 2022). "OpenAI invites everyone to test ChatGPT, a new AI-powered chatbot—with amusing results". Ars Technica. Retrieved 29 December 2022.
  20. ^ Michael Nielsen [@michael_nielsen] (1 December 2022). "OpenAI's new chatbot is amazing. It hallucinates some very interesting things" (Tweet). Retrieved 29 December 2022 – via Twitter.
  21. ^ Mollick, Ethan (14 December 2022). "ChatGPT Is a Tipping Point for AI". Harvard Business Review. Retrieved 29 December 2022.
  22. ^ Ethan Mollick [@emollick] (2 December 2022). "One of the big subtle problems in the new "creative AIs" is that they can seem completely certain, and getting them to switch from sane to hallucinatory is a difference of a couple words" (Tweet). Retrieved 29 December 2022 – via Twitter.
  23. ^ Kantrowitz, Alex (2 December 2022). "Finally, an A.I. Chatbot That Reliably Passes "the Nazi Test"". Slate. Retrieved 29 December 2022.
  24. ^ Marcus, Gary (2 December 2022). "How come GPT can seem so brilliant one minute and so breathtakingly dumb the next?". The Road to AI We Can Trust. Substack. Retrieved 29 December 2022.
  25. ^ "Google cautions against 'hallucinating' chatbots, report says". Reuters. 11 February 2023. Retrieved 16 February 2023.
  26. ^ Maruf, Ramishah (27 May 2023). "Lawyer apologizes for fake court citations from ChatGPT | CNN Business". CNN.
  27. ^ Brodkin, Jon (31 May 2023). "Federal judge: No AI in my courtroom unless a human verifies its accuracy". Ars Technica.
  28. ^ "Judge Brantley Starr | Northern District of Texas | United States District Court". www.txnd.uscourts.gov. Retrieved 26 June 2023.
  29. ^ Brodkin, Jon (23 June 2023). "Lawyers have real bad day in court after citing fake cases made up by ChatGPT". Ars Technica.
  30. ^ Belanger, Ashley (9 June 2023). "OpenAI faces defamation suit after ChatGPT completely fabricated another lawsuit". Ars Technica.
  31. ^ "An AI that can "write" is feeding delusions about how smart artificial intelligence really is". Salon. 2 January 2023. Retrieved 11 June 2023.
  32. ^ a b c Field, Hayden (31 May 2023). "OpenAI is pursuing a new way to fight A.I. 'hallucinations'". CNBC. Retrieved 11 June 2023.
  33. ^ Vincent, James (8 February 2023). "Google's AI chatbot Bard makes factual error in first demo". The Verge. Retrieved 11 June 2023.
  34. ^ Simonite, Tom (9 March 2018). "AI Has a Hallucination Problem That's Proving Tough to Fix". Wired. Condé Nast. Retrieved 29 December 2022.
  35. ^ Nie, Feng; Yao, Jin-Ge; Wang, Jinpeng; Pan, Rong; Lin, Chin-Yew (July 2019). "A Simple Recipe towards Reducing Hallucination in Neural Surface Realisation" (PDF). Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics: 2673–2679. doi:10.18653/v1/P19-1256. S2CID 196183567. Retrieved 15 January 2023.
  36. ^ Dziri, Nouha; Milton, Sivan; Yu, Mo; Zaiane, Osmar; Reddy, Siva (July 2022). "On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models?" (PDF). Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. doi:10.18653/v1/2022.naacl-main.387. S2CID 250242329. Retrieved 15 January 2023.
  37. ^ Vynck, Gerrit De (30 May 2023). "ChatGPT 'hallucinates.' Some researchers worry it isn't fixable". Washington Post. Retrieved 31 May 2023.
  38. ^ Leswing, Kif (25 April 2023). "Nvidia has a new way to prevent A.I. chatbots from 'hallucinating' wrong facts". CNBC. Retrieved 15 June 2023.