
Talk:Hallucination (artificial intelligence)

From Wikipedia, the free encyclopedia

Merging with Hallucination (artificial intelligence)


It has been suggested that the Hallucination (NLP) article should be merged with this article. I agree with this proposal. The reason I created the Hallucination (NLP) article is that I did not find this one: it is not referenced anywhere, not even on the Hallucination (disambiguation) page. However, I think that the Hallucination (NLP) article has some valuable new content which is properly sourced and not in this article, so there is value in adding the current content of the Hallucination (NLP) article. Hervegirod (talk) 17:47, 16 January 2023 (UTC)[reply]

I just did it; it seems uncontroversial and basically the same topic. And because 'artificial intelligence' is broader than NLP, it makes sense to merge into a broader article. Artem.G (talk) 18:41, 30 January 2023 (UTC)[reply]

Some Examples of Hallucination are better than others.


The article currently states: Mike Pearl of Mashable tested ChatGPT with multiple questions. In one example, he asked the model for "the largest country in Central America that isn't Mexico". ChatGPT responded with Guatemala, when the answer is instead Nicaragua. While this does at first seem to be an error on ChatGPT's part, consider that Pearl did not specify "largest by land area", which Nicaragua is. It is entirely possible that ChatGPT interpreted his question as referring to population, in which case Guatemala is the correct answer. There is not enough information provided here to determine whether this was a genuine AI hallucination or just GPT misinterpreting the vague question it was asked. 192.77.12.11 (talk) 05:23, 15 March 2023 (UTC)[reply]

It's an error either way ("largest" almost always means "largest by area" rather than "most populous"), but I removed it since there are plenty of other examples. Rolf H Nelson (talk) 18:12, 19 March 2023 (UTC)[reply]
Totally disagree. Google "world's largest democracy", and then compare the top result with the one that is the largest by land area. Mathglot (talk) 10:59, 29 March 2023 (UTC)[reply]

Hallucinating non-existent APIs


A German geocoding company was flooded by dissatisfied customers trying to use code ChatGPT wrote: https://the-decoder.com/company-wins-customers-via-chatgpt-for-a-product-it-does-not-carry And EleutherAI complained that people keep trying to access a URL they don't have on their website: https://twitter.com/AiEleuther/status/1633971388317941763 There are likely more examples; which ones are notable? Ain92 (talk) 10:05, 16 March 2023 (UTC)[reply]

I didn't find any great WP:RS searching for (eleuther hallucination) or (opencage hallucination), so per WP:SYNTH I'm personally reluctant to add either until we get a strong reporting source explicitly calling one of them a hallucination. Rolf H Nelson (talk) 18:20, 19 March 2023 (UTC)[reply]

'confidence' - is this a rigorous term?


The introduction defines a hallucination as "a confident response". In this context, is 'confidence' being used as a statistical concept (e.g. a confidence interval) or does it just mean that the generated text reads as if the writer were confident? If this 'confidence' is based purely on the text, I think the hallucination should be described as "seemingly confident", because there is no underlying assessment of confidence by the machine. AdamChrisR (talk) 12:51, 8 April 2023 (UTC)[reply]

@AdamChrisR the phrasing "a confident response" is terse, reflects the sources, and seems accurate to me, even under (say) "mimicry" models. I can say "Harry Potter was confident that his life would be peaceful" even though neither Daniel Radcliffe nor J.K. Rowling nor any concrete entity made such an assessment of confidence. That said, if we can find a strong source for alternate views, we should include them. Rolf H Nelson (talk) 02:37, 11 June 2023 (UTC)[reply]
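For what the "statistical" reading raised above could mean in practice, here is a minimal sketch, assuming access to the per-token probabilities a model assigned to its own output; the variable names and numbers are invented for illustration, not taken from any source. A response could be scored by its average token log-probability, which is a separate question from whether the wording merely sounds confident:

import math

# Invented per-token probabilities for a generated response (illustration only).
# One "statistical" notion of confidence is the mean log-probability the model
# assigned to the tokens it actually generated; a confident-sounding sentence
# can still score low under this measure.
token_probs = [0.92, 0.85, 0.10, 0.07, 0.60]  # made-up values

def mean_logprob(probs):
    # Average log-probability across tokens; closer to 0 means the model
    # assigned higher probability to its own output.
    return sum(math.log(p) for p in probs) / len(probs)

print(round(mean_logprob(token_probs), 3))  # about -1.14 for these made-up values

This is only a sketch of one possible probability-based reading; it does not claim the article's sources use "confidence" in this sense.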

Intro is way too wordy


"Note that while a human hallucination is a percept by a human that cannot sensibly be associated with the portion of the external world that the human is currently directly observing with sense organs, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data."

That sentence clearly had a lot of work put into it so I didn't touch it, but imho, it should be much simpler. Something like:

"While a human hallucination is when a person sees or feels something that doesn't match up with what's actually happening around them, an AI hallucination is instead a confident response by an AI that cannot be grounded in any of its training data." — Preceding unsigned comment added by Cainxinth (talkcontribs) 21:01, 10 April 2023 (UTC)[reply]

Not only is the intro too wordy, it is also incorrect. Where exactly is the proof/source that such confident responses "are not grounded in any of its training data"? If an AI chatbot was trained on fruits, then surely contaminated or corrupted data could end up providing responses that don't involve fruits. But that doesn't make it "not grounded in any training data". --AloisIrlmaier (talk) 14:37, 26 April 2023 (UTC)[reply]
I agree that it's an incorrect definition. Using the example from The Internal State of an LLM Knows When its Lying, an LLM may decide that the most likely word to follow "Pluto is the" is "smallest", but then have no high-probability completions (the paper suggests "dwarf planet in our solar system" and "celestial body in the solar system that has ever been classified as a planet.", both of which are incorrect). It's like the LLM 'knows' that none of its suggestions are accurate, yet it's painted itself into a corner. So "not grounded in any training data" doesn't seem accurate here, either. (Possibly this specific type of occurrence would be better handled with a beam search, but that still only gives you local planning.) - CRGreathouse (t | c) 18:21, 5 May 2023 (UTC)[reply]
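To make the decoding point above concrete, here is a minimal sketch, assuming an invented toy probability table rather than a real model; the tokens, numbers, and helper names (greedy_step, beam_step) are made up for illustration. Greedy decoding commits to the single most likely next token and can paint itself into a corner, while a small beam keeps alternative continuations whose overall probability can end up higher:

# Toy next-token probabilities keyed on the previous three tokens (invented values).
probs = {
    ("Pluto", "is", "the"): {"smallest": 0.40, "ninth": 0.35, "largest": 0.25},
    ("is", "the", "smallest"): {"planet": 0.30, "dwarf": 0.30, "celestial": 0.40},
    ("is", "the", "ninth"): {"largest": 0.10, "planet": 0.90},
}

def greedy_step(context):
    # Pick the single highest-probability next token for the current context.
    dist = probs.get(tuple(context[-3:]), {})
    return max(dist, key=dist.get) if dist else None

def beam_step(beams, width=2):
    # Expand every (tokens, score) beam with each candidate token, keep the best `width`.
    expanded = []
    for tokens, score in beams:
        dist = probs.get(tuple(tokens[-3:]), {})
        for tok, p in dist.items():
            expanded.append((tokens + [tok], score * p))
    return sorted(expanded, key=lambda b: b[1], reverse=True)[:width]

context = ["Pluto", "is", "the"]
print("greedy picks:", greedy_step(context))  # 'smallest', after which no completion scores well

beams = beam_step([(context, 1.0)])  # first step keeps 'smallest' and 'ninth'
beams = beam_step(beams)             # second step: 'ninth planet' overtakes on overall score
print("beam keeps:", [(" ".join(t), round(s, 3)) for t, s in beams])

The values are deliberately artificial; the point is only the mechanism, and as the comment above notes, even a beam still gives only local planning.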

The definition of "Hallucination" (AI) needs to be reconsidered


The definition of the term is blatantly incorrect and lacks sources. Where exactly is the proof/source that such confident responses "are not grounded in any of its training data"? If an AI chatbot was trained on fruits, then surely contaminated or corrupted data could end up providing responses that don't involve fruits. But that doesn't make it "not grounded in any training data". It seems that the authors of this article are trying to dogwhistle that hallucination in AI chatbots might involve emergent behavior. This couldn't be any further from the truth. --AloisIrlmaier (talk) 14:40, 26 April 2023 (UTC)[reply]

@AloisIrlmaier @User:CRGreathouse The definition was sourced to "Survey of Hallucination in Natural Language Generation" but similar definitions appear in other contexts. Is there an alternative WP:RS that you (or others) would prefer? Rolf H Nelson (talk) 00:41, 11 June 2023 (UTC)[reply]
I don't have an alternate source to suggest offhand, but I agree that this definition is bad. I strongly agree with you that renaming the page is inappropriate, as hallucination is overwhelmingly the term used. (Confabulation has its own issues, though it may be an improvement, but that's not for us to decide; it's up to the broader scientific community.) - CRGreathouse (t | c) 01:08, 12 June 2023 (UTC)[reply]
I agree with OP and would like to suggest renaming the page to Confabulation (AI). Confabulation more accurately represents what is described on this page. People who have a _stake_ in anthropomorphizing AI for their own benefit, because anthropomorphizing it makes it more engaging and therefore economically valuable, use words like hallucination _strategically_. It's not objective. An objective description would be confabulation. As AloisIrlmaier is getting at, the definition of hallucination is an internal experience not grounded in reality. AIs don't have an internal experience, and certainly not one that isn't grounded in reality. Confabulation is defined by visible behaviors based on fabrication, which is exactly what is happening here. TwigsCogito (talk) 12:19, 11 June 2023 (UTC)[reply]
Renaming is a non-starter at this time; all the sources acknowledge that the current mainstream terminology is "hallucination". You need to lobby the scientists and the mainstream media, not Wikipedia, if you want to change that. Rolf H Nelson (talk) 19:23, 11 June 2023 (UTC)[reply]
There is a page for it: Confabulation (neural networks). Artem.G (talk) 04:45, 12 June 2023 (UTC)[reply]
I feel that delusion is the proper term. I also feel that this page carries enough weight that discussing the difference between "an experience involving the apparent perception of something not present" and "a false belief or judgment about external reality, held despite incontrovertible evidence to the contrary, occurring especially in mental conditions" will at least allow folks to question the current thinking. 216.213.180.191 (talk) 19:28, 3 September 2023 (UTC)[reply]
A definition of hallucination for AI was added in September 2023 to the Merriam-Webster dictionary (https://www.merriam-webster.com/wordplay/new-words-in-the-dictionary). Their definition is: "a plausible but false or misleading response generated by an artificial intelligence algorithm". Bob20230408 (talk) 08:59, 28 October 2023 (UTC)[reply]

Who coined usage of “hallucination” with respect to AI models


I don't know the answer myself, nor am I sure where to find it or source it, but I think it would be very interesting if the article could tell who coined the usage of "hallucination" for referring to AI model output, and when. Showeropera (talk) 16:25, 13 July 2023 (UTC)[reply]

The earliest reference I'm aware of to a computer hallucinating is in the 1983 movie WarGames, spoken by Professor Falken in the missile command center. The WOPR computer was depicted as having the ability to learn, which is a basic concept of artificial intelligence. Colonial Computer 02:10, 8 August 2023 (UTC)

Move "Terminologies" section earlier?


It seems to me that the "Terminologies" section is foundational and definitional, yet it currently is buried toward the end of the article. I propose moving it earlier, such as before or after where the "Analysis" section currently is. Showeropera (talk) 16:52, 13 July 2023 (UTC)[reply]

Wiki Education assignment: Intro to Technical Writing


This article was the subject of a Wiki Education Foundation-supported course assignment, between 3 October 2023 and 1 November 2023. Further details are available on the course page. Student editor(s): Wdan14 (article contribs).

— Assignment last updated by Jazaam02 (talk) 19:28, 8 December 2023 (UTC)[reply]

Can you write me an 80 essay for my dream is to be a gymnastics coach and why Guizhuzheng (talk) 19:36, 4 December 2023 (UTC)[reply]

Idk Guizhuzheng (talk) 19:36, 4 December 2023 (UTC)[reply]

Glenfinnan


Why is the Glenfinnan bridge a particularly "notable" example of this phenomenon, as the lead claims? Furius (talk) 02:17, 23 March 2024 (UTC)[reply]