- cross-posted to:
- [email protected]
Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.
No, hallucination is a really good term. The output can be super confident and seemingly correct but still completely made up.
I think delusion might be a better word. You can hallucinate and know it’s not real.
It’s a really bad term because it’s usually associated with a mind, and LLMs are nothing of the sort.
Anthropomorphization is hard to avoid in AI.
Many worthy things are difficult.
But is anthropomorphism of AI particularly worrying?
So is bullshitting. What’s more, only human minds can bullshit.
We anthropomorphize machines all the time; it’s fine.
I’d prefer we start calling all GenAI output hallucinations again. It used to be that way like 10 years ago, but somewhere along the line marketing decided hallucinated truths aren’t “hallucinations”.
And a bull’s anus.
That is just being WRONG.
It is, but it isn’t applicable to at least the glue-on-pizza situation, as the probable source comment has been found on Reddit.
A better use of the term might be how, when you try to get Bing’s image creator to make “Battletech” art, you mostly just get really obvious Warhammer 40k Space Marines and occasionally Iron Maiden album art.
For it to “hallucinate” things, it would have to believe what it’s saying. AI is unable to think, so it cannot hallucinate.
Hallucination is a technical term; it has nothing to do with thinking. The scientific community could have chosen another term to describe the issue, but hallucination captures really well what’s happening.
Huh, I kinda assumed it was a term mostly made up/co-opted by journalists. Are there actual research papers that use that term?