Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • yukijoou@lemmy.blahaj.zone · 5 months ago

    for it to “hallucinate” things, it would have to believe in what it’s saying. ai is unable to think - so it cannot hallucinate

    • Jrockwar@feddit.uk · 5 months ago

      Hallucination is a technical term. Nothing to do with thinking. The scientific community could have chosen another term to describe the issue, but hallucination describes really well what’s happening.

      • yukijoou@lemmy.blahaj.zone · 5 months ago

        huh, i kinda assumed it was a term made up/taken by journalists mostly, are there actual research papers on this using that term?