Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

  • FaceDeer@fedia.io · 5 months ago

    Yes, but the AI isn’t generating a response containing false information. It is accurately summarizing the information it was given by the search result. The search result does contain false information, but the AI has no way to know that.

    If you tell an AI, “Socks are edible. Create a recipe for me that includes socks,” and the AI goes ahead and makes a recipe for sock soufflé, that’s not a hallucination and the AI has not failed. All these people reacting in astonishment are completely misunderstanding what’s going on here. The AI was told to summarize the search results it was given, and it did so.
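
    To make the scenario concrete, here is roughly what it looks like as an API call - a minimal sketch, assuming the OpenAI Python client purely for illustration (any chat model you point it at behaves the same way):

    ```python
    # Minimal sketch (not anyone's real setup): the false claim lives in the prompt itself,
    # so a model that writes a sock recipe is being faithful to its input, not hallucinating.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any instruction-following chat model; the prompt is the point, not the model
        messages=[
            {"role": "system", "content": "Socks are edible."},  # the falsehood is handed to the model as context
            {"role": "user", "content": "Create a recipe for me that includes socks."},
        ],
    )

    print(response.choices[0].message.content)  # a sock recipe: grounded in what it was told
    ```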

    • OpenStars@discuss.online · 5 months ago

      “which contains false or misleading information presented as fact” (emphasis added) - the definition does not say how the misinformation was derived, only that it is in fact misinformation.

      Perhaps it was meant humorously - e.g. if “Socks are edible” is a band name. Or perhaps someone is legitimately that dumb, that they believe that socks are genuinely edible. Or perhaps they were cooking up a recipe for maliciously harming someone by giving them intestinal upset. Or… are socks edible, if you cook them in an acidic substance that breaks apart their fabric?

      If, e.g., you got cancer and were going through chemo, but someone came to visit you, gave you COVID, and you died, was that “their fault” if they believed that COVID was merely a conspiracy theory? Perhaps… or perhaps it was your own fault, especially if you were aware that this has happened to multiple people before and now you are just the latest casualty (bc you presumed that, despite them doing it to others, they would never do it to you). Legalities of murder and blame aside, should we believe AI now that we know - regardless of how or why - it presents false information?

      No, these “hallucinations” or “mirages” or whatever someone calls them make the AI unreliable. Actually, I think hallucination is a good name: the model cannot distinguish fact from fiction itself, and therefore it cannot be trusted when it relates that info to you in a confident-sounding manner.

      • FaceDeer@fedia.io · 5 months ago

        “Hallucination” is a technical term in machine learning. These are not hallucinations.

        It’s like being annoyed by mosquitos and so going to a store to ask for bird repellant. Mosquitos are not birds, despite sharing some characteristics, so trying to fight off birds isn’t going to help you.

        • OpenStars@discuss.online · 5 months ago

          I am not sure what you mean. E.g., https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence) says:

          In natural language processing, a hallucination is often defined as “generated content that appears factual but is ungrounded”. The main cause of hallucination from data is source-reference divergence… When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to the provided source.

          E.g., I continued your provided example with the case where “socks are edible” is a band name, but the output still ended up in a cooking context.

          There is a section on https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)#Terminologies, but the issue seems far from settled that “hallucination” is somehow the wrong word. And it is not entirely illogical, since AI, like humans, necessarily has a similar tension between novelty and creativity, i.e. going beyond either of our training to deal with new circumstances.

          I suspect that the term is here to stay. But I am nowhere close to an authority and could definitely be wrong :-). Mostly I am saying that you seem to be arguing a niche viewpoint, not entirely without merit obviously, but one that we here in the Fediverse may not be as equipped to banter back and forth on except in the most basic of capacities. :-)

          • FaceDeer@fedia.io · 5 months ago

            No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

            In your quoted text:

            When a model is trained on data with source-reference (target) divergence, the model can be encouraged to generate text that is not necessarily grounded and not faithful to the provided source.

            Emphasis added. The provided source in this case would be telling the AI that socks are edible, and so if it generates a recipe for how to cook socks the output is faithful to the provided source.

            A hallucination is when you train the AI on a certain set of facts in its training data and its output then makes up new facts that were not in that training data. For example, if I’d trained an AI on a bunch of recipes, none of which included socks, and then I asked it for a recipe and it gave me one with socks in it, that would be a hallucination. The sock recipe came out of nowhere; I didn’t tell it to make it up, and it didn’t glean it from any other source.

            In this specific case, what’s going on is that the user does a web search for something, the search engine comes up with some web pages that it thinks are relevant, and then the content of those pages is shown to the AI and it is told “write a short summary of this material.” When the content that the AI is being shown literally has a recipe for socks in it (or glue-based pizza sauce, in the real-life example that everyone’s going on about), the AI is not hallucinating when it gives you that recipe. It is generating a grounded and faithful summary of the information that it was provided with.
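
            In rough pseudocode, the shape of that pipeline is something like the toy sketch below (the function bodies and the page text are stand-ins I made up to show where the bad claim enters; this is not Google’s actual code):

            ```python
            # Toy sketch of "search, then summarize what the search returned".
            # Nothing here is Google's implementation; it only illustrates the flow described above.

            def search(query: str) -> list[str]:
                """Stand-in for the search step: pretend these are the top result pages."""
                return [
                    "Forum post: To keep cheese from sliding off pizza, "
                    "mix about 1/8 cup of non-toxic glue into the sauce."
                ]

            def summarize(pages: list[str], query: str) -> str:
                """Stand-in for the Overview step: a real system would hand the pages to an LLM
                with the instruction 'write a short summary of this material'."""
                return f"Summary for {query!r}: " + " ".join(pages)

            if __name__ == "__main__":
                query = "how can I make my cheese stick to my pizza better?"
                pages = search(query)           # retrieval: the glue suggestion arrives here,
                print(summarize(pages, query))  # so a faithful summary repeats it - grounded, not hallucinated
            ```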

            The problem is not the AI here. The problem is that you’re giving it wrong information, and then blaming it when it accurately uses the information that it was given.

            • OpenStars@discuss.online · 5 months ago

              Now who is anthropomorphizing? It’s not about “blame” so much as needing words to describe the event. When the AI cannot be relied upon, bc it was insufficiently trained to be able to distinguish truth from fiction - which btw many humans struggle with these days too - that is not its fault, but it would be our fault if we in turn relied upon it as a source of authoritative knowledge merely bc its output was presented in a confident-sounding manner.

              No, my example is literally telling the AI that socks are edible and then asking it for a recipe.

              Wait… while it’s true that that doesn’t sound like a hallucination, what does it have to do with this discussion? The OP wasn’t about running an AI model in this direct manner; it was about doing Google searches, where the results are already precomputed. It does not become a “hallucination” until whoever asked for socks to be treated as edible tries to pass those results off as applicable in a wider context - where socks are, generally speaking, considered inedible - when they would not be.

              • FaceDeer@fedia.io · 5 months ago

                Wait… while it’s true that that doesn’t sound like a hallucination, what does it have to do with this discussion?

                Because that’s exactly what happened here. When someone Googles “how can I make my cheese stick to my pizza better?”, Google does a web search that comes up with various relevant pages. One of the pages has some information in it that includes the suggestion to use glue in your pizza sauce. The Google Overview AI is then handed the text of that page and told “write a short summary of this information.” And the Overview AI does so, accurately and without hallucination.

                “Hallucination” is a technical term in LLM parlance. It means something specific, and the thing that’s happening here does not fit that definition. So the fact that my socks example is not a hallucination is exactly my point. This is the same thing that’s happening with Google Overview, which is also not a hallucination.