You know how Google’s new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese wouldn’t slide off (pssst…please don’t do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), the technology that drives AI Overviews, and this feature “is still an unsolved problem.”

  • thefactremains@lemmy.world · 5 months ago

    Why not solve it before training the AI?

    Simply make it clear that this tech is experimental, then provide sources and context with every result. People can make their own assessment.

    • nyan@lemmy.cafe · 5 months ago

      Because a lot of people won’t look at sources even if you serve them up on a silver platter?

        • nyan@lemmy.cafe · 5 months ago

          Yes, but as a solution it’s far inferior to not presenting questionable output to the public at all.

          (There are a few specific AI/LLM types whose output we might be able to “human-proof”—for instance, if we don’t allow image generators to make photorealistic images of any sort for any purpose, they become much more difficult to abuse—but I can’t see how you would do it for search engine adjuncts like this without having a human curate their training sets.)