• AdolfSchmitler@lemmy.world · 4 months ago

      There’s an idea sometimes called “autistic AI” or something like that, where you give an AI an objective like “get a person from point A to point B as fast as you can,” and the AI accelerates so hard that the g-force kills the person, yet it counts the trip as a success because you never told it to keep the person alive.

      Though I suppose that’s more of a human error: something we take as a given, but a machine will not.
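      A minimal Python sketch of the mis-specified objective described above (the function names, thresholds, and candidate values are all illustrative assumptions, nothing from the comment itself):

      ```python
      # Toy illustration: an optimizer that maximizes only the stated objective
      # ("as fast as possible") and ignores the unstated constraint
      # ("keep the passenger alive"), because survival was never in the reward.

      FATAL_G = 9.0  # assumed rough threshold where sustained g-force becomes lethal

      def travel_time(distance_m: float, accel_g: float) -> float:
          """Time to cover distance_m from rest at constant acceleration."""
          accel = accel_g * 9.81  # convert g to m/s^2
          return (2 * distance_m / accel) ** 0.5

      def naive_objective(distance_m: float, accel_g: float) -> float:
          """Reward exactly as specified: faster is better, nothing else counts."""
          return -travel_time(distance_m, accel_g)

      # The "AI" searches candidate acceleration profiles and picks the best score.
      candidates_g = [1, 3, 9, 50, 200]
      best = max(candidates_g, key=lambda g: naive_objective(1000, g))

      print(f"chosen acceleration: {best} g")          # -> 200 g, the fastest option
      print(f"passenger survives: {best < FATAL_G}")   # -> False: never asked for
      ```

      Adding the survival constraint to the objective is exactly the kind of alignment step the reply below is pointing at.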

      • BlueMagma@sh.itjust.works · 4 months ago

        It’s called the AI alignment problem, and it’s fascinating. If you want to dig deeper into the subject, I highly recommend the ‘Robert Miles AI Safety’ channel on YouTube.