• The Picard Maneuver@lemmy.world · 9 months ago

    I’ve seen some of those posts, and while I don’t share the rage, I have to admit that it’s funny in an absurdist way.

    These early guardrails on AI are so clunky and will need to be refined for sure.

    • Godric@lemmy.world · 9 months ago

      It’s genuinely hilarious how lazy it is: the companies just append “racially diverse” to prompts at random.

      “Generate an image of a German soldier in 1943” and you get back what looks like a college advertisement, except full of Nazis. It’s ridiculous XD
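
      For what it’s worth, here’s a rough sketch of the kind of lazy prompt rewriting being described, in Python. The modifier list and the coin-flip are guesses at the general approach, not any company’s actual code:

          import random

          # Hypothetical diversity modifiers blindly appended to user prompts,
          # with no check of whether they fit the subject matter.
          MODIFIERS = ["racially diverse", "gender diverse"]

          def rewrite_prompt(prompt: str) -> str:
              # Randomly bolt a modifier onto the end of the prompt, regardless of context.
              if random.random() < 0.5:
                  return f"{prompt}, {random.choice(MODIFIERS)}"
              return prompt

          # e.g. "a German soldier in 1943, racially diverse"
          print(rewrite_prompt("a German soldier in 1943"))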

    • heavy@sh.itjust.works · 9 months ago

      I think this is the correct response, though: it’s absurd, and a symptom of how these generative models work.

      The discussion should highlight for people that these models can be, and often are, wrong. There’s no mechanism to verify factuality or accuracy, and you shouldn’t expect one.

      Instead, this group of people goes to the ol’ playbook and pulls out “wow, minorities are being forced on me again!”, generating silly conspiracies and manufactured outrage.

      Chill, it’s funny, laugh. Seeing minorities in something shouldn’t be the reason (or the example) for outrage.

    • UnspecificGravity@lemmy.world · 9 months ago

      It’s a real challenge. The datasets all have genuine bias built in, and identifying and correcting it is incredibly difficult. I mean, there WERE people of color in historic Europe, lots of them. So you can’t correct this by just making them all white, because that isn’t necessarily more accurate. But yeah, we know that King George wasn’t black.