LLMs are acing the MCAT, the bar exam, the SAT, etc. like they’re nothing. At this point their performance is superhuman. However, they’ll often trip on very simple common-sense questions, and they’ll struggle with creative thinking.

Is this proof that standardized tests are not a good measure of intelligence?

  • steventrouble@programming.dev
    8 months ago

    I’ve yet to see an LLM write, rewrite, then rewrite again its output.

    It’s because we (ML peeps) literally prevent them from deleting their own output. It’d be like if we stuck you in a room and only let you interact with the outside world using a keyboard that has no backspace.

    Seriously, try it. Try writing your reply without using the delete button, or backspace, or the arrow keys, or the mouse. See how much better you do than an LLM.

    It’s hard! To say that an LLM is not capable of thought just because it makes mistakes sometimes is to ignore the immense difficulty of the problem we’re asking it to solve.

    • starman2112@sh.itjust.works
      8 months ago

      To me it isn’t just the lack of an ability to delete its own inputs, I mean outputs, it’s the fact that they work by little more than pattern recognition. Contrast that with humans, who use pattern recognition as well as an understanding of their own ideas to find the words they want to use.

      Man, it is super hard writing without hitting backspace or rewriting anything. Autocorrect helped a ton, but I hate the way this comment looks lmao

      This isn’t to say that I don’t think a neural network can be conscious, or self aware, it’s just that I’m unconvinced that they can right now. That is, that they can be. I’m gonna start hitting backspace again after this paragraph