LLMs are acing the MCAT, the bar exam, the SAT, etc., like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions and struggle with creative thinking.
Is this literally proof that standardized tests are not a good measure of intelligence?
Intelligence cannot be measured. It's a reification fallacy. "Intelligence" is a colloquial, subjective term.
If I told you that I had an instrument that could objectively measure beauty, you’d see the problem right away.
But intelligence is the capacity to solve problems. If you can solve problems quickly, you are by definition intelligent.
https://www.merriam-webster.com/dictionary/intelligence
It can be measured by objective tests. It’s not subjective like beauty or humor.
The problem with AI taking these tests is that it has seen and memorized all the previous questions and answers. Many of the tests mentioned are tests of recall, not reasoning: the bar exam, for example.
If any random person studied every previous question and answer, they would do well too. No one would be amazed that an answer key knew all the answers.
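If you wanted to check that empirically, the usual approach is to measure n-gram overlap between the benchmark questions and the training corpus. Here's a minimal sketch, assuming word-level 8-grams; the corpus and question strings are hypothetical placeholders, since real audits run over the model's actual pretraining data:

```python
# Minimal sketch of a train/test contamination check via n-gram overlap.
# `training_corpus` and `benchmark_questions` are hypothetical placeholders.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in `text`."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_rate(question: str, corpus_ngrams: set, n: int = 8) -> float:
    """Fraction of a question's n-grams that also appear in the corpus."""
    q_ngrams = ngrams(question, n)
    if not q_ngrams:
        return 0.0
    return len(q_ngrams & corpus_ngrams) / len(q_ngrams)

training_corpus = "... full text of the training data goes here ..."  # placeholder
benchmark_questions = [
    "A valid contract requires consideration. Which of the following is sufficient?",
]

corpus_ngrams = ngrams(training_corpus)
for q in benchmark_questions:
    rate = contamination_rate(q, corpus_ngrams)
    print(f"{rate:.0%} of this question's 8-grams appear in the training data")
```

A high overlap rate means the model could plausibly answer from recall alone, which is exactly the "answer key" worry above.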
This isn't quite correct. There is the possibility of training-data contamination biasing the results, but models are performing well on things they haven't seen before.
For example, this guy took an IQ test, rewrote the visual questions as natural language questions, and gave the test to various LLMs:
https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq
These are questions with specific wording that the models won't have been trained on, since he wrote them out fresh. Older models get very poor IQ results, but the current SotA model scores 100.
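For what it's worth, administering a verbalized item like that is just a single chat call. A minimal sketch using the OpenAI Python SDK; the model name and the question wording below are my own illustrative assumptions, not the ones from the linked article:

```python
# Minimal sketch of giving a verbalized IQ-test item to an LLM.
# The model name and question wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "A sequence of shapes is described in words: a circle with one dot, "
    "a circle with two dots, a circle with three dots. In the same style, "
    "describe the next shape in the sequence."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; swap in whichever model you test
    messages=[
        {"role": "system", "content": "Answer the puzzle concisely."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```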
Someone engaging with the free version of ChatGPT and thinking "LLMs are dumb" is kind of like talking to a moron human and thinking "humans are dumb." Yes, the free version of ChatGPT scores around a 60 IQ on that test, but it also doesn't represent the cream of the crop.
Maybe, but this is giving the AI a lot of help. No one rewrites visual questions for humans who take IQ tests. That spatial reasoning is part of the test.
In reality, no AI would pass any test, because the first step is writing your name on the paper. Even that is beyond most AIs, because they never have to deal with the real world. They don't actually understand anything.
This isn't correct, and research over the past year has shown, over and over, that it isn't correct.
https://arxiv.org/abs/2310.07582
Just one of the relevant papers you might want to check out before stating things as facts.