LLMs are solving the MCAT, the bar exam, the SAT, etc. like they're nothing. At this point their performance is superhuman. However, they'll often trip on super simple common-sense questions, and they'll struggle with creative thinking.
Is this literally proof that standardized tests are not a good measure of intelligence?
OP picked standardized tests that only require memorization because they have zero idea what a real IQ test like the WAIS is like.
Also, consider how those IQ tests work. You kind of have to go in "blind" to get an accurate result. And an LLM can't do anything "blind," because you have to train it.
A chatbot can't even take a real IQ test. If we trained a chatbot to take a real IQ test, it would be a pointless test.
Actually, you can give chatbots a real IQ test, and the range of scores falls into roughly the same spread as how they rank on other measures, with the leading model scoring at 100:
https://www.maximumtruth.org/p/ais-ranked-by-iq-ai-passes-100-iq
Nobody is a blank slate. Everyone has knowledge from their past experience and instincts from their genetics. AIs are the same: they've been trained on various things, just as humans have experienced various things, but both can be equally blind to the contents of the test itself.
No, they wouldn’t.
Because real IQ tests aren't just multiple-choice exams.
You would have to train it to handle the different tasks, and training it on the tasks would make it better at them, raising its score.
I don't know if the issue is that you don't understand how IQ tests work, or that you don't understand what LLMs can do.
But it's probably both rather than one or the other.