There’s an idea about “autistic AI” or something, where you give an AI an objective like “get a person from point A to point B as fast as you can,” and the AI goes so fast the g-force kills the person, but the AI counts it as a success because you never told it to keep the person alive.
Though I suppose that’s more of a human error: something we take as a given that a machine will not.
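A minimal sketch of that failure mode (all numbers and names here are made up for illustration): an optimizer that only minimizes travel time will happily pick a lethal acceleration, because survival was never part of the objective.

```python
import math

DISTANCE_M = 100_000   # hypothetical 100 km trip
G = 9.81               # m/s^2
SAFE_G_LIMIT = 5       # rough "survivable" cap, an assumption for the sketch

def travel_time(accel_g: float) -> float:
    # Time to cover the distance from rest at constant acceleration.
    return math.sqrt(2 * DISTANCE_M / (accel_g * G))

candidates = [1, 3, 5, 20, 100]  # acceleration options, in g

# Naive objective: "as fast as you can", nothing else.
naive = min(candidates, key=travel_time)

# Objective with the unstated human constraint made explicit.
aligned = min((a for a in candidates if a <= SAFE_G_LIMIT), key=travel_time)

print(naive)    # 100 — a "success" by the stated objective, fatal in practice
print(aligned)  # 5 — fastest option that the constraint allows
```

The point is that both runs optimize perfectly; only the second one was told what actually matters.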
It’s called the AI alignment problem, and it’s fascinating. If you want to dig deeper into the subject, I highly recommend the ‘Robert Miles AI Safety’ channel on YouTube.
The danger isn’t that it’s smart, the danger is that it’s stupid.
Computers do what people tell them to do, not what people want.
Or more precisely: the danger is that people think it’s smart.