When WE hallucinate, it's because our internal predictive models are flying off the rails: filling in the blanks based on assumptions rather than referencing concrete sensory information, and generating results that conflict with reality.
Is it really? You make it sound like this is a proven fact.
I believe that’s the direction the scientific community is moving, based on watching this Kyle Hill video.
Here is an alternative Piped link(s):
this Kyke Hill video
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
I know I’m responding to a bot, but… how does a PipedLinkBot turn “Kyle Hill” into “Kyke Hill”? More AI hallucinations?
OP’s comment has a pencil icon in the top right; looks like it was edited.