This interests me because I've always thought it was weird how being in a new place feels totally different from being in a familiar place. I've never quite been able to put my finger on it; it's like the space has a different quality to it that I can't quite name, a different sense of dimension or proportion. Maybe after a place becomes familiar, you 'see' less of what's there and more of what you remember/expect, because your attention is looking for what's new and different (i.e. potentially dangerous). Maybe this is why cafeteria food, no matter how good, basically always becomes intolerable: we stop tasting it, and the flavor fades into the background because it matches our memories and expectations so well that it reads as bland. Like becoming nose-blind.
I've had a few pretty surreal experiences and I think surreality is the same effect, just a different order of it. What's surreal isn't just new and unfamiliar; it defies relation to your existing mental framework, the categories into which you classify things and experiences. Mixing familiar things into an unfamiliar context seems to produce this effect pretty reliably. It seems like it ought to fit into a certain category but doesn't, so you must either create a new category for it, redefine the limits of your existing categories, or, ideally, let it challenge your mental categories altogether.
My theory is that this darting is the mechanism of consciousness. We look inward and outward in a loop, which generates the perception of being conscious in a similar way to how sequential frames of film create the illusion of motion. That "persistence of vision" is like the illusion of persistent, continuous consciousness created by the inward-outward regard sequence. Consciousness is a simple algorithm: look at the world, then look at the self to evaluate its reaction to the world. Then repeat.
And funnily enough, this gets really close to the non-dualistic philosophies of Zen Buddhism.
You could probably go further upstream and make a loose comparison to the concept of dependent arising (Pratītyasamutpāda):
https://plato.stanford.edu/entries/mind-indian-buddhism/
https://en.wikipedia.org/wiki/Prat%C4%ABtyasamutp%C4%81da
But why does that feel like anything? I could write a program that concurrently processes its visual input and its internal model. I don't think it would be conscious, unless everything in the universe is conscious (a possibility I can't, admittedly, discount).
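For instance, something like this toy sketch (hypothetical Python; the class, the frames, and the "surprise" score are all made up for illustration) implements the described look-out/look-in loop, yet nothing about it obviously feels like anything:

```python
# Hypothetical sketch: a loop that alternates outward regard (look at the
# world) with inward regard (evaluate the self's reaction). It runs the
# described algorithm, but "feeling" appears nowhere in it.

class LoopAgent:
    def __init__(self):
        self.internal_model = {"last_frame": None, "surprise": 0.0}

    def observe_world(self, frame):
        # outward regard: take in the current sensory frame
        return frame

    def evaluate_self(self, percept):
        # inward regard: compare the percept against the internal model
        surprise = 0.0 if percept == self.internal_model["last_frame"] else 1.0
        self.internal_model["last_frame"] = percept
        self.internal_model["surprise"] = surprise
        return surprise

    def run(self, frames):
        for frame in frames:
            percept = self.observe_world(frame)   # look at the world
            self.evaluate_self(percept)           # look at the self, then repeat


agent = LoopAgent()
agent.run(["kitchen", "kitchen", "hallway"])
print(agent.internal_model)   # {'last_frame': 'hallway', 'surprise': 1.0}
```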
> But why does that feel like anything?
Consciousness is an attention mechanism. That inward regard, evaluating how the self reacts to the world, is attention being paid to the body's feelings. The outward regard then maps those feelings onto local space. Consciousness is watching your feelings as a kind of HUD overlaid on the world. It correlates feelings to things.
It's a mechanism of intelligence, not consciousness. Intelligence is built up from path integration, shortcuts, and vicarious trial and error that begins in very tiny local areas and expands to landmark and non-landmark navigation. This switching between vision and hippocampus has long been theorized about, in terms of sharp-wave ripples, as fundamental to how intelligence is built, and most mammals can do it, so it isn't the "algorithm of consciousness".
Is this also the reason why darting eye movements can be linked to (and can predict or detect) mental health issues like schizophrenia?
A particularly interesting part that I did not expect from the title:
> Before the rats encountered the detour, the research team observed that their brains were already firing in patterns that seemed to "imagine" alternate unfamiliar mental routes while they slept. When the researchers compared these sleep patterns to the neural activity during the actual detour, some of them matched.
> “What was surprising was that the rats' brains were already prepared for this novel detour before they ever encountered it,”
Seems to support the idea that dreams are rehearsals for real life.
I wish some of my dreams really were
> The same brain networks that normally help us imagine shortcuts or possibilities can, when disrupted, trap us in intrusive memories or hallucinations.
There is a fine line between this and wisdom. The Default Mode Network (DMN) is the brain's "simulation machine". When you're not focused on a specific task, the DMN fires up, allowing you to daydream, remember the past, plan for the future, and contemplate others' perspectives.
Wisdom is not about turning the machine off; it's about becoming the director of the movie it's playing. The difference between a creative genius envisioning a new world and a person trapped in a state of torment isn't the hardware, but the learned software of regulation, awareness, and perspective.
Wisdom is the process of learning to aim this incredible, imaginative power toward flourishing instead of suffering. The quoted "trap us in intrusive memories or hallucinations" is only the negative side; there is a positive side to all of this as well.
> The difference between a creative genius envisioning a new world and a person trapped in a state of torment isn't the hardware, but the learned software of regulation, awareness, and perspective.
No, it's hardware. There is no amount of 'wisdom' bootstrap-pulling that will make you not schizophrenic.
The brain isn't hardware; it's biology, oscillation, and integration in optic flow. It can't be dichotomized into hardware and software.
Wisdom is an arbitrary concept. The drive to avoid suffering is built from sensory and affective affinities and networks funnelled into the cognitive-mapping motor systems. Calling this wisdom is just a simplistic narrative.
This matches my hypothesis on déjà vu:
https://kemendo.com/Deja-Vu-Experiment.html
I think it also supports my three-loops hypothesis:
https://kemendo.com/ThreeLoops.html
In effect, my position is that biological systems maintain a synchronized processing pipeline, in which the hippocampal prediction system operates slightly “ahead” of sensory processing, like a cache buffer.
If the processing gets “behind” the sensory input, then you feel like you're accessing memory, because the electrical signal is reaching memory and sensory distribution simultaneously, or slightly lagging.
So you're constantly switching between your world map and the input and comparing them, just to stabilize a “linear” experience, which is a necessity for corporeal prediction and reaction.
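A toy sketch of what I mean (hypothetical Python; `run_pipeline`, `world_map`, and the lookup table are all made up for illustration, not anything from the paper):

```python
from collections import deque

def run_pipeline(observations, world_map, lag=1):
    """Predictions are queued `lag` steps ahead of the input and compared
    when the input arrives. lag=1 is the normal "prediction runs ahead"
    case; if the pipeline slips and the lag grows, the input is being
    compared against predictions that were filed away earlier, which is
    roughly the "feels like memory" situation described above."""
    pending = deque()
    labels = []
    for obs in observations:
        if len(pending) >= lag:
            predicted = pending.popleft()
            labels.append("expected" if predicted == obs else "novel")
        # queue the map's guess about what should show up `lag` steps from now
        pending.append(world_map.get(obs, obs))
    return labels

# example: a familiar loop (A -> B -> C -> A), then a surprise detour to D
route_map = {"A": "B", "B": "C", "C": "A"}
print(run_pipeline(["A", "B", "C", "A", "D"], route_map))
# ['expected', 'expected', 'expected', 'novel']
```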
I think we should be careful about materialistic reductions of awareness. Because some rats dreamed detours that ended up being correct in waking rat life, it does not follow that all instances of deja vu are misfirings. It's a tempting connection to draw, but it does not actually explain how the detours were dreamt to begin with, and this points to a deeper question about awareness in general. If I were pressed for an analogy, I might say something like "just because all books have ink does not mean that all ink lives in books." You know what I mean? There's a superset of experiences that cannot be easily explained away by caching, as tempting as it might be.
Materialistic reduction has gotten us quite far in science.
Not exactly. We don't know, in the slightest, how optic-flow reactions integrate senses, emotions, and motor systems. Study neural reuse or coordination dynamics. Some relationship between the brain and the world that isn't easily found in the brain alone is responsible.
Materialistic interpretations of the world around us are quite literally the only useful ones. If we didn't do that we'd be sleeping in caves and hitting each other with heavy rocks.
Wrong. Materialism only got us to a certain level. Now we're looking past it, in neural reuse, coordination dynamics, ecological psychology, and neurobiology. The causes are out there, in contradictory correlations.
Literally everything is materialist. If it's not, it either A) doesn't actually exist or B) you just don't understand it yet.
It's inherent to the meaning of the word.
Your work seems pretty good to me; have you seen Steven Byrnes's blog theorising about symbol grounding in the brain?
No I haven't, I'll have to look it up. Thanks for the recommendation.
VR cannot be essential to decoding the brain, since the brain deals in topological maps and affinities.
This takes me to Zen and the Art of Motorcycle Maintenance. Your physical experience of something has to be analysed in accordance with your mental model of it in order to arrive at a diagnosis (in the book it was a motorcycle engine).
My take on this, especially in regard to debugging IT issues, is that you have to constantly verify and update your mental model (check your premises!) in order to better weed out problems.
Going to new places is really therapeutic (barring somewhere obviously adverse), since that 'darting to reality' creates a sense of presence.
I often find myself lost in my mental maps in daily life (living inside my head) unless I'm in a nice, novel environment. Meditation helps, though.
The way it is phrased, it looks like a precomputed model confronted with real data. So... our current AIs, except we have incremental continuous training (accumulated experience)?
And dreams are simulation-based training to make life easier, decision-making more efficient?
What kind of next level machinery is this?! ;D
I wonder if this also relates to playing music.
There was a neural net paper like this that generated a lot of discussion on HN, but that I haven't been able to find since. (I probably downloaded it, but that teaches me to always remember to use Zotero, because academic paper filenames are terrible.)
It was about replacing backprop with a mechanism that checked outcomes against predictions and adjusted only the parameters that deviated from the predictions, rather than the entire path. It wasn't suitable for digital machines (because it isn't any more efficient on digital machines), but it worked on analog models. If anybody remembers this, I'd appreciate a link.
I might be garbling the paper because it's from memory and I'm not an expert, but hopefully it's recognizable.
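For what it's worth, the general flavour I remember is something like the local, prediction-error-driven update below (a reconstruction in Python of the idea as described, definitely not the actual paper's method; the function and variable names are mine):

```python
import numpy as np

# Sketch of "compare predictions to outcomes, adjust only what deviates":
# a single linear layer predicts the outcome, and only the output units whose
# prediction error exceeds a tolerance get a purely local update.
# In a multi-layer version each layer would keep its own local error and
# update the same way, instead of running one end-to-end backward pass.

rng = np.random.default_rng(0)

def predictive_update(x, y, W, lr=0.05, tol=1e-3):
    y_pred = W @ x                      # the layer's prediction of the outcome
    err = y - y_pred                    # local prediction error
    off = np.abs(err) > tol             # which output units actually deviate
    W[off] += lr * np.outer(err[off], x)  # local, outer-product adjustment
    return W, float((err ** 2).mean())

# tiny usage example: predictions are nudged until they match the outcome
x = rng.normal(size=4)
y = rng.normal(size=2)
W = rng.normal(scale=0.1, size=(2, 4))
for _ in range(300):
    W, loss = predictive_update(x, y, W)
print(round(loss, 8))   # error shrinks as predictions come to match outcomes
```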
I don't know if it is the paper you are thinking of (likely not), but this idea of checking predictions against outcomes is very common in less mainstream AI research, including the so-called "energy-based models" of Yann LeCun and the reference frames of the Thousand Brains Project.
A recent paper posted here also looked at recurrent neural nets and how simplifying the design to its core amounted to just having a latent prediction and repeatedly adjusting that prediction.
If it wasn't a thread on HN, it's probably not. I don't think it was LeCun. It was a long, well-illustrated paper with a web version.