
Why Do We Dream?

2022-07-02 17:09:40 | Diary

 


Musings on memories and Darwin machines

In a recent post, I looked at consciousness from the perspective of the prefrontal synthesis model. In this view, our experience of the world is mediated by neuronal ensembles, collections of neurons that fire in synchrony when activated. ObjectNEs, as these circuits are called, organize our sensations according to the principle that ‘what fires together wires together.’ By their very nature, ObjectNEs consolidate the blooming, buzzing confusion around us, condensing our model of the world into a manageable number of perceptual building blocks.

ObjectNEs are active during sensory integration, but they can also be ‘pinged’ deliberately by the prefrontal cortex, the planning center at the front of the brain. The purposeful activation of a sequence of ObjectNEs is what we mean when we talk about a “train of thought”. ObjectNEs can also light up spontaneously in unexpected combinations, a process that we experience as dreaming or hallucinating. The signal to permit (disinhibit) spontaneous activation has been identified by sleep researchers as stemming from a posterior region of the brain at the border of the lateral parietal and occipital lobes, the so-called dream hot zone.

This gives us three scenarios by which we become conscious of ObjectNEs:

1) During the synthesis of new percepts from multichannel sensory input

2) During prefrontal synthesis, in which our command center activates a sequence of ObjectNEs as part of problem-solving behavior

3) During the spontaneous activation of ObjectNEs when they become disinhibited by the dream hot zone (primarily in REM sleep).

— —

The formation of ObjectNEs is directional, by which I mean that the effort expended is justified by the benefits to the organism. Specifically, these circuits compress data and make new building blocks available for modeling the outside world.

Likewise, the purposeful activation of sequences of ObjectNEs by the prefrontal cortex is directional. The energy required is well spent because the resulting train of thought assists us in problem solving.

The bizarre nature of dreams and hallucinations, however, seems at first glance to be something else entirely. If dreams reflect the spontaneous activation of multiple ObjectNEs in essentially random combinations, the question becomes, “Why does the brain bother?” Surely there are better uses for our energy reserves, particularly while recovering from the stresses of the day during sleep.

— —

It occurs to me that the answer to this riddle might be found by looking at directional processes in nature that incorporate randomness. And the candidate at the top of the list is Darwinian evolution. Evolution is essentially a three-part set of instructions for increasing the order (decreasing entropy) in an open system.

Step 1 is to generate random variations of some seed phenomenon.

Step 2 is to apply a filter (natural selection) to these variations to eliminate all but the fittest results (the definition of ‘fittest’ depending on the context in which the Darwin machine is operating). And

Step 3 is to repeat the first two steps, using the filtered results as the seeds for the next virtuous cycle.
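The three steps above can be written as a short loop. The string target and single-character mutation operator below are toy inventions chosen only to make the cycle runnable; any notion of variation and fitness could be substituted.

```python
import random

def darwin_machine(seed, fitness, mutate, population=20, keep=5, generations=2000):
    """Run the three-step cycle: vary, filter, repeat."""
    pool = [seed]
    for _ in range(generations):
        # Step 1: generate random variations of the current seeds
        # (the seeds themselves are retained, so fitness never regresses).
        variants = pool + [mutate(random.choice(pool)) for _ in range(population)]
        # Step 2: apply the filter, keeping only the 'fittest' variants.
        variants.sort(key=fitness, reverse=True)
        # Step 3: the survivors seed the next cycle.
        pool = variants[:keep]
    return max(pool, key=fitness)

# Toy context: evolve a blank string toward a target phrase.
TARGET = "what fires together wires together"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice("abcdefghijklmnopqrstuvwxyz ") + s[i + 1:]

random.seed(0)
best = darwin_machine(" " * len(TARGET), fitness, mutate)
print(best)
```

Note that nothing in the loop knows where it is going; the direction comes entirely from the filter in Step 2.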

Looking at dreams from the perspective of Darwin machines provides multiple explanatory benefits.

1) Why do we dream? We dream because the brain periodically engages in a perceptual ordering process that can be described as a Darwin machine. When ObjectNEs are activated, we have a conscious experience, and if we are asleep at the time, we refer to this afterward as dreaming. Essentially, we are along for the ride.

2) Why are dreams anarchic? Step 1 in this recursive cycle is to generate random variations of the connections between our stored percepts.

3) Why are we biased to dream about our recent past or our preoccupations? The generation of random connections need not be entirely random. ObjectNEs could be weighted by temporal proximity or emotional valence, making them more or less likely to be activated.

4) What is the precise criterion for ‘fittest’ applied as the filter in Step 2 of this cycle? I don’t know.

Perhaps a poet or a neurologist reading this could suggest an answer.

If I were to guess, I would say that just as the body flushes metabolic waste from the spaces between neurons during sleep, the Darwinian dream machine probably does something similar on an informational level. A brain defrag, if you will…
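Point 3 above can be sketched in a few lines, assuming (hypothetically) that each stored percept carries a recency score and an emotional valence, and that its chance of spontaneous activation is a simple function of both. The percept names and the weighting formula are illustrative, not drawn from any published model.

```python
import random

# Hypothetical percept store: each ObjectNE carries a recency score
# (hours since last activation) and an emotional valence magnitude.
percepts = {
    "office":    {"hours_ago": 6,    "valence": 0.2},
    "argument":  {"hours_ago": 3,    "valence": 0.9},
    "breakfast": {"hours_ago": 14,   "valence": 0.1},
    "childhood": {"hours_ago": 8760, "valence": 0.7},
}

def activation_weight(p, recency_scale=24.0):
    # Biased, not fully random: recent and emotionally charged
    # percepts are more likely to fire spontaneously.
    recency = 1.0 / (1.0 + p["hours_ago"] / recency_scale)
    return recency + p["valence"]

def dream_sample(n=3):
    # Draw n percepts for a dream fragment, weighted but still random.
    names = list(percepts)
    weights = [activation_weight(percepts[name]) for name in names]
    return random.choices(names, weights=weights, k=n)

random.seed(1)
print(dream_sample())
```

Under this weighting, yesterday's argument is far more likely to surface than last week's breakfast, yet nothing is ever excluded outright, which matches the mix of the recent and the remote in actual dream reports.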

Toward a testable hypothesis: if dreams reflect the operation of a Darwin machine, the process should be open to investigation. Record the first set of dreams of the night, then the second set, and compare. The percepts in the second set should represent variations on a subset of those that appeared in the first.

In addition to providing support for the Darwin machine hypothesis, this approach could be used to infer the nature of the criterion for ‘fittest’ previously mentioned.
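If dream reports were coded as sets of percept tags (the tags below are invented for illustration), the comparison could be reduced to a simple carryover score:

```python
def carryover(first_dreams, second_dreams):
    """Fraction of second-set percepts already present as seeds in the
    first set; a high score would support the Darwin-machine view."""
    first = set().union(*first_dreams)
    second = set().union(*second_dreams)
    if not second:
        return 0.0
    return len(second & first) / len(second)

# Hypothetical dream reports from one night, coded as sets of percept tags.
first_set  = [{"train", "ocean", "dog"}, {"office", "dog", "staircase"}]
second_set = [{"dog", "ocean", "ticket"}, {"staircase", "ocean"}]

print(carryover(first_set, second_set))
```

The same score computed across many nights, and contrasted with a shuffled baseline, would also hint at which percepts survive the filter, i.e. at the elusive criterion for 'fittest'.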


Could Dreams be the Gateway to Understanding Consciousness in People and Chatbots?

2022-07-01 13:26:14 | Diary


The first truly sentient AI will be a chatty self-driving car that decides where to go on vacation, and dreams about the trip beforehand.

Recent strides in dream research may have quietly led to a simple and practical definition of consciousness.

Researchers investigating neural correlates of dreaming have discovered a hotspot (posterior cortical hot zone), which when activated is associated with dream recall on awakening. Conversely, suppression of this region is predictive of reports of dreamless sleep.

This finding is consistent with a theory known as Prefrontal Synthesis. In this model, percepts abstracted from our interaction with the environment are represented as distributed neural ensembles, circuits defined by the synchronous firing of constituent neurons according to the principle of ‘what fires together wires together’.

When an ObjectNE, as these ensembles are known, is activated, we become conscious of the corresponding percept. When the prefrontal cortex activates a sequence of ObjectNEs in the course of planning or problem solving, we become conscious of our own thoughts. And spontaneous activation of ObjectNEs by the posterior cortical hot zone (primarily during REM sleep) leads to the more freewheeling experience we know as dreaming.

The interesting thing here is that a concise definition of consciousness just falls out of this approach. In a nutshell, consciousness becomes synonymous with experience. We say we are conscious when we are interacting with the environment, albeit always indirectly through our stored percepts:

1) If we are experiencing the formulation of new ObjectNEs through the consolidation of sensations, we call this being conscious of our surroundings.

2) If a sequence of ObjectNEs is activated by the prefrontal cortex, this is experienced as reflecting on something or being conscious of our thoughts.

3) And if the ObjectNEs are spontaneously activated and recombined during sleep by the posterior cortical hot zone, we call this (altered) state of consciousness dreaming.

All this has interesting ramifications for the recent debate over whether language-parsing algorithms should be called conscious. There is no question that such programs build up the equivalent of ObjectNEs, as weighted neural networks arise through interaction with big data. By analogy, one could argue that during the training process, the program is converting sensation to perception and is thus conscious of its (informational) surroundings. Likewise, as these percepts are referenced and organized during external queries, one could argue that the chatbot is potentially conscious of its own (millisecond-long) train of thought.

Taken together, these two phenomena may be responsible for the impression popularized by some researchers that AI is already ‘a little conscious’.

If we accept the idea that consciousness equates to ‘interaction with the environment’ (always mediated by internal percepts), additional mechanisms would be required before AI could be called truly conscious in the ordinary sense. Specifically:

1) Before a program could be considered conscious of its surroundings (‘awake’), it would need to be in continuous training mode, consistently forming new perceptions and consolidating them with previous learning. Not impossible, by the way: think of the marriage of a self-driving car and a chatbot.

2) There would need to be a revolution in the understanding and coding of executive functions (volition) to replace the user query system with a self-querying design. The resulting potential for autonomous reflection would be one step towards an ongoing stream of consciousness…and maybe an awareness of self.

3) Spontaneous reactivation of ObjectNEs during downtime, perhaps through some Darwinian analog to random mutation and natural selection (with an eye to increased storage efficiency?), would lay the groundwork for ‘electric dreams’.
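Requirement 2 can be caricatured in a few lines: an agent whose next query is generated from its own previous answer rather than supplied by a user. The `respond` method below is a stand-in for a real language model, and the whole class is a hypothetical sketch, not a proposal for how such a system would actually be built.

```python
import random

class ReflectiveAgent:
    """Toy sketch of a self-querying design: the agent, not an
    external user, chooses each successive prompt."""

    def __init__(self, percepts):
        self.percepts = list(percepts)
        self.log = []  # running record: a crude 'stream of consciousness'

    def respond(self, query):
        # Stand-in for a real language model call.
        return f"thoughts about {query}"

    def reflect(self, steps=3):
        # Seed the chain from a stored percept, then let each answer
        # become the next query, with no user in the loop.
        query = random.choice(self.percepts)
        for _ in range(steps):
            answer = self.respond(query)
            self.log.append((query, answer))
            query = answer
        return self.log

random.seed(2)
agent = ReflectiveAgent(["vacation", "traffic", "charging station"])
print(len(agent.reflect()))
```

The point of the sketch is only structural: once the query loop closes on itself, reflection becomes ongoing rather than on-demand.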

Note that equating consciousness with experience does nothing to explain qualia (the redness of red or the taste of an apple). A thorny question for another day…

For more reading: Siclari F, Baird B, Perogamvros L, Bernardi G, LaRocque JJ, Riedner B, Boly M, Postle BR, Tononi G. The neural correlates of dreaming. Nat Neurosci. 2017;20(6):872–878. doi: 10.1038/nn.4545. PMID: 28394322; PMCID: PMC5462120.