1
Grossberg S. How children learn to understand language meanings: a neural model of adult-child multimodal interactions in real-time. Front Psychol 2023; 14:1216479. PMID: 37599779; PMCID: PMC10435915; DOI: 10.3389/fpsyg.2023.1216479.
Abstract
This article describes a biological neural network model that can be used to explain how children learn to understand language meanings about the perceptual and affective events that they consciously experience. This kind of learning often occurs when a child interacts with an adult teacher to learn language meanings about events that they experience together. Multiple types of self-organizing brain processes are involved in learning language meanings, including processes that control conscious visual perception, joint attention, object learning and conscious recognition, cognitive working memory, cognitive planning, emotion, cognitive-emotional interactions, volition, and goal-oriented actions. The article shows how all of these brain processes interact to enable the learning of language meanings to occur. The article also contrasts these human capabilities with AI models such as ChatGPT. The current model is called the ChatSOME model, where SOME abbreviates Self-Organizing MEaning.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, Boston, MA, United States
2
Sato R, Shimomura K, Morita K. Opponent learning with different representations in the cortico-basal ganglia pathways can develop obsession-compulsion cycle. PLoS Comput Biol 2023; 19:e1011206. PMID: 37319256; PMCID: PMC10306209; DOI: 10.1371/journal.pcbi.1011206.
Abstract
Obsessive-compulsive disorder (OCD) has been suggested to be associated with impairment of model-based behavioral control. Meanwhile, recent work suggested a shorter memory trace for negative than for positive prediction errors (PEs) in OCD. We explored relations between these two suggestions through computational modeling. Based on the properties of cortico-basal ganglia pathways, we modeled the human as an agent combining a successor representation (SR)-based system that enables model-based-like control with an individual representation (IR)-based system that hosts only model-free control, with the two systems potentially learning from positive and negative PEs at different rates. We simulated the agent's behavior in the environmental model used in the recent work, which describes the potential development of an obsession-compulsion cycle. We found that the dual-system agent could develop an enhanced obsession-compulsion cycle, similarly to the agent with memory trace imbalance in the recent work, if the SR- and IR-based systems learned mainly from positive and negative PEs, respectively. We then simulated the behavior of such an opponent SR+IR agent in the two-stage decision task, in comparison with an agent having only SR-based control. Fitting the agents' behavior with the model weighting model-based and model-free control, developed in the original two-stage task study, yielded smaller weights of model-based control for the opponent SR+IR agent than for the SR-only agent. These results reconcile the previous suggestions about OCD, i.e., impaired model-based control and memory trace imbalance, raising the novel possibility that opponent learning in model(SR)-based and model-free controllers underlies obsession-compulsion.
Our model cannot explain the behavior of OCD patients in punishment, rather than reward, contexts. This limitation could be resolved if opponent SR+IR learning also operates in the recently revealed non-canonical cortico-basal ganglia-dopamine circuit for threat/aversiveness (rather than reward) reinforcement learning; an aversive-SR + appetitive-IR agent could indeed develop obsession-compulsion if the environment is modeled differently.
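The core asymmetry the abstract describes, namely value systems that learn at different rates from positive versus negative prediction errors, can be sketched in a few lines. This is an illustrative cartoon of asymmetric PE updates only, with function and parameter names of our choosing; it is not the paper's SR+IR model.

```python
import random

def asymmetric_q_update(q, reward, alpha_pos, alpha_neg):
    """Update one value estimate with different learning rates for
    positive and negative prediction errors (PEs)."""
    pe = reward - q
    alpha = alpha_pos if pe > 0 else alpha_neg
    return q + alpha * pe

# Illustrative run: an agent that learns strongly from positive PEs
# and weakly from negative PEs overestimates a 50/50 reward source.
random.seed(0)
q = 0.0
for _ in range(10000):
    reward = 1.0 if random.random() < 0.5 else 0.0
    q = asymmetric_q_update(q, reward, alpha_pos=0.1, alpha_neg=0.01)
print(round(q, 2))  # settles well above the true mean of 0.5
```

The fixed point where positive and negative updates balance lies at alpha_pos * p * (1 - q) = alpha_neg * (1 - p) * q, which for these rates is about 0.91 rather than 0.5, showing how asymmetric learning biases the learned value.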
Affiliation(s)
- Reo Sato
- Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
- Kanji Shimomura
- Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
- Kenji Morita
- Physical and Health Education, Graduate School of Education, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
3
Grossberg S. Toward Understanding the Brain Dynamics of Music: Learning and Conscious Performance of Lyrics and Melodies With Variable Rhythms and Beats. Front Syst Neurosci 2022; 16:766239. PMID: 35465193; PMCID: PMC9028030; DOI: 10.3389/fnsys.2022.766239.
Abstract
A neural network architecture models how humans learn and consciously perform musical lyrics and melodies with variable rhythms and beats, using brain design principles and mechanisms that evolved earlier than human musical capabilities, and that have explained and predicted many kinds of psychological and neurobiological data. One principle is called factorization of order and rhythm: Working memories store sequential information in a rate-invariant and speaker-invariant way to avoid using excessive memory and to support learning of language, spatial, and motor skills. Stored invariant representations can be flexibly performed in a rate-dependent and speaker-dependent way under volitional control. A canonical working memory design stores linguistic, spatial, motoric, and musical sequences, including sequences with repeated words in lyrics, or repeated pitches in songs. Stored sequences of individual word chunks and pitch chunks are categorized through learning into lyrics chunks and pitches chunks. Pitches chunks respond selectively to stored sequences of individual pitch chunks that categorize harmonics of each pitch, thereby supporting tonal music. Bottom-up and top-down learning between working memory and chunking networks dynamically stabilizes the memory of learned music. Songs are learned by associatively linking sequences of lyrics and pitches chunks. Performance begins when list chunks read word chunk and pitch chunk sequences into working memory. Learning and performance of regular rhythms exploits cortical modulation of beats that are generated in the basal ganglia. Arbitrary performance rhythms are learned by adaptive timing circuits in the cerebellum interacting with prefrontal cortex and basal ganglia. The same network design that controls walking, running, and finger tapping also generates beats and the urge to move with a beat.
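The working-memory design sketched above, in which a sequence is stored as a graded activation pattern and then performed by competitive selection at any rate, can be illustrated with a toy primacy-gradient store. This is a generic competitive-queuing cartoon with our own function names, not Grossberg's Item-Order-Rank model itself.

```python
def store_sequence(items, decay=0.8):
    """Encode a sequence as a primacy gradient: earlier items get
    higher activation. Repeated items are allowed because each
    occurrence keeps its own (item, activation) node."""
    return [(item, decay ** pos) for pos, item in enumerate(items)]

def perform_sequence(gradient):
    """Competitive queuing: repeatedly choose the most active node,
    emit its item, then suppress (delete) that node. This reads the
    stored order back out, independently of performance speed."""
    nodes, out = list(gradient), []
    while nodes:
        k = max(range(len(nodes)), key=lambda j: nodes[j][1])
        out.append(nodes[k][0])
        del nodes[k]
    return out

# Works even with repeated items, as in song lyrics.
print(perform_sequence(store_sequence(["la", "la", "land"])))
```

Because the stored gradient is rate-invariant and the read-out loop can be paced arbitrarily, the sketch mirrors the factorization of order (the gradient) from rhythm (when each selection is allowed to happen).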
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Department of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
4
Ferreira F, Wojtak W, Sousa E, Louro L, Bicho E, Erlhagen W. Rapid Learning of Complex Sequences With Time Constraints: A Dynamic Neural Field Model. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2020.2991789.
5
Grossberg S. A Canonical Laminar Neocortical Circuit Whose Bottom-Up, Horizontal, and Top-Down Pathways Control Attention, Learning, and Prediction. Front Syst Neurosci 2021; 15:650263. PMID: 33967708; PMCID: PMC8102731; DOI: 10.3389/fnsys.2021.650263.
Abstract
All perceptual and cognitive circuits in the human cerebral cortex are organized into layers. Specializations of a canonical laminar network of bottom-up, horizontal, and top-down pathways carry out multiple kinds of biological intelligence across different neocortical areas. This article describes what this canonical network is and notes that it can support processes as different as 3D vision and figure-ground perception; attentive category learning and decision-making; speech perception; and cognitive working memory (WM), planning, and prediction. These processes take place within and between multiple parallel cortical streams that obey computationally complementary laws. The interstream interactions that are needed to overcome these complementary deficiencies mix cell properties so thoroughly that some authors have noted the difficulty of determining what exactly constitutes a cortical stream and the differences between streams. The models summarized herein explain how these complementary properties arise, and how their interstream interactions overcome their computational deficiencies to support effective goal-oriented behaviors.
Affiliation(s)
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Departments of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering, Center for Adaptive Systems, Boston University, Boston, MA, United States
6
Grossberg S. A Path Toward Explainable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action. Front Neurorobot 2020; 14:36. PMID: 32670045; PMCID: PMC7330174; DOI: 10.3389/fnbot.2020.00036.
Abstract
Biological neural network models whereby brains make minds help to understand autonomous adaptive intelligence. This article summarizes why the dynamics and emergent properties of such models for perception, cognition, emotion, and action are explainable, and thus amenable to being confidently implemented in large-scale applications. Key to their explainability is how these models combine fast activations, or short-term memory (STM) traces, and learned weights, or long-term memory (LTM) traces. Visual and auditory perceptual models have explainable conscious STM representations of visual surfaces and auditory streams in surface-shroud resonances and stream-shroud resonances, respectively. Deep Learning is often used to classify data. However, Deep Learning can experience catastrophic forgetting: At any stage of learning, an unpredictable part of its memory can collapse. Even if it makes some accurate classifications, they are not explainable and thus cannot be used with confidence. Deep Learning shares these problems with the back propagation algorithm, whose computational problems due to non-local weight transport during mismatch learning were described in the 1980s. Deep Learning became popular after very fast computers and huge online databases became available that enabled new applications despite these problems. Adaptive Resonance Theory, or ART, algorithms overcome the computational problems of back propagation and Deep Learning. ART is a self-organizing production system that incrementally learns, using arbitrary combinations of unsupervised and supervised learning and only locally computable quantities, to rapidly classify large non-stationary databases without experiencing catastrophic forgetting. ART classifications and predictions are explainable using the attended critical feature patterns in STM on which they build. 
The LTM adaptive weights of the fuzzy ARTMAP algorithm induce fuzzy IF-THEN rules that explain what feature combinations predict successful outcomes. ART has been successfully used in multiple large-scale real world applications, including remote sensing, medical database prediction, and social media data clustering. Also explainable are the MOTIVATOR model of reinforcement learning and cognitive-emotional interactions, and the VITE, DIRECT, DIVA, and SOVEREIGN models for reaching, speech production, spatial navigation, and autonomous adaptive intelligence. These biological models exemplify complementary computing, and use local laws for match learning and mismatch learning that avoid the problems of Deep Learning.
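The incremental, match-based classification that the abstract attributes to ART can be sketched as a minimal fuzzy ART loop: complement coding, a choice function, a vigilance-gated match test, and fast learning. This is a didactic sketch with parameter names of our choosing, not the full fuzzy ARTMAP algorithm.

```python
class FuzzyART:
    """Didactic fuzzy ART sketch: complement coding, choice function,
    vigilance-gated resonance, and fast learning without overwriting
    previously learned categories."""

    def __init__(self, rho=0.75, alpha=0.01, beta=1.0):
        self.rho = rho      # vigilance: how strict the match test is
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.w = []         # one weight vector per learned category

    @staticmethod
    def _cc(x):
        # Complement coding preserves both "on" and "off" features.
        return list(x) + [1.0 - v for v in x]

    def learn(self, x):
        i = self._cc(x)
        # Rank categories by the choice function T_j = |i ^ w_j| / (alpha + |w_j|),
        # where ^ is the fuzzy AND (componentwise minimum).
        def T(j):
            m = sum(min(a, b) for a, b in zip(i, self.w[j]))
            return m / (self.alpha + sum(self.w[j]))
        for j in sorted(range(len(self.w)), key=T, reverse=True):
            match = sum(min(a, b) for a, b in zip(i, self.w[j])) / sum(i)
            if match >= self.rho:  # resonance: refine the winning category
                self.w[j] = [self.beta * min(a, b) + (1 - self.beta) * b
                             for a, b in zip(i, self.w[j])]
                return j
        self.w.append(i)  # mismatch with every category: recruit a new one
        return len(self.w) - 1

net = FuzzyART(rho=0.75)
print(net.learn([0.9, 0.1]), net.learn([0.8, 0.2]), net.learn([0.1, 0.9]))  # 0 0 1
```

Because learned weights only shrink via the componentwise minimum and new inputs that mismatch every category recruit fresh nodes, old categories are not overwritten, which is the stability property the abstract contrasts with catastrophic forgetting.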
Affiliation(s)
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Center for Adaptive Systems, Boston University, Boston, MA, United States
7
Grossberg S. Developmental Designs and Adult Functions of Cortical Maps in Multiple Modalities: Perception, Attention, Navigation, Numbers, Streaming, Speech, and Cognition. Front Neuroinform 2020; 14:4. PMID: 32116628; PMCID: PMC7016218; DOI: 10.3389/fninf.2020.00004.
Abstract
This article unifies neural modeling results that illustrate several basic design principles and mechanisms that are used by advanced brains to develop cortical maps with multiple psychological functions. One principle concerns how brains use a strip map that simultaneously enables one feature to be represented throughout its extent, as well as an ordered array of another feature at different positions of the strip. Strip maps include circuits to represent ocular dominance and orientation columns, place-value numbers, auditory streams, speaker-normalized speech, and cognitive working memories that can code repeated items. A second principle concerns how feature detectors for multiple functions develop in topographic maps, including maps for optic flow navigation, reinforcement learning, motion perception, and category learning at multiple organizational levels. A third principle concerns how brains exploit a spatial gradient of cells that respond at an ordered sequence of different rates. Such a rate gradient is found along the dorsoventral axis of the entorhinal cortex, whose lateral branch controls the development of time cells, and whose medial branch controls the development of grid cells. Populations of time cells can be used to learn how to adaptively time behaviors for which a time interval of hundreds of milliseconds, or several seconds, must be bridged, as occurs during trace conditioning. Populations of grid cells can be used to learn hippocampal place cells that represent the large spaces in which animals navigate. A fourth principle concerns how and why all neocortical circuits are organized into layers, and how functionally distinct columns develop in these circuits to enable map development. A final principle concerns the role of Adaptive Resonance Theory top-down matching and attentional circuits in the dynamic stabilization of early development and adult learning. 
Cortical maps are modeled in visual, auditory, temporal, parietal, prefrontal, entorhinal, and hippocampal cortices.
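The rate-gradient principle above, a population of cells whose integration rates span an ordered range so that their peak responses tile an interval, can be illustrated with a simple gamma-like response curve. The response form and names below are our illustrative choices, not the article's equations.

```python
import math

def cell_response(rate, t):
    """A cell integrating at a given rate, with a response
    x(t) = (rate * t) * exp(1 - rate * t) that peaks at t = 1/rate
    (normalized so the peak value is 1)."""
    return (rate * t) * math.exp(1.0 - rate * t)

# Cells along a rate gradient peak at ordered times, tiling an
# interval from hundreds of milliseconds to seconds.
ts = [k * 0.01 for k in range(1, 501)]  # 0.01 s .. 5 s
for rate in [4.0, 2.0, 1.0, 0.5]:
    peak_t = max(ts, key=lambda t: cell_response(rate, t))
    print(rate, round(peak_t, 2))  # slower rates peak later
```

A downstream learner that weights such a population can then bridge intervals of hundreds of milliseconds to seconds, which is the role the abstract assigns to time cells in adaptively timed behavior.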
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
8
Zeid O, Bullock D. Moving in time: Simulating how neural circuits enable rhythmic enactment of planned sequences. Neural Netw 2019; 120:86-107. DOI: 10.1016/j.neunet.2019.08.006.
9
Grossberg S. The resonant brain: How attentive conscious seeing regulates action sequences that interact with attentive cognitive learning, recognition, and prediction. Atten Percept Psychophys 2019; 81:2237-2264. PMID: 31218601; PMCID: PMC6848053; DOI: 10.3758/s13414-019-01789-2.
Abstract
This article describes mechanistic links that exist in advanced brains between processes that regulate conscious attention, seeing, and knowing, and those that regulate looking and reaching. These mechanistic links arise from basic properties of brain design principles such as complementary computing, hierarchical resolution of uncertainty, and adaptive resonance. These principles require conscious states to mark perceptual and cognitive representations that are complete, context sensitive, and stable enough to control effective actions. Surface-shroud resonances support conscious seeing and action, whereas feature-category resonances support learning, recognition, and prediction of invariant object categories. Feedback interactions between cortical areas such as peristriate visual cortical areas V2, V3A, and V4, and the lateral intraparietal area (LIP) and inferior parietal sulcus (IPS) of the posterior parietal cortex (PPC) control sequences of saccadic eye movements that foveate salient features of attended objects and thereby drive invariant object category learning. Learned categories can, in turn, prime the objects and features that are attended and searched. These interactions coordinate processes of spatial and object attention, figure-ground separation, predictive remapping, invariant object category learning, and visual search. They create a foundation for learning to control motor-equivalent arm movement sequences, and for storing these sequences in cognitive working memories that can trigger the learning of cognitive plans with which to read out skilled movement sequences. Cognitive-emotional interactions that are regulated by reinforcement learning can then help to select the plans that control actions most likely to acquire valued goal objects in different situations. 
Many interdisciplinary psychological and neurobiological data about conscious and unconscious behaviors in normal individuals and clinical patients have been explained in terms of these concepts and mechanisms.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Room 213, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Boston, MA, 02215, USA.
10
Grossberg S. The Embodied Brain of SOVEREIGN2: From Space-Variant Conscious Percepts During Visual Search and Navigation to Learning Invariant Object Categories and Cognitive-Emotional Plans for Acquiring Valued Goals. Front Comput Neurosci 2019; 13:36. PMID: 31333437; PMCID: PMC6620614; DOI: 10.3389/fncom.2019.00036.
Abstract
This article develops a model of how reactive and planned behaviors interact in real time. Controllers for both animals and animats need reactive mechanisms for exploration, and learned plans to efficiently reach goal objects once an environment becomes familiar. The SOVEREIGN model embodied these capabilities, and was tested in a 3D virtual reality environment. Neural models have characterized important adaptive and intelligent processes that were not included in SOVEREIGN. A major research program is summarized herein by which to consistently incorporate them into an enhanced model called SOVEREIGN2. Key new perceptual, cognitive, cognitive-emotional, and navigational processes require feedback networks which regulate resonant brain states that support conscious experiences of seeing, feeling, and knowing. Also included are computationally complementary processes of the mammalian neocortical What and Where processing streams, and homologous mechanisms for spatial navigation and arm movement control. These include: Unpredictably moving targets are tracked using coordinated smooth pursuit and saccadic movements. Estimates of target and present position are computed in the Where stream, and can activate approach movements. Motion cues can elicit orienting movements to bring new targets into view. Cumulative movement estimates are derived from visual and vestibular cues. Arbitrary navigational routes are incrementally learned as a labeled graph of angles turned and distances traveled between turns. Noisy and incomplete visual sensor data are transformed into representations of visual form and motion. Invariant recognition categories are learned in the What stream. Sequences of invariant object categories are stored in a cognitive working memory, whereas sequences of movement positions and directions are stored in a spatial working memory. 
Stored sequences trigger learning of cognitive and spatial/motor sequence categories or plans, also called list chunks, which control planned decisions and movements toward valued goal objects. Predictively successful list chunk combinations are selectively enhanced or suppressed via reinforcement learning and incentive motivational learning. Expected vs. unexpected event disconfirmations regulate these enhancement and suppressive processes. Adaptively timed learning enables attention and action to match task constraints. Social cognitive joint attention enables imitation learning of skills by learners who observe teachers from different spatial vantage points.
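The route representation described above, "a labeled graph of angles turned and distances traveled between turns," supports dead reckoning by integrating each (turn, distance) segment into a cumulative position estimate. A minimal sketch, with function and argument names of our own choosing:

```python
import math

def integrate_route(segments, start=(0.0, 0.0), heading=0.0):
    """Dead-reckon a route stored as (turn_angle_deg, distance)
    pairs into an end position and final heading, by rotating the
    heading at each turn and advancing along it."""
    x, y = start
    for turn, dist in segments:
        heading += math.radians(turn)
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return (x, y), heading

# A square route of four 90-degree turns with unit legs closes on itself.
(pos, h) = integrate_route([(90, 1), (90, 1), (90, 1), (90, 1)])
print(round(pos[0], 6), round(pos[1], 6))  # both ≈ 0: the route returns home
```

Storing the same segments as edge labels in a graph, rather than integrating them immediately, is what lets a route be learned incrementally and replayed or re-planned later.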
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
11
Grossberg S, Kishnan D. Neural Dynamics of Autistic Repetitive Behaviors and Fragile X Syndrome: Basal Ganglia Movement Gating and mGluR-Modulated Adaptively Timed Learning. Front Psychol 2018; 9:269. PMID: 29593596; PMCID: PMC5859312; DOI: 10.3389/fpsyg.2018.00269.
Abstract
This article develops the iSTART neural model that proposes how specific imbalances in cognitive, emotional, timing, and motor processes that involve brain regions like prefrontal cortex, temporal cortex, amygdala, hypothalamus, hippocampus, and cerebellum may interact together to cause behavioral symptoms of autism. These imbalances include underaroused emotional depression in the amygdala/hypothalamus, learning of hyperspecific recognition categories that help to cause narrowly focused attention in temporal and prefrontal cortices, and breakdowns of adaptively timed motivated attention and motor circuits in the hippocampus and cerebellum. The article expands the model's explanatory range by, first, explaining recent data about Fragile X syndrome (FXS), mGluR, and trace conditioning; and, second, by explaining distinct causes of stereotyped behaviors in individuals with autism. Some of these stereotyped behaviors, such as an insistence on sameness and circumscribed interests, may result from imbalances in the cognitive and emotional circuits that iSTART models. These behaviors may be ameliorated by operant conditioning methods. Other stereotyped behaviors, such as repetitive motor behaviors, may result from imbalances in how the direct and indirect pathways of the basal ganglia open or close movement gates, respectively. These repetitive behaviors may be ameliorated by drugs that augment D2 dopamine receptor responses or reduce D1 dopamine receptor responses. The article also notes the ubiquitous role of gating by basal ganglia loops in regulating all the functions that iSTART models.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
- Devika Kishnan
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
12
Grossberg S. Desirability, availability, credit assignment, category learning, and attention: Cognitive-emotional and working memory dynamics of orbitofrontal, ventrolateral, and dorsolateral prefrontal cortices. Brain Neurosci Adv 2018; 2:2398212818772179. PMID: 32166139; PMCID: PMC7058233; DOI: 10.1177/2398212818772179.
Abstract
BACKGROUND: The prefrontal cortices play an essential role in cognitive-emotional and working memory processes through interactions with multiple brain regions. METHODS: This article further develops a unified neural architecture that explains many recent and classical data about prefrontal function and makes testable predictions. RESULTS: Prefrontal properties of desirability, availability, credit assignment, category learning, and feature-based attention are explained. These properties arise through interactions of orbitofrontal, ventrolateral prefrontal, and dorsolateral prefrontal cortices with the inferotemporal cortex, perirhinal and parahippocampal cortices; the ventral bank of the principal sulcus, ventral prearcuate gyrus, frontal eye fields, hippocampus, amygdala, basal ganglia, hypothalamus, and visual cortical areas V1, V2, V3A, V4, middle temporal cortex, medial superior temporal area, lateral intraparietal cortex, and posterior parietal cortex. Model explanations also include how the value of visual objects and events is computed, which objects and events cause desired consequences and which may be ignored as predictively irrelevant, and how to plan and act to realise these consequences, including how to selectively filter expected versus unexpected events, leading to movements towards, and conscious perception of, expected events. Modelled processes include reinforcement learning and incentive motivational learning; object and spatial working memory dynamics; and category learning, including the learning of object categories, value categories, object-value categories, and sequence categories, or list chunks. CONCLUSION: This article hereby proposes a unified neural theory of prefrontal cortex and its functions.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, Biomedical Engineering, Boston University, Boston, MA, USA
13
Franklin DJ, Grossberg S. A neural model of normal and abnormal learning and memory consolidation: adaptively timed conditioning, hippocampus, amnesia, neurotrophins, and consciousness. Cogn Affect Behav Neurosci 2017; 17:24-76. PMID: 27905080; PMCID: PMC5272895; DOI: 10.3758/s13415-016-0463-y.
Abstract
How do the hippocampus and amygdala interact with thalamocortical systems to regulate cognitive and cognitive-emotional learning? Why do lesions of thalamus, amygdala, hippocampus, and cortex have differential effects depending on the phase of learning when they occur? In particular, why is the hippocampus typically needed for trace conditioning, but not delay conditioning, and what do the exceptions reveal? Why do amygdala lesions made before or immediately after training decelerate conditioning while those made later do not? Why do thalamic or sensory cortical lesions degrade trace conditioning more than delay conditioning? Why do hippocampal lesions during trace conditioning experiments degrade recent but not temporally remote learning? Why do orbitofrontal cortical lesions degrade temporally remote but not recent or post-lesion learning? How is temporally graded amnesia caused by ablation of prefrontal cortex after memory consolidation? How are attention and consciousness linked during conditioning? How do neurotrophins, notably brain-derived neurotrophic factor (BDNF), influence memory formation and consolidation? Is there a common output path for learned performance? A neural model proposes a unified answer to these questions that overcomes problems of alternative memory models.
Affiliation(s)
- Daniel J Franklin
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, and Departments of Mathematics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Room 213, Boston, MA, 02215, USA
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, and Departments of Mathematics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, 677 Beacon Street, Room 213, Boston, MA, 02215, USA.
14
Grossberg S. Towards solving the hard problem of consciousness: The varieties of brain resonances and the conscious experiences that they support. Neural Netw 2016; 87:38-95. PMID: 28088645; DOI: 10.1016/j.neunet.2016.11.003.
Abstract
The hard problem of consciousness is the problem of explaining how we experience qualia or phenomenal experiences, such as seeing, hearing, and feeling, and knowing what they are. To solve this problem, a theory of consciousness needs to link brain to mind by modeling how emergent properties of several brain mechanisms interacting together embody detailed properties of individual conscious psychological experiences. This article summarizes evidence that Adaptive Resonance Theory, or ART, accomplishes this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that "all conscious states are resonant states" as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It hereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional experiences. ART has reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. Psychological and neurobiological data in both normal individuals and clinical patients are clarified by this classification. This analysis also explains why not all resonances become conscious, and why not all brain dynamics are resonant. The global organization of the brain into computationally complementary cortical processing streams (complementary computing), and the organization of the cerebral cortex into characteristic layers of cells (laminar computing), figure prominently in these explanations of conscious and unconscious processes. Alternative models of consciousness are also discussed.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA; Graduate Program in Cognitive and Neural Systems, Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering Boston University, 677 Beacon Street, Boston, MA 02215, USA.
15
Grossberg S, Kazerounian S. Phoneme restoration and empirical coverage of Interactive Activation and Adaptive Resonance models of human speech processing. J Acoust Soc Am 2016; 140:1130. [PMID: 27586743 DOI: 10.1121/1.4946760] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Magnuson [J. Acoust. Soc. Am. 137, 1481-1492 (2015)] makes claims for Interactive Activation (IA) models and against Adaptive Resonance Theory (ART) models of speech perception. Magnuson also presents simulations that claim to show that the TRACE model can simulate phonemic restoration, which was an explanatory target of the cARTWORD ART model. The theoretical analysis and review herein show that these claims are incorrect. More generally, the TRACE and cARTWORD models illustrate two diametrically opposed types of neural models of speech and language. The TRACE model embodies core assumptions with no analog in known brain processes. The cARTWORD model defines a hierarchy of cortical processing regions whose networks embody cells in laminar cortical circuits as part of the paradigm of laminar computing. cARTWORD further develops ART speech and language models that were introduced in the 1970s. It builds upon Item-Order-Rank working memories, which activate learned list chunks that unitize sequences to represent phonemes, syllables, and words. Psychophysical and neurophysiological data support Item-Order-Rank mechanisms and contradict TRACE representations of time, temporal order, silence, and top-down processing that exhibit many anomalous properties, including hallucinations of non-occurring future phonemes. Computer simulations of the TRACE model are presented that demonstrate these failures.
Affiliation(s)
- Stephen Grossberg
- Departments of Mathematics, Psychology, and Biomedical Engineering, Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, Massachusetts 02215, USA
- Sohrob Kazerounian
- Nuance Communications, Inc., 1 Wayside Road, Burlington, Massachusetts 01803, USA
16
Ye W, Liu S, Liu X, Yu Y. A neural model of the frontal eye fields with reward-based learning. Neural Netw 2016; 81:39-51. [PMID: 27284696 DOI: 10.1016/j.neunet.2016.05.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2015] [Revised: 05/03/2016] [Accepted: 05/06/2016] [Indexed: 11/24/2022]
Abstract
Decision-making is a flexible process dependent on the accumulation of various kinds of information; however, the corresponding neural mechanisms are far from clear. We extended a layered model of the frontal eye field to a learning-based model, using computational simulations to explain the cognitive process of choice tasks. The core of this extended model has three aspects: direction-preferred populations that cluster together the neurons with the same orientation preference, rule modules that control different rule-dependent activities, and reward-based synaptic plasticity that modulates connections to flexibly change the decision according to task demands. After repeated attempts in a number of trials, the network successfully simulated three decision choice tasks: an anti-saccade task, a no-go task, and an associative task. We found that synaptic plasticity could modulate the competition of choices by suppressing erroneous choices while enhancing the correct (rewarding) choice. In addition, the trained model captured some properties exhibited in animal and human experiments, such as the latency of the reaction time distribution of anti-saccades, the stop signal mechanism for canceling a reflexive saccade, and the variation of latency to half-max selectivity. Furthermore, the trained model was capable of reproducing the re-learning procedures when switching tasks and reversing the cue-saccade association.
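The reward-based synaptic plasticity described above, which suppresses erroneous choices while enhancing the rewarded one, can be illustrated with a minimal toy model. This is a hypothetical sketch: the function name, parameters, and the simple baseline-subtracted update rule are my assumptions, not the authors' layered frontal eye field implementation.

```python
import random

def train_reward_modulated(trials=500, lr=0.2, explore=0.1, seed=0):
    """Toy reward-modulated plasticity: learn an anti-saccade-like rule
    (cue 'left' -> saccade 'right') starting from a reflexive bias
    toward the pro-saccade response."""
    rng = random.Random(seed)
    # w[cue][action]: initial weights favor the reflexive (pro-saccade) choice
    w = {"left": {"left": 1.0, "right": 0.5},
         "right": {"left": 0.5, "right": 1.0}}
    correct = {"left": "right", "right": "left"}  # the anti-saccade rule
    for _ in range(trials):
        cue = rng.choice(["left", "right"])
        if rng.random() < explore:            # occasional exploratory choice
            act = rng.choice(["left", "right"])
        else:                                 # otherwise greedy choice
            act = max(w[cue], key=w[cue].get)
        reward = 1.0 if act == correct[cue] else 0.0
        # only the chosen cue->action synapse changes: strengthened above
        # a 0.5 reward baseline, weakened below it, clipped at zero
        w[cue][act] = max(0.0, w[cue][act] + lr * (reward - 0.5))
    return w

w = train_reward_modulated()
# the rewarded anti-saccade mapping now outweighs the reflexive one
assert w["left"]["right"] > w["left"]["left"]
assert w["right"]["left"] > w["right"]["right"]
```

Because only the chosen synapse is updated, re-running training after swapping the `correct` mapping re-learns the reversed cue-saccade association, loosely mirroring the re-learning on task switches that the abstract mentions.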
Affiliation(s)
- Weijie Ye
- School of Mathematics, South China University of Technology, Guangzhou, 510640, China
- Shenquan Liu
- School of Mathematics, South China University of Technology, Guangzhou, 510640, China
- Xuanliang Liu
- School of Mathematics, South China University of Technology, Guangzhou, 510640, China
- Yuguo Yu
- Center for Computational Systems Biology, The State Key Laboratory of Medical Neurobiology and Institutes of Brain Science, Fudan University, School of Life Sciences, Shanghai, 200433, China
17
Grossberg S, Palma J, Versace M. Resonant Cholinergic Dynamics in Cognitive and Motor Decision-Making: Attention, Category Learning, and Choice in Neocortex, Superior Colliculus, and Optic Tectum. Front Neurosci 2016; 9:501. [PMID: 26834535 PMCID: PMC4718999 DOI: 10.3389/fnins.2015.00501] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 12/18/2015] [Indexed: 12/20/2022] Open
Abstract
Freely behaving organisms need to rapidly calibrate their perceptual, cognitive, and motor decisions based on continuously changing environmental conditions. These plastic changes include sharpening or broadening of cognitive and motor attention and learning to match the behavioral demands that are imposed by changing environmental statistics. This article proposes that a shared circuit design for such flexible decision-making is used in specific cognitive and motor circuits, and that both types of circuits use acetylcholine to modulate choice selectivity. Such task-sensitive control is proposed to control thalamocortical choice of the critical features that are cognitively attended and that are incorporated through learning into prototypes of visual recognition categories. A cholinergically-modulated process of vigilance control determines if a recognition category and its attended features are abstract (low vigilance) or concrete (high vigilance). Homologous neural mechanisms of cholinergic modulation are proposed to focus attention and learn a multimodal map within the deeper layers of superior colliculus. This map enables visual, auditory, and planned movement commands to compete for attention, leading to selection of a winning position that controls where the next saccadic eye movement will go. Such map learning may be viewed as a kind of attentive motor category learning. The article hereby explicates a link between attention, learning, and cholinergic modulation during decision making within both cognitive and motor systems. Homologs between the mammalian superior colliculus and the avian optic tectum lead to predictions about how multimodal map learning may occur in the mammalian and avian brain and how such learning may be modulated by acetylcholine.
Affiliation(s)
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Boston University, Boston, MA, USA
- Center for Adaptive Systems, Boston University, Boston, MA, USA
- Departments of Mathematics, Psychology, and Biomedical Engineering, Boston University, Boston, MA, USA
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Jesse Palma
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
- Massimiliano Versace
- Graduate Program in Cognitive and Neural Systems, Boston University, Boston, MA, USA
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
18
Neural Dynamics of the Basal Ganglia During Perceptual, Cognitive, and Motor Learning and Gating. INNOVATIONS IN COGNITIVE NEUROSCIENCE 2016. [DOI: 10.1007/978-3-319-42743-0_19] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
19
Realizing the Now-or-Never bottleneck and Chunk-and-Pass processing with Item-Order-Rank working memories and masking field chunking networks. Behav Brain Sci 2016; 39:e75. [DOI: 10.1017/s0140525x15000801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Christiansen & Chater's (C&C's) key goals for a language system have been realized by neural models for short-term storage of linguistic items in an Item-Order-Rank working memory, which inputs to Masking Fields that rapidly learn to categorize, or chunk, variable-length linguistic sequences, and choose the contextually most predictive list chunks while linguistic inputs are stored in the working memory.
20
Clough M, Mitchell L, Millist L, Lizak N, Beh S, Frohman TC, Frohman EM, White OB, Fielding J. Ocular motor measures of cognitive dysfunction in multiple sclerosis II: working memory. J Neurol 2015; 262:1138-47. [PMID: 25851742 DOI: 10.1007/s00415-015-7644-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 01/10/2015] [Accepted: 01/11/2015] [Indexed: 11/24/2022]
Abstract
Our companion paper documents pervasive inhibitory deficits in multiple sclerosis (MS) using ocular motor (OM) measures. Here we investigated the utility of an OM working memory (WMem) task in characterising WMem deficits in these patients as a function of disease status and disease duration. Twenty-two patients with clinically isolated syndrome (CIS), 22 early clinically definite MS patients (CDMS: <7 years since diagnosis), 22 late CDMS patients (>7 years since diagnosis), and 22 healthy controls participated. All participants completed the ocular motor WMem task, the paced auditory serial addition test (PASAT), and the symbol digit modalities test (SDMT). Clinical disability was characterised in CDMS patients using the Expanded Disability Status Scale (EDSS). WMem performance was measured as proportion of errors (WMem errors), saccade latency, and relative sensitivity to WMem loading (WMem effect), an indicator of WMem capacity. All patient groups made more WMem errors than controls, with both the proportion of WMem errors and the degree of WMem effect increasing with disease duration. A larger WMem effect, reflecting poorer WMem capacity, corresponded to poorer performance on neuropsychological measures and a higher disability score for CDMS patients with the longest disease duration, an observation that suggests wider implication of WMem executive processes with advancing disease. Conspicuously, performance decrements on standard neuropsychological testing did not increase commensurately with disease duration. The ocular motor WMem task appears to meaningfully dissociate WMem deficits from healthy performance, as well as track their progression with increasing disease duration. Potentially, this task represents a highly informative and objective method by which to ascertain progressive WMem changes from the earliest inception of MS.
Affiliation(s)
- Meaghan Clough
- School of Psychological Sciences, Monash University, Clayton, 3800, Australia
21
Magnuson JS. Phoneme restoration and empirical coverage of interactive activation and adaptive resonance models of human speech processing. J Acoust Soc Am 2015; 137:1481-92. [PMID: 25786959 PMCID: PMC4368586 DOI: 10.1121/1.4904543] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2014] [Revised: 11/17/2014] [Accepted: 11/25/2014] [Indexed: 06/04/2023]
Abstract
Grossberg and Kazerounian [(2011). J. Acoust. Soc. Am. 130, 440-460] present a model of sequence representation for spoken word recognition, the cARTWORD model, which simulates essential aspects of phoneme restoration. Grossberg and Kazerounian also include simulations with the TRACE model presented by McClelland and Elman [(1986). Cognit. Psychol. 18, 1-86] that seem to indicate that TRACE cannot simulate phoneme restoration. Grossberg and Kazerounian also claim cARTWORD should be preferred to TRACE because of TRACE's implausible approach to sequence representation (reduplication of time-specific units) and use of non-modulatory feedback (i.e., without position-specific bottom-up support). This paper responds to Grossberg and Kazerounian first with TRACE simulations that account for phoneme restoration when appropriately constructed noise is used (and with minor changes to TRACE phoneme definitions), then reviews the case for reduplicated units and feedback as implemented in TRACE, as well as TRACE's broad and deep coverage of empirical data. Finally, it is argued that cARTWORD is not comparable to TRACE because cARTWORD cannot represent sequences with repeated elements, has only been implemented with small phoneme and lexical inventories, and has been applied to only one phenomenon (phoneme restoration). Without evidence that cARTWORD captures a similar range and detail of human spoken language processing as alternative models, it is premature to prefer cARTWORD to TRACE.
Affiliation(s)
- James S Magnuson
- Department of Psychology, University of Connecticut, Storrs, Connecticut 06269
22
Grossberg S, Srinivasan K, Yazdanbakhsh A. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements. Front Psychol 2015; 5:1457. [PMID: 25642198 PMCID: PMC4294135 DOI: 10.3389/fpsyg.2014.01457] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2014] [Accepted: 11/28/2014] [Indexed: 12/02/2022] Open
Abstract
How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Karthik Srinivasan
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
- Arash Yazdanbakhsh
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Center of Excellence for Learning in Education, Science and Technology, Center for Computational Neuroscience and Neural Technology, and Department of Mathematics, Boston University, Boston, MA, USA
23
Kazerounian S, Grossberg S. Real-time learning of predictive recognition categories that chunk sequences of items stored in working memory. Front Psychol 2014; 5:1053. [PMID: 25339918 PMCID: PMC4186345 DOI: 10.3389/fpsyg.2014.01053] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2014] [Accepted: 09/02/2014] [Indexed: 11/20/2022] Open
Abstract
How are sequences of events that are temporarily stored in a cognitive working memory unitized, or chunked, through learning? Such sequential learning is needed by the brain in order to enable language, spatial understanding, and motor skills to develop. In particular, how does the brain learn categories, or list chunks, that become selectively tuned to different temporal sequences of items in lists of variable length as they are stored in working memory, and how does this learning process occur in real time? The present article introduces a neural model that simulates learning of such list chunks. In this model, sequences of items are temporarily stored in an Item-and-Order, or competitive queuing, working memory before learning categorizes them using a categorization network, called a Masking Field, which is a self-similar, multiple-scale, recurrent on-center off-surround network that can weigh the evidence for variable-length sequences of items as they are stored in the working memory through time. A Masking Field hereby activates the learned list chunks that represent the most predictive item groupings at any time, while suppressing less predictive chunks. In a network with a given number of input items, all possible ordered sets of these item sequences, up to a fixed length, can be learned with unsupervised or supervised learning. The self-similar multiple-scale properties of Masking Fields interacting with an Item-and-Order working memory provide a natural explanation of George Miller's Magical Number Seven and Nelson Cowan's Magical Number Four. The article explains why linguistic, spatial, and action event sequences may all be stored by Item-and-Order working memories that obey similar design principles, and thus how the current results may apply across modalities. Item-and-Order properties may readily be extended to Item-Order-Rank working memories in which the same item can be stored in multiple list positions, or ranks, as in the list ABADBD. 
Comparisons with other models, including TRACE, MERGE, and TISK, are made.
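The Item-and-Order (competitive queuing) storage scheme summarized above can be sketched in a few lines: earlier items are stored with larger activities (a primacy gradient), and recall repeatedly selects and then suppresses the most active item. This is an illustrative toy, not the paper's Masking Field model; it also shows why plain Item-and-Order storage fails for lists with repeated items such as ABADBD, motivating the Item-Order-Rank extension.

```python
def store_primacy_gradient(items, decay=0.8):
    """Item-and-Order storage: each new item is stored with a smaller
    activity than its predecessors, so the activity gradient encodes
    temporal order (a primacy gradient)."""
    return {item: decay ** i for i, item in enumerate(items)}

def recall(activities):
    """Competitive-queuing recall: repeatedly pick the most active item,
    then suppress it so the next-most-active item can win."""
    acts = dict(activities)
    order = []
    while acts:
        winner = max(acts, key=acts.get)
        order.append(winner)
        del acts[winner]  # self-inhibition of the recalled item
    return order

seq = ["A", "B", "C", "D"]
assert recall(store_primacy_gradient(seq)) == seq
# NOTE: a repeated item (as in ABADBD) would overwrite its earlier activity
# here, which is why Item-Order-Rank working memories add a rank dimension.
```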
Affiliation(s)
| | - Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Department of Mathematics, Center for Adaptive Systems, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA
24
Accuracy and response-time distributions for decision-making: linear perfect integrators versus nonlinear attractor-based neural circuits. J Comput Neurosci 2013; 35:261-94. [PMID: 23608921 PMCID: PMC3825033 DOI: 10.1007/s10827-013-0452-x] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2012] [Revised: 03/25/2013] [Accepted: 03/27/2013] [Indexed: 12/31/2022]
Abstract
Animals choose actions based on imperfect, ambiguous data. “Noise” inherent in neural processing adds further variability to this already-noisy input signal. Mathematical analysis has suggested that the optimal apparatus (in terms of the speed/accuracy trade-off) for reaching decisions about such noisy inputs is perfect accumulation of the inputs by a temporal integrator. Thus, most highly cited models of neural circuitry underlying decision-making have been instantiations of a perfect integrator. Here, in accordance with a growing mathematical and empirical literature, we describe circumstances in which perfect integration is rendered suboptimal. In particular we highlight the impact of three biological constraints: (1) significant noise arising within the decision-making circuitry itself; (2) bounding of integration by maximal neural firing rates; and (3) time limitations on making a decision. Under conditions (1) and (2), an attractor system with stable attractor states can easily best an integrator when accuracy is more important than speed. Moreover, under conditions in which such stable attractor networks do not best the perfect integrator, a system with unstable initial states can do so if readout of the system’s final state is imperfect. Ubiquitously, an attractor system with a nonselective time-dependent input current is both more accurate and more robust to imprecise tuning of parameters than an integrator with such input. Given that neural responses that switch stochastically between discrete states can “masquerade” as integration in single-neuron and trial-averaged data, our results suggest that such networks should be considered as plausible alternatives to the integrator model.
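A minimal simulation contrasting the two model classes discussed above, a perfect linear integrator versus a bounded bistable attractor, might look as follows. All parameters and the cubic attractor dynamics are illustrative assumptions; as the abstract emphasizes, which model is more accurate depends on the noise sources, the firing-rate bound, and the decision deadline.

```python
import random

def perfect_integrator(drift, noise, steps, rng):
    """Linear perfect integration: the statistically optimal accumulator
    when the only noise is in the input stream."""
    x = 0.0
    for _ in range(steps):
        x += drift + rng.gauss(0.0, noise)
    return x

def attractor(drift, noise, steps, rng, gain=0.1, bound=5.0):
    """Bistable attractor: a cubic nonlinearity pushes the state toward
    stable fixed points near +/-bound, so the final readout is committed
    and more robust to late-arriving noise."""
    x = 0.0
    for _ in range(steps):
        x += drift + rng.gauss(0.0, noise) + gain * x * (1.0 - (x / bound) ** 2)
        x = max(-bound, min(bound, x))  # maximal-firing-rate bound
    return x

def accuracy(model, trials=2000, drift=0.05, noise=1.0, steps=100, seed=1):
    """Fraction of trials on which the final state has the drift's sign."""
    rng = random.Random(seed)
    return sum(model(drift, noise, steps, rng) > 0 for _ in range(trials)) / trials

acc_pi = accuracy(perfect_integrator)
acc_at = accuracy(attractor)
assert acc_pi > 0.5  # integration beats chance when drift is positive
```

Varying the noise injected inside `attractor` versus at its readout is one way to explore the regimes in which, per the abstract, the attractor system can best the perfect integrator.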
25
Gasser B, Cartmill EA, Arbib MA. Ontogenetic Ritualization of Primate Gesture as a Case Study in Dyadic Brain Modeling. Neuroinformatics 2013; 12:93-109. [DOI: 10.1007/s12021-013-9182-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
26
Adaptive Resonance Theory: How a brain learns to consciously attend, learn, and recognize a changing world. Neural Netw 2013; 37:1-47. [PMID: 23149242 DOI: 10.1016/j.neunet.2012.09.017] [Citation(s) in RCA: 183] [Impact Index Per Article: 16.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2012] [Revised: 08/24/2012] [Accepted: 09/24/2012] [Indexed: 11/17/2022]
27
Palma J, Grossberg S, Versace M. Persistence and storage of activity patterns in spiking recurrent cortical networks: modulation of sigmoid signals by after-hyperpolarization currents and acetylcholine. Front Comput Neurosci 2012; 6:42. [PMID: 22754524 PMCID: PMC3386521 DOI: 10.3389/fncom.2012.00042] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2012] [Accepted: 06/11/2012] [Indexed: 11/13/2022] Open
Abstract
Many cortical networks contain recurrent architectures that transform input patterns before storing them in short-term memory (STM). Theorems in the 1970's showed how feedback signal functions in rate-based recurrent on-center off-surround networks control this process. A sigmoid signal function induces a quenching threshold below which inputs are suppressed as noise and above which they are contrast-enhanced before pattern storage. This article describes how changes in feedback signaling, neuromodulation, and recurrent connectivity may alter pattern processing in recurrent on-center off-surround networks of spiking neurons. In spiking neurons, fast, medium, and slow after-hyperpolarization (AHP) currents control sigmoid signal threshold and slope. Modulation of AHP currents by acetylcholine (ACh) can change sigmoid shape and, with it, network dynamics. For example, decreasing signal function threshold and increasing slope can lengthen the persistence of a partially contrast-enhanced pattern, increase the number of active cells stored in STM, or, if connectivity is distance-dependent, cause cell activities to cluster. These results clarify how cholinergic modulation by the basal forebrain may alter the vigilance of category learning circuits, and thus their sensitivity to predictive mismatches, thereby controlling whether learned categories code concrete or abstract features, as predicted by Adaptive Resonance Theory. The analysis includes global, distance-dependent, and interneuron-mediated circuits. With an appropriate degree of recurrent excitation and inhibition, spiking networks maintain a partially contrast-enhanced pattern for 800 ms or longer after stimuli offset, then resolve to no stored pattern, or to winner-take-all (WTA) stored patterns with one or multiple winners. 
Strengthening inhibition prolongs a partially contrast-enhanced pattern by slowing the transition to stability, while strengthening excitation causes more winners when the network stabilizes.
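The quenching-threshold behavior described above, where a sigmoid feedback signal in a recurrent on-center off-surround network suppresses small activities as noise while the largest activity comes to dominate, can be sketched with simplified rate-based (non-spiking) dynamics. The threshold, slope, and shunting equation below are illustrative choices, not the paper's spiking parameterization.

```python
import math

def sigmoid(x, threshold=0.3, slope=10.0):
    """Sigmoid feedback signal; threshold and slope set the quenching level."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def recurrent_competition(pattern, steps=200, dt=0.05):
    """Rate-based recurrent on-center off-surround network with shunting
    dynamics: each cell excites itself and inhibits all others through
    the sigmoid of its activity."""
    x = list(pattern)
    for _ in range(steps):
        s = [sigmoid(v) for v in x]
        total = sum(s)
        # dx/dt = -decay + (1 - x) * on-center - x * off-surround
        x = [v + dt * (-v + (1.0 - v) * s_i - v * (total - s_i))
             for v, s_i in zip(x, s)]
    return x

final = recurrent_competition([0.10, 0.15, 0.60])
assert final[0] < 0.06 and final[1] < 0.06   # sub-threshold inputs quenched
assert final[2] > 0.30                        # supra-threshold input dominates
assert final[2] / final[1] > 0.60 / 0.15      # relative contrast is enhanced
```

Lowering `threshold` or raising `slope` in `sigmoid` changes how many cells survive the competition, which is the kind of cholinergic modulation of stored-pattern persistence and winner count that the abstract analyzes.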
Affiliation(s)
- Stephen Grossberg
- Graduate Program in Cognitive and Neural Systems, Center for Adaptive Systems, Center of Excellence for Learning in Education, Science, and Technology, Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA, USA