1. Uemura M, Katagiri Y, Imai E, Kawahara Y, Otani Y, Ichinose T, Kondo K, Kowa H. Dorsal Anterior Cingulate Cortex Coordinates Contextual Mental Imagery for Single-Beat Manipulation during Rhythmic Sensorimotor Synchronization. Brain Sci 2024; 14:757. PMID: 39199452; PMCID: PMC11352649; DOI: 10.3390/brainsci14080757.
Abstract
Flexible pulse-by-pulse regulation of sensorimotor synchronization is crucial for voluntarily showing rhythmic behaviors synchronously with external cueing; however, the underpinning neurophysiological mechanisms remain unclear. We hypothesized that the dorsal anterior cingulate cortex (dACC) plays a key role by coordinating both proactive and reactive motor outcomes based on contextual mental imagery. To test our hypothesis, a missing-oddball task in finger-tapping paradigms was conducted in 33 healthy young volunteers. The dynamic properties of the dACC were evaluated by event-related deep-brain activity (ER-DBA), supported by event-related potential (ERP) analysis and behavioral evaluation based on signal detection theory. We found that ER-DBA activation/deactivation reflected a strategic choice of motor control modality in accordance with mental imagery. Reverse ERP traces, as omission responses, confirmed that the imagery was contextual. We found that mental imagery was updated only by environmental changes via perceptual evidence and response-based abductive reasoning. Moreover, stable on-pulse tapping was achievable by maintaining proactive control while creating an imagery of syncopated rhythms from simple beat trains, whereas accuracy was degraded with frequent erroneous tapping for missing pulses. We conclude that the dACC voluntarily regulates rhythmic sensorimotor synchronization by utilizing contextual mental imagery based on experience and by creating novel rhythms.
Affiliation(s)
- Maho Uemura
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Yoshitada Katagiri
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo 113-8655, Japan
- Emiko Imai
- Department of Biophysics, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Yasuhiro Kawahara
- Department of Human Life and Health Sciences, Division of Arts and Sciences, The Open University of Japan, Chiba 261-8586, Japan
- Yoshitaka Otani
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Faculty of Rehabilitation, Kobe International University, Kobe 658-0032, Japan
- Tomoko Ichinose
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Hisatomo Kowa
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
2. Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024; 199:112309. PMID: 38242363; DOI: 10.1016/j.ijpsycho.2024.112309.
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music, while a 15-channel EEG was recorded, and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show both static and dynamic imagery to be associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. With regard to the beta band, our results demonstrate beta enhancement in response to static imagery, but beta suppression followed by enhancement in response to dynamic imagery. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found with respect to static and dynamic imagery and alpha, beta, and gamma band oscillations. Taken together, our results show the promise of using music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work seeking to study the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim
- Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner
- Department of Psychology, Goldsmiths, University of London, United Kingdom; Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich
- Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie
- Department of Psychology, Goldsmiths, University of London, United Kingdom
3. Krempel R, Monzel M. Aphantasia and involuntary imagery. Conscious Cogn 2024; 120:103679. PMID: 38564857; DOI: 10.1016/j.concog.2024.103679.
Abstract
Aphantasia is a condition that is often characterized as the impaired ability to create voluntary mental images. Aphantasia is assumed to selectively affect voluntary imagery mainly because even though aphantasics report being unable to visualize something at will, many report having visual dreams. We argue that this common characterization of aphantasia is incorrect. Studies on aphantasia are often not clear about whether they are assessing voluntary or involuntary imagery, but some studies show that several forms of involuntary imagery are also affected in aphantasia (including imagery in dreams). We also raise problems for two attempts to show that involuntary images are preserved in aphantasia. In addition, we report the results of a study about afterimages in aphantasia, which suggest that these tend to be less intense in aphantasics than in controls. Involuntary imagery is often treated as a unitary kind that is either present or absent in aphantasia. We suggest that this approach is mistaken and that we should look at different types of involuntary imagery case by case. Doing so reveals no evidence of preserved involuntary imagery in aphantasia. We suggest that a broader characterization of aphantasia, as a deficit in forming mental imagery, whether voluntary or not, is more appropriate. Characterizing aphantasia as a volitional deficit is likely to lead researchers to give incorrect explanations for aphantasia, and to look for the wrong mechanisms underlying it.
Affiliation(s)
- Raquel Krempel
- Center for Logic, Epistemology and History of Science, State University of Campinas, R. Sérgio Buarque de Holanda, 251 - Cidade Universitária, Campinas, SP 13083-859, Brazil; Center for Philosophy of Science, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA 15260, USA
- Merlin Monzel
- Department of Psychology, Personality Psychology and Biological Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111 Bonn, Germany
4. Weber S, Christophel T, Görgen K, Soch J, Haynes JD. Working memory signals in early visual cortex are present in weak and strong imagers. Hum Brain Mapp 2024; 45:e26590. PMID: 38401134; PMCID: PMC10893972; DOI: 10.1002/hbm.26590.
Abstract
It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery.
Affiliation(s)
- Simon Weber
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Thomas Christophel
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Kai Görgen
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Joram Soch
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Institute of Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité – Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Collaborative Research Center “Volition and Cognitive Control”, Technische Universität Dresden, Dresden, Germany
5. Schneider I, Herpertz SC, Ueltzhöffer K, Neukel C. Stress and reward in the maternal brain of mothers with borderline personality disorder: a script-based fMRI study. Eur Arch Psychiatry Clin Neurosci 2024; 274:117-127. PMID: 37354380; PMCID: PMC10786970; DOI: 10.1007/s00406-023-01634-6.
Abstract
Borderline personality disorder (BPD) is associated with altered neural activity in regions of salience and emotion regulation. An exaggerated sensitization to emotionally salient situations, increased experience of emotions, and dysfunctional regulative abilities could be reasons for increased distress also during parenting. Mothers with BPD tend to have less reciprocal mother-child interactions (MCI) and reveal altered cortisol and oxytocin reactivity in the interaction with their child, which could indicate altered processing of stress and reward. Here, we studied the underlying neural mechanisms of disrupted MCI in BPD. Twenty-five mothers with BPD and 28 healthy mothers participated in a script-driven imagery functional magnetic resonance imaging (fMRI) paradigm. Scripts described stressful or rewarding MCI with their own child, or situations in which the mother was alone. Mothers with BPD showed larger activities in the bilateral insula and anterior cingulate cortex (ACC) compared to healthy mothers during the imagination of MCI and non-MCI. Already in the precursory phase, while listening to the scripts, a similar pattern emerged, with stronger activity in the left anterior insula (AINS) but not in the ACC. This AINS activity correlated negatively with the quality of real-life MCI for mothers with BPD. Mothers with BPD reported lower affect and higher arousal. An exaggerated sensitization to different, emotionally salient situations, together with dysfunctional emotion regulation abilities, as reflected by increased insula and ACC activity, might hinder sensitive maternal behavior in mothers with BPD. These results underline the importance of psychotherapeutic interventions that reduce emotional hyperarousal and improve emotion regulation in patients with BPD, especially in affected mothers caring for young children.
Affiliation(s)
- Isabella Schneider
- Department of General Psychiatry, Center for Psychosocial Medicine, Heidelberg University, Voßstr. 4, 69115 Heidelberg, Germany
- Sabine C Herpertz
- Department of General Psychiatry, Center for Psychosocial Medicine, Heidelberg University, Voßstr. 4, 69115 Heidelberg, Germany
- Kai Ueltzhöffer
- European Molecular Biology Laboratory, Genome Biology Unit, Meyerhofstr. 1, 69117 Heidelberg, Germany
- Corinne Neukel
- Department of General Psychiatry, Center for Psychosocial Medicine, Heidelberg University, Voßstr. 4, 69115 Heidelberg, Germany
6. Alho J, Gotsopoulos A, Silvanto J. Where in the brain do internally generated and externally presented visual information interact? Brain Res 2023; 1821:148582. PMID: 37717887; DOI: 10.1016/j.brainres.2023.148582.
Abstract
Conscious experiences normally result from the flow of external input into our sensory systems. However, we can also create conscious percepts independently of sensory stimulation. These internally generated percepts are referred to as mental images, and they have many similarities with real visual percepts. Consequently, mental imagery is often referred to as "seeing in the mind's eye". While the neural basis of imagery has been widely studied, the interaction between internal and external sources of visual information has received little interest. Here we examined this question by using fMRI to record brain activity of healthy human volunteers while they were performing visual imagery that was distracted with a concurrent presentation of a visual stimulus. Multivariate pattern analysis (MVPA) was used to identify the brain basis of this interaction. Visual imagery was reflected in several brain areas in ventral temporal, lateral occipitotemporal, and posterior frontal cortices, with a left-hemisphere dominance. The key finding was that imagery content representations in the left lateral occipitotemporal cortex were disrupted when a visual distractor was presented during imagery. Our results thus demonstrate that the representations of internal and external visual information interact in brain areas associated with the encoding of visual objects and shapes.
Affiliation(s)
- Jussi Alho
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, P.O. Box 21, Haartmaninkatu 3, Helsinki FI-00014, Finland; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, P.O. Box 12200, Rakentajanaukio 2, FI-00076 AALTO Espoo, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, P.O. Box 12200, Otakaari 5 I, FI-00076 AALTO Espoo, Finland
- Athanasios Gotsopoulos
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, P.O. Box 12200, Rakentajanaukio 2, FI-00076 AALTO Espoo, Finland
- Juha Silvanto
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, P.O. Box 21, Haartmaninkatu 3, Helsinki FI-00014, Finland; School of Psychology, University of Surrey, Guildford, Surrey GU2 7XH, UK
7. Barbieri R, Töpfer FM, Soch J, Bogler C, Sprekeler H, Haynes JD. Encoding of continuous perceptual choices in human early visual cortex. Front Hum Neurosci 2023; 17:1277539. PMID: 38021249; PMCID: PMC10679739; DOI: 10.3389/fnhum.2023.1277539.
Abstract
Introduction: Research on the neural mechanisms of perceptual decision-making has typically focused on simple categorical choices, say between two alternative motion directions. Studies on such discrete alternatives have often suggested that choices are encoded either in a motor-based or in an abstract, categorical format in regions beyond sensory cortex.
Methods: In this study, we used motion stimuli that could vary anywhere between 0° and 360° to assess how the brain encodes choices for features that span the full sensory continuum. We employed a combination of neuroimaging and encoding models based on Gaussian process regression to assess how either stimuli or choices were encoded in brain responses.
Results: We found that single-voxel tuning patterns could be used to reconstruct the trial-by-trial physical direction of motion as well as the participants' continuous choices. Importantly, these continuous choice signals were primarily observed in early visual areas. The tuning properties in this region generalized between choice encoding and stimulus encoding, even for reports that reflected pure guessing.
Discussion: We found only little information related to the decision outcome in regions beyond visual cortex, such as parietal cortex, possibly because our task did not involve differential motor preparation. This could suggest that decisions for continuous stimuli can take place already in sensory brain regions, potentially using mechanisms similar to the sensory recruitment observed in visual working memory.
Affiliation(s)
- Riccardo Barbieri
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Felix M. Töpfer
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Joram Soch
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- German Center for Neurodegenerative Diseases, Göttingen, Germany
- Carsten Bogler
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- John-Dylan Haynes
- Bernstein Center for Computational Neuroscience and Berlin Center for Advanced Neuroimaging, Department of Neurology, Charité – Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin and Berlin Institute of Health (BIH), Berlin, Germany
- Berlin School of Mind and Brain and Institute of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
8. Pace T, Koenig-Robert R, Pearson J. Different Mechanisms for Supporting Mental Imagery and Perceptual Representations: Modulation Versus Excitation. Psychol Sci 2023; 34:1229-1243. PMID: 37782827; DOI: 10.1177/09567976231198435.
Abstract
Recent research suggests imagery is functionally equivalent to a weak form of visual perception. Here we report evidence across five independent experiments on adults that perception and imagery are supported by fundamentally different mechanisms: Whereas perceptual representations are largely formed via increases in excitatory activity, imagery representations are largely supported by modulating nonimagined content. We developed two behavioral techniques that allowed us to first put the visual system into a state of adaptation and then probe the additivity of perception and imagery. If imagery drives similar excitatory visual activity to perception, pairing imagery with perceptual adapters should increase the state of adaptation. Whereas pairing weak perception with adapters increased measures of adaptation, pairing imagery reversed their effects. Further experiments demonstrated that these nonadditive effects were due to imagery weakening representations of nonimagined content. Together these data provide empirical evidence that the brain uses categorically different mechanisms to represent imagery and perception.
Affiliation(s)
- Thomas Pace
- School of Psychology, University of New South Wales
- Joel Pearson
- School of Psychology, University of New South Wales
9. Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. PMID: 37621984; PMCID: PMC10445666; DOI: 10.1093/nc/niad018.
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia
- Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
10. Seydell-Greenwald A, Wang X, Newport EL, Bi Y, Striem-Amit E. Spoken language processing activates the primary visual cortex. PLoS One 2023; 18:e0289671. PMID: 37566582; PMCID: PMC10420367; DOI: 10.1371/journal.pone.0289671.
Abstract
Primary visual cortex (V1) is generally thought of as a low-level sensory area that primarily processes basic visual features. Although there is evidence for multisensory effects on its activity, these are typically found for the processing of simple sounds and their properties, for example spatially or temporally-congruent simple sounds. However, in congenitally blind individuals, V1 is involved in language processing, with no evidence of major changes in anatomical connectivity that could explain this seemingly drastic functional change. This is at odds with current accounts of neural plasticity, which emphasize the role of connectivity and conserved function in determining a neural tissue's role even after atypical early experiences. To reconcile what appears to be unprecedented functional reorganization with known accounts of plasticity limitations, we tested whether V1's multisensory roles include responses to spoken language in sighted individuals. Using fMRI, we found that V1 in normally sighted individuals was indeed activated by comprehensible spoken sentences as compared to an incomprehensible reversed speech control condition, and more strongly so in the left compared to the right hemisphere. Activation in V1 for language was also significant and comparable for abstract and concrete words, suggesting it was not driven by visual imagery. Last, this activation did not stem from increased attention to the auditory onset of words, nor was it correlated with attentional arousal ratings, making general attention accounts an unlikely explanation. Together these findings suggest that V1 responds to spoken language even in sighted individuals, reflecting the binding of multisensory high-level signals, potentially to predict visual input. This capability might be the basis for the strong V1 language activation observed in people born blind, re-affirming the notion that plasticity is guided by pre-existing connectivity and abilities in the typically developed brain.
Affiliation(s)
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Elissa L. Newport
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ella Striem-Amit
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, United States of America
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
11. Gu L, Li A, Yang R, Yang J, Pang Y, Qu J, Mei L. Category-specific and category-general neural codes of recognition memory in the ventral visual pathway. Cortex 2023; 164:77-89. PMID: 37207411; DOI: 10.1016/j.cortex.2023.04.004.
Abstract
Researchers have identified category-specific brain regions, such as the fusiform face area (FFA) and parahippocampal place area (PPA) in the ventral visual pathway, which respond preferentially to one particular category of visual objects. In addition to their category-specific role in visual object identification and categorization, regions in the ventral visual pathway play critical roles in recognition memory. Nevertheless, it is not clear whether the contributions of those brain regions to recognition memory are category-specific or category-general. To address this question, the present study adopted a subsequent memory paradigm and multivariate pattern analysis (MVPA) to explore category-specific and category-general neural codes of recognition memory in the visual pathway. The results revealed that the right FFA and the bilateral PPA showed category-specific neural patterns supporting recognition memory of faces and scenes, respectively. In contrast, the lateral occipital cortex seemed to carry category-general neural codes of recognition memory. These results provide neuroimaging evidence for category-specific and category-general neural mechanisms of recognition memory in the ventral visual pathway.
Affiliation(s)
- Lala Gu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Aqian Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Rui Yang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Jiayi Yang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Yingdan Pang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Jing Qu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China.
12
Dupont W, Papaxanthis C, Madden-Lombardi C, Lebon F. Imagining and reading actions: Towards similar motor representations. Heliyon 2023; 9:e13426. [PMID: 36816230] [PMCID: PMC9932708] [DOI: 10.1016/j.heliyon.2023.e13426]
Abstract
While action language and motor imagery both engage the motor system, determining whether these two processes indeed share the same motor representations would contribute to better understanding their underlying mechanisms. We conducted two experiments probing the mutual influence of these two processes. In Exp.1, hand-action verbs were presented subliminally, and participants (n = 36) selected the verb they thought they perceived from two alternatives. When congruent actions were imagined prior to this task, accuracy significantly increased, i.e. participants were better able to "see" the subliminal verbs. In Exp.2, participants (n = 19) imagined hand flexion or extension, while corticospinal excitability was measured via transcranial magnetic stimulation. Corticospinal excitability was modulated by action verbs subliminally presented prior to imagery. Specifically, the typical increase observed during imagery was suppressed after presentation of incongruent action verbs. This mutual influence of action language and motor imagery, both at behavioral and neurophysiological levels, suggests overlapping motor representations.
Affiliation(s)
- Dupont W
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Papaxanthis C
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Madden-Lombardi C
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Centre National de la Recherche Scientifique (CNRS), France
- Lebon F
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, F-21000, Dijon, France
- Institut Universitaire de France (IUF), Paris, France
13
Experimental evidence for involvement of monocular channels in mental rotation. Psychon Bull Rev 2022; 30:575-584. [PMID: 36279047] [DOI: 10.3758/s13423-022-02195-w]
Abstract
According to the prevailing view, cognitive processes of mental rotation are carried out by visuospatial perceptual circuits located primarily in high cortical areas. Here, we examined the functional involvement of (mostly subcortical) monocular channels in mental rotation tasks. Images of two rotated objects (0°, 50°, 100°, or 150°; identical or mirrored) were presented either to one eye (monocular) or segregated between the eyes (interocular). The results indicated a causal role for low monocular visual channels in mental rotation: Response times for identical ("same") objects at high angular disparities (100°, 150°) were shorter when both objects were presented to a single eye than when each object was presented to a different eye. We suggest that mental rotation processes rely on cortico-subcortical loops that support visuospatial perception. More generally, the findings highlight the potential contribution of lower-level mechanisms to what are typically considered to be high-level cognitive functions, such as mental representation.
14
Gaziv G, Beliy R, Granot N, Hoogi A, Strappini F, Golan T, Irani M. Self-supervised Natural Image Reconstruction and Large-scale Semantic Classification from Brain Activity. Neuroimage 2022; 254:119121. [PMID: 35342004] [PMCID: PMC9133799] [DOI: 10.1016/j.neuroimage.2022.119121]
Abstract
Reconstructing natural images and decoding their semantic category from fMRI brain recordings is challenging. Acquiring sufficient pairs of images and their corresponding fMRI responses, which span the huge space of natural images, is prohibitive. We present a novel self-supervised approach that goes well beyond the scarce paired data, achieving both: (i) state-of-the-art fMRI-to-image reconstruction, and (ii) first-ever large-scale semantic classification from fMRI responses. By imposing cycle consistency between a pair of deep neural networks (one from image to fMRI, the other from fMRI to image), we train our image reconstruction network on a large number of "unpaired" natural images (images without fMRI recordings) from many novel semantic categories. This allows us to adapt our reconstruction network to a very rich semantic coverage without requiring any explicit semantic supervision. Specifically, we find that combining our self-supervised training with high-level perceptual losses gives rise to new reconstruction and classification capabilities. In particular, this perceptual training enables accurate classification of fMRIs of never-before-seen semantic classes, without requiring any class labels during training. This gives rise to: (i) unprecedented image reconstruction from fMRI of never-before-seen images (evaluated by image metrics and human testing), and (ii) large-scale semantic classification of categories that were never seen during network training. Such large-scale (1000-way) semantic classification from fMRI recordings has never been demonstrated before. Finally, we provide evidence for the biological consistency of our learned model.
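The cycle-consistency idea (image → predicted fMRI → reconstructed image, trained on unpaired images) can be illustrated with toy linear maps in place of the paper's deep networks. Everything below (dimensions, learning rate, training only the decoder) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d_img, d_fmri = 8, 5

# Toy linear "networks": E maps image features to fMRI, D maps back.
E = rng.normal(size=(d_fmri, d_img)) * 0.1
D = rng.normal(size=(d_img, d_fmri)) * 0.1

def cycle_loss(X, E, D):
    """Image -> predicted fMRI -> reconstructed image; penalize mismatch.
    No measured fMRI is needed, so unpaired images can be used."""
    recon = (X @ E.T) @ D.T
    return float(np.mean((recon - X) ** 2))

# Unpaired natural "images" (random feature vectors here).
X = rng.normal(size=(200, d_img))

# Gradient descent on the decoder D under the cycle-consistency objective.
lr, losses = 0.5, []
for _ in range(300):
    Z = X @ E.T                      # predicted fMRI
    R = Z @ D.T                      # reconstructed images
    grad_D = 2 * (R - X).T @ Z / len(X)
    D -= lr * grad_D
    losses.append(cycle_loss(X, E, D))

print(f"cycle loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The decreasing loss shows how the decoder adapts to images it has never seen paired fMRI for, which is the mechanism the paper exploits at scale with deep networks and perceptual losses.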
Affiliation(s)
- Guy Gaziv
- Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel.
- Roman Beliy
- Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel
- Niv Granot
- Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel
- Assaf Hoogi
- Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel
- Tal Golan
- Zuckerman Institute, Columbia University, New York, NY USA
- Michal Irani
- Dept. of Computer Science and Applied Math, Weizmann Institute of Science, Rehovot, Israel.
15
Katayama R, Yoshida W, Ishii S. Confidence modulates the decodability of scene prediction during partially-observable maze exploration in humans. Commun Biol 2022; 5:367. [PMID: 35440615] [PMCID: PMC9018866] [DOI: 10.1038/s42003-022-03314-y]
Abstract
Prediction ability often involves some degree of uncertainty, a key determinant of confidence. Here, we sought to assess whether predictions are decodable in partially-observable environments where one's state is uncertain, and whether this information is sensitive to the confidence produced by such uncertainty. We used functional magnetic resonance imaging-based, partially-observable maze navigation tasks in which subjects predicted upcoming scenes and reported their confidence regarding these predictions. Using a multi-voxel pattern analysis, we successfully decoded both scene predictions and subjective confidence from activities in localized parietal and prefrontal regions. We also assessed subjects' confidence in their beliefs about where they were in the maze. Importantly, prediction decodability varied according to subjective scene confidence in the superior parietal lobule and according to state confidence estimated by the behavioral model in the inferior parietal lobule. These results demonstrate that prediction in uncertain environments depends on the prefrontal-parietal network, within which prediction and confidence interact.
Affiliation(s)
- Risa Katayama
- Graduate School of Informatics, Kyoto University, Kyoto, Kyoto, 606-8501, Japan.
- Wako Yoshida
- Nuffield Department of Clinical Neuroscience, University of Oxford, Oxford, OX3 9DU, UK
- Department of Neural Computation for Decision-making, Advanced Telecommunications Research Institute International, Soraku-gun, Kyoto, 619-0288, Japan
- Shin Ishii
- Graduate School of Informatics, Kyoto University, Kyoto, Kyoto, 606-8501, Japan
- Neural Information Analysis Laboratories, Advanced Telecommunications Research Institute International, Soraku-gun, Kyoto, 619-0288, Japan
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo-ku, Tokyo, 113-0033, Japan
16
Rybář M, Daly I. Neural decoding of semantic concepts: A systematic literature review. J Neural Eng 2022; 19. [PMID: 35344941] [DOI: 10.1088/1741-2552/ac619a]
Abstract
Objective. Semantic concepts are coherent entities within our minds. They underpin our thought processes and are a part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains, and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building upon this basic understanding of the process of semantic neural encoding, neural engineers are beginning to explore tools and methods for semantic decoding: identifying which semantic concepts an individual is focused on at a given moment in time from recordings of their neural activity. In this paper we review the current literature on semantic neural decoding. Approach. We conducted this review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we assess the eligibility of published peer-reviewed reports via a search of PubMed and Google Scholar. We identify a total of 74 studies in which semantic neural decoding is used to attempt to identify individual semantic concepts from neural activity. Results. Our review reveals how modern neuroscientific tools have been developed to allow decoding of individual concepts from a range of neuroimaging modalities. We discuss specific neuroimaging methods, experimental designs, and machine learning pipelines that are employed to aid the decoding of semantic concepts. We quantify the efficacy of semantic decoders by measuring information transfer rates. We also discuss current challenges presented by this research area and present some possible solutions. Finally, we discuss some possible emerging and speculative future directions for this research area. Significance. Semantic decoding is a rapidly growing area of research. However, despite its increasingly widespread popularity and use in neuroscientific research, this is the first literature review to focus on this topic across neuroimaging modalities and to quantify the efficacy of semantic decoders.
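The review quantifies decoder efficacy via information transfer rates. A common formulation, the Wolpaw ITR, can serve as a worked example; the parameter values below (class count, accuracy, trial length) are illustrative, not figures taken from the review.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Information transfer rate in bits/minute (Wolpaw formulation).

    Bits per selection for an N-class decoder with accuracy P, assuming
    errors are spread evenly over the remaining N - 1 classes.
    """
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# e.g. a hypothetical 2-class semantic decoder at 80% accuracy,
# one decision every 10 seconds:
print(round(wolpaw_itr(2, 0.80, 10.0), 2))  # → 1.67 bits/min
```

The formula makes explicit why a decoder's accuracy alone is not comparable across studies: both the number of decodable concepts and the time per decision enter the rate.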
Affiliation(s)
- Milan Rybář
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
- Ian Daly
- University of Essex, School of Computer Science and Electronic Engineering, Wivenhoe Park, Colchester, Colchester, Essex, CO4 3SQ, UNITED KINGDOM OF GREAT BRITAIN AND NORTHERN IRELAND
17
Pham TQ, Nishiyama S, Sadato N, Chikazoe J. Distillation of Regional Activity Reveals Hidden Content of Neural Information in Visual Processing. Front Hum Neurosci 2021; 15:777464. [PMID: 34903962] [PMCID: PMC8664645] [DOI: 10.3389/fnhum.2021.777464]
Abstract
Multivoxel pattern analysis (MVPA) has become a standard tool for decoding mental states from brain activity patterns. Recent studies have demonstrated that MVPA can be applied to decode activity patterns of a certain region from those of the other regions. By applying a similar region-to-region decoding technique, we examined whether the information represented in the visual areas can be explained by those represented in the other visual areas. We first predicted the brain activity patterns of an area on the visual pathway from the others, then subtracted the predicted patterns from their originals. Subsequently, the visual features were derived from these residuals. During the visual perception task, the elimination of the top-down signals enhanced the simple visual features represented in the early visual cortices. By contrast, the elimination of the bottom-up signals enhanced the complex visual features represented in the higher visual cortices. The directions of such modulation effects varied across visual perception/imagery tasks, indicating that the information flow across the visual cortices is dynamically altered, reflecting the contents of visual processing. These results demonstrated that the distillation approach is a useful tool to estimate the hidden content of information conveyed across brain regions.
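The region-to-region "distillation" step, predicting one area's activity patterns from another's and keeping the residuals, can be sketched as follows. This is an illustrative reconstruction on synthetic data; ridge regression is an assumption, and the paper's exact predictive model may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_trials = 100

# Synthetic patterns: the "target" region shares part of its signal with
# the "source" region and also carries a private component.
source = rng.normal(size=(n_trials, 40))
shared = source[:, :10] @ rng.normal(size=(10, 30)) * 0.3
private = rng.normal(size=(n_trials, 30))
target = shared + private

# Predict target patterns from source and subtract the prediction: the
# residual is the part of the target not conveyed from the source region.
model = Ridge(alpha=1.0).fit(source, target)
residual = target - model.predict(source)

var_before, var_after = target.var(), residual.var()
print(f"target variance {var_before:.2f} -> residual variance {var_after:.2f}")
```

Feature decoding would then be run on `residual` rather than `target`, which is how the paper isolates, for example, what the higher visual cortices add beyond the bottom-up input.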
Affiliation(s)
- Trung Quang Pham
- Section of Brain Function Information, Supportive Center for Brain Research, National Institute for Physiological Sciences, Okazaki, Japan
- Shota Nishiyama
- Section of Brain Function Information, Supportive Center for Brain Research, National Institute for Physiological Sciences, Okazaki, Japan; Aichi Institute of Technology Graduate School of Business Administration and Computer Science, Toyota, Japan; Araya Inc., Tokyo, Japan
- Norihiro Sadato
- Section of Brain Function Information, Supportive Center for Brain Research, National Institute for Physiological Sciences, Okazaki, Japan; Division of Cerebral Integration, National Institute for Physiological Sciences, Okazaki, Japan
- Junichi Chikazoe
- Section of Brain Function Information, Supportive Center for Brain Research, National Institute for Physiological Sciences, Okazaki, Japan; Araya Inc., Tokyo, Japan
18
The neural coding of face and body orientation in occipitotemporal cortex. Neuroimage 2021; 246:118783. [PMID: 34879251] [DOI: 10.1016/j.neuroimage.2021.118783]
Abstract
Face and body orientation convey important information for us to understand other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations, while attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
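The cross-decoding logic used to test for a stimulus-independent orientation code (train a classifier on face patterns, test it on body patterns) can be sketched on synthetic data; dimensions, noise levels, and the classifier are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_per, n_vox = 60, 40
orientations = np.repeat([0, 1, 2], n_per)   # e.g. left / front / right

# Shared orientation templates: the same code should appear for faces and
# bodies if the region carries a stimulus-independent representation.
templates = rng.normal(size=(3, n_vox))

def simulate(offset):
    """Noisy patterns built on the shared templates, plus a stimulus-
    specific mean shift (faces vs bodies differ; the orientation code does not)."""
    noise = rng.normal(size=(len(orientations), n_vox))
    return templates[orientations] + offset + noise

face_patterns = simulate(0.0)
body_patterns = simulate(0.5)

# Cross-decoding: train on faces, test on bodies. Generalization above
# chance (1/3) indicates a stimulus-independent orientation code.
clf = LogisticRegression(max_iter=2000).fit(face_patterns, orientations)
acc = clf.score(body_patterns, orientations)
print(f"cross-stimulus decoding accuracy: {acc:.2f} (chance = 0.33)")
```

By contrast, a region like the FFA in this study would decode face orientation but fail this generalization test for bodies.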
19
Dijkstra N, van Gaal S, Geerligs L, Bosch SE, van Gerven MAJ. No Evidence for Neural Overlap between Unconsciously Processed and Imagined Stimuli. eNeuro 2021; 8:ENEURO.0228-21.2021. [PMID: 34593516] [PMCID: PMC8577044] [DOI: 10.1523/eneuro.0228-21.2021]
Abstract
Visual representations can be generated via feedforward or feedback processes. The extent to which these processes result in overlapping representations remains unclear. Previous work has shown that imagined stimuli elicit similar representations as perceived stimuli throughout the visual cortex. However, while representations during imagery are indeed only caused by feedback processing, neural processing during perception is an interplay of both feedforward and feedback processing. This means that any representational overlap could be because of overlap in feedback processes. In the current study, we aimed to investigate this issue by characterizing the overlap between feedforward- and feedback-initiated category representations during imagined stimuli, conscious perception, and unconscious processing using fMRI in humans of either sex. While all three conditions elicited stimulus representations in left lateral occipital cortex (LOC), significant similarities were observed only between imagery and conscious perception in this area. Furthermore, connectivity analyses revealed stronger connectivity between frontal areas and left LOC during conscious perception and in imagery compared with unconscious processing. Together, these findings can be explained by the idea that long-range feedback modifies visual representations, thereby reducing representational overlap between purely feedforward- and feedback-initiated stimulus representations measured by fMRI. Neural representations influenced by feedback, either stimulus driven (perception) or purely internally driven (imagery), are, however, relatively similar.
Affiliation(s)
- Nadine Dijkstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Simon van Gaal
- Department of Psychology, Brain & Cognition, University of Amsterdam, 1000 GG, Amsterdam, The Netherlands
- Linda Geerligs
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Sander E Bosch
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Marcel A J van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
20
Ragni F, Lingnau A, Turella L. Decoding category and familiarity information during visual imagery. Neuroimage 2021; 241:118428. [PMID: 34311066] [DOI: 10.1016/j.neuroimage.2021.118428]
Abstract
Visual imagery relies on a widespread network of brain regions, partly engaged during the perception of external stimuli. Beyond the recruitment of category-selective areas (FFA, PPA), perception of familiar faces and places has been reported to engage brain areas associated with semantic information, comprising the precuneus, temporo-parietal junction (TPJ), medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Here we used multivariate pattern analyses (MVPA) to examine to what degree areas of the visual imagery network, category-selective areas and semantic areas contain information regarding the category and familiarity of imagined stimuli. Participants were instructed via auditory cues to imagine personally familiar and unfamiliar stimuli (i.e. faces and places). Using region-of-interest (ROI)-based MVPA, we were able to distinguish between imagined faces and places within nodes of the visual imagery network (V1, SPL, aIPS), within category-selective inferotemporal regions (FFA, PPA) and across all brain regions of the extended semantic network (i.e. precuneus, mPFC, IFG and TPJ). Moreover, we were able to decode familiarity of imagined stimuli in the SPL and aIPS, and in some regions of the extended semantic network (in particular, right precuneus and right TPJ), but not in V1. Our results suggest that posterior visual areas - including V1 - host categorical representations about imagined stimuli, and that stimulus familiarity might be an additional aspect that is shared between perception and visual imagery.
Affiliation(s)
- Flavio Ragni
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Luca Turella
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Italy.
21
Rybář M, Poli R, Daly I. Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. J Neural Eng 2021; 18:046035. [PMID: 33780916] [DOI: 10.1088/1741-2552/abf2e5]
Abstract
Objective. Semantic decoding refers to the identification of semantic concepts from recordings of an individual's brain activity. It has been previously reported in functional magnetic resonance imaging and electroencephalography. We investigate whether semantic decoding is possible with functional near-infrared spectroscopy (fNIRS). Specifically, we attempt to differentiate between the semantic categories of animals and tools. We also identify suitable mental tasks for potential brain-computer interface (BCI) applications. Approach. We explore the feasibility of a silent naming task, for the first time in fNIRS, and propose three novel intuitive mental tasks based on imagining concepts using three sensory modalities: visual, auditory, and tactile. Participants are asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object. A general linear model is used to extract hemodynamic responses that are then classified via logistic regression in a univariate and multivariate manner. Main results. We successfully classify all tasks with mean accuracies of 76.2% for the silent naming task, 80.9% for the visual imagery task, 72.8% for the auditory imagery task, and 70.4% for the tactile imagery task. Furthermore, we show that consistent neural representations of semantic categories exist by applying classifiers across tasks. Significance. These findings show that semantic decoding is possible in fNIRS. The study is the first step toward the use of semantic decoding for intuitive BCI applications for communication.
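The GLM-then-classify pipeline described here (extract per-trial hemodynamic response amplitudes, then classify them with logistic regression) can be sketched on a simulated fNIRS channel. The HRF shape, trial timing, and effect size below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.stats import gamma
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Canonical double-gamma HRF (illustrative parameters), 1 Hz sampling.
t = np.arange(0, 30, 1.0)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6

n_scans, n_trials = 400, 20
onsets = np.arange(10, 390, 19)[:n_trials]           # one trial every 19 s
labels = rng.permutation(np.repeat([0, 1], n_trials // 2))

# Simulated single-channel fNIRS time course: category-1 concepts evoke
# a larger hemodynamic response than category-0 concepts.
signal = rng.normal(scale=0.3, size=n_scans)
for onset, lab in zip(onsets, labels):
    boxcar = np.zeros(n_scans)
    boxcar[onset:onset + 10] = 1.0 + 0.8 * lab       # 10 s imagery block
    signal += np.convolve(boxcar, hrf)[:n_scans]

def trial_beta(onset):
    """GLM for one trial: regress the time course on that trial's
    HRF-convolved regressor (plus an intercept) and keep the beta."""
    boxcar = np.zeros(n_scans)
    boxcar[onset:onset + 10] = 1.0
    reg = np.convolve(boxcar, hrf)[:n_scans]
    X = np.column_stack([reg, np.ones(n_scans)])
    b, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return b[0]

betas = np.array([trial_beta(o) for o in onsets]).reshape(-1, 1)
clf = LogisticRegression().fit(betas, labels)
print(f"training accuracy: {clf.score(betas, labels):.2f}")
```

In practice one channel's betas would be one feature among many (univariate vs multivariate classification in the paper's terms), and accuracy would be estimated with cross-validation rather than on the training set.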
Affiliation(s)
- Milan Rybář
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Riccardo Poli
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Ian Daly
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
22
Hahamy A, Wilf M, Rosin B, Behrmann M, Malach R. How do the blind 'see'? The role of spontaneous brain activity in self-generated perception. Brain 2021; 144:340-353. [PMID: 33367630] [PMCID: PMC7880672] [DOI: 10.1093/brain/awaa384]
Abstract
Spontaneous activity of the human brain has been well documented, but little is known about the functional role of this ubiquitous neural phenomenon. It has previously been hypothesized that spontaneous brain activity underlies unprompted (internally generated) behaviour. We tested whether spontaneous brain activity might underlie internally-generated vision by studying the cortical visual system of five blind/visually-impaired individuals who experience vivid visual hallucinations (Charles Bonnet syndrome). Neural populations in the visual system of these individuals are deprived of external input, which may lead to their hyper-sensitization to spontaneous activity fluctuations. To test whether these spontaneous fluctuations can subserve visual hallucinations, the functional MRI brain activity of participants with Charles Bonnet syndrome obtained while they reported their hallucinations (spontaneous internally-generated vision) was compared to the: (i) brain activity evoked by veridical vision (externally-triggered vision) in sighted controls who were presented with a visual simulation of the hallucinatory streams; and (ii) brain activity of non-hallucinating blind controls during visual imagery (cued internally-generated vision). All conditions showed activity spanning large portions of the visual system. However, only the hallucination condition in the Charles Bonnet syndrome participants demonstrated unique temporal dynamics, characterized by a slow build-up of neural activity prior to the reported onset of hallucinations. This build-up was most pronounced in early visual cortex and then decayed along the visual hierarchy. These results suggest that, in the absence of external visual input, a build-up of spontaneous fluctuations in early visual cortex may activate the visual hierarchy, thereby triggering the experience of vision.
Affiliation(s)
- Avital Hahamy
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Department of Neurobiology, Weizmann Institute of Science, Rehovot, 7610001, Israel
- Meytal Wilf
- Department of Clinical Neuroscience, Lausanne University Hospital (CHUV), Switzerland
- Boris Rosin
- Department of Ophthalmology, Hadassah-Hebrew University Medical Center, Jerusalem, 91120, Israel; Department of Ophthalmology, University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA 15213, USA
- Marlene Behrmann
- Department of Psychology and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Rafael Malach
- Department of Neurobiology, Weizmann Institute of Science, Rehovot, 7610001, Israel
23
Evidence for a visual bias when recalling complex narratives. PLoS One 2021; 16:e0249950. [PMID: 33852633] [PMCID: PMC8046210] [DOI: 10.1371/journal.pone.0249950]
Abstract
Although it is understood that episodic memories of everyday events involve encoding a wide array of perceptual and non-perceptual information, it is unclear how these distinct types of information are recalled. To address this knowledge gap, we examine how perceptual (visual versus auditory) and non-perceptual details described within a narrative, a proxy for everyday event memories, were retrieved. Based on previous work indicating a bias for visual content, we hypothesized that participants would be most accurate at recalling visually described details and would tend to falsely recall non-visual details with visual descriptors. In Study 1, participants watched videos of a protagonist telling narratives of everyday events under three conditions: with visual, auditory, or audiovisual details. All narratives contained the same non-perceptual content. Participants' free recall of these narratives under each condition were scored for the type of details recalled (perceptual, non-perceptual) and whether the detail was recalled with gist or verbatim memory. We found that participants were more accurate at gist and verbatim recall for visual perceptual details. This visual bias was also evident when we examined the errors made during recall such that participants tended to incorrectly recall details with visual information, but not with auditory information. Study 2 tested for this pattern of results when the narratives were presented in auditory only format. Results conceptually replicated Study 1 in that there was still a persistent visual bias in what was recollected from the complex narratives. Together, these findings indicate a bias for recruiting visualizable content to construct complex multi-detail memories.
|
24
|
Boccia M, Sulpizio V, Bencivenga F, Guariglia C, Galati G. Neural representations underlying mental imagery as unveiled by representation similarity analysis. Brain Struct Funct 2021; 226:1511-1531. [PMID: 33821379 PMCID: PMC8096739 DOI: 10.1007/s00429-021-02266-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 03/23/2021] [Indexed: 11/30/2022]
Abstract
It is commonly acknowledged that visual imagery and perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, the way in which our brain processes and organizes previously acquired knowledge to allow the generation of mental images is still a matter of debate. Here, we performed a representation similarity analysis of three previous fMRI experiments conducted in our laboratory to characterize the neural representations underlying imagery and perception of objects, buildings and faces, and to disclose possible dissimilarities in the neural structure of such representations. To this aim, we built representational dissimilarity matrices (RDMs) by computing multivariate distances between the activity patterns associated with each pair of stimuli in the content-dependent areas of the HVC and HC. We found that spatial information is widely coded in the HVC during perception (i.e. RSC, PPA and OPA) and imagery (OPA and PPA). Also, visual information seems to be coded in both preferred and non-preferred regions of the HVC, supporting a distributed view of encoding. Overall, the present results shed light upon the spatial coding of imagined and perceived exemplars in the HVC.
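The abstract's core analysis step, building representational dissimilarity matrices (RDMs) from pairwise multivariate distances between activity patterns, can be sketched as follows. This is an illustrative sketch on toy data, assuming correlation distance (1 minus Pearson r) as the dissimilarity measure; the function name and the data are hypothetical, not the authors' actual pipeline.

```python
import numpy as np

def representational_dissimilarity_matrix(patterns):
    """Build an RDM from an (n_stimuli, n_voxels) array of activity
    patterns, using correlation distance (1 - Pearson r) between each
    pair of stimuli. The result is symmetric with a zero diagonal."""
    n = patterns.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.corrcoef(patterns[i], patterns[j])[0, 1]
            rdm[i, j] = rdm[j, i] = 1.0 - r
    return rdm

# Toy data: three "stimuli" measured over five voxels.
patterns = np.array([
    [1.0, 2.0, 3.0, 4.0, 5.0],   # stimulus A
    [1.1, 2.1, 2.9, 4.2, 4.8],   # stimulus B, similar to A
    [5.0, 1.0, 4.0, 2.0, 3.0],   # stimulus C, dissimilar to both
])
rdm = representational_dissimilarity_matrix(patterns)
```

Comparing two such RDMs (for example, one built from imagery runs and one from perception runs) with a rank correlation is then what reveals whether two conditions share representational structure.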
Affiliation(s)
- Maddalena Boccia
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Valentina Sulpizio
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Federica Bencivenga
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy
- Cecilia Guariglia
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Gaspare Galati
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
|
25
|
Barhoun P, Fuelscher I, Do M, He JL, Bekkali S, Cerins A, Youssef GJ, Williams J, Enticott PG, Hyde C. Mental rotation performance in young adults with and without developmental coordination disorder. Hum Mov Sci 2021; 77:102787. [PMID: 33798929 DOI: 10.1016/j.humov.2021.102787] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 03/07/2021] [Accepted: 03/09/2021] [Indexed: 10/21/2022]
Abstract
While there have been consistent behavioural reports of atypical hand rotation task (HRT) performance in adults with developmental coordination disorder (DCD), this study aimed to clarify whether this deficit could be attributed to specific difficulties in motor imagery (MI), as opposed to broad deficits in general mental rotation. Participants were 57 young adults aged 18-30 years with (n = 22) and without DCD (n = 35). Participants were compared on the HRT, a measure of MI, and the letter number rotation task (LNRT), a common visual imagery task. Only participants whose behavioural performance on the HRT suggested use of a MI strategy were included in group comparisons. Young adults with DCD were significantly less efficient compared to controls when completing the HRT yet showed comparable performance on the LNRT relative to adults with typical motor ability. Our data are consistent with the view that atypical HRT performance in adults with DCD is likely to be attributed to specific difficulties engaging in MI, as opposed to deficits in general mental rotation. Based on the theory that MI provides insight into the integrity of internal action representations, these findings offer further support for the internal modelling deficit hypothesis of DCD.
Affiliation(s)
- Pamela Barhoun
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- Ian Fuelscher
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- Michael Do
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- Jason L He
- Department of Forensic and Neurodevelopmental Sciences, Sackler Institute for Translational Neurodevelopment, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, United Kingdom
- Soukayna Bekkali
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- Andris Cerins
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- George J Youssef
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia; Murdoch Children's Research Institute, Centre for Adolescent Health, Royal Children's Hospital, Melbourne, Australia
- Jacqueline Williams
- Institute for Health and Sport, College of Sport and Exercise Science, Victoria University, Melbourne, Australia
- Peter G Enticott
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
- Christian Hyde
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Australia
|
26
|
Koenig-Robert R, Pearson J. Why do imagery and perception look and feel so different? Philos Trans R Soc Lond B Biol Sci 2021; 376:20190703. [PMID: 33308061 PMCID: PMC7741076 DOI: 10.1098/rstb.2019.0703] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/12/2020] [Indexed: 12/16/2022] Open
Abstract
Despite the past few decades of research providing convincing evidence of the similarities in function and neural mechanisms between imagery and perception, for most of us the experiences of the two are undeniably different. Why? Here, we review and discuss the differences between imagery and perception and the possible underlying causes of these differences, from function to neural mechanisms. Specifically, we discuss the directional flow of information (top-down versus bottom-up), the differences in targeted cortical layers in primary visual cortex, and possibly different neural mechanisms of modulation versus excitation. For the first time in history, neuroscience is beginning to shed light on this long-held mystery of why imagery and perception look and feel so different. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Joel Pearson
- School of Psychology, The University of New South Wales, Sydney, Australia
|
27
|
Gu J, Liu B, Yan W, Miao Q, Wei J. Investigating the Impact of the Missing Significant Objects in Scene Recognition Using Multivariate Pattern Analysis. Front Neurorobot 2021; 14:597471. [PMID: 33390924 PMCID: PMC7773817 DOI: 10.3389/fnbot.2020.597471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Accepted: 11/30/2020] [Indexed: 11/13/2022] Open
Abstract
Significant objects in a scene can make a great contribution to scene recognition. Besides the three scene-selective regions, the parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA), some neuroimaging studies have shown that the lateral occipital complex (LOC) is also engaged in scene recognition processing. In this study, multivariate pattern analysis was adopted to explore the object-scene association in scene recognition when different numbers of significant objects were masked. Scene classification succeeded only for intact scenes in the ROIs. In addition, the average signal intensity in the LOC [including the lateral occipital cortex (LO) and the posterior fusiform area (pF)] decreased when objects were masked, but no such decrease was observed in the scene-selective regions. These results suggest that the LOC is sensitive to the loss of significant objects and contributes to scene recognition mainly through object-scene semantic association. The scene-selective areas, in contrast, appear to respond to changes in a scene's entire attributes, such as spatial information, during scene recognition processing. These findings further enrich our knowledge of how significant objects influence activation patterns during scene recognition.
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin University, Tianjin, China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
- Weiran Yan
- College of Intelligence and Computing, Tianjin University, Tianjin, China
- Qiaomu Miao
- College of Intelligence and Computing, Tianjin University, Tianjin, China
- Jianguo Wei
- College of Intelligence and Computing, Tianjin University, Tianjin, China
|
28
|
Lee SH, Kravitz DJ, Baker CI. Differential Representations of Perceived and Retrieved Visual Information in Hippocampus and Cortex. Cereb Cortex 2020; 29:4452-4461. [PMID: 30590463 DOI: 10.1093/cercor/bhy325] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2018] [Revised: 11/22/2018] [Accepted: 11/28/2018] [Indexed: 12/12/2022] Open
Abstract
Memory retrieval is thought to depend on interactions between hippocampus and cortex, but the nature of representation in these regions and their relationship remains unclear. Here, we performed an ultra-high field fMRI (7T) experiment comprising perception, learning and retrieval sessions. We observed a fundamental difference between representations in hippocampus and high-level visual cortex during perception and retrieval. First, while object-selective posterior fusiform cortex showed consistent responses that allowed us to decode object identity across both perception and retrieval one day after learning, object decoding in hippocampus was much stronger during retrieval than perception. Second, in visual cortex but not hippocampus, there was consistency in response patterns between perception and retrieval, suggesting that substantial neural populations are shared between perception and retrieval. Finally, the decoding in hippocampus during retrieval was not observed when retrieval was tested on the same day as learning, suggesting that the retrieval process itself is not sufficient to elicit decodable object representations. Collectively, these findings suggest that while cortical representations are stable between perception and retrieval, hippocampal representations are much stronger during retrieval, implying some form of reorganization of the representations between perception and retrieval.
Affiliation(s)
- Sue-Hyun Lee
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea; Program of Brain and Cognitive Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Dwight J Kravitz
- Department of Psychology, The George Washington University, Washington, DC, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
|
29
|
Lupyan G, Abdel Rahman R, Boroditsky L, Clark A. Effects of Language on Visual Perception. Trends Cogn Sci 2020; 24:930-944. [PMID: 33012687 DOI: 10.1016/j.tics.2020.08.005] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 08/22/2020] [Accepted: 08/25/2020] [Indexed: 11/24/2022]
Abstract
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA
- Andy Clark
- University of Sussex, Brighton, UK; Macquarie University, Sydney, Australia
|
30
|
Koenig-Robert R, Pearson J. Decoding Nonconscious Thought Representations during Successful Thought Suppression. J Cogn Neurosci 2020; 32:2272-2284. [PMID: 32762524 DOI: 10.1162/jocn_a_01617] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Controlling our thoughts is central to mental well-being, and its failure is at the crux of a number of mental disorders. Paradoxically, behavioral evidence shows that thought suppression often fails. Despite the broad importance of understanding the mechanisms of thought control, little is known about the fate of neural representations of suppressed thoughts. Using fMRI, we investigated the brain areas involved in controlling visual thoughts and tracked suppressed thought representations using multivoxel pattern analysis. Participants were asked to either visualize a vegetable/fruit or suppress any visual thoughts about those objects. Surprisingly, the content (object identity) of successfully suppressed thoughts was still decodable in visual areas with algorithms trained on imagery. This suggests that visual representations of suppressed thoughts are still present despite reports that they are not. Thought generation was associated with the left hemisphere, and thought suppression was associated with right hemisphere engagement. Furthermore, general linear model analyses showed that subjective success in thought suppression was correlated with engagement of executive areas, whereas thought-suppression failure was associated with engagement of visual and memory-related areas. These results suggest that the content of suppressed thoughts exists hidden from awareness, seemingly without an individual's knowledge, providing a compelling reason why thought suppression is so ineffective. These data inform models of unconscious thought production and could be used to develop new treatment approaches to disorders involving maladaptive thoughts.
|
31
|
Xie S, Kaiser D, Cichy RM. Visual Imagery and Perception Share Neural Representations in the Alpha Frequency Band. Curr Biol 2020; 30:2621-2627.e5. [PMID: 32531274 PMCID: PMC7342016 DOI: 10.1016/j.cub.2020.04.074] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Revised: 04/06/2020] [Accepted: 04/27/2020] [Indexed: 11/21/2022]
Abstract
To behave adaptively with sufficient flexibility, biological organisms must cognize beyond immediate reaction to a physically present stimulus. For this, humans use visual mental imagery [1, 2], the ability to conjure up a vivid internal experience from memory that stands in for the percept of the stimulus. Visually imagined contents subjectively mimic perceived contents, suggesting that imagery and perception share common neural mechanisms. Using multivariate pattern analysis on human electroencephalography (EEG) data, we compared the oscillatory time courses of mental imagery and perception of objects. We found that representations shared between imagery and perception emerged specifically in the alpha frequency band. These representations were present in posterior, but not anterior, electrodes, suggesting an origin in parieto-occipital cortex. Comparison of the shared representations to computational models using representational similarity analysis revealed a relationship to later layers of deep neural networks trained on object representations, but not auditory or semantic models, suggesting representations of complex visual features as the basis of commonality. Together, our results identify and characterize alpha oscillations as a cortical signature of representations shared between visual mental imagery and perception. Highlights: perception and imagery share neural representations in the alpha frequency band; shared representations stem from parieto-occipital sources; modeling suggests that the contents of shared representations are complex visual features.
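As a small illustration of the band-specific analysis the abstract refers to, power in the alpha band (roughly 8-12 Hz) can be estimated from a signal's FFT. This is a sketch on synthetic signals; the function name, sampling rate, and band edges are assumptions for illustration, not the authors' EEG pipeline.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Total spectral power of `signal` within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum()

fs = 100.0                                # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)               # one second of samples
alpha_like = np.sin(2 * np.pi * 10 * t)   # 10 Hz: inside the alpha band
theta_like = np.sin(2 * np.pi * 4 * t)    # 4 Hz: outside the alpha band
```

An alpha-band decoding analysis would compute such band-limited features per electrode and time window before classification; the 10 Hz signal above concentrates its power in the 8-12 Hz band, while the 4 Hz signal does not.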
Affiliation(s)
- Siying Xie
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany
- Daniel Kaiser
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin 10099, Germany; Bernstein Centre for Computational Neuroscience Berlin, Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin 10099, Germany
|
32
|
Generative Feedback Explains Distinct Brain Activity Codes for Seen and Mental Images. Curr Biol 2020; 30:2211-2224.e6. [DOI: 10.1016/j.cub.2020.04.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2019] [Revised: 02/03/2020] [Accepted: 04/06/2020] [Indexed: 11/21/2022]
|
33
|
Ruiz MJ, Dojat M, Hupé JM. Multivariate pattern analysis of fMRI data for imaginary and real colours in grapheme-colour synaesthesia. Eur J Neurosci 2020; 52:3434-3456. [PMID: 32384170 DOI: 10.1111/ejn.14774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 04/22/2020] [Accepted: 05/03/2020] [Indexed: 11/29/2022]
Abstract
Grapheme-colour synaesthesia is a subjective phenomenon related to perception and imagination, in which some people involuntarily but systematically associate specific, idiosyncratic colours with achromatic letters or digits. Its investigation is relevant for unravelling the neural correlates of colour perception in isolation from low-level neural processing of spectral components, as well as the neural correlates of imagination, since synaesthesia allows imaginary colour experiences to be triggered reliably. However, functional MRI studies using univariate analyses have failed to provide univocal evidence of the activation of the "colour network" by synaesthesia. Applying multivariate (multivoxel) pattern analysis (MVPA) to 20 synaesthetes and 20 control participants, we tested whether the neural processing of real colours (concentric rings) and synaesthetic colours (black graphemes) shared patterns of activation. Region of interest analyses in retinotopically and anatomically defined visual areas revealed neither evidence of shared circuits for real and synaesthetic colour processing, nor processing differences between synaesthetes and controls. We also found no correlation with individual experiences, characterised by measuring the strength of synaesthetic associations. The whole-brain searchlight analysis led to similar results. We conclude that revealing the neural coding of the synaesthetic experience of colours is a hard task that requires improvements to our current methodology, for example involving more individuals and achieving a higher MR signal-to-noise ratio and spatial resolution. So far, we have not found any evidence of the involvement of the cortical colour network in the subjective experience of synaesthetic colours.
Affiliation(s)
- Mathieu J Ruiz
- Centre de Recherche Cerveau et Cognition, Université de Toulouse Paul Sabatier & CNRS, Toulouse, France; Grenoble Institut des Neurosciences, Université Grenoble Alpes, INSERM & CHU Grenoble Alpes, Grenoble, France
- Michel Dojat
- Grenoble Institut des Neurosciences, Université Grenoble Alpes, INSERM & CHU Grenoble Alpes, Grenoble, France
- Jean-Michel Hupé
- Centre de Recherche Cerveau et Cognition, Université de Toulouse Paul Sabatier & CNRS, Toulouse, France
|
34
|
Bone MB, Ahmad F, Buchsbaum BR. Feature-specific neural reactivation during episodic memory. Nat Commun 2020; 11:1945. [PMID: 32327642 PMCID: PMC7181630 DOI: 10.1038/s41467-020-15763-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Accepted: 03/12/2020] [Indexed: 12/04/2022] Open
Abstract
We present a multi-voxel analytical approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations from a neural network to decode neural reactivation in fMRI data collected while participants performed an episodic visual recall task. We show that neural reactivation associated with low-level (e.g. edges), high-level (e.g. facial features), and semantic (e.g. “terrier”) features occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the contributions of low- and high-level features to the vividness of visual memories and challenge a strict interpretation of the posterior-to-anterior visual hierarchy. Memory recollection involves reactivation of neural activity that occurred during the recalled experience. Here, the authors show that neural reactivation can be decomposed into visual-semantic features, is widely synchronized throughout the brain, and predicts memory vividness and accuracy.
Affiliation(s)
- Michael B Bone
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
- Fahad Ahmad
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada
- Bradley R Buchsbaum
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
|
35
|
Ragni F, Tucciarelli R, Andersson P, Lingnau A. Decoding stimulus identity in occipital, parietal and inferotemporal cortices during visual mental imagery. Cortex 2020; 127:371-387. [PMID: 32289581 DOI: 10.1016/j.cortex.2020.02.020] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Revised: 01/29/2020] [Accepted: 02/14/2020] [Indexed: 11/17/2022]
Abstract
In the absence of input from the external world, humans are still able to generate vivid mental images. This cognitive process, known as visual mental imagery, involves a network of prefrontal, parietal, inferotemporal, and occipital regions. Using multivariate pattern analysis (MVPA), previous studies were able to distinguish between the different orientations of imagined gratings, but not between more complex imagined stimuli, such as common objects, in early visual cortex (V1). Here we asked whether letters, simple shapes, and objects can be decoded in early visual areas during visual mental imagery. In a delayed spatial judgment task, we asked participants to observe or imagine stimuli. To examine whether it is possible to discriminate between neural patterns during perception and visual mental imagery, we performed ROI-based and whole-brain searchlight-based MVPA. We were able to decode imagined stimuli in early visual (V1, V2), parietal (SPL, IPL, aIPS), inferotemporal (LOC) and prefrontal (PMd) areas. In a subset of these areas (i.e., V1, V2, LOC, SPL, IPL and aIPS), we also obtained significant cross-decoding across visual imagery and perception. Moreover, we observed a linear relationship between behavioral accuracy and the amplitude of the BOLD signal in parietal and inferotemporal cortices, but not in early visual cortex, in line with the view that these areas contribute to the ability to perform visual imagery. Together, our results suggest that in the absence of bottom-up visual inputs, patterns of functional activation in early visual cortex allow distinguishing between different imagined stimulus exemplars, most likely mediated by signals from parietal and inferotemporal areas.
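Cross-decoding of the kind described above trains a classifier on patterns from one condition (say, perception) and tests it on patterns from the other (imagery); above-chance transfer implies shared pattern structure. A minimal sketch on toy patterns, using a nearest-centroid rule as a deliberately simple stand-in for the MVPA classifiers such studies typically use (all names and data here are hypothetical):

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Label each test pattern with the class whose training-set
    centroid is closest in Euclidean distance."""
    labels = sorted(set(train_y))
    centroids = np.array([train_X[np.array(train_y) == c].mean(axis=0)
                          for c in labels])
    preds = []
    for x in test_X:
        d = np.linalg.norm(centroids - x, axis=1)
        preds.append(labels[int(np.argmin(d))])
    return preds

# Train on "perception" patterns, test on "imagery" patterns
# (two stimuli, two voxels; imagery patterns are noisier versions).
perception = np.array([[1.0, 0.0], [1.1, 0.1], [0.0, 1.0], [0.1, 1.1]])
stimulus_labels = [0, 0, 1, 1]
imagery = np.array([[0.9, 0.2], [0.2, 0.9]])
preds = nearest_centroid_decode(perception, stimulus_labels, imagery)
```

In a real ROI-based analysis, `train_X`/`test_X` would be voxel patterns from the ROI and decoding accuracy would be assessed against chance with cross-validation and permutation testing.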
Affiliation(s)
- Flavio Ragni
- Center for Mind/Brain Science (CIMeC), University of Trento, Rovereto, TN, Italy
- Raffaele Tucciarelli
- Center for Mind/Brain Science (CIMeC), University of Trento, Rovereto, TN, Italy; Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Patrik Andersson
- Center for Mind/Brain Science (CIMeC), University of Trento, Rovereto, TN, Italy; Stockholm University Brain Imaging Centre (SUBIC), Stockholm, Sweden
- Angelika Lingnau
- Center for Mind/Brain Science (CIMeC), University of Trento, Rovereto, TN, Italy; Department of Psychology, Royal Holloway University of London, Egham, London, UK; Institute of Psychology, University of Regensburg, Regensburg, Germany
|
36
|
Mattioni S, Rezk M, Battal C, Bottini R, Cuculiza Mendoza KE, Oosterhof NN, Collignon O. Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind. eLife 2020; 9:50732. [PMID: 32108572 PMCID: PMC7108866 DOI: 10.7554/elife.50732] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Accepted: 02/14/2020] [Indexed: 01/08/2023] Open
Abstract
Is vision necessary for the development of the categorical organization of the Ventral Occipito-Temporal Cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. We observed that VOTC reliably encodes sound categories in sighted and blind people using a representational structure and connectivity partially similar to the one found in vision. Sound categories were, however, more reliably encoded in the blind than the sighted group, using a representational format closer to the one found in vision. Crucially, VOTC in blind represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
The world is full of rich and dynamic visual information. To avoid information overload, the human brain groups inputs into categories such as faces, houses, or tools. A part of the brain called the ventral occipito-temporal cortex (VOTC) helps categorize visual information. Specific parts of the VOTC prefer different types of visual input; for example, one part may tend to respond more to faces, whilst another may prefer houses. However, it is not clear how the VOTC characterizes information. One idea is that similarities between certain types of visual information may drive how information is organized in the VOTC. For example, looking at faces requires using central vision, while looking at houses requires using peripheral vision. Furthermore, all faces have a roundish shape while houses tend to have a more rectangular shape.
Another possibility, however, is that the categorization of different inputs cannot be explained by vision alone and is also driven by higher-level aspects of each category. For instance, how humans use or interact with something may also influence how an input is categorized. If categories are established depending (at least partially) on these higher-level aspects, rather than purely through visual likeness, it is likely that the VOTC would respond similarly to both sounds and images representing these categories. Now, Mattioni et al. have tested how individuals with and without sight respond to eight different categories of information to find out whether or not categorization is driven purely by visual likeness. Each category was presented to participants using sounds while measuring their brain activity. In addition, a group of participants who could see were also presented with the categories visually. Mattioni et al. then compared what happened in the VOTC of the three groups (sighted people presented with sounds, blind people presented with sounds, and sighted people presented with images) in response to each category. The experiment revealed that the VOTC organizes both auditory and visual information in a similar way. However, there were more similarities between the way blind people categorized auditory information and how sighted people categorized visual information than between how sighted people categorized each type of input. Mattioni et al. also found that the region of the VOTC that responds to inanimate objects massively overlapped across the three groups, whereas the part of the VOTC that responds to living things was more variable. These findings suggest that the way that the VOTC organizes information is, at least partly, independent from vision. The experiments also provide some information about how the brain reorganizes in people who are born blind.
Further studies may reveal how differences in the VOTC of people with and without sight affect regions typically associated with auditory categorization, and potentially explain how the brain reorganizes in people who become blind later in life.
Affiliation(s)
- Stefania Mattioni
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Mohamed Rezk
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium; Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Ceren Battal
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium; Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Roberto Bottini
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Olivier Collignon
- Institute of research in Psychology (IPSY) & Institute of Neuroscience (IoNS) - University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
37
Babakmehr M, St-Yves G, Naselaris T. Working with high-dimensional feature spaces. Mach Learn 2020. [DOI: 10.1016/b978-0-12-815739-8.00015-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
38
Pupillometric decoding of high-level musical imagery. Conscious Cogn 2019; 77:102862. [PMID: 31863916 DOI: 10.1016/j.concog.2019.102862] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/11/2019] [Accepted: 12/12/2019] [Indexed: 11/22/2022]
Abstract
Humans report imagining sound where no physical sound is present: we replay conversations, practice speeches, and "hear" music all within the confines of our minds. Research has identified neural substrates underlying auditory imagery; yet deciphering its explicit contents has been elusive. Here we present a novel pupillometric method for decoding what individuals hear "inside their heads". Independent of light, pupils dilate and constrict in response to noradrenergic activity. Hence, stimuli evoking unique and reliable patterns of attention and arousal, even when imagined, should concurrently produce identifiable patterns of pupil-size dynamics (PSDs). Participants listened to and then silently imagined music while being eye-tracked. Using machine learning algorithms, we decoded the imagined songs within- and across-participants following classifier training on PSDs collected during both imagination and perception. Echoing findings in vision, cross-domain decoding accuracy increased with imagery strength. These data suggest that light-independent PSDs are a neural signature sensitive enough to decode imagination.
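The cross-domain decoding logic described in this abstract (train a classifier on pupil-size dynamics, then test on trials from the other domain) can be sketched with synthetic data. The sinusoidal "song templates", trial counts, noise levels, and the nearest-centroid classifier below are illustrative assumptions, not the study's actual stimuli or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: 3 "songs", each assumed to evoke a characteristic
# pupil-size dynamic (PSD), modeled as a distinct temporal template.
n_songs, n_trials, n_time = 3, 20, 50
t = np.linspace(0, 2 * np.pi, n_time)
templates = np.stack([np.sin((k + 1) * t) for k in range(n_songs)])

def simulate(noise):
    """Noisy single-trial PSDs for each song (rows: trials)."""
    X = np.repeat(templates, n_trials, axis=0)
    X = X + noise * rng.standard_normal(X.shape)
    y = np.repeat(np.arange(n_songs), n_trials)
    return X, y

X_train, y_train = simulate(noise=0.5)   # "perception" trials
X_test, y_test = simulate(noise=1.0)     # "imagery" trials: noisier signal

# Nearest-centroid decoding: correlate each imagery trial with the mean
# perception-phase PSD of each song and pick the best match.
centroids = np.stack([X_train[y_train == k].mean(axis=0)
                      for k in range(n_songs)])
pred = np.array([np.argmax([np.corrcoef(x, c)[0, 1] for c in centroids])
                 for x in X_test])
acc = float((pred == y_test).mean())
print(f"cross-domain decoding accuracy: {acc:.2f} (chance = 0.33)")
```

As in the study, accuracy depends on how much of the perception-phase signature survives in the imagery trials; raising the test-set noise (weaker imagery) pushes accuracy toward the 1/3 chance level.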
39
Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation. Brain Struct Funct 2019; 224:3291-3308. [PMID: 31673774 DOI: 10.1007/s00429-019-01970-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Accepted: 10/16/2019] [Indexed: 10/25/2022]
Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive systems. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
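The cross-condition MVPA logic this abstract describes (train a decoder on trials from one condition, test it on the other) can be sketched as follows. The voxel count, effect size, and the ridge-regularized least-squares decoder are assumptions made for the sketch; the study's actual classifier and data differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative cross-condition MVPA: decode which of two actions is being
# prepared from simulated voxel patterns, training on "eyes open" trials
# and testing on "eyes closed" trials.
n_voxels, n_trials = 100, 40
pattern = rng.standard_normal(n_voxels)   # action-discriminating pattern

def make_trials(scale):
    y = np.repeat([-1.0, 1.0], n_trials)  # two action intentions
    X = np.outer(y, pattern) * scale
    X = X + rng.standard_normal(X.shape)  # trial-by-trial noise
    return X, y

X_open, y_open = make_trials(scale=0.3)
X_closed, y_closed = make_trials(scale=0.3)

# Fit a linear decoder on one condition, evaluate on the other.
lam = 10.0  # ridge penalty for numerical stability
w = np.linalg.solve(X_open.T @ X_open + lam * np.eye(n_voxels),
                    X_open.T @ y_open)
pred = np.where(X_closed @ w >= 0, 1.0, -1.0)
acc = float((pred == y_closed).mean())
print(f"cross-condition decoding accuracy: {acc:.2f} (chance = 0.50)")
```

Above-chance transfer in this sketch requires that the action-discriminating pattern be shared across the two conditions, which is exactly the inference the study draws from successful cross-domain decoding.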
40
Gu J, Zhang H, Liu B, Li X, Wang P, Wang B. An investigation of the neural association between auditory imagery and perception of complex sounds. Brain Struct Funct 2019; 224:2925-2937. [PMID: 31468120 DOI: 10.1007/s00429-019-01948-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Accepted: 08/23/2019] [Indexed: 01/24/2023]
Abstract
Neuroimaging studies have demonstrated that mental imagery and perception share similar neural substrates; however, ambiguities remain across different kinds of auditory imagery content. In addition, information is still lacking on the underlying neural correlation between the two modalities. In the present study, we adopted functional magnetic resonance imaging to explore the neural representation during imagery and perception of actual sounds in our surroundings. Univariate analysis was used to assess differences in average activation intensity between the modalities, and stronger imagery activation was found in sensorimotor regions but weaker activation in auditory association cortices. Additionally, multi-voxel pattern analysis with a support vector machine classifier was implemented to decode environmental sounds within or across modalities. Significant above-chance accuracies were found in all overlapping regions for within-modality classification, while successful cross-modality classification was found only in sensorimotor regions. Both univariate and multivariate analyses found distinct representations between auditory imagery and perception in the overlapping regions, including the superior temporal gyrus and inferior frontal sulcus as well as the precentral cortex and pre-supplementary motor area. Our results confirm the overlapping activation regions between auditory imagery and perception reported by previous studies and suggest that these regions show dissociable representation patterns for imagery and perception of sound categories.
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Hairuo Zhang
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, 264003, Shandong, People's Republic of China
41
van den Boom MA, Vansteensel MJ, Koppeschaar MI, Raemaekers MAH, Ramsey NF. Towards an intuitive communication-BCI: decoding visually imagined characters from the early visual cortex using high-field fMRI. Biomed Phys Eng Express 2019; 5. [PMID: 32983573 DOI: 10.1088/2057-1976/ab302c] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Brain-computer interfaces (BCIs) aim to enable people with paralysis to use their neural signals to control devices. For communication, most BCIs are based on the selection of letters from a (digital) letter board to spell words and sentences. Visual mental imagery of letters could offer a new, fast and intuitive way to spell in a BCI-communication solution. Here we provide a proof of concept for the decoding of visually imagined characters from the early visual cortex using 7 Tesla functional MRI. Sixteen healthy participants visually imagined three different characters for 3, 5 and 7 s in a slow event-related design. Using single-trial classification, we were able to decode the characters with an average accuracy of 54%, which is significantly above chance level (33%). Furthermore, the imagined characters were classifiable shortly after cue onset and remained classifiable with prolonged imagery. These properties, combined with the cortical location of the early visual cortex and its decodable activity, encourage further research on intracranial interfacing using surface electrodes to bring us closer to such a visual-imagery-based BCI communication solution.
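The claim that 54% single-trial accuracy beats the 33% chance level can be checked with a one-sided binomial test; a minimal sketch follows. The per-participant trial count used here is an assumed, hypothetical figure, not a number taken from the paper.

```python
from math import comb

# One-sided exact binomial test of decoding accuracy against chance.
def p_above_chance(k, n, chance):
    """P(X >= k) for X ~ Binomial(n, chance)."""
    return sum(comb(n, i) * chance**i * (1 - chance)**(n - i)
               for i in range(k, n + 1))

n_trials = 90                          # assumed trials per participant
k_correct = round(0.54 * n_trials)     # observed 54% accuracy
p = p_above_chance(k_correct, n_trials, chance=1 / 3)
print(f"{k_correct}/{n_trials} correct vs chance 1/3: p = {p:.2e}")
```

With a trial count of this order, 54% correct on a three-way classification is comfortably significant; smaller trial counts widen the binomial's spread and raise the p-value.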
Affiliation(s)
- Max A van den Boom
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Mariska J Vansteensel
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Melissa I Koppeschaar
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Matthijs A H Raemaekers
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Nick F Ramsey
- Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
42
Cichy RM, Kriegeskorte N, Jozwik KM, van den Bosch JJ, Charest I. The spatiotemporal neural dynamics underlying perceived similarity for real-world objects. Neuroimage 2019; 194:12-24. [PMID: 30894333 PMCID: PMC6547050 DOI: 10.1016/j.neuroimage.2019.03.031] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2018] [Revised: 01/25/2019] [Accepted: 03/13/2019] [Indexed: 01/19/2023] Open
Abstract
The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership nor by a deep neural network trained on object categorization. Together, our results comprehensively characterize the relationship between perceived similarity of key object dimensions and neural activity.
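The core step of the representational similarity analysis used here (correlating a behavioral dissimilarity matrix with a neural RDM) can be sketched with simulated data. The single simulated object dimension, the sizes, and the rank-based Spearman helper are illustrative assumptions, not the study's stimuli or statistics pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative RSA: relate behavioral dissimilarity judgments to a neural
# RDM computed from voxel patterns.
n_objects, n_voxels = 12, 60
dim = rng.standard_normal(n_objects)              # e.g. a shape dimension
patterns = np.outer(dim, rng.standard_normal(n_voxels))
patterns = patterns + 0.5 * rng.standard_normal(patterns.shape)

def rdm(X):
    """Representational dissimilarity matrix: 1 - Pearson r between rows."""
    return 1.0 - np.corrcoef(X)

neural_rdm = rdm(patterns)
behav_rdm = np.abs(dim[:, None] - dim[None, :])   # simulated judgments

def spearman(a, b):
    """Spearman correlation via rank transform (assumes no ties)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return float(np.corrcoef(ra, rb)[0, 1])

iu = np.triu_indices(n_objects, k=1)              # unique object pairs
rho = spearman(neural_rdm[iu], behav_rdm[iu])
print(f"neural-behavioral RSA, Spearman rho = {rho:.2f}")
```

Comparing only the upper triangles avoids the trivial zero diagonal and double-counted pairs; assessing several candidate dimensions at once, as the study does, would add one behavioral RDM per dimension and a partial-correlation step.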
Affiliation(s)
- Radoslaw M. Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Berlin School of Mind and Brain, Berlin, Germany (corresponding author)
- Nikolaus Kriegeskorte
- Department of Psychology, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, USA
- Kamila M. Jozwik
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Ian Charest
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK; School of Psychology, University of Birmingham, Birmingham, UK
43
Ji JL, Holmes EA, MacLeod C, Murphy FC. Spontaneous cognition in dysphoria: reduced positive bias in imagining the future. Psychological Research 2019; 83:817-831. [PMID: 30097711 PMCID: PMC6529377 DOI: 10.1007/s00426-018-1071-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Accepted: 08/03/2018] [Indexed: 01/10/2023]
Abstract
Anomalies in future-oriented cognition are implicated in the maintenance of emotional disturbance within cognitive models of depression. Thinking about the future can involve mental imagery or verbal-linguistic mental representations. Research suggests that future thinking involving imagery representations may disproportionately impact ongoing emotional experience in daily life relative to future thinking not involving imagery (verbal-linguistic representation only). However, while higher depression symptoms (dysphoria) are associated with an impaired ability to deliberately generate positive relative to negative imagery representations of the future (when instructed to do so), it is unclear whether dysphoria is associated with impairments in the tendency to do so spontaneously (when not instructed to deliberately generate task-unrelated cognition of any kind). The present study investigated dysphoria-linked individual differences in the tendency to experience spontaneous future-oriented cognition as a function of emotional valence and representational format. Individuals varying in dysphoria level reported the occurrence of task-unrelated thoughts (TUTs) in real time while completing a sustained-attention go/no-go task, during exposure to auditory cues. Results indicate that higher levels of dysphoria were associated with a lower positive bias in the number of imagery-based future TUTs reported, reflecting higher negative imagery-based future TUT generation (medium to large effect size) and lower positive imagery-based TUT generation (small to medium effect size). Further, this dysphoria-linked bias appeared to be specific in temporal orientation (future, not past) and representational format (imagery, not non-imagery). A reduced tendency to engage in positive relative to negative imagery-based future thinking appears to be implicated in dysphoria.
Affiliation(s)
- Julie L Ji
- Centre for the Advancement of Research on Emotion, School of Psychological Science (M304), University of Western Australia, 35 Stirling Hwy, Crawley, 6009, WA, Australia
- Emily A Holmes
- Department for Clinical Neuroscience, Karolinska Institutet, Berzelius väg 3, Solna, Stockholm, Sweden
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Rd, Cambridge, UK
- Colin MacLeod
- Centre for the Advancement of Research on Emotion, School of Psychological Science (M304), University of Western Australia, 35 Stirling Hwy, Crawley, 6009, WA, Australia
- Fionnuala C Murphy
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Rd, Cambridge, UK
44
Mental imagery training for treatment of central neuropathic pain: a narrative review. Acta Neurol Belg 2019; 119:175-186. [PMID: 30989503 DOI: 10.1007/s13760-019-01139-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2018] [Accepted: 04/05/2019] [Indexed: 12/11/2022]
Abstract
Mental imagery is a quasi-perceptual experience in the absence of external stimuli. This concept has intrigued psychologists, sportspersons, neurologists and other scientists for over a decade now. Imagery has been used in rehabilitation and the results have been promising. Researchers refer to this as healing the body through the mind. However, the challenge is a lack of standardized protocols, homogeneity and consistency in the application of mental imagery across different populations. The purpose of this review is to discuss and understand the role of mental imagery in the treatment of central neuropathic pain (CNP). Treatment options for CNP are inadequate and their benefits are short-lived. We conducted an extensive search of various databases using combinations of different keywords and reviewed the available literature in this area. We identified twelve studies in which mental imagery was used for treating CNP in spinal cord injury (SCI), stroke and multiple sclerosis. However, the methodology and techniques of mental imagery training used in these studies were non-homogeneous and inconsistent. This review provides a guiding framework to further explore the different techniques of mental imagery and their roles in treating CNP.
45
Kriegeskorte N, Douglas PK. Interpreting encoding and decoding models. Curr Opin Neurobiol 2019; 55:167-179. [PMID: 31039527 DOI: 10.1016/j.conb.2019.04.002] [Citation(s) in RCA: 81] [Impact Index Per Article: 16.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Revised: 04/08/2019] [Accepted: 04/10/2019] [Indexed: 11/18/2022]
Abstract
Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data. However, the interpretation of their results requires care. Decoding models can help reveal whether particular information is present in a brain region in a format the decoder can exploit. Encoding models make comprehensive predictions about representational spaces. In the context of sensory experiments, where stimuli are experimentally controlled, encoding models enable us to test and compare brain-computational theories. Encoding and decoding models typically include fitted linear-model components. Sometimes the weights of the fitted linear combinations are interpreted as reflecting, in an encoding model, the contribution of different sensory features to the representation or, in a decoding model, the contribution of different measured brain responses to a decoded feature. Such interpretations can be problematic when the predictor variables or their noise components are correlated and when priors (or penalties) are used to regularize the fit. Encoding and decoding models are evaluated in terms of their generalization performance. The correct interpretation depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population). Significant decoding or encoding performance of a single model (at whatever level of generality) does not provide strong constraints for theory. Many models must be tested and inferentially compared for analyses to drive theoretical progress.
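The evaluation logic this abstract recommends (judge an encoding model by its generalization performance, not its fitted weights) can be sketched as follows: fit a ridge-regularized linear encoding model on one stimulus set and measure prediction accuracy on new stimuli from the same population, one of the generalization levels discussed above. All sizes, the noise level, and the single simulated response channel are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative encoding model: a linear map from stimulus features to a
# measured response, fit with ridge regression.
n_train, n_test, n_features = 80, 40, 20
B_true = rng.standard_normal(n_features)           # true feature weights

def make_stimuli(n):
    F = rng.standard_normal((n, n_features))       # stimulus features
    y = F @ B_true + 0.5 * rng.standard_normal(n)  # measured response
    return F, y

F_train, y_train = make_stimuli(n_train)
F_test, y_test = make_stimuli(n_test)

lam = 1.0  # ridge penalty; note it biases the fitted weights
B_hat = np.linalg.solve(F_train.T @ F_train + lam * np.eye(n_features),
                        F_train.T @ y_train)

# Report generalization to held-out stimuli rather than interpreting the
# individual entries of B_hat, per the caution in the abstract.
pred = F_test @ B_hat
r = float(np.corrcoef(pred, y_test)[0, 1])
print(f"prediction r on held-out stimuli = {r:.2f}")
```

Testing at stricter generalization levels (e.g. stimuli from a different population) would amount to drawing `F_test` from a distribution the model never saw, and typically lowers `r`.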
Affiliation(s)
- Nikolaus Kriegeskorte
- Department of Psychology, Department of Neuroscience, Department of Electrical Engineering, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
- Pamela K Douglas
- Center for Cognitive Neuroscience, University of California, Los Angeles, CA, United States
46
Abstract
Visual mental imagery resembles visual working memory (VWM). Because both visual mental imagery and VWM involve the representation and manipulation of visual information, it was hypothesized that they would exert similar effects on visual attention. Several previous studies have demonstrated that working-memory representations guide attention toward a memory-matching task-irrelevant stimulus during visual-search tasks. Therefore, mental imagery may also guide attention toward imagery-matching stimuli. In the present study, five experiments were conducted to investigate the effects of visual mental imagery on visual attention during a visual-search task. Participants were instructed to visualize a color or an object clearly associated with a specific color, after which they were asked to detect a colored target in the visual-search task. Reaction times for target detection were shorter when the color of the target matched the imagined color, and when the color of the target was similar to that strongly associated with the imagined object, than when the color of the target did not match that of the mental representation. This effect was not observed when participants were not instructed to imagine a color. These results suggest that similar to VWM, visual mental imagery guides attention toward imagery-matching stimuli.
47
Koenig-Robert R, Pearson J. Decoding the contents and strength of imagery before volitional engagement. Sci Rep 2019; 9:3504. [PMID: 30837493 PMCID: PMC6401098 DOI: 10.1038/s41598-019-39813-y] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Accepted: 01/07/2019] [Indexed: 11/27/2022] Open
Abstract
Is it possible to predict the freely chosen content of voluntary imagery from prior neural signals? Here we show that the content and strength of future voluntary imagery can be decoded from activity patterns in visual and frontal areas well before participants engage in voluntary imagery. Participants freely chose which of two images to imagine. Using functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis, we decoded imagery content up to 11 seconds before the voluntary decision, in visual, frontal and subcortical areas. Decoding in visual areas, in addition to perception-imagery generalization, suggested that predictive patterns correspond to visual representations. Importantly, activity patterns in the primary visual cortex (V1) from before the decision predicted future imagery vividness. Our results suggest that the contents and strength of mental imagery are influenced by sensory-like neural representations that emerge spontaneously before volition.
Affiliation(s)
- Joel Pearson
- School of Psychology, The University of New South Wales, Sydney, Australia
48
Schmidt TT, Blankenburg F. The Somatotopy of Mental Tactile Imagery. Front Hum Neurosci 2019; 13:10. [PMID: 30833894 PMCID: PMC6387936 DOI: 10.3389/fnhum.2019.00010] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Accepted: 01/10/2019] [Indexed: 01/19/2023] Open
Abstract
To what degree mental imagery (MI) bears on the same neuronal processes as perception has been a central question in the neurophysiological study of imagery. Sensory-recruitment models suggest that imagery of sensory material heavily relies on the involvement of sensory cortices. Empirical evidence mainly stems from the study of visual imagery and suggests that whether hierarchically lower regions are recruited depends on the mentally imagined material. However, evidence from other modalities is necessary to infer generalized principles. In this fMRI study we used the somatotopic organization of the primary somatosensory cortex (SI) to test to what extent MI of tactile sensations topographically activates sensory brain areas. Participants (N = 19) either perceived or imagined vibrotactile stimuli on their left or right thumbs or big toes. The direct comparison to a corresponding perception condition revealed that SI was somatotopically recruited during imagery. While stimulus-driven bottom-up processing induced activity throughout all SI subareas, i.e., BA1, BA3a, BA3b, and BA2, as defined by probabilistic cytoarchitectonic maps, top-down recruitment during imagery was limited to the hierarchically highest subarea, BA2.
Affiliation(s)
- Timo Torsten Schmidt
- Neurocomputation and Neuroimaging Unit, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Felix Blankenburg
- Neurocomputation and Neuroimaging Unit, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
49
Boccia M, Sulpizio V, Teghil A, Palermo L, Piccardi L, Galati G, Guariglia C. The dynamic contribution of the high-level visual cortex to imagery and perception. Hum Brain Mapp 2019; 40:2449-2463. [PMID: 30702203 DOI: 10.1002/hbm.24535] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Revised: 01/16/2019] [Accepted: 01/21/2019] [Indexed: 01/19/2023] Open
Abstract
Mental imagery and visual perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, little is known about dynamic mechanisms in these areas during imagery and perception. Here we disentangled local and inter-regional dynamic mechanisms underlying imagery and perception in the HVC and the hippocampus (HC), a key region for memory retrieval during imagery. Nineteen healthy participants watched or imagined a familiar scene or face during fMRI acquisition. The neural code for familiar landmarks and faces was distributed across the HVC and the HC, although with a different representational structure, and generalized across imagery and perception. However, different regional adaptation effects and inter-regional functional couplings were detected for faces and landmarks during imagery and perception. The left PPA showed opposite adaptation effects, with activity suppression following repeated observation of landmarks, but enhancement following repeated imagery of landmarks. Also, functional coupling between content-dependent brain areas of the HVC and HC changed as a function of task and content. These findings provide important information about the dynamic networks underlying imagery and perception in the HVC and shed some light upon the thin line between imagery and perception which has characterized the neuropsychological debates on mental imagery.
Affiliation(s)
- Maddalena Boccia
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy
- Valentina Sulpizio
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alice Teghil
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; PhD Program in Behavioral Neuroscience, "Sapienza" University of Rome, Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Rome, Italy
- Liana Palermo
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Medical and Surgical Sciences, Magna Graecia University of Catanzaro, Catanzaro, Italy
- Laura Piccardi
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Life, Health and Environmental Sciences, L'Aquila University, L'Aquila, Italy
- Gaspare Galati
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Rome, Italy
- Cecilia Guariglia
- Cognitive and Motor Rehabilitation and Neuroimaging Unit, IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Psychology, "Sapienza" University of Rome, Rome, Italy
50
Senden M, Emmerling TC, van Hoof R, Frost MA, Goebel R. Reconstructing imagined letters from early visual cortex reveals tight topographic correspondence between visual mental imagery and perception. Brain Struct Funct 2019; 224:1167-1183. [PMID: 30637491 PMCID: PMC6499877 DOI: 10.1007/s00429-019-01828-6] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Accepted: 01/05/2019] [Indexed: 11/12/2022]
Abstract
Visual mental imagery is the quasi-perceptual experience of “seeing in the mind’s eye”. While a tight correspondence between imagery and perception in terms of subjective experience is well established, their correspondence in terms of neural representations remains insufficiently understood. In the present study, we exploit the high spatial resolution of functional magnetic resonance imaging (fMRI) at 7T, the retinotopic organization of early visual cortex, and machine-learning techniques to investigate whether visual imagery of letter shapes preserves the topographic organization of perceived shapes. Sub-millimeter resolution fMRI images were obtained from early visual cortex in six subjects performing visual imagery of four different letter shapes. Predictions of imagery voxel activation patterns based on a population receptive field-encoding model and physical letter stimuli provided first evidence in favor of detailed topographic organization. Subsequent visual field reconstructions of imagery data based on the inversion of the encoding model further showed that visual imagery preserves the geometric profile of letter shapes. These results open new avenues for decoding, as we show that a denoising autoencoder can be used to pretrain a classifier purely based on perceptual data before fine-tuning it on imagery data. Finally, we show that the autoencoder can project imagery-related voxel activations onto their perceptual counterpart allowing for visually recognizable reconstructions even at the single-trial level. The latter may eventually be utilized for the development of content-based BCI letter-speller systems.
Affiliation(s)
- Mario Senden
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6201 BC, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Thomas C Emmerling
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6201 BC, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Rick van Hoof
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6201 BC, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Martin A Frost
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6201 BC, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, P.O. Box 616, 6200 MD, Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6201 BC, Maastricht, The Netherlands; Department of Cognitive Neuroscience, Maastricht Brain Imaging Centre, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, P.O. Box 616, 6200 MD, Maastricht, The Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, an Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA, Amsterdam, The Netherlands