1. Dirani J, Pylkkänen L. MEG Evidence That Modality-Independent Conceptual Representations Contain Semantic and Visual Features. J Neurosci 2024; 44:e0326242024. [PMID: 38806251] [PMCID: PMC11223456] [DOI: 10.1523/jneurosci.0326-24.2024]
Abstract
The semantic knowledge stored in our brains can be accessed from different stimulus modalities. For example, a picture of a cat and the word "cat" both engage similar conceptual representations. While existing research has found evidence for modality-independent representations, their content remains unknown. Modality-independent representations could be semantic, or they might also contain perceptual features. We developed a novel approach combining word/picture cross-condition decoding with neural network classifiers that learned latent modality-independent representations from MEG data (25 human participants, 15 females, 10 males). We then compared these representations to models representing semantic, sensory, and orthographic features. Results show that modality-independent representations correlate both with semantic and visual representations. There was no evidence that these results were due to picture-specific visual features or orthographic features automatically activated by the stimuli presented in the experiment. These findings support the notion that modality-independent concepts contain both perceptual and semantic representations.
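The cross-condition decoding logic summarized above, training a classifier on responses to one stimulus modality and testing it on the other so that only modality-independent structure can support transfer, can be sketched as follows. This is a minimal, simulated illustration using a scikit-learn logistic-regression classifier; the array shapes and estimator are assumptions for the sketch, not the authors' MEG pipeline, which used neural network classifiers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Simulated response patterns: trials x features (e.g., flattened MEG sensors x time).
n_trials, n_features, n_concepts = 200, 300, 10
labels = rng.integers(0, n_concepts, n_trials)

# Give each concept a shared, modality-independent signature plus modality-specific noise,
# so picture-to-word transfer has something to latch onto.
signatures = rng.normal(size=(n_concepts, n_features))
X_pictures = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))
X_words = signatures[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))

# Cross-condition decoding: train on picture trials, test on word trials.
# Above-chance transfer implies a representation shared across input modalities.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_pictures, labels)
print(f"picture-to-word decoding accuracy: {clf.score(X_words, labels):.2f} "
      f"(chance = {1 / n_concepts:.2f})")
```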
Affiliation(s)
- Julien Dirani
- Department of Psychology, New York University, New York, New York 10003
- Liina Pylkkänen
- Departments of Psychology and Linguistics, New York University, New York, New York 10003
- NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi 129188, United Arab Emirates

2. Westlin C, Theriault JE, Katsumi Y, Nieto-Castanon A, Kucyi A, Ruf SF, Brown SM, Pavel M, Erdogmus D, Brooks DH, Quigley KS, Whitfield-Gabrieli S, Barrett LF. Improving the study of brain-behavior relationships by revisiting basic assumptions. Trends Cogn Sci 2023; 27:246-257. [PMID: 36739181] [PMCID: PMC10012342] [DOI: 10.1016/j.tics.2022.12.015]
Abstract
Neuroimaging research has been at the forefront of concerns regarding the failure of experimental findings to replicate. In the study of brain-behavior relationships, past failures to find replicable and robust effects have been attributed to methodological shortcomings. Methodological rigor is important, but there are other overlooked possibilities: most published studies share three foundational assumptions, often implicitly, that may be faulty. In this paper, we consider the empirical evidence from human brain imaging and the study of non-human animals that calls each foundational assumption into question. We then consider the opportunities for a robust science of brain-behavior relationships that await if scientists ground their research efforts in revised assumptions supported by current empirical evidence.
Affiliation(s)
- Jordan E Theriault
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yuta Katsumi
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Alfonso Nieto-Castanon
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aaron Kucyi
- Department of Psychological and Brain Sciences, Drexel University, Philadelphia, PA, USA
- Sebastian F Ruf
- Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA
- Sarah M Brown
- Department of Computer Science and Statistics, University of Rhode Island, Kingston, RI, USA
- Misha Pavel
- Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Deniz Erdogmus
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Dana H Brooks
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Karen S Quigley
- Department of Psychology, Northeastern University, Boston, MA, USA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA.

3. Cabral L, Zubiaurre-Elorza L, Wild CJ, Linke A, Cusack R. Anatomical correlates of category-selective visual regions have distinctive signatures of connectivity in neonates. Dev Cogn Neurosci 2022; 58:101179. [PMID: 36521345] [PMCID: PMC9768242] [DOI: 10.1016/j.dcn.2022.101179]
Abstract
The ventral visual stream is shaped during development by innate proto-organization within the visual system, such as the strong input from the fovea to the fusiform face area. In adults, category-selective regions have distinct signatures of connectivity to brain regions beyond the visual system, likely reflecting cross-modal and motoric associations. We tested whether this long-range connectivity is part of the innate proto-organization, or whether it develops with postnatal experience, by using diffusion-weighted imaging to characterize the connectivity of anatomical correlates of category-selective regions in neonates (N = 445), 1-9-month-old infants (N = 11), and adults (N = 14). Using the HCP data, we identified face- and place-selective regions and a third intermediate region with a distinct profile of selectivity. Using linear classifiers, these regions were found to have distinctive connectivity at birth, both to other regions in the visual system and to those outside of it. The results support an extended proto-organization that includes long-range connectivity that shapes, and is shaped by, experience-dependent development.
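A hedged sketch of the classification step described above, distinguishing two anatomical seed regions by their connectivity fingerprints with a linear classifier. The group sizes, feature counts, and SVM estimator below are simulated placeholders, not the study's actual tractography outputs.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Simulated connectivity "fingerprints": for each neonate, the connection strengths
# (e.g., streamline counts from probabilistic tractography) from a seed region to a
# fixed set of target regions across the brain.
n_subjects, n_targets = 100, 60
face_seed = rng.normal(loc=0.2, size=(n_subjects, n_targets))   # face-region anatomical correlate
place_seed = rng.normal(loc=0.0, size=(n_subjects, n_targets))  # place-region anatomical correlate

X = np.vstack([face_seed, place_seed])
y = np.array([0] * n_subjects + [1] * n_subjects)

# If the two seeds already have distinctive long-range connectivity at birth,
# a linear classifier should separate them above chance.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"face vs. place seed classification accuracy: {scores.mean():.2f} (chance = 0.50)")
```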
Affiliation(s)
- Laura Cabral
- Department of Radiology, University of Pittsburgh, Pittsburgh 15224, PA, USA.
- Leire Zubiaurre-Elorza
- Department of Psychology, Faculty of Health Sciences, University of Deusto, Bilbao 48007, Spain
- Conor J Wild
- Western Institute for Neuroscience, Western University, London, ON N6A 3K7, Canada; Department of Physiology and Pharmacology, Western University, London, ON N6A 3K7, Canada
- Annika Linke
- Brain Development Imaging Laboratories, San Diego State University, San Diego 92120, CA, USA
- Rhodri Cusack
- Trinity College Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland

4. Bailey KM, Giordano BL, Kaas AL, Smith FW. Decoding sounds depicting hand-object interactions in primary somatosensory cortex. Cereb Cortex 2022; 33:3621-3635. [PMID: 36045002] [DOI: 10.1093/cercor/bhac296]
Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g., bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from three categories: hand-object interactions, and the control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not of either control category. Crucially, in hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions than for pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities, even to primary sensory areas.
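The ROI-based multivoxel pattern analysis described above can be sketched as leave-one-run-out cross-validated classification of sound category from voxel patterns in an SI region. Everything below (trial counts, the linear SVM, the two-class contrast) is an assumption for illustration rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Simulated single-trial response estimates within a somatosensory ROI: trials x voxels.
n_runs, trials_per_run, n_voxels = 8, 24, 150
n_trials = n_runs * trials_per_run
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)                  # e.g., hand-object sounds vs. pure tones
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Leave-one-run-out cross-validation keeps training and test data from separate runs.
scores = cross_val_score(SVC(kernel="linear"), X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean ROI decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```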
Affiliation(s)
- Kerri M Bailey
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Bruno L Giordano
- Institut de Neurosciences de la Timone, CNRS UMR 7289, Aix-Marseille Université, Marseille, France
- Amanda L Kaas
- Department of Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom

5. Vaccaro AG, Heydari P, Christov-Moore L, Damasio A, Kaplan JT. Perspective-taking is associated with increased discriminability of affective states in the ventromedial prefrontal cortex. Soc Cogn Affect Neurosci 2022; 17:1082-1090. [PMID: 35579186] [PMCID: PMC9714424] [DOI: 10.1093/scan/nsac035]
Abstract
Recent work using multivariate-pattern analysis (MVPA) on functional magnetic resonance imaging (fMRI) data has found that distinct affective states produce correspondingly distinct patterns of neural activity in the cerebral cortex. However, it is unclear whether individual differences in the distinctiveness of neural patterns evoked by affective stimuli underlie empathic abilities such as perspective-taking (PT). Accordingly, we examined whether we could predict PT tendency from the classification of blood-oxygen-level-dependent (BOLD) fMRI activation patterns while participants (n = 57) imagined themselves in affectively charged scenarios. We used an MVPA searchlight analysis to map where in the brain activity patterns permitted the classification of four affective states: happiness, sadness, fear and disgust. Classification accuracy was significantly above chance levels in most of the prefrontal cortex and in the posterior medial cortices. Furthermore, participants' self-reported PT was positively associated with classification accuracy in the ventromedial prefrontal cortex and insula. This finding has implications for understanding affective processing in the prefrontal cortex and for interpreting the cognitive significance of classifiable affective brain states. Our multivariate approach suggests that PT ability may rely on the grain of internally simulated affective representations rather than simply the global strength.
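The brain-behavior step reported above, relating per-participant classification accuracy to self-reported perspective-taking, reduces to a simple correlation across participants. The sketch below uses simulated values with a built-in coupling so there is something to detect; the searchlight itself, the region definitions, and the effect size are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Per-participant summary values one would carry forward from a searchlight or ROI
# analysis: 4-way affective-state classification accuracy in a vmPFC region, and each
# participant's self-reported perspective-taking (PT) score.
n_subjects = 57
vmpfc_accuracy = np.clip(rng.normal(loc=0.30, scale=0.05, size=n_subjects), 0.0, 1.0)
pt_scores = rng.normal(loc=3.5, scale=0.6, size=n_subjects)
pt_scores += 4.0 * (vmpfc_accuracy - vmpfc_accuracy.mean())   # coupling injected for the demo

# The brain-behavior link: does finer-grained (more decodable) affective coding
# go along with higher self-reported perspective-taking?
r, p = pearsonr(vmpfc_accuracy, pt_scores)
print(f"accuracy-PT correlation: r = {r:.2f}, p = {p:.3f} (chance accuracy = 0.25)")
```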
Affiliation(s)
- Anthony G Vaccaro
- Brain and Creativity Institute, Department of Psychology, University of Southern California, Los Angeles, CA 90089-0001, USA
- Panthea Heydari
- Brain and Creativity Institute, Department of Psychology, University of Southern California, Los Angeles, CA 90089-0001, USA
- Leonardo Christov-Moore
- Brain and Creativity Institute, Department of Psychology, University of Southern California, Los Angeles, CA 90089-0001, USA
- Antonio Damasio
- Brain and Creativity Institute, Department of Psychology, University of Southern California, Los Angeles, CA 90089-0001, USA
- Jonas T Kaplan
- Correspondence should be addressed to Jonas T Kaplan, Brain and Creativity Institute, 3620A McClintock Ave, Los Angeles, CA 90089, USA

6. Chai Y, Liu TT, Marrett S, Li L, Khojandi A, Handwerker DA, Alink A, Muckli L, Bandettini PA. Topographical and laminar distribution of audiovisual processing within human planum temporale. Prog Neurobiol 2021; 205:102121. [PMID: 34273456] [DOI: 10.1016/j.pneurobio.2021.102121]
Abstract
The brain is capable of integrating signals from multiple sensory modalities. Such multisensory integration can occur in areas that are commonly considered unisensory, such as planum temporale (PT) representing the auditory association cortex. However, the roles of different afferents (feedforward vs. feedback) to PT in multisensory processing are not well understood. Our study aims to understand that by examining laminar activity patterns in different topographical subfields of human PT under unimodal and multisensory stimuli. To this end, we adopted an advanced mesoscopic (sub-millimeter) fMRI methodology at 7 T by acquiring BOLD (blood-oxygen-level-dependent contrast, which has higher sensitivity) and VAPER (integrated blood volume and perfusion contrast, which has superior laminar specificity) signal concurrently, and performed all analyses in native fMRI space benefiting from an identical acquisition between functional and anatomical images. We found a division of function between visual and auditory processing in PT and distinct feedback mechanisms in different subareas. Specifically, anterior PT was activated more by auditory inputs and received feedback modulation in superficial layers. This feedback depended on task performance and likely arose from top-down influences from higher-order multimodal areas. In contrast, posterior PT was preferentially activated by visual inputs and received visual feedback in both superficial and deep layers, which is likely projected directly from the early visual cortex. Together, these findings provide novel insights into the mechanism of multisensory interaction in human PT at the mesoscopic spatial scale.
Affiliation(s)
- Yuhui Chai
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA.
- Tina T Liu
- Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Sean Marrett
- Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Linqing Li
- Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Arman Khojandi
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Daniel A Handwerker
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Arjen Alink
- University Medical Centre Hamburg-Eppendorf, Department of Systems Neuroscience, Hamburg, Germany
- Lars Muckli
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Peter A Bandettini
- Section on Functional Imaging Methods, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; Functional MRI Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA

7. Regev M, Halpern AR, Owen AM, Patel AD, Zatorre RJ. Mapping Specific Mental Content during Musical Imagery. Cereb Cortex 2021; 31:3622-3640. [PMID: 33749742] [DOI: 10.1093/cercor/bhab036]
Abstract
Humans can mentally represent auditory information without an external stimulus, but the specificity of these internal representations remains unclear. Here, we asked how similar the temporally unfolding neural representations of imagined music are compared to those during the original perceived experience. We also tested whether rhythmic motion can influence the neural representation of music during imagery as during perception. Participants first memorized six 1-min-long instrumental musical pieces with high accuracy. Functional MRI data were collected during: 1) silent imagery of melodies to the beat of a visual metronome; 2) same but while tapping to the beat; and 3) passive listening. During imagery, inter-subject correlation analysis showed that melody-specific temporal response patterns were reinstated in right associative auditory cortices. When tapping accompanied imagery, the melody-specific neural patterns were reinstated in more extensive temporal-lobe regions bilaterally. These results indicate that the specific contents of conscious experience are encoded similarly during imagery and perception in the dynamic activity of auditory cortices. Furthermore, rhythmic motion can enhance the reinstatement of neural patterns associated with the experience of complex sounds, in keeping with models of motor to sensory influences in auditory processing.
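The inter-subject correlation logic used above, asking whether a melody's regional time course in one listener matches the time course the same melody evokes in others, can be sketched with a leave-one-out scheme. This is a toy, simulated example; the leave-one-out variant is a common ISC choice and an assumption here, not necessarily the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated regional time courses for one melody: subjects x timepoints.
n_subjects, n_timepoints = 17, 120
melody_signal = rng.normal(size=n_timepoints)                    # melody-locked component
timecourses = melody_signal + rng.normal(scale=2.0, size=(n_subjects, n_timepoints))

def leave_one_out_isc(data):
    """Correlate each subject's time course with the mean of all other subjects."""
    iscs = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(iscs)

print(f"mean leave-one-out ISC: {leave_one_out_isc(timecourses).mean():.2f}")
```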
Affiliation(s)
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Adrian M Owen
- Brain and Mind Institute, Department of Psychology and Department of Physiology and Pharmacology, Western University, London, ON N6A 5B7, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program
- Aniruddh D Patel
- Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program; Department of Psychology, Tufts University, Medford, MA 02155, USA
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2J2, Canada; Centre for Research in Language, Brain, and Music, Montreal, QC H3A 1E3, Canada; Canadian Institute for Advanced Research, Brain, Mind, and Consciousness program

8. Viewing images of foods evokes taste quality-specific activity in gustatory insular cortex. Proc Natl Acad Sci U S A 2021; 118:e2010932118. [PMID: 33384331] [DOI: 10.1073/pnas.2010932118]
Abstract
Previous studies have shown that the conceptual representation of food involves brain regions associated with taste perception. The specificity of this response, however, is unknown. Does viewing pictures of food produce a general, nonspecific response in taste-sensitive regions of the brain? Or is the response specific for how a particular food tastes? Building on recent findings that specific tastes can be decoded from taste-sensitive regions of insular cortex, we asked whether viewing pictures of foods associated with a specific taste (e.g., sweet, salty, and sour) can also be decoded from these same regions, and if so, are the patterns of neural activity elicited by the pictures and their associated tastes similar? Using ultrahigh-resolution functional magnetic resonance imaging at high magnetic field strength (7-Tesla), we were able to decode specific tastes delivered during scanning, as well as the specific taste category associated with food pictures within the dorsal mid-insula, a primary taste responsive region of brain. Thus, merely viewing food pictures triggers an automatic retrieval of specific taste quality information associated with the depicted foods, within gustatory cortex. However, the patterns of activity elicited by pictures and their associated tastes were unrelated, thus suggesting a clear neural distinction between inferred and directly experienced sensory events. These data show how higher-order inferences derived from stimuli in one modality (i.e., vision) can be represented in brain regions typically thought to represent only low-level information about a different modality (i.e., taste).
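The final comparison in this abstract, asking whether picture-evoked and taste-evoked insula patterns resemble each other, can be illustrated as a pattern-correlation check on category-mean patterns. The voxel counts, category labels, and use of Pearson correlation below are assumptions for the sketch, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

# Category-mean voxel patterns in a gustatory ROI, estimated separately for
# delivered tastes and for viewed food pictures (independent random patterns here).
n_voxels = 200
categories = ("sweet", "salty", "sour")
taste_patterns = {c: rng.normal(size=n_voxels) for c in categories}
picture_patterns = {c: rng.normal(size=n_voxels) for c in categories}

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

matching = [r(taste_patterns[c], picture_patterns[c]) for c in categories]
mismatching = [r(taste_patterns[c1], picture_patterns[c2])
               for c1 in categories for c2 in categories if c1 != c2]

# If pictures reinstated the taste-evoked code, matching-category correlations would
# exceed mismatching ones; with independent patterns both hover near zero.
print(f"matching mean r = {np.mean(matching):.2f}, "
      f"mismatching mean r = {np.mean(mismatching):.2f}")
```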

9. Behroozi M, Helluy X, Ströckens F, Gao M, Pusch R, Tabrik S, Tegenthoff M, Otto T, Axmacher N, Kumsta R, Moser D, Genc E, Güntürkün O. Event-related functional MRI of awake behaving pigeons at 7T. Nat Commun 2020; 11:4715. [PMID: 32948772] [PMCID: PMC7501281] [DOI: 10.1038/s41467-020-18437-1]
Abstract
Animal fMRI is a powerful method for understanding the neural mechanisms of cognition, but it remains a major challenge to scan actively participating small animals under low-stress conditions. Here, we present an event-related functional MRI platform in awake pigeons using single-shot RARE fMRI to investigate the neural fundaments of visually guided decision making. We established a head-fixated Go/NoGo paradigm, which the animals quickly learned under low-stress conditions. The animals were motivated by water reward, and behavior was assessed by logging mandibulations during the fMRI experiment, with close to zero motion artifacts over hundreds of repeats. To achieve optimal results, we characterized the species-specific hemodynamic response function. As a proof of principle, we ran a color discrimination task and discovered differential neural networks for the Go, NoGo, and response-execution phases. Our findings open the door to visualizing the neural fundaments of perceptual and cognitive functions in birds, a vertebrate class of which some clades are cognitively on par with primates.
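The species-specific hemodynamic response function mentioned above matters because event-related analysis predicts the BOLD signal by convolving the event train with an HRF. A minimal sketch of that step, using a generic double-gamma HRF whose parameters are assumptions (human-typical defaults, not pigeon estimates):

```python
import numpy as np
from scipy.stats import gamma

# Build an event-related design regressor by convolving an event train with a
# hemodynamic response function (HRF). The double-gamma shape and timing below are
# typical human defaults; the abstract's point is that such parameters have to be
# re-estimated for a new species rather than assumed.
tr, n_scans = 1.0, 200                       # repetition time (s), number of volumes
t = np.arange(0, 30, tr)                     # HRF support (s)

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=6.0):
    return gamma.pdf(t, peak) - gamma.pdf(t, undershoot) / ratio

events = np.zeros(n_scans)
events[np.arange(10, n_scans, 20)] = 1.0     # one event every 20 volumes

regressor = np.convolve(events, double_gamma_hrf(t))[:n_scans]
print(f"predicted BOLD response peaks at volume {int(np.argmax(regressor))}")
```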
Affiliation(s)
- Mehdi Behroozi
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany.
- Xavier Helluy
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Department of Neurophysiology, Faculty of Medicine, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Felix Ströckens
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Meng Gao
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Roland Pusch
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Sepideh Tabrik
- Department of Neurology, BG-University Hospital Bergmannsheil, Ruhr University Bochum, Bürkle-de-la-Camp-Platz 1, 44789, Bochum, Germany
- Martin Tegenthoff
- Department of Neurology, BG-University Hospital Bergmannsheil, Ruhr University Bochum, Bürkle-de-la-Camp-Platz 1, 44789, Bochum, Germany
- Tobias Otto
- Department of Cognitive Psychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Nikolai Axmacher
- Department of Neuropsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Robert Kumsta
- Department of Genetic Psychology, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Dirk Moser
- Department of Genetic Psychology, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Erhan Genc
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany
- Department of Psychology and Neurosciences, Leibniz Research Centre for Working Environment and Human Factors (IfADo), 44139, Dortmund, Germany
- Onur Güntürkün
- Department of Biopsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44780, Bochum, Germany.

10. Schulz E, Stankewitz A, Winkler AM, Irving S, Witkovský V, Tracey I. Ultra-high-field imaging reveals increased whole brain connectivity underpins cognitive strategies that attenuate pain. eLife 2020; 9:e55028. [PMID: 32876049] [PMCID: PMC7498261] [DOI: 10.7554/elife.55028]
Abstract
We investigated how the attenuation of pain with cognitive interventions affects brain connectivity using neuroimaging and a whole brain novel analysis approach. While receiving tonic cold pain, 20 healthy participants performed three different pain attenuation strategies during simultaneous collection of functional imaging data at seven tesla. Participants were asked to rate their pain after each trial. We related the trial-by-trial variability of the attenuation performance to the trial-by-trial functional connectivity strength change of brain data. Across all conditions, we found that a higher performance of pain attenuation was predominantly associated with higher functional connectivity. Of note, we observed an association between low pain and high connectivity for regions that belong to brain regions long associated with pain processing, the insular and cingulate cortices. For one of the cognitive strategies (safe place), the performance of pain attenuation was explained by diffusion tensor imaging metrics of increased white matter integrity.
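The trial-by-trial logic described above, estimating functional connectivity per trial and relating it to attenuation performance, can be sketched as below. The ROI names, time-course lengths, and Spearman correlation are placeholders for illustration, not the study's whole-brain implementation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)

# Simulated trial-wise data: two ROI time courses per trial plus a pain rating per trial.
n_trials, n_timepoints = 60, 40
ratings = rng.uniform(0, 100, n_trials)          # post-trial pain ratings

connectivity = np.empty(n_trials)
for trial in range(n_trials):
    insula = rng.normal(size=n_timepoints)
    cingulate = 0.3 * insula + rng.normal(size=n_timepoints)
    # Trial-wise functional connectivity = correlation of the two ROI time courses.
    connectivity[trial] = np.corrcoef(insula, cingulate)[0, 1]

# Relate trial-by-trial connectivity strength to attenuation performance
# (lower ratings = better attenuation, so flip the sign of the ratings).
rho, p = spearmanr(connectivity, -ratings)
print(f"connectivity vs. attenuation: rho = {rho:.2f}, p = {p:.3f}")
```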
Affiliation(s)
- Enrico Schulz
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom; Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Anne Stankewitz
- Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Anderson M Winkler
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom; Emotion and Development Branch, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Stephanie Irving
- Department of Neurology, Ludwig-Maximilians-Universität München, Munich, Germany
- Viktor Witkovský
- Department of Theoretical Methods, Institute of Measurement Science, Slovak Academy of Sciences, Bratislava, Slovakia
- Irene Tracey
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom

12. Integrating functional connectivity and MVPA through a multiple constraint network analysis. Neuroimage 2019; 208:116412. [PMID: 31790752] [DOI: 10.1016/j.neuroimage.2019.116412]
Abstract
Traditional general linear model-based brain mapping efforts using functional neuroimaging are complemented by more recent multivariate pattern analyses (MVPA) that apply machine learning techniques to identify the cognitive states associated with regional BOLD activation patterns, and by connectivity analyses that identify networks of interacting regions that support particular cognitive processes. We introduce a novel analysis representing the union of these approaches, and explore the insights gained when MVPA and functional connectivity analyses are allowed to mutually constrain each other within a single model. We explored multisensory semantic representations of concrete object concepts using a self-paced multisensory imagery task. Multilayer neural networks learned the real-world categories associated with macro-scale cortical BOLD activity patterns from the task, with some models additionally encoding regional functional connectivity. Models trained to encode functional connections demonstrated superior classification accuracy and more pronounced lesion-site appropriate category-specific impairments. We replicated these results in a data set from the openneuro.org open fMRI data repository. We conclude that mutually constrained network analyses encourage parsimonious models that may benefit from improved biological plausibility and facilitate discovery.
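One much-simplified way to illustrate the idea of combining MVPA with functional connectivity is to compare a pattern classifier trained on regional activity alone against one trained on activity plus connectivity features. The sketch below does this with scikit-learn's MLPClassifier on simulated data; the authors' models encoded connectivity within the network itself, so this is an analogy, not their method.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)

# Simulated per-trial features: regional activation patterns and, separately,
# ROI-to-ROI functional connectivity estimates (random stand-ins here).
n_trials, n_rois = 240, 30
activity = rng.normal(size=(n_trials, n_rois))
connectivity = rng.normal(size=(n_trials, n_rois * (n_rois - 1) // 2))
labels = rng.integers(0, 4, n_trials)            # e.g., four object categories

def mean_accuracy(features):
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
    return cross_val_score(clf, features, labels, cv=5).mean()

print(f"activity only: {mean_accuracy(activity):.2f}; "
      f"activity + connectivity: {mean_accuracy(np.hstack([activity, connectivity])):.2f} "
      f"(chance = 0.25)")
```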

13. Wong NA, Rafique SA, Moro SS, Kelly KR, Steeves JKE. Altered white matter structure in auditory tracts following early monocular enucleation. Neuroimage Clin 2019; 24:102006. [PMID: 31622842] [PMCID: PMC6812283] [DOI: 10.1016/j.nicl.2019.102006]
Abstract
Purpose: Similar to early blindness, monocular enucleation (the removal of one eye) early in life results in crossmodal behavioral and morphological adaptations. It has previously been shown that partial visual deprivation from early monocular enucleation results in structural white matter changes throughout the visual system (Wong et al., 2018). The current study investigated structural white matter of the auditory system in adults who had undergone early monocular enucleation compared to binocularly intact control participants.
Methods: We reconstructed four auditory and audiovisual tracts of interest using probabilistic tractography and compared the microstructural properties of these tracts between groups using standard diffusion indices.
Results: Although both groups demonstrated asymmetries in diffusion indices in intrahemispheric tracts, monocular enucleation participants showed asymmetries opposite to those of control participants in the auditory and A1-V1 tracts. Monocular enucleation participants also demonstrated significantly lower fractional anisotropy in the audiovisual projections contralateral to the enucleated eye relative to control participants.
Conclusions: Partial vision loss from early monocular enucleation results in altered structural connectivity that extends into the auditory system, beyond tracts primarily dedicated to vision.
Highlights: Does losing one eye during postnatal maturation affect auditory white matter? Performed DTI of auditory and audiovisual tracts using probabilistic tractography. Patients differed in diffusion indices for auditory and audiovisual tracts. Early eye removal alters auditory white matter in addition to visual tracts.
Affiliation(s)
- Nikita A Wong
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Sara A Rafique
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Stefania S Moro
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Ophthalmology and Visual Sciences, The Hospital for Sick Children, Toronto, ON, Canada
- Jennifer K E Steeves
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Ophthalmology and Visual Sciences, The Hospital for Sick Children, Toronto, ON, Canada.

14. Gu J, Liu B, Li X, Wang P, Wang B. Cross-modal representations in early visual and auditory cortices revealed by multi-voxel pattern analysis. Brain Imaging Behav 2019; 14:1908-1920. [PMID: 31183774] [DOI: 10.1007/s11682-019-00135-2]
Abstract
Primary sensory cortices can respond not only to their defined sensory modality but also to cross-modal information. In addition to the observed cross-modal phenomenon, it is valuable to research further whether cross-modal information can be valuable for categorizing stimuli and what effect other factors, such as experience and imagination, may have on cross-modal processing. In this study, we researched cross-modal information processing in the early visual cortex (EVC, including the visual area 1, 2, and 3 (V1, V2, and V3)) and auditory cortex (primary (A1) and secondary (A2) auditory cortex). Images and sound clips were presented to participants separately in two experiments in which participants' imagination and expectations were restricted by an orthogonal fixation task and the data were collected by functional magnetic resonance imaging (fMRI). We successfully decoded categories of the cross-modal stimuli in the ROIs except for V1 by multi-voxel pattern analysis (MVPA). It was further shown that familiar sounds had the advantage of classification accuracies in V2 and V3 when compared with unfamiliar sounds. The results of the cross-classification analysis showed that there was no significant similarity between the activity patterns induced by different stimulus modalities. Even though the cross-modal representation is robust when considering the restriction of top-down expectations and mental imagery in our experiments, the sound experience showed effects on cross-modal representation in V2 and V3. In addition, primary sensory cortices may receive information from different modalities in different ways, so the activity patterns between two modalities were not similar enough to complete the cross-classification successfully.
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China.
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China

15. Deroy O, Faivre N, Lunghi C, Spence C, Aller M, Noppeney U. The Complex Interplay Between Multisensory Integration and Perceptual Awareness. Multisens Res 2018; 29:585-606. [PMID: 27795942] [DOI: 10.1163/22134808-00002529]
Abstract
The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions. Yet the complex interdependencies between multisensory integration and perceptual awareness, or consciousness, remain to be defined. While perceptual awareness has traditionally been studied in a single sense, in recent years we have witnessed a surge of interest in the role of multisensory integration in perceptual awareness. Based on a recent IMRF symposium on multisensory awareness, this review discusses three key questions from conceptual, methodological and experimental perspectives: (1) What do we study when we study multisensory awareness? (2) What is the relationship between multisensory integration and perceptual awareness? (3) Which experimental approaches are most promising to characterize multisensory awareness? We hope that this review paper will provoke lively discussions, novel experiments, and conceptual considerations to advance our understanding of the multifaceted interplay between multisensory integration and consciousness.
Affiliation(s)
- O Deroy
- Centre for the Study of the Senses, Institute of Philosophy, School of Advanced Study, University of London, London, UK
- N Faivre
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- C Lunghi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- C Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK
- M Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- U Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK

16. Lu L, Zhang G, Xu J, Liu B. Semantically Congruent Sounds Facilitate the Decoding of Degraded Images. Neuroscience 2018; 377:12-25. [PMID: 29408368] [DOI: 10.1016/j.neuroscience.2018.01.051]
Abstract
Semantically congruent sounds can facilitate perception of visual objects in the human brain. However, the manner in which semantically congruent sounds affect cognitive processing for degraded visual stimuli remains unclear. We presented participants with naturalistic degraded images and semantically congruent sounds from different conceptual categories in three modalities: degraded visual only, auditory only, and auditory and degraded visual. Functional magnetic resonance imaging was performed to assess variations in brain-activation spatial patterns. In order to account for the facilitation of auditory modulation at different levels, four conceptual categories of stimuli were divided into coarse and fine groups. Conjunction analysis and multivariate pattern analysis were used to investigate integrative properties. Superadditive interactions were found in the visual association cortex and subadditive interactions were observed in the superior temporal sulcus/superior temporal gyrus (STS/STG). Our results demonstrate that the visual association cortex and STS/STG are involved in the integration of auditory and degraded visual information. In addition, the pattern classification results imply that semantically congruent sounds may facilitate identification of degraded images in both coarse and fine groups. Importantly, when naturalistic visual stimuli were further subdivided, facilitation through auditory modulation exhibited category selectivity.
Affiliation(s)
- Lu Lu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, PR China.

17. Greening SG, Mitchell DG, Smith FW. Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex 2018; 101:31-43. [DOI: 10.1016/j.cortex.2017.11.016]

18.
Abstract
Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers' behavioral weights by fitting psychometric functions to participants' localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region's preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants' modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting).
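The maximum likelihood estimation model referenced above makes a concrete quantitative prediction: each cue is weighted by its relative reliability (inverse variance), and the fused estimate is less variable than either cue alone. A minimal numerical sketch with made-up noise values and locations:

```python
import numpy as np

# Reliability-weighted (maximum likelihood) cue combination: each cue is weighted by
# its reliability (inverse variance), and the fused estimate is less variable than
# either cue alone. The numbers below are made up for illustration.
sigma_a, sigma_v = 8.0, 4.0              # auditory / visual localization noise (deg)
x_a, x_v = 10.0, 2.0                     # single-trial location estimates (deg)

rel_a, rel_v = 1 / sigma_a**2, 1 / sigma_v**2
w_a = rel_a / (rel_a + rel_v)            # predicted auditory weight
x_av = w_a * x_a + (1 - w_a) * x_v       # integrated location estimate
sigma_av = np.sqrt(1 / (rel_a + rel_v))  # standard deviation of the integrated estimate

print(f"auditory weight = {w_a:.2f}, fused location = {x_av:.2f} deg, "
      f"fused sigma = {sigma_av:.2f} deg")
```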

19. Kaplan JT, Gimbel SI, Dehghani M, Immordino-Yang MH, Sagae K, Wong JD, Tipper CM, Damasio H, Gordon AS, Damasio A. Processing Narratives Concerning Protected Values: A Cross-Cultural Investigation of Neural Correlates. Cereb Cortex 2018; 27:1428-1438. [PMID: 26744541] [DOI: 10.1093/cercor/bhv325]
Abstract
Narratives are an important component of culture and play a central role in transmitting social values. Little is known, however, about how the brain of a listener/reader processes narratives. A receiver's response to narration is influenced by the narrator's framing and appeal to values. Narratives that appeal to "protected values," including core personal, national, or religious values, may be particularly effective at influencing receivers. Protected values resist compromise and are tied with identity, affective value, moral decision-making, and other aspects of social cognition. Here, we investigated the neural mechanisms underlying reactions to protected values in narratives. During fMRI scanning, we presented 78 American, Chinese, and Iranian participants with real-life stories distilled from a corpus of over 20 million weblogs. Reading these stories engaged the posterior medial, medial prefrontal, and temporo-parietal cortices. When participants believed that the protagonist was appealing to a protected value, signal in these regions was increased compared with when no protected value was perceived, possibly reflecting the intensive and iterative search required to process this material. The effect strength also varied across groups, potentially reflecting cultural differences in the degree of concern for protected values.
Affiliation(s)
- Jonas T Kaplan
- Brain and Creativity Institute; Department of Psychology
- Morteza Dehghani
- Brain and Creativity Institute; Department of Psychology; Department of Computer Science
- Mary Helen Immordino-Yang
- Brain and Creativity Institute; Rossier School of Education, University of Southern California, Los Angeles, CA, USA
- Kenji Sagae
- Department of Computer Science; Institute for Creative Technologies
- Hanna Damasio
- Brain and Creativity Institute; Department of Psychology
- Andrew S Gordon
- Department of Computer Science; Institute for Creative Technologies

20. The vestibulocochlear bases for wartime posttraumatic stress disorder manifestations. Med Hypotheses 2017; 106:44-56. [DOI: 10.1016/j.mehy.2017.06.027]

21. Perrone-Capano C, Volpicelli F, di Porzio U. Biological bases of human musicality. Rev Neurosci 2017; 28:235-245. [PMID: 28107174] [DOI: 10.1515/revneuro-2016-0046]
Abstract
Music is a universal language, present in all human societies. It pervades the lives of most human beings and can recall memories and feelings of the past, can exert positive effects on our mood, can be strongly evocative and ignite intense emotions, and can establish or strengthen social bonds. In this review, we summarize the research and recent progress on the origins and neural substrates of human musicality as well as the changes in brain plasticity elicited by listening or performing music. Indeed, music improves performance in a number of cognitive tasks and may have beneficial effects on diseased brains. The emerging picture begins to unravel how and why particular brain circuits are affected by music. Numerous studies show that music affects emotions and mood, as it is strongly associated with the brain's reward system. We can therefore assume that an in-depth study of the relationship between music and the brain may help to shed light on how the mind works and how the emotions arise and may improve the methods of music-based rehabilitation for people with neurological disorders. However, many facets of the mind-music connection still remain to be explored and enlightened.

22. Dykstra AR, Cariani PA, Gutschalk A. A roadmap for the study of conscious audition and its neural basis. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160103. [PMID: 28044014] [PMCID: PMC5206271] [DOI: 10.1098/rstb.2016.0103]
Abstract
How and which aspects of neural activity give rise to subjective perceptual experience, i.e., conscious perception, is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of the generalizability of the resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception, as well as prominent outstanding questions and the approaches and techniques that can best be used to address them. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Andrew R Dykstra
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany

23. Cortico-Cortical Connections of Primary Sensory Areas and Associated Symptoms in Migraine. eNeuro 2017; 3:eN-NWR-0163-16. [PMID: 28101529] [PMCID: PMC5239993] [DOI: 10.1523/eneuro.0163-16.2016]
Abstract
Migraine is a recurring, episodic neurological disorder characterized by headache, nausea, vomiting, and sensory disturbances. These events are thought to arise from the activation and sensitization of neurons along the trigemino-vascular pathway. From animal studies, it is known that thalamocortical projections play an important role in the transmission of nociceptive signals from the meninges to the cortex. However, little is currently known about the potential involvement of cortico-cortical feedback projections from higher-order multisensory areas and/or feedforward projections from principle primary sensory areas or subcortical structures. In a large cohort of human migraine patients (N = 40) and matched healthy control subjects (N = 40), we used resting-state intrinsic functional connectivity to examine the cortical networks associated with the three main sensory perceptual modalities of vision, audition, and somatosensation. Specifically, we sought to explore the complexity of the sensory networks as they converge and become functionally coupled in multimodal systems. We also compared self-reported retrospective migraine symptoms in the same patients, examining the prevalence of sensory symptoms across the different phases of the migraine cycle. Our results show widespread and persistent disturbances in the perceptions of multiple sensory modalities. Consistent with this observation, we discovered that primary sensory areas maintain local functional connectivity but express impaired long-range connections to higher-order association areas (including regions of the default mode and salience network). We speculate that cortico-cortical interactions are necessary for the integration of information within and across the sensory modalities and, thus, could play an important role in the initiation of migraine and/or the development of its associated symptoms.

24. Bakkour A, Lewis-Peacock JA, Poldrack RA, Schonberg T. Neural mechanisms of cue-approach training. Neuroimage 2016; 151:92-104. [PMID: 27677231] [DOI: 10.1016/j.neuroimage.2016.09.059]
Abstract
Biasing choices may prove a useful way to implement behavior change. Previous work has shown that a simple training task (the cue-approach task), which does not rely on external reinforcement, can robustly influence choice behavior by biasing choice toward items that were targeted during training. In the current study, we replicate previous behavioral findings and explore the neural mechanisms underlying the shift in preferences following cue-approach training. Given recent successes in the development and application of machine learning techniques to task-based fMRI data, which have advanced understanding of the neural substrates of cognition, we sought to leverage the power of these techniques to better understand neural changes during cue-approach training that subsequently led to a shift in choice behavior. Contrary to our expectations, we found that machine learning techniques applied to fMRI data during non-reinforced training were unsuccessful in elucidating the neural mechanism underlying the behavioral effect. However, univariate analyses during training revealed that the relationship between BOLD and choices for Go items increases as training progresses compared to choices of NoGo items primarily in lateral prefrontal cortical areas. This new imaging finding suggests that preferences are shifted via differential engagement of task control networks that interact with value networks during cue-approach training.
Affiliation(s)
- Akram Bakkour
- Imaging Research Center, The University of Texas at Austin, 100 E 24th St, Stop R9975, Austin, TX 78712, USA; Department of Neuroscience, The University of Texas at Austin, 100 E 24th St, Stop C7000, Austin, TX 78712, USA
- Jarrod A Lewis-Peacock
- Imaging Research Center, The University of Texas at Austin, 100 E 24th St, Stop R9975, Austin, TX 78712, USA; Department of Neuroscience, The University of Texas at Austin, 100 E 24th St, Stop C7000, Austin, TX 78712, USA; Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton, Stop A8000, Austin, TX 78712, USA
- Russell A Poldrack
- Imaging Research Center, The University of Texas at Austin, 100 E 24th St, Stop R9975, Austin, TX 78712, USA; Department of Neuroscience, The University of Texas at Austin, 100 E 24th St, Stop C7000, Austin, TX 78712, USA; Department of Psychology, The University of Texas at Austin, 108 E Dean Keeton, Stop A8000, Austin, TX 78712, USA
- Tom Schonberg
- Imaging Research Center, The University of Texas at Austin, 100 E 24th St, Stop R9975, Austin, TX 78712, USA; Department of Neurobiology, Faculty of Life Sciences and Sagol School of Neuroscience, Tel Aviv University, P.O. Box 39040, Tel Aviv 6997801, Israel.

25. Martin A. GRAPES-Grounding representations in action, perception, and emotion systems: How object properties and categories are represented in the human brain. Psychon Bull Rev 2016; 23:979-990. [PMID: 25968087] [PMCID: PMC5111803] [DOI: 10.3758/s13423-015-0842-3]
Abstract
In this article, I discuss some of the latest functional neuroimaging findings on the organization of object concepts in the human brain. I argue that these data provide strong support for viewing concepts as the products of highly interactive neural circuits grounded in the action, perception, and emotion systems. The nodes of these circuits are defined by regions representing specific object properties (e.g., form, color, and motion) and thus are property-specific, rather than strictly modality-specific. How these circuits are modified by external and internal environmental demands, the distinction between representational content and format, and the grounding of abstract social concepts are also discussed.
Collapse
Affiliation(s)
- Alex Martin
- Laboratory of Brain and Cognition, National Institute of Mental Health, Building 10, Room 4C-104, 10 Center Drive MSC 1366, Bethesda, MD, 20892-1366, USA.
| |
Collapse
|
26
|
Distinct Computational Principles Govern Multisensory Integration in Primary Sensory and Association Cortices. Curr Biol 2016; 26:509-14. [DOI: 10.1016/j.cub.2015.12.056] [Citation(s) in RCA: 104] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 11/06/2015] [Accepted: 12/22/2015] [Indexed: 11/19/2022]
|
27
|
Almeida J, He D, Chen Q, Mahon BZ, Zhang F, Gonçalves ÓF, Fang F, Bi Y. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf. Psychol Sci 2015; 26:1771-82. [PMID: 26423461 DOI: 10.1177/0956797615598970] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2014] [Accepted: 07/14/2015] [Indexed: 11/17/2022] Open
Abstract
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents the visual field location of a stimulus, a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex.
Collapse
Affiliation(s)
- Jorge Almeida
- Faculty of Psychology and Educational Sciences, University of Coimbra; Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra
| | - Dongjun He
- Department of Psychology, Peking University; Key Laboratory of Machine Perception (Ministry of Education), Peking University
| | - Quanjing Chen
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University; IDG/McGovern Institute for Brain Research, Beijing Normal University; Department of Brain and Cognitive Sciences, University of Rochester
| | - Bradford Z Mahon
- Department of Brain and Cognitive Sciences, University of Rochester; Department of Neurosurgery, University of Rochester; Center for Visual Science, University of Rochester
| | - Fan Zhang
- Faculty of Psychology and Educational Sciences, University of Coimbra; Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra
| | - Óscar F Gonçalves
- School of Psychology, University of Minho; Neuropsychophysiology Laboratory, Research Center in Psychology, School of Psychology, University of Minho; Bouvé College of Health Sciences, Northeastern University
| | - Fang Fang
- Department of Psychology, Peking University; Key Laboratory of Machine Perception (Ministry of Education), Peking University; Peking-Tsinghua Center for Life Sciences, Peking University; PKU-IDG/McGovern Institute for Brain Research, Peking University
| | - Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University; IDG/McGovern Institute for Brain Research, Beijing Normal University
| |
Collapse
|
28
|
Riedel P, Ragert P, Schelinski S, Kiebel SJ, von Kriegstein K. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition. Cortex 2015; 68:86-99. [DOI: 10.1016/j.cortex.2014.11.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2014] [Revised: 10/24/2014] [Accepted: 11/25/2014] [Indexed: 12/31/2022]
|
29
|
Man K, Damasio A, Meyer K, Kaplan JT. Convergent and invariant object representations for sight, sound, and touch. Hum Brain Mapp 2015; 36:3629-40. [PMID: 26047030 DOI: 10.1002/hbm.22867] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2014] [Revised: 05/21/2015] [Accepted: 05/21/2015] [Indexed: 12/30/2022] Open
Abstract
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
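A minimal sketch of the cross-modal classification logic this abstract describes (train a classifier on patterns from one modality, test it on patterns evoked by the same objects in another modality), written against simulated data with scikit-learn; the array shapes, noise levels, and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Cross-modal pattern classification sketch: fit on visual patterns, test on
# tactile patterns for the same objects. Simulated data stand in for
# single-trial voxel patterns from a region of interest.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels, n_objects = 60, 200, 3

# Object-specific patterns that are partially shared across modalities.
shared = rng.normal(size=(n_objects, n_voxels))
labels = rng.integers(0, n_objects, size=n_trials)
visual = shared[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
tactile = shared[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000))
clf.fit(visual, labels)                # train on the visual patterns
acc = clf.score(tactile, labels)       # test on the tactile patterns
print(f"cross-modal (visual -> tactile) accuracy: {acc:.2f} (chance = {1/n_objects:.2f})")
```

Above-chance accuracy under this scheme indicates a representation of object identity that generalizes across the input modality.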
Collapse
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Antonio Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Kaspar Meyer
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089; Institute of Anesthesiology, University Hospital, University of Zurich, Zurich, Switzerland
| | - Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| |
Collapse
|
30
|
Abstract
Despite considerable progress in the identification of the molecular targets of general anesthetics, it remains unclear how these drugs affect the brain at the systems level to suppress consciousness. According to recent proposals, anesthetics may achieve this feat by interfering with corticocortical top–down processes, that is, by interrupting information flow from association to early sensory cortices. Such a view entails two immediate questions. First, at which anatomical site, and by virtue of which physiological mechanism, do anesthetics interfere with top–down signals? Second, why does a breakdown of top–down signaling cause unconsciousness? While an answer to the first question can be gleaned from emerging neurophysiological evidence on dendritic signaling in cortical pyramidal neurons, a response to the second is offered by increasingly popular theoretical frameworks that place the element of prediction at the heart of conscious perception.
Collapse
|
31
|
Zhang X, Zhang Q, Hu X, Zhang B. Neural representation of three-dimensional acoustic space in the human temporal lobe. Front Hum Neurosci 2015; 9:203. [PMID: 25932011 PMCID: PMC4399328 DOI: 10.3389/fnhum.2015.00203] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Accepted: 03/27/2015] [Indexed: 11/13/2022] Open
Abstract
Sound localization is an important function of the human brain, but the underlying cortical mechanisms remain unclear. In this study, we recorded auditory stimuli in three-dimensional space and then replayed the stimuli through earphones during functional magnetic resonance imaging (fMRI). By employing a machine learning algorithm, we successfully decoded sound location from the blood oxygenation level-dependent signals in the temporal lobe. Analysis of the data revealed that different cortical patterns were evoked by sounds from different locations. Specifically, discrimination of sound location along the abscissa axis evoked robust responses in the left posterior superior temporal gyrus (STG) and right mid-STG, discrimination along the elevation (EL) axis evoked robust responses in the left posterior middle temporal lobe (MTL) and right STG, and discrimination along the ordinate axis evoked robust responses in the left mid-MTL and right mid-STG. These results support a distributed representation of acoustic space in human cortex.
Collapse
Affiliation(s)
- Xiaolu Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
| | - Qingtian Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
| | - Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
| | - Bo Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
| |
Collapse
|
32
|
Robinson AK, Reinhard J, Mattingley JB. Olfaction Modulates Early Neural Responses to Matching Visual Objects. J Cogn Neurosci 2015; 27:832-41. [DOI: 10.1162/jocn_a_00732] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Sensory information is initially registered within anatomically and functionally segregated brain networks but is also integrated across modalities in higher cortical areas. Although considerable research has focused on uncovering the neural correlates of multisensory integration for the modalities of vision, audition, and touch, much less attention has been devoted to understanding interactions between vision and olfaction in humans. In this study, we asked how odors affect neural activity evoked by images of familiar visual objects associated with characteristic smells. We employed scalp-recorded EEG to measure visual ERPs evoked by briefly presented pictures of familiar objects, such as an orange, mint leaves, or a rose. During presentation of each visual stimulus, participants inhaled either a matching odor, a nonmatching odor, or plain air. The N1 component of the visual ERP was significantly enhanced for matching odors in women, but not in men. This is consistent with evidence that women are superior in detecting, discriminating, and identifying odors and that they have a higher gray matter concentration in olfactory areas of the OFC. We conclude that early visual processing is influenced by olfactory cues because of associations between odors and the objects that emit them, and that these associations are stronger in women than in men.
Collapse
|
33
|
Lee D, Jang C, Park HJ. Multivariate detrending of fMRI signal drifts for real-time multiclass pattern classification. Neuroimage 2015; 108:203-13. [DOI: 10.1016/j.neuroimage.2014.12.062] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2014] [Revised: 12/18/2014] [Accepted: 12/24/2014] [Indexed: 10/24/2022] Open
|
34
|
Axelrod V, Yovel G. Successful decoding of famous faces in the fusiform face area. PLoS One 2015; 10:e0117126. [PMID: 25714434 PMCID: PMC4340964 DOI: 10.1371/journal.pone.0117126] [Citation(s) in RCA: 77] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Accepted: 12/19/2014] [Indexed: 11/18/2022] Open
Abstract
What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.
Collapse
Affiliation(s)
- Vadim Axelrod
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
| | - Galit Yovel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
35
|
Decoding multiple sound categories in the human temporal cortex using high resolution fMRI. PLoS One 2015; 10:e0117303. [PMID: 25692885 PMCID: PMC4333227 DOI: 10.1371/journal.pone.0117303] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2013] [Accepted: 12/22/2014] [Indexed: 11/19/2022] Open
Abstract
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.
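A rough stand-in for the multi-class SVM recursive feature elimination (MSVM-RFE) step described above, using scikit-learn's RFE wrapped around a multi-class linear SVM; this approximates the idea (iteratively discard uninformative voxels, then cross-validate on the surviving set) rather than reproducing the authors' implementation, and all data and parameter values are assumptions.

```python
# MSVM-RFE-style voxel selection sketch: RFE drops the weakest voxels according
# to the linear SVM weights, and the reduced pattern is classified with
# cross-validation. Simulated data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n_trials, n_voxels, n_categories = 120, 500, 4
labels = rng.integers(0, n_categories, size=n_trials)
X = rng.normal(size=(n_trials, n_voxels))
# Make the first 40 voxels weakly category-selective.
X[:, :40] += np.eye(n_categories)[labels] @ rng.normal(size=(n_categories, 40))

pipe = make_pipeline(
    RFE(LinearSVC(C=1.0, max_iter=10000), n_features_to_select=50, step=0.1),
    LinearSVC(C=1.0, max_iter=10000),
)
scores = cross_val_score(pipe, X, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f} (chance = {1/n_categories:.2f})")
```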
Collapse
|
36
|
Beyond the word and image: characteristics of a common meaning system for language and vision revealed by functional and structural imaging. Neuroimage 2015; 106:72-85. [DOI: 10.1016/j.neuroimage.2014.11.024] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Revised: 10/23/2014] [Accepted: 11/11/2014] [Indexed: 12/19/2022] Open
|
37
|
Linke AC, Cusack R. Flexible information coding in human auditory cortex during perception, imagery, and STM of complex sounds. J Cogn Neurosci 2015; 27:1322-33. [PMID: 25603030 DOI: 10.1162/jocn_a_00780] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Auditory cortex is the first cortical region of the human brain to process sounds. However, it has recently been shown that its neurons also fire in the absence of direct sensory input, during memory maintenance and imagery. This has commonly been taken to reflect neural coding of the same acoustic information as during the perception of sound. However, the results of the current study suggest that the type of information encoded in auditory cortex is highly flexible. During perception and memory maintenance, neural activity patterns are stimulus specific, reflecting individual sound properties. Auditory imagery of the same sounds evokes similar overall activity in auditory cortex as perception. However, during imagery abstracted, categorical information is encoded in the neural patterns, particularly when individuals are experiencing more vivid imagery. This highlights the necessity to move beyond traditional "brain mapping" inference in human neuroimaging, which assumes common regional activation implies similar mental representations.
Collapse
Affiliation(s)
- Annika C Linke
- Western University, London, ON, Canada; Medical Research Council, Cambridge, United Kingdom
| | - Rhodri Cusack
- Western University, London, ON, Canada; Medical Research Council, Cambridge, United Kingdom
| |
Collapse
|
38
|
Blank H, Kiebel SJ, von Kriegstein K. How the human brain exchanges information across sensory modalities to recognize other people. Hum Brain Mapp 2014; 36:324-39. [PMID: 25220190 DOI: 10.1002/hbm.22631] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2014] [Revised: 08/29/2014] [Accepted: 08/29/2014] [Indexed: 11/09/2022] Open
Abstract
Recognizing the identity of other individuals across different sensory modalities is critical for successful social interaction. In the human brain, face- and voice-sensitive areas are separate, but structurally connected. What kind of information is exchanged between these specialized areas during cross-modal recognition of other individuals is currently unclear. For faces, specific areas are sensitive to identity and to physical properties. It is an open question whether voices activate representations of face identity or physical facial properties in these areas. To address this question, we used functional magnetic resonance imaging in humans and a voice-face priming design. In this design, familiar voices were followed by morphed faces that matched or mismatched with respect to identity or physical properties. The results showed that responses in face-sensitive regions were modulated when face identity or physical properties did not match to the preceding voice. The strength of this mismatch signal depended on the level of certainty the participant had about the voice identity. This suggests that both identity and physical property information was provided by the voice to face areas. The activity and connectivity profiles differed between face-sensitive areas: (i) the occipital face area seemed to receive information about both physical properties and identity, (ii) the fusiform face area seemed to receive identity, and (iii) the anterior temporal lobe seemed to receive predominantly identity information from the voice. We interpret these results within a predictive coding scheme in which both identity and physical property information is used across sensory modalities to recognize individuals.
Collapse
Affiliation(s)
- Helen Blank
- Max Planck Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany; MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, United Kingdom
| | | | | |
Collapse
|
39
|
Sensory modality-specific spatio-temporal dynamics in response to counting tasks. Neurosci Lett 2014; 581:20-5. [PMID: 25130313 DOI: 10.1016/j.neulet.2014.08.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2014] [Revised: 07/21/2014] [Accepted: 08/06/2014] [Indexed: 11/22/2022]
Abstract
From perception to behavior, the human brain processes information in a flexible and abstract manner independent of an input sensory modality. However, the mechanism of such multisensory neural information processing in the brain remains under debate. Relatedly, studies often aim to investigate whether certain brain regions behave in a modality-specific manner or invariantly. Previous studies regarding multisensory information processing have commonly reported only on the activation of brain regions in response to unimodal or multimodal sensory stimuli. However, less attention has been given to the modality effect on the dynamics of such regions, which could advance our understanding of neuronal information processing. In this study, we investigated whether brain regions show modality-specific or invariant high-temporal dynamics. Electroencephalography (EEG) was recorded from healthy, normal subjects during beep-, flash- and click-counting tasks, which corresponded to auditory, visual and tactile modalities, respectively. EEG dynamics regarding event-related spectral perturbations (ERSP) in ICA time-series data were compared across the sensory modalities using a multivariate pattern analysis. We found modality-specific EEG dynamics in the prefrontal cortex, whereas we found modality-specific and cross-modal dynamics in the early visual cortex.
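A small, time-resolved decoding sketch in the spirit of the multivariate analysis described above: at each time point, classify the stimulus modality from spectral-power features with cross-validation. The array shapes, the feature construction, and the injected effect are assumptions made for illustration only.

```python
# Time-resolved MVPA sketch: decode modality (auditory / visual / tactile)
# from ERSP-like features separately at every time point.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_features, n_times = 90, 40, 50          # trials x (components x freqs) x time points
labels = np.repeat([0, 1, 2], n_trials // 3)         # 0=auditory, 1=visual, 2=tactile
ersp = rng.normal(size=(n_trials, n_features, n_times))
ersp[:, :5, 20:35] += 0.8 * labels[:, None, None]    # inject a modality effect mid-trial

accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(LinearSVC(max_iter=10000), ersp[:, :, t], labels, cv=5).mean()

print("peak decoding accuracy:", accuracy.max().round(2), "at time index", int(accuracy.argmax()))
```

The resulting accuracy-by-time curve is what distinguishes modality-specific from modality-invariant temporal dynamics in such designs.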
Collapse
|
40
|
Abstract
In brain imaging, solving learning problems in multi-subjects settings is difficult because of the differences that exist across individuals. Here we introduce a novel classification framework based on group-invariant graphical representations, allowing to overcome the inter-subject variability present in functional magnetic resonance imaging (fMRI) data and to perform multivariate pattern analysis across subjects. Our contribution is twofold: first, we propose an unsupervised representation learning scheme that encodes all relevant characteristics of distributed fMRI patterns into attributed graphs; second, we introduce a custom-designed graph kernel that exploits all these characteristics and makes it possible to perform supervised learning (here, classification) directly in graph space. The well-foundedness of our technique and the robustness of the performance to the parameter setting are demonstrated through inter-subject classification experiments conducted on both artificial data and a real fMRI experiment aimed at characterizing local cortical representations. Our results show that our framework produces accurate inter-subject predictions and that it outperforms a wide range of state-of-the-art vector- and parcel-based classification methods. Moreover, the genericity of our method makes it easily adaptable to a wide range of potential applications. The dataset used in this study and an implementation of our framework are available at http://dx.doi.org/10.6084/m9.figshare.1086317.
Collapse
|
41
|
Gallivan JP, Cant JS, Goodale MA, Flanagan JR. Representation of object weight in human ventral visual cortex. Curr Biol 2014; 24:1866-73. [PMID: 25065755 DOI: 10.1016/j.cub.2014.06.046] [Citation(s) in RCA: 81] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2014] [Revised: 05/21/2014] [Accepted: 06/17/2014] [Indexed: 10/25/2022]
Abstract
Skilled manipulation requires the ability to predict the weights of viewed objects based on learned associations linking object weight to object visual appearance. However, the neural mechanisms involved in extracting weight information from viewed object properties are unknown. Given that ventral visual pathway areas represent a wide variety of object features, one intriguing but as yet untested possibility is that these areas also represent object weight, a nonvisual motor-relevant object property. Here, using event-related fMRI and pattern classification techniques, we tested the novel hypothesis that object-sensitive regions in occipitotemporal cortex (OTC), in addition to traditional motor-related brain areas, represent object weight when preparing to lift that object. In two studies, the same participants prepared and then executed lifting actions with objects of varying weight. In the first study, we show that when lifting visually identical objects, where predicted weight is based solely on sensorimotor memory, weight is represented in object-sensitive OTC. In the second study, we show that when object weight is associated with a particular surface texture, that texture-sensitive OTC areas also come to represent object weight. Notably, these texture-sensitive areas failed to carry information about weight in the first study, when object surface properties did not specify weight. Our results indicate that the integration of visual and motor-relevant object information occurs at the level of single OTC areas and provide evidence that the ventral visual pathway is actively and flexibly engaged in processing object weight, an object property critical for action planning and control.
Collapse
Affiliation(s)
- Jason P Gallivan
- Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada; Department of Psychology, Queen's University, Kingston, ON K7L 3N6, Canada.
| | - Jonathan S Cant
- Department of Psychology, University of Toronto, Scarborough, ON M1C 1A4, Canada
| | - Melvyn A Goodale
- Brain and Mind Institute, University of Western Ontario, London, ON N6A 5B7, Canada; Department of Psychology, University of Western Ontario, London, ON N6A 5C2, Canada
| | - J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada; Department of Psychology, Queen's University, Kingston, ON K7L 3N6, Canada.
| |
Collapse
|
42
|
Man K, Kaplan J, Damasio H, Damasio A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J Comp Neurol 2014; 521:4097-111. [PMID: 23840023 DOI: 10.1002/cne.23408] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 04/30/2013] [Accepted: 06/28/2013] [Indexed: 11/08/2022]
Abstract
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific and multisensory association cortices, culminating in maximally integrative regions. It was also established that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships and to the proposal of heuristic anatomofunctional frameworks. In a second part, the article reviews new evidence that has accumulated from studies of functional neuroimaging, employing both univariate and multivariate analyses, as well as electrophysiology, in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Collapse
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | | | | | | |
Collapse
|
43
|
Vetter P, Smith FW, Muckli L. Decoding sound and imagery content in early visual cortex. Curr Biol 2014; 24:1256-62. [PMID: 24856208 PMCID: PMC4046224 DOI: 10.1016/j.cub.2014.04.020] [Citation(s) in RCA: 146] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2013] [Revised: 02/28/2014] [Accepted: 04/08/2014] [Indexed: 11/17/2022]
Abstract
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus (e.g., [1]). However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections [2–5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level (e.g., [6]), in line with predictive coding models [7–10].
Highlights: Early visual cortex receives nonretinal input carrying abstract information. Both auditory perception and imagery generate consistent top-down input. Information feedback may be mediated by multisensory areas. Feedback is robust to attentional, but not visuospatial, manipulation.
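The abstract's key claim that decoding generalizes to new sound exemplars of the same category implies a leave-one-exemplar-out scheme. A hedged sketch of that logic with scikit-learn follows; the group structure, counts, and noise levels are assumptions, not the study's design.

```python
# Exemplar-generalization sketch: hold out all trials of one sound exemplar,
# train a category classifier on the remaining exemplars, and test whether the
# held-out exemplar's category is still predicted correctly. Simulated data
# stand in for early-visual-cortex voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_categories, n_exemplars, n_reps, n_voxels = 3, 4, 5, 150
category = np.repeat(np.arange(n_categories), n_exemplars * n_reps)
exemplar = np.repeat(np.arange(n_categories * n_exemplars), n_reps)   # group label

category_patterns = rng.normal(size=(n_categories, n_voxels))
X = category_patterns[category] + rng.normal(scale=2.0, size=(len(category), n_voxels))

scores = cross_val_score(LinearSVC(max_iter=10000), X, category,
                         groups=exemplar, cv=LeaveOneGroupOut())
print(f"exemplar-generalization accuracy: {scores.mean():.2f} (chance = {1/n_categories:.2f})")
```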
Collapse
Affiliation(s)
- Petra Vetter
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK; Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, Medical School and Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, Case Postale 60, 1211 Geneva, Switzerland.
| | - Fraser W Smith
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.
| |
Collapse
|
44
|
Using fMRI to decode true thoughts independent of intention to conceal. Neuroimage 2014; 99:80-92. [PMID: 24844742 DOI: 10.1016/j.neuroimage.2014.05.034] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2013] [Revised: 04/11/2014] [Accepted: 05/13/2014] [Indexed: 11/23/2022] Open
Abstract
Multi-variate pattern analysis (MVPA) applied to BOLD-fMRI has proven successful at decoding complicated fMRI signal patterns associated with a variety of cognitive processes. One cognitive process, not yet investigated, is the mental representation of "Yes/No" thoughts that precede the actual overt response to a binary "Yes/No" question. In this study, we focus on examining: (1) whether spatial patterns of the hemodynamic response carry sufficient information to allow reliable decoding of "Yes/No" thoughts; and (2) whether decoding of "Yes/No" thoughts is independent of the intention to respond honestly or dishonestly. To achieve this goal, we conducted two separate experiments. Experiment 1, collected on a 3T scanner, examined the whole brain to identify regions that carry sufficient information to permit significantly above-chance prediction of "Yes/No" thoughts at the group level. In Experiment 2, collected on a 7T scanner, we focused on the regions identified in Experiment 1 to examine the capability of achieving high decoding accuracy at the single subject level. A set of regions--namely right superior temporal gyrus, left supra-marginal gyrus, and left middle frontal gyrus--exhibited high decoding power. Decoding accuracy for these regions increased with trial averaging. When 18 trials were averaged, the median accuracies were 82.5%, 77.5%, and 79.5%, respectively. When trials were separated according to deceptive intentions (set via experimental cues), and classifiers were trained on honest trials, but tested on trials where subjects were asked to deceive, the median accuracies of these regions still reached 66%, 75%, and 78.5%. These results provide evidence that concealed "Yes/No" thoughts are encoded in the BOLD signal, retaining some level of independence from the subject's intentions to answer honestly or dishonestly. These findings also suggest the theoretical possibility for more efficient brain-computer interfaces where subjects only need to think their answers to communicate.
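A brief sketch of the trial-averaging idea the abstract reports (accuracy rising as more same-answer trials are averaged before classification), using simulated patterns with a weak single-trial signal; the averaging function, sample counts, and signal strength are assumptions for illustration.

```python
# Trial-averaging sketch: average non-overlapping blocks of same-label trials
# before cross-validated classification and compare accuracy across block sizes.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials_per_class, n_voxels, signal = 180, 100, 0.15
pattern = rng.normal(size=n_voxels)
X = np.vstack([ signal * pattern + rng.normal(size=(n_trials_per_class, n_voxels)),
               -signal * pattern + rng.normal(size=(n_trials_per_class, n_voxels))])
y = np.repeat([1, 0], n_trials_per_class)   # 1 = "Yes", 0 = "No"

def averaged(X, y, n_avg):
    """Average non-overlapping blocks of n_avg same-label trials."""
    Xs, ys = [], []
    for label in np.unique(y):
        trials = X[y == label]
        n_blocks = len(trials) // n_avg
        Xs.append(trials[:n_blocks * n_avg].reshape(n_blocks, n_avg, -1).mean(axis=1))
        ys.append(np.full(n_blocks, label))
    return np.vstack(Xs), np.concatenate(ys)

for n_avg in (1, 6, 18):
    Xa, ya = averaged(X, y, n_avg)
    acc = cross_val_score(LinearSVC(max_iter=10000), Xa, ya, cv=5).mean()
    print(f"averaging {n_avg:2d} trials -> accuracy {acc:.2f}")
```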
Collapse
|
45
|
Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe. J Neurosci 2014; 34:332-8. [PMID: 24381294 DOI: 10.1523/jneurosci.1302-13.2014] [Citation(s) in RCA: 64] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding spoken words within languages (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within language were distributed in multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across language were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within semantic category discriminations.
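A sketch of the across-language generalization analysis described above, paired with a simple label-permutation test to judge whether the observed cross-language accuracy exceeds chance; the data are simulated and the bidirectional averaging and permutation scheme are assumptions, not the authors' statistical procedure.

```python
# Cross-language decoding sketch: train on patterns evoked by English nouns,
# test on the Dutch equivalents (and vice versa), then build a null
# distribution by shuffling the training labels.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_trials, n_voxels, n_words = 48, 100, 4
labels = rng.integers(0, n_words, size=n_trials)
concepts = rng.normal(size=(n_words, n_voxels))
english = concepts[labels] + rng.normal(scale=1.5, size=(n_trials, n_voxels))
dutch = concepts[labels] + rng.normal(scale=1.5, size=(n_trials, n_voxels))

def cross_language_accuracy(y_train):
    a = LinearSVC(max_iter=10000).fit(english, y_train).score(dutch, labels)
    b = LinearSVC(max_iter=10000).fit(dutch, y_train).score(english, labels)
    return (a + b) / 2

observed = cross_language_accuracy(labels)
null = np.array([cross_language_accuracy(rng.permutation(labels)) for _ in range(200)])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"cross-language accuracy {observed:.2f}, permutation p = {p_value:.3f}")
```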
Collapse
|
46
|
Abstract
Language is a high-level cognitive function, so exploring the neural correlates of unconscious language processing is essential for understanding the limits of unconscious processing in general. The results of several functional magnetic resonance imaging studies have suggested that unconscious lexical and semantic processing is confined to the posterior temporal lobe, without involvement of the frontal lobe—the regions that are indispensable for conscious language processing. However, previous studies employed a similarly designed masked priming paradigm with briefly presented single and contextually unrelated words. It is thus possible that the stimulation level was insufficiently strong to be detected in the high-level frontal regions. Here, in a high-resolution fMRI and multivariate pattern analysis study we explored the neural correlates of subliminal language processing using a novel paradigm, where written meaningful sentences were suppressed from awareness for an extended duration using continuous flash suppression. We found that subjectively and objectively invisible meaningful sentences and unpronounceable nonwords could be discriminated not only in the left posterior superior temporal sulcus (STS), but critically, also in the left middle frontal gyrus. We conclude that frontal lobes play a role in unconscious language processing and that activation of the frontal lobes per se might not be sufficient for achieving conscious awareness.
Collapse
Affiliation(s)
- Vadim Axelrod
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; UCL Institute of Cognitive Neuroscience
| | - Moshe Bar
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| | - Geraint Rees
- UCL Institute of Cognitive Neuroscience; Wellcome Trust Centre for Neuroimaging, University College London, London, UK
| | - Galit Yovel
- School of Psychological Sciences; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| |
Collapse
|
47
|
Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. J Neurosci 2014; 33:18906-16. [PMID: 24285896 DOI: 10.1523/jneurosci.3809-13.2013] [Citation(s) in RCA: 125] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Understanding the meanings of words and objects requires the activation of underlying conceptual representations. Semantic representations are often assumed to be coded such that meaning is evoked regardless of the input modality. However, the extent to which meaning is coded in modality-independent or amodal systems remains controversial. We address this issue in a human fMRI study investigating the neural processing of concepts, presented separately as written words and pictures. Activation maps for each individual word and picture were used as input for searchlight-based multivoxel pattern analyses. Representational similarity analysis was used to identify regions correlating with low-level visual models of the words and objects and the semantic category structure common to both. Common semantic category effects for both modalities were found in a left-lateralized network, including left posterior middle temporal gyrus (LpMTG), left angular gyrus, and left intraparietal sulcus (LIPS), in addition to object- and word-specific semantic processing in ventral temporal cortex and more anterior MTG, respectively. To explore differences in representational content across regions and modalities, we developed novel data-driven analyses, based on k-means clustering of searchlight dissimilarity matrices and seeded correlation analysis. These revealed subtle differences in the representations in semantic-sensitive regions, with representations in LIPS being relatively invariant to stimulus modality and representations in LpMTG being uncorrelated across modality. These results suggest that, although both LpMTG and LIPS are involved in semantic processing, only the functional role of LIPS is the same regardless of the visual input, whereas the functional role of LpMTG differs for words and objects.
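A hedged sketch of the core representational similarity analysis step described above: build a neural representational dissimilarity matrix (RDM) from condition-wise patterns and correlate it with a model RDM coding semantic category. The simulated patterns, item counts, and distance metrics are assumptions; a searchlight analysis would simply repeat this computation at every voxel neighborhood.

```python
# RSA sketch: correlate the lower triangle of a neural RDM with a
# semantic-category model RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_items, n_voxels = 24, 120
category = np.repeat(np.arange(4), n_items // 4)             # 4 semantic categories

# Simulated item-by-voxel activation patterns with some category structure.
patterns = rng.normal(size=(4, n_voxels))[category] + rng.normal(scale=1.5, size=(n_items, n_voxels))

neural_rdm = pdist(patterns, metric="correlation")            # 1 - Pearson r between items
model_rdm = pdist(category[:, None], metric="hamming")        # 0 = same category, 1 = different

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho = {rho:.2f}, p = {p:.3g}")
```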
Collapse
|
48
|
Liang M, Mouraux A, Hu L, Iannetti GD. Primary sensory cortices contain distinguishable spatial patterns of activity for each sense. Nat Commun 2013; 4:1979. [PMID: 23752667 PMCID: PMC3709474 DOI: 10.1038/ncomms2979] [Citation(s) in RCA: 104] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2013] [Accepted: 05/07/2013] [Indexed: 11/09/2022] Open
Abstract
Whether primary sensory cortices are essentially multisensory or whether they respond to only one sense is an emerging debate in neuroscience. Here we use a multivariate pattern analysis of functional magnetic resonance imaging data in humans to demonstrate that simple and isolated stimuli of one sense elicit distinguishable spatial patterns of neuronal responses, not only in their corresponding primary sensory cortex, but in other primary sensory cortices. These results indicate that primary sensory cortices, traditionally regarded as unisensory, contain unique signatures of other senses and, thereby, prompt a reconsideration of how sensory information is coded in the human brain.
Human primary sensory cortices are traditionally regarded as being able to process only one sensory modality. Liang and colleagues use brain imaging to show that, as well as being processed in typically corresponding cortical areas, different sensory modalities are also processed in atypical cortical areas.
Collapse
Affiliation(s)
- M Liang
- Department of Neuroscience, Physiology and Pharmacology, University College London, London WC1E 6BT, UK.
| | | | | | | |
Collapse
|
49
|
Abstract
Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.
Collapse
|
50
|
Abstract
Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects.
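The whole-brain searchlight decoding mentioned in this abstract can be illustrated with a compact loop: for every voxel, gather its neighbors within a small radius and run cross-validated decoding on that local pattern. The tiny grid, radius, and injected cluster below are assumptions for illustration; real analyses typically use a dedicated, optimized implementation on masked fMRI volumes.

```python
# Searchlight decoding sketch over a small simulated 3D volume.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
shape, n_trials, radius = (8, 8, 8), 40, 1.5
labels = np.repeat([0, 1], n_trials // 2)                    # two visual object categories
data = rng.normal(size=(n_trials,) + shape)
data[labels == 1, 2:5, 2:5, 2:5] += 0.7                      # local informative cluster

coords = np.array(np.unravel_index(np.arange(np.prod(shape)), shape)).T
flat = data.reshape(n_trials, -1)
accuracy_map = np.zeros(np.prod(shape))

for center in range(len(coords)):
    neighbors = np.where(np.linalg.norm(coords - coords[center], axis=1) <= radius)[0]
    scores = cross_val_score(LinearSVC(max_iter=10000), flat[:, neighbors], labels, cv=4)
    accuracy_map[center] = scores.mean()

accuracy_map = accuracy_map.reshape(shape)
print("peak searchlight accuracy:", accuracy_map.max().round(2),
      "at voxel", np.unravel_index(accuracy_map.argmax(), shape))
```

The accuracy map produced this way is what gets thresholded to identify regions, such as the parietal areas reported above, that carry decodable category information.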
Collapse
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, UK; The Brain and Mind Institute, University of Western Ontario, London, ON, Canada N6A 5B7
| | - Melvyn A Goodale
- The Brain and Mind Institute, University of Western Ontario, London, ON, Canada N6A 5B7
| |
Collapse
|