1. Froesel M, Gacoin M, Clavagnier S, Hauser M, Goudard Q, Ben Hamed S. Macaque claustrum, pulvinar and putative dorsolateral amygdala support the cross-modal association of social audio-visual stimuli based on meaning. Eur J Neurosci 2024; 59:3203-3223. PMID: 38637993; DOI: 10.1111/ejn.16328.
Abstract
Social communication draws on several cognitive functions such as perception, emotion recognition and attention. The association of audio-visual information is essential to the processing of species-specific communication signals. In this study, we used functional magnetic resonance imaging to identify the subcortical areas involved in the cross-modal association of visual and auditory information based on their common social meaning. We identified three subcortical regions involved in audio-visual processing of species-specific communicative signals: the dorsolateral amygdala, the claustrum and the pulvinar. These regions responded to visual, auditory congruent and audio-visual stimulation. However, none of them was significantly activated when the auditory stimuli were semantically incongruent with the visual context, thus showing an influence of visual context on auditory processing. For example, positive vocalizations (coos) activated all three subcortical regions when presented in the context of a positive facial expression (lipsmacks) but not when presented in the context of a negative facial expression (aggressive faces). In addition, the medial pulvinar and the amygdala showed multisensory integration, such that audiovisual stimuli resulted in activations significantly higher than those observed for the highest unimodal response. Last, the pulvinar responded in a task-dependent manner, along a specific spatial sensory gradient. We propose that the dorsolateral amygdala, the claustrum and the pulvinar belong to a multisensory network that modulates the perception of visual socioemotional information and vocalizations as a function of the relevance of the stimuli in the social context. SIGNIFICANCE STATEMENT: Understanding and correctly associating socioemotional information across sensory modalities, such that happy faces predict laughter and escape scenes predict screams, is essential when living in complex social groups. Using functional magnetic resonance imaging in the awake macaque, we identify three subcortical structures (dorsolateral amygdala, claustrum and pulvinar) that respond to auditory information only when it matches the ongoing visual socioemotional context, such as hearing positively valenced coo calls while seeing positively valenced mutual grooming. We additionally describe task-dependent activations in the pulvinar, organized along a specific spatial sensory gradient, supporting its role as a network regulator.
Affiliation(s)
- Mathilda Froesel
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Maëva Gacoin
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Simon Clavagnier
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Marc Hauser
- Risk-Eraser, West Falmouth, Massachusetts, USA
- Quentin Goudard
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR5229 CNRS Université de Lyon, Bron Cedex, France
2. Crucianelli L, Reader AT, Ehrsson HH. Subcortical contributions to the sense of body ownership. Brain 2024; 147:390-405. PMID: 37847057; PMCID: PMC10834261; DOI: 10.1093/brain/awad359.
Abstract
The sense of body ownership (i.e. the feeling that our body or its parts belong to us) plays a key role in bodily self-consciousness and is believed to stem from multisensory integration. Experimental paradigms such as the rubber hand illusion have been developed to allow the controlled manipulation of body ownership in laboratory settings, providing effective tools for investigating malleability in the sense of body ownership and the boundaries that distinguish self from other. Neuroimaging studies of body ownership converge on the involvement of several cortical regions, including the premotor cortex and posterior parietal cortex. However, relatively less attention has been paid to subcortical structures that may also contribute to body ownership perception, such as the cerebellum and putamen. Here, on the basis of neuroimaging and neuropsychological observations, we provide an overview of relevant subcortical regions and consider their potential role in generating and maintaining a sense of ownership over the body. We also suggest novel avenues for future research targeting the role of subcortical regions in making sense of the body as our own.
Affiliation(s)
- Laura Crucianelli
- Department of Biological and Experimental Psychology, Queen Mary University of London, London E1 4DQ, UK
- Department of Neuroscience, Karolinska Institutet, Stockholm 171 65, Sweden
- Arran T Reader
- Department of Psychology, Faculty of Natural Sciences, University of Stirling, Stirling FK9 4LA, UK
- H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm 171 65, Sweden
3. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. Neuropsychologia 2023; 188:108657. PMID: 37543139; PMCID: PMC10529692; DOI: 10.1016/j.neuropsychologia.2023.108657.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal gyrus (Broca's area), the left supramarginal gyrus, and the right mid-occipital gyrus. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
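As an illustrative sketch of the multivoxel pattern analysis logic described in this abstract (testing whether the audiovisual pairing can be decoded more accurately from congruent than from incongruent trials), the following Python fragment uses scikit-learn; the trial patterns, labels and cross-validation settings are placeholders, not the authors' actual pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder single-trial patterns (trials x voxels) for one region of
    # interest; real patterns would come from trialwise GLM estimates.
    X_congruent = rng.normal(size=(40, 200))    # 'mohloh'+rounded vs 'kehteh'+pointed
    X_incongruent = rng.normal(size=(40, 200))  # mismatched pairings
    y = np.repeat([0, 1], 20)                   # label: which audiovisual pairing

    def pairing_decoding_accuracy(X, y):
        """Cross-validated accuracy for decoding the audiovisual pairing."""
        clf = LogisticRegression(max_iter=1000)
        return cross_val_score(clf, X, y, cv=5).mean()

    # The reported effect corresponds to higher accuracy for congruent trials.
    print("congruent:", pairing_decoding_accuracy(X_congruent, y))
    print("incongruent:", pairing_decoding_accuracy(X_incongruent, y))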
Affiliation(s)
- Deborah A Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA
- Kaitlyn L Matthews
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA; Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO, 63130, USA
- Lynne C Nygaard
- Department of Psychology, Emory University, Atlanta, GA, 30322, USA
- K Sathian
- Department of Neurology, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA, 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA, 16802, USA
4. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. bioRxiv [Preprint] 2023: 2023.04.14.536865. PMID: 37425853; PMCID: PMC10327042; DOI: 10.1101/2023.04.14.536865.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal gyrus (Broca's area), the left supramarginal gyrus, and the right mid-occipital gyrus. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
HIGHLIGHTS:
- fMRI investigation of sound-symbolic correspondences between auditory pseudowords and visual shapes
- Faster reaction times for congruent than incongruent audiovisual stimuli
- Greater activation in auditory and visual cortices for congruent stimuli
- Higher classification accuracy for congruent stimuli in language and visual areas
- Sound symbolism involves language processing and multisensory integration
Affiliation(s)
- Deborah A. Barany
- Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA, 30602, USA
- Simon Lacey
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Kaitlyn L. Matthews
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130
- Lynne C. Nygaard
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- K. Sathian
- Department of Neurology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Neural & Behavioral Sciences, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
5. Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. PMID: 36084305; DOI: 10.1515/revneuro-2022-0065.
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network spanning the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles: the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities in our meta-analysis, the results may therefore provide evidence for a common brain network that supports different functional roles in multisensory integration.
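For readers unfamiliar with the ALE method referenced in this abstract, the core computation can be sketched as follows: each study's reported foci are smoothed into a modeled activation (MA) probability map, and the ALE score is their voxelwise union, ALE = 1 - prod(1 - MA_i). The Python sketch below is a toy illustration under simplified assumptions (a fixed Gaussian width and invented foci), not the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def modeled_activation(foci, shape, sigma_vox):
        """Smooth a study's activation foci into a probability map in [0, 1]."""
        ma = np.zeros(shape)
        for i, j, k in foci:
            ma[i, j, k] = 1.0
        ma = gaussian_filter(ma, sigma=sigma_vox)
        return ma / ma.max()

    # Toy foci for three "studies" on a small voxel grid (invented indices).
    shape = (20, 20, 20)
    studies = [[(5, 5, 5), (10, 10, 10)], [(6, 5, 5)], [(10, 11, 10)]]
    ma_maps = [modeled_activation(f, shape, sigma_vox=1.5) for f in studies]

    # ALE score: voxelwise probability that at least one study activates,
    # i.e., the union of the modeled activation maps.
    ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
    print("peak ALE score:", round(float(ale.max()), 3))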
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
6. Ross LA, Molholm S, Butler JS, Del Bene VA, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. PMID: 36049699; DOI: 10.1016/j.neuroimage.2022.119598.
Abstract
This fMRI study investigated the effect of seeing the articulatory movements of a speaker while listening to a naturalistic narrative stimulus, with the goal of identifying regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network, as well as extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. Analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but also many regions of the wider semantic network, including regions associated with extralinguistic sensory, perceptual and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
7. Muller AM, Dalal TC, Stevenson RA. Schizotypal traits are not related to multisensory integration or audiovisual speech perception. Conscious Cogn 2020; 86:103030. PMID: 33120291; DOI: 10.1016/j.concog.2020.103030.
Abstract
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
8. Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory integration in cochlear implant recipients. Ear Hear 2018; 38:521-538. PMID: 28399064; DOI: 10.1097/aud.0000000000000435.
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributed to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as an effect of the length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly on low-level processing tasks such as signal detection, will help to further assess the mechanisms of multisensory integration in individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland, USA; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
9.
Abstract
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and newer studies of intra-individual variability during these processes. Recent Findings: Work in the last five years on bottom-up influences of sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite their joint decline with age. The impact of stimulus effectiveness also changes with age, with older adults showing maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has now been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging. Summary: Overall, the studies of the past five years have replicated and expanded on previous work that highlights the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.
Affiliation(s)
- Sarah H Baum
- Department of Psychology, University of Washington
- Ryan Stevenson
- Department of Psychology, Western University; Brain and Mind Institute, Western University; Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University; Program in Neuroscience, Schulich School of Medicine and Dentistry, Western University; Centre for Vision Research, York University
10. Stevenson RA, Baum SH, Segers M, Ferber S, Barense MD, Wallace MT. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception. Autism Res 2017; 10:1280-1290. PMID: 28339177; PMCID: PMC5513806; DOI: 10.1002/aur.1776.
Abstract
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli at various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition.
Affiliation(s)
- Ryan A. Stevenson
- Department of Psychology, Western University, London, ON, Canada
- Brain and Mind Institute, Western University, London, ON, Canada
- Sarah H. Baum
- Department of Psychology, University of Washington, Seattle, WA, USA
- Susanne Ferber
- Dept. of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Morgan D. Barense
- Dept. of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Mark T. Wallace
- Vanderbilt Brain Institute, Nashville, TN, USA
- Vanderbilt Kennedy Center, Nashville, TN, USA
- Vanderbilt University, Nashville, TN, USA
- Dept. of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Dept. of Psychology, Vanderbilt University, Nashville, TN, USA
11. Stevenson RA, Segers M, Ncube BL, Black KR, Bebko JM, Ferber S, Barense MD. The cascading influence of multisensory processing on speech perception in autism. Autism 2017; 22:609-624. PMID: 28506185; DOI: 10.1177/1362361317704413.
Abstract
It has recently been theorized that atypical sensory processing in autism relates to difficulties in social communication. Through a series of tasks concurrently assessing multisensory temporal processes, multisensory integration and speech perception in 76 children with and without autism, we provide the first behavioral evidence of such a link. Temporal processing abilities in children with autism contributed to impairments in speech perception. This relationship was significantly mediated by their abilities to integrate social information across auditory and visual modalities. These data describe the cascading impact of sensory abilities in autism, whereby temporal processing impacts the multisensory integration of social information, which, in turn, contributes to deficits in speech perception. These relationships were found to be specific to autism, specific to multisensory but not unisensory integration, and specific to the processing of social information.
Affiliation(s)
- Susanne Ferber
- University of Toronto, Canada; Rotman Research Institute at Baycrest, Canada
- Morgan D Barense
- University of Toronto, Canada; Rotman Research Institute at Baycrest, Canada
12. Hao Y, Riehle A, Brochier TG. Mapping horizontal spread of activity in monkey motor cortex using single pulse microstimulation. Front Neural Circuits 2016; 10:104. PMID: 28018182; PMCID: PMC5159418; DOI: 10.3389/fncir.2016.00104.
Abstract
Anatomical studies have demonstrated that distant cortical points are interconnected through long-range axon collaterals of pyramidal cells. However, the functional properties of these intrinsic synaptic connections, especially their relationship with the cortical representations of body movements, have not been systematically investigated. To address this issue, we used multielectrode arrays chronically implanted in the motor cortex of two rhesus monkeys to analyze the effects of single-pulse intracortical microstimulation (sICMS) applied at one electrode on the neuronal activities recorded at all other electrodes. The temporal and spatial distribution of the evoked responses of single and multiunit activities was quantified to determine the properties of horizontal propagation. The typical responses were characterized by a brief excitatory peak followed by inhibition of longer duration. Significant excitatory responses to sICMS could be evoked up to 4 mm away from the stimulation site, but the strength of the response decreased exponentially, and its latency increased linearly, with distance. We then quantified the direction and strength of the propagation in relation to the somatotopic organization of the motor cortex. We observed that following sICMS, the propagation of neural activity is mainly directed rostro-caudally near the central sulcus but follows a medio-lateral direction at the most anterior electrodes. The fact that these interactions are not entirely symmetrical may characterize a critical functional property of the motor cortex for the control of upper limb movements. Overall, these results support the assumption that the motor cortex is not functionally homogeneous but forms a complex network of interacting subregions.
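The distance dependence reported here (exponential decay of response strength, linear increase of latency) can be quantified with simple curve fits; the Python sketch below uses invented placeholder values, not the study's data.

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented placeholder measurements: distance from the stimulating
    # electrode (mm), evoked-response strength (a.u.) and latency (ms).
    d = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.8, 3.6])
    strength = np.array([1.00, 0.72, 0.50, 0.37, 0.26, 0.14, 0.07])
    latency = np.array([2.1, 2.6, 3.0, 3.5, 4.1, 5.0, 6.2])

    def exp_decay(d, a, lam):
        return a * np.exp(-d / lam)  # lam is the space constant in mm

    (a, lam), _ = curve_fit(exp_decay, d, strength, p0=(1.0, 1.0))
    slope, intercept = np.polyfit(d, latency, 1)  # linear latency increase

    print(f"space constant ~ {lam:.2f} mm; latency slope ~ {slope:.2f} ms/mm")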
Affiliation(s)
- Yaoyao Hao
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille Université, UMR7289, Marseille, France
- Alexa Riehle
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille Université, UMR7289, Marseille, France; RIKEN Brain Science Institute, Saitama, Japan; Institute of Neuroscience and Medicine, Forschungszentrum Jülich, Jülich, Germany
- Thomas G Brochier
- Institut de Neurosciences de la Timone, CNRS - Aix-Marseille Université, UMR7289, Marseille, France
13. Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. PMID: 26826522; DOI: 10.1016/j.neuropsychologia.2016.01.031.
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
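The race-model test mentioned in this abstract compares the multisensory reaction-time (RT) distribution against Miller's bound, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t); violations imply that the two inputs were integrated rather than merely racing. A minimal Python sketch, with placeholder RTs rather than the study's data:

    import numpy as np

    def ecdf(rts, t_grid):
        """Empirical cumulative RT distribution evaluated on a time grid."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    # Placeholder reaction times in ms (one participant, one location/intensity).
    rng = np.random.default_rng(0)
    rt_a = rng.normal(420, 60, 100)   # auditory alone
    rt_v = rng.normal(450, 70, 100)   # visual alone
    rt_av = rng.normal(360, 50, 100)  # audiovisual

    t = np.linspace(200, 700, 101)
    # Miller's race-model bound: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    violation = ecdf(rt_av, t) - bound  # positive values violate the race model

    print("max violation:", round(float(violation.max()), 3))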
14. Takahashi HK, Kitada R, Sasaki AT, Kawamichi H, Okazaki S, Kochiyama T, Sadato N. Brain networks of affective mentalizing revealed by the tear effect: The integrative role of the medial prefrontal cortex and precuneus. Neurosci Res 2015. DOI: 10.1016/j.neures.2015.07.005.
15. Stevenson RA, Segers M, Ferber S, Barense MD, Camarata S, Wallace MT. Keeping time in the brain: Autism spectrum disorder and audiovisual temporal processing. Autism Res 2015; 9:720-38. PMID: 26402725; DOI: 10.1002/aur.1566.
Abstract
A growing area of interest and relevance in the study of autism spectrum disorder (ASD) focuses on the relationship between multisensory temporal function and the behavioral, perceptual, and cognitive impairments observed in ASD. Atypical sensory processing is becoming increasingly recognized as a core component of autism, with evidence of atypical processing across a number of sensory modalities. These deviations from typical processing underscore the value of interpreting ASD within a multisensory framework. Furthermore, converging evidence illustrates that these differences in audiovisual processing may be specifically related to temporal processing. This review seeks to bridge the connection between temporal processing and audiovisual perception, and to elaborate on emerging data showing differences in audiovisual temporal function in autism. We also discuss the consequences of such changes, the specific impact on the processing of different classes of audiovisual stimuli (e.g., speech vs. nonspeech), and the presumptive brain processes and networks underlying audiovisual temporal integration. Finally, possible downstream behavioral implications and possible remediation strategies are outlined.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Magali Segers
- Department of Psychology, York University, Toronto, Ontario, Canada
- Susanne Ferber
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Stephen Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee; Vanderbilt Brain Institute, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee
16. Yalachkov Y, Kaiser J, Doehrmann O, Naumer MJ. Enhanced visuo-haptic integration for the non-dominant hand. Brain Res 2015; 1614:75-85. PMID: 25911582; DOI: 10.1016/j.brainres.2015.04.020.
Abstract
Visuo-haptic integration contributes essentially to object shape recognition. Although there has been a considerable advance in elucidating the neural underpinnings of multisensory perception, it is still unclear whether seeing an object and exploring it with the dominant hand elicits the same brain response as compared to the non-dominant hand. Using fMRI to measure brain activation in right-handed participants, we found that for both left- and right-hand stimulation the left lateral occipital complex (LOC) and anterior cerebellum (aCER) were involved in visuo-haptic integration of familiar objects. These two brain regions were then further investigated in another study, where unfamiliar, novel objects were presented to a different group of right-handers. Here the left LOC and aCER were more strongly activated by bimodal than unimodal stimuli only when the left but not the right hand was used. A direct comparison indicated that the multisensory gain of the fMRI activation was significantly higher for the left than the right hand. These findings are in line with the principle of "inverse effectiveness", implying that processing of bimodally presented stimuli is particularly enhanced when the unimodal stimuli are weak. This applies also when right-handed subjects see and simultaneously touch unfamiliar objects with their non-dominant left hand. Thus, the fMRI signal in the left LOC and aCER induced by visuo-haptic stimulation is dependent on which hand was employed for haptic exploration.
Affiliation(s)
- Yavor Yalachkov
- Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany
- Jochen Kaiser
- Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany
- Oliver Doehrmann
- Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany
- Marcus J Naumer
- Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany
17. Stevenson RA, Nelms CE, Baum SH, Zurkovsky L, Barense MD, Newhouse PA, Wallace MT. Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition. Neurobiol Aging 2015; 36:283-91. PMID: 25282337; PMCID: PMC4268368; DOI: 10.1016/j.neurobiolaging.2014.08.003.
Abstract
Over the next two decades, a dramatic shift in the demographics of society will take place, with rapid growth in the population of older adults. One of the most common complaints with healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provides striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio (SNR). For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level, both younger and older adults showed approximately equivalent increases in multisensory gain as SNR decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex, word-recognition level of speech signals.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA
- Caitlin E Nelms
- Department of Psychology, Austin Peay State University, Clarksville, TN, USA; Department of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Sarah H Baum
- Vanderbilt Brain Institute, Nashville, TN, USA; Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, TX, USA
- Lilia Zurkovsky
- Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Toronto, Ontario, Canada
- Paul A Newhouse
- Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA; Center for Cognitive Medicine, Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA
18. Crossmodal plasticity in the fusiform gyrus of late blind individuals during voice recognition. Neuroimage 2014; 103:374-382. DOI: 10.1016/j.neuroimage.2014.09.050.
19. Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. PMID: 25128432; PMCID: PMC4326640; DOI: 10.1016/j.neuropsychologia.2014.08.005.
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how the cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many studies focused on the construct of the multisensory temporal binding window: the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, a focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
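The temporal binding window is commonly estimated by fitting a function (often a Gaussian) to the proportion of "synchronous" reports across audiovisual stimulus onset asynchronies and reading a window width off the fit. A minimal Python sketch with invented data; the 75%-of-peak criterion is one convention among several, not a standard fixed by this review:

    import numpy as np
    from scipy.optimize import curve_fit

    # Invented simultaneity-judgment data: stimulus onset asynchrony in ms
    # (negative = auditory leading) and proportion of "synchronous" reports.
    soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
    p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

    def gauss(x, amp, mu, sigma):
        return amp * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    (amp, mu, sigma), _ = curve_fit(gauss, soa, p_sync, p0=(1.0, 0.0, 150.0))

    # Read the window off the fit at 75% of the peak (one common convention).
    half_width = sigma * np.sqrt(2.0 * np.log(1.0 / 0.75))
    print(f"window center {mu:.0f} ms, half-width ~ {half_width:.0f} ms")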
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
20. Bauer CC, Díaz JL, Concha L, Barrios FA. Sustained attention to spontaneous thumb sensations activates brain somatosensory and other proprioceptive areas. Brain Cogn 2014; 87:86-96. DOI: 10.1016/j.bandc.2014.03.009.
21. Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. PMID: 24722880; DOI: 10.1007/s10548-014-0365-7.
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods now used to measure neural, behavioral, and perceptual responses. Many of the measures that have been developed to quantify multisensory integration (and which have been derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
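Two of the single-unit-derived metrics this tutorial covers can be illustrated compactly: the interactive (enhancement) index, which compares the multisensory response with the largest unisensory response, and the additive criterion, which compares it with the sum of the unisensory responses. A minimal Python sketch with placeholder response values:

    def enhancement_index(av, a, v):
        """Interactive index: % change of the multisensory response relative
        to the largest unisensory response."""
        best_uni = max(a, v)
        return 100.0 * (av - best_uni) / best_uni

    def additivity(av, a, v):
        """Multisensory response minus the sum of the unisensory responses:
        > 0 superadditive, ~ 0 additive, < 0 subadditive."""
        return av - (a + v)

    # Placeholder mean responses (e.g., spikes/s or beta weights).
    a, v, av = 4.0, 6.0, 13.0
    print(f"enhancement: {enhancement_index(av, a, v):.0f}%")  # ~117%
    print(f"additivity: {additivity(av, a, v):+.1f}")          # +3.0, superadditive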
22. Hertz U, Amedi A. Flexibility and stability in sensory processing revealed using visual-to-auditory sensory substitution. Cereb Cortex 2014; 25:2049-64. PMID: 24518756; PMCID: PMC4494022; DOI: 10.1093/cercor/bhu010.
Abstract
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal two dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning, from visual attenuation of the auditory cortex to auditory attenuation of the visual cortex. Second, associative areas changed their sensory response profile from strongest response for visual to strongest response for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance of sensory areas and in audiovisual convergence in the associative middle temporal gyrus. These two factors allow for both stability and fast, dynamic tuning of the system when required.
Affiliation(s)
- Uri Hertz
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem 91220, Israel; Interdisciplinary Center for Neural Computation, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem 91905, Israel
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem 91220, Israel; Interdisciplinary Center for Neural Computation, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem 91905, Israel
23. Xu J, Rees G, Yin X, Song C, Han Y, Ge H, Pang Z, Xu W, Tang Y, Friston K, Liu S. Spontaneous neuronal activity predicts intersubject variations in executive control of attention. Neuroscience 2014; 263:181-92. PMID: 24447598; DOI: 10.1016/j.neuroscience.2014.01.020.
Abstract
Executive control of attention regulates our thoughts, emotion and behavior. Individual differences in executive control are associated with task-related differences in brain activity, but it is unknown whether attentional differences depend on endogenous (resting-state) brain activity, and to what extent regional fluctuations and functional connectivity contribute to individual variations in executive control processing. Here, we explored the potential contribution of intrinsic brain activity to executive control by using resting-state functional magnetic resonance imaging (fMRI). Using the amplitude of low-frequency fluctuations (ALFF) as an index of spontaneous brain activity, we found that ALFF in the right precuneus (PCUN) and the medial part of the left superior frontal gyrus (msFC) was significantly correlated with the efficiency of executive control processing. Crucially, the strengths of functional connectivity between the right PCUN/left msFC and distributed brain regions, including the left fusiform gyrus, right inferior frontal gyrus, left superior frontal gyrus and right precentral gyrus, were correlated with individual differences in executive performance. Together, the ALFF and functional connectivity accounted for 67% of the variability in behavioral performance. Moreover, the strength of functional connectivity between specific regions predicted more individual variability in executive control performance than regionally specific fluctuations. In conclusion, our findings suggest that spontaneous brain activity may reflect or underpin executive control of attention, providing new insights into the origins of inter-individual variability in human executive control processing.
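ALFF, the index used in this study, reduces to the spectral amplitude of the BOLD time series within a low-frequency band. Below is a minimal sketch of one common formulation (mean single-sided amplitude over 0.01-0.08 Hz), with a synthetic time series standing in for real resting-state data; the function name and all parameter values are illustrative, not the authors' pipeline.

```python
import numpy as np

def alff(timeseries, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuations: mean spectral amplitude
    of a BOLD time series within the low-frequency band (in Hz)."""
    ts = np.asarray(timeseries, dtype=float)
    ts = ts - ts.mean()                       # remove the mean before the FFT
    freqs = np.fft.rfftfreq(ts.size, d=tr)    # frequency axis in Hz
    amp = np.abs(np.fft.rfft(ts)) / ts.size   # single-sided amplitude spectrum
    band = (freqs >= low) & (freqs <= high)
    return amp[band].mean()

# Synthetic resting-state-like signal: a slow oscillation plus noise.
rng = np.random.default_rng(0)
tr = 2.0                                      # repetition time in seconds
t = np.arange(240) * tr                       # an 8-minute scan
bold = np.sin(2 * np.pi * 0.03 * t) + 0.5 * rng.standard_normal(t.size)
print(alff(bold, tr))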
Affiliation(s)
- J Xu
- Research Center for Sectional and Imaging Anatomy, Shandong University School of Medicine, Jinan, Shandong, China; UCL Institute of Cognitive Neuroscience, London, United Kingdom; Wellcome Trust Centre for Neuroimaging, University College London (UCL) Institute of Neurology, London, United Kingdom
- G Rees
- UCL Institute of Cognitive Neuroscience, London, United Kingdom; Wellcome Trust Centre for Neuroimaging, University College London (UCL) Institute of Neurology, London, United Kingdom
- X Yin
- Research Center for Sectional and Imaging Anatomy, Shandong University School of Medicine, Jinan, Shandong, China
- C Song
- UCL Institute of Cognitive Neuroscience, London, United Kingdom
- Y Han
- Department of Radiology, Affiliated Hospital of Medical College, Qingdao University, Qingdao, Shandong, China
- H Ge
- Research Center for Sectional and Imaging Anatomy, Shandong University School of Medicine, Jinan, Shandong, China
- Z Pang
- Department of Epidemiology, Qingdao Municipal Center for Disease Control and Prevention, Qingdao, Shandong, China
- W Xu
- Department of Radiology, Affiliated Hospital of Medical College, Qingdao University, Qingdao, Shandong, China
- Y Tang
- Research Center for Sectional and Imaging Anatomy, Shandong University School of Medicine, Jinan, Shandong, China
- K Friston
- Wellcome Trust Centre for Neuroimaging, University College London (UCL) Institute of Neurology, London, United Kingdom
- S Liu
- Research Center for Sectional and Imaging Anatomy, Shandong University School of Medicine, Jinan, Shandong, China
24
Learning to associate auditory and visual stimuli: behavioral and neural mechanisms. Brain Topogr 2013; 28:479-93. [PMID: 24276220] [DOI: 10.1007/s10548-013-0333-7]
Abstract
The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time and hazard-function measure known as capacity (e.g., Townsend and Ashby, Cognitive Theory, pp. 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy, but more significantly, an increase in capacity. The aim of this study was to associate capacity measures of multisensory learning with neural measures, namely mean global field power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
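The capacity measure referenced above (Townsend's workload capacity coefficient) compares the integrated hazard of redundant-target response times against the sum of the unisensory integrated hazards: C(t) = H_AV(t) / (H_A(t) + H_V(t)), with C(t) > 1 indicating supercapacity. A minimal sketch, assuming simulated Gaussian RT distributions in milliseconds (all values hypothetical):

```python
import numpy as np

def integrated_hazard(rts, t_grid):
    """Estimate H(t) = -log S(t) from the empirical survivor function."""
    rts = np.sort(np.asarray(rts, dtype=float))
    surv = 1.0 - np.searchsorted(rts, t_grid, side="right") / rts.size
    surv = np.clip(surv, 1e-6, 1.0 - 1e-6)   # avoid log(0) at the extremes
    return -np.log(surv)

def capacity_coefficient(rt_av, rt_a, rt_v, t_grid):
    """C(t) = H_AV(t) / (H_A(t) + H_V(t)); values above 1 indicate
    supercapacity (faster than an unlimited-capacity parallel race)."""
    h_av = integrated_hazard(rt_av, t_grid)
    h_a = integrated_hazard(rt_a, t_grid)
    h_v = integrated_hazard(rt_v, t_grid)
    return h_av / (h_a + h_v)

rng = np.random.default_rng(1)
t_grid = np.linspace(300, 800, 50)            # ms
rt_a = rng.normal(550, 60, 200)               # hypothetical unisensory RTs
rt_v = rng.normal(570, 60, 200)
rt_av = rng.normal(480, 50, 200)              # faster redundant-target RTs
print(capacity_coefficient(rt_av, rt_a, rt_v, t_grid)[:5])
```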
25
van Atteveldt NM, Peterson BS, Schroeder CE. Contextual control of audiovisual integration in low-level sensory cortices. Hum Brain Mapp 2013; 35:2394-411. [PMID: 23982946] [DOI: 10.1002/hbm.22336]
Abstract
Potential sources of multisensory influences on low-level sensory cortices include direct projections from sensory cortices of different modalities, as well as more indirect feedback inputs from higher order multisensory cortical regions. These multiple architectures may be functionally complementary, but the exact roles and inter-relationships of the circuits are unknown. Using a fully balanced context manipulation, we tested the hypotheses that: (1) feedforward and lateral pathways subserve speed functions, such as detecting peripheral stimuli. Multisensory integration effects in this context are predicted in peripheral fields of low-level sensory cortices. (2) Slower feedback pathways underpin accuracy functions, such as object discrimination. Integration effects in this context are predicted in higher-order association cortices and central/foveal fields of low-level sensory cortex. We used functional magnetic resonance imaging to compare the effects of central versus peripheral stimulation on audiovisual integration, while varying speed and accuracy requirements for behavioral responses. We found that interactions of task demands and stimulus eccentricity in low-level sensory cortices are more complex than would be predicted by a simple dichotomy such as our hypothesized peripheral/speed and foveal/accuracy functions. Additionally, our findings point to individual differences in integration that may be related to skills and strategy. Overall, our findings suggest that instead of using fixed, specialized pathways, the exact circuits and mechanisms that are used for low-level multisensory integration are much more flexible and contingent upon both individual and contextual factors than previously assumed.
Affiliation(s)
- Nienke M van Atteveldt
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands; Neuroimaging and Neuromodeling Group, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands; Department of Psychiatry, New York State Psychiatric Institute, Columbia University, New York, New York
26
Stevenson RA, Wallace MT. Multisensory temporal integration: task and stimulus dependencies. Exp Brain Res 2013; 227:249-61. [PMID: 23604624] [PMCID: PMC3711231] [DOI: 10.1007/s00221-013-3507-3]
Abstract
The ability of human sensory systems to integrate information across the different modalities provides a wide range of behavioral and perceptual benefits. This integration process depends upon the temporal relationship of the different sensory signals, with stimuli occurring close together in time typically resulting in the largest behavioral changes. The range of temporal intervals over which such benefits are seen is typically referred to as the temporal binding window (TBW). Given the importance of temporal factors in multisensory integration in both typical populations and atypical conditions such as autism and dyslexia, the TBW has been measured with a variety of experimental protocols that differ according to criterion, task, and stimulus type, making comparisons across experiments difficult. In the current study, we attempt to elucidate the role that these various factors play in the measurement of this important construct. The results show a strong effect of stimulus type, with the TBW assessed with speech stimuli being both larger and more symmetrical than that seen using simple and complex non-speech stimuli. These effects are robust across task and statistical criteria and are highly consistent within individuals, suggesting substantial overlap in the neural and cognitive operations that govern multisensory temporal processes.
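The TBW itself is typically estimated by fitting a function to the proportion of "simultaneous" judgments across stimulus onset asynchronies (SOAs). A sketch of one common approach, a Gaussian fit with the window summarized as its full width at half maximum, on invented group data (the SOA values, proportions, and the FWHM convention are all illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Synchrony-judgment model: peak amp at the point of subjective simultaneity mu."""
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

# Hypothetical group data: proportion "simultaneous" at each SOA (ms);
# negative SOA = auditory leading, positive = visual leading.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.75, 0.45, 0.20])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 150.0])

# One common convention: TBW as the full width at half maximum of the fit.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma
print(f"PSS = {mu:.0f} ms, TBW (FWHM) = {fwhm:.0f} ms")
```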
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 7110 MRB III BioSci Bldg, 465 21st Ave South, Nashville, TN 37232, USA
27
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage 2013; 65:59-68. [DOI: 10.1016/j.neuroimage.2012.09.054]
28
Arsalidou M, Duerden EG, Taylor MJ. The centre of the brain: topographical model of motor, cognitive, affective, and somatosensory functions of the basal ganglia. Hum Brain Mapp 2012; 34:3031-54. [PMID: 22711692] [DOI: 10.1002/hbm.22124]
Abstract
The basal ganglia have traditionally been viewed as motor processing nuclei; however, functional neuroimaging evidence has implicated these structures in more complex cognitive and affective processes that are fundamental for a range of human activities. Using quantitative meta-analysis methods we assessed the functional subdivisions of basal ganglia nuclei in relation to motor (body and eye movements), cognitive (working-memory and executive), affective (emotion and reward) and somatosensory functions in healthy participants. We document affective processes in the anterior parts of the caudate head with the most overlap within the left hemisphere. Cognitive processes showed the most widespread response, whereas motor processes occupied more central structures. On the basis of these demonstrated functional roles of the basal ganglia, we provide a new comprehensive topographical model of these nuclei and insight into how they are linked to a wide range of behaviors.
Affiliation(s)
- Marie Arsalidou
- Diagnostic Imaging and Research Institute, Hospital for Sick Children, Toronto, Canada
29
Kassuba T, Menz MM, Röder B, Siebner HR. Multisensory interactions between auditory and haptic object recognition. Cereb Cortex 2012; 23:1097-107. [PMID: 22518017] [DOI: 10.1093/cercor/bhs076]
Abstract
Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent of semantic congruency. Together, the results show multisensory interactions at different hierarchical stages of auditory and haptic object processing. Object-specific crossmodal interactions culminate in the left FG, which may provide a higher order convergence zone for conceptual object knowledge.
Affiliation(s)
- Tanja Kassuba
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark
30
Stevenson RA, Fister JK, Barnett ZP, Nidiffer AR, Wallace MT. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Exp Brain Res 2012; 219:121-37. [PMID: 22447249] [DOI: 10.1007/s00221-012-3072-1]
Abstract
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
31
Stevenson RA, Zemtsov RK, Wallace MT. Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. J Exp Psychol Hum Percept Perform 2012; 38:1517-29. [PMID: 22390292] [DOI: 10.1037/a0027339]
Abstract
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual's ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception are highly related. Specifically, the width of the right side of an individual's TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed both by stronger binding of synchronous sensory signals and by improved dissociation of asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.
Affiliation(s)
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center
32
Stevenson RA, Bushmakin M, Kim S, Wallace MT, Puce A, James TW. Inverse effectiveness and multisensory interactions in visual event-related potentials with audiovisual speech. Brain Topogr 2012; 25:308-26. [PMID: 22367585] [DOI: 10.1007/s10548-012-0220-7]
Abstract
In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited by viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited by visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal to noise, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, the amplitude of the visual P1-N1-P2 ERP complex increased monotonically with stimulus salience during a spoken-word recognition task. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that the change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing.
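The multisensory gain analysis described above compares the audiovisual ERP with the sum of the unisensory ERPs at each salience level. The toy numbers below (hypothetical N1 amplitudes, not the study's data) illustrate the inverse-effectiveness pattern the study reports, where gain grows as salience falls:

```python
import numpy as np

def multisensory_gain(erp_av, erp_a, erp_v):
    """Additive-model comparison for ERPs: the difference between the
    audiovisual response and the sum of the unisensory responses."""
    return erp_av - (erp_a + erp_v)

# Hypothetical peak amplitudes (microvolts) of the N1 component at
# three salience (signal-to-noise) levels, high -> low.
salience = ["high", "mid", "low"]
n1_a = np.array([4.0, 3.0, 1.5])
n1_v = np.array([5.0, 3.5, 2.0])
n1_av = np.array([8.0, 6.2, 4.5])

gain = multisensory_gain(n1_av, n1_a, n1_v)
for s, g in zip(salience, gain):
    # Gain that grows as salience drops is the inverse-effectiveness pattern.
    print(f"{s}: AV - (A + V) = {g:+.1f} uV")
```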
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
33
Speech comprehension aided by multiple modalities: behavioural and neural interactions. Neuropsychologia 2012; 50:762-76. [PMID: 22266262] [DOI: 10.1016/j.neuropsychologia.2012.01.010]
Abstract
Speech comprehension is a complex human skill, the performance of which requires the perceiver to combine information from several sources - e.g. voice, face, gesture, linguistic context - to achieve an intelligible and interpretable percept. We describe a functional imaging investigation of how auditory, visual and linguistic information interact to facilitate comprehension. Our specific aims were to investigate the neural responses to these different information sources, alone and in interaction, and further to use behavioural speech comprehension scores to address sites of intelligibility-related activation in multifactorial speech comprehension. In fMRI, participants passively watched videos of spoken sentences, in which we varied Auditory Clarity (with noise-vocoding), Visual Clarity (with Gaussian blurring) and Linguistic Predictability. Main effects of enhanced signal with increased auditory and visual clarity were observed in overlapping regions of posterior STS. Two-way interactions of the factors (auditory × visual, auditory × predictability) in the neural data were observed outside temporal cortex, where positive signal change in response to clearer facial information and greater semantic predictability was greatest at intermediate levels of auditory clarity. Overall changes in stimulus intelligibility by condition (as determined using an independent behavioural experiment) were reflected in the neural data by increased activation predominantly in bilateral dorsolateral temporal cortex, as well as inferior frontal cortex and left fusiform gyrus. Specific investigation of intelligibility changes at intermediate auditory clarity revealed a set of regions, including posterior STS and fusiform gyrus, showing enhanced responses to both visual and linguistic information. Finally, an individual differences analysis showed that greater comprehension performance in the scanning participants (measured in a post-scan behavioural test) was associated with increased activation in left inferior frontal gyrus and left posterior STS. The current multimodal speech comprehension paradigm demonstrates recruitment of a wide comprehension network in the brain, in which posterior STS and fusiform gyrus form sites for convergence of auditory, visual and linguistic information, while left-dominant sites in temporal and frontal cortex support successful comprehension.
34
Kim S, Stevenson RA, James TW. Visuo-haptic neuronal convergence demonstrated with an inversely effective pattern of BOLD activation. J Cogn Neurosci 2011; 24:830-42. [PMID: 22185495] [DOI: 10.1162/jocn_a_00176]
Abstract
We investigated the neural substrates involved in visuo-haptic neuronal convergence using an additive-factors design in combination with fMRI. Stimuli were explored under three sensory modality conditions: viewing the object through a mirror without touching (V), touching the object with eyes closed (H), or simultaneously viewing and touching the object (VH). This modality factor was crossed with a task difficulty factor, which had two levels. On the basis of an idea similar to the principle of inverse effectiveness, we predicted that increasing difficulty would increase the relative level of multisensory gain in brain regions where visual and haptic sensory inputs converged. An ROI analysis focused on the lateral occipital tactile-visual area found evidence of inverse effectiveness in the left lateral occipital tactile-visual area, but not in the right. A whole-brain analysis also found evidence for the same pattern in the anterior aspect of the intraparietal sulcus, the premotor cortex, and the posterior insula, all in the left hemisphere. In conclusion, this study is the first to demonstrate visuo-haptic neuronal convergence based on an inversely effective pattern of brain activation.
Affiliation(s)
- Sunah Kim
- 360 Minor Hall, University of California, Berkeley, Berkeley, CA 94720, USA
39
Kassuba T, Klinge C, Hölig C, Menz MM, Ptito M, Röder B, Siebner HR. The left fusiform gyrus hosts trisensory representations of manipulable objects. Neuroimage 2011; 56:1566-77. [DOI: 10.1016/j.neuroimage.2011.02.032]
40
Abstract
Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down.
Affiliation(s)
- David Melcher
- Faculty of Cognitive Science, University of Trento, Italy
41
Naumer MJ, van den Bosch JJF, Wibral M, Kohler A, Singer W, Kaiser J, van de Ven V, Muckli L. Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools. Exp Brain Res 2011; 213:309-20. [PMID: 21503649] [PMCID: PMC3155044] [DOI: 10.1007/s00221-011-2669-0]
Abstract
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.
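The max criterion (A < AV > V) used above to validate the ICA-defined regions can be tested directly on ROI estimates across subjects. A minimal sketch with simulated per-subject betas and paired t-tests; the function name, simulated values, and alpha threshold are all illustrative rather than the authors' actual procedure:

```python
import numpy as np
from scipy import stats

def max_criterion(beta_av, beta_a, beta_v, alpha=0.05):
    """Max criterion for multisensory integration in an ROI:
    AV must exceed BOTH unisensory responses (A < AV > V),
    tested here with paired t-tests across subjects."""
    t_a, p_a = stats.ttest_rel(beta_av, beta_a)
    t_v, p_v = stats.ttest_rel(beta_av, beta_v)
    return (t_a > 0 and p_a < alpha) and (t_v > 0 and p_v < alpha)

rng = np.random.default_rng(2)
n_subjects = 20
beta_a = rng.normal(0.8, 0.3, n_subjects)    # hypothetical ROI betas
beta_v = rng.normal(1.0, 0.3, n_subjects)
beta_av = rng.normal(1.4, 0.3, n_subjects)
print(max_criterion(beta_av, beta_a, beta_v))
```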
Affiliation(s)
- Marcus J Naumer
- Crossmodal Neuroimaging Lab, Institute of Medical Psychology, Goethe-University of Frankfurt, Heinrich-Hoffmann-Strasse 10, 60528 Frankfurt am Main, Germany
42
Dynamic changes in superior temporal sulcus connectivity during perception of noisy audiovisual speech. J Neurosci 2011; 31:1704-14. [PMID: 21289179] [DOI: 10.1523/jneurosci.4853-10.2011]
Abstract
Humans are remarkably adept at understanding speech, even when it is contaminated by noise. Multisensory integration may explain some of this ability: combining independent information from the auditory modality (vocalizations) and the visual modality (mouth movements) reduces noise and increases accuracy. Converging evidence suggests that the superior temporal sulcus (STS) is a critical brain area for multisensory integration, but little is known about its role in the perception of noisy speech. Behavioral studies have shown that perceptual judgments are weighted by the reliability of the sensory modality: more reliable modalities are weighted more strongly, even if the reliability changes rapidly. We hypothesized that changes in the functional connectivity of STS with auditory and visual cortex could provide a neural mechanism for perceptual reliability weighting. To test this idea, we performed five blood oxygenation level-dependent functional magnetic resonance imaging and behavioral experiments in 34 healthy subjects. We found increased functional connectivity between the STS and auditory cortex when the auditory modality was more reliable (less noisy) and increased functional connectivity between the STS and visual cortex when the visual modality was more reliable, even when the reliability changed rapidly during presentation of successive words. This finding matched the results of a behavioral experiment in which the perception of incongruent audiovisual syllables was biased toward the more reliable modality, even with rapidly changing reliability. Changes in STS functional connectivity may be an important neural mechanism underlying the perception of noisy speech.
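The reliability weighting invoked here follows the standard maximum-likelihood cue-combination rule: each modality's estimate is weighted by its inverse variance, so the noisier cue counts less. A small sketch with hypothetical numbers, shown only to make the weighting explicit (the study itself tested functional connectivity, not this behavioral model):

```python
def reliability_weighted_estimate(est_a, var_a, est_v, var_v):
    """Maximum-likelihood cue combination: each modality is weighted by
    its reliability (inverse variance); returns the combined estimate
    and its (reduced) variance."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    combined = w_a * est_a + w_v * est_v
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return combined, combined_var

# Hypothetical syllable-identity evidence: noisy audio, clear video.
est, var = reliability_weighted_estimate(est_a=0.2, var_a=4.0,
                                         est_v=0.9, var_v=1.0)
print(est, var)   # the combined estimate is pulled toward the visual cue
```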
43
Butler AJ, James TW, James KH. Enhanced multisensory integration and motor reactivation after active motor learning of audiovisual associations. J Cogn Neurosci 2011; 23:3515-28. [PMID: 21452947] [DOI: 10.1162/jocn_a_00015]
Abstract
Everyday experience affords us many opportunities to learn about objects through multiple senses using physical interaction. Previous work has shown that active motor learning of unisensory items enhances memory and leads to the involvement of motor systems during subsequent perception. However, the impact of active motor learning on subsequent perception and recognition of associations among multiple senses has not been investigated. Twenty participants were included in an fMRI study that explored the impact of active motor learning on subsequent processing of unisensory and multisensory stimuli. Participants were exposed to visuo-motor associations between novel objects and novel sounds either through self-generated actions on the objects or by observing an experimenter produce the actions. Immediately after exposure, accuracy, RT, and BOLD fMRI measures were collected with unisensory and multisensory stimuli in associative perception and recognition tasks. Response times during audiovisual associative and unisensory recognition were enhanced by active learning, as was accuracy during audiovisual associative recognition. The difference in motor cortex activation between old and new associations was greater for the active than the passive group. Furthermore, functional connectivity between visual and motor cortices was stronger after active learning than passive learning. Active learning also led to greater activation of the fusiform gyrus during subsequent unisensory visual perception. Finally, brain regions implicated in audiovisual integration (e.g., STS) showed greater multisensory gain after active learning than after passive learning. Overall, the results show that active motor learning modulates the processing of multisensory associations.
44
Stevenson RA, VanDerKlok RM, Pisoni DB, James TW. Discrete neural substrates underlie complementary audiovisual speech integration processes. Neuroimage 2010; 55:1339-45. [PMID: 21195198] [DOI: 10.1016/j.neuroimage.2010.12.063]
Abstract
The ability to combine information from multiple sensory modalities into a single, unified percept is a key element in an organism's ability to interact with the external world. This process of perceptual fusion, the binding of multiple sensory inputs into a perceptual gestalt, is highly dependent on the temporal synchrony of the sensory inputs. Using fMRI, we identified two anatomically distinct brain regions in the superior temporal cortex, one involved with processing temporal-synchrony, and one with processing perceptual fusion of audiovisual speech. This dissociation suggests that the superior temporal cortex should be considered a "neuronal hub" composed of multiple discrete subregions that underlie an array of complementary low- and high-level multisensory integration processes. In this role, abnormalities in the structure and function of superior temporal cortex provide a possible common etiology for temporal-processing and perceptual-fusion deficits seen in a number of clinical populations, including individuals with autism spectrum disorder, dyslexia, and schizophrenia.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, USA
45
Gentile G, Petkova VI, Ehrsson HH. Integration of visual and tactile signals from the hand in the human brain: an fMRI study. J Neurophysiol 2010; 105:910-22. [PMID: 21148091] [DOI: 10.1152/jn.00840.2010]
Abstract
In the non-human primate brain, a number of multisensory areas have been described where individual neurons respond to visual, tactile and bimodal visuotactile stimulation of the upper limb. It has been shown that such bimodal neurons can integrate sensory inputs in a linear or nonlinear fashion. In humans, activity in a similar set of brain regions has been associated with visuotactile stimulation of the hand. However, little is known about how these areas integrate visual and tactile information. In this functional magnetic resonance imaging experiment, we employed tactile, visual, and visuotactile stimulation of the right hand in an ecologically valid setup where participants were looking directly at their upper limb. We identified brain regions that were activated by both visual and tactile stimuli as well as areas exhibiting greater activity in the visuotactile condition than in both unisensory ones. The posterior and inferior parietal, dorsal, and ventral premotor cortices, as well as the cerebellum, all showed evidence of multisensory linear (additive) responses. Nonlinear, superadditive responses were observed in the cortex lining the left anterior intraparietal sulcus, the insula, dorsal premotor cortex, and, subcortically, the putamen. These results identify a set of candidate frontal, parietal and subcortical regions that integrate visual and tactile information for the multisensory perception of one's own hand.
Affiliation(s)
- Giovanni Gentile
- Brain, Body and Self Laboratory, Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
46
James TW, VanDerKlok RM, Stevenson RA, James KH. Multisensory perception of action in posterior temporal and parietal cortices. Neuropsychologia 2010; 49:108-14. [PMID: 21036183] [DOI: 10.1016/j.neuropsychologia.2010.10.030]
Abstract
Environmental events produce many sensory cues for identifying the action that evoked the event, the agent that performed the action, and the object targeted by the action. The cues for identifying environmental events are usually distributed across multiple sensory systems. Thus, to understand how environmental events are recognized requires an understanding of the fundamental cognitive and neural processes involved in multisensory object and action recognition. Here, we investigated the neural substrates involved in auditory and visual recognition of object-directed actions. Consistent with previous work on visual recognition of isolated objects, visual recognition of actions, and recognition of environmental sounds, we found evidence for multisensory audiovisual event-selective activation bilaterally at the junction of the posterior middle temporal gyrus and the lateral occipital cortex, the left superior temporal sulcus, and bilaterally in the intraparietal sulcus. The results suggest that recognition of events through convergence of visual and auditory cues is accomplished through a network of brain regions that was previously implicated only in visual recognition of action.
Affiliation(s)
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, United States
47
Meyer GF, Greenlee M, Wuerger S. Interactions between auditory and visual semantic stimulus classes: evidence for common processing networks for speech and body actions. J Cogn Neurosci 2010; 23:2291-308. [PMID: 20954938] [DOI: 10.1162/jocn.2010.21593]
Abstract
Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.
Affiliation(s)
- Georg F Meyer
- School of Psychology, Liverpool University, Eleanor Rathbone Building, Liverpool, United Kingdom
48
Auditory-visual multisensory interactions in humans: timing, topography, directionality, and sources. J Neurosci 2010; 30:12572-80. [PMID: 20861363] [DOI: 10.1523/jneurosci.1099-10.2010]
Abstract
Current models of brain organization include multisensory interactions at early processing stages and within low-level, including primary, cortices. Embracing this model with regard to auditory-visual (AV) interactions in humans remains problematic. Controversy surrounds the application of an additive model to the analysis of event-related potentials (ERPs), and conventional ERP analysis methods have yielded discordant latencies of effects and permitted limited neurophysiologic interpretability. While hemodynamic imaging and transcranial magnetic stimulation studies provide general support for the above model, the precise timing, superadditive/subadditive directionality, topographic stability, and sources remain unresolved. We recorded ERPs in humans to attended but task-irrelevant stimuli that did not require an overt motor response, thereby circumventing these paradigmatic caveats. We applied novel ERP signal analysis methods to provide details concerning the likely bases of AV interactions. First, nonlinear interactions occur at 60-95 ms after stimulus and are the consequence of topographic, rather than pure strength, modulations in the ERP. AV stimuli engage distinct configurations of intracranial generators, rather than simply modulating the amplitude of unisensory responses. Second, source estimations (and statistical analyses thereof) identified primary visual, primary auditory, and posterior superior temporal regions as mediating these effects. Finally, scalar values of current densities in all of these regions exhibited functionally coupled, subadditive nonlinear effects, a pattern increasingly consistent with the mounting evidence in nonhuman primates. In these ways, we demonstrate how neurophysiologic bases of multisensory interactions can be noninvasively identified in humans, allowing for a synthesis across imaging methods on the one hand and species on the other.
49
Cortical integration of audio-visual speech and non-speech stimuli. Brain Cogn 2010; 74:97-106. [PMID: 20709442] [DOI: 10.1016/j.bandc.2010.07.002]
Abstract
Using fMRI we investigated the neural basis of audio-visual processing of speech and non-speech stimuli using physically similar auditory stimuli (speech and sinusoidal tones) and visual stimuli (animated circles and ellipses). Relative to uni-modal stimuli, the different multi-modal stimuli showed increased activation in largely non-overlapping areas. Ellipse-Speech, which most resembles naturalistic audio-visual speech, showed higher activation in the right inferior frontal gyrus, fusiform gyri, left posterior superior temporal sulcus, and lateral occipital cortex. Circle-Tone, an arbitrary audio-visual pairing with no speech association, activated middle temporal gyri and lateral occipital cortex. Circle-Speech showed activation in lateral occipital cortex, and Ellipse-Tone did not show increased activation relative to uni-modal stimuli. Further analysis revealed that middle temporal regions, although identified as multi-modal only in the Circle-Tone condition, were more strongly active to Ellipse-Speech or Circle-Speech, but regions that were identified as multi-modal for Ellipse-Speech were always strongest for Ellipse-Speech. Our results suggest that combinations of auditory and visual stimuli may together be processed by different cortical networks, depending on the extent to which multi-modal speech or non-speech percepts are evoked.
50
Kim S, James TW. Enhanced effectiveness in visuo-haptic object-selective brain regions with increasing stimulus salience. Hum Brain Mapp 2010; 31:678-93. [PMID: 19830683] [DOI: 10.1002/hbm.20897]
Abstract
The occipital and parietal lobes contain regions that are recruited for both visual and haptic object processing. The purpose of the present study was to characterize the underlying neural mechanisms for bimodal integration of vision and haptics in these visuo-haptic object-selective brain regions to find out whether these brain regions are sites of neuronal or areal convergence. Our sensory conditions consisted of visual-only (V), haptic-only (H), and visuo-haptic (VH), which allowed us to evaluate integration using the superadditivity metric. We also presented each stimulus condition at two different levels of signal-to-noise ratio or salience. The salience manipulation allowed us to assess integration using the rule of inverse effectiveness. We were able to localize previously described visuo-haptic object-selective regions in the lateral occipital cortex (lateral occipital tactile-visual area) and the intraparietal sulcus, and also localized a new region in the left anterior fusiform gyrus. There was no evidence of superadditivity with the VH stimulus at either level of salience in any of the regions. There was, however, a strong effect of salience on multisensory enhancement: the response to the VH stimulus was more enhanced at higher salience across all regions. In other words, the regions showed enhanced integration of the VH stimulus with increasing effectiveness of the unisensory stimuli. We called the effect "enhanced effectiveness." The presence of enhanced effectiveness in visuo-haptic object-selective brain regions demonstrates neuronal convergence of visual and haptic sensory inputs for the purpose of processing object shape.
Affiliation(s)
- Sunah Kim
- Cognitive Science Program, Indiana University, Bloomington, Indiana 47405, USA