1
DeYoe EA, Huddleston W, Greenberg AS. Are neuronal mechanisms of attention universal across human sensory and motor brain maps? Psychon Bull Rev 2024. PMID: 38587756. DOI: 10.3758/s13423-024-02495-3.
Abstract
One's experience of shifting attention from the color to the smell to the act of picking a flower seems like a unitary process applied, at will, to one modality after another. Yet, the unique and separable experiences of sight versus smell versus movement might suggest that the neural mechanisms of attention have been separately optimized to employ each modality to its greatest advantage. Moreover, addressing the issue of universality can be particularly difficult due to a paucity of existing cross-modal comparisons and a dearth of neurophysiological methods that can be applied equally well across disparate modalities. Here we outline some of the conceptual and methodological issues related to this problem and present an instructive example of an experimental approach that can be applied widely throughout the human brain to permit detailed, quantitative comparison of attentional mechanisms across modalities. The ultimate goal is to spur efforts across disciplines to provide a large and varied database of empirical observations that will either support the notion of a universal neural substrate for attention or more clearly identify the degree to which attentional mechanisms are specialized for each modality.
Affiliation(s)
- Edgar A DeYoe
- Department of Radiology, Medical College of Wisconsin, 8701 Watertown Plank Rd, Milwaukee, WI, 53226, USA
- Signal Mountain, USA
- Wendy Huddleston
- School of Rehabilitation Sciences and Technology, College of Health Professions and Sciences, University of Wisconsin - Milwaukee, 3409 N. Downer Ave, Milwaukee, WI, 53211, USA
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI, 53226, USA
2
Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. PMID: 38010315. DOI: 10.1162/jocn_a_02090.
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced RT benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
- Ying Fang
- South China Normal University, Guangzhou, China
- Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
- South China Normal University, Guangzhou, China
- Qi Chen
- South China Normal University, Guangzhou, China
3
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. PMID: 36562994. PMCID: PMC10183742. DOI: 10.1093/cercor/bhac501.
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to sequences rated most and least musical, and the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI with a model generated from behavioral musicality ratings as well as models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS was correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
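The Representational Similarity Analysis step described in this abstract can be sketched as follows. All data, the ROI size, and the rating scale here are hypothetical illustrations, not the authors' pipeline: a neural dissimilarity matrix from voxel patterns is compared against a behavioral model built from musicality ratings.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: 20 tone sequences x 50 voxels (one ROI),
# plus one behavioral musicality rating per sequence.
patterns = rng.standard_normal((20, 50))
ratings = rng.uniform(1, 7, size=20)

# Neural RDM: pairwise correlation distance between activation patterns.
neural_rdm = squareform(pdist(patterns, metric="correlation"))

# Model RDM: pairwise absolute difference in musicality ratings.
model_rdm = np.abs(ratings[:, None] - ratings[None, :])

# Compare RDMs via Spearman correlation of their upper triangles.
iu = np.triu_indices(20, k=1)
rho, p = spearmanr(neural_rdm[iu], model_rdm[iu])
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```

With real data, this correlation is typically computed per ROI and compared across candidate models (low-level feature models vs. the behavioral model), as the abstract describes.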
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
4
Brain–computer interface in an inter-individual approach using spatial coherence: Identification of better channels and tests repetition using auditory selective attention. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104573.
5
Kiremitçi I, Yilmaz Ö, Çelik E, Shahdloo M, Huth AG, Çukur T. Attentional Modulation of Hierarchical Speech Representations in a Multitalker Environment. Cereb Cortex 2021; 31:4986-5005. PMID: 34115102. PMCID: PMC8491717. DOI: 10.1093/cercor/bhab136.
Abstract
Humans are remarkably adept at listening to a desired speaker in a crowded environment while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear across what levels of speech features, and to what degree, attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories or selectively attended to a male or a female speaker in temporally overlaid stories in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations while growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insights into attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.
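The voxelwise modeling logic in this abstract (fit an encoding model on passive-listening data, then compare its predictions for the attended vs. unattended story) can be sketched roughly as below. The dimensions, the ridge penalty, and the synthetic data are assumptions for illustration, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 300 fMRI time points, 40 speech features
# (spectral/articulatory/semantic), a single voxel.
X = rng.standard_normal((300, 40))          # passive-listening features
w_true = rng.standard_normal(40)
y = X @ w_true + rng.standard_normal(300)   # voxel BOLD response

# Fit a voxelwise ridge regression on passive-listening data.
lam = 10.0
w = np.linalg.solve(X.T @ X + lam * np.eye(40), X.T @ y)

def pred_corr(X_test, y_test, w):
    """Prediction accuracy: correlation between predicted and measured BOLD."""
    return np.corrcoef(X_test @ w, y_test)[0, 1]

# Attentional modulation: how much better the model predicts responses
# from the attended story's features than from the unattended story's.
X_att = X + 0.1 * rng.standard_normal(X.shape)   # attended-story features
X_unatt = rng.standard_normal(X.shape)           # unattended-story features
modulation = pred_corr(X_att, y, w) - pred_corr(X_unatt, y, w)
print(f"attentional modulation index: {modulation:.3f}")
```

In practice this is repeated for every voxel and for each feature space separately, which is what allows modulation to be localized to spectral, articulatory, or semantic levels of representation.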
Affiliation(s)
- Ibrahim Kiremitçi
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Özgür Yilmaz
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Emin Çelik
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Mo Shahdloo
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, UK
- Alexander G Huth
- Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
- Tolga Çukur
- Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
6
Wikman P, Sahari E, Salmela V, Leminen A, Leminen M, Laine M, Alho K. Breaking down the cocktail party: Attentional modulation of cerebral audiovisual speech processing. Neuroimage 2020; 224:117365. PMID: 32941985. DOI: 10.1016/j.neuroimage.2020.117365.
Abstract
Recent studies utilizing electrophysiological speech envelope reconstruction have sparked renewed interest in the cocktail party effect by showing that auditory neurons entrain to selectively attended speech. Yet, the neural networks of attention to speech in naturalistic audiovisual settings with multiple sound sources remain poorly understood. We collected functional brain imaging data while participants viewed audiovisual video clips of lifelike dialogues with concurrent distracting speech in the background. Dialogues were presented in a full-factorial design, comprising task (listen to the dialogues vs. ignore them), audiovisual quality and semantic predictability. We used univariate analyses in combination with multivariate pattern analysis (MVPA) to study modulations of brain activity related to attentive processing of audiovisual speech. We found attentive speech processing to cause distinct spatiotemporal modulation profiles in distributed cortical areas including sensory and frontal-control networks. Semantic coherence modulated attention-related activation patterns in the earliest stages of auditory cortical processing, suggesting that the auditory cortex is involved in high-level speech processing. Our results corroborate views that emphasize the dynamic nature of attention, with task-specificity and context as cornerstones of the underlying neuro-cognitive mechanisms.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Elisa Sahari
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Alina Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Department of Digital Humanities, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Department of Phoniatrics, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
7
Gurariy G, Randall R, Greenberg AS. Manipulation of low-level features modulates grouping strength of auditory objects. Psychol Res 2020; 85:2256-2270. PMID: 32691138. DOI: 10.1007/s00426-020-01391-4.
Abstract
A central challenge of auditory processing involves the segregation, analysis, and integration of acoustic information into auditory perceptual objects for processing by higher order cognitive operations. This study explores the influence of low-level features on auditory object perception. Participants provided perceived musicality ratings in response to randomly generated pure tone sequences. Previous work has shown that music perception relies on the integration of discrete sounds into a holistic structure. Hence, high (versus low) ratings were viewed as indicative of strong (versus weak) object formation. Additionally, participants rated sequences in which random subsets of tones were manipulated along one of three low-level dimensions (timbre, amplitude, or fade-in) at one of three strengths (low, medium, or high). Our primary findings demonstrate how low-level acoustic features modulate the perception of auditory objects, as measured by changes in musicality ratings for manipulated sequences. Secondarily, we used principal component analysis to categorize participants into subgroups based on differential sensitivities to low-level auditory dimensions, thereby highlighting the importance of individual differences in auditory perception. Finally, we report asymmetries regarding the effects of low-level dimensions; specifically, the perceptual significance of timbre. Together, these data contribute to our understanding of how low-level auditory features modulate auditory object perception.
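A rough sketch of how PCA can be used to subgroup participants by their differential sensitivities, as this abstract describes. The sensitivity scores and the sign-based grouping rule below are illustrative assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 30 participants x 3 sensitivity scores
# (timbre, amplitude, fade-in), e.g. derived from rating changes.
scores = rng.standard_normal((30, 3))

# PCA via SVD on the mean-centered score matrix.
centered = scores - scores.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = centered @ Vt.T          # participant loadings on each PC

# Illustrative subgrouping: split participants by the sign of their
# loading on the first principal component.
subgroup = (components[:, 0] > 0).astype(int)
var_explained = S**2 / (S**2).sum()
print(f"PC1 explains {var_explained[0]:.0%} of variance")
```

Inspecting `Vt[0]` (the first component's weights on timbre, amplitude, and fade-in) then shows which low-level dimension drives the split between subgroups.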
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, USA
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
8
Kawata NYS, Hashimoto T, Kawashima R. Neural mechanisms underlying concurrent listening of simultaneous speech. Brain Res 2020; 1738:146821. PMID: 32259518. DOI: 10.1016/j.brainres.2020.146821.
Abstract
Can we identify what two people are saying at the same time? Although it is difficult to perfectly repeat two or more simultaneous messages, listeners can report information from both speakers. In a concurrent/divided listening task, enhanced attention and segregation of speech can be required rather than selection and suppression. However, the neural mechanisms of concurrent listening to multi-speaker speech have yet to be clarified. The present study utilized functional magnetic resonance imaging to examine the neural responses of healthy young adults listening to concurrent male and female speakers in an attempt to reveal the mechanism of concurrent listening. After practice and multiple trials testing concurrent listening, 31 participants achieved performance comparable with that of selective listening. Furthermore, compared to selective listening, concurrent listening induced greater activation in the anterior cingulate cortex, bilateral anterior insula, frontoparietal regions, and the periaqueductal gray region. In addition to the salience network for multi-speaker listening, attentional modulation and enhanced segregation of these signals could be used to achieve successful concurrent listening. These results indicate the presence of a potential mechanism by which one can listen to two voices with enhanced attention to saliency signals.
Affiliation(s)
- Natasha Yuriko Santos Kawata
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Teruo Hashimoto
- Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Ryuta Kawashima
- Department of Functional Brain Imaging, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
- Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer (IDAC), Tohoku University, Japan
9
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. PMID: 31899288. DOI: 10.1016/j.neuroimage.2019.116509.
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
10
Halder S, Leinfelder T, Schulz SM, Kübler A. Neural mechanisms of training an auditory event-related potential task in a brain-computer interface context. Hum Brain Mapp 2019; 40:2399-2412. PMID: 30693612. DOI: 10.1002/hbm.24531.
Abstract
Effective use of brain-computer interfaces (BCIs) typically requires training. Improved understanding of the neural mechanisms underlying BCI training will facilitate optimisation of BCIs. The current study examined the neural mechanisms related to training for electroencephalography (EEG)-based communication with an auditory event-related potential (ERP) BCI. Neural mechanisms of training in 10 healthy volunteers were assessed with functional magnetic resonance imaging (fMRI) during an auditory ERP-based BCI task before (t1) and after (t5) three ERP-BCI training sessions outside the fMRI scanner (t2, t3, and t4). Attended stimuli were contrasted with ignored stimuli in the first-level fMRI data analysis (t1 and t5); the training effect was verified using the EEG data (t2-t4); and brain activation was contrasted before and after training in the second-level fMRI data analysis (t1 vs. t5). Training increased the communication speed from 2.9 bits/min (t2) to 4 bits/min (t4). Strong activation was found in the putamen, supplementary motor area (SMA), and superior temporal gyrus (STG) associated with attention to the stimuli. Training led to decreased activation in the superior frontal gyrus and stronger haemodynamic rebound in the STG and supramarginal gyrus. The neural mechanisms of ERP-BCI training indicate improved stimulus perception and reduced mental workload. The ERP task used in the current study showed overlapping activations with a motor imagery based BCI task from a previous study on the neural mechanisms of BCI training in the SMA and putamen. This suggests commonalities between the neural mechanisms of training for both BCI paradigms.
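The bits/min figures above are a standard BCI communication-speed measure; a common way to compute such a rate is the Wolpaw information transfer rate. The sketch below uses that conventional formula with hypothetical parameters (class count, accuracy, and selection rate are illustrative, not the study's values):

```python
import math

def wolpaw_itr_bits_per_min(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate, in bits/min.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    p, n = accuracy, n_classes
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy: full log2(N) bits
    elif p <= 0.0:
        bits = 0.0
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min

# Hypothetical example: 6 classes, 85% accuracy, 2 selections per minute.
print(round(wolpaw_itr_bits_per_min(6, 0.85, 2.0), 2))
```

Training gains like the reported 2.9 to 4 bits/min can come from either higher accuracy or faster selections; the formula makes both contributions explicit.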
Affiliation(s)
- Sebastian Halder
- School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Human-Computer Interaction, University of Würzburg, Würzburg, Germany
- Department of Molecular Medicine, University of Oslo, Oslo, Norway
- Stefan M Schulz
- Institute of Psychology, University of Würzburg, Würzburg, Germany
- Clinical Psychology, Psychotherapy, and Experimental Psychopathology, Johannes Gutenberg University, Mainz, Germany
- Andrea Kübler
- Institute of Psychology, University of Würzburg, Würzburg, Germany
11
Jeong E, Ryu H, Shin JH, Kwon GH, Jo G, Lee JY. High Oxygen Exchange to Music Indicates Auditory Distractibility in Acquired Brain Injury: An fNIRS Study with a Vector-Based Phase Analysis. Sci Rep 2018; 8:16737. PMID: 30425287. PMCID: PMC6233191. DOI: 10.1038/s41598-018-35172-2.
Abstract
Attention deficits due to auditory distractibility are pervasive among patients with acquired brain injury (ABI). It remains unclear, however, whether attention deficits following ABI specific to auditory modality are associated with altered haemodynamic responses. Here, we examined cerebral haemodynamic changes using functional near-infrared spectroscopy combined with a topological vector-based analysis method. A total of thirty-seven participants (22 healthy adults, 15 patients with ABI) performed a melodic contour identification task (CIT) that simulates auditory distractibility. Findings demonstrated that the melodic CIT was able to detect auditory distractibility in patients with ABI. The rate-corrected score showed that the ABI group performed significantly worse than the non-ABI group in both CIT1 (target contour identification against environmental sounds) and CIT2 (target contour identification against target-like distraction). Phase-associated response intensity during the CITs was greater in the ABI group than in the non-ABI group. Moreover, there existed a significant interaction effect in the left dorsolateral prefrontal cortex (DLPFC) during CIT1 and CIT2. These findings indicated that stronger hemodynamic responses involving oxygen exchange in the left DLPFC can serve as a biomarker for evaluating and monitoring auditory distractibility, which could potentially lead to the discovery of the underlying mechanism that causes auditory attention deficits in patients with ABI.
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Division of Industrial Information Studies, Hanyang University, Seoul, 04763, Republic of Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Graduate School of Technology and Innovation Management, Hanyang University, Seoul, 04763, Republic of Korea
- Joon-Ho Shin
- Department of Neurorehabilitation, National Rehabilitation Center, Ministry of Health and Welfare, Seoul, 01022, Republic of Korea
- Gyu Hyun Kwon
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Graduate School of Technology and Innovation Management, Hanyang University, Seoul, 04763, Republic of Korea
- Geonsang Jo
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Ji-Yeong Lee
- Department of Neurorehabilitation, National Rehabilitation Center, Ministry of Health and Welfare, Seoul, 01022, Republic of Korea
12
O'Regan L, Serrien DJ. Individual Differences and Hemispheric Asymmetries for Language and Spatial Attention. Front Hum Neurosci 2018; 12:380. PMID: 30337864. PMCID: PMC6180149. DOI: 10.3389/fnhum.2018.00380.
Abstract
Language and spatial processing are cognitive functions that are asymmetrically distributed across both cerebral hemispheres. In the present study, we compare left- and right-handers on word comprehension using a divided visual field paradigm and on spatial attention using a landmark task. We investigate hemispheric asymmetries by assessing the participants' behavioral metrics: response accuracy, reaction time, and laterality index. The data showed that right-handers benefitted more from left-hemispheric lateralization for language comprehension and right-hemispheric lateralization for spatial attention than left-handers. Furthermore, left-handers demonstrated a more variable distribution across both hemispheres, supporting a less focal profile of functional brain organization. Taken together, the results underline that handedness distinctively modulates hemispheric processing and behavioral performance during verbal and nonverbal tasks. In particular, typical lateralization is most prevalent for right-handers whereas atypical lateralization is more evident for left-handers. These insights contribute to the understanding of individual variation of brain asymmetries and the mechanisms related to changes in cerebral dominance.
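A laterality index of the kind mentioned above is commonly computed as (L - R) / (L + R), giving a value in [-1, 1]. The study's exact formula is not stated here, so the sketch below is a generic illustration with hypothetical accuracy scores:

```python
def laterality_index(left, right):
    """Laterality index in [-1, 1].

    Positive values indicate left-side (left-hemifield / left-hemisphere)
    dominance, negative values right-side dominance, using the common
    (L - R) / (L + R) normalization.
    """
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

# Hypothetical accuracies for left- vs right-visual-field word trials:
print(laterality_index(0.9, 0.6))   # left-hemifield advantage
print(laterality_index(0.5, 0.5))   # no asymmetry
```

The same index form works for reaction times or activation counts, as long as the two inputs are on the same scale.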
Affiliation(s)
- Louise O'Regan
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Deborah J Serrien
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
13
Wiegand K, Heiland S, Uhlig CH, Dykstra AR, Gutschalk A. Cortical networks for auditory detection with and without informational masking: Task effects and implications for conscious perception. Neuroimage 2018; 167:178-190. DOI: 10.1016/j.neuroimage.2017.11.036.
14
Alemi R, Batouli SAH, Behzad E, Ebrahimpoor M, Oghabian MA. Not single brain areas but a network is involved in language: Applications in presurgical planning. Clin Neurol Neurosurg 2018; 165:116-128. PMID: 29334640. DOI: 10.1016/j.clineuro.2018.01.009.
Abstract
OBJECTIVES Language is an important human function and a determinant of quality of life. Brain lesions can disrupt language, and lesion resection is one solution; presurgical planning that maps language-related brain areas enhances the chances of preserving language after the operation, but it requires a normative language template. PATIENTS AND METHODS Using fMRI data from 60 young individuals who were meticulously screened for mental and physical health, together with robust imaging and data analysis methods, we produced functional brain maps for language production, perception, and semantics. RESULTS The obtained templates showed that language should be considered the product of a collaborating network of brain regions, rather than of only a few isolated areas. CONCLUSION This study has important clinical applications and extends our knowledge of the neuroanatomy of language.
Collapse
Affiliation(s)
- Razieh Alemi
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran; Department of Otorhinolaryngology, Faculty of Medicine, McGill University, Canada
- Seyed Amir Hossein Batouli
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran; Neuroimaging and Analysis Group, Tehran University of Medical Sciences, Tehran, Iran
- Ebrahim Behzad
- Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mitra Ebrahimpoor
- Neuroimaging and Analysis Group, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Ali Oghabian
- Neuroimaging and Analysis Group, Tehran University of Medical Sciences, Tehran, Iran; Medical Physics and Biomedical Engineering Department, Tehran University of Medical Sciences, Tehran, Iran
15
The Right Temporoparietal Junction Supports Speech Tracking During Selective Listening: Evidence from Concurrent EEG-fMRI. J Neurosci 2017; 37:11505-11516. [PMID: 29061698] [DOI: 10.1523/jneurosci.1007-17.2017]
Abstract
Listening selectively to one out of several competing speakers in a "cocktail party" situation is a highly demanding task. It relies on a widespread cortical network, including auditory sensory, but also frontal and parietal brain regions involved in controlling auditory attention. Previous work has shown that, during selective listening, ongoing neural activity in auditory sensory areas is dominated by the attended speech stream, whereas competing input is suppressed. The relationship between these attentional modulations in the sensory tracking of the attended speech stream and frontoparietal activity during selective listening is, however, not understood. We studied this question in young, healthy human participants (both sexes) using concurrent EEG-fMRI and a sustained selective listening task, in which one out of two competing speech streams had to be attended selectively. An EEG-based speech envelope reconstruction method was applied to assess the strength of the cortical tracking of the to-be-attended and the to-be-ignored stream during selective listening. Our results show that individual speech envelope reconstruction accuracies obtained for the to-be-attended speech stream were positively correlated with the amplitude of sustained BOLD responses in the right temporoparietal junction, a core region of the ventral attention network. This brain region further showed task-related functional connectivity to secondary auditory cortex and regions of the frontoparietal attention network, including the intraparietal sulcus and the inferior frontal gyrus. This suggests that the right temporoparietal junction is involved in controlling attention during selective listening, allowing for a better cortical tracking of the attended speech stream.

SIGNIFICANCE STATEMENT Listening selectively to one out of several simultaneously talking speakers in a "cocktail party" situation is a highly demanding task.
It activates a widespread network of auditory sensory and hierarchically higher frontoparietal brain regions. However, how these different processing levels interact during selective listening is not understood. Here, we investigated this question using fMRI and concurrently acquired scalp EEG. We found that activation levels in the right temporoparietal junction correlate with the sensory representation of a selectively attended speech stream. In addition, this region showed significant functional connectivity to both auditory sensory and other frontoparietal brain areas during selective listening. This suggests that the right temporoparietal junction contributes to controlling selective auditory attention in "cocktail party" situations.
16
Sensory-Biased and Multiple-Demand Processing in Human Lateral Frontal Cortex. J Neurosci 2017; 37:8755-8766. [PMID: 28821668] [DOI: 10.1523/jneurosci.0660-17.2017]
Abstract
The functionality of much of human lateral frontal cortex (LFC) has been characterized as "multiple demand" (MD) as these regions appear to support a broad range of cognitive tasks. In contrast to this domain-general account, recent evidence indicates that portions of LFC are consistently selective for sensory modality. Michalka et al. (2015) reported two bilateral regions that are biased for visual attention, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), interleaved with two bilateral regions that are biased for auditory attention, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). In the present study, we use fMRI to examine both the multiple-demand and sensory-bias hypotheses within caudal portions of human LFC (both men and women participated). Using visual and auditory 2-back tasks, we replicate the finding of two bilateral visual-biased and two bilateral auditory-biased LFC regions, corresponding to sPCS and iPCS and to tgPCS and cIFS, and demonstrate high within-subject reliability of these regions over time and across tasks. In addition, we assess MD responsiveness using BOLD signal recruitment and multi-task activation indices. In both, we find that the two visual-biased regions, sPCS and iPCS, exhibit stronger MD responsiveness than do the auditory-biased LFC regions, tgPCS and cIFS; however, neither reaches the degree of MD responsiveness exhibited by dorsal anterior cingulate/presupplemental motor area or by anterior insula. These results reconcile two competing views of LFC by demonstrating the coexistence of sensory specialization and MD functionality, especially in visual-biased LFC structures.

SIGNIFICANCE STATEMENT Lateral frontal cortex (LFC) is known to play a number of critical roles in supporting human cognition; however, the functional organization of LFC remains controversial.
The "multiple demand" (MD) hypothesis suggests that LFC regions provide domain-general support for cognition. Recent evidence challenges the MD view by demonstrating that a preference for sensory modality, vision or audition, defines four discrete LFC regions. Here, the sensory-biased LFC results are reproduced using a new task, and MD responsiveness of these regions is tested. The two visual-biased regions exhibit MD behavior, whereas the auditory-biased regions have no more than weak MD responses. These findings help to reconcile two competing views of LFC functional organization.
17
Uhlig CH, Gutschalk A. Transient human auditory cortex activation during volitional attention shifting. PLoS One 2017; 12:e0172907. [PMID: 28273110] [PMCID: PMC5342206] [DOI: 10.1371/journal.pone.0172907]
Abstract
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues.
Collapse
Affiliation(s)
- Christian Harm Uhlig
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
18
Braga RM, Hellyer PJ, Wise RJS, Leech R. Auditory and visual connectivity gradients in frontoparietal cortex. Hum Brain Mapp 2016; 38:255-270. [PMID: 27571304] [PMCID: PMC5215394] [DOI: 10.1002/hbm.23358]
Abstract
A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- Rodrigo M Braga
- Center for Brain Science, Harvard University, Cambridge, Massachusetts; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Charlestown, Massachusetts; The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
- Peter J Hellyer
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom; Centre for Neuroimaging Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Richard J S Wise
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
- Robert Leech
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, United Kingdom
19
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907] [PMCID: PMC4904015] [DOI: 10.3389/fnagi.2016.00134]
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger (p < 0.05 for Δpre-on task; p < 0.01 for Δon-post task) rather than the older adult group (n.s. for Δpre-on task; n.s. for Δon-post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
Collapse
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
20
Braga RM, Fu RZ, Seemungal BM, Wise RJS, Leech R. Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity. Front Hum Neurosci 2016; 10:164. [PMID: 27242465] [PMCID: PMC4860869] [DOI: 10.3389/fnhum.2016.00164]
Abstract
The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and non-spatial listening task was used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye-movements dissociated between the SPL and FEF. However these observed eye movements could not account for all the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention is through covert crossmodal processes, or is directly involved in the manipulation of auditory information.
Collapse
Affiliation(s)
- Rodrigo M Braga
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK; Center for Brain Science, Harvard University, Cambridge, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Richard Z Fu
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Barry M Seemungal
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Richard J S Wise
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
- Robert Leech
- Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Imperial College London, Hammersmith Hospital Campus, London, UK
21
Leaver AM, Seydell-Greenwald A, Rauschecker JP. Auditory-limbic interactions in chronic tinnitus: Challenges for neuroimaging research. Hear Res 2015; 334:49-57. [PMID: 26299843] [DOI: 10.1016/j.heares.2015.08.005]
Abstract
Tinnitus is a widespread auditory disorder affecting approximately 10-15% of the population, often with debilitating consequences. Although tinnitus commonly begins with damage to the auditory system due to loud-noise exposure, aging, or other etiologies, the exact neurophysiological basis of chronic tinnitus remains unknown. Many researchers point to a central auditory origin of tinnitus; however, a growing body of evidence also implicates other brain regions, including the limbic system. Correspondingly, we and others have proposed models of tinnitus in which the limbic and auditory systems both play critical roles and interact with one another. Specifically, we argue that damage to the auditory system generates an initial tinnitus signal, consistent with previous research. In our model, this "transient" tinnitus is suppressed when a limbic frontostriatal network, comprised of ventromedial prefrontal cortex and ventral striatum, successfully modulates thalamocortical transmission in the auditory system. Thus, in chronic tinnitus, limbic-system damage and resulting inefficiency of auditory-limbic interactions prevents proper compensation of the tinnitus signal. Neuroimaging studies utilizing connectivity methods like resting-state fMRI and diffusion MRI continue to uncover tinnitus-related anomalies throughout auditory, limbic, and other brain systems. However, directly assessing interactions between these brain regions and networks has proved to be more challenging. Here, we review existing empirical support for models of tinnitus stressing a critical role for involvement of "non-auditory" structures in tinnitus pathophysiology, and discuss the possible impact of newly refined connectivity techniques from neuroimaging on tinnitus research.
Collapse
Affiliation(s)
- Amber M Leaver
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Department of Neurology, University of California Los Angeles, Los Angeles, CA, USA
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, TUM, Munich, Germany
22
Plakke B, Romanski LM. Auditory connections and functions of prefrontal cortex. Front Neurosci 2014; 8:199. [PMID: 25100931] [PMCID: PMC4107948] [DOI: 10.3389/fnins.2014.00199]
Abstract
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.
Collapse
Affiliation(s)
- Bethany Plakke
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Lizabeth M Romanski
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
23
Nataf S. The sensory immune system: a neural twist to the antigenic discontinuity theory. Nat Rev Immunol 2014; 14:280. [PMID: 24662388] [DOI: 10.1038/nri3521-c1]
Affiliation(s)
- Serge Nataf
- Lyon Neuroscience Research Center, INSERM 1028 CNRS UMR5292, University Lyon-1, Banque de tissus et de cellules, Hôpital Edouard Herriot, Lyon University Hospital (Hospices Civils de Lyon), Lyon F-69000, France
24
Bilodeau-Mercure M, Lortie CL, Sato M, Guitton MJ, Tremblay P. The neurobiology of speech perception decline in aging. Brain Struct Funct 2014; 220:979-97. [PMID: 24402675] [DOI: 10.1007/s00429-013-0695-3]
Abstract
Speech perception difficulties are common among older adults, yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor to these difficulties has challenged the traditional view that peripheral hearing loss is the main factor in their etiology. Here, we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right supplementary motor area (SMA). Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal modulated the relationship between age and speech perception skills, suggesting a role for this region in maintaining speech perception in older ages. These results provide important new insights into the neurobiology of speech perception in aging.
Collapse
Affiliation(s)
- Mylène Bilodeau-Mercure
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Quebec City, QC, G1J 2G3, Canada