1
McMullin MA, Kumar R, Higgins NC, Gygi B, Elhilali M, Snyder JS. Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception. Open Mind (Camb) 2024; 8:333-365. [PMID: 38571530 PMCID: PMC10990578 DOI: 10.1162/opmi_a_00131]
Abstract
Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within them, resembling a detail-oriented processing style. However, a more global process may also operate during scene analysis, as has been evidenced in the visual domain. To our knowledge, a similar line of research has not been pursued in the auditory domain; we therefore evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using, and making available, a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed), and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and the average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance. Regression analyses revealed that each global property was predicted by at least one acoustic variable (R2 = 0.33-0.87). We extended these findings using deep neural network models, examining correlations between human ratings of global properties and deep embeddings from two computational models: an object-based model and a scene-based model. The results indicate that participants' ratings are more strongly explained by a global analysis of the scene setting, although the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting that representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed for the ventral visual stream. These findings, and the open availability of our scene collection, will make future studies on perception, attention, and memory for natural auditory scenes possible.
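For readers who want to prototype this kind of analysis, the sketch below shows an EFA-plus-regression pipeline in Python with scikit-learn. It is a minimal illustration under assumed shapes (200 scenes, 20 acoustic measures, 8 global properties) and random placeholder data, not the authors' materials or code.

```python
# Minimal sketch of an EFA + regression pipeline; data are random
# placeholders, and only the factor counts mirror the abstract.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
acoustic = rng.normal(size=(200, 20))  # scenes x acoustic measures (assumed)
ratings = rng.normal(size=(200, 8))    # scenes x 8 global-property ratings

# Separate exploratory factor analyses, as described in the abstract:
# seven factors for the acoustic measures, two for the global properties.
acoustic_factors = FactorAnalysis(n_components=7, rotation="varimax").fit_transform(acoustic)
rating_factors = FactorAnalysis(n_components=2, rotation="varimax").fit_transform(ratings)

# Regress each global property on the acoustic measures and report R^2.
for prop in range(ratings.shape[1]):
    fit = LinearRegression().fit(acoustic, ratings[:, prop])
    print(f"property {prop}: R^2 = {fit.score(acoustic, ratings[:, prop]):.2f}")
```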
Affiliation(s)
- Rohit Kumar
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Nathan C. Higgins
- Department of Communication Sciences & Disorders, University of South Florida, Tampa, FL, USA
- Brian Gygi
- East Bay Institute for Research and Education, Martinez, CA, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
2
Lee S, Kim KR, Lee W. Exploring the link between pediatric headaches and environmental noise exposure. BMC Pediatr 2024; 24:94. [PMID: 38308216 PMCID: PMC10835846 DOI: 10.1186/s12887-023-04490-4]
Abstract
BACKGROUND Headaches are the most common neurologic symptom in the pediatric population. Most work on primary headache in children and adolescents focuses on associated factors, including noise, and auditory discomfort is related to the perception of pain. We aimed to characterize the headache profiles of a pediatric population and the connection between noise exposure and head pain in children and adolescents. METHODS We retrospectively reviewed medical records of pediatric patients with headaches seen at Gyeongsang National University Changwon Hospital from January 2022 to April 2023. Personal headache profiles from self-administered questionnaires and environmental noise data from the National Noise Information System (NNIS) were analyzed; chi-square tests and linear regression models in SAS were used to assess statistical associations. RESULTS Of the 224 participants, 125 were clinically diagnosed with headaches. Of the 104 pubertal subjects, 56.7% were diagnosed with headaches, compared with 60% in the prepubertal group. Both daytime and nighttime noise levels were significantly higher in the diagnosed headache group than in the non-diagnosed group. Headache duration increased with daytime and nighttime noise, with statistical significance in age-adjusted models. CONCLUSION We found that noise exposure is correlated with headaches in children and adolescents. In our data, daytime and nighttime environmental noise exposure was significantly associated with headache duration, suggesting that noise exposure may be highly relevant to prolonged headaches in the pediatric population. Further research is needed to confirm these findings.
Affiliation(s)
- Sunho Lee
- Department of Pediatrics, CHA Ilsan Medical Center, CHA University, Goyang, Republic of Korea
- Kyung-Ran Kim
- Department of Pediatrics, Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Wanhyung Lee
- Department of Preventive Medicine, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
3
Zhang J, Zhang G, Li X, Wang P, Wang B, Liu B. Decoding sound categories based on whole-brain functional connectivity patterns. Brain Imaging Behav 2018; 14:100-109. [PMID: 30361945 DOI: 10.1007/s11682-018-9976-z]
Abstract
Sound decoding is important for patients with sensory loss, such as the blind. Previous studies on sound categorization estimated brain activity using univariate analysis or voxel-wise multivariate decoding methods and suggested that some regions are sensitive to auditory categories. It has been proposed that feedback connections between brain areas may facilitate auditory object selection. It is therefore important to explore whether functional connectivity among regions can be used to decode sound category. In this study, we constructed whole-brain functional connectivity patterns while subjects perceived four different sound categories and combined them with multivariate pattern classification analysis for sound decoding. The category-discriminative networks and regions were determined from the weight maps. Results showed that high accuracy in multi-category classification was obtained from the whole-brain functional connectivity patterns, and the results were robust to different preprocessing parameters. Inspection of the category-discriminative functional networks showed that contributive connections crossed the left and right hemispheres and ranged from primary regions to high-level cognitive regions, providing new evidence for the distributed representation of auditory objects. Further analysis of brain regions in the discriminative networks showed that the superior temporal gyrus and Heschl's gyrus contributed significantly to discriminating sound categories. Together, the findings reveal that a functional connectivity-based multivariate classification method provides rich information for auditory category decoding. The successful decoding results implicate the interactive properties of distributed brain areas in the neural representation of sound.
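To make the decoding scheme concrete, here is an illustrative Python sketch of functional-connectivity-based classification: each trial's ROI time series become a vectorized correlation matrix, which a linear SVM then classifies. Trial counts, ROI counts, and labels are invented for the example and do not reflect the study's data.

```python
# Illustrative functional-connectivity decoding; all data are simulated.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_rois, n_time = 80, 90, 120
timeseries = rng.normal(size=(n_trials, n_rois, n_time))
labels = rng.integers(0, 4, size=n_trials)  # four sound categories

# One whole-brain FC pattern per trial: the upper triangle of the
# ROI-by-ROI correlation matrix, vectorized into a feature vector.
iu = np.triu_indices(n_rois, k=1)
fc_patterns = np.array([np.corrcoef(ts)[iu] for ts in timeseries])

# Multi-category classification with cross-validation; the weight map of
# the fitted linear SVM would identify the discriminative connections.
scores = cross_val_score(SVC(kernel="linear"), fc_patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```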
Affiliation(s)
- Jinliang Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, People's Republic of China
4
Abstract
Previous studies have shown that the amygdala is more involved in processing animate categories, such as humans and animals, than inanimate objects, but little is known about whether this animate advantage applies to auditory stimuli. To address this issue, we performed a functional magnetic resonance imaging (fMRI) study with emotion and category as factors, in which subjects heard sounds from different categories (i.e., humans, animals, and objects) in negative and neutral dimensions. Emotional levels and semantic familiarity were matched across categories. The results showed that the amygdala responded more to human vocalizations than to animal vocalizations and sounds of inanimate objects in both negative and neutral valences, and more to animal sounds than to object sounds in the neutral condition. In addition, the amygdala, together with the insula and the right superior temporal sulcus, further distinguished human voices from animal sounds. These data indicate that the amygdala is prepared to respond to animate sources, especially human vocalizations, in the auditory modality.
Affiliation(s)
- Yanbing Zhao
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Qing Sun
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Gang Chen
- Scientific and Statistical Computing Core, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Jiongjiong Yang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
5
Hong KS, Santosa H. Decoding four different sound-categories in the auditory cortex using functional near-infrared spectroscopy. Hear Res 2016; 333:157-166. [PMID: 26828741 DOI: 10.1016/j.heares.2016.01.009]
Abstract
The ability of the auditory cortex to distinguish different sounds is important in daily life. This study investigated whether activations in the auditory cortex caused by different sounds can be distinguished using functional near-infrared spectroscopy (fNIRS). Hemodynamic responses (HRs) in both hemispheres were measured with fNIRS in 18 subjects while they were exposed to four sound categories (English speech, non-English speech, annoying sounds, and nature sounds). The mean, slope, and skewness of the oxy-hemoglobin (HbO) signal were used as features for classifying the different signals. With regard to the language-related stimuli, the HRs evoked by understandable speech (English) were observed over a broader brain region than those evoked by non-English speech, and the magnitudes of the HbO signals evoked by English speech were higher; the ratio of the peak values for non-English versus English speech was 72.5%. Likewise, the brain region evoked by annoying sounds was wider than that evoked by nature sounds, although the signal strength for nature sounds was stronger. Finally, for brain-computer interface (BCI) purposes, linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were applied to the four sound categories. The overall classification performance for the left hemisphere was higher than that for the right hemisphere; for decoding of auditory commands, the left hemisphere is therefore recommended. In two-class classification, the annoying vs. nature sounds comparison provided higher classification accuracy than the English vs. non-English speech comparison, and LDA performed better than SVM.
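The following Python sketch mirrors the feature-and-classifier scheme described above (mean, slope, and skewness of HbO epochs, classified with LDA and SVM). The epoch layout, channel count, and 10 s window are assumptions for illustration, not the paper's acquisition parameters.

```python
# Hedged sketch of fNIRS feature extraction + LDA/SVM classification;
# the HbO epochs here are random stand-ins for real recordings.
import numpy as np
from scipy.stats import skew
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
hbo = rng.normal(size=(72, 8, 100))   # trials x channels x samples (assumed)
labels = np.repeat(np.arange(4), 18)  # four sound categories
t = np.linspace(0.0, 10.0, 100)       # assumed 10 s epoch

def features(epoch):
    """Mean, least-squares slope, and skewness of HbO, per channel."""
    means = epoch.mean(axis=1)
    slopes = np.polyfit(t, epoch.T, 1)[0]  # slope of each channel's linear fit
    skews = skew(epoch, axis=1)
    return np.concatenate([means, slopes, skews])

X = np.array([features(ep) for ep in hbo])
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC())]:
    print(name, f"{cross_val_score(clf, X, labels, cv=5).mean():.2f}")
```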
Affiliation(s)
- Keum-Shik Hong
- Department of Cogno-Mechatronics Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, Republic of Korea; School of Mechanical Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, Republic of Korea.
- Hendrik Santosa
- Department of Cogno-Mechatronics Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan 46241, Republic of Korea
6
Tomasino B, Canderan C, Marin D, Maieron M, Gremese M, D'Agostini S, Fabbro F, Skrap M. Identifying environmental sounds: a multimodal mapping study. Front Hum Neurosci 2015; 9:567. [PMID: 26539096 PMCID: PMC4612670 DOI: 10.3389/fnhum.2015.00567]
Abstract
Our environment is full of auditory events such as warnings or hazards, and their correct recognition is essential. We explored environmental sound (ES) recognition in a series of studies. In study 1 we performed an Activation Likelihood Estimation (ALE) meta-analysis of neuroimaging experiments addressing ES processing, to delineate the network of areas consistently involved in ES processing. Areas consistently activated in the ALE meta-analysis were the STG/MTG, insula/rolandic operculum, parahippocampal gyrus, and inferior frontal gyrus, bilaterally. Some of these areas truly reflect ES processing, whereas others are related to design choices, e.g., the type of task, control condition, or stimulus. In study 2 we report on 7 neurosurgical patients with lesions involving the areas identified by the ALE meta-analysis. We tested their ES recognition abilities and found an impairment of ES recognition. These results indicate that deficits of ES recognition do not exclusively reflect lesions to the right or the left hemisphere; both hemispheres are involved. The most frequently lesioned area was the hippocampus/insula/STG. We made sure that any impairment in ES recognition was not related to language problems but reflected impaired ES processing. In study 3 we carried out an fMRI study on patients (vs. healthy controls) to investigate how the areas involved in ES processing might be functionally deregulated by a lesion. The fMRI results showed that controls activated the right IFG, the STG bilaterally, and the left insula. We applied a multimodal mapping approach and found that, although the meta-analysis showed that part of the left and right STG/MTG activation during ES processing might be related to design choices, this area was among the most frequently lesioned in our patients, highlighting its causal role in ES processing. The ROIs we drew on the two clusters of activation found in the left and right STG overlapped with the lesions of at least 4 of the 7 patients, indicating that the lack of STG activation found for patients is related to brain damage and is crucial for explaining the ES deficit.
Affiliation(s)
- Barbara Tomasino
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Cinzia Canderan
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Dario Marin
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Marta Maieron
- Fisica Medica, A.O.S. Maria della Misericordia, Udine, Italy
- Michele Gremese
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Serena D'Agostini
- Unità Operativa di Neuroradiologia, A.O.S. Maria della Misericordia, Udine, Italy
- Franco Fabbro
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Miran Skrap
- Unità Operativa di Neurochirurgia, A.O.S. Maria della Misericordia, Udine, Italy
7
Kumar S, Bonnici HM, Teki S, Agus TR, Pressnitzer D, Maguire EA, Griffiths TD. Representations of specific acoustic patterns in the auditory cortex and hippocampus. Proc Biol Sci 2014; 281:20141000. [PMID: 25100695 PMCID: PMC4132675 DOI: 10.1098/rspb.2014.1000]
Abstract
Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learned. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.
Affiliation(s)
- Sukhbinder Kumar
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne NE2 4HH, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Heidi M Bonnici
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Sundeep Teki
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Trevor R Agus
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, and Ecole Normale Superieure, Paris, France
- Daniel Pressnitzer
- Laboratoire des Systèmes Perceptifs, CNRS UMR 8248, and Ecole Normale Superieure, Paris, France
- Eleanor A Maguire
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
- Timothy D Griffiths
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne NE2 4HH, UK; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London WC1N 3BG, UK
8
Da Costa S, Bourquin NMP, Knebel JF, Saenz M, van der Zwaag W, Clarke S. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI. PLoS One 2015; 10:e0124072. [PMID: 25938430 PMCID: PMC4418571 DOI: 10.1371/journal.pone.0124072]
Abstract
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as in two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all, early-stage auditory areas encode the meaning of environmental sounds.
Affiliation(s)
- Sandra Da Costa
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Nathalie M.-P. Bourquin
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Jean-François Knebel
- National Center of Competence in Research, SYNAPSY—The Synaptic Bases of Mental Diseases, Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Wietske van der Zwaag
- Centre d’Imagerie BioMédicale, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
9
Scharinger M, Herrmann B, Nierhaus T, Obleser J. Simultaneous EEG-fMRI brain signatures of auditory cue utilization. Front Neurosci 2014; 8:137. [PMID: 24926232 PMCID: PMC4044900 DOI: 10.3389/fnins.2014.00137]
Abstract
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, listeners must flexibly choose and switch among the available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded blood oxygenation level dependent (BOLD) responses with fMRI and electroencephalograms (EEGs). In the first half of the experiment, the categories could best be discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared with nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in the inferior parietal cortex and the right posterior superior temporal gyrus (including the planum temporale). In both areas, spectral degradation led to a weaker coupling of the BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here, spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
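As background on the EEG side of such an EEG-informed fMRI analysis, the snippet below shows one standard way to obtain a single-trial alpha-power regressor (Welch power averaged over 8-12 Hz). The sampling rate, epoch length, and single-channel layout are assumptions, not details taken from the study.

```python
# Single-trial alpha power via Welch's method; epochs are simulated.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 250                                 # Hz, assumed sampling rate
epochs = rng.normal(size=(100, 2 * fs))  # 100 trials x 2 s, one channel

f, psd = welch(epochs, fs=fs, nperseg=fs)          # PSD per trial
alpha = psd[:, (f >= 8) & (f <= 12)].mean(axis=1)  # mean 8-12 Hz power
# `alpha` now holds one value per trial, usable as a parametric
# regressor in the BOLD analysis.
print(alpha.shape)
```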
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Björn Herrmann
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Till Nierhaus
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
10
Altered regional and circuit resting-state activity associated with unilateral hearing loss. PLoS One 2014; 9:e96126. [PMID: 24788317 PMCID: PMC4006821 DOI: 10.1371/journal.pone.0096126]
Abstract
The deprivation of sensory input after hearing damage results in functional reorganization of the brain, including cross-modal plasticity in the sensory cortex and changes in cognitive processing. However, it remains unclear whether partial deprivation from unilateral hearing loss (UHL) similarly affects the neural circuitry of cognitive processes in addition to the functional organization of the sensory cortex. Here, we used resting-state functional magnetic resonance imaging to investigate intrinsic activity in 34 participants with UHL from acoustic neuroma in comparison with 22 matched normal controls. In sensory regions, we found decreased regional homogeneity (ReHo) in the bilateral calcarine cortices in UHL. However, there was an increase of ReHo in the right anterior insular cortex (rAI), the key node of the cognitive control network (CCN) and of multimodal sensory integration, as well as in the left parahippocampal cortex (lPHC), a key node in the default mode network (DMN). Moreover, seed-based resting-state functional connectivity analysis showed an enhanced relationship between the rAI and several key regions of the DMN. Meanwhile, the lPHC showed a more negative relationship with components of the CCN and a greater positive relationship with components of the DMN. Such reorganization of functional connectivity within the DMN and between the DMN and CCN was confirmed by a graph theory analysis. These results suggest that unilateral sensory input damage not only alters the activity of sensory areas but also reshapes the regional and circuit-level functional organization of the cognitive control network.
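As a toy illustration of the seed-based analysis mentioned above, the snippet below correlates a seed time course with every voxel and Fisher z-transforms the result into a connectivity map; the data matrix, seed index, and dimensions are fabricated for the example.

```python
# Toy seed-based functional-connectivity map; data are simulated.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 200))  # voxels x timepoints (assumed)
seed_ts = data[1234]                 # e.g., a right anterior insula voxel

# Pearson r of the seed with every voxel, via standardized time series.
z = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
r_map = z @ seed_z / data.shape[1]

# Fisher z-transform for group statistics (clipped to keep arctanh finite).
fc_map = np.arctanh(np.clip(r_map, -0.999999, 0.999999))
print(fc_map.shape)  # one connectivity value per voxel
```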
11
Dole M, Meunier F, Hoen M. Gray and white matter distribution in dyslexia: a VBM study of superior temporal gyrus asymmetry. PLoS One 2013; 8:e76823. [PMID: 24098565 PMCID: PMC3788100 DOI: 10.1371/journal.pone.0076823]
Abstract
In the present study, we investigated brain morphological signatures of dyslexia using a voxel-based asymmetry analysis. Dyslexia is a developmental disorder that affects the acquisition of reading and spelling abilities and is associated with a phonological deficit. Speech perception disabilities have been associated with this deficit, particularly when listening conditions are challenging, such as in noisy environments. These deficits have known neurophysiological correlates, such as a reduction in functional activation or a modification of functional asymmetry in the cortical regions involved in speech processing, such as the bilateral superior temporal areas. These functional deficits have been associated with macroscopic morphological abnormalities, potentially including a reduction in gray and white matter volumes combined with modifications of the leftward asymmetry along the perisylvian areas. The purpose of this study was to investigate gray/white matter distribution asymmetries in dyslexic adults using automated image processing derived from the voxel-based morphometry technique. Correlations with speech-in-noise perception abilities were also investigated. The results confirmed the presence of gray matter distribution abnormalities in the superior temporal gyrus (STG) and the superior temporal sulcus (STS) in individuals with dyslexia. Specifically, the gray matter of adults with dyslexia was symmetrically distributed over one particular region of the STS, the temporal voice area, whereas normal readers showed a clear rightward gray matter asymmetry in this area. We also identified a region in the left posterior STG in which the white matter distribution asymmetry was correlated with speech-in-noise comprehension abilities in dyslexic adults. These results provide further information concerning the morphological alterations observed in dyslexia, revealing the presence of both gray and white matter distribution anomalies and the potential involvement of these defects in speech-in-noise deficits.
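The core quantity in a voxel-based asymmetry analysis is an index comparing homologous left/right values; below is a minimal sketch with fabricated gray-matter probabilities standing in for spatially normalized, hemisphere-flipped maps.

```python
# Minimal asymmetry-index sketch: AI = (L - R) / (L + R) per homologous
# voxel pair; positive values indicate leftward asymmetry. Data are fake.
import numpy as np

rng = np.random.default_rng(0)
gm_left = rng.uniform(0.2, 0.8, size=10_000)   # left-hemisphere GM probability
gm_right = rng.uniform(0.2, 0.8, size=10_000)  # mirrored right-hemisphere map
asym = (gm_left - gm_right) / (gm_left + gm_right)
print(f"voxels with leftward asymmetry: {(asym > 0).sum()}")
```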
Affiliation(s)
- Marjorie Dole
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, université Pierre Mendès France, Grenoble, France
- Fanny Meunier
- L2C2, CNRS UMR 5304, Institut des Sciences Cognitives, Lyon, France
- Université de Lyon, Université Lyon 1, Lyon, France
- Michel Hoen
- INSERM U1028, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon, France
- CNRS UMR 5292, Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, Lyon, France
- Université de Lyon, Université Lyon 1, Lyon, France
12
Scharinger M, Henry MJ, Erb J, Meyer L, Obleser J. Thalamic and parietal brain morphology predicts auditory category learning. Neuropsychologia 2013; 53:75-83. [PMID: 24035788 DOI: 10.1016/j.neuropsychologia.2013.09.012]
Abstract
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to switch optimally to using temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed strategy switches mainly in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of an "optimal" strategy switch, while gray-matter probability in thalamic areas comprising the medial geniculate body co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in the parietal cortex, enabling the (re)direction of attention to salient stimulus properties.
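To illustrate how listening strategies can be read off a trialwise logistic regression, as in the behavioral analysis above, the sketch below fits cue weights for a simulated listener; the cue values, effect sizes, and trial counts are invented, and a relatively larger duration weight would indicate a switch toward the temporal cue.

```python
# Hedged sketch: logistic regression of binary category choices on two
# cues; the simulated listener leans on the duration cue.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
spectral = rng.normal(size=300)  # spectral cue value per trial (simulated)
duration = rng.normal(size=300)  # duration cue value per trial (simulated)
noise = rng.normal(size=300)
response = (1.2 * duration + 0.2 * spectral + noise > 0).astype(int)

X = np.column_stack([spectral, duration])
model = LogisticRegression().fit(X, response)
print("fitted cue weights (spectral, duration):", model.coef_[0])
```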
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Molly J Henry
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Julia Erb
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lars Meyer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208 DOI: 10.1016/j.heares.2013.08.001]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.