1
Gao D, Liang X, Ting Q, Nichols ES, Bai Z, Xu C, Cai M, Liu L. A meta-analysis of letter-sound integration: Assimilation and accommodation in the superior temporal gyrus. Hum Brain Mapp 2024; 45:e26713. [PMID: 39447213 PMCID: PMC11501095 DOI: 10.1002/hbm.26713]
Abstract
Letter-sound integration is a relatively new cultural skill, yet it is readily acquired even though the brain has had no time to evolve dedicated machinery for it. Leading accounts of how the brain accommodates literacy acquisition include the neural recycling hypothesis and the assimilation-accommodation hypothesis. The neural recycling hypothesis proposes that a new cultural skill develops by "invading" preexisting neural structures that support a similar cognitive function, whereas the assimilation-accommodation hypothesis holds that a new cognitive skill relies on direct invocation of preexisting systems (assimilation) and recruits additional brain areas according to task requirements (accommodation). Both theories agree that letter-sound integration may be achieved by reusing preexisting, functionally similar neural bases, but they differ in how this reuse is proposed to occur. We examined the evidence for each hypothesis by systematically comparing the similarities and differences between letter-sound integration and two other preexisting, functionally similar types of audiovisual (AV) processing, namely object-sound and speech-sound integration, using an activation likelihood estimation (ALE) meta-analysis. All three types of AV integration recruited the left posterior superior temporal gyrus (STG), speech-sound integration additionally activated the bilateral middle STG, and letter-sound integration directly invoked the AV areas involved in speech-sound integration. These findings suggest that letter-sound integration may reuse the STG regions supporting speech-sound and object-sound integration through an assimilation-accommodation mechanism.
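The comparison described in this abstract rests on conjunction and contrast logic over voxel-wise ALE maps. The sketch below is a conceptual illustration of that logic only, not the authors' pipeline: the maps, grid shape, threshold, and variable names are placeholder assumptions, and published analyses use permutation-based inference (e.g., in GingerALE or NiMARE).

```python
# Conceptual sketch of conjunction/contrast across ALE maps for the three
# audiovisual (AV) integration types (illustrative only; real analyses use
# permutation-based thresholding).
import numpy as np

rng = np.random.default_rng(0)
shape = (40, 48, 40)                      # assumed coarse voxel grid

# Stand-ins for voxel-wise ALE maps of the three AV processes.
letter_sound = rng.random(shape)
speech_sound = rng.random(shape)
object_sound = rng.random(shape)

threshold = 0.95                          # arbitrary illustrative cutoff

# Conjunction: voxels exceeding threshold in all three integration types
# (minimum-statistic conjunction).
conjunction = np.minimum.reduce([letter_sound, speech_sound, object_sound]) > threshold

# Contrast: voxels engaged by letter-sound integration over and above
# speech-sound integration.
letter_specific = (letter_sound > threshold) & (speech_sound <= threshold)

print("Conjunction voxels:", int(conjunction.sum()))
print("Letter-specific voxels:", int(letter_specific.sum()))
```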
Affiliation(s)
- Danqi Gao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xitong Liang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qi Ting
- Department of Brain Cognition and Intelligent Medicine, Beijing University of Posts and Telecommunications, Beijing, China
- Zilin Bai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaoying Xu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingnan Cai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
2
Yang T, Fan X, Hou B, Wang J, Chen X. Linguistic network in early deaf individuals: A neuroimaging meta-analysis. Neuroimage 2024; 299:120720. [PMID: 38971484 DOI: 10.1016/j.neuroimage.2024.120720]
Abstract
This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. Beyond previously described modality-specific differences, the left calcarine gyrus and the right caudate were additionally recruited in deaf compared with hearing individuals. The study also showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas the left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.
Affiliation(s)
- Tengyu Yang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Xinmiao Fan
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Bo Hou
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Jian Wang
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
- Xiaowei Chen
- Department of Otolaryngology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, PR China
3
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305 DOI: 10.1515/revneuro-2022-0065]
Abstract
Previous fMRI research has identified the superior temporal sulcus as a central integration area for audiovisual stimuli, but less is known about a general multisensory integration network spanning the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, and thalamus, the right insula, and the left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network with distinct functional roles: the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. Therefore, by including multiple sensory modalities, this meta-analysis may provide evidence for a common brain network that supports different functional roles in multisensory integration.
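As background for the ALE approach used here, the following is a minimal sketch of the core computation: each experiment's foci are blurred with a Gaussian kernel into a modeled activation map, and the per-experiment maps are combined into an ALE map as the voxel-wise union of activation probabilities. This is illustrative only; the grid, the fixed kernel width, and the foci below are assumptions, and real analyses (e.g., GingerALE, NiMARE) derive the kernel from sample size and assess significance by permutation testing.

```python
# Minimal sketch of the core ALE computation (illustrative only).
import numpy as np

VOXEL_MM = 4.0                               # coarse grid for illustration
SHAPE = (40, 48, 40)                         # grid dimensions (x, y, z)
ORIGIN = np.array([-78.0, -110.0, -70.0])    # mm coordinate of voxel (0, 0, 0)

def modeled_activation(foci_mm, fwhm_mm=10.0):
    """Per-experiment map: maximum Gaussian kernel value over that experiment's foci."""
    sigma = fwhm_mm / 2.3548
    grid = np.stack(np.meshgrid(*[np.arange(n) for n in SHAPE], indexing="ij"), axis=-1)
    coords = grid * VOXEL_MM + ORIGIN        # mm coordinates of every voxel
    ma = np.zeros(SHAPE)
    for focus in np.atleast_2d(foci_mm):
        d2 = np.sum((coords - focus) ** 2, axis=-1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale_map(experiments):
    """Union of modeled activation maps: ALE = 1 - prod(1 - MA_i)."""
    ale = np.zeros(SHAPE)
    for foci in experiments:
        ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(foci))
    return ale

# Hypothetical foci (MNI mm) from three experiments reporting superior temporal activity.
experiments = [
    [[-58, -38, 10], [52, -30, 8]],
    [[-60, -40, 12]],
    [[-56, -34, 6], [58, -28, 10]],
]
print("Peak ALE value:", float(ale_map(experiments).max()))
```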
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, Institute for Biology, RWTH Aachen University, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
4
Šabić E, Henning D, Myüz H, Morrow A, Hout MC, MacDonald JA. Examining the Role of Eye Movements During Conversational Listening in Noise. Front Psychol 2020; 11:200. [PMID: 32116975 PMCID: PMC7033431 DOI: 10.3389/fpsyg.2020.00200]
Abstract
Speech comprehension is often thought of as an entirely auditory process, but both normal-hearing and hearing-impaired individuals sometimes use visual attention to disambiguate speech, particularly when it is difficult to hear. Many studies have investigated how visual attention (or the lack thereof) affects the perception of simple speech sounds such as isolated consonants, but there is a gap in the literature concerning visual attention during natural speech comprehension. This issue needs to be addressed, as individuals process sounds and words in everyday speech differently than when those elements are presented in isolation with no competing sound sources or noise. Moreover, further research is needed to explore patterns of eye movements during speech comprehension, especially in the presence of noise, as such an investigation would allow us to better understand how people strategically use visual information while processing speech. To this end, we conducted an experiment to track eye-gaze behavior during a series of listening tasks as a function of the number of speakers, background noise intensity, and the presence or absence of simulated hearing impairment. Our specific aim was to discover how individuals adapt their oculomotor behavior to compensate for the difficulty of the listening scenario, such as when listening in noisy environments or experiencing simulated hearing loss. Speech comprehension difficulty was manipulated by simulating hearing loss and varying background noise intensity. Results showed that eye movements were affected by the number of speakers, simulated hearing impairment, and the presence of noise. Further, differing levels of signal-to-noise ratio (SNR) led to changes in eye-gaze behavior. Most notably, the addition of visual information (i.e., videos vs. auditory information only) led to enhanced speech comprehension, highlighting the strategic use of visual information during this process.
Affiliation(s)
- Edin Šabić
- Hearing Enhancement and Augmented Reality Lab, Department of Psychology, New Mexico State University, Las Cruces, NM, United States
5
Bayard C, Machart L, Strauß A, Gerber S, Aubanel V, Schwartz JL. Cued Speech Enhances Speech-in-Noise Perception. J Deaf Stud Deaf Educ 2019; 24:223-233. [PMID: 30809665 DOI: 10.1093/deafed/enz003]
Abstract
Speech perception in noise remains challenging for Deaf/Hard of Hearing (D/HH) people, even when they are fitted with hearing aids or cochlear implants. The perception of sentences in noise by 20 implanted or aided D/HH participants who had mastered Cued Speech (CS), a system of hand gestures complementing lip movements, was compared with that of 15 typically hearing (TH) controls in three conditions: audio only, audiovisual, and audiovisual + CS. Similar audiovisual scores were obtained at signal-to-noise ratios (SNRs) 11 dB higher for D/HH participants than for TH ones. Adding CS information enabled D/HH participants to reach a mean score of 83% in the audiovisual + CS condition at a mean SNR of 0 dB, similar to the usual audio score for TH participants at this SNR. This confirms that the combination of lipreading and the Cued Speech system remains extremely important for persons with hearing loss, particularly in adverse hearing conditions.
Affiliation(s)
- Antje Strauß
- Zukunftskolleg, FB Sprachwissenschaft, University of Konstanz