1
Saalasti S, Alho J, Lahnakoski JM, Bacha-Trams M, Glerean E, Jääskeläinen IP, Hasson U, Sams M. Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading. Brain Behav 2023; 13:e2869. PMID: 36579557. PMCID: PMC9927859. DOI: 10.1002/brb3.2869.
Abstract
INTRODUCTION Few of us are skilled lipreaders, while most struggle with the task. The neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. METHODS We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6-100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of the Blood Oxygenation Level Dependent (BOLD) signal time courses. RESULTS Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal, and frontal cortices, precuneus, and cerebellum. Additionally, listening to and reading connected naturalistic speech activated higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading did, probably paralleling the limited understanding obtained via lipreading. Importantly, a higher lipreading test score and a higher subjective estimate of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex. CONCLUSIONS Our new data illustrate that findings from prior studies using well-controlled repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results may suggest an efficient use of brain areas dealing with phonological processing in skilled lipreaders.
Affiliation(s)
- Satu Saalasti
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Juha M Lahnakoski
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Center Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Mareike Bacha-Trams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Enrico Glerean
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto Studios - MAGICS, Aalto University, Espoo, Finland
2
Liang M, Liu J, Cai Y, Zhao F, Chen S, Chen L, Chen Y, Zheng Y. Event-Related Potential Evidence of Enhanced Visual Processing in Auditory-Associated Cortex in Adults with Hearing Loss. Audiol Neurootol 2020; 25:237-248. PMID: 32320979. DOI: 10.1159/000505608.
Abstract
OBJECTIVE The present study investigated the characteristics of visual processing in the auditory-associated cortex in adults with hearing loss using event-related potentials. METHODS Ten subjects with bilateral postlingual hearing loss were recruited, along with ten age- and sex-matched normal-hearing controls. Visual evoked potentials to "sound" and "non-sound" photographs were recorded. The P170 response in the occipital area as well as the N1 and N2 responses at FC3 and FC4 were analyzed. RESULTS Adults with hearing loss had higher P170 amplitudes, significantly higher N2 amplitudes, and shorter N2 latencies in response to "sound" and "non-sound" photo stimuli at both FC3 and FC4, with the exception of the N2 amplitude in response to "sound" photo stimuli at FC3. Further topographic mapping analysis revealed that patients showed a large difference in response to "sound" versus "non-sound" photos in the right frontotemporal area, starting from approximately 200 to 400 ms. Source localization placed this difference in the middle frontal gyrus region (BA10) at around 266 ms. CONCLUSIONS The significantly stronger responses to visual stimuli indicate enhanced visual processing in the auditory-associated cortex in adults with hearing loss, which may be attributed to cortical visual reorganization involving the right frontotemporal cortex.
Affiliation(s)
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Fei Zhao
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, United Kingdom; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Suijun Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Lin Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital and Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
3
Bernstein LE. Response Errors in Females' and Males' Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy. Lang Learn 2018; 68:127-158. PMID: 31485084. PMCID: PMC6724546. DOI: 10.1111/lang.12281.
Abstract
Lipreaders recognize words from phonetically impoverished stimuli, an ability that is generally poor in normal-hearing adults. Individual sentence lipreading trials from 341 young adults were modeled to predict words and phonemes correct in terms of measures of phoneme response dissimilarity (PRD), the number of inserted incorrect response phonemes, lipreader gender, and a measure of speech perception in noise. Interactions with lipreaders' gender necessitated structurally different models of males' and females' lipreading. Overall, female lipreaders are more accurate, their ability to recognize words with impoverished or degraded input is consistent across visual and auditory modalities, and they amplify their correct responding through top-down insertion of text. Males' responses suggest that individuals with poorer auditory speech perception in noise amplify their responses by shifting toward including text that is more perceptually discrepant from the stimulus. Gender differences merit attention in future studies that use visual speech stimuli.
Affiliation(s)
- Lynne E Bernstein
- Department of Speech, Language, and Hearing Science, George Washington University, 2121 I St NW, Washington, DC 20052
4
Vanneste S, Joos K, De Ridder D. Prefrontal cortex based sex differences in tinnitus perception: same tinnitus intensity, same tinnitus distress, different mood. PLoS One 2012; 7:e31182. PMID: 22348053. PMCID: PMC3277500. DOI: 10.1371/journal.pone.0031182.
Abstract
Background Tinnitus refers to an auditory phantom sensation. It is estimated that for 2% of the population this auditory phantom percept severely affects the quality of life due to tinnitus-related distress. Although overall distress levels in tinnitus do not differ between the sexes, females are more influenced by distress than males. Typically, pain, sleep disturbances, and depression are perceived as significantly more severe by female tinnitus patients. Studies on gender differences in emotional regulation indicate that females with high depressive symptoms show greater attention to emotion and use fewer anti-rumination emotional repair strategies than males. Methodology The objective of this study was to verify whether the activity and connectivity of the resting brain differ between male and female tinnitus patients, using resting-state EEG. Conclusions Females had a higher mean score on the BDI-II than male tinnitus patients. Female tinnitus patients differed from male tinnitus patients in the orbitofrontal cortex (OFC), extending to the frontopolar cortex, in the beta1 and beta2 bands. The OFC is important for the emotional processing of sounds. In females, increased functional alpha connectivity was found between the OFC, insula, subgenual anterior cingulate (sgACC), parahippocampal (PHC) areas, and the auditory cortex. Our data suggest increased functional connectivity that binds tinnitus-related auditory cortex activity to auditory emotion-related areas via the PHC-sgACC connections, resulting in a more depressive state even though tinnitus intensity and tinnitus-related distress do not differ from men. Comparing male tinnitus patients to a control group of males, significant differences were found for beta3 in the posterior cingulate cortex (PCC). The PCC might be related to cognitive and memory-related aspects of the tinnitus percept. Our results suggest that sex influences in tinnitus research cannot be ignored and should be taken into account in functional imaging studies of tinnitus.
Affiliation(s)
- Sven Vanneste
- Brain, TRI & Department of Neurosurgery, University Hospital Antwerp, Belgium.
5
Hertrich I, Dietrich S, Ackermann H. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study. J Cogn Neurosci 2011; 23:221-237. PMID: 20044895. DOI: 10.1162/jocn.2010.21421.
Abstract
During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream, prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables, disambiguated to /pa/ or /ta/ by the visual channel (a speaking face), served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of the primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point to a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, University of Tübingen, Tübingen, Germany.
6
Neural correlates of human somatosensory integration in tinnitus. Hear Res 2010; 267:78-88. DOI: 10.1016/j.heares.2010.04.006.
7
Lanting C, de Kleine E, van Dijk P. Neural activity underlying tinnitus generation: Results from PET and fMRI. Hear Res 2009; 255:1-13. PMID: 19545617. DOI: 10.1016/j.heares.2009.06.009.
8
Ruytjens L, Georgiadis JR, Holstege G, Wit HP, Albers FWJ, Willemsen ATM. Functional sex differences in human primary auditory cortex. Eur J Nucl Med Mol Imaging 2007; 34:2073-2081. PMID: 17703299. PMCID: PMC2100432. DOI: 10.1007/s00259-007-0517-z.
Abstract
Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while they listened to sounds (music or white noise) and during a baseline (no auditory stimulation). Results and discussion We found a sex difference in the activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women, but this difference between the two stimuli was significantly larger in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and found that noise produced significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked, and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. We therefore hypothesize that differences in attention result in a differential deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Conclusion Our results suggest that sex is an important factor in auditory brain studies.
Affiliation(s)
- Liesbet Ruytjens
- Department of Otorhinolaryngology, University Medical Center Groningen, Groningen, The Netherlands.