1. Saalasti S, Alho J, Lahnakoski JM, Bacha-Trams M, Glerean E, Jääskeläinen IP, Hasson U, Sams M. Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading. Brain Behav 2023; 13:e2869. [PMID: 36579557; PMCID: PMC9927859; DOI: 10.1002/brb3.2869]
Abstract
INTRODUCTION: Few of us are skilled lipreaders; most struggle with the task. The neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. METHODS: We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6-100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of Blood Oxygenation Level Dependent (BOLD) signal time courses. RESULTS: Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal, and frontal cortices, precuneus, and cerebellum. Additionally, listening to and reading connected naturalistic speech engaged higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading did, probably paralleling the limited understanding obtained via lipreading. Importantly, a higher lipreading test score and subjective estimate of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex. CONCLUSIONS: Our new data illustrate that findings from prior studies using well-controlled repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results may suggest an efficient use of brain areas dealing with phonological processing in skilled lipreaders.
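The core of the ISC analysis described above is simple: correlate each voxel's BOLD time course across every pair of subjects and average the pairwise coefficients. A minimal sketch of that computation (not the authors' pipeline; array shapes and names are illustrative assumptions):

```python
import numpy as np

def intersubject_correlation(bold):
    """Voxel-wise inter-subject correlation (ISC).

    bold : array of shape (n_subjects, n_voxels, n_timepoints)
    Returns the mean pairwise Pearson r per voxel.
    """
    n_subj = bold.shape[0]
    bold = bold - bold.mean(axis=-1, keepdims=True)
    bold = bold / np.linalg.norm(bold, axis=-1, keepdims=True)
    isc = []
    for i in range(n_subj):
        for j in range(i + 1, n_subj):
            # Pearson r of z-normalized series is their dot product
            isc.append(np.sum(bold[i] * bold[j], axis=-1))
    return np.mean(isc, axis=0)

# Toy example: 10 subjects, 500 voxels, 240 TRs of simulated BOLD
rng = np.random.default_rng(0)
shared = rng.standard_normal((1, 500, 240))   # stimulus-driven signal
noise = rng.standard_normal((10, 500, 240))   # subject-specific noise
isc_map = intersubject_correlation(shared + noise)
print(isc_map.shape, isc_map.mean())          # (500,), roughly 0.5
```

In practice ISC maps are thresholded with permutation-style null distributions rather than parametric tests, since time points are autocorrelated.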
Affiliation(s)
- Satu Saalasti
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Juha M Lahnakoski
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Center Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Mareike Bacha-Trams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Enrico Glerean
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto Studios - MAGICS, Aalto University, Espoo, Finland
2. Erdener D, Evren Erdener Ş. Speechreading as a secondary diagnostic tool in bipolar disorder. Med Hypotheses 2022. [DOI: 10.1016/j.mehy.2021.110744]
3. Vannuscorps G, Andres M, Carneiro SP, Rombaux E, Caramazza A. Typically Efficient Lipreading without Motor Simulation. J Cogn Neurosci 2021; 33:611-621. [PMID: 33416443; DOI: 10.1162/jocn_a_01666]
Abstract
All it takes is a face-to-face conversation in a noisy environment to realize that viewing a speaker's lip movements contributes to speech comprehension. What are the processes underlying the perception and interpretation of visual speech? Brain areas that control speech production are also recruited during lipreading. This finding raises the possibility that lipreading is supported, at least to some extent, by covert, unconscious imitation of the observed speech movements in the observer's own speech motor system, that is, by motor simulation. However, whether, and if so to what extent, motor simulation contributes to visual speech interpretation remains unclear. In two experiments, we found that several participants with congenital facial paralysis were as good at lipreading as the control population and performed these tasks in a way qualitatively similar to controls, despite severely reduced or even completely absent lip motor representations. Although it remains an open question whether this conclusion generalizes to other experimental conditions and to typically developed participants, these findings considerably narrow the space of hypotheses for a role of motor simulation in lipreading. Beyond its theoretical significance for speech perception, this finding also calls for a re-examination of the more general hypothesis, developed in the frameworks of the motor simulation and mirror neuron accounts, that motor simulation underlies action perception and interpretation.
4. Harris LT, van Etten N, Gimenez-Fernandez T. Exploring how harming and helping behaviors drive prediction and explanation during anthropomorphism. Soc Neurosci 2020; 16:39-56. [PMID: 32698660; DOI: 10.1080/17470919.2020.1799859]
Abstract
Cacioppo and colleagues advanced the study of anthropomorphism by positing three motives that moderate the occurrence of this phenomenon: belonging, effectance, and explanation. Here, we extend this literature by exploring the extent to which the valence of a target's behavior influences its anthropomorphism when perceivers attempt to explain and predict that target's behavior, and the involvement of brain regions associated with explanation and prediction in such anthropomorphism. Participants viewed videos of agents of varying visual complexity (geometric shapes, computer-generated (CG) faces, and greebles) in nonrandom motion performing harming and helping behaviors. Across two studies, participants reported a narrative that explained the observed behavior (both studies) while we recorded brain activity (study one), and participants predicted the future behavior of the protagonist shapes (study two). Brain regions implicated in prediction error (striatum), but not language generation (inferior frontal gyrus; IFG), engaged more strongly for harming than for helping behaviors during the anthropomorphism of such stimuli. Behaviorally, we found greater anthropomorphism in explanations of harming than of helping behaviors, but the opposite pattern when participants predicted the agents' behavior. Together, these studies build on the anthropomorphism literature by exploring how the valence of behavior drives explanation and prediction.
Affiliation(s)
- Lasana T Harris
- Department of Experimental Psychology, University College London, London, UK
- Noor van Etten
- Department of Social and Organizational Psychology, Leiden University, Leiden, Netherlands
5. Borowiak K, Maguinness C, von Kriegstein K. Dorsal-movement and ventral-form regions are functionally connected during visual-speech recognition. Hum Brain Mapp 2020; 41:952-972. [PMID: 31749219; PMCID: PMC7267922; DOI: 10.1002/hbm.24852]
Abstract
Faces convey social information such as emotion and speech. Facial emotion processing is supported by interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity) might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise-matched control and ASD participants. In both groups, dorsal-movement regions in visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
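The kind of condition-dependent functional connectivity reported here can be illustrated, in its simplest form, by correlating a dorsal seed with a ventral target separately within each condition's time points and comparing the Fisher-z-transformed coefficients. A toy sketch under assumed variable names (the study's actual analysis involved eye tracking, nuisance regression, and group statistics):

```python
import numpy as np

def fisher_z(r):
    # Fisher z-transform puts correlations on a comparable scale
    return np.arctanh(r)

def condition_fc(seed, target, mask):
    """Seed-target correlation computed within one condition's timepoints."""
    r = np.corrcoef(seed[mask], target[mask])[0, 1]
    return fisher_z(r)

rng = np.random.default_rng(1)
n_tr = 300
speech_trs = np.arange(n_tr) % 2 == 0          # toy alternating condition labels
identity_trs = ~speech_trs

v5_mt = rng.standard_normal(n_tr)              # dorsal-movement seed (e.g., V5/MT)
coupling = np.where(speech_trs, 0.8, 0.0)      # coupled only during visual speech
ofa = coupling * v5_mt + rng.standard_normal(n_tr)  # ventral-form target (e.g., OFA)

delta = condition_fc(v5_mt, ofa, speech_trs) - condition_fc(v5_mt, ofa, identity_trs)
print(f"speech-minus-identity connectivity (Fisher z): {delta:.3f}")  # > 0 here
```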
Affiliation(s)
- Kamila Borowiak
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University of Berlin, Berlin, Germany
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
6. Jacobs CL, Loucks TM, Watson DG, Dell GS. Masking auditory feedback does not eliminate repetition reduction. Language, Cognition and Neuroscience 2019; 35:485-497. [PMID: 35992578; PMCID: PMC9390968; DOI: 10.1080/23273798.2019.1693051]
Abstract
Repetition reduces word duration. Explanations of this process have appealed to audience design, internal production mechanisms, and combinations thereof (e.g., Kahn & Arnold, 2015). Jacobs, Yiu, Watson, and Dell (2015) proposed the auditory feedback hypothesis, which states that speakers must hear a word, produced either by themselves or by another speaker, for duration reduction to occur on a subsequent production. We conducted a strong test of the auditory feedback hypothesis in two experiments, in which we used masked auditory feedback and whispering to prevent speakers from hearing themselves fully. Despite these limits on normal auditory feedback, both experiments showed repetition reduction to equal extents in the masked and unmasked conditions, suggesting that repetition reduction may be supported by multiple sources, such as somatosensory feedback and feedforward signals, depending on their availability.
7. Borowiak K, Schelinski S, von Kriegstein K. Recognizing visual speech: Reduced responses in visual-movement regions, but not other speech regions in autism. Neuroimage Clin 2018; 20:1078-1091. [PMID: 30368195; PMCID: PMC6202694; DOI: 10.1016/j.nicl.2018.09.019]
Abstract
Speech information inherent in face movements is important for understanding what is said in face-to-face communication. Individuals with autism spectrum disorders (ASD) have difficulties in extracting speech information from face movements, a process called visual-speech recognition. Currently, it is unknown which dysfunctional brain regions or networks underlie the visual-speech recognition deficit in ASD. We conducted a functional magnetic resonance imaging (fMRI) study with concurrent eye tracking to investigate visual-speech recognition in adults diagnosed with high-functioning autism and pairwise-matched, typically developed controls. Compared to the control group (n = 17), the ASD group (n = 17) showed a decreased Blood Oxygenation Level Dependent (BOLD) response during visual-speech recognition in the right visual area 5 (V5/MT) and the left temporal visual speech area (TVSA), brain regions implicated in visual-movement perception. The right V5/MT showed a positive correlation with visual-speech task performance in the ASD group, but not in the control group. Psychophysiological interaction (PPI) analysis revealed that functional connectivity between the left TVSA and the bilateral V5/MT, and between the right V5/MT and the left inferior frontal gyrus (IFG), was lower in the ASD than in the control group. In contrast, responses in other speech-motor regions and their connectivity were at the neurotypical level. Reduced responses and network connectivity of the visual-movement regions, in conjunction with intact speech-related mechanisms, indicate that perceptual mechanisms might be at the core of the visual-speech recognition deficit in ASD. Communication deficits in ASD might at least partly stem from atypical sensory processing rather than from higher-order cognitive processing of socially relevant information.
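A psychophysiological interaction (PPI) tests whether seed-target coupling changes with psychological context by adding a seed-by-condition interaction regressor to a GLM on the target region. A deliberately simplified sketch with ordinary least squares (variable names and the two-condition design are assumptions; the study itself used the standard SPM-style PPI with deconvolution):

```python
import numpy as np

def ppi_beta(seed, target, psych):
    """Estimate the PPI effect of seed x psych on the target region.

    seed, target : ROI time courses, shape (n_timepoints,)
    psych        : condition regressor coded +1 / -1 per timepoint
    """
    interaction = seed * psych                 # the PPI term
    X = np.column_stack([
        np.ones_like(seed),                    # intercept
        seed,                                  # physiological main effect
        psych,                                 # psychological main effect
        interaction,
    ])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[3]                             # coefficient of the PPI term

rng = np.random.default_rng(2)
n = 400
psych = np.where(np.arange(n) % 40 < 20, 1.0, -1.0)   # toy block design
seed = rng.standard_normal(n)                          # e.g., left TVSA
# Target couples to the seed only in the +1 condition:
target = 0.8 * seed * (psych > 0) + rng.standard_normal(n)
print(f"PPI beta: {ppi_beta(seed, target, psych):.3f}")  # clearly above zero
```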
Affiliation(s)
- Kamila Borowiak
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University of Berlin, Luisenstraße 56, 10117 Berlin, Germany; Technische Universität Dresden, Bamberger Straße 7, 01187 Dresden, Germany
- Stefanie Schelinski
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Technische Universität Dresden, Bamberger Straße 7, 01187 Dresden, Germany
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Technische Universität Dresden, Bamberger Straße 7, 01187 Dresden, Germany
8. Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, Kayser C. Contributions of local speech encoding and functional connectivity to audio-visual speech perception. eLife 2017; 6. [PMID: 28590903; PMCID: PMC5462535; DOI: 10.7554/eLife.24763]
Abstract
Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.

When listening to someone in a noisy environment, such as a cocktail party, we can understand the speaker more easily if we can also see his or her face. Movements of the lips and tongue convey additional information that helps the listener's brain separate out syllables, words and sentences. However, exactly where in the brain this effect occurs and how it works remain unclear. To find out, Giordano et al. scanned the brains of healthy volunteers as they watched clips of people speaking. The clarity of the speech varied between clips. Furthermore, in some of the clips the lip movements of the speaker corresponded to the speech in question, whereas in others the lip movements were nonsense babble. As expected, the volunteers performed better on a word recognition task when the speech was clear and when the lip movements agreed with the spoken dialogue. Watching the video clips stimulated rhythmic activity in multiple regions of the volunteers' brains, including areas that process sound and areas that plan movements. Speech is itself rhythmic, and the volunteers' brain activity synchronized with the rhythms of the speech they were listening to. Seeing the speaker's face increased this degree of synchrony. However, it also made it easier for sound-processing regions within the listeners' brains to transfer information to one another. Notably, only the latter effect predicted improved performance on the word recognition task. This suggests that seeing a person's face makes it easier to understand his or her speech by boosting communication between brain regions, rather than through effects on individual areas. Further work is required to determine where and how the brain encodes lip movements and speech sounds. The next challenge will be to identify where these two sets of information interact, and how the brain merges them together to generate the impression of specific words.
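Speech entrainment of the sort quantified here is often approximated by spectral coherence between the speech amplitude envelope and a brain signal in the syllabic-rate (roughly 2-8 Hz) band. A toy sketch with simulated signals (the study itself used information-theoretic measures on source-localized MEG; all names and parameters below are assumptions):

```python
import numpy as np
from scipy.signal import coherence

fs = 200.0                       # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)     # 60 s of signal
rng = np.random.default_rng(3)

# Toy speech envelope with a dominant ~4 Hz syllabic rhythm
envelope = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
# Toy "auditory cortex" signal entrained to the envelope plus noise
brain = 0.6 * envelope + rng.standard_normal(t.size)

f, coh = coherence(envelope, brain, fs=fs, nperseg=int(4 * fs))
band = (f >= 2) & (f <= 8)       # delta/theta band carrying the syllable rate
print(f"peak coherence in 2-8 Hz: {coh[band].max():.2f} "
      f"at {f[band][np.argmax(coh[band])]:.2f} Hz")
```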
Affiliation(s)
- Bruno L Giordano
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université - Centre National de la Recherche Scientifique, Marseille, France; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
9. Yoshimura N, Nishimoto A, Belkacem AN, Shin D, Kambara H, Hanakawa T, Koike Y. Decoding of Covert Vowel Articulation Using Electroencephalography Cortical Currents. Front Neurosci 2016; 10:175. [PMID: 27199638; PMCID: PMC4853397; DOI: 10.3389/fnins.2016.00175]
Abstract
With the goal of providing assistive technology for the communication-impaired, we proposed electroencephalography (EEG) cortical currents as a new approach for EEG-based brain-computer interface spellers. EEG cortical currents were estimated with a variational Bayesian method that uses functional magnetic resonance imaging (fMRI) data as a hierarchical prior. EEG and fMRI data were recorded from ten healthy participants during covert articulation of the Japanese vowels /a/ and /i/, as well as during a no-imagery control task. Applying a sparse logistic regression (SLR) method to classify the three tasks, mean classification accuracy using EEG cortical currents was significantly higher than that using EEG sensor signals and was also comparable to accuracies in previous studies using electrocorticography. SLR weight analysis revealed, for each participant, vertices of EEG cortical currents that contributed strongly to classification, and these vertices showed discriminative time series signals across the three tasks. Furthermore, functional connectivity analysis focusing on the highly contributive vertices revealed positive and negative correlations among areas related to speech processing. As the same findings were not observed using EEG sensor signals, our results demonstrate the potential utility of EEG cortical currents not only for engineering purposes such as brain-computer interfaces but also for neuroscientific purposes such as the identification of neural signaling related to language processing.
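Sparse logistic regression in this context is logistic regression with an L1 penalty, which drives most feature weights to zero and thereby doubles as vertex selection. A minimal three-class sketch with scikit-learn on simulated features (the authors used a variational Bayesian SLR implementation; the data shapes here are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_features = 120, 300          # e.g., trials x cortical-current vertices
X = rng.standard_normal((n_trials, n_features))
y = np.repeat([0, 1, 2], n_trials // 3)  # /a/, /i/, no-imagery control
X[y == 1, :5] += 1.0                     # a few informative "vertices" per class
X[y == 2, 5:10] += 1.0

# The L1 penalty yields a sparse weight vector per class
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
n_used = np.sum(np.any(clf.coef_ != 0, axis=0))
print(f"non-zero features: {n_used} of {n_features}")
```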
Affiliation(s)
- Natsue Yoshimura
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan; Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan; Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Atsushi Nishimoto
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan; Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan; Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan
- Duk Shin
- Department of Electronics and Mechatronics, Tokyo Polytechnic University, Atsugi, Japan
- Hiroyuki Kambara
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan
- Takashi Hanakawa
- Department of Functional Brain Research, National Center of Neurology and Psychiatry, National Institute of Neuroscience, Tokyo, Japan; Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan; Precursory Research for Embryonic Science and Technology, Japan Science and Technology Agency, Tokyo, Japan
- Yasuharu Koike
- Precision and Intelligence Laboratory, Tokyo Institute of Technology, Yokohama, Japan; Department of Advanced Neuroimaging, Integrative Brain Imaging Center, National Center of Neurology and Psychiatry, Tokyo, Japan; Solution Science Research Laboratory, Tokyo Institute of Technology, Yokohama, Japan
10. Meier EL, Kapse KJ, Kiran S. The Relationship between Frontotemporal Effective Connectivity during Picture Naming, Behavior, and Preserved Cortical Tissue in Chronic Aphasia. Front Hum Neurosci 2016; 10:109. [PMID: 27014039; PMCID: PMC4792868; DOI: 10.3389/fnhum.2016.00109]
Abstract
While several studies of task-based effective connectivity in normal language processing exist, little is known about the functional reorganization of language networks in patients with stroke-induced chronic aphasia. During oral picture naming, activation in neurologically intact individuals is found in "classic" language regions involved with retrieval of lexical concepts [e.g., left middle temporal gyrus (LMTG)], word-form encoding [e.g., left posterior superior temporal gyrus (LpSTG)], and controlled retrieval of semantic and phonological information [e.g., left inferior frontal gyrus (LIFG)], as well as in domain-general regions within the multiple-demands network [e.g., left middle frontal gyrus (LMFG)]. After stroke, lesions to specific parts of the left-hemisphere language network force reorganization of this system. While individuals with aphasia have been found to recruit regions for language tasks similar to those of healthy controls, the relationship between the dynamic functioning of the language network and individual differences in underlying neural structure and behavioral performance is still unknown. Therefore, in the present study, we used dynamic causal modeling (DCM) to investigate differences between individuals with aphasia and healthy controls in terms of task-induced regional interactions between three regions (i.e., LIFG, LMFG, and LMTG) vital for picture naming. The DCM model space was organized according to exogenous input to these regions and partitioned into separate families. At the model level, random-effects family-wise Bayesian model selection revealed that models with driving input to LIFG best fit the control data, whereas models with driving input to LMFG best fit the patient data. At the parameter level, a significant between-group difference in the connection strength from LMTG to LIFG was seen. Within the patient group, several significant relationships between network connectivity parameters, spared cortical tissue, and behavior were observed. Overall, this study provides preliminary findings regarding how neural networks for language reorganize in individuals with aphasia and how brain connectivity relates to underlying structural integrity and task performance.
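The family-level comparison rests on model evidence: each subject's DCM fit yields a log evidence per model, and families of models are compared by pooling evidence over their members. The sketch below shows a deliberately simplified fixed-effects version of that comparison (the study used random-effects family-wise BMS, which additionally models between-subject heterogeneity; all numbers here are invented):

```python
import numpy as np

def log_family_evidence(log_ev, family_idx):
    """Fixed-effects family evidence from per-subject, per-model log evidences.

    log_ev     : array (n_subjects, n_models)
    family_idx : list of model column indices belonging to the family
    """
    group_log_ev = log_ev.sum(axis=0)                 # FFX: sum over subjects
    fam = group_log_ev[family_idx]
    # log-mean-exp gives the evidence of a uniform mixture over the family
    return np.logaddexp.reduce(fam) - np.log(len(fam))

rng = np.random.default_rng(5)
log_ev = rng.normal(-100, 2, size=(16, 6))            # 16 subjects, 6 models
log_ev[:, :3] += 1.5                                  # family A models fit better
fam_a, fam_b = [0, 1, 2], [3, 4, 5]                   # e.g., input-to-LIFG vs LMFG

log_bf = log_family_evidence(log_ev, fam_a) - log_family_evidence(log_ev, fam_b)
print(f"group log Bayes factor, family A vs B: {log_bf:.1f}")
```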
Affiliation(s)
- Erin L. Meier
- Department of Speech Language and Hearing Sciences, Aphasia Research Laboratory, Sargent College, Boston University, Boston, MA, USA
- Swathi Kiran
- Department of Speech Language and Hearing Sciences, Aphasia Research Laboratory, Sargent College, Boston University, Boston, MA, USA
11. McDowell A, Felton A, Vazquez D, Chiarello C. Neurostructural correlates of consistent and weak handedness. Laterality 2015; 21:348-370. [PMID: 26470000; DOI: 10.1080/1357650x.2015.1096939]
Abstract
Various cognitive differences have been reported between consistent and weak handers, but little is known about the neurobiological factors that may be associated with this distinction. The current study examined cortical structural lateralization and corpus callosum volume in a large, well-matched sample of young adults (N = 164) to explore potential neurostructural bases for this hand group difference. The groups did not differ in corpus callosum volume. However, at the global hemispheric level, weak handers had reduced or absent asymmetries for grey and white matter volume, cortical surface area, thickness, and local gyrification, relative to consistent handers. Group differences were also observed for some regional hemispheric asymmetries, the most prominent of which was reduced or absent gyrification asymmetry for weak handers in a large region surrounding the central sulcus and extending into parietal association cortex. The findings imply that variations in handedness strength are associated with differences in structural lateralization, not only in somatomotor regions, but also in areas associated with high level cognitive control of action.
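Hemispheric asymmetries like these are conventionally summarized with a laterality index, LI = (L - R) / (L + R), computed per measure and then compared between hand groups. A toy sketch under assumed inputs (not the study's surface-based pipeline; all values are simulated):

```python
import numpy as np
from scipy import stats

def laterality_index(left, right):
    """LI = (L - R) / (L + R); positive values mean leftward asymmetry."""
    return (left - right) / (left + right)

rng = np.random.default_rng(6)
# Toy grey-matter volumes (arbitrary units) for two handedness groups
consistent_L = rng.normal(105, 5, 80)
consistent_R = rng.normal(100, 5, 80)
weak_L = rng.normal(102, 5, 84)
weak_R = rng.normal(101, 5, 84)

li_consistent = laterality_index(consistent_L, consistent_R)
li_weak = laterality_index(weak_L, weak_R)
t, p = stats.ttest_ind(li_consistent, li_weak)
print(f"mean LI consistent={li_consistent.mean():.3f}, weak={li_weak.mean():.3f}, "
      f"t={t:.2f}, p={p:.3g}")
```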
Affiliation(s)
- Alessandra McDowell
- Department of Psychology, University of California, Riverside, CA, USA
- Adam Felton
- Department of Psychology, University of California, Riverside, CA, USA
- David Vazquez
- Department of Psychology, University of California, Riverside, CA, USA
- Christine Chiarello
- Department of Psychology, University of California, Riverside, CA, USA
12. Kim H, Hahm J, Lee H, Kang E, Kang H, Lee DS. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration. Brain Connect 2015; 5:245-258. [PMID: 25495216; DOI: 10.1089/brain.2013.0218]
Abstract
The human brain naturally integrates audiovisual information to improve speech perception. However, in noisy environments, understanding speech is difficult and may require much effort. Although a brain network is presumed to be engaged in speech perception, it is unclear how speech-related brain regions are connected during natural bimodal audiovisual speech perception or unimodal speech perception with irrelevant counterpart noise. To investigate the topological changes of speech-related brain networks at all possible thresholds, we used a persistent homology framework based on hierarchical clustering (single-linkage distance) to analyze the connected components of the functional network during speech perception using functional magnetic resonance imaging. Bimodal (audiovisual) speech cues, or unimodal speech cues with irrelevant counterpart noise (auditory speech with visual gum-chewing, or visual speech with auditory white noise), were delivered to 15 subjects. In terms of positive relationships, similar connected components were observed in the bimodal and unimodal speech conditions during filtration. However, during speech perception with congruent audiovisual stimuli, tighter coupling of a left anterior temporal gyrus-anterior insula component and of a right premotor-visual component was observed than in the auditory-only and visual-only speech cue conditions, respectively. Interestingly, visual speech perceived under white noise showed tight negative coupling among a left inferior frontal region, the right anterior cingulate, the left anterior insula, and bilateral visual regions, including right middle temporal gyrus and right fusiform components. In conclusion, the speech brain network is tightly positively or negatively connected, reflecting efficient or effortful processing during natural audiovisual integration or lipreading, respectively, in speech perception.
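The filtration idea is to avoid choosing a single arbitrary threshold: convert correlations to distances (e.g., 1 - r) and track how connected components merge as the distance threshold grows, which for connected components is exactly single-linkage hierarchical clustering. A compact sketch with scipy on a simulated connectivity matrix (not the authors' pipeline; the module structure is invented):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(7)
n_rois = 12
# Toy correlation matrix with two tightly coupled modules
r = np.full((n_rois, n_rois), 0.1)
r[:6, :6] = 0.7
r[6:, 6:] = 0.7
r += rng.normal(0, 0.02, (n_rois, n_rois))
r = (r + r.T) / 2                               # enforce symmetry
np.fill_diagonal(r, 1.0)

dist = 1.0 - r                                  # correlation -> distance
Z = linkage(squareform(dist, checks=False), method="single")

# Filtration: count connected components as the threshold increases
for thr in (0.2, 0.5, 0.95):
    labels = fcluster(Z, t=thr, criterion="distance")
    print(f"threshold {thr:.2f}: {labels.max()} connected component(s)")
```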
Affiliation(s)
- Heejung Kim
- Department of Nuclear Medicine, College of Medicine, Seoul National University, Seoul, Korea
13. Simonyan K, Fuertinger S. Speech networks at rest and in action: interactions between functional brain networks controlling speech production. J Neurophysiol 2015; 113:2967-2978. [PMID: 25673742; DOI: 10.1152/jn.00964.2014]
Abstract
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between brain regions and neural networks remains limited. We combined seed-based interregional correlation analysis with graph-theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both rest and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitating the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although the networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left than in the right hemisphere, which may underlie a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network.
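Graph-theoretical analysis here means turning interregional correlations into a network and computing topology measures such as degree and betweenness centrality. A minimal sketch with networkx on a thresholded toy correlation matrix (the ROI labels follow the regions named in the abstract, but the connectivity values and threshold are simulated assumptions):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
rois = ["LMC", "IFG", "STG", "SMA", "cingulate", "putamen", "thalamus", "IPL"]
n = len(rois)

# Toy symmetric correlation matrix between speech-network seeds
r = rng.uniform(0.0, 0.8, (n, n))
r = (r + r.T) / 2
np.fill_diagonal(r, 0.0)

# Keep only suprathreshold edges to form an undirected graph
G = nx.Graph()
G.add_nodes_from(rois)
thr = 0.4
for i in range(n):
    for j in range(i + 1, n):
        if r[i, j] > thr:
            G.add_edge(rois[i], rois[j], weight=r[i, j])

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
hub = max(degree, key=degree.get)
print(f"edges: {G.number_of_edges()}, highest-degree node: {hub}")
print("betweenness:", {k: round(v, 2) for k, v in betweenness.items()})
```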
Affiliation(s)
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York
- Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York
14. Bernstein LE, Liebenthal E. Neural pathways for visual speech perception. Front Neurosci 2014; 8:386. [PMID: 25520611; PMCID: PMC4248808; DOI: 10.3389/fnins.2014.00386]
Abstract
This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? A review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) the visual perception of speech relies on visual pathway representations of speech qua speech; (2) a proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS); and (3) given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in the TVSA.
Affiliation(s)
- Lynne E Bernstein
- Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, Brigham and Women's Hospital, Boston, MA, USA
15. Similarities and differences in brain activation and functional connectivity in first and second language reading: Evidence from Chinese learners of English. Neuropsychologia 2014; 63:275-284. [DOI: 10.1016/j.neuropsychologia.2014.09.001]