1. Yu RWL, Chan AHS. Effects of player-video game interaction on the mental effort of older adults with the use of electroencephalography and NASA-TLX. Arch Gerontol Geriatr 2024;124:105442. PMID: 38676979. DOI: 10.1016/j.archger.2024.105442.
Abstract
While player-video game interaction appears to affect older adults' gaming experience, little is known about the cognitive demands associated with anticipating a button press, specifically the relation between input and game elements (I/E relation) in game environments. This study investigated the effects of the lateral and rotational displacement amplitudes of game elements, triggered by a single button press, on the cognitive effort of older adults, using both subjective and objective measures. A total of 48 older adults completed three casual video game tasks involving lateral and rotational displacements at varying I/E relations (low, medium, and high). Results from the NASA Task Load Index and electroencephalography (EEG) measurements revealed significant differences between the I/E relations. Specifically, the subjective rating of cognitive demand among older players was significantly affected by a small rotation angle associated with a button press, leading to increased mental, physical, and temporal demands, along with decreased performance. Notably, analysis of the EEG data, particularly the theta-alpha ratio, revealed significant interaction effects of I/E relation, button press type, and game type on the cognitive demand required during gameplay. These findings offer practical implications and point towards future avenues for developing player-video game interactions that are more cognitively friendly for older players.
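The theta-alpha ratio used above as a workload index is, in general, a ratio of EEG band powers. A minimal single-channel sketch (not the authors' pipeline; the 4-8 Hz theta and 8-13 Hz alpha band edges are conventional assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Summed Welch PSD power of signal x within [lo, hi) Hz."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum()

def theta_alpha_ratio(x, fs, theta=(4, 8), alpha=(8, 13)):
    """Theta/alpha band-power ratio; higher values are commonly
    read as higher cognitive workload."""
    return band_power(x, fs, *theta) / band_power(x, fs, *alpha)

# Synthetic sanity check: a 6 Hz-dominated signal should give a high
# ratio, a 10 Hz-dominated signal a low one.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
theta_sig = np.sin(2 * np.pi * 6 * t) + 0.1 * rng.standard_normal(t.size)
alpha_sig = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
```

In practice the ratio would be computed per channel and per task condition, then compared across conditions.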
Affiliation(s)
- R W L Yu: Department of Systems Engineering, City University of Hong Kong, Hong Kong, China.
- A H S Chan: Department of Systems Engineering, City University of Hong Kong, Hong Kong, China; Shenzhen Research Institute, City University of Hong Kong, Shenzhen, China.
2. MacLean J, Stirn J, Bidelman GM. Auditory-motor entrainment and listening experience shape the perceptual learning of concurrent speech. bioRxiv [Preprint] 2024:2024.07.18.604167. PMID: 39071391. PMCID: PMC11275804. DOI: 10.1101/2024.07.18.604167.
Abstract
Background: Plasticity from auditory experience shapes the brain's encoding and perception of sound. Although prior research demonstrates that neural entrainment (i.e., brain-to-acoustic synchronization) aids speech perception, how long- and short-term plasticity influence entrainment to concurrent speech has not been investigated. Here, we explored neural entrainment mechanisms and the interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Method: Participants learned to identify double-vowel mixtures during ∼45 min training sessions with concurrent high-density EEG recordings. We examined the degree to which brain responses entrained to the speech-stimulus train (∼9 Hz) to investigate whether entrainment to speech prior to the behavioral decision predicted task performance. Source and directed functional connectivity analyses of the EEG probed whether behavior was driven by group differences in auditory-motor coupling. Results: Both musicians and nonmusicians showed rapid perceptual learning in accuracy with training. Interestingly, listeners' neural entrainment strength prior to target speech mixtures predicted behavioral identification performance; stronger neural synchronization was observed preceding incorrect compared to correct trial responses. We also found stark hemispheric biases in auditory-motor coupling during speech entrainment, with greater auditory-motor connectivity in the right than the left hemisphere for musicians (R>L) but not for nonmusicians (R=L). Conclusions: Our findings confirm stronger neuroacoustic synchronization and auditory-motor coupling during speech processing in musicians. Stronger neural entrainment to rapid stimulus trains preceding incorrect behavioral responses supports the notion that alpha-band (∼10 Hz) arousal/suppression in brain activity is an important modulator of trial-by-trial success in perceptual processing.
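Entrainment strength of the kind reported here is often quantified as inter-trial phase coherence (ITPC) at the stimulus rate: extract each trial's phase at the target frequency and measure phase consistency across trials. A hedged sketch, not the paper's exact analysis (the ~9 Hz target rate and epoch layout are assumptions):

```python
import numpy as np

def itpc_at_freq(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.

    trials: (n_trials, n_samples) array of single-trial EEG epochs.
    Returns a value in [0, 1]; 1 = perfectly phase-locked across trials.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    # Single-frequency Fourier coefficient per trial, then keep only its phase.
    coef = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coef / np.abs(coef)
    return np.abs(phases.mean())

# Synthetic check: phase-locked 9 Hz trials vs. random-phase trials.
rng = np.random.default_rng(1)
fs, n_trials, dur = 250, 40, 2.0
t = np.arange(0, dur, 1 / fs)
locked = np.array([np.sin(2 * np.pi * 9 * t) + 0.5 * rng.standard_normal(t.size)
                   for _ in range(n_trials)])
jittered = np.array([np.sin(2 * np.pi * 9 * t + rng.uniform(0, 2 * np.pi))
                     + 0.5 * rng.standard_normal(t.size)
                     for _ in range(n_trials)])
```

Trials with consistent stimulus-driven phase yield ITPC near 1; trials with random phase yield values near 1/sqrt(n_trials).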
3. Bowers A, Hudock D. Lower nonword syllable sequence repetition accuracy in adults who stutter is related to differences in audio-motor oscillations. Neuropsychologia 2024;199:108906. PMID: 38740180. DOI: 10.1016/j.neuropsychologia.2024.108906.
Abstract
OBJECTIVE: The goal of this study was to use independent component analysis (ICA) of high-density electroencephalography (EEG) to investigate whether differences in audio-motor neural oscillations are related to nonword syllable repetition accuracy in adults who stutter (AWS) compared to typically fluent speakers (TFS). METHODS: EEG was recorded from 128 channels in 23 TFS and 23 AWS matched for age, sex, and handedness during delayed 2- and 4-syllable bilabial nonword repetition conditions. Scalp topography, dipole source estimates, and power spectral density (PSD) were computed for each independent component (IC) and used to cluster similar ICs across participants. Event-related spectral perturbations (ERSPs) were computed for each IC cluster to examine changes over time in the repetition conditions and to examine how dynamic changes in ERSPs relate to syllable repetition accuracy. RESULTS: The AWS group showed significantly lower accuracy on a measure of percentage of correct trials (%CT) and on a normalized measure of syllable load performance across conditions. Analysis of ERSPs revealed significantly lower alpha/beta event-related desynchronization (ERD) in left and right μ ICs and in left and right posterior temporal lobe α ICs in AWS compared to TFS (cluster-corrected p < 0.05). Pearson correlations of %CT with frequency across time showed strong relationships with accuracy (FWE < 0.05) during maintenance in the TFS group and during execution in the AWS group. CONCLUSIONS: Findings implicate lower alpha/beta ERD (8-30 Hz) during syllable encoding over posterior temporal ICs and during execution in left temporal/sensorimotor components. Strong correlations with accuracy and interindividual differences in ∼6-8 Hz ERSPs during execution implicate differences in motor and auditory-sensory monitoring during syllable sequence execution in AWS.
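The ERSPs used above are, in essence, baseline-normalized time-frequency power averaged over trials. A minimal sketch of that computation (the study's actual ICA-based, cluster-level pipeline is considerably more involved; the window length and baseline duration here are arbitrary assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

def ersp_db(trials, fs, baseline_end, nperseg=64):
    """Event-related spectral perturbation (ERSP) in dB.

    trials: (n_trials, n_samples) epochs time-locked to an event;
    baseline_end: duration (s) of the pre-event baseline window.
    Returns (freqs, times, ersp) with ersp[f, t] = mean power change
    relative to the mean baseline power at that frequency.
    """
    powers = []
    for x in trials:
        f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg,
                                noverlap=nperseg // 2)
        powers.append(sxx)
    mean_power = np.mean(powers, axis=0)          # (n_freqs, n_times)
    base = mean_power[:, t < baseline_end].mean(axis=1, keepdims=True)
    return f, t, 10 * np.log10(mean_power / base)

# Synthetic check: a 10 Hz oscillation whose amplitude doubles at 1 s
# should show a ~6 dB power increase after the change.
fs = 250
t_ax = np.arange(0, 2, 1 / fs)
amp = np.where(t_ax < 1.0, 1.0, 2.0)
rng = np.random.default_rng(2)
trials = np.array([amp * np.sin(2 * np.pi * 10 * t_ax)
                   + 0.1 * rng.standard_normal(t_ax.size)
                   for _ in range(20)])
```

Negative dB values at a given frequency and time correspond to event-related desynchronization (ERD), positive values to synchronization.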
Affiliation(s)
- Andrew Bowers: University of Arkansas, 275 Epley Center, 606 North Razorback Rd., Fayetteville, AR 72701, United States.
- Daniel Hudock: Idaho State University, 921 S. 8th Ave, Mailstop 8116, Pocatello, ID 83209, United States.
4. Bayat M, Boostani R, Sabeti M, Yadegari F, Pirmoradi M, Rao KS, Nami M. Source Localization and Spectrum Analyzing of EEG in Stuttering State upon Dysfluent Utterances. Clin EEG Neurosci 2024;55:371-383. PMID: 36627837. DOI: 10.1177/15500594221150638.
Abstract
Purpose: This study of adults who stutter (AWS) investigated power spectral dynamics in the stuttering state using quantitative electroencephalography (qEEG). Method: A 64-channel electroencephalography (EEG) setup was used for data acquisition from 20 AWS. Since speech, especially stuttering, introduces significant noise into the EEG, two conditions were considered: speech preparation (SP) and imagined speech (IS). EEG signals were decomposed into six bands, and the corresponding sources were localized using the standardized low-resolution electromagnetic tomography (sLORETA) tool in both fluent and dysfluent states. Results: Significant differences were noted after analyzing the time-locked EEG signals in fluent and dysfluent utterances. Consistent with previous studies, poor alpha and beta suppression in the SP and IS conditions was localized to the left frontotemporal areas in the dysfluent state; this was partly true for the right frontal regions. In the theta range, dysfluency was concurrent with increased activation in the left and right motor areas. Increased delta power in the left and right motor areas, as well as increased beta2 power over left parietal regions, were notable EEG features of fluent speech. Conclusion: Based on the present findings and those of earlier studies, explaining the neural circuitry involved in stuttering probably requires examining the entire frequency spectrum involved in speech.
Affiliation(s)
- Masoumeh Bayat: Department of Neuroscience, School of Advanced Medical Sciences and Technologies, Shiraz University of Medical Sciences, Shiraz, Iran.
- Reza Boostani: Department of Computer Sciences and Engineering, School of Engineering, Shiraz University, Shiraz, Iran.
- Malihe Sabeti: Department of Computer Engineering, Islamic Azad University, North Tehran Branch, Tehran, Iran.
- Fariba Yadegari: Department of Speech and Language Pathology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran.
- Mohammadreza Pirmoradi: Department of Clinical Psychology, School of Behavioral Sciences and Mental Health, Iran University of Medical Sciences, Tehran, Iran.
- K S Rao: Neuroscience Center, INDICASAT-AIP, Panama City, Republic of Panama.
- Mohammad Nami: Department of Neuroscience, School of Advanced Medical Sciences and Technologies, Shiraz University of Medical Sciences, Shiraz, Iran; Neuroscience Center, INDICASAT-AIP, Panama City, Republic of Panama; Dana Brain Health Institute, Iranian Neuroscience Society-Fars Chapter, Shiraz, Iran; Academy of Health, Senses Cultural Foundation, Sacramento, CA, USA.
5. Gu J, Deng K, Luo X, Ma W, Tang X. Investigating the different mechanisms in related neural activities: a focus on auditory perception and imagery. Cereb Cortex 2024;34:bhae139. PMID: 38629796. DOI: 10.1093/cercor/bhae139.
Abstract
Neuroimaging studies have shown that the neural representation of imagery is closely related to that of the corresponding perception modality. However, the undeniably different experiences of perception and imagery indicate clear differences in their neural mechanisms, which cannot be explained by the simple theory that imagery is a form of weak perception. Given the importance of functional integration across brain regions in neural activity, we performed a correlation analysis of neural activity in brain regions jointly activated by auditory imagery and perception, yielding brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in the imagery modality. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception were significantly distinguishable. Voxel-level FC analysis further verified the regions containing voxels with significant connectivity differences between the two modalities. This study characterizes both the correspondence and the differences between auditory imagery and perception in terms of brain information interaction, providing a new perspective for investigating the neural mechanisms of different modal information representations.
Affiliation(s)
- Jin Gu: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, No. 999, Xi'an Road, Pidu District, Chengdu, China.
- Kexin Deng: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China.
- Xiaoqi Luo: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China.
- Wanli Ma: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China.
- Xuegang Tang: School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China.
6. Chung LKH, Jack BN, Griffiths O, Pearson D, Luque D, Harris AWF, Spencer KM, Le Pelley ME, So SHW, Whitford TJ. Neurophysiological evidence of motor preparation in inner speech and the effect of content predictability. Cereb Cortex 2023;33:11556-11569. PMID: 37943760. PMCID: PMC10751289. DOI: 10.1093/cercor/bhad389.
Abstract
Self-generated overt actions are preceded by a slow negativity in the electroencephalogram, which has been associated with motor preparation. Recent studies have shown that this neural activity is modulated by the predictability of action outcomes. It is unclear whether inner speech is also preceded by a motor-related negativity and influenced by the same factor. In three experiments, we compared the contingent negative variation elicited in a cue paradigm in an active vs. passive condition. In Experiment 1, participants produced an inner phoneme, at which time an audible phoneme of unpredictable identity was concurrently presented. We found that while passive listening elicited a late contingent negative variation, inner speech production generated a more negative late contingent negative variation. In Experiment 2, the same pattern of results was found when participants were instead asked to overtly vocalize the phoneme. In Experiment 3, the identity of the audible phoneme was made predictable by establishing probabilistic expectations. We observed a smaller late contingent negative variation in the inner speech condition when the identity of the audible phoneme was predictable, but not in the passive condition. These findings suggest that inner speech is associated with motor preparatory activity that may also represent the predicted action-effects of covert actions.
Affiliation(s)
- Lawrence K-h Chung: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China.
- Bradley N Jack: Research School of Psychology, Australian National University, Building 39, Science Road, Canberra ACT 2601, Australia.
- Oren Griffiths: School of Psychological Sciences, University of Newcastle, Behavioural Sciences Building, University Drive, Callaghan NSW 2308, Australia.
- Daniel Pearson: School of Psychology, University of Sydney, Griffith Taylor Building, Manning Road, Camperdown NSW 2006, Australia.
- David Luque: Department of Basic Psychology and Speech Therapy, Faculty of Psychology, University of Malaga, Dr Ortiz Ramos Street, 29010 Malaga, Spain.
- Anthony W F Harris: Westmead Clinical School, University of Sydney, 176 Hawkesbury Road, Westmead NSW 2145, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia.
- Kevin M Spencer: Research Service, Veterans Affairs Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, 150 South Huntington Avenue, Boston, MA 02130, United States.
- Mike E Le Pelley: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia.
- Suzanne H-w So: Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China.
- Thomas J Whitford: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia.
7. Nalborczyk L, Longcamp M, Bonnard M, Serveau V, Spieser L, Alario FX. Distinct neural mechanisms support inner speaking and inner hearing. Cortex 2023;169:161-173. PMID: 37922641. DOI: 10.1016/j.cortex.2023.09.007.
Abstract
Humans have the ability to mentally examine speech. This covert form of speech production is often accompanied by sensory (e.g., auditory) percepts, but the cognitive and neural mechanisms that generate these percepts are still debated. According to a prominent proposal, inner speech has at least two distinct phenomenological components: inner speaking and inner hearing. We used transcranial magnetic stimulation (TMS) to test whether these two phenomenologically distinct processes are supported by distinct neural mechanisms. We hypothesised that inner speaking relies more strongly on an online motor-to-sensory simulation that constructs a multisensory experience, whereas inner hearing relies more strongly on a memory-retrieval process in which the multisensory experience is reconstructed from stored motor-to-sensory associations. Accordingly, we predicted that the speech motor system would be involved more strongly during inner speaking than inner hearing, as revealed by modulations of TMS-evoked responses at the muscle level following stimulation of the lip primary motor cortex. Overall, data collected from 31 participants corroborated this prediction, showing that inner speaking increases the excitability of the primary motor cortex more than inner hearing. Moreover, this effect was more pronounced during the inner production of a syllable that strongly recruits the lips (vs. a syllable that recruits the lips to a lesser extent). These results are compatible with models assuming that the primary motor cortex is involved during inner speech and help clarify the neural implementation of the fundamental ability of silently speaking in one's mind.
Affiliation(s)
- Ladislas Nalborczyk: Aix Marseille Univ, CNRS, LPC, Marseille, France; Aix Marseille Univ, CNRS, LNC, Marseille, France.
8. Moon J, Chau T. Online Ternary Classification of Covert Speech by Leveraging the Passive Perception of Speech. Int J Neural Syst 2023;33:2350048. PMID: 37522623. DOI: 10.1142/s012906572350048x.
Abstract
Brain-computer interfaces (BCIs) provide communicative alternatives to those without functional speech. Covert speech (CS)-based BCIs enable communication simply by thinking of words and thus have intuitive appeal. However, an elusive barrier to their clinical translation is the collection of voluminous examples of high-quality CS signals, as iteratively rehearsing words for long durations is mentally fatiguing. Research on CS and speech perception (SP) identifies common spatiotemporal patterns in their respective electroencephalographic (EEG) signals, pointing towards shared encoding mechanisms. The goal of this study was to investigate whether a model that leverages the signal similarities between SP and CS can differentiate speech-related EEG signals online. Ten participants completed a dyadic protocol in which, on each trial, they listened to a randomly selected word and then mentally rehearsed it. In the offline sessions, eight words were presented to participants; for the subsequent online sessions, the two words most separable in terms of their EEG signals were chosen to form a ternary classification problem (two words and rest). The model comprised a functional mapping derived from SP and CS signals of the same speech token, with features extracted via a Riemannian approach. An average ternary online accuracy of 75.3% (60% chance level) was achieved across participants, with individual accuracies as high as 93%. Moreover, we observed that the signal-to-noise ratio (SNR) of CS signals was enhanced by perception-covert modeling according to the level of high-frequency band correspondence between CS and SP. These findings may lead to less burdensome data collection for training speech BCIs, which could eventually enhance the rate at which the vocabulary can grow.
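The "Riemannian approach" to EEG feature extraction generally means representing each trial by its channel covariance matrix and comparing covariances under the affine-invariant Riemannian metric. A minimal sketch under that reading (not the paper's implementation; the nearest-reference classifier below is a simplification of the usual minimum-distance-to-mean scheme, which computes iterative Riemannian means per class):

```python
import numpy as np
from scipy.linalg import eigvalsh

def trial_covariance(x, reg=1e-6):
    """Regularized channel covariance of one EEG trial x (n_channels, n_samples)."""
    c = np.cov(x)
    return c + reg * np.eye(c.shape[0])

def riemann_distance(a, b):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(a, b) = sqrt(sum_i log^2 lambda_i), where lambda_i are the
    generalized eigenvalues of b relative to a (eigenvalues of a^{-1} b)."""
    w = eigvalsh(b, a)
    return np.sqrt(np.sum(np.log(w) ** 2))

def classify_nearest_reference(test_cov, class_refs):
    """Assign a trial to the class with the nearest reference covariance."""
    return int(np.argmin([riemann_distance(test_cov, m) for m in class_refs]))
```

The distance is invariant to any common invertible linear transform of the channels, which is one reason covariance-based pipelines transfer well across EEG sessions.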
Affiliation(s)
- Jae Moon: Institute of Biomedical Engineering, University of Toronto, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada.
- Tom Chau: Institute of Biomedical Engineering, University of Toronto, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Ontario, Canada.
9. Liang B, Li Y, Zhao W, Du Y. Bilateral human laryngeal motor cortex in perceptual decision of lexical tone and voicing of consonant. Nat Commun 2023;14:4710. PMID: 37543659. PMCID: PMC10404239. DOI: 10.1038/s41467-023-40445-0.
Abstract
Speech perception is believed to recruit the left motor cortex. However, the exact roles of the laryngeal subregion and its right-hemisphere counterpart in speech perception, as well as their temporal patterns of involvement, remain unclear. To address these questions, we conducted a hypothesis-driven study, applying transcranial magnetic stimulation over the left or right dorsal laryngeal motor cortex (dLMC) while participants performed perceptual decisions on Mandarin lexical tone or consonant (voicing contrast) presented with or without noise. We used psychometric functions and a hierarchical drift-diffusion model to disentangle perceptual sensitivity from dynamic decision-making parameters. Results showed that the bilateral dLMC was engaged with effector specificity, and this engagement was left-lateralized with right upregulation in noise. Furthermore, the dLMC contributed to various decision stages depending on the hemisphere and task difficulty. These findings substantially advance our understanding of the hemispheric lateralization and temporal dynamics of the bilateral dLMC in sensorimotor integration during speech perceptual decision-making.
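A drift-diffusion model of the kind fitted here decomposes decisions into a drift rate (evidence quality), boundary separation (response caution), and non-decision time. A toy forward simulator, illustrative only and far simpler than the hierarchical Bayesian model the authors used (all parameter values below are arbitrary):

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials, dt=0.001, noise=1.0, seed=0):
    """Simulate choices and reaction times from a simple drift-diffusion model.

    Evidence starts at 0 and accumulates with rate `drift` plus Gaussian
    noise until it hits +boundary (correct) or -boundary (error);
    `ndt` seconds of non-decision time are added to every RT.
    """
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(x > 0)
        rts.append(t + ndt)
    return np.array(choices), np.array(rts)
```

Degrading the stimulus (e.g., adding noise, as in the study) is typically modeled as a lower drift rate, which produces both lower accuracy and slower responses.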
Affiliation(s)
- Baishen Liang: Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China.
- Yanchang Li: Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China.
- Wanying Zhao: Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China.
- Yi Du: Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China; Chinese Institute for Brain Research, Beijing, 102206, China.
10. Soroush PZ, Herff C, Ries SK, Shih JJ, Schultz T, Krusienski DJ. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. Neuroimage 2023;269:119913. PMID: 36731812. DOI: 10.1016/j.neuroimage.2023.119913.
Abstract
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels for the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech decoding counterparts.
11. Lu L, Han M, Zou G, Zheng L, Gao JH. Common and distinct neural representations of imagined and perceived speech. Cereb Cortex 2022;33:6486-6493. PMID: 36587299. DOI: 10.1093/cercor/bhac519.
Abstract
Humans excel at constructing mental representations of speech streams in the absence of external auditory input: the internal experience of speech imagery. Elucidating the neural processes underlying speech imagery is critical to understanding this higher-order brain function in humans. Here, using functional magnetic resonance imaging, we investigated the shared and distinct neural correlates of imagined and perceived speech by asking participants to listen to poems articulated by a male voice (perception condition) and to imagine hearing poems spoken by that same voice (imagery condition). We found that compared to baseline, speech imagery and perception activated overlapping brain regions, including the bilateral superior temporal gyri and supplementary motor areas. The left inferior frontal gyrus was more strongly activated by speech imagery than by speech perception, suggesting functional specialization for generating speech imagery. Although more research with a larger sample size and a direct behavioral indicator is needed to clarify the neural systems underlying the construction of complex speech imagery, this study provides valuable insights into the neural mechanisms of the closely associated but functionally distinct processes of speech imagery and perception.
Affiliation(s)
- Lingxi Lu: Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China.
- Meizhen Han: National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China.
- Guangyuan Zou: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China.
- Li Zheng: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China.
- Jia-Hong Gao: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China.
12. Li T, Zhu X, Wu X, Gong Y, Jones JA, Liu P, Chang Y, Yan N, Chen X, Liu H. Continuous theta burst stimulation over left and right supramarginal gyri demonstrates their involvement in auditory feedback control of vocal production. Cereb Cortex 2022;33:11-22. PMID: 35174862. DOI: 10.1093/cercor/bhac049.
Abstract
The supramarginal gyrus (SMG) has been implicated in auditory-motor integration for vocal production. However, whether the SMG is bilaterally or unilaterally involved in auditory feedback control of vocal production in a causal manner remains unclear. The present event-related potential (ERP) study investigated the causal roles of the left and right SMG in auditory-vocal integration using neuronavigated continuous theta burst stimulation (c-TBS). Twenty-four young adults produced sustained vowel phonations and heard their voice unexpectedly pitch-shifted by ±200 cents after receiving active or sham c-TBS over the left or right SMG. Compared to sham stimulation, c-TBS over the left or right SMG led to significantly smaller vocal compensations for pitch perturbations, accompanied by smaller cortical P2 responses. Moreover, no significant differences were found in the vocal and ERP responses when comparing active c-TBS over the left vs. right SMG. These findings provide neurobehavioral evidence for a causal influence of both the left and right SMG on auditory feedback control of vocal production. Decreased vocal compensations paralleled by reduced P2 responses following c-TBS over the bilateral SMG support their role in auditory-motor transformation in a bottom-up manner: receiving auditory feedback information and mediating vocal compensations for feedback errors.
Affiliation(s)
- Tingni Li
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiaoxia Zhu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiuqin Wu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yulai Gong
- Department of Neurological Rehabilitation, Affiliated Sichuan Provincial Rehabilitation Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, 611135, China
- Jeffery A Jones
- Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
- Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yichen Chang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Nan Yan
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China
13
Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022; 140:104772. [PMID: 35835286 DOI: 10.1016/j.neubiorev.2022.104772] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Revised: 05/18/2022] [Accepted: 07/05/2022] [Indexed: 11/26/2022]
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of the neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery' distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses are presented, and comparisons with other theories of consciousness are made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness', with implications for a mechanistic account of mental health and wellbeing.
14
Effect of functional and effective brain connectivity in identifying vowels from articulation imagery procedures. Cogn Process 2022; 23:593-618. [PMID: 35794496 DOI: 10.1007/s10339-022-01103-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Accepted: 06/15/2022] [Indexed: 11/03/2022]
Abstract
Articulation imagery, a form of mental imagery, refers to imagining speech or speaking to oneself mentally without any articulatory movement. It is an effective domain of research in speech-impaired neural disorders, as speech imagination has high similarity to real voice communication. This work employs electroencephalography (EEG) signals acquired during articulation and articulation imagery to identify the vowel being imagined during different tasks. EEG signals from chosen electrodes are decomposed using the empirical mode decomposition (EMD) method into a series of intrinsic mode functions. Brain connectivity estimators and entropy measures have been computed to analyze the functional cooperation and causal dependence between different cortical regions, as well as the regularity in the signals. Using machine learning techniques such as the multiclass support vector machine (MSVM) and random forest (RF), the vowels have been classified. Three different training and testing protocols (Articulation-AR, Articulation imagery-AI, and Articulation vs Articulation imagery-AR vs AI) were employed for identifying the vowel being imagined or articulated. An overall classification accuracy of 80% was obtained for the articulation imagery protocol, which was higher than for the other two protocols. Also, the MSVM technique outperformed the RF technique in terms of classification accuracy. Brain connectivity estimators combined with machine learning techniques thus appear reliable for identifying the vowel from a subject's thoughts, and may thereby assist people with speech impairment.
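As a rough illustration of the EMD step described in this abstract, the sketch below performs a few sifting passes (cubic-spline envelopes through local extrema) to pull a first intrinsic mode function out of a synthetic two-tone signal. This is a minimal sketch under our own assumptions (function names, sift count, and test signal are ours), not the authors' pipeline:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def sift_once(x):
    """One EMD sifting pass: subtract the mean of cubic-spline
    envelopes drawn through the local maxima and minima."""
    t_idx = np.arange(len(x))
    peaks, _ = find_peaks(x)
    troughs, _ = find_peaks(-x)
    if len(peaks) < 2 or len(troughs) < 2:
        return x, np.zeros_like(x)  # too few extrema to build envelopes
    upper = CubicSpline(peaks, x[peaks])(t_idx)
    lower = CubicSpline(troughs, x[troughs])(t_idx)
    mean_env = (upper + lower) / 2.0
    return x - mean_env, mean_env

def extract_imf(x, n_sifts=10):
    """Crude first intrinsic mode function via repeated sifting."""
    h = np.asarray(x, dtype=float).copy()
    for _ in range(n_sifts):
        h, mean_env = sift_once(h)
        if np.max(np.abs(mean_env)) < 1e-10:
            break
    return h

# Synthetic two-tone "EEG": a fast 25 Hz component riding on a slow 4 Hz one.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 25 * t) + np.sin(2 * np.pi * 4 * t)
imf1 = extract_imf(sig)  # should mostly capture the 25 Hz component
```

A production analysis would use a dedicated EMD library with proper boundary handling and stopping criteria; this sketch only shows the core sift-and-subtract idea.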
15
Nalborczyk L, Debarnot U, Longcamp M, Guillot A, Alario FX. The Role of Motor Inhibition During Covert Speech Production. Front Hum Neurosci 2022; 16:804832. [PMID: 35355587 PMCID: PMC8959424 DOI: 10.3389/fnhum.2022.804832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 01/31/2022] [Indexed: 11/13/2022] Open
Abstract
Covert speech is accompanied by a subjective multisensory experience with auditory and kinaesthetic components. An influential hypothesis states that these sensory percepts result from a simulation of the corresponding motor action that relies on the same internal models recruited for the control of overt speech. This simulationist view raises the question of how it is possible to imagine speech without executing it. In this perspective, we discuss the possible role(s) played by motor inhibition during covert speech production. We suggest that considering covert speech as an inhibited form of overt speech maps naturally onto the purported progressive internalization of overt speech during childhood. We further argue that the role of motor inhibition may differ widely across different forms of covert speech (e.g., condensed vs. expanded covert speech) and that considering this variety helps reconcile seemingly contradictory findings from the neuroimaging literature.
Affiliation(s)
- Ladislas Nalborczyk
- Aix Marseille Univ, CNRS, LPC, Marseille, France
- Aix Marseille Univ, CNRS, LNC, Marseille, France
- Ursula Debarnot
- Inter-University Laboratory of Human Movement Biology-EA 7424, University of Lyon, University Claude Bernard Lyon 1, Villeurbanne, France
- Institut Universitaire de France, Paris, France
- Aymeric Guillot
- Inter-University Laboratory of Human Movement Biology-EA 7424, University of Lyon, University Claude Bernard Lyon 1, Villeurbanne, France
- Institut Universitaire de France, Paris, France
16
Affiliation(s)
- Wade Munroe
- University of Michigan, Department of Philosophy and the Weinberg Institute for Cognitive Science, Ann Arbor, MI, USA
17
Cheng THZ, Creel SC, Iversen JR. How Do You Feel the Rhythm: Dynamic Motor-Auditory Interactions Are Involved in the Imagination of Hierarchical Timing. J Neurosci 2022; 42:500-512. [PMID: 34848500 PMCID: PMC8802922 DOI: 10.1523/jneurosci.1121-21.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Revised: 11/10/2021] [Accepted: 11/12/2021] [Indexed: 11/21/2022] Open
Abstract
Predicting and organizing patterns of events is important for humans to survive in a dynamically changing world. The motor system has been proposed to be actively, and necessarily, engaged in not only the production but the perception of rhythm by organizing hierarchical timing that influences auditory responses. It is not yet well understood how the motor system interacts with the auditory system to perceive and maintain hierarchical structure in time. This study investigated the dynamic interaction between auditory and motor functional sources during the perception and imagination of musical meters. We pursued this using a novel method combining high-density EEG, EMG, and motion capture with independent component analysis to separate motor and auditory activity during meter imagery while robustly controlling against covert movement. We demonstrated that endogenous brain activity in both auditory and motor functional sources reflects the imagination of binary and ternary meters in the absence of corresponding acoustic cues or overt movement at the meter rate. We found clear evidence for hypothesized motor-to-auditory information flow at the beat rate in all conditions, suggesting a role for top-down influence of the motor system on auditory processing of beat-based rhythms, and reflecting an auditory-motor system with tight reciprocal informational coupling. These findings align with and further extend a set of motor hypotheses from beat perception to hierarchical meter imagination, adding supporting evidence to active engagement of the motor system in auditory processing, which may more broadly speak to the neural mechanisms of temporal processing in other human cognitive functions.SIGNIFICANCE STATEMENT Humans live in a world full of hierarchically structured temporal information, the accurate perception of which is essential for understanding speech and music. 
Music provides a window into the brain mechanisms of time perception, enabling us to examine how the brain groups musical beats into, for example a march or waltz. Using a novel paradigm combining measurement of electrical brain activity with data-driven analysis, this study directly investigates motor-auditory connectivity during meter imagination. Findings highlight the importance of the motor system in the active imagination of meter. This study sheds new light on a fundamental form of perception by demonstrating how auditory-motor interaction may support hierarchical timing processing, which may have clinical implications for speech and motor rehabilitation.
Affiliation(s)
- Tzu-Han Zoe Cheng
- Department of Cognitive Science, University of California-San Diego, La Jolla, California 92093
- Institute for Neural Computation and Swartz Center for Computational Neuroscience, University of California-San Diego, La Jolla, California 92093
- Sarah C Creel
- Department of Cognitive Science, University of California-San Diego, La Jolla, California 92093
- John R Iversen
- Institute for Neural Computation and Swartz Center for Computational Neuroscience, University of California-San Diego, La Jolla, California 92093
18
Moon J, Chau T, Orlandi S. A comparison and classification of oscillatory characteristics in speech perception and covert speech. Brain Res 2022; 1781:147778. [PMID: 35007548 DOI: 10.1016/j.brainres.2022.147778] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 12/29/2021] [Accepted: 01/03/2022] [Indexed: 11/02/2022]
Abstract
Covert speech, the mental imagery of speaking, has been studied increasingly to understand and decode thoughts in the context of brain-computer interfaces. In studies of speech comprehension, neural oscillations are thought to play a key role in the temporal encoding of speech. However, little is known about the role of oscillations in covert speech. In this study, we investigated the oscillatory involvements in covert speech and speech perception. Data were collected from 10 participants with 64-channel EEG. Participants heard the words 'blue' and 'orange' and subsequently mentally rehearsed them. First, continuous wavelet transform was performed on epoched signals, and two-tailed t-tests between the two classes were then conducted to determine statistical differences in frequency and time (t-CWT). Features were also extracted using t-CWT and subsequently classified using a support vector machine. θ-γ phase-amplitude coupling (PAC) was also assessed within and between tasks. All binary classifications produced accuracies significantly greater (80-90%) than chance level, supporting the use of t-CWT in determining relative oscillatory involvements. While the perception task dynamically invoked all frequencies with more prominent θ and α activity, the covert task favoured higher frequencies, with significantly higher γ activity than perception. Moreover, the perception condition produced significant θ-γ PAC, corroborating a reported linkage between syllabic and phonemic sampling. Although this coupling was suppressed in the covert condition, we found significant cross-task coupling between perception θ and covert speech γ. Covert speech processing thus appears to be largely associated with the higher frequencies of EEG. Importantly, the significant cross-task coupling between speech perception and covert speech, in the absence of within-task covert speech PAC, supports the notion that the γ- and θ-bands subserve, respectively, shared and unique encoding processes across tasks.
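The θ-γ phase-amplitude coupling analysis mentioned in this abstract is commonly quantified with a mean-vector-length measure; the hedged sketch below demonstrates it on synthetic data (the band edges, signal parameters, and Canolty-style estimator are our assumptions, not necessarily the paper's exact method):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Mean-vector-length PAC: how strongly the high-band amplitude
    envelope is locked to the low-band phase (near 0 = no coupling)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

# Gamma amplitude-modulated by theta -> coupled; constant gamma -> not.
fs = 500.0
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + 0.5 * (1 + theta) * np.sin(2 * np.pi * 50 * t)
uncoupled = theta + 0.5 * np.sin(2 * np.pi * 50 * t)
```

On these signals, `pac_mvl(coupled, fs)` is substantially larger than `pac_mvl(uncoupled, fs)`, reflecting theta-locked gamma bursts.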
Affiliation(s)
- Jaewoong Moon
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Tom Chau
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Silvia Orlandi
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
19
López-Silva P, Cavieres Á, Humpston C. The phenomenology of auditory verbal hallucinations in schizophrenia and the challenge from pseudohallucinations. Front Psychiatry 2022; 13:826654. [PMID: 36051554 PMCID: PMC9424625 DOI: 10.3389/fpsyt.2022.826654] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 07/25/2022] [Indexed: 11/13/2022] Open
Abstract
In trying to make sense of the extensive phenomenological variation in first-person reports of auditory verbal hallucinations, the concept of pseudohallucination was originally introduced to designate any hallucinatory-like phenomenon not exhibiting some of the paradigmatic features of "genuine" hallucinations. After its introduction, Karl Jaspers located the notion of pseudohallucination in the auditory domain, appealing to a distinction between hallucinatory voices heard within subjective inner space (pseudohallucinations) and voices heard in outer external space (real hallucinations), with differences in their sensory richness. Jaspers' characterization of the term has been the target of a number of phenomenological, conceptual, and empirically based criticisms. From this latter point of view, it has been claimed that the concept cannot capture distinct phenomena at the neurobiological level. In recent years, the notion of pseudohallucination seems to have fallen into disuse, as no major diagnostic system refers to it. In this paper, we propose that even if the concept of pseudohallucination is not helpful for differentiating distinct phenomena at the neurobiological level, the inner/outer distinction highlighted by Jaspers' characterization of the term remains an open explanatory challenge for dominant theories of the neurocognitive origin of auditory verbal hallucinations. We call this "the challenge from pseudohallucinations". After exploring this issue in detail, we propose some phenomenological, conceptual, and empirical paths for future research that might help to build a more contextualized and dynamic view of auditory verbal hallucinatory phenomena.
Affiliation(s)
- Pablo López-Silva
- School of Psychology, Faculty of Social Sciences, Universidad de Valparaíso, Valparaíso, Chile; Millennium Institute for Research in Depression and Personality (MIDAP), Santiago, Chile
- Álvaro Cavieres
- Department of Psychiatry, School of Medicine, Faculty of Medicine, Universidad de Valparaíso, Valparaíso, Chile
- Clara Humpston
- School of Psychology, University of York, York, United Kingdom; School of Psychology, Institute for Mental Health, University of Birmingham, Birmingham, United Kingdom
20
Dikker S, Mech EN, Gwilliams L, West T, Dumas G, Federmeier KD. Exploring age-related changes in inter-brain synchrony during verbal communication. PSYCHOLOGY OF LEARNING AND MOTIVATION 2022. [DOI: 10.1016/bs.plm.2022.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
21
Panachakel JT, G RA. Classification of Phonological Categories in Imagined Speech using Phase Synchronization Measure. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2226-2229. [PMID: 34891729 DOI: 10.1109/embc46164.2021.9630699] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Phonological categories in articulated speech are defined based on the place and manner of articulation. In this work, we investigate whether the phonological categories of prompts imagined during speech imagery lead to differences in phase synchronization across cortical regions that can be discriminated from the EEG captured during the imagination. Nasal and bilabial consonants are the two phonological categories considered, due to their differences in both place and manner of articulation. Mean phase coherence (MPC) is used for measuring the phase synchronization, and a shallow neural network (NN) is used as the classifier. As a benchmark, we also designed another NN based on statistical parameters extracted from imagined-speech EEG. The NN trained on MPC values in the beta band gives classification results superior to NNs trained on alpha-band MPC values, gamma-band MPC values, and statistical parameters extracted from the EEG. Clinical relevance: Brain-computer interface (BCI) is a promising tool for aiding differently-abled people and for neurorehabilitation. One of the challenges in designing a speech-imagery-based BCI is the identification of speech prompts that lead to distinct neural activations. We have shown that nasal and bilabial consonants lead to dissimilar activations; hence, prompts orthogonal in these phonological categories are good choices as speech imagery prompts.
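Mean phase coherence, the synchronization metric named in this abstract, can be sketched as the magnitude of the average phase-difference vector between two bandpass-filtered channels. The band limits and test signals below are illustrative choices of ours, not the study's:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def mean_phase_coherence(x, y, lo, hi, fs):
    """MPC = |mean(exp(i*(phi_x - phi_y)))|: 1 for a constant phase
    lag between channels, near 0 for unrelated phases."""
    phi_x = np.angle(hilbert(bandpass(x, lo, hi, fs)))
    phi_y = np.angle(hilbert(bandpass(y, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

# Two channels sharing a 20 Hz (beta) rhythm at a fixed lag, plus noise,
# versus a third channel of pure noise.
rng = np.random.default_rng(1)
fs = 250.0
t = np.arange(0, 8, 1 / fs)
ch1 = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(len(t))
ch2 = np.sin(2 * np.pi * 20 * t + 0.4) + 0.3 * rng.standard_normal(len(t))
ch3 = rng.standard_normal(len(t))  # unrelated channel
```

Here `mean_phase_coherence(ch1, ch2, 13, 30, fs)` is close to 1, while the same measure against `ch3` stays low, which is the contrast the classifier in the study feeds on.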
22
Panachakel JT, Sharma K, A S A, A G R. Can we identify the category of imagined phoneme from EEG? ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:459-462. [PMID: 34891332 DOI: 10.1109/embc46164.2021.9630604] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Phonemes are classified into different categories based on the place and manner of articulation. We investigate the differences between the neural correlates of imagined nasal and bilabial consonants (distinct phonological categories). Mean phase coherence is used as a metric for measuring the phase synchronisation between pairs of electrodes in six cortical regions (auditory, motor, prefrontal, sensorimotor, somatosensory, and premotor) during the imagery of nasal and bilabial consonants. A statistically significant difference at the 95% confidence level is observed in the beta and lower-gamma bands in various cortical regions. Our observations are in line with the directions-into-velocities-of-articulators and dual-stream prediction models and support the hypothesis that phonological categories not only exist in articulated speech but can also be distinguished from the EEG of imagined speech.
23
Sartori RF, Nobre GC, Fonseca RP, Valentini NC. Do executive functions and gross motor skills predict writing and mathematical performance in children with developmental coordination disorder? APPLIED NEUROPSYCHOLOGY-CHILD 2021; 11:825-839. [PMID: 34651539 DOI: 10.1080/21622965.2021.1987236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Aim: To examine whether executive functions and gross motor skills were predictors of school performance in children with developmental coordination disorder (DCD), children at risk for DCD (r-DCD), and children with typical development (TD). Methods: Participants were 63 children with DCD (Mage = 8.70, SDage = 0.64), 31 children with r-DCD (Mage = 8.90, SDage = 0.74), and 63 typically developing children (Mage = 8.74, SDage = 0.63). The Wechsler Abbreviated Scale of Intelligence, Movement Assessment Battery for Children-2, Test of Gross Motor Development-3, Oral Word Span in Sentences, Odd-One-Out, Go/No-Go, Hayling Test, Trail Making Test, Five Digits Test, and the Test of School Performance-II were utilized. Results: In DCD, processing speed (β = -.42, p = .005), auditory-motor inhibition (β = -.36, p = .009), and auditory-verbal inhibition (β = -.38, p = .023) predicted math performance; auditory-motor inhibition (β = -.40, p = .38) and visuospatial working memory (β = -.33, p = .011) predicted writing performance. In r-DCD, auditory-motor (β = -.67, p = .002) and visual-motor (β = -.40, p = .040) inhibition predicted math performance; visual-motor inhibition predicted writing performance (β = -.47, p = .015). Conclusion: Lower inhibitory control and visuospatial working memory scores affect the school performance of children with DCD and r-DCD.
Affiliation(s)
- Rodrigo Flores Sartori
- Department of Physical Education, Pontifical Catholic University of Rio Grande do Sul, Rio Grande do Sul, Brazil
- Glauber Carvalho Nobre
- Department of Physical Education, Federal Institute of Education, Science and Technology of Ceará, Fortaleza, Brazil
- Rochele Paz Fonseca
- Department of Psychology, Pontifical Catholic University of Rio Grande do Sul, Rio Grande do Sul, Brazil
- Nadia Cristina Valentini
- Department of Physical Education, School of Physical Education, Physiotherapy and Dance, Federal University of Rio Grande do Sul, Rio Grande do Sul, Brazil
24
Si X, Li S, Xiang S, Yu J, Ming D. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex. J Neural Eng 2021; 18. [PMID: 34507311 DOI: 10.1088/1741-2552/ac25d9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 09/10/2021] [Indexed: 11/12/2022]
Abstract
Objective. Decoding imagined speech from brain signals could provide a more natural, user-friendly way of developing the next generation of brain-computer interfaces (BCIs). With the advantages of being non-invasive and portable, having relatively high spatial resolution, and being insensitive to motion artifacts, functional near-infrared spectroscopy (fNIRS) shows great potential for developing a non-invasive speech BCI. However, there is a lack of fNIRS evidence uncovering the neural mechanism of imagined speech. Our goal is to investigate the specific brain regions and the corresponding cortico-cortical functional connectivity features during imagined speech with fNIRS. Approach. fNIRS signals were recorded from 13 subjects' bilateral motor and prefrontal cortex during overtly and covertly repeating words. Cortical activation was determined through the mean oxygen-hemoglobin concentration changes, and functional connectivity was calculated with Pearson's correlation coefficient. Main results. (a) The bilateral dorsal motor cortex was significantly activated during covert speech, whereas the bilateral ventral motor cortex was significantly activated during overt speech. (b) As a subregion of the motor cortex, the sensorimotor cortex (SMC) showed a dominant dorsal response to the covert speech condition and a dominant ventral response to the overt speech condition. (c) Broca's area was deactivated during covert speech but activated during overt speech. (d) Compared with overt speech, dorsal SMC (dSMC)-related functional connections were enhanced during covert speech. Significance. We provide fNIRS evidence for the involvement of the dSMC in speech imagery. The dSMC is the speech imagery network's key hub and is probably involved in sensorimotor information processing during covert speech. This study could inspire the BCI community to focus on the potential contribution of the dSMC during speech imagery.
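The Pearson-correlation connectivity described in the Approach section reduces, in the simplest case, to a channel-by-channel correlation matrix over the HbO time courses; a minimal sketch with synthetic channels (all names and parameters here are ours, for illustration only):

```python
import numpy as np

def connectivity_matrix(signals):
    """Pearson functional connectivity: correlation between every
    pair of rows of an (n_channels, n_samples) array."""
    return np.corrcoef(signals)

# Two channels driven by a shared hemodynamic signal, one independent.
rng = np.random.default_rng(2)
n = 1000
shared = rng.standard_normal(n)               # common hemodynamic drive
ch_a = shared + 0.2 * rng.standard_normal(n)
ch_b = shared + 0.2 * rng.standard_normal(n)
ch_c = rng.standard_normal(n)                 # independent channel
fc = connectivity_matrix(np.vstack([ch_a, ch_b, ch_c]))
```

`fc[0, 1]` comes out near 1 (strong "connection"), while `fc[0, 2]` hovers near 0; comparing such matrices between conditions is what the dSMC-related connectivity contrast amounts to.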
Affiliation(s)
- Xiaopeng Si
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China; Institute of Applied Psychology, Tianjin University, Tianjin 300350, People's Republic of China
- Sicheng Li
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Shaoxin Xiang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Jiayue Yu
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
25
Jenson D. Audiovisual incongruence differentially impacts left and right hemisphere sensorimotor oscillations: Potential applications to production. PLoS One 2021; 16:e0258335. [PMID: 34618866 PMCID: PMC8496780 DOI: 10.1371/journal.pone.0258335] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2020] [Accepted: 09/26/2021] [Indexed: 11/21/2022] Open
Abstract
Speech production gives rise to distinct auditory and somatosensory feedback signals, which are dynamically integrated to enable online monitoring and error correction, though it remains unclear how the sensorimotor system supports the integration of these multimodal signals. Capitalizing on the parity of sensorimotor processes supporting perception and production, the current study employed the McGurk paradigm to induce multimodal sensory congruence/incongruence. EEG data from a cohort of 39 typical speakers were decomposed with independent component analysis to identify bilateral mu rhythms, indices of sensorimotor activity. Subsequent time-frequency analyses revealed bilateral patterns of event-related desynchronization (ERD) across alpha and beta frequency ranges over the time course of perceptual events. Right mu activity was characterized by reduced ERD during all cases of audiovisual incongruence, while left mu activity was attenuated and protracted in McGurk trials eliciting sensory fusion. Results were interpreted to suggest distinct hemispheric contributions, with right-hemisphere mu activity supporting a coarse incongruence-detection process and left-hemisphere mu activity reflecting a more granular level of analysis, including phonological identification and incongruence resolution. Findings are also considered with regard to incongruence detection and resolution processes during production.
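Event-related desynchronization of the kind analyzed here is often expressed as the percent band-power change relative to a pre-event baseline. The sketch below uses a Hilbert-envelope power estimate on a synthetic alpha-suppression trial; the band, windows, and test signal are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def erd_percent(trial, fs, band, baseline, window):
    """ERD% = 100 * (P_window - P_baseline) / P_baseline, with band
    power taken from the squared Hilbert envelope; negative = ERD."""
    power = np.abs(hilbert(bandpass(trial, *band, fs))) ** 2
    b0, b1 = int(baseline[0] * fs), int(baseline[1] * fs)
    w0, w1 = int(window[0] * fs), int(window[1] * fs)
    p_base = power[b0:b1].mean()
    return 100.0 * (power[w0:w1].mean() - p_base) / p_base

# Synthetic trial: alpha amplitude drops from 1.0 to 0.4 at t = 2 s.
fs = 250.0
t = np.arange(0, 4, 1 / fs)
amp = np.where(t < 2, 1.0, 0.4)
trial = amp * np.sin(2 * np.pi * 10 * t) \
    + 0.1 * np.random.default_rng(3).standard_normal(len(t))
erd = erd_percent(trial, fs, (8, 13), baseline=(0.5, 1.5), window=(2.5, 3.5))
```

A strongly negative `erd` (here roughly -80%) is the desynchronization signature the mu-rhythm analyses quantify.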
Affiliation(s)
- David Jenson
- Department of Speech and Hearing Sciences, Washington State University, Spokane, Washington, United States of America
26
Sun J, Wang Z, Tian X. Manual Gestures Modulate Early Neural Responses in Loudness Perception. Front Neurosci 2021; 15:634967. [PMID: 34539324 PMCID: PMC8440995 DOI: 10.3389/fnins.2021.634967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 08/06/2021] [Indexed: 12/02/2022] Open
Abstract
How different sensory modalities interact to shape perception is a fundamental question in cognitive neuroscience. Previous studies in audiovisual interaction have focused on abstract levels such as categorical representation (e.g., McGurk effect). It is unclear whether the cross-modal modulation can extend to low-level perceptual attributes. This study used motional manual gestures to test whether and how the loudness perception can be modulated by visual-motion information. Specifically, we implemented a novel paradigm in which participants compared the loudness of two consecutive sounds whose intensity changes around the just noticeable difference (JND), with manual gestures concurrently presented with the second sound. In two behavioral experiments and two EEG experiments, we investigated our hypothesis that the visual-motor information in gestures would modulate loudness perception. Behavioral results showed that the gestural information biased the judgment of loudness. More importantly, the EEG results demonstrated that early auditory responses around 100 ms after sound onset (N100) were modulated by the gestures. These consistent results in four behavioral and EEG experiments suggest that visual-motor processing can integrate with auditory processing at an early perceptual stage to shape the perception of a low-level perceptual attribute such as loudness, at least under challenging listening conditions.
Affiliation(s)
- Jiaqiu Sun
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Ziqing Wang
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China; Shanghai Key Laboratory of Brain Functional Genomics, Ministry of Education, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
27
Marion G, Di Liberto GM, Shamma SA. The Music of Silence: Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics. J Neurosci 2021; 41:7435-7448. [PMID: 34341155] [PMCID: PMC8412990] [DOI: 10.1523/jneurosci.0183-21.2021]
Abstract
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 females and 15 males). Regression analyses were conducted to demonstrate that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that they are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, which primarily included a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery. SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation. It is unclear, however, what the temporal dynamics of this activation are, as well as what musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery. This study reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it is also found that a simple mapping based on a time-shift and a polarity inversion could robustly describe the relationship between listening and imagery signals.
Affiliation(s)
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Trinity Centre for Biomedical Engineering, Trinity College Institute of Neuroscience, Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity College, University of Dublin, D02 PN40, Dublin 2, Ireland
- School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, D04 V1W8, Dublin 4, Ireland
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
28
Yang F, Zhu H, Yu L, Lu W, Zhang C, Tian X. Deficits in multi-scale top-down processes distorting auditory perception in schizophrenia. Behav Brain Res 2021; 412:113411. [PMID: 34119507] [DOI: 10.1016/j.bbr.2021.113411]
Abstract
Cognitive models postulate that impaired source monitoring incorrectly weights the top-down prediction and bottom-up sensory processes and thereby causes hallucinations. However, the underlying mechanisms of this interaction, such as whether the incorrect weighting applies ubiquitously to all levels of sensory features and whether different top-down processes have distinct effects in subgroups of schizophrenia, are still unclear. This study investigates how multi-scale predictions influence the perception of basic tonal features in schizophrenia. Sixty-three schizophrenia patients with and without symptoms of auditory verbal hallucinations (AVHs), and thirty healthy controls, identified target tones in noise at the end of tone sequences. Predictions at different timescales were manipulated by either an alternating pattern in the preceding tone sequences (long-term regularity) or a repetition between the target tone and the tone immediately before it (short-term repetition). The sensitivity index, d prime (d'), was obtained to assess the modulation of predictions on tone identification. Patients with AVHs showed higher d' when the target tones conformed to the long-term regularity of the alternating pattern in the preceding tone sequence than when the target tones were inconsistent with the pattern. In contrast, the short-term repetition modulated tone identification in patients without AVHs. Predictions did not influence tone identification in healthy controls. Our results suggest that impaired source monitoring in schizophrenia patients with AVHs heavily weights top-down predictions over bottom-up perceptual processes to form incorrect percepts. The weighting function in source monitoring can extend to the processing of basic tonal features, and predictions at multiple timescales could differentially modulate perception in different clinical populations. The impaired interaction between top-down and bottom-up processes might underlie the development of hallucination symptoms in schizophrenia.
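The sensitivity index d' used in this design is standard signal-detection theory: d' = z(hit rate) - z(false-alarm rate). A minimal stdlib sketch (the log-linear correction for extreme rates is a common convention, not stated in the abstract):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    Adding 0.5 to each cell (log-linear correction) avoids infinite
    z-scores when a rate is exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# A listener identifying target tones well above chance:
sensitive = d_prime(40, 10, 10, 40)   # clearly positive d'
chance = d_prime(25, 25, 25, 25)      # hit rate == false-alarm rate -> d' = 0
```

Higher d' for pattern-consistent targets, as reported for the AVH group, would indicate that the long-term regularity genuinely improved tone identification.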
Affiliation(s)
- Fuyin Yang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China
- Hao Zhu
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China; Division of Arts and Sciences, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
- Lingfang Yu
- Schizophrenia Program, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
- Weihong Lu
- Schizophrenia Program, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
- Chen Zhang
- Schizophrenia Program, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
- Xing Tian
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China; Division of Arts and Sciences, New York University Shanghai, 1555 Century Avenue, Shanghai, 200122, China
29
Panachakel JT, Ramakrishnan AG. Decoding Covert Speech From EEG-A Comprehensive Review. Front Neurosci 2021; 15:642251. [PMID: 33994922] [PMCID: PMC8116487] [DOI: 10.3389/fnins.2021.642251]
Abstract
Over the past decade, many researchers have developed systems for decoding covert or imagined speech from EEG (electroencephalography). These systems differ from each other in several aspects, from data acquisition to machine learning algorithms, which makes direct comparison between implementations difficult. This review article puts together all the relevant work published in the last decade on decoding imagined speech from EEG into a single framework. Every important aspect of designing such a system, such as the selection of words to be imagined, the number of electrodes to be recorded, temporal and spatial filtering, feature extraction, and classifier selection, is reviewed. This helps a researcher compare the relative merits and demerits of the different approaches and choose the most suitable one. Speech being the most natural form of communication, which human beings acquire even without formal education, imagined speech is an ideal choice of prompt for evoking brain activity patterns for a BCI (brain-computer interface) system, although research on developing real-time (online) speech-imagery-based BCI systems is still in its infancy. Covert-speech-based BCIs can help people with disabilities improve their quality of life. They can also be used for covert communication in environments that do not support vocal communication. This paper also discusses some future directions that will aid the deployment of speech-imagery-based BCIs for practical applications, rather than only for laboratory experiments.
Affiliation(s)
- Jerrin Thomas Panachakel
- Medical Intelligence and Language Engineering Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India
30
Lu L, Sheng J, Liu Z, Gao JH. Neural representations of imagined speech revealed by frequency-tagged magnetoencephalography responses. Neuroimage 2021; 229:117724. [PMID: 33421593] [DOI: 10.1016/j.neuroimage.2021.117724]
Abstract
Speech mental imagery is a quasi-perceptual experience that occurs in the absence of real speech stimulation. How imagined speech with higher-order structures such as words, phrases, and sentences is rapidly organized and internally constructed remains elusive. To address this issue, subjects were tasked with imagining and perceiving poems along with a sequence of reference sounds presented at a rate of 4 Hz while magnetoencephalography (MEG) was recorded. Given that a sentence in a traditional Chinese poem contains five syllables, a sentential rhythm was generated at a distinctive frequency of 0.8 Hz. Using frequency tagging, we concurrently tracked the top-down generation of rhythmic constructs embedded in speech mental imagery and the bottom-up sensory-driven activity, which were precisely tagged at the sentence-level rate of 0.8 Hz and the stimulus-level rate of 4 Hz, respectively. We found similar neural responses induced by the internal construction of sentences from syllables for both imagined and perceived poems, and further revealed shared and distinct cohorts of cortical areas corresponding to the sentence-level rhythm in imagery and perception. This study supports the view of a common mechanism between imagery and perception by illustrating the neural representations of higher-order rhythmic structures embedded in imagined and perceived speech.
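Frequency tagging amounts to reading out the amplitude of the response spectrum at the stimulation rates (here 4 Hz syllables, 0.8 Hz sentences). A stdlib DFT sketch on a synthetic signal; the sampling rate and component amplitudes are arbitrary choices for illustration:

```python
import cmath
import math

def dft_amplitude(signal, k):
    """Amplitude of the k-th DFT bin of a real-valued signal."""
    n = len(signal)
    s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
    return abs(s) / n

fs = 20.0            # sampling rate (Hz), illustrative
dur = 10.0           # 10 s of signal -> 0.1 Hz frequency resolution
n = int(fs * dur)

# Syllable-rate (4 Hz) and sentence-rate (0.8 Hz) components, as in the
# poem paradigm: five 4-Hz syllables per 0.8-Hz sentence.
x = [math.sin(2 * math.pi * 4.0 * t / fs) + 0.5 * math.sin(2 * math.pi * 0.8 * t / fs)
     for t in range(n)]

amp_4hz = dft_amplitude(x, int(4.0 * dur))    # bin 40: stimulus-level tag
amp_08hz = dft_amplitude(x, int(0.8 * dur))   # bin 8: sentence-level tag
amp_ctrl = dft_amplitude(x, int(2.0 * dur))   # untagged control bin (2 Hz)
```

Peaks appear only at the tagged bins; a 0.8 Hz peak during imagery would index the internally constructed sentential rhythm, since no stimulus energy exists at that rate.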
Affiliation(s)
- Lingxi Lu
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, 100083, China
- Jingwei Sheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Beijing Quanmag Healthcare, Beijing, 100195, China
- Zhaowei Liu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Center for Excellence in Brain Science and Intelligence Technology (Institute of Neuroscience), Chinese Academy of Science, Shanghai, 200031, China
- Jia-Hong Gao
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, 100871, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, 100871, China
31
Pinheiro AP, Schwartze M, Kotz SA. Cerebellar circuitry and auditory verbal hallucinations: An integrative synthesis and perspective. Neurosci Biobehav Rev 2020; 118:485-503. [DOI: 10.1016/j.neubiorev.2020.08.004]
32
Borghi AM. A Future of Words: Language and the Challenge of Abstract Concepts. J Cogn 2020; 3:42. [PMID: 33134816] [PMCID: PMC7583217] [DOI: 10.5334/joc.134]
Abstract
The paper outlines one of the most important challenges that embodied and grounded theories need to face, i.e., explaining how abstract concepts (abstractness) are acquired, represented, and used. I illustrate the view according to which abstract concepts are grounded not only in sensorimotor experiences, like concrete concepts, but also, and to a greater extent, in linguistic, social, and inner experiences. Specifically, I discuss the role played by metacognition, inner speech, social metacognition, and interoception. I also present evidence showing that the weight of linguistic, social, and inner experiences varies depending on the considered sub-kind of abstract concepts (e.g., mental states and spiritual concepts, numbers, emotions, social concepts). I argue that the challenge of explaining abstract concept representation implies the recognition of: a. the role of language, understood as an inner and social tool, in shaping our mind; b. the importance of differences across languages; c. the existence of different kinds of abstract concepts; d. the necessity of adopting new paradigms able to capture the use of abstract concepts in context and in interactive situations. This challenge should be addressed with an integrated approach that bridges developmental, anthropological, and neuroscientific studies. This approach extends embodied and grounded views by incorporating insights from distributional-statistics views of meaning, from pragmatics, and from semiotics.
Affiliation(s)
- Anna M. Borghi
- Sapienza University of Rome, Department of Dynamic and Clinical Psychology, IT
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, IT
33
Li Y, Luo H, Tian X. Mental operations in rhythm: Motor-to-sensory transformation mediates imagined singing. PLoS Biol 2020; 18:e3000504. [PMID: 33017389] [PMCID: PMC7561264] [DOI: 10.1371/journal.pbio.3000504]
Abstract
What enables the mental activities of thinking verbally or humming in our mind? We hypothesized that the interaction between motor and sensory systems induces speech and melodic mental representations, and this motor-to-sensory transformation forms the neural basis that enables our verbal thinking and covert singing. Analogous with the neural entrainment to auditory stimuli, participants imagined singing lyrics of well-known songs rhythmically while their neural electromagnetic signals were recorded using magnetoencephalography (MEG). We found that when participants imagined singing the same song in similar durations across trials, the delta frequency band (1–3 Hz, similar to the rhythm of the songs) showed more consistent phase coherence across trials. This neural phase tracking of imagined singing was observed in a frontal-parietal-temporal network: the proposed motor-to-sensory transformation pathway, including the inferior frontal gyrus (IFG), insula (INS), premotor area, intra-parietal sulcus (IPS), temporal-parietal junction (TPJ), primary auditory cortex (Heschl’s gyrus [HG]), and superior temporal gyrus (STG) and sulcus (STS). These results suggest that neural responses can entrain the rhythm of mental activity. Moreover, the theta-band (4–8 Hz) phase coherence was localized in the auditory cortices. The mu (9–12 Hz) and beta (17–20 Hz) bands were observed in the right-lateralized sensorimotor systems that were consistent with the singing context. The gamma band was broadly manifested in the observed network. The coherent and frequency-specific activations in the motor-to-sensory transformation network mediate the internal construction of perceptual representations and form the foundation of neural computations for mental operations. What enables our mental activities for thinking verbally or humming in our mind? Using an imagined singing paradigm with magnetoencephalography recordings, this study shows that neural oscillations in the motor-to-sensory transformation network tracked inner speech and covert singing.
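The "consistent phase coherence across trials" reported here is commonly quantified as inter-trial phase coherence (ITPC): the length of the mean resultant vector of per-trial phases at a given frequency. A minimal stdlib sketch on synthetic phase data (the study's exact pipeline may differ):

```python
import cmath
import math
import random

def itpc(phases):
    """Inter-trial phase coherence: magnitude of the mean of unit phase
    vectors across trials (1 = perfectly consistent, near 0 = random)."""
    vectors = [cmath.exp(1j * p) for p in phases]
    return abs(sum(vectors)) / len(vectors)

random.seed(0)
# Trials tracking the imagined song share a similar delta-band phase;
# unrelated trials have uniformly random phase (synthetic illustration).
consistent = [0.5 + random.gauss(0.0, 0.2) for _ in range(50)]
scrambled = [random.uniform(-math.pi, math.pi) for _ in range(50)]
```

High ITPC in the delta band across repeated imagined performances is what licenses the claim that neural responses entrained the rhythm of the mental activity.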
Affiliation(s)
- Yanzhu Li
- New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Huan Luo
- Peking University, Beijing, China
- Xing Tian
- New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
34
Langland-Hassan P. Inner speech. Wiley Interdiscip Rev Cogn Sci 2020; 12:e1544. [PMID: 32949083] [DOI: 10.1002/wcs.1544]
Abstract
Inner speech travels under many aliases: the inner voice, verbal thought, thinking in words, internal verbalization, "talking in your head," the "little voice in the head," and so on. It is both a familiar element of first-person experience and a psychological phenomenon whose complex cognitive components and distributed neural bases are increasingly well understood. There is evidence that inner speech plays a variety of cognitive roles, from enabling abstract thought, to supporting metacognition, memory, and executive function. One active area of controversy concerns the relation of inner speech to auditory verbal hallucinations (AVHs) in schizophrenia, with a common proposal being that sufferers of AVH misidentify their own inner speech as being generated by someone else. Recently, researchers have used artificial intelligence to translate the neural and neuromuscular signatures of inner speech into corresponding outer speech signals, laying the groundwork for a variety of new applications and interventions. This article is categorized under: Philosophy > Foundations of Cognitive Science Linguistics > Language in Mind and Brain Philosophy > Consciousness Philosophy > Psychological Capacities.
35
Verdurand M, Rossato S, Zmarich C. Coarticulatory Aspects of the Fluent Speech of French and Italian People Who Stutter Under Altered Auditory Feedback. Front Psychol 2020; 11:1745. [PMID: 32793069] [PMCID: PMC7390966] [DOI: 10.3389/fpsyg.2020.01745]
Abstract
A number of studies have shown that phonetic peculiarities, especially at the coarticulation level, exist in the disfluent as well as in the perceptively fluent speech of people who stutter (PWS). However, results from fluent speech are very disparate and not easily interpretable. Are the coarticulatory features observed in the fluent speech of PWS a manifestation of the disorder, or rather a compensation for the disorder itself? The purpose of the present study is to investigate coarticulatory behavior in the fluent speech of PWS in an attempt to answer the question of its symptomatic or adaptive nature. To this end, we studied the speech of 21 adult PWS (10 French and 11 Italian) compared to that of 20 fluent adults (10 French and 10 Italian). The participants had to repeat simple CV syllables in short carrier sentences, where C = /b, d, g/ and V = /a, i, u/. Crucially, this repetition task was performed in order to compare the fluent-speech coarticulation of PWS to that of PWNS, and to compare the coarticulation of PWS under a condition with normal auditory feedback (NAF) and under a fluency-enhancing condition with altered auditory feedback (AAF). This is, to our knowledge, the first study to investigate coarticulation behavior under AAF. The degree of coarticulation was measured by means of Locus Equations (LE). The coarticulation degree observed in fluent PWS speech is lower than that of the PWNS, and, more importantly, in the AAF condition, PWS coarticulation appears even weaker than in the NAF condition. These results allow the lower degree of coarticulation found in the fluent speech of PWS under the NAF condition to be interpreted as a compensation for the disorder, given that PWS coarticulation weakens further in fluency-enhancing conditions, moving further away from the degree of coarticulation observed in PWNS. Since a lower degree of coarticulation is associated with a greater separation between the places of articulation of the consonant and the vowel, these results are compatible with the hypothesis that larger articulatory movements could be responsible for the stabilization of the PWS speech motor system by increasing the kinesthetic feedback from the effector system. This interpretation shares with a number of relatively recent proposals the idea that stuttering derives from an impaired feedforward (open-loop) control system, which makes PWS rely more heavily on a feedback-based (closed-loop) motor control strategy.
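A locus equation is a linear regression of the second formant (F2) at CV onset on F2 at the vowel midpoint, pooled over vowel contexts for a given consonant; the slope indexes the degree of coarticulation (closer to 1 = stronger, closer to 0 = weaker). A stdlib sketch with hypothetical formant values:

```python
def locus_equation(f2_vowel, f2_onset):
    """Least-squares fit of F2_onset = slope * F2_vowel + intercept.
    The slope is the locus-equation index of CV coarticulation degree."""
    n = len(f2_vowel)
    mx = sum(f2_vowel) / n
    my = sum(f2_onset) / n
    sxx = sum((x - mx) ** 2 for x in f2_vowel)
    sxy = sum((x - mx) * (y - my) for x, y in zip(f2_vowel, f2_onset))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical F2 values (Hz) at vowel midpoint and at CV onset for one
# consonant across /i/, /a/, /u/ contexts -- invented for illustration:
f2_vowel = [2200, 1700, 1200, 2300, 1750, 1150]
f2_onset = [1900, 1550, 1200, 1980, 1560, 1170]
slope, intercept = locus_equation(f2_vowel, f2_onset)
```

On this logic, a flatter slope for PWS than PWNS, and a flatter slope still under AAF, is what the study reads as weaker coarticulation in fluency-enhancing conditions.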
Affiliation(s)
- Marine Verdurand
- Speech Therapy Study, Cabestany, France; Université Grenoble Alpes, CNRS, Grenoble INP, LIG, Grenoble, France
- Solange Rossato
- Université Grenoble Alpes, CNRS, Grenoble INP, LIG, Grenoble, France
- Claudio Zmarich
- Institute of Cognitive Sciences and Technologies, National Research Council, Padua, Italy
36
Pinheiro AP, Schwartze M, Gutiérrez-Domínguez F, Kotz SA. Real and imagined sensory feedback have comparable effects on action anticipation. Cortex 2020; 130:290-301. [PMID: 32698087] [DOI: 10.1016/j.cortex.2020.04.030]
Abstract
The forward model monitors the success of sensory feedback to an action and links it to an efference copy originating in the motor system. The Readiness Potential (RP) of the electroencephalogram has been denoted as a neural signature of the efference copy. An open question is whether imagined sensory feedback works similarly to real sensory feedback. We investigated the RP to audible and imagined sounds in a button-press paradigm and assessed the role of sound complexity (vocal vs. non-vocal sound). Sensory feedback (both audible and imagined) in response to a voluntary action modulated the RP amplitude time-locked to the button press. The RP amplitude increase was larger for actions with expected sensory feedback (audible and imagined) than those without sensory feedback, and associated with N1 suppression for audible sounds. Further, the early RP phase was increased when actions elicited an imagined vocal (self-voice) compared to non-vocal sound. Our results support the notion that sensory feedback is anticipated before voluntary actions. This is the case for both audible and imagined sensory feedback and confirms a role of overt and covert feedback in the forward model.
Affiliation(s)
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Michael Schwartze
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
37
Mahmud MS, Ahmed F, Al-Fahad R, Moinuddin KA, Yeasin M, Alain C, Bidelman GM. Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning. Front Neurosci 2020; 14:748. [PMID: 32765215] [PMCID: PMC7378401] [DOI: 10.3389/fnins.2020.00748]
Abstract
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials without replacement, to form feature vectors. We adopted a multivariate feature selection method called stability selection and control to choose features that are consistent over a range of model parameters. We used a parameter-optimized support vector machine (SVM) as the classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed a lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses using left-hemisphere (LH) and right-hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH. Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech), whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in the RH, when processing noise-degraded speech information.
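The AUC reported alongside accuracy can be computed directly from classifier scores via the Mann-Whitney rank statistic: the probability that a randomly drawn positive example is scored above a randomly drawn negative one. A stdlib sketch with hypothetical scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    Each positive/negative pair contributes 1 if the positive example
    is scored higher, 0.5 on ties, and 0 otherwise."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for the hearing-loss (positive) vs.
# normal-hearing (negative) groups, for illustration only:
pos = [0.9, 0.8, 0.75, 0.6, 0.4]
neg = [0.7, 0.5, 0.35, 0.3, 0.2]
group_auc = auc(pos, neg)
```

Unlike raw accuracy, this measure is threshold-free, which is why the paper reports it alongside accuracy and F1 for each condition.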
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Faruk Ahmed
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Rakib Al-Fahad
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Kazi Ashraf Moinuddin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States
38
Ylinen S, Nora A, Service E. Better Phonological Short-Term Memory Is Linked to Improved Cortical Memory Representations for Word Forms and Better Word Learning. Front Hum Neurosci 2020; 14:209. [PMID: 32581751] [PMCID: PMC7291706] [DOI: 10.3389/fnhum.2020.00209]
Abstract
Language learning relies on both short-term and long-term memory. Phonological short-term memory (pSTM) is thought to play an important role in the learning of novel word forms. However, language learners may differ in their ability to maintain word representations in pSTM during interfering auditory input. We used magnetoencephalography (MEG) to investigate how pSTM capacity in better and poorer pSTM groups is linked to language learning and the maintenance of pseudowords in pSTM. In particular, MEG was recorded while participants maintained pseudowords in pSTM by covert speech rehearsal, and while these brain representations were probed by presenting auditory pseudowords with first or third syllables matching or mismatching the rehearsed item. A control condition included identical stimuli but no rehearsal. Differences in response strength between matching and mismatching syllables were interpreted as the phonological mapping negativity (PMN). While PMN for the first syllable was found in both groups, it was observed for the third syllable only in the group with better pSTM. This suggests that individuals with better pSTM maintained representations of trisyllabic pseudowords more accurately during interference than individuals with poorer pSTM. Importantly, the group with better pSTM learned words faster in a paired-associate word learning task, linking the PMN findings to language learning.
Affiliation(s)
- Sari Ylinen
- CICERO Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; BioMag Laboratory, Helsinki University Central Hospital, Helsinki, Finland
- Anni Nora
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Elisabet Service
- ARiEAL Research Centre, Department of Linguistics and Languages, McMaster University, Hamilton, ON, Canada
39
Li S, Zhu H, Tian X. Corollary Discharge Versus Efference Copy: Distinct Neural Signals in Speech Preparation Differentially Modulate Auditory Responses. Cereb Cortex 2020; 30:5806-5820. PMID: 32542347; DOI: 10.1093/cercor/bhaa154.
Abstract
Actions influence sensory processing in a complex way to shape behavior. For example, during actions, a copy of motor signals, termed "corollary discharge" (CD) or "efference copy" (EC), can be transmitted to sensory regions and modulate perception. However, the sole inhibitory function of the motor copies is challenged by mixed empirical observations as well as multifaceted computational demands for behaviors. We hypothesized that the content in the motor signals available at distinct stages of actions determined the nature of signals (CD vs. EC) and constrained their modulatory functions on perceptual processing. We tested this hypothesis using speech in which we could precisely control and quantify the course of action. In three electroencephalography (EEG) experiments using a novel delayed articulation paradigm, we found that preparation without linguistic contents suppressed auditory responses to all speech sounds, whereas preparing to speak a syllable selectively enhanced the auditory responses to the prepared syllable. A computational model demonstrated that a bifurcation of motor signals could be a potential algorithm and neural implementation to achieve the distinct functions in the motor-to-sensory transformation. These results suggest that distinct motor signals are generated in the motor-to-sensory transformation and integrated with sensory input to modulate perception.
Affiliation(s)
- Siqi Li
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Hao Zhu
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China
- Xing Tian
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Division of Arts and Sciences, New York University Shanghai, Shanghai 200122, China
40
Vissers CTWM, Tomas E, Law J. The Emergence of Inner Speech and Its Measurement in Atypically Developing Children. Front Psychol 2020; 11:279. PMID: 32256423; PMCID: PMC7090223; DOI: 10.3389/fpsyg.2020.00279.
Abstract
Inner speech (IS), or the act of silently talking to yourself, occurs in humans regardless of their cultural and linguistic background, suggesting its key role in human cognition. The absence of overt articulation leads to methodological challenges to studying IS and its effects on cognitive processing. Investigating IS in children is particularly problematic due to cognitive demands of the behavioral tasks and age restrictions for collecting neurophysiological data [e.g., functional magnetic resonance imaging (fMRI) or electromyography (EMG)]; thus, the developmental aspects of IS remain poorly understood despite the long history of adult research. Studying developmental aspects of IS could shed light on the variability in types and amount of IS in adults. In addition, problems in mastering IS might account for neuropsychological deficits observed in children with neurodevelopmental conditions. For example, deviance in IS development might influence these children's general cognitive processing, including social cognition, executive functioning, and related social-emotional functioning. The aim of the present paper is to look at IS from a developmental perspective, exploring its theory and identifying experimental paradigms appropriate for preschool and early school-aged children in Anglophone and Russian literature. We choose these two languages because the original work carried out by Vygotsky on IS was published in Russian, and Russian scientists have continued to publish on this topic since his death. Since the 1960s, much of the experimental work in this area has been published in Anglophone journals. We discuss different measurements of IS phenomena, their informativeness about subtypes of IS, and their potential for studying atypical language development. Implications for assessing and stimulating IS in clinical populations are discussed.
Affiliation(s)
- Constance Th W M Vissers
- Royal Dutch Kentalis, Sint-Michielsgestel, Netherlands; Behavioural Science Institute, Radboud University, Nijmegen, Netherlands
- Ekaterina Tomas
- National Research University - Higher School of Economics, Moscow, Russia
- James Law
- School of Education, Communication and Language Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
41
Fama ME, Turkeltaub PE. Inner Speech in Aphasia: Current Evidence, Clinical Implications, and Future Directions. Am J Speech Lang Pathol 2020; 29:560-573. PMID: 31518502; PMCID: PMC7233112; DOI: 10.1044/2019_ajslp-cac48-18-0212.
Abstract
Purpose: Typical language users can engage in a lively internal monologue for introspection and task performance, but what is the nature of inner speech among individuals with aphasia? Studying the phenomenon of inner speech in this population has the potential to further our understanding of inner speech more generally, help clarify the subjective experience of those with aphasia, and inform clinical practice. In this scoping review, we describe and synthesize the existing literature on inner speech in aphasia. Method: Studies examining inner speech in aphasia were located through electronic databases and citation searches. Across the various studies, methods include both subjective approaches (i.e., asking individuals with aphasia about the integrity of their inner speech) and objective approaches (i.e., administering objective language tests as proxy measures for inner speech ability). The findings of relevant studies are summarized. Results: Although definitions of inner speech vary across research groups, studies using both subjective and objective methods have established findings showing that inner speech can be preserved relative to spoken language in individuals with aphasia, particularly among those with relatively intact word retrieval and difficulty primarily at the level of speech output processing. Approaches that combine self-report with objective measures have demonstrated that individuals with aphasia are, on the whole, reliably able to report the integrity of their inner speech. Conclusions: The examination of inner speech in individuals with aphasia has potential implications for clinical practice, in that differences in the preservation of inner speech across individuals may help guide clinical decision making around aphasia treatment. Although there are many questions that remain open to further investigation, studying inner speech in this specific population has also contributed to a broader understanding of the mechanisms of inner speech more generally.
Affiliation(s)
- Mackenzie E. Fama
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD
- Center for Brain Plasticity and Recovery, Georgetown University, Washington, DC
- Peter E. Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University, Washington, DC
- Research Division, MedStar National Rehabilitation Network, Washington, DC
42
Jenson D, Bowers AL, Hudock D, Saltuklaroglu T. The Application of EEG Mu Rhythm Measures to Neurophysiological Research in Stuttering. Front Hum Neurosci 2020; 13:458. PMID: 31998103; PMCID: PMC6965028; DOI: 10.3389/fnhum.2019.00458.
Abstract
Deficits in basal ganglia-based inhibitory and timing circuits along with sensorimotor internal modeling mechanisms are thought to underlie stuttering. However, much remains to be learned regarding the precise manner in which these deficits contribute to disrupting both speech and cognitive functions in those who stutter. Herein, we examine the suitability of electroencephalographic (EEG) mu rhythms for addressing these deficits. We review some previous findings of mu rhythm activity differentiating stuttering from non-stuttering individuals and present some new preliminary findings capturing stuttering-related deficits in working memory. Mu rhythms are characterized by spectral peaks in alpha (8-13 Hz) and beta (14-25 Hz) frequency bands (mu-alpha and mu-beta). They emanate from premotor/motor regions and are influenced by basal ganglia and sensorimotor function. More specifically, alpha peaks (mu-alpha) are sensitive to basal ganglia-based inhibitory signals and sensory-to-motor feedback. Beta peaks (mu-beta) are sensitive to changes in timing and capture motor-to-sensory (i.e., forward model) projections. Observing simultaneous changes in mu-alpha and mu-beta across the time-course of specific events provides a rich window for observing neurophysiological deficits associated with stuttering in both speech and cognitive tasks and can provide a better understanding of the functional relationship among these stuttering symptoms. We review how independent component analysis (ICA) can extract mu rhythms from raw EEG signals in speech production tasks, such that changes in alpha and beta power are mapped to myogenic activity from articulators. We review findings from speech production and auditory discrimination tasks demonstrating that mu-alpha and mu-beta are highly sensitive to capturing sensorimotor and basal ganglia deficits associated with stuttering with high temporal precision. Novel findings from a non-word repetition (working memory) task are also included. They show reduced mu-alpha suppression in a stuttering group compared to a typically fluent group. Finally, we review current limitations and directions for future research.
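At their core, the mu-band measures described in this abstract reduce to band-limited power estimates. As a purely illustrative sketch (not the authors' ICA pipeline), mu-alpha (8-13 Hz) and mu-beta (14-25 Hz) power can be estimated from a single channel with Welch's method; the synthetic signal, sampling rate, and component amplitudes below are assumptions for demonstration:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Mean power spectral density of x within [band[0], band[1]] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic single-channel "mu rhythm": a strong 10 Hz (alpha-range)
# component plus a weaker 20 Hz (beta-range) component.
fs = 250                                  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
sig = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

mu_alpha = band_power(sig, fs, (8, 13))   # alpha peak of the mu rhythm
mu_beta = band_power(sig, fs, (14, 25))   # beta peak of the mu rhythm
```

A suppression effect like the one reported would then appear as a drop in such band power during a task epoch relative to a baseline epoch.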
Affiliation(s)
- David Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Washington State University, Spokane, WA, United States
- Andrew L. Bowers
- Epley Center for Health Professions, Communication Sciences and Disorders, University of Arkansas, Fayetteville, AR, United States
- Daniel Hudock
- Department of Communication Sciences and Disorders, Idaho State University, Pocatello, ID, United States
- Tim Saltuklaroglu
- College of Health Professions, Department of Audiology and Speech-Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States
43
Pinheiro AP, Schwartze M, Gutierrez F, Kotz SA. When temporal prediction errs: ERP responses to delayed action-feedback onset. Neuropsychologia 2019; 134:107200. DOI: 10.1016/j.neuropsychologia.2019.107200.
44
Gao C, Weber CE, Shinkareva SV. The brain basis of audiovisual affective processing: Evidence from a coordinate-based activation likelihood estimation meta-analysis. Cortex 2019; 120:66-77. DOI: 10.1016/j.cortex.2019.05.016.
45
Lu L, Wang Q, Sheng J, Liu Z, Qin L, Li L, Gao JH. Neural tracking of speech mental imagery during rhythmic inner counting. eLife 2019; 8:e48971. PMID: 31635693; PMCID: PMC6805153; DOI: 10.7554/eLife.48971.
Abstract
The subjective inner experience of mental imagery is among the most ubiquitous human experiences in daily life. Elucidating the neural implementation underpinning the dynamic construction of mental imagery is critical to understanding high-order cognitive function in the human brain. Here, we applied a frequency-tagging method to isolate the top-down process of speech mental imagery from bottom-up sensory-driven activities and concurrently tracked the neural processing time scales corresponding to the two processes in human subjects. Notably, by estimating the source of the magnetoencephalography (MEG) signals, we identified isolated brain networks activated at the imagery-rate frequency. In contrast, more extensive brain regions in the auditory temporal cortex were activated at the stimulus-rate frequency. Furthermore, intracranial stereotactic electroencephalogram (sEEG) evidence confirmed the participation of the inferior frontal gyrus in generating speech mental imagery. Our results indicate that a disassociated neural network underlies the dynamic construction of speech mental imagery independent of auditory perception.
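The frequency-tagging method used here rests on a simple spectral fact: a response locked to a known presentation or imagery rate produces a peak at exactly that frequency. A minimal sketch, assuming a hypothetical 1.5 Hz tag and white-noise background (neither taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 200                     # assumed sampling rate (Hz)
duration = 20                # seconds of simulated recording
f_tag = 1.5                  # hypothetical imagery-rate tag (Hz)
t = np.arange(0, duration, 1 / fs)

# Simulated response: a component locked to the tagged rate, buried in noise.
x = np.sin(2 * np.pi * f_tag * t) + 0.3 * rng.standard_normal(t.size)

# The tag shows up as the dominant peak of the amplitude spectrum.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
```

In the actual MEG/sEEG analyses, the same peak-at-tag test is applied per source or electrode rather than to a single simulated trace.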
Affiliation(s)
- Lingxi Lu
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Qian Wang
- Department of Clinical Neuropsychology, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Jingwei Sheng
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Zhaowei Liu
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Lang Qin
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Department of Linguistics, The University of Hong Kong, Hong Kong, China
- Liang Li
- Speech and Hearing Research Center, School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Jia-Hong Gao
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing, China
46
Grandchamp R, Rapin L, Perrone-Bertolotti M, Pichat C, Haldin C, Cousin E, Lachaux JP, Dohen M, Perrier P, Garnier M, Baciu M, Lœvenbruck H. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Front Psychol 2019; 10:2019. PMID: 31620039; PMCID: PMC6759632; DOI: 10.3389/fpsyg.2019.02019.
Abstract
Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed inner speech forms have been described that are supposed to be deprived of acoustic, phonological and even syntactic qualities. Expanded forms, on the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as that of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or it can arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality, to examine the validity of the neuroanatomical correlates posited in ConDialInt. Condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation. Switching from first-person to third-person perspective resulted in activations in precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.
Affiliation(s)
- Romain Grandchamp
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
| | - Lucile Rapin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Cédric Pichat
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Célise Haldin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Emilie Cousin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Jean-Philippe Lachaux
- INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neurosciences Research Center, Bron, France
- Marion Dohen
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Maëva Garnier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Monica Baciu
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Hélène Lœvenbruck
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
47
Neural Correlates of Music Listening and Recall in the Human Brain. J Neurosci 2019; 39:8112-8123. PMID: 31501297; DOI: 10.1523/jneurosci.1468-18.2019.
Abstract
Previous neuroimaging studies have identified various brain regions that are activated by music listening or recall. However, little is known about how these brain regions represent the time course and temporal features of music during listening and recall. Here we analyzed neural activity in different brain regions associated with music listening and recall using electrocorticography recordings obtained from 10 epilepsy patients of both genders implanted with subdural electrodes. Electrocorticography signals were recorded while subjects were listening to familiar instrumental music or recalling the same music pieces by imagery. During the onset phase (0-500 ms), music listening initiated cortical activity in high-gamma band in the temporal lobe and supramarginal gyrus, followed by the precentral gyrus and the inferior frontal gyrus. In contrast, during music recall, the high-gamma band activity first appeared in the inferior frontal gyrus and precentral gyrus, and then spread to the temporal lobe, showing a reversed temporal sequential order. During the sustained phase (after 500 ms), delta band and high-gamma band responses in the supramarginal gyrus, temporal and frontal lobes dynamically tracked the intensity envelope of the music during listening or recall with distinct temporal delays. During music listening, the neural tracking by the frontal lobe lagged behind that of the temporal lobe; whereas during music recall, the neural tracking by the frontal lobe preceded that of the temporal lobe. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall and provide important insights into music processing by the human brain.
SIGNIFICANCE STATEMENT: Understanding how the brain analyzes, stores, and retrieves music remains one of the most challenging problems in neuroscience. By analyzing direct neural recordings obtained from the human brain, we observed dispersed and overlapping brain regions associated with music listening and recall. Music listening initiated cortical activity in high-gamma band starting from the temporal lobe and ending at the inferior frontal gyrus. A reversed temporal flow was observed in high-gamma response during music recall. Neural responses of frontal and temporal lobes dynamically tracked the intensity envelope of music that was presented or imagined during listening or recall. These findings demonstrate bottom-up and top-down processes in the cerebral cortex during music listening and recall.
48
Distinct Mechanisms of Imagery Differentially Influence Speech Perception. eNeuro 2019; 6:ENEURO.0261-19.2019. PMID: 31481396; PMCID: PMC6753248; DOI: 10.1523/eneuro.0261-19.2019.
Abstract
Neural representation can be induced without external stimulation, such as in mental imagery. Our previous study found that imagined speaking and imagined hearing modulated perceptual neural responses in opposite directions, suggesting motor-to-sensory transformation and memory retrieval as two separate routes that induce auditory representation (Tian and Poeppel, 2013). We hypothesized that the precision of representation induced from different types of speech imagery led to different modulation effects. Specifically, we predicted that the one-to-one mapping between motor and sensory domains established during speech production would evoke a more precise auditory representation in imagined speaking than retrieving the same sounds from memory in imagined hearing. To test this hypothesis, we built the function of representational precision as the modulation of connection strength in a neural network model. The model fitted the magnetoencephalography (MEG) imagery repetition effects, and the best-fitting parameters showed sharper tuning after imagined speaking than imagined hearing, consistent with the representational precision hypothesis. Moreover, this model predicted that different types of speech imagery would affect perception differently. In an imagery-adaptation experiment, the categorization of /ba/-/da/ continuum from male and female human participants showed more positive shifts towards the preceding imagined syllable after imagined speaking than imagined hearing. These consistent simulation and behavioral results support our hypothesis that distinct mechanisms of speech imagery construct auditory representation with varying degrees of precision and differentially influence auditory perception. This study provides a mechanistic connection between neural-level activity and psychophysics that reveals the neural computation of mental imagery.
49
Harvey JS, Smithson HE, Siviour CR, Gasper GEM, Sønnesyn SO, McLeish TCB, Howard DM. A thirteenth-century theory of speech. J Acoust Soc Am 2019; 146:937. PMID: 31472541; PMCID: PMC7051007; DOI: 10.1121/1.5119126.
Abstract
This historical paper examines a pioneering theory of speech production and perception from the thirteenth century. Robert Grosseteste (c.1175-1253) was a celebrated medieval thinker, who developed an impressive corpus of treatises on the natural world. This paper looks at his treatise on sound and phonetics, De generatione sonorum [On the Generation of Sounds]. Through interdisciplinary analysis of the text, this paper finds a theory of vowel production and perception that is notably mathematical, with a formulation of vowel space rooted in combinatorics. Specifically, Grosseteste constructs a categorical space comprising three fundamental types of movements pertaining to the vocal apparatus: linear, circular, and dilational-constrictional; these correspond to similarity transformations of translation, rotation, and uniform scaling, respectively. That Grosseteste's space is categorical, and low-dimensional, is remarkable vis-à-vis current theories of phoneme perception. As well as his description of vowel space, Grosseteste also sets out a hypothetical framework of multisensory integration, uniting the production, perception, and representation in writing of vowels with a set of geometric figures associated with “mental images.” This has clear resonances with contemporary studies of motor facilitation during speech perception and audiovisual speech. This paper additionally provides an experimental foray, illustrating the coherence of mathematical and scientific thinking underpinning this early theory.
Affiliation(s)
- J S Harvey
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, OX2 6GG, United Kingdom
- H E Smithson
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, OX2 6GG, United Kingdom
- C R Siviour
- Department of Engineering Science, University of Oxford, Oxford e-Research Centre, 7 Keble Road, OX1 3QG, Oxford, United Kingdom
- G E M Gasper
- Department of History, Durham University, 43 North Bailey, Durham, DH1 3EX, United Kingdom
- S O Sønnesyn
- Department of History, Durham University, 43 North Bailey, Durham, DH1 3EX, United Kingdom
- T C B McLeish
- Department of Physics, University of York, Heslington, York, YO10 5DD, United Kingdom
- D M Howard
- Department of Electronic Engineering, Royal Holloway, University of London, Egham Hill, Egham, TW20 0EX, United Kingdom
50
Maegherman G, Nuttall HE, Devlin JT, Adank P. Motor Imagery of Speech: The Involvement of Primary Motor Cortex in Manual and Articulatory Motor Imagery. Front Hum Neurosci 2019; 13:195. PMID: 31244631; PMCID: PMC6579859; DOI: 10.3389/fnhum.2019.00195.
Abstract
Motor imagery refers to the phenomenon of imagining performing an action without action execution. Motor imagery and motor execution are assumed to share a similar underlying neural system that involves primary motor cortex (M1). Previous studies have focused on motor imagery of manual actions, but articulatory motor imagery has not been investigated. In this study, transcranial magnetic stimulation (TMS) was used to elicit motor-evoked potentials (MEPs) from the articulatory muscles [orbicularis oris (OO)] as well as from hand muscles [first dorsal interosseous (FDI)]. Twenty participants were asked to execute or imagine performing a simple squeezing task involving a pair of tweezers, which was comparable across both effectors. MEPs were elicited at six time points (50, 150, 250, 350, 450, 550 ms post-stimulus) to track the time course of M1 involvement in both lip and hand tasks. The results showed increased MEP amplitudes for action execution compared to rest for both effectors at time points 350, 450 and 550 ms, but we found no evidence of increased cortical activation for motor imagery. The results indicate that motor imagery does not involve M1 for simple tasks for manual or articulatory muscles. The results have implications for models of mental imagery of simple articulatory gestures, in that no evidence is found for somatotopic activation of lip muscles in sub-phonemic contexts during motor imagery of such tasks, suggesting that motor simulation of relatively simple actions does not involve M1.
Affiliation(s)
- Gwijde Maegherman
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Helen E Nuttall
- Department of Psychology, Lancaster University, Bailrigg, United Kingdom
- Joseph T Devlin
- Department of Experimental Psychology, University College London, London, United Kingdom
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
| |