1
Wandelt SK, Bjånes DA, Pejsa K, Lee B, Liu C, Andersen RA. Representation of internal speech by single neurons in human supramarginal gyrus. Nat Hum Behav 2024; 8:1136-1149. PMID: 38740984; PMCID: PMC11199147; DOI: 10.1038/s41562-024-01867-y.
Abstract
Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak due to disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, implanted with microelectrode arrays in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population level in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy across multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.
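As a back-of-the-envelope illustration of the decoding setup described in this abstract: with eight classes (six words plus two pseudowords), chance is 1/8 = 12.5%. The sketch below trains a toy nearest-centroid decoder on synthetic firing-rate vectors. Everything here is an assumption for illustration (the class means, noise model, trial counts, and the decoder itself); it is not the authors' actual pipeline.

```python
import random

random.seed(0)

N_CLASSES = 8            # six words + two pseudowords
N_UNITS = 30             # synthetic "neurons"
TRIALS_PER_CLASS = 20

# Synthetic tuning: each class gets its own mean firing-rate vector.
means = [[random.gauss(5.0, 2.0) for _ in range(N_UNITS)]
         for _ in range(N_CLASSES)]

def trial(c):
    """One noisy firing-rate vector for class c."""
    return [m + random.gauss(0.0, 1.0) for m in means[c]]

train = [(trial(c), c) for c in range(N_CLASSES) for _ in range(TRIALS_PER_CLASS)]
test = [(trial(c), c) for c in range(N_CLASSES) for _ in range(TRIALS_PER_CLASS)]

# Nearest-centroid decoder: one centroid per class from the training trials.
centroids = []
for c in range(N_CLASSES):
    vecs = [x for x, y in train if y == c]
    centroids.append([sum(col) / len(vecs) for col in zip(*vecs)])

def decode(x):
    dists = [sum((a - b) ** 2 for a, b in zip(x, cen)) for cen in centroids]
    return dists.index(min(dists))

accuracy = sum(decode(x) == y for x, y in test) / len(test)
chance = 1 / N_CLASSES
print(f"chance = {chance:.1%}, decoded = {accuracy:.1%}")
```

The point of the sketch is only the bookkeeping: any decoding accuracy is judged against the 1/n-classes chance level, here 12.5%.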
Affiliation(s)
- Sarah K Wandelt
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA.
- David A Bjånes
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- Kelsie Pejsa
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Brian Lee
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Charles Liu
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Richard A Andersen
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
2
Zhang W, Jiang M, Teo KAC, Bhuvanakantham R, Fong L, Sim WKJ, Guo Z, Foo CHV, Chua RHJ, Padmanabhan P, Leong V, Lu J, Gulyás B, Guan C. Revealing the spatiotemporal brain dynamics of covert speech compared with overt speech: A simultaneous EEG-fMRI study. Neuroimage 2024; 293:120629. PMID: 38697588; DOI: 10.1016/j.neuroimage.2024.120629.
Abstract
Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders. Reconstructing CS content by brain-computer interface (BCI) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI. It involved 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, which was mediated by reduced FC towards speech-related regions. These findings revealed the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant for future treatment of self-regulation deficits, speech disorders, and development of BCI speech applications.
Affiliation(s)
- Wei Zhang
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Muyun Jiang
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Kok Ann Colin Teo
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Division of Neurosurgery, National University Health System, Singapore
- Raghavan Bhuvanakantham
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- LaiGuan Fong
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore
- Wei Khang Jeremy Sim
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
- Zhiwei Guo
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Parasuraman Padmanabhan
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Victoria Leong
- Division of Psychology, Nanyang Technological University, Singapore; Department of Pediatrics, University of Cambridge, United Kingdom
- Jia Lu
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; DSO National Laboratories, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Balázs Gulyás
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
3
Nedergaard JSK, Lupyan G. Not Everybody Has an Inner Voice: Behavioral Consequences of Anendophasia. Psychol Sci 2024:9567976241243004. PMID: 38728320; DOI: 10.1177/09567976241243004.
Abstract
It is commonly assumed that inner speech (the experience of thought as occurring in a natural language) is a human universal. Recent evidence, however, suggests that the experience of inner speech in adults varies from near constant to nonexistent. We propose a name for a lack of the experience of inner speech, anendophasia, and report four studies examining some of its behavioral consequences. We found that adults who reported low levels of inner speech (N = 46) had lower performance on a verbal working memory task and more difficulty performing rhyme judgments compared with adults who reported high levels of inner speech (N = 47). Task-switching performance (previously linked to endogenous verbal cueing) and categorical effects on perceptual judgments were unrelated to differences in inner speech.
Affiliation(s)
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison
4
Abstract
Inner speech is frequently assessed using self-report scales, but their validity is understudied. Uttl et al. (2011) found moderate correlations, perhaps because measures tap into different dimensions of inner speech. We expand on these preliminary results by investigating the reliability and concurrent validity of seven inner speech questionnaires in a larger sample. Our results indicate that inner speech questionnaires are reliable but hold moderate concurrent validity, in line with Uttl and colleagues' (2011) results. Specifically, our results suggest that some inner speech scales may capture a general conception of inner speech, while others may assess evaluative components of negative self-talk, self-regulation, and self-reflective processes, but not emotional valence. These results hold implications for further validity investigation of inner speech measures.
5
Alexander JM, Hedrick T, Stark BC. Inner speech in the daily lives of people with aphasia. Front Psychol 2024; 15:1335425. PMID: 38577124; PMCID: PMC10991845; DOI: 10.3389/fpsyg.2024.1335425.
Abstract
Introduction This exploratory, preliminary, feasibility study evaluated the extent to which adults with chronic aphasia (N = 23) report experiencing inner speech in their daily lives by leveraging experience sampling and survey methodology. Methods The presence of inner speech was assessed at 30 time-points and themes of inner speech at three time-points, over the course of three weeks. The relationship of inner speech to aphasia severity, demographic information (age, sex, years post-stroke), and insight into language impairment was evaluated. Results There was low attrition (<8%) and high compliance (>94%) for the study procedures, and inner speech was experienced in most sampled instances (>78%). The most common themes of inner speech experience across the weeks were 'when remembering', 'to plan', and 'to motivate oneself'. There was no significant relationship identified between inner speech and aphasia severity, insight into language impairment, or demographic information. In conclusion, adults with aphasia tend to report experiencing inner speech often, with some shared themes (e.g., remembering, planning), and use inner speech to explore themes that are uncommon in young adults in other studies (e.g., to talk to themselves about health). Discussion High compliance and low attrition suggest design feasibility, and results emphasize the importance of collecting data in age-similar, non-brain-damaged peers as well as in adults with other neurogenic communication disorders to fully understand the experience and use of inner speech in daily life. Clinical implications and future directions are discussed.
Affiliation(s)
- Julianne M. Alexander
- Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States
- Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Tessa Hedrick
- Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States
- Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Brielle C. Stark
- Department of Speech, Language and Hearing Science, Indiana University Bloomington, Bloomington, IN, United States
- Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
6
Huang TJ, Chang PH, Chiou HS, Hsu HJ. Nonlinguistic Cognitive Functions of Mandarin Speakers With Poststroke Aphasia. Am J Speech Lang Pathol 2024; 33:756-773. PMID: 38157289; DOI: 10.1044/2023_ajslp-23-00122.
Abstract
PURPOSE The purpose of this study was to examine the cognitive functions of Mandarin speakers with poststroke aphasia and to investigate the relationship between nonlinguistic cognitive deficits and the severity of aphasia. METHOD Twenty-three adults with aphasia resulting from left-hemispheric stroke and 23 adults matched for age and educational level completed a series of six nonlinguistic cognitive tests measuring nonverbal intelligence, short-term memory, visual selective attention, visual alternating attention, auditory selective attention, and auditory alternating attention. A standardized aphasia assessment (Concise Chinese Aphasia Test [CCAT]) was also conducted to evaluate the severity of aphasia. Data analyses examined cognitive functions by comparing task performance of the two groups and examining the relationship between scores on the cognitive tasks and aphasia severity based on a hierarchical regression analysis. RESULTS The aphasia group scored significantly lower than the control group on all nonlinguistic cognitive tasks, with large effect sizes (d = 0.95 to 1.54). Significant associations between different nonlinguistic cognitive tasks and CCAT subtests were observed. Results from the hierarchical regression analysis showed that auditory alternating attention was the only factor that significantly predicted aphasia severity based on CCAT overall scores after age and education level were taken into account. CONCLUSIONS The findings align with prior research observing deficits in nonlinguistic cognition in individuals with aphasia. Implications for clinical practice and future research are discussed.
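The effect sizes reported in this abstract are Cohen's d values with pooled standard deviation. A minimal sketch of how d is computed for two independent groups; the scores below are hypothetical and merely shaped to yield a large effect of roughly the reported magnitude, not the study's data.

```python
import math
from statistics import mean, stdev

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                       / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

# Hypothetical task scores for illustration only.
controls = [52, 55, 49, 58, 54, 51, 56, 53]
aphasia = [50, 53, 47, 51, 54, 48, 52, 49]

d = cohens_d(controls, aphasia)
print(f"d = {d:.2f}")  # → d = 1.12, a "large" effect by conventional cutoffs
```

By the usual conventions, d around 0.8 or above counts as a large effect, which is the sense in which the abstract describes d = 0.95 to 1.54.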
Affiliation(s)
- Tzu-Jung Huang
- Ph.D. Program in Education Sciences, National Tsing Hua University, Hsinchu, Taiwan
- Hsinhuei Sheen Chiou
- Department of Speech, Hearing and Rehabilitation Services, Minnesota State University, Mankato
- Hsin-Jen Hsu
- Department of Special Education, National Tsing Hua University, Hsinchu, Taiwan
- Research Center for Education and Mind Sciences, National Tsing Hua University, Hsinchu, Taiwan
7
Chung LKH, Jack BN, Griffiths O, Pearson D, Luque D, Harris AWF, Spencer KM, Le Pelley ME, So SHW, Whitford TJ. Neurophysiological evidence of motor preparation in inner speech and the effect of content predictability. Cereb Cortex 2023; 33:11556-11569. PMID: 37943760; PMCID: PMC10751289; DOI: 10.1093/cercor/bhad389.
Abstract
Self-generated overt actions are preceded by a slow negativity as measured by electroencephalogram, which has been associated with motor preparation. Recent studies have shown that this neural activity is modulated by the predictability of action outcomes. It is unclear whether inner speech is also preceded by a motor-related negativity and influenced by the same factor. In three experiments, we compared the contingent negative variation elicited in a cue paradigm in an active vs. passive condition. In Experiment 1, participants produced an inner phoneme while an audible phoneme, whose identity was unpredictable, was concurrently presented. We found that while passive listening elicited a late contingent negative variation, inner speech production generated a more negative late contingent negative variation. In Experiment 2, the same pattern of results was found when participants were instead asked to overtly vocalize the phoneme. In Experiment 3, the identity of the audible phoneme was made predictable by establishing probabilistic expectations. We observed a smaller late contingent negative variation in the inner speech condition when the identity of the audible phoneme was predictable, but not in the passive condition. These findings suggest that inner speech is associated with motor preparatory activity that may also represent the predicted action-effects of covert actions.
Affiliation(s)
- Lawrence K-h Chung
- School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China
- Bradley N Jack
- Research School of Psychology, Australian National University, Building 39, Science Road, Canberra ACT 2601, Australia
- Oren Griffiths
- School of Psychological Sciences, University of Newcastle, Behavioural Sciences Building, University Drive, Callaghan NSW 2308, Australia
- Daniel Pearson
- School of Psychology, University of Sydney, Griffith Taylor Building, Manning Road, Camperdown NSW 2006, Australia
- David Luque
- Department of Basic Psychology and Speech Therapy, University of Malaga, Faculty of Psychology, Dr Ortiz Ramos Street, 29010 Malaga, Spain
- Anthony W F Harris
- Westmead Clinical School, University of Sydney, 176 Hawkesbury Road, Westmead NSW 2145, Australia
- Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia
- Kevin M Spencer
- Research Service, Veterans Affairs Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, 150 South Huntington Avenue, Boston MA 02130, United States
- Mike E Le Pelley
- School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Suzanne H-w So
- Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China
- Thomas J Whitford
- School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia
8
Körner A, Strack F. Articulation posture influences pitch during singing imagery. Psychon Bull Rev 2023; 30:2187-2195. PMID: 37221280; PMCID: PMC10728233; DOI: 10.3758/s13423-023-02306-1.
Abstract
Facial muscle activity contributes to singing and to articulation: in articulation, mouth shape can alter vowel identity; and in singing, facial movement correlates with pitch changes. Here, we examine whether mouth posture causally influences pitch during singing imagery. Based on perception-action theories and embodied cognition theories, we predict that mouth posture influences pitch judgments even when no overt utterances are produced. In two experiments (total N = 160), mouth posture was manipulated to resemble the articulation of either /i/ (as in English meet; retracted lips) or /o/ (as in French rose; protruded lips). Holding this mouth posture, participants were instructed to mentally "sing" given songs (which were all positive in valence) while listening with their inner ear and, afterwards, to assess the pitch of their mental chant. As predicted, compared to the o-posture, the i-posture led to higher pitch in mental singing. Thus, bodily states can shape experiential qualities, such as pitch, during imagery. This extends embodied music cognition and demonstrates a new link between language and music.
Affiliation(s)
- Anita Körner
- Department of Psychology, University of Kassel, Holländische Straße 36-38, 34127, Kassel, Germany.
- Fritz Strack
- Department of Psychology, University of Würzburg, Würzburg, Germany
9
Nalborczyk L, Longcamp M, Bonnard M, Serveau V, Spieser L, Alario FX. Distinct neural mechanisms support inner speaking and inner hearing. Cortex 2023; 169:161-173. PMID: 37922641; DOI: 10.1016/j.cortex.2023.09.007.
Abstract
Humans have the ability to mentally examine speech. This covert form of speech production is often accompanied by sensory (e.g., auditory) percepts. However, the cognitive and neural mechanisms that generate these percepts are still debated. According to a prominent proposal, inner speech has at least two distinct phenomenological components: inner speaking and inner hearing. We used transcranial magnetic stimulation (TMS) to test whether these two phenomenologically distinct processes are supported by distinct neural mechanisms. We hypothesised that inner speaking relies more strongly on an online motor-to-sensory simulation that constructs a multisensory experience, whereas inner hearing relies more strongly on a memory-retrieval process, where the multisensory experience is reconstructed from stored motor-to-sensory associations. Accordingly, we predicted that the speech motor system would be involved more strongly during inner speaking than inner hearing. This would be revealed by modulations of TMS-evoked responses at the muscle level following stimulation of the lip primary motor cortex. Overall, data collected from 31 participants corroborated this prediction, showing that inner speaking increases the excitability of the primary motor cortex more than inner hearing. Moreover, this effect was more pronounced during the inner production of a syllable that strongly recruits the lips (vs. a syllable that recruits the lips to a lesser extent). These results are compatible with models assuming that the primary motor cortex is involved during inner speech and contribute to clarifying the neural implementation of the fundamental ability of silently speaking in one's mind.
Affiliation(s)
- Ladislas Nalborczyk
- Aix Marseille Univ, CNRS, LPC, Marseille, France; Aix Marseille Univ, CNRS, LNC, Marseille, France.
10
Pratts J, Pobric G, Yao B. Bridging phenomenology and neural mechanisms of inner speech: ALE meta-analysis on egocentricity and spontaneity in a dual-mechanistic framework. Neuroimage 2023; 282:120399. PMID: 37827205; DOI: 10.1016/j.neuroimage.2023.120399.
Abstract
The neural mechanisms of inner speech remain unclear despite its importance in a variety of cognitive processes and its implication in aberrant perceptions such as auditory verbal hallucinations. Previous research has proposed a corollary discharge model in which inner speech is a truncated form of overt speech, relying on speech production-related regions (e.g. left inferior frontal gyrus). This model does not fully capture the diverse phenomenology of inner speech and recent research suggesting alternative perception-related mechanisms of generation. Therefore, we present and test a framework in which inner speech can be generated by two separate mechanisms, depending on its phenomenological qualities: a corollary discharge mechanism relying on speech production regions and a perceptual simulation mechanism within speech perceptual regions. The results of the activation likelihood estimation meta-analysis examining inner speech studies support the idea that varieties of inner speech recruit different neural mechanisms.
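Activation likelihood estimation (ALE), the method named in this abstract, builds a modeled activation map per study by smoothing each study's reported peak coordinates with a Gaussian kernel, then combines the maps voxel-wise as a union of probabilities. The sketch below is a deliberately simplified 1-D version with hypothetical foci and kernel width; real ALE operates in 3-D standard space with sample-size-dependent kernels and permutation-based thresholding.

```python
import math

# Toy 1-D "brain" grid; real ALE works on a 3-D MNI-space grid.
GRID = range(0, 100)
FWHM = 10.0
SIGMA = FWHM / 2.3548  # convert FWHM to Gaussian sigma

def modeled_activation(foci):
    """Per-study modeled activation map: max of Gaussians centred on each focus."""
    return [max(math.exp(-((x - f) ** 2) / (2 * SIGMA ** 2)) for f in foci)
            for x in GRID]

# Hypothetical peak locations from three studies (two converge near 20-22).
studies = [[20, 60], [22], [21, 80]]
maps = [modeled_activation(f) for f in studies]

# ALE score: voxel-wise union of probabilities across studies.
ale = [1.0 - math.prod(1.0 - m[i] for m in maps) for i, _ in enumerate(GRID)]

peak = max(range(len(ale)), key=ale.__getitem__)
print(f"ALE peak at grid position {peak}")  # lands among the convergent foci 20-22
```

The union-of-probabilities step is what makes ALE reward spatial convergence: positions where several studies report nearby foci accumulate a higher score than isolated foci.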
Affiliation(s)
- Jaydan Pratts
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK
- Gorana Pobric
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK
- Bo Yao
- Division of Psychology, Communication and Human Neuroscience, School of Health Sciences, University of Manchester, UK; Department of Psychology, Fylde College, Lancaster University, UK.
11
Sadia S, Carbon CC. Looking for the Edge of the World: How 3D Immersive Audio Produces a Shift from an Internalised Inner Voice to Unsymbolised Affect-Driven Ways of Thinking and Heightened Sensory Awareness. Behav Sci (Basel) 2023; 13:858. PMID: 37887508; PMCID: PMC10604218; DOI: 10.3390/bs13100858.
Abstract
In this practice-based case study, we investigate the subjective aesthetic and affective responses to a shift from 2D stereo-based modelling to 3D object-based Dolby Atmos in an audio installation artwork. Dolby Atmos is an object-based audio format, released in 2012 but only recently incorporated into more public-facing formats, that allows an effectively infinite number of sound-object 'placements'. Our analysis focuses on the artist Sadia Sadia's 30-channel audio installation 'Notes to an Unknown Lover', based on her book of free verse poetry of the same title, which was rebuilt and reformatted in a Dolby Atmos specified studio. The effectiveness of three-dimensional (3D) object-based audio is interrogated against more traditional stereo and two-dimensional (2D) formats regarding the expression and communication of emotion, and we examine what effect the altered spatiality has on the psychoacoustic and neuroaesthetic response to the text. We provide a unique examination of the consequences of a shift from 2D to wholly encompassing object-based audio in a text-based artist's audio installation work. These findings may also have promising applications for health and well-being issues.
Affiliation(s)
- Sadia Sadia
- School of Art, College of Design and Social Context, RMIT Royal Melbourne Institute of Technology University, Melbourne, VIC 3000, Australia
- The Light Room, Real World Studios, Wiltshire SN13 8PL, UK
- Research Group EPÆG (Ergonomics, Psychological Aesthetics, Gestalt), 96047 Bamberg, Bavaria, Germany
- Claus-Christian Carbon
- Research Group EPÆG (Ergonomics, Psychological Aesthetics, Gestalt), 96047 Bamberg, Bavaria, Germany
- Department of General Psychology and Methodology, University of Bamberg, 96047 Bamberg, Bavaria, Germany
12
Fadeev A. Semiotic Approach to the New Perspectives on Inner Speech. Integr Psychol Behav Sci 2023; 57:1084-1096. PMID: 36810980; DOI: 10.1007/s12124-022-09738-9.
Abstract
The article aims to identify new perspectives on the study of inaudible internal communication, known as inner speech. This is done by addressing the role of the semiotic approach in contemporary studies of inner speech, emphasising the role of contemporary culture in the formation of human inner communication processes, and by critically addressing recent publications that outline new directions in inner speech research, most notably "New Perspectives on Inner Speech" edited by Pablo Fossa (2022). The article develops and expands the framework of the new perspectives on inner speech by focusing on such aspects of inner speech research as the language of inner speech, the role of contemporary digital culture in the formation of inner speech, and advances in recent research methodologies. The discussion is grounded in recent inner speech studies, as well as the author's own experience researching inner speech in his PhD work (Fadeev, 2022) and in the inner speech research group at the Department of Semiotics at the University of Tartu.
13
Tsuchiyagaito A, Sánchez SM, Misaki M, Kuplicki R, Park H, Paulus MP, Guinjoan SM. Intensity of repetitive negative thinking in depression is associated with greater functional connectivity between semantic processing and emotion regulation areas. Psychol Med 2023; 53:5488-5499. PMID: 36043367; PMCID: PMC9973538; DOI: 10.1017/s0033291722002677.
Abstract
BACKGROUND Repetitive negative thinking (RNT), a cognitive process that encompasses past-directed (rumination) and future-directed (worry) thoughts focusing on negative experiences and the self, is a transdiagnostic construct that is especially relevant for major depressive disorder (MDD). Severe RNT often occurs in individuals with severe levels of MDD, which makes it challenging to disambiguate the neural circuitry underlying RNT from depression severity. METHODS We used a propensity score, i.e., a conditional probability of having high RNT given observed covariates, to match high and low RNT individuals who are similar in the severity of depression, anxiety, and demographic characteristics. Of 148 MDD individuals, we matched high and low RNT groups (n = 50/group) and used a data-driven whole-brain voxel-to-voxel connectivity pattern analysis to investigate the resting-state functional connectivity differences between the groups. RESULTS There was an association between RNT and connectivity in the bilateral superior temporal sulcus (STS), an important region for speech processing, including inner speech. High relative to low RNT individuals showed greater connectivity between the right STS and bilateral anterior insular cortex (AI), and between the bilateral STS and left dorsolateral prefrontal cortex (DLPFC). Greater connectivity in those regions was specifically related to RNT but not to depression severity. CONCLUSIONS RNT intensity is directly related to connectivity between the STS and AI/DLPFC. This might be a mechanism underlying the role of RNT in perceptive, cognitive, speech, and emotional processing. Future investigations will need to determine whether modifying these connectivities could be a treatment target to reduce RNT.
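The propensity-score matching step described in this abstract can be sketched as greedy 1:1 nearest-neighbour matching on the score within a caliper: each high-RNT individual is paired with the unmatched low-RNT individual whose propensity score is closest, provided the gap is small enough. The scores, group sizes, caliper, and greedy strategy below are all hypothetical choices for illustration; the authors' exact matching procedure may differ.

```python
import random

random.seed(1)

# Hypothetical individuals as (id, propensity_score) pairs. The score stands
# in for P(high RNT | depression, anxiety, demographics).
high = [(i, random.uniform(0.3, 0.9)) for i in range(60)]       # high-RNT group
low = [(i + 100, random.uniform(0.1, 0.7)) for i in range(80)]  # low-RNT pool

def greedy_match(treated, controls, caliper=0.05):
    """1:1 greedy nearest-neighbour matching on the propensity score."""
    pool = dict(controls)  # id -> score, shrinks as controls are used up
    pairs = []
    # Match hardest-to-match (highest-score) treated individuals first.
    for tid, ts in sorted(treated, key=lambda p: p[1], reverse=True):
        if not pool:
            break
        cid = min(pool, key=lambda c: abs(pool[c] - ts))
        if abs(pool[cid] - ts) <= caliper:  # only accept close matches
            pairs.append((tid, cid))
            del pool[cid]
    return pairs

pairs = greedy_match(high, low)
diffs = [abs(dict(high)[t] - dict(low)[c]) for t, c in pairs]
print(f"{len(pairs)} matched pairs, max score gap = {max(diffs):.3f}")
```

The caliper is what guarantees the matched groups are comparable on the covariates summarized by the score; treated individuals with no close control simply go unmatched.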
Affiliation(s)
- Aki Tsuchiyagaito
- Laureate Institute for Brain Research, Tulsa, OK, USA
- The University of Tulsa, Tulsa, OK, USA
- Chiba University, Chiba, Japan
- Masaya Misaki
- Laureate Institute for Brain Research, Tulsa, OK, USA
- Heekyong Park
- Laureate Institute for Brain Research, Tulsa, OK, USA
- University of North Texas at Dallas, Dallas, TX, USA

14
Brysbaert M, Vantieghem A. No Correlation Between Articulation Speed and Silent Reading Rate when Adults Read Short Texts. Psychol Belg 2023; 63:82-91. [PMID: 37483467 PMCID: PMC10360968 DOI: 10.5334/pb.1189]
Abstract
Silent reading often involves phonological encoding of the text in addition to orthographic processing. The nature of the phonological code is debated, however: Is it an abstract code or does it contain information about the pronunciation of the visual stimulus? To answer this question, we investigated the relationship between articulation speed and reading speed, both for silent reading and reading aloud. We investigated whether people with fast articulation speed read faster than people with slow articulation speed. We recruited 94 participants, who in a Zoom session were asked to read short texts silently or aloud. They were also asked to talk about their lives and say the numbers 1-10 or the months of the year as quickly as possible. Finally, they completed an online vocabulary test and an author recognition test. Multiple regression analysis and cluster analysis showed that although the speed of reading aloud and silent reading correlated to some extent, they belonged to two different clusters. Reading aloud was mainly related to talking fluency and articulation speed, while silent reading was more related to vocabulary and knowledge about fiction authors. These findings are consistent with the hypothesis that the phonological code in silent reading typically does not contain articulatory information, although our data do not rule out the possibility that this may be the case for a small percentage of people or when people read more difficult texts.
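The core correlational question here ("do fast articulators read faster?") reduces to computing correlation coefficients between speed measures. A minimal Pearson correlation written from the textbook formula, with invented per-participant numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation from the definition: cov(x, y) / (sd(x) * sd(y))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-participant speeds (words per minute)
articulation = [180, 200, 220, 240, 210]
silent_reading = [260, 255, 290, 270, 300]
print(round(pearson_r(articulation, silent_reading), 3))
```

The study's multiple regression and cluster analyses build on exactly such pairwise correlations between the speed and knowledge measures.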
Affiliation(s)
- Marc Brysbaert
- Department of Experimental Psychology, Ghent University, B-9000 Ghent, Belgium
- Anke Vantieghem
- Department of Experimental Psychology, Ghent University, B-9000 Ghent, Belgium

15
Brinthaupt TM, Morin A. Self-talk: research challenges and opportunities. Front Psychol 2023; 14:1210960. [PMID: 37465491 PMCID: PMC10350497 DOI: 10.3389/fpsyg.2023.1210960]
Abstract
In this review, we discuss major measurement and methodological challenges to studying self-talk. We review the assessment of self-talk frequency, studying self-talk in its natural context, personal pronoun usage within self-talk, experiential sampling methods, and the experimental manipulation of self-talk. We highlight new possible research opportunities and discuss recent advances such as brain imaging studies of self-talk, the use of self-talk by robots, and measurement of self-talk in aphasic patients.
Affiliation(s)
- Thomas M. Brinthaupt
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, United States
- Alain Morin
- Department of Psychology, Mount Royal University, Calgary, AB, Canada

16
Mahfoud D, Hallit S, Haddad C, Fekih-Romdhane F, Haddad G. The moderating effect of cognitive impairment on the relationship between inner speech and auditory verbal hallucinations among chronic patients with schizophrenia. BMC Psychiatry 2023; 23:431. [PMID: 37316820 DOI: 10.1186/s12888-023-04940-4]
Abstract
BACKGROUND Even though there is increasing evidence from behavioral and neuroimaging studies that pathological inner speech plays a role in the emergence of auditory verbal hallucinations (AVH), studies investigating the mechanisms underlying this relationship are scarce. Examining moderators might inform the development of new treatment options for AVH. We sought to extend existing knowledge by testing the moderating role of cognitive impairment in the association between inner speech and hallucinations in a sample of Lebanese patients with schizophrenia. METHODS A cross-sectional study was conducted from May to August 2022, enrolling 189 chronic patients. RESULTS Moderation analysis revealed that, after controlling for delusions, the interaction between experiencing the voices of other people in inner speech and cognitive performance was significantly associated with AVH. In people with low (Beta = 0.69; t = 5.048; p < .001) and moderate (Beta = 0.45; t = 4.096; p < .001) cognitive performance, the presence of other people's voices in inner speech was significantly associated with more hallucinations. This association was not significant in patients with high cognitive function (Beta = 0.21; t = 1.417; p = .158). CONCLUSION This preliminary study suggests that interventions aiming to improve cognitive performance may also have a beneficial effect in reducing hallucinations in schizophrenia.
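The result pattern above is a "simple slopes" probe of an interaction (moderation) model. The sketch below evaluates the slope of hallucinations on inner speech at different moderator levels; the coefficients and cognition levels are illustrative placeholders, not the paper's estimates.

```python
# Hedged sketch of probing a moderation effect: in the model
#   AVH = b0 + b1*inner + b2*cog + b3*inner*cog,
# the slope of AVH on inner speech at a given cognition level is
# b1 + b3*cog. Values below are made up for illustration.

def simple_slope(b_inner, b_interaction, cog_level):
    """Slope of AVH on inner speech at a given cognitive-performance level."""
    return b_inner + b_interaction * cog_level

b1, b3 = 0.9, -0.3          # main effect and (negative) interaction term
for label, cog in [("low", 1.0), ("moderate", 1.5), ("high", 2.3)]:
    print(label, round(simple_slope(b1, b3, cog), 2))
```

With a negative interaction, the slope shrinks as cognitive performance rises, mirroring the reported pattern (significant at low and moderate cognition, non-significant at high cognition).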
Affiliation(s)
- Souheil Hallit
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, P.O. Box 446, Jounieh, Lebanon
- Applied Science Research Center, Applied Science Private University, Amman, Jordan
- Research Department, Psychiatric Hospital of the Cross, Jal Eddib, Lebanon
- Chadia Haddad
- Research Department, Psychiatric Hospital of the Cross, Jal Eddib, Lebanon
- INSPECT-LB (Institut National de Santé Publique, d'Épidémiologie Clinique Et de Toxicologie-Liban), Beirut, Lebanon
- School of Health Sciences, Modern University for Business and Science, Beirut, Lebanon
- Feten Fekih-Romdhane
- The Tunisian Center of Early Intervention in Psychosis, Department of Psychiatry "Ibn Omrane", Razi Hospital, 2010, Manouba, Tunisia
- Faculty of Medicine of Tunis, Tunis El Manar University, Tunis, Tunisia
- Georges Haddad
- School of Medicine and Medical Sciences, Holy Spirit University of Kaslik, P.O. Box 446, Jounieh, Lebanon
- Research Department, Psychiatric Hospital of the Cross, Jal Eddib, Lebanon

17
Simistira Liwicki F, Gupta V, Saini R, De K, Abid N, Rakesh S, Wellington S, Wilson H, Liwicki M, Eriksson J. Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition. Sci Data 2023; 10:378. [PMID: 37311807 DOI: 10.1038/s41597-023-02286-w]
Abstract
The recognition of inner speech, which could give a 'voice' to patients who have no ability to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner speech recognition. Multimodal datasets of brain data enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired nonsimultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words in either a social or a numerical category. Each of the eight word stimuli was assessed with 40 trials, resulting in 320 trials in each modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
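A quick back-of-envelope check of the dataset size described above, using only the counts stated in the abstract (8 word stimuli, 40 trials per word, 4 participants, 2 modalities):

```python
# Dataset-size arithmetic from the abstract; no other assumptions.
words = 8
trials_per_word = 40
participants = 4
modalities = ("EEG", "fMRI")

per_modality = words * trials_per_word              # trials per participant per modality
total = per_modality * participants * len(modalities)  # trials across the whole dataset
print(per_modality, total)
```

This confirms the stated 320 trials per modality per participant, i.e., 2,560 trials across the whole release.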
Affiliation(s)
- Foteini Simistira Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Vibha Gupta
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Rajkumar Saini
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Kanjar De
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Nosheen Abid
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Sumit Rakesh
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Holly Wilson
- University of Bath, Department of Computer Science, Bath, UK
- Marcus Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
- Johan Eriksson
- Umeå University, Department of Integrative Medical Biology (IMB) and Umeå Center for Functional Brain Imaging (UFBI), Umeå, Sweden

18
Yuan B, Xie H, Wang Z, Xu Y, Zhang H, Liu J, Chen L, Li C, Tan S, Lin Z, Hu X, Gu T, Lu J, Liu D, Wu J. The domain-separation language network dynamics in resting state support its flexible functional segregation and integration during language and speech processing. Neuroimage 2023; 274:120132. [PMID: 37105337 DOI: 10.1016/j.neuroimage.2023.120132]
Abstract
Modern linguistic theories and network science propose that language and speech processing are organized into hierarchical, segregated large-scale subnetworks, with a core of a dorsal (phonological) stream and a ventral (semantic) stream. The two streams are asymmetrically recruited in receptive and expressive language or speech tasks, showing flexible functional segregation and integration. We hypothesized that this functional segregation of the two streams is supported by underlying network segregation. A dynamic conditional correlation approach was employed to construct framewise time-varying language networks, and k-means clustering was used to investigate temporally reoccurring patterns. We found that the framewise language network dynamics in resting state were robustly clustered into four states, which dynamically reconfigured in a domain-separation manner. Spatially, the hub distributions of the first three states highly resembled the neurobiology of speech perception and lexical-phonological processing, speech production, and semantic processing, respectively. The fourth state was characterized by the weakest functional connectivity and was regarded as a baseline state. Temporally, the first three states appeared exclusively in limited time bins (∼15%), and most of the time (> 55%), state 4 was dominant. Machine learning-based dFC-linguistics prediction analyses showed that dFCs of the four states significantly predicted individual linguistic performance. These findings suggest a domain-separation organization of language network dynamics in resting state, which forms a dynamic "meta-network" framework to support flexible functional segregation and integration during language and speech processing.
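The temporal summary reported above (states 1-3 occupying ∼15% of time bins each, state 4 dominant) is an occupancy computation over a framewise state sequence. A minimal sketch with a hypothetical label sequence:

```python
# Occupancy fractions of dynamic-connectivity states: each state's
# share of time bins in a framewise state-label sequence. The sequence
# here is invented to echo the reported proportions.
from collections import Counter

def occupancy(states):
    counts = Counter(states)
    total = len(states)
    return {s: counts[s] / total for s in sorted(counts)}

seq = [4] * 60 + [1] * 15 + [2] * 13 + [3] * 12   # 100 hypothetical time bins
print(occupancy(seq))
```

In the paper, the state labels themselves come from k-means clustering of the framewise dynamic conditional correlation matrices; occupancy is then computed per participant.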
Affiliation(s)
- Binke Yuan
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Hui Xie
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China; Department of Psychology, The University of Hong Kong, Hong Kong, China
- Zhihao Wang
- CNRS - Centre d'Economie de la Sorbonne, Panthéon-Sorbonne University, France
- Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38123, Italy
- Hanqing Zhang
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Jiaxuan Liu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Lifeng Chen
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Chaoqun Li
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Shiyao Tan
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Zonghui Lin
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Xin Hu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Tianyi Gu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Junfeng Lu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
- Dongqiang Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, PR China
- Jinsong Wu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China

19
Soroush PZ, Herff C, Ries SK, Shih JJ, Schultz T, Krusienski DJ. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. Neuroimage 2023; 269:119913. [PMID: 36731812 DOI: 10.1016/j.neuroimage.2023.119913]
Abstract
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels from the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech counterparts.
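The "nested hierarchy" finding has a compact set-theoretic reading: channels relevant for lower-output modes form subsets of those relevant for higher-output modes. A toy check with hypothetical channel labels (not the study's electrodes):

```python
# Hypothetical relevant-channel sets per speech mode; nesting means
# imagined <= mouthed <= overt (subset relations).
overt = {"ch1", "ch2", "ch3", "ch4", "ch5"}
mouthed = {"ch2", "ch3", "ch5"}
imagined = {"ch3", "ch5"}

def is_nested(*sets_low_to_high):
    """True if each set is a subset of the next (lowest output first)."""
    return all(a <= b for a, b in zip(sets_low_to_high, sets_low_to_high[1:]))

print(is_nested(imagined, mouthed, overt))
```

In the study itself, "relevant" channels are those selected by the speech activity detection models for each mode; the subset test is then applied to those selections.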
20
Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: A pilot exploratory fMRI study. Brain Cogn 2023; 165:105939. [PMID: 36549191 DOI: 10.1016/j.bandc.2022.105939]
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study were to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To achieve these goals, a pilot exploratory fMRI study was carried out on 33 people. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations, and then repeated them in inner speech either with the same emotion or with suppressed expression (neutral). The results show that there is an emotional component of inner speech, which is encoded by structures similar to those for overt speech. A unique role of the caudate nuclei in suppressing expression in inner speech was also shown.
Affiliation(s)
- Oleg Bronov
- Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia

21
Rainey S. Speaker Responsibility for Synthetic Speech Derived from Neural Activity. J Med Philos 2022; 47:503-515. [PMID: 36333930 DOI: 10.1093/jmp/jhac011]
Abstract
This article provides analysis of the mechanisms and outputs involved in language-use mediated by a neuroprosthetic device. It is motivated by the thought that users of speech neuroprostheses require sufficient control over what their devices externalize as synthetic speech if they are to be thought of as responsible for it, but that the nature of this control, and so the status of their responsibility, is not clear.
22
Unraveling the functional attributes of the language connectome: crucial subnetworks, flexibility and variability. Neuroimage 2022; 263:119672. [PMID: 36209795 DOI: 10.1016/j.neuroimage.2022.119672]
Abstract
Language processing is a highly integrative function, intertwining linguistic operations (processing the language code intentionally used for communication) and extra-linguistic processes (e.g., attention monitoring, predictive inference, long-term memory). This synergetic cognitive architecture requires a distributed and specialized neural substrate. Brain systems have mainly been examined at rest. However, task-related functional connectivity provides additional and valuable information about how information is processed when various cognitive states are involved. We gathered thirteen language fMRI tasks in a unique database of one hundred and fifty neurotypical adults (InLang [Interactive networks of Language] database), providing the opportunity to assess language features across a wide range of linguistic processes. Using this database, we applied network theory as a computational tool to model the task-related functional connectome of language (LANG atlas). The organization of this data-driven neurocognitive atlas of language was examined at multiple levels, uncovering its major components (or crucial subnetworks), and its anatomical and functional correlates. In addition, we estimated its reconfiguration as a function of linguistic demand (flexibility) or several factors such as age or gender (variability). We observed that several discrete networks could be specifically shaped to promote key functional features of language: coding-decoding (Net1), control-executive (Net2), abstract-knowledge (Net3), and sensorimotor (Net4) functions. The architecture of these systems and the functional connectivity of the pivotal brain regions varied according to the nature of the linguistic process, gender, or age. By accounting for the multifaceted nature of language and modulating factors, this study can contribute to enriching and refining existing neurocognitive models of language. The LANG atlas can also be considered a reference for comparative or clinical studies involving various patients and conditions.
23
Lu H, Long Q, Chai Y, Shang L, Zhang W, Sun W, Liu X. Auditory verbal hallucination can be evoked by prefrontal epileptic seizure. Epilepsy Behav 2022; 135:108915. [PMID: 36115084 DOI: 10.1016/j.yebeh.2022.108915]
Abstract
Auditory verbal hallucinations (AVHs) have been reported in neocortical temporal epileptic seizures and are considered to be highly associated with involvement of the auditory cortex by epileptic discharges or electrical stimulation. Herein, we report two rare frontal epilepsy cases in which AVHs featured in the habitual seizures. The epileptogenic zones of these two patients were localized to the dorsal and orbitomedial prefrontal cortex, respectively, by stereoelectroencephalography (SEEG) monitoring. The phenomenological similarities between these AVHs and those in schizophrenia suggest homologous mechanisms. Ictal SEEG confirmed that wide involvement of the prefrontal-cingulate-auditory cortical network by low-voltage fast activity coincided with the occurrence of AVHs during frontal epileptic seizures. An electrical stimulation study in one of the two cases highlighted the causal role of the prefrontal-cingulate cortex in the emergence of AVHs. Based on our clinical observations, SEEG findings, and electrical cortical stimulation, we propose that wide involvement of the prefrontal-cingulate-auditory cortical network during epileptic seizures underlies the emergence of AVHs, and we further hypothesize that AVHs could result from a transient deficit of self-monitoring for inner speech in focal epileptic seizures.
Affiliation(s)
- Hongjuan Lu
- Department of Neurology, Xuanwu Hospital Capital Medical University, Beijing 100053, China
- Qiting Long
- Department of Neurology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
- Ying Chai
- Department of Neurology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
- Li Shang
- Epilepsy Center, Shanghai Deji Hospital, Qingdao University, Shanghai 200126, China
- Wei Zhang
- Department of Neurology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China
- Wei Sun
- Department of Neurology, Xuanwu Hospital Capital Medical University, Beijing 100053, China
- Xingzhou Liu
- Department of Neurology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing 102218, China

24
A computational model of inner speech supporting flexible goal-directed behaviour in Autism. Sci Rep 2022; 12:14198. [PMID: 35987942 PMCID: PMC9392752 DOI: 10.1038/s41598-022-18445-9]
Abstract
Experimental and computational studies propose that inner speech boosts categorisation skills and executive functions, making human behaviour more focused and flexible. In addition, many clinical studies highlight a relationship between poor inner speech and executive impairment in autism spectrum condition (ASC), but contrasting findings have been reported. Here we directly investigate the latter issue through a previously implemented and validated computational model of the Wisconsin Card Sorting Test. In particular, the model was applied to explore potential individual differences in cognitive flexibility and in the contribution of inner speech in autistic and neurotypical participants. Our model predicts that the use of inner speech could increase across the life span of neurotypical participants but would be reduced in autistic ones. Although we found more attentional failures (i.e., wrong behavioural rule switches) in autistic children/teenagers and more perseverative behaviours in autistic young/older adults, only autistic children and older adults exhibited a lower performance (i.e., fewer consecutive correct rule switches) than matched control groups. Overall, our results corroborate the idea that reduced use of inner speech could represent a disadvantage for autistic children and autistic older adults. Moreover, the results suggest that cognitive-behavioural therapies should focus on developing inner speech skills in autistic children, as this could provide cognitive support throughout their whole life span.
25
Skipper JI. A voice without a mouth no more: The neurobiology of language and consciousness. Neurosci Biobehav Rev 2022; 140:104772. [PMID: 35835286 DOI: 10.1016/j.neubiorev.2022.104772]
Abstract
Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery', distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses conducted, and comparisons with other theories of consciousness made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.
26
Effect of functional and effective brain connectivity in identifying vowels from articulation imagery procedures. Cogn Process 2022; 23:593-618. [PMID: 35794496 DOI: 10.1007/s10339-022-01103-3]
Abstract
Articulation imagery, a form of mental imagery, refers to imagining speaking to oneself without any articulatory movement. It is an effective research domain for neural disorders that impair speech, as imagined speech has high similarity to real voice communication. This work employs electroencephalography (EEG) signals acquired during articulation and articulation imagery to identify the vowel being imagined during different tasks. EEG signals from chosen electrodes are decomposed using the empirical mode decomposition (EMD) method into a series of intrinsic mode functions. Brain connectivity estimators and entropy measures are computed to analyze the functional cooperation and causal dependence between different cortical regions, as well as the regularity in the signals. The vowels are classified using machine learning techniques, namely a multiclass support vector machine (MSVM) and a random forest (RF). Three training and testing protocols (articulation, AR; articulation imagery, AI; and articulation vs articulation imagery, AR vs AI) were employed to identify the vowel being imagined or articulated. An overall classification accuracy of 80% was obtained for the articulation imagery protocol, higher than for the other two protocols. The MSVM technique also outperformed the RF technique in classification accuracy. The combination of brain connectivity estimators and machine learning thus appears reliable for identifying the imagined vowel and may thereby assist people with speech impairment.
27
Öncel P, Creer SD, Allen LK. Seeing Through the Character’s Eyes: Examining Phenomenological Experiences of Perspective-Taking During Reading. Discourse Processes 2022. [DOI: 10.1080/0163853x.2022.2088031]
Affiliation(s)
- Püren Öncel
- Department of Psychology, University of New Hampshire

28
Pan C, Liu H, Zheng D, Chen F. Neural Entrainment to Rhythms of Imagined Syllables. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4040-4043. [PMID: 36086167 DOI: 10.1109/embc48229.2022.9871767]
Abstract
Imagined speech based brain-computer interfaces (BCIs) are of great interest due to their efficiency and user-friendliness for patients with speech impairment. The aim of this work was to study whether different rhythms of imagined syllables could elicit corresponding frequency components in EEG amplitude spectra. Seventeen participants took part in the experiments and performed a control task and four imagery tasks in the presence of periodic pure tones while their EEG signals were recorded. The four imagery tasks involved imagining the syllable '/a/' every time, every two times, or every three times the periodic pure tone occurred, and imagining it twice every three times the tone occurred. The experimental results, analyzed by Fourier transform, indicated that neural entrainment to rhythmic speech imagery is notably reflected in the EEG amplitude spectra. Clinical relevance: this work shows that different rhythms of imagined syllables can be identified from EEG amplitude spectra, which may benefit the development of imagined speech based BCIs.
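The analysis idea is that a periodic imagery rhythm produces a peak in the amplitude spectrum at its repetition rate. A toy demonstration with a synthetic impulse train standing in for an entrained EEG response (signal, sampling, and bin choices are all illustrative assumptions), computing one DFT bin magnitude directly from the definition:

```python
# Peak at the imagery rate in the amplitude spectrum of a synthetic
# "imagery every 4 samples" signal; no FFT library assumed.
import math

def dft_amplitude(x, k):
    """|X[k]| / N of a length-N real sequence, from the DFT definition."""
    n_len = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / n_len) for n in range(n_len))
    im = -sum(x[n] * math.sin(2 * math.pi * k * n / n_len) for n in range(n_len))
    return math.hypot(re, im) / n_len

# Impulse every 4 samples -> energy concentrated at bin N/4 (and harmonics)
signal = [1.0 if n % 4 == 0 else 0.0 for n in range(64)]
print(dft_amplitude(signal, 16), dft_amplitude(signal, 3))
```

The on-rate bin (16 of 64) carries a clear amplitude while an off-rate bin (3) is essentially zero, which is the spectral signature the study looks for at the different imagery rhythms.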
29
Bonnet C, Bayram M, El Bouzaïdi Tiali S, Lebon F, Harquel S, Palluel-Germain R, Perrone-Bertolotti M. Kinesthetic motor-imagery training improves performance on lexical-semantic access. PLoS One 2022; 17:e0270352. [PMID: 35749512 PMCID: PMC9232155 DOI: 10.1371/journal.pone.0270352] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 06/08/2022] [Indexed: 11/30/2022] Open
Abstract
The objective of this study was to evaluate the effect of Motor Imagery (MI) training on language comprehension. In line with literature suggesting an intimate relationship between language and the motor system, we proposed that MI training could improve language comprehension by facilitating lexico-semantic access. In two experiments, participants were assigned to a kinesthetic motor-imagery training (KMI) group, in which they had to imagine making upper-limb movements, or to a static visual imagery training (SVI) group, in which they had to mentally visualize pictures of landscapes. Differential impacts of the two training protocols on two language comprehension tasks (i.e., a semantic categorization task and a sentence-picture matching task) were investigated. Experiment 1 showed that KMI training can induce better performance (shorter reaction times) than SVI training on the two language comprehension tasks, suggesting that KMI-based motor activation can facilitate lexico-semantic access after only one training session. Experiment 2 aimed at replicating these results using a pre/post-training language assessment and a longer training period (four training sessions spread over four days). Although the improvement magnitude between pre- and post-training sessions was greater in the KMI group than in the SVI group on the semantic categorization task, the sentence-picture matching task tended to show the opposite pattern of results. Overall, this series of experiments highlights for the first time that motor imagery can contribute to the improvement of lexical-semantic processing and could open new avenues for rehabilitation methods for language deficits.
Affiliation(s)
- Camille Bonnet
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Psychological Sciences Research Institute, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Mariam Bayram
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Florent Lebon
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
- Sylvain Harquel
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Defitech Chair of Clinical Neuroengineering, Center for Neuroprosthetics (CNP) and Brain Mind Institute (BMI), Swiss Federal Institute of Technology Lausanne (EPFL), Campus Biotech, Geneva, Switzerland
- Marcela Perrone-Bertolotti
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Institut Universitaire de France, Paris, France
30
Meier EL, Kelly CR, Hillis AE. Dissociable language and executive control deficits and recovery in post-stroke aphasia: An exploratory observational and case series study. Neuropsychologia 2022; 172:108270. [PMID: 35597266 PMCID: PMC9728463 DOI: 10.1016/j.neuropsychologia.2022.108270] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 04/30/2022] [Accepted: 05/13/2022] [Indexed: 01/04/2023]
Abstract
A growing body of evidence indicates many, but not all, individuals with post-stroke aphasia experience executive dysfunction. Relationships between language and executive function skills are often reported in the literature, but the degree of interdependence between these abilities remains largely unanswered. Therefore, in this study, we investigated the extent to which language and executive control deficits dissociated in 1) acute stroke and 2) longitudinal aphasia recovery. Twenty-three individuals admitted to Johns Hopkins Hospital with a new left hemisphere stroke completed the Western Aphasia Battery-Revised (WAB-R), several additional language measures (of naming, semantics, spontaneous speech, and oral reading), and three non-linguistic cognitive tasks from the NIH Toolbox (i.e., Pattern Comparison Processing Speed Test, Flanker Inhibitory Control and Attention Test, and Dimensional Change Card Sorting Test). Two participants with aphasia (PWA) with temporoparietal lesions, one of whom (PWA1) had greater temporal but less frontal and superior parietal damage than the other (PWA2), also completed testing at subacute (three months post-onset) and early chronic (six months post-onset) time points. In aim 1, principal component analysis on the acute test data (excluding the WAB-R) revealed language and non-linguistic executive control tasks largely loaded onto separate components. Both components were significant predictors of acute aphasia severity per the WAB-R Aphasia Quotient (AQ). Crucially, executive dysfunction explained an additional 17% of the variance in AQ beyond the explanatory power of language impairments alone. In aim 2, both case patients exhibited language and executive control deficits at the acute post-stroke stage. A dissociation was observed in longitudinal recovery of these patients. By the early chronic time point, PWA1 exhibited improved (but persistent) deficits in several language domains and recovered executive control. In contrast, PWA2 demonstrated mostly recovered language but persistent executive dysfunction. Greater damage to language and attention networks in these respective patients may explain the observed behavioral patterns. These results demonstrate that language and executive control can dissociate (at least to a degree), but both contribute to early post-stroke presentation of aphasia and likely influence longitudinal aphasia recovery.
Affiliation(s)
- Argye E Hillis
- Departments of Neurology, Physical Medicine and Rehabilitation, and Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
31
Nalborczyk L, Debarnot U, Longcamp M, Guillot A, Alario FX. The Role of Motor Inhibition During Covert Speech Production. Front Hum Neurosci 2022; 16:804832. [PMID: 35355587 PMCID: PMC8959424 DOI: 10.3389/fnhum.2022.804832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 01/31/2022] [Indexed: 11/13/2022] Open
Abstract
Covert speech is accompanied by a subjective multisensory experience with auditory and kinaesthetic components. An influential hypothesis states that these sensory percepts result from a simulation of the corresponding motor action that relies on the same internal models recruited for the control of overt speech. This simulationist view raises the question of how it is possible to imagine speech without executing it. In this perspective, we discuss the possible role(s) played by motor inhibition during covert speech production. We suggest that considering covert speech as an inhibited form of overt speech maps naturally to the purported progressive internalization of overt speech during childhood. We further argue that the role of motor inhibition may differ widely across different forms of covert speech (e.g., condensed vs. expanded covert speech) and that considering this variety helps reconcile seemingly contradictory findings from the neuroimaging literature.
Affiliation(s)
- Ladislas Nalborczyk
- Aix Marseille Univ, CNRS, LPC, Marseille, France
- Aix Marseille Univ, CNRS, LNC, Marseille, France
- Ursula Debarnot
- Inter-University Laboratory of Human Movement Biology-EA 7424, University of Lyon, University Claude Bernard Lyon 1, Villeurbanne, France
- Institut Universitaire de France, Paris, France
- Aymeric Guillot
- Inter-University Laboratory of Human Movement Biology-EA 7424, University of Lyon, University Claude Bernard Lyon 1, Villeurbanne, France
- Institut Universitaire de France, Paris, France
32
Affiliation(s)
- Wade Munroe
- University of Michigan, Department of Philosophy and the Weinberg Institute for Cognitive Science, Ann Arbor, MI, USA
33
Rann JC, Almor A. Effects of verbal tasks on driving simulator performance. Cogn Res Princ Implic 2022; 7:12. [PMID: 35119569 PMCID: PMC8817015 DOI: 10.1186/s41235-022-00357-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Accepted: 01/08/2022] [Indexed: 11/10/2022] Open
Abstract
We report results from a driving simulator paradigm we developed to test the fine temporal effects of verbal tasks on simultaneous tracking performance. A total of 74 undergraduate students participated in two experiments in which they controlled a cursor using the steering wheel to track a moving target and where the dependent measure was overall deviation from target. Experiment 1 tested tracking performance during slow and fast target speeds under conditions involving either no verbal input or output, passive listening to spoken prompts via headphones, or responding to spoken prompts. Experiment 2 was similar except that participants read written prompts overlain on the simulator screen instead of listening to spoken prompts. Performance in both experiments was worse during fast speeds and worst overall during responding conditions. Most significantly, fine-scale time-course analysis revealed deteriorating tracking performance as participants prepared and began speaking and steadily improving performance while speaking. Additionally, post-block survey data revealed that conversation recall was best in responding conditions, and perceived difficulty increased with task complexity. Our study is the first to track temporal changes in interference at high resolution during the first hundreds of milliseconds of verbal production and comprehension. Our results are consistent with load-based theories of multitasking performance and show that language production, and, to a lesser extent, language comprehension, tap resources also used for tracking. More generally, our paradigm provides a useful tool for measuring dynamical changes in tracking performance during verbal tasks due to the rapidly changing resource requirements of language production and comprehension.
Affiliation(s)
- Jonathan C Rann
- Department of Psychology, University of South Carolina, 1512 Pendelton Street, Columbia, SC, 29208, USA
- Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29208, USA
- Amit Almor
- Department of Psychology, University of South Carolina, 1512 Pendelton Street, Columbia, SC, 29208, USA
- Institute for Mind and Brain, University of South Carolina, Columbia, SC, 29208, USA
- Linguistics Program, University of South Carolina, Columbia, SC, 29208, USA
34
Blohm S, Versace S, Methner S, Wagner V, Schlesewsky M, Menninghaus W. Reading Poetry and Prose: Eye Movements and Acoustic Evidence. DISCOURSE PROCESSES 2022. [DOI: 10.1080/0163853x.2021.2015188] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Stefan Blohm
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Stefano Versace
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Sanja Methner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Matthias Schlesewsky
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
35
Proix T, Delgado Saa J, Christen A, Martin S, Pasley BN, Knight RT, Tian X, Poeppel D, Doyle WK, Devinsky O, Arnal LH, Mégevand P, Giraud AL. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nat Commun 2022; 13:48. [PMID: 35013268 PMCID: PMC8748882 DOI: 10.1038/s41467-021-27725-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Accepted: 12/03/2021] [Indexed: 01/19/2023] Open
Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their ability to discriminate speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
Affiliation(s)
- Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Jaime Delgado Saa
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Andy Christen
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Stephanie Martin
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Brian N Pasley
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Department of Psychology, University of California, Berkeley, Berkeley, USA
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Werner K Doyle
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Luc H Arnal
- Institut de l'Audition, Institut Pasteur, INSERM, F-75012, Paris, France
- Pierre Mégevand
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Neurology, Geneva University Hospitals, Geneva, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
36
Moon J, Chau T, Orlandi S. A comparison and classification of oscillatory characteristics in speech perception and covert speech. Brain Res 2022; 1781:147778. [PMID: 35007548 DOI: 10.1016/j.brainres.2022.147778] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 12/29/2021] [Accepted: 01/03/2022] [Indexed: 11/02/2022]
Abstract
Covert speech, the mental imagery of speaking, has been studied increasingly to understand and decode thoughts in the context of brain-computer interfaces. In studies of speech comprehension, neural oscillations are thought to play a key role in the temporal encoding of speech. However, little is known about the role of oscillations in covert speech. In this study, we investigated the oscillatory involvements in covert speech and speech perception. Data were collected from 10 participants with 64-channel EEG. Participants heard the words 'blue' and 'orange', and subsequently mentally rehearsed them. First, continuous wavelet transform was performed on epoched signals and subsequently two-tailed t-tests between the two classes were conducted to determine statistical differences in frequency and time (t-CWT). Features were also extracted using t-CWT and subsequently classified using a support vector machine. θ and γ phase-amplitude coupling (PAC) was also assessed within and between tasks. All binary classifications produced accuracies (80-90%) significantly greater than chance level, supporting the use of t-CWT in determining relative oscillatory involvements. While the perception task dynamically invoked all frequencies with more prominent θ and α activity, the covert task favoured higher frequencies with significantly higher γ activity than perception. Moreover, the perception condition produced significant θ-γ PAC, corroborating a reported linkage between syllabic and phonemic sampling. Although this coupling was found to be suppressed in the covert condition, we found significant cross-task coupling between perception θ and covert speech γ. Covert speech processing appears to be largely associated with higher frequencies of EEG. Importantly, the significant cross-task coupling between speech perception and covert speech, in the absence of within-task covert speech PAC, supports the notion that the γ- and θ-bands subserve, respectively, shared and unique encoding processes across tasks.
Affiliation(s)
- Jaewoong Moon
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Tom Chau
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Silvia Orlandi
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON, Canada
37
López-Silva P, Cavieres Á, Humpston C. The phenomenology of auditory verbal hallucinations in schizophrenia and the challenge from pseudohallucinations. Front Psychiatry 2022; 13:826654. [PMID: 36051554 PMCID: PMC9424625 DOI: 10.3389/fpsyt.2022.826654] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 07/25/2022] [Indexed: 11/13/2022] Open
Abstract
In trying to make sense of the extensive phenomenological variation of first-personal reports on auditory verbal hallucinations, the concept of pseudohallucination was originally introduced to designate any hallucinatory-like phenomena not exhibiting some of the paradigmatic features of "genuine" hallucinations. After its introduction, Karl Jaspers located the notion of pseudohallucination in the auditory domain, appealing to a distinction between hallucinatory voices heard within the subjective inner space (pseudohallucinations) and voices heard in the outer external space (real hallucinations), with differences in their sensory richness. Jaspers' characterization of the term has been the target of a number of phenomenological, conceptual, and empirically based criticisms. From this latter point of view, it has been claimed that the concept cannot capture distinct phenomena at the neurobiological level. In recent years, the notion of pseudohallucination has fallen into disuse, as no major diagnostic system refers to it. In this paper, we propose that even if the concept of pseudohallucination is not helpful for differentiating distinct phenomena at the neurobiological level, the inner/outer distinction highlighted by Jaspers' characterization of the term still remains an open explanatory challenge for dominant theories about the neurocognitive origin of auditory verbal hallucinations. We call this "the challenge from pseudohallucinations". After exploring this issue in detail, we propose some phenomenological, conceptual, and empirical paths for future research that might help to build up a more contextualized and dynamic view of auditory verbal hallucinatory phenomena.
Affiliation(s)
- Pablo López-Silva
- School of Psychology, Faculty of Social Sciences, Universidad de Valparaíso, Valparaíso, Chile
- Millennium Institute for Research in Depression and Personality (MIDAP), Santiago, Chile
- Álvaro Cavieres
- Department of Psychiatry, School of Medicine, Faculty of Medicine, Universidad de Valparaíso, Valparaíso, Chile
- Clara Humpston
- School of Psychology, University of York, York, United Kingdom
- School of Psychology, Institute for Mental Health, University of Birmingham, Birmingham, United Kingdom
38
Kiroy V, Bakhtin O, Krivko E, Lazurenko D, Aslanyan E, Shaposhnikov D, Shcherban I. Spoken and Inner Speech-related EEG Connectivity in Different Spatial Direction. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103224] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
39
Abstract
The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout, verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic "audible gestalt" and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
Affiliation(s)
- Judith Beck
- Cognitive Science, University of Freiburg, Germany
40
Fini C, Zannino GD, Orsoni M, Carlesimo GA, Benassi M, Borghi AM. Articulatory suppression delays processing of abstract words: The role of inner speech. Q J Exp Psychol (Hove) 2021; 75:1343-1354. [PMID: 34623202 DOI: 10.1177/17470218211053623] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Compared to concrete concepts, like "book," abstract concepts expressed by words like "justice" are more detached from sensorial experiences, even though they are also grounded in sensorial modalities. Abstract concepts lack a single object as referent and are characterised by higher variability both within and across participants. According to the Words as Social Tools (WAT) proposal, owing to their complexity, abstract concepts need to be processed with the help of inner language. Specifically, inner language can help participants re-explain the meaning of a word to themselves, keep information active in working memory, and prepare to ask for information from more competent people. While previous studies have demonstrated that the mouth is involved in the processing of abstract concepts, both the functional role and the mechanisms underlying this involvement still need to be clarified. We report an experiment in which participants were required to evaluate whether 78 words were abstract or concrete by pressing two different pedals. During the judgement task, they were submitted, in different blocks, to a baseline, an articulatory suppression, and a manipulation condition. In the last two conditions, they had to repeat a syllable continually and to manipulate a softball with their dominant hand. Results showed that articulatory suppression slowed down the processing of abstract words more than that of concrete words. Overall, the results confirm the WAT proposal's hypothesis that abstract concept processing involves the mouth motor system and specifically inner speech. We discuss the implications for current theories of conceptual representation.
Affiliation(s)
- Chiara Fini
- Department of Dynamic, Clinical Psychology and Health Studies, Sapienza University of Rome, Rome, Italy
- Gian Daniele Zannino
- Laboratory of Clinical and Behavioral Neurology, I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
- Matteo Orsoni
- Department of Psychology, University of Bologna, Bologna, Italy
- Giovanni A Carlesimo
- Laboratory of Clinical and Behavioral Neurology, I.R.C.C.S. Santa Lucia Foundation, Rome, Italy
- Department of Systems Medicine, Tor Vergata University of Rome, Rome, Italy
- Anna M Borghi
- Department of Dynamic, Clinical Psychology and Health Studies, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
41
Mazzuca C, Fini C, Michalland AH, Falcinelli I, Da Rold F, Tummolini L, Borghi AM. From Affordances to Abstract Words: The Flexibility of Sensorimotor Grounding. Brain Sci 2021; 11:1304. [PMID: 34679369 PMCID: PMC8534254 DOI: 10.3390/brainsci11101304] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 09/23/2021] [Accepted: 09/24/2021] [Indexed: 11/18/2022] Open
Abstract
The sensorimotor system plays a critical role in several cognitive processes. Here, we review recent studies documenting this interplay at different levels. First, we concentrate on studies that have shown how the sensorimotor system is flexibly involved in interactions with objects. We report evidence demonstrating how social context and situations influence affordance activation, and then focus on tactile and kinesthetic components in body-object interactions. Then, we turn to word use, and review studies that have shown that not only concrete words, but also abstract words are grounded in the sensorimotor system. We report evidence that abstract concepts activate the mouth effector more than concrete concepts, and discuss this effect in light of studies on adults, children, and infants. Finally, we pinpoint possible sensorimotor mechanisms at play in the acquisition and use of abstract concepts. Overall, we show that the involvement of the sensorimotor system is flexibly modulated by context, and that its role can be integrated and flanked by that of other systems such as the linguistic system. We suggest that to unravel the role of the sensorimotor system in cognition, future research should fully explore the complexity of this intricate, and sometimes slippery, relation.
Affiliation(s)
- Claudia Mazzuca
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Chiara Fini
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
- Arthur Henri Michalland
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Department of Psychology, Université Paul Valéry Montpellier, EPSYLON EA 4556, 34199 Montpellier, France
- Ilenia Falcinelli
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Federico Da Rold
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Luca Tummolini
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Institute of Cognitive Sciences and Technologies, National Research Council (CNR), 00185 Rome, Italy
- Anna M. Borghi
- Body Action Language Lab (BALLAB), Sapienza University of Rome and ISTC-CNR, 00185 Rome, Italy
- Institute of Cognitive Sciences and Technologies, National Research Council (CNR), 00185 Rome, Italy
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, 00185 Rome, Italy
42
Ordering of functions according to multiple fuzzy criteria: application to denoising electroencephalography. Soft comput 2021. [DOI: 10.1007/s00500-021-05719-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
43
Yao B, Taylor JR, Banks B, Kotz SA. Reading direct speech quotes increases theta phase-locking: Evidence for cortical tracking of inner speech? Neuroimage 2021; 239:118313. [PMID: 34175425 DOI: 10.1016/j.neuroimage.2021.118313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 05/28/2021] [Accepted: 06/24/2021] [Indexed: 11/25/2022] Open
Abstract
Growing evidence shows that theta-band (4-7 Hz) activity in the auditory cortex phase-locks to rhythms of overt speech. Does theta activity also encode the rhythmic dynamics of inner speech? Previous research established that silent reading of direct speech quotes (e.g., Mary said: "This dress is lovely!") elicits more vivid inner speech than indirect speech quotes (e.g., Mary said that the dress was lovely). As we cannot directly track the phase alignment between theta activity and inner speech over time, we used EEG to measure the brain's phase-locked responses to the onset of speech quote reading. We found that direct (vs. indirect) quote reading was associated with increased theta phase synchrony over trials at 250-500 ms post-reading onset, with sources of the evoked activity estimated in the speech processing network. An eye-tracking control experiment confirmed that increased theta phase synchrony in direct quote reading was not driven by eye movement patterns, and more likely reflects synchronous phase resetting at the onset of inner speech. These findings suggest a functional role of theta phase modulation in reading-induced inner speech.
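The phase-synchrony measure referenced in this abstract, theta phase alignment over trials, is typically quantified as inter-trial phase coherence (ITPC). A minimal NumPy sketch on synthetic data (the function name `itpc` and the toy signal are illustrative, not the authors' analysis pipeline):

```python
import numpy as np

def itpc(analytic_signals):
    """Inter-trial phase coherence: length of the mean unit phase vector
    across trials at each time point (1 = perfect alignment, ~0 = random)."""
    phases = np.angle(analytic_signals)                # (n_trials, n_times)
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Synthetic demo: 50 trials of a 5 Hz theta oscillation whose phase is
# reset (aligned) at t = 0, mimicking phase-locking to a reading onset,
# with random phase before it.
rng = np.random.default_rng(0)
fs, n_trials = 250, 50
t = np.arange(-0.2, 0.6, 1 / fs)
trials = np.empty((n_trials, t.size), dtype=complex)
for k in range(n_trials):
    offset = rng.uniform(0, 2 * np.pi)                 # random pre-stimulus phase
    phase = np.where(t < 0, 2 * np.pi * 5 * t + offset, 2 * np.pi * 5 * t)
    trials[k] = np.exp(1j * phase)                     # idealized analytic signal

coh = itpc(trials)                                     # high after t = 0, low before
```

Because the post-onset phases are identical across trials, ITPC approaches 1 there, while the random pre-stimulus phases average out to a short mean vector, which is the contrast the evoked-synchrony analysis exploits.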
Affiliation(s)
- Bo Yao
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom
- Jason R Taylor
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, United Kingdom
- Briony Banks
- Department of Psychology, Lancaster University, Lancaster LA1 4YF, United Kingdom
- Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Maastricht University, Maastricht 6211 LK, Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
|
44
|
Vandervert L, Moe K. The cerebellum-driven social basis of mathematics: implications for one-on-one tutoring of children with mathematics learning disabilities. Cerebellum Ataxias 2021; 8:13. [PMID: 33971983 PMCID: PMC8112041 DOI: 10.1186/s40673-021-00136-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Accepted: 04/26/2021] [Indexed: 12/04/2022]
Abstract
This article argues that the patterns of sequence control over kinematics (movements) and dynamics (forces) that evolved in the phonological processing of inner speech, during the evolution of the social-cognitive capacities behind the stone-tool making that led to the emergence of Homo sapiens, are homologous to the social cerebellum's capacity to learn patterns of sequence within language that we refer to as mathematics. It is argued that this evolution (1) selected for a social-cognitive cerebellum shaped by the arduous, repetitive precision patterns of knapping (stone shaping), and (2) that, over a period of a million-plus years, this selection operated through mentalizing about the kinematics and dynamics of more experienced stone knappers, as observed and modelled in Theory of Mind (ToM). It is concluded that components of this socially induced autobiographical knowledge, namely (1) segmenting events, (2) sequencing events and (3) sequencing event clusters, all at various levels of abstraction, can inform optimal approaches to one-on-one tutoring of children with mathematical learning disabilities.
Affiliation(s)
- Kimberly Moe
- Dept. of Education, Adjunct, Whitworth University, Spokane, USA
|
45
|
Cummine J, Huynh TKT, Cullum A, Ostevik A, Hodgetts W. Chew on this! Oral stereognosis predicts visual word recognition in typical adults. Curr Psychol 2021. [DOI: 10.1007/s12144-021-01647-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
46
|
Borghi AM, Mazzuca C, Da Rold F, Falcinelli I, Fini C, Michalland AH, Tummolini L. Abstract Words as Social Tools: Which Necessary Evidence? Front Psychol 2021; 11:613026. [PMID: 33519634 PMCID: PMC7844197 DOI: 10.3389/fpsyg.2020.613026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Accepted: 12/14/2020] [Indexed: 11/20/2022] Open
Affiliation(s)
- Anna M Borghi
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
- Claudia Mazzuca
- Department of Psychology, University of York, York, United Kingdom
- Federico Da Rold
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
- Ilenia Falcinelli
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
- Chiara Fini
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- Arthur-Henri Michalland
- Department of Dynamic and Clinical Psychology, and Health Studies, Sapienza University of Rome, Rome, Italy
- University of Montpellier-LIFAM, Montpellier, France
- Luca Tummolini
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
|
47
|
Windt JM. How deep is the rift between conscious states in sleep and wakefulness? Spontaneous experience over the sleep-wake cycle. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190696. [PMID: 33308071 PMCID: PMC7741079 DOI: 10.1098/rstb.2019.0696] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/12/2020] [Indexed: 12/29/2022] Open
Abstract
Whether we are awake or asleep is believed to mark a sharp divide between the types of conscious states we undergo in either behavioural state. Consciousness in sleep is often equated with dreaming and thought to be characteristically different from waking consciousness. Conversely, recent research shows that we spend a substantial amount of our waking lives mind wandering, or lost in spontaneous thoughts. Dreaming has been described as intensified mind wandering, suggesting that there is a continuum of spontaneous experience that reaches from waking into sleep. This challenges how we conceive of the behavioural states of sleep and wakefulness in relation to conscious states. I propose a conceptual framework that distinguishes different subtypes of spontaneous thoughts and experiences independently of their occurrence in sleep or waking. I apply this framework to selected findings from dream and mind-wandering research. I argue that to assess the relationship between spontaneous thoughts and experiences and the behavioural states of sleep and wakefulness, we need to look beyond dreams to consider kinds of sleep-related experience that qualify as dreamless. I conclude that if we consider the entire range of spontaneous thoughts and experiences, there appears to be variation in subtypes both within and across behavioural states. Whether we are sleeping or waking does not appear to strongly constrain which subtypes of spontaneous thoughts and experiences we undergo in those states. This challenges the conventional, coarse-grained distinction between sleep and waking and their putative relation to conscious states. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Jennifer M. Windt
- Department of Philosophy, Monash University, Clayton, Victoria 3800, Australia
|
48
|
Simulating thoughts to measure and study internal attention in mental health. Sci Rep 2021; 11:2251. [PMID: 33500510 PMCID: PMC7838298 DOI: 10.1038/s41598-021-81756-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Accepted: 12/29/2020] [Indexed: 12/20/2022] Open
Abstract
Our mind's eye and the role of internal attention in mental life and suffering have intrigued scholars for centuries. Yet experimental study of internal attention has been elusive because of our limited capacity to control the timing and content of internal stimuli. We therefore developed the Simulated Thoughts Paradigm (STP) to experimentally deliver own-voice thought stimuli that simulate the content and experience of thinking, thereby enabling experimental study of internal attentional processes. In independent experiments (N = 122) integrating the STP into established cognitive-experimental tasks, we found and replicated evidence that emotional reactivity to negative thoughts predicts difficulty disengaging internal attention from, as well as biased selective internal attention toward, those thoughts; these internal attention processes predict cognitive vulnerability (e.g., negative repetitive thinking), which in turn predicts anxiety and depression. The proposed methods and findings may have implications for the study of information processing and attention in mental health broadly, and for models of internal attentional (dys)control in cognitive vulnerability and mental health more specifically.
|
49
|
A computational model of language functions in flexible goal-directed behaviour. Sci Rep 2020; 10:21623. [PMID: 33303842 PMCID: PMC7729881 DOI: 10.1038/s41598-020-78252-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 08/06/2020] [Indexed: 12/22/2022] Open
Abstract
The function of language in high-order goal-directed human cognition is an important topic at the centre of current debates. Experimental evidence shows that inner speech, a self-directed form of language, empowers cognitive processes such as working memory, perception, categorization and executive functions. Here we study the relations between inner speech and processes such as feedback processing and cognitive flexibility. To this aim we propose a computational model that controls an artificial agent that uses inner speech to internally manipulate its representations. The agent is able to reproduce human behavioural data collected during the solution of the Wisconsin Card Sorting Test, a neuropsychological test measuring cognitive flexibility, both in the basic condition and when a verbal shadowing protocol is used. The components of the model were systematically lesioned to clarify the specific impact of inner speech on the agent's behaviour. The results indicate that inner speech improves the efficiency of internal representation manipulation. Specifically, it makes the representations linked to specific visual features more disentangled, improving the agent's capacity to engage or disengage attention on stimulus features after positive or negative action outcomes. Overall, the model shows how inner speech could improve goal-directed internal manipulation of representations and enhance behavioural flexibility.
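The feedback-driven engage/disengage mechanism summarized in this abstract can be illustrated with a toy rule-switching simulation in the spirit of the Wisconsin Card Sorting Test. This sketch (including `run_wcst` and its update constants) is an illustration of the general idea, not the authors' neural model:

```python
import random

FEATURES = ["colour", "shape", "number"]

def run_wcst(n_trials=300, seed=0):
    """Toy WCST-like agent: attention weights over card features are
    boosted after positive feedback and suppressed after negative
    feedback; the hidden sorting rule switches every 10 correct sorts."""
    rng = random.Random(seed)
    attn = {f: 1.0 / 3.0 for f in FEATURES}    # attention over features
    rule = rng.choice(FEATURES)                # hidden sorting rule
    streak, categories = 0, 0
    for _ in range(n_trials):
        choice = max(attn, key=attn.get)       # sort by most-attended feature
        correct = choice == rule
        # Engage/disengage: reward boosts the attended feature, error
        # suppresses it, pushing attention toward the other features.
        attn[choice] = max(0.01, attn[choice] + (0.3 if correct else -0.3))
        total = sum(attn.values())
        attn = {f: w / total for f, w in attn.items()}
        if correct:
            streak += 1
            if streak == 10:                   # category completed: rule switches
                categories += 1
                streak = 0
                rule = rng.choice([f for f in FEATURES if f != rule])
        else:
            streak = 0
    return categories
```

Disabling the weight update is a crude analogue of the lesion analyses mentioned above: without feedback-driven disengagement the agent perseverates on one feature and stops completing categories once the rule switches away from it.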
|
50
|
Stephan F, Saalbach H, Rossi S. Inner versus Overt Speech Production: Does This Make a Difference in the Developing Brain? Brain Sci 2020; 10:E939. [PMID: 33291489 PMCID: PMC7762104 DOI: 10.3390/brainsci10120939] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 11/24/2020] [Accepted: 12/03/2020] [Indexed: 11/21/2022] Open
Abstract
Studies in adults have shown differential neural processing of overt and inner speech. So far, it is unclear whether inner and overt speech are processed differentially in children. The present study examines the pre-activation of the speech network in order to disentangle domain-general executive control from linguistic control of inner and overt speech production in 6- to 7-year-olds by simultaneously applying electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Children underwent a picture-naming task in which the pure preparation of a subsequent speech production and the actual execution of speech can be differentiated. The preparation phase does not represent speech per se but resembles the setting up of the language production network. Only fNIRS revealed a larger activation for overt, compared with inner, speech over bilateral prefrontal to parietal regions during the preparation phase. The findings suggest that the children's brain can prepare for subsequent speech production. The preparation for overt and inner speech requires different domain-general executive control. In contrast to adults, the children's brain did not show differences between inner and overt speech when concrete linguistic content occurs and concrete execution is required. This might indicate that domain-specific executive control processes are still under development.
Affiliation(s)
- Franziska Stephan
- Department of Educational Psychology, Faculty of Education, University Leipzig, 04109 Leipzig, Germany
- Leipzig Research Center for Early Child Development, 04109 Leipzig, Germany
- ICONE, Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Henrik Saalbach
- Department of Educational Psychology, Faculty of Education, University Leipzig, 04109 Leipzig, Germany
- Leipzig Research Center for Early Child Development, 04109 Leipzig, Germany
- Sonja Rossi
- ICONE, Innsbruck Cognitive Neuroscience, Department for Hearing, Speech, and Voice Disorders, Medical University of Innsbruck, 6020 Innsbruck, Austria
|