1
Heitmeier M, Chuang YY, Baayen RH. How trial-to-trial learning shapes mappings in the mental lexicon: Modelling lexical decision with linear discriminative learning. Cogn Psychol 2023;146:101598. PMID: 37716109; PMCID: PMC10589761; DOI: 10.1016/j.cogpsych.2023.101598.
Abstract
Trial-to-trial effects have been found in a number of studies, indicating that processing a stimulus influences responses in subsequent trials. A special case is priming, which has been modelled successfully with error-driven learning (Marsolek, 2008), implying that participants are continuously learning during experiments. This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al., 2019), a model of the mental lexicon with meaning representations from distributional semantics, which models error-driven incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP; Keuleers et al., 2012) and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Reaction times were then predicted with Generalized Additive Models (GAMs), using measures derived from the DLM simulations as predictors. We extracted measures from two simulations per subject (one with learning updates between trials and one without) and used them as input to two GAMs. For the majority of subjects, the learning-based models showed better fit than the non-learning ones. Our measures also provide insights into lexical processing and individual differences. This demonstrates the potential of the DLM to model behavioural data and leads to the conclusion that trial-to-trial learning can indeed be detected in unprimed lexical decision. Our results support the possibility that our lexical knowledge is subject to continuous changes.
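The Widrow-Hoff (delta) rule named in this abstract can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: the cue and semantic vectors, their dimensions, and the learning rate are invented for the example. The rule nudges a linear mapping so that the prediction for a cue moves toward its target after every trial, which is what "learning updates between trials" amounts to.

```python
import numpy as np

def widrow_hoff_update(W, cue, target, lr=0.1):
    """One incremental Widrow-Hoff (delta rule) update:
    move the mapping W so that cue @ W shifts toward target."""
    error = target - cue @ W
    # the outer product distributes the error over the active cue features
    return W + lr * np.outer(cue, error)

# toy example: 3 form features mapped onto 2 semantic dimensions
W = np.zeros((3, 2))
cue = np.array([1.0, 0.0, 1.0])    # active form features (hypothetical)
target = np.array([0.5, -0.5])     # desired semantic vector (hypothetical)
for _ in range(200):               # repeated encounters with the word
    W = widrow_hoff_update(W, cue, target)
print(cue @ W)                     # converges toward the target vector
```

With this learning rate the prediction error shrinks geometrically, so after a few hundred updates the mapping has effectively converged for this cue.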
2
Vogt C, Floegel M, Kasper J, Gispert-Sánchez S, Kell CA. Oxytocinergic modulation of speech production: a double-blind placebo-controlled fMRI study. Soc Cogn Affect Neurosci 2023;18:nsad035. PMID: 37384576; PMCID: PMC10348401; DOI: 10.1093/scan/nsad035.
Abstract
Many socio-affective behaviors, such as speech, are modulated by oxytocin. While oxytocin modulates speech perception, it is not known whether it also affects speech production. Here, we investigated effects of oxytocin administration and interactions with the functional rs53576 oxytocin receptor (OXTR) polymorphism on produced speech and its underlying brain activity. During functional magnetic resonance imaging, 52 healthy male participants read sentences out loud with either neutral or happy intonation; a covert reading condition served as a common baseline. Participants were studied once under the influence of intranasal oxytocin and in another session under placebo. Oxytocin administration increased the second formant of produced vowels. This acoustic feature has previously been associated with speech valence; however, the acoustic differences were not perceptually distinguishable in our experimental setting. When preparing to speak, oxytocin enhanced brain activity in sensorimotor cortices and regions of both dorsal and right ventral speech processing streams, as well as subcortical and cortical limbic and executive control regions. In some of these regions, the rs53576 OXTR polymorphism modulated oxytocin administration-related brain activity. Oxytocin also gated cortical-basal ganglia circuits involved in the generation of happy prosody. Our findings suggest that several neural processes underlying speech production are modulated by oxytocin, including control of not only affective intonation but also sensorimotor aspects during emotionally neutral speech.
Affiliation(s)
- Charlotte Vogt
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Mareike Floegel
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Johannes Kasper
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Suzana Gispert-Sánchez
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Experimental Neurology, Department of Neurology, Goethe University Frankfurt, Frankfurt am Main 60528, Germany
- Christian A Kell
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany

3
Viacheslav I, Vartanov A, Bueva A, Bronov O. The emotional component of inner speech: A pilot exploratory fMRI study. Brain Cogn 2023;165:105939. PMID: 36549191; DOI: 10.1016/j.bandc.2022.105939.
Abstract
Inner speech is one of the most important human cognitive processes. Nevertheless, many aspects of inner speech, particularly its emotional characteristics, remain poorly understood. The main objectives of our study are to identify the neural substrate of the emotional (prosodic) dimension of inner speech and the brain structures that control the suppression of expression in inner speech. To this end, a pilot exploratory fMRI study was carried out with 33 participants. The subjects listened to pre-recorded phrases or individual words pronounced with different emotional connotations, and then repeated them in inner speech either with the same emotion or with expression suppressed (neutral). The results show that inner speech has an emotional component, which is encoded by structures similar to those engaged in overt speech. The caudate nuclei were also shown to play a unique role in suppressing expression in inner speech.
Affiliation(s)
- Oleg Bronov
- Federal State Budgetary Institution "National Medical and Surgical Center named after N.I. Pirogov", Russia

4
Ćwiek A, Fuchs S, Draxler C, Asu EL, Dediu D, Hiovain K, Kawahara S, Koutalidis S, Krifka M, Lippus P, Lupyan G, Oh GE, Paul J, Petrone C, Ridouane R, Reiter S, Schümchen N, Szalontai Á, Ünal-Logacev Ö, Zeller J, Perlman M, Winter B. The bouba/kiki effect is robust across cultures and writing systems. Philos Trans R Soc Lond B Biol Sci 2022;377:20200390. PMID: 34775818; PMCID: PMC8591387; DOI: 10.1098/rstb.2020.0390.
Abstract
The bouba/kiki effect (the association of the nonce word bouba with a round shape and kiki with a spiky shape) is a type of correspondence between speech sounds and visual properties with potentially deep implications for the evolution of spoken language. However, there is debate over the robustness of the effect across cultures and the influence of orthography. We report an online experiment that tested the bouba/kiki effect across speakers of 25 languages representing nine language families and 10 writing systems. Overall, we found strong evidence for the effect across languages, with bouba eliciting more congruent responses than kiki. Participants who spoke languages with Roman scripts were only marginally more likely to show the effect, and analysis of the orthographic shape of the words in different scripts showed that the effect was no stronger for scripts that use rounder forms for bouba and spikier forms for kiki. These results confirm that the bouba/kiki phenomenon is rooted in crossmodal correspondence between aspects of the voice and visual shape, largely independent of orthography. They provide the strongest demonstration to date that the bouba/kiki effect is robust across cultures and writing systems. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Affiliation(s)
- Aleksandra Ćwiek
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Susanne Fuchs
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Christoph Draxler
- Institute of Phonetics and Speech Processing, Ludwig Maximilian University, 80799 Munich, Germany
- Eva Liina Asu
- Institute of Estonian and General Linguistics, University of Tartu, 50090 Tartu, Estonia
- Dan Dediu
- Laboratoire Dynamique Du Langage UMR 5596, Université Lumière Lyon 2, 69363 Lyon, France
- Katri Hiovain
- Department of Digital Humanities, University of Helsinki, 00014 Helsinki, Finland
- Shigeto Kawahara
- The Institute of Cultural and Linguistic Studies, Keio University, Mita Minatoku, Tokyo 108-8345, Japan
- Sofia Koutalidis
- Faculty of Linguistics and Literary Studies, Bielefeld University, 33615 Bielefeld, Germany
- Manfred Krifka
- Leibniz-Zentrum Allgemeine Sprachwissenschaft, 10117 Berlin, Germany
- Institut für deutsche Sprache und Linguistik, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Pärtel Lippus
- Institute of Estonian and General Linguistics, University of Tartu, 50090 Tartu, Estonia
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
- Grace E. Oh
- Department of English Language and Literature, Konkuk University, Seoul 05029, South Korea
- Jing Paul
- Asian Studies Program, Agnes Scott College, Decatur, GA 30030, USA
- Caterina Petrone
- Aix-Marseille Université, CNRS, Laboratoire Parole et Langage, UMR 7309, 13100 Aix-en-Provence, France
- Rachid Ridouane
- Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS and Sorbonne Nouvelle, 75005 Paris, France
- Sabine Reiter
- Depto. de Polonês, Alemão e Letras Clássicas, Universidade Federal do Paraná, 80060-150 Curitiba, Brazil
- Nathalie Schümchen
- Department of Language and Communication, University of Southern Denmark, 5230 Odense, Denmark
- Ádám Szalontai
- Department of Phonetics, Hungarian Research Centre for Linguistics, Budapest 1068, Hungary
- Özlem Ünal-Logacev
- School of Health Sciences, Department of Speech and Language Therapy, Istanbul Medipol University, 34810 Istanbul, Turkey
- Jochen Zeller
- School of Arts, Linguistics Discipline, University of KwaZulu-Natal, Durban 4041, South Africa
- Marcus Perlman
- Department of English Language and Linguistics, University of Birmingham, Birmingham B15 2TT, UK
- Bodo Winter
- Department of English Language and Linguistics, University of Birmingham, Birmingham B15 2TT, UK

5
Gao Q, Xiang Y, Zhang J, Luo N, Liang M, Gong L, Yu J, Cui Q, Sepulcre J, Chen H. A reachable probability approach for the analysis of spatio-temporal dynamics in the human functional network. Neuroimage 2021;243:118497. PMID: 34428571; DOI: 10.1016/j.neuroimage.2021.118497.
Abstract
The dynamic architecture of the human brain has been consistently observed. However, there is still limited modeling work to elucidate how neuronal circuits are hierarchically and flexibly organized into functional systems. Here we proposed a reachable probability approach based on non-homogeneous Markov chains to characterize all possible connectivity flows and the hierarchical structure of brain functional systems at the dynamic level. We proved the convergence of the functional brain network system at the theoretical level, and demonstrated that this approach is able to detect network steady states across connectivity structure, particularly in areas of the default mode network. We further explored the dynamically hierarchical functional organization centered at the primary sensory cortices. We observed smaller optimal reachable steps to their local functional regions, and differentiated patterns in larger optimal reachable steps for primary perceptual modalities. The reachable paths with the largest and second largest transition probabilities between primary sensory seeds via multisensory integration regions were also tracked to explore the flexibility and plasticity of multisensory integration. The present work provides a novel approach to depict both the stable and flexible hierarchical connectivity organization of the human brain.
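The core idea of a reachable probability on a functional network can be illustrated with a toy random walk. This is a hedged sketch only: the paper uses non-homogeneous Markov chains over empirical connectivity, whereas this example uses a homogeneous chain on an invented three-region network, treating the row-normalized connectivity matrix as transition probabilities and tracking how probability mass spreads from a seed region over k steps.

```python
import numpy as np

# toy symmetric connectivity between 3 regions (values are invented)
A = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
P = A / A.sum(axis=1, keepdims=True)  # row-normalize into transition probabilities

def reach_probability(P, seed, steps):
    """Distribution over regions after `steps` random-walk steps from `seed`."""
    p = np.zeros(P.shape[0])
    p[seed] = 1.0
    for _ in range(steps):
        p = p @ P  # one Markov transition
    return p

print(reach_probability(P, seed=0, steps=10))
```

As the number of steps grows, such a walk approaches a steady-state distribution, which is the kind of network steady state the abstract refers to (here under the simplifying assumption of a fixed transition matrix).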
Affiliation(s)
- Qing Gao
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China; High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yu Xiang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiabao Zhang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Ning Luo
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Minfeng Liang
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lisha Gong
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiali Yu
- School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Qian Cui
- School of Public Affairs and Administration, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jorge Sepulcre
- Gordon Center for Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States
- Huafu Chen
- High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Radiology, First Affiliated Hospital to Army Medical University, Chongqing 400038, China

6
Meekings S, Scott SK. Error in the Superior Temporal Gyrus? A Systematic Review and Activation Likelihood Estimation Meta-Analysis of Speech Production Studies. J Cogn Neurosci 2020;33:422-444. PMID: 33326327; DOI: 10.1162/jocn_a_01661.
Abstract
Evidence for perceptual processing in models of speech production is often drawn from investigations in which the sound of a talker's voice is altered in real time to induce "errors." Methods of acoustic manipulation vary but are assumed to engage the same neural network and psychological processes. This paper aims to review fMRI and PET studies of altered auditory feedback and assess the strength of the evidence these studies provide for a speech error correction mechanism. Studies included were functional neuroimaging studies of speech production in neurotypical adult humans, using natural speech errors or one of three predefined speech manipulation techniques (frequency altered feedback, delayed auditory feedback, and masked auditory feedback). Seventeen studies met the inclusion criteria. In a systematic review, we evaluated whether each study (1) used an ecologically valid speech production task, (2) controlled for auditory activation caused by hearing the perturbation, (3) statistically controlled for multiple comparisons, and (4) measured behavioral compensation correlating with perturbation. None of the studies met all four criteria. We then conducted an activation likelihood estimation meta-analysis of brain coordinates from 16 studies that reported brain responses to manipulated over unmanipulated speech feedback, using the GingerALE toolbox. These foci clustered in bilateral superior temporal gyri, anterior to cortical fields typically linked to error correction. Within the limits of our analysis, we conclude that existing neuroimaging evidence is insufficient to determine whether error monitoring occurs in the posterior superior temporal gyrus regions proposed by models of speech production.
7
Langland-Hassan P. Inner speech. Wiley Interdiscip Rev Cogn Sci 2020;12:e1544. PMID: 32949083; DOI: 10.1002/wcs.1544.
Abstract
Inner speech travels under many aliases: the inner voice, verbal thought, thinking in words, internal verbalization, "talking in your head," the "little voice in the head," and so on. It is both a familiar element of first-person experience and a psychological phenomenon whose complex cognitive components and distributed neural bases are increasingly well understood. There is evidence that inner speech plays a variety of cognitive roles, from enabling abstract thought to supporting metacognition, memory, and executive function. One active area of controversy concerns the relation of inner speech to auditory verbal hallucinations (AVHs) in schizophrenia, with a common proposal being that sufferers of AVH misidentify their own inner speech as being generated by someone else. Recently, researchers have used artificial intelligence to translate the neural and neuromuscular signatures of inner speech into corresponding outer speech signals, laying the groundwork for a variety of new applications and interventions. This article is categorized under: Philosophy > Foundations of Cognitive Science; Linguistics > Language in Mind and Brain; Philosophy > Consciousness; Philosophy > Psychological Capacities.
8
Differential contributions of the two cerebral hemispheres to temporal and spectral speech feedback control. Nat Commun 2020;11:2839. PMID: 32503986; PMCID: PMC7275068; DOI: 10.1038/s41467-020-16743-2.
Abstract
Proper speech production requires auditory speech feedback control. Models of speech production associate this function with the right cerebral hemisphere while the left hemisphere is proposed to host speech motor programs. However, previous studies have investigated only spectral perturbations of the auditory speech feedback. Since auditory perception is known to be lateralized, with right-lateralized analysis of spectral features and left-lateralized processing of temporal features, it is unclear whether the observed right-lateralization of auditory speech feedback processing reflects a preference for speech feedback control or for spectral processing in general. Here we use a behavioral speech adaptation experiment with dichotically presented altered auditory feedback and an analogous fMRI experiment with binaurally presented altered feedback to confirm a right hemisphere preference for spectral feedback control and to reveal a left hemisphere preference for temporal feedback control during speaking. These results indicate that auditory feedback control involves both hemispheres with differential contributions along the spectro-temporal axis. Speech production is thought to rely on speech motor programs in the left cerebral hemisphere and on auditory feedback control by the right half of the human brain. Here, the authors reveal that the left hemisphere preferentially controls temporal speech features while the right hemisphere controls speech by analyzing spectral features of the auditory feedback.
9
Jacobs CL, Loucks TM, Watson DG, Dell GS. Masking auditory feedback does not eliminate repetition reduction. Lang Cogn Neurosci 2019;35:485-497. PMID: 35992578; PMCID: PMC9390968; DOI: 10.1080/23273798.2019.1693051.
Abstract
Repetition reduces word duration. Explanations of this process have appealed to audience design, internal production mechanisms, and combinations thereof (e.g. Kahn & Arnold, 2015). Jacobs, Yiu, Watson, and Dell (2015) proposed the auditory feedback hypothesis, which states that speakers must hear a word, produced either by themselves or by another speaker, for duration reduction to occur on a subsequent production. We conducted a strong test of the auditory feedback hypothesis in two experiments, using masked auditory feedback and whispering to prevent speakers from hearing themselves fully. Despite limiting the sources of normal auditory feedback, both experiments showed repetition reduction to equal extents in masked and unmasked conditions, suggesting that repetition reduction may be supported by multiple sources, such as somatosensory feedback and feedforward signals, depending on their availability.
10
Grandchamp R, Rapin L, Perrone-Bertolotti M, Pichat C, Haldin C, Cousin E, Lachaux JP, Dohen M, Perrier P, Garnier M, Baciu M, Lœvenbruck H. The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Front Psychol 2019;10:2019. PMID: 31620039; PMCID: PMC6759632; DOI: 10.3389/fpsyg.2019.02019.
Abstract
Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed forms of inner speech have been described that are thought to lack acoustic, phonological, and even syntactic qualities; expanded forms, at the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogs involving our own voice as well as the voices of others addressing us. Along intentionality, it can be intentional (when we deliberately rehearse material in short-term memory) or arise unintentionally (during mind wandering). We introduce the ConDialInt model, a neurocognitive predictive control model of inner speech that accounts for its varieties along these three dimensions. ConDialInt spells out the condensation dimension by including inhibitory control at the conceptualization, formulation, or articulatory planning stage. It accounts for dialogality by assuming internal model adaptations and by speculating on neural processes underlying perspective switching. It explains the differences between intentional and spontaneous varieties in terms of monitoring. We present an fMRI study in which we probed varieties of inner speech along dialogality and intentionality, to examine the validity of the neuroanatomical correlates posited in ConDialInt. Condensation was also informally tackled. Our data support the hypothesis that expanded inner speech recruits speech production processes down to articulatory planning, resulting in a predicted signal, the inner voice, with auditory qualities. Along dialogality, covertly using an avatar's voice resulted in the activation of right-hemisphere homologs of the regions involved in internal own-voice soliloquy and in reduced cerebellar activation, consistent with internal model adaptation.
Switching from first-person to third-person perspective resulted in activations in precuneus and parietal lobules. Along intentionality, compared with intentional inner speech, mind wandering with inner speech episodes was associated with greater bilateral inferior frontal activation and decreased activation in left temporal regions. This is consistent with the reported subjective evanescence and presumably reflects condensation processes. Our results provide neuroanatomical evidence compatible with predictive control and in favor of the assumptions made in the ConDialInt model.
Affiliation(s)
- Romain Grandchamp
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Lucile Rapin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Cédric Pichat
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Célise Haldin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Emilie Cousin
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Jean-Philippe Lachaux
- INSERM U1028, CNRS UMR5292, Brain Dynamics and Cognition Team, Lyon Neurosciences Research Center, Bron, France
- Marion Dohen
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Maëva Garnier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Monica Baciu
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Hélène Lœvenbruck
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France

11
Geva S, Fernyhough C. A Penny for Your Thoughts: Children's Inner Speech and Its Neuro-Development. Front Psychol 2019;10:1708. PMID: 31474897; PMCID: PMC6702515; DOI: 10.3389/fpsyg.2019.01708.
Abstract
Inner speech emerges in early childhood, in parallel with the maturation of the dorsal language stream. To date, the developmental relations between these two processes have not been examined. We review evidence that the dorsal language stream has a role in supporting the psychological phenomenon of inner speech, before considering pediatric studies of the dorsal stream's anatomical development and evidence for its emerging functional roles. We examine possible causal accounts of the relations between these two developmental processes and consider their implications for phylogenetic theories about the evolution of inner speech and the accounts of the ontogenetic relations between language and cognition.
Affiliation(s)
- Sharon Geva
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom

12
Abstract
Objective: Inner speech, or the ability to talk to yourself in your head, is one of the most ubiquitous phenomena of everyday experience. Recent years have seen growing interest in the role and function of inner speech in various typical and cognitively impaired populations. Although people vary in their ability to produce inner speech, there is currently no test battery that can be used to evaluate this ability. Here we developed a test battery for evaluating individual differences in the ability to access the auditory word form internally. Methods: We developed and standardized five tests: rhyme judgment of pictures and written words, homophone judgment of written words and non-words, and judgment of lexical stress of written words. The tasks were administered to healthy adult native British English speakers (age range 20-72; n = 28-97, varying between tests). Results: In all tests, some items were excluded based on low success rates among participants or documented regional variability in accent. Level of education, but not age, correlated with task performance for some of the tasks, and there were no gender differences in performance. Conclusion: The standardization process resulted in a battery of tests that can be used to assess the natural variability of inner speech abilities among English-speaking adults.
Affiliation(s)
- Sharon Geva
- Department of Clinical Neurosciences, University of Cambridge, R3 Neurosciences - Box 83, Addenbrooke's Hospital, Cambridge, UK
- Elizabeth A Warburton
- Department of Clinical Neurosciences, University of Cambridge, R3 Neurosciences - Box 83, Addenbrooke's Hospital, Cambridge, UK

13
Molinaro N, Monsalve IF. Perceptual facilitation of word recognition through motor activation during sentence comprehension. Cortex 2018;108:144-159. PMID: 30172097; DOI: 10.1016/j.cortex.2018.07.001.
Abstract
Despite the growing literature on anticipatory language processing, the brain dynamics of this high-level predictive process are still unclear. In the present MEG study, we analyzed pre- and post-stimulus oscillatory activity time-locked to the reading of a target word. We experimentally contrasted the processing of the same target word following two highly constraining sentence contexts, in which the constraint was driven either by the semantic content or by the lexical association between words. Previous research suggests the presence of sensory facilitation for expected words in the latter condition but not in the former. We observed a dissociation between beta (∼20 Hz) and gamma (>50 Hz) band activity in pre- and post-stimulus time intervals respectively. Both the beta and gamma effects were evident in occipital brain regions, and only the pre-stimulus beta effect additionally involved left pre-articulatory motor regions. Lexically constrained (vs. semantically constrained) words elicited reduced beta power around 400 msec before the target word in motor regions and a functionally related gamma enhancement in occipital regions around 200 msec post-target. The present findings highlight the role of the motor network in word-form prediction and support proposals claiming that low-level perceptual representations can be pre-activated during language prediction.
Affiliation(s)
- Nicola Molinaro
- BCBL, Basque center on Cognition, Brain and Language, Donostia/San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Irene F Monsalve
- BCBL, Basque center on Cognition, Brain and Language, Donostia/San Sebastian, Spain

14
Kell CA, Neumann K, Behrens M, von Gudenberg AW, Giraud AL. Speaking-related changes in cortical functional connectivity associated with assisted and spontaneous recovery from developmental stuttering. J Fluency Disord 2018;55:135-144. PMID: 28216127; DOI: 10.1016/j.jfludis.2017.02.001.
Abstract
We previously reported speaking-related activity changes associated with assisted recovery induced by a fluency shaping therapy program and with unassisted recovery from developmental stuttering (Kell et al., Brain 2009). While assisted recovery re-lateralized activity to the left hemisphere, unassisted recovery was specifically associated with the activation of the left BA 47/12 in the lateral orbitofrontal cortex. These findings suggested plastic changes in speaking-related functional connectivity between left hemispheric speech network nodes. We reanalyzed these data involving 13 stuttering men before and after fluency shaping, 13 men who recovered spontaneously from their stuttering, and 13 male control participants, and examined functional connectivity during overt vs. covert reading by means of psychophysiological interactions computed across left cortical regions involved in articulation control. Persistent stuttering was associated with reduced auditory-motor coupling and enhanced integration of somatosensory feedback between the supramarginal gyrus and the prefrontal cortex. Assisted recovery reduced this hyper-connectivity and increased functional connectivity between the articulatory motor cortex and the auditory feedback processing anterior superior temporal gyrus. In spontaneous recovery, both auditory-motor coupling and integration of somatosensory feedback were normalized. In addition, activity in the left orbitofrontal cortex and superior cerebellum appeared uncoupled from the rest of the speech production network. These data suggest that therapy and spontaneous recovery normalize left hemispheric speaking-related activity via an improvement of auditory-motor mapping. By contrast, long-lasting unassisted recovery from stuttering is additionally supported by a functional isolation of the superior cerebellum from the rest of the speech production network, through the pivotal left BA 47/12.
Affiliation(s)
- Christian A Kell
- Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Katrin Neumann
- Department of Phoniatrics and Pediatric Audiology, Clinic of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr University Bochum, Bochum, Germany
- Marion Behrens
- Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Anne-Lise Giraud
- Département des Neuroscience Fondamentales, Université de Genève, Switzerland