1. Ashjaei S, Behroozmand R, Fozdar S, Farrar R, Arjmandi M. Vocal control and speech production in cochlear implant listeners: A review within auditory-motor processing framework. Hear Res 2024;453:109132. PMID: 39447319. DOI: 10.1016/j.heares.2024.109132.
Abstract
A comprehensive literature review is conducted to summarize and discuss prior findings on how cochlear implants (CIs) affect users' abilities to produce and control vocal and articulatory movements within the auditory-motor integration framework of speech. Patterns of speech production pre- versus post-implantation, post-implantation adjustments, deviations from the typical ranges of speakers with normal hearing (NH), the effects of switching the CI on and off, as well as the impact of altered auditory feedback on vocal and articulatory speech control are discussed. Overall, findings indicate that CIs enhance the vocal and articulatory control aspects of speech production at both segmental and suprasegmental levels. While many CI users achieve speech quality comparable to NH individuals, some features still deviate in a subset of CI users even years post-implantation. More specifically, contracted vowel space, increased vocal jitter and shimmer, longer phoneme and utterance durations, shorter voice onset time, decreased contrast in fricative production, limited prosodic patterns, and reduced intelligibility have been reported in subgroups of CI users compared to NH individuals. Significant individual variation among CI users has been observed in both the pace of speech production adjustments and long-term speech outcomes. Few controlled studies have explored how implantation age and duration of CI use influence speech features, leaving substantial gaps in our understanding of the effects of spectral resolution, auditory rehabilitation, and individual auditory-motor processing abilities on vocal and articulatory speech outcomes in CI users. Future studies under the auditory-motor integration framework are warranted to determine how suboptimal CI auditory feedback impacts auditory-motor processing and precise vocal and articulatory control in CI users.
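For readers unfamiliar with the voice measures named in this abstract, local jitter and shimmer are simple cycle-to-cycle perturbation statistics. The sketch below is an editorial illustration (not code from the reviewed studies) and assumes glottal period durations and cycle peak amplitudes have already been extracted, for example with a pitch tracker.

```python
import numpy as np

def local_jitter(periods_s):
    """Local jitter (%): mean absolute difference between consecutive
    glottal periods, divided by the mean period."""
    periods = np.asarray(periods_s, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(peak_amplitudes):
    """Local shimmer (%): mean absolute difference between consecutive
    cycle peak amplitudes, divided by the mean amplitude."""
    amps = np.asarray(peak_amplitudes, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(amps))) / np.mean(amps)

# Toy cycle data for a ~200 Hz phonation (values are made up)
rng = np.random.default_rng(0)
periods = 0.005 + 0.00005 * rng.standard_normal(100)   # seconds per glottal cycle
amps = 1.0 + 0.02 * rng.standard_normal(100)           # arbitrary amplitude units
print(f"jitter  = {local_jitter(periods):.2f} %")
print(f"shimmer = {local_shimmer(amps):.2f} %")
```

Higher values on either measure indicate greater cycle-to-cycle instability of the voice source.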
Affiliation(s)
- Samin Ashjaei: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Roozbeh Behroozmand: Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2811 North Floyd Road, Richardson, TX 75080, USA
- Shaivee Fozdar: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Reed Farrar: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Meisam Arjmandi: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA; Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC 29208, USA
2. Li JJ, Daliri A, Kim KS, Max L. Does pre-speech auditory modulation reflect processes related to feedback monitoring or speech movement planning? Neurosci Lett 2024;843:138025. PMID: 39461704. DOI: 10.1016/j.neulet.2024.138025.
Abstract
Previous studies have revealed that auditory processing is modulated during the planning phase immediately prior to speech onset. To date, the functional relevance of this pre-speech auditory modulation (PSAM) remains unknown. Here, we investigated whether PSAM reflects neuronal processes that are associated with preparing auditory cortex for optimized feedback monitoring as reflected in online speech corrections. Combining electroencephalographic PSAM data from a previous data set with new acoustic measures of the same participants' speech, we asked whether individual speakers' extent of PSAM is correlated with the implementation of within-vowel articulatory adjustments during /b/-vowel-/d/ word productions. Online articulatory adjustments were quantified as the extent of change in inter-trial formant variability from vowel onset to vowel midpoint (a phenomenon known as centering). This approach allowed us to also consider inter-trial variability in formant production, and its possible relation to PSAM, at vowel onset and midpoint separately. Results showed that inter-trial formant variability was significantly smaller at vowel midpoint than at vowel onset. PSAM was not significantly correlated with this amount of change in variability as an index of within-vowel adjustments. Surprisingly, PSAM was negatively correlated with inter-trial formant variability not only in the middle but also at the very onset of the vowels. Thus, speakers with more PSAM produced formants that were already less variable at vowel onset. Findings suggest that PSAM may reflect processes that influence speech acoustics as early as vowel onset and, thus, that are directly involved in motor command preparation (feedforward control) rather than output monitoring (feedback control).
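Vowel centering, as described in this abstract, is a reduction in inter-trial formant variability between vowel onset and midpoint. The sketch below is a generic illustration of that computation, assuming per-trial F1/F2 values have already been measured in the two windows; the variable names and the use of the median production as the reference point are illustrative, not taken from the paper.

```python
import numpy as np

def dispersion(formants, center):
    """Mean Euclidean distance of per-trial (F1, F2) points from a center point."""
    return np.mean(np.linalg.norm(formants - center, axis=1))

def vowel_centering(onset_f1f2, mid_f1f2):
    """Change in inter-trial formant variability from vowel onset to midpoint.

    onset_f1f2, mid_f1f2: arrays of shape (n_trials, 2) with (F1, F2) per trial.
    Positive values indicate that productions converged toward the median
    (i.e., centering)."""
    onset = np.asarray(onset_f1f2, float)
    mid = np.asarray(mid_f1f2, float)
    median_mid = np.median(mid, axis=0)          # putative vowel target
    return dispersion(onset, median_mid) - dispersion(mid, median_mid)

# Toy example: 50 trials that start scattered and end closer to the target
rng = np.random.default_rng(1)
target = np.array([700.0, 1200.0])               # /a/-like F1, F2 in Hz
onset = target + rng.normal(0, 60, size=(50, 2))
mid = target + rng.normal(0, 30, size=(50, 2))
print(f"centering = {vowel_centering(onset, mid):.1f} Hz")
```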
Affiliation(s)
- Joanne Jingwen Li: Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Seattle, WA 98105-6246, United States
- Ayoub Daliri: College of Health Solutions, Arizona State University, 975 S Myrtle Ave., Tempe, AZ 85287, United States
- Kwang S Kim: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122, United States
- Ludo Max: Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd Street, Seattle, WA 98105-6246, United States
3. Schreiner MR, Feustel S, Kunde W. Linking actions and memories: Probing the interplay of action-effect congruency, agency experience, and recognition memory. Mem Cognit 2024. PMID: 39382829. DOI: 10.3758/s13421-024-01644-2.
Abstract
Adult humans experience agency when their actions cause certain events (sense of agency). Moreover, they can later remember what these events were (memory). Here, we investigate how the relationship between actions and events shapes agency experience and memory for the corresponding events. Participants performed actions that produced stimuli that were either congruent or incongruent with the action, and memory for these stimuli was later probed in a recognition test. Additionally, predictability of the effect was manipulated in Experiment 1 by using either randomly interleaved or blocked ordering of action-congruent and action-incongruent events. In Experiment 2, the size of the action space was manipulated by allowing participants to choose between three or six possible responses. The results indicated a heightened sense of agency following congruent compared to incongruent trials, with this effect being larger given a larger available action space, as well as a greater sense of agency given higher predictability of the effect. Recognition memory was better for stimuli presented in congruent compared to incongruent trials, with no discernible effects of effect predictability or the size of the action space. The results point towards a joint influence of predictive and postdictive processes on agency experience and suggest a link between control and memory. The partial dissociation of influences on agency experience and memory casts doubt on a mediating role of agency experience in the relationship between action-effect congruency and memory. Theoretical accounts of this relationship are discussed.
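Recognition memory in designs like this one is commonly summarized with a sensitivity index such as d-prime; the abstract does not specify the paper's exact dependent measure, so the following is only a generic sketch with made-up hit and false-alarm counts.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Recognition sensitivity d' with a simple correction to avoid
    infinite z-scores when a rate equals 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Toy counts for items studied on congruent vs. incongruent trials
print(f"congruent   d' = {d_prime(42, 8, 12, 38):.2f}")
print(f"incongruent d' = {d_prime(35, 15, 12, 38):.2f}")
```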
Affiliation(s)
- Marcel R Schreiner: Julius-Maximilians-Universität Würzburg, Röntgenring 11, 97070 Würzburg, Germany
- Shenna Feustel: Julius-Maximilians-Universität Würzburg, Röntgenring 11, 97070 Würzburg, Germany
- Wilfried Kunde: Julius-Maximilians-Universität Würzburg, Röntgenring 11, 97070 Würzburg, Germany
4. Kim KS, Hinkley LB, Brent K, Gaines JL, Pongos AL, Gupta S, Dale CL, Nagarajan SS, Houde JF. Neurophysiological evidence of sensory prediction errors driving speech sensorimotor adaptation. bioRxiv 2024:2023.10.22.563504 [preprint]. PMID: 37961099. PMCID: PMC10634734. DOI: 10.1101/2023.10.22.563504.
Abstract
The human sensorimotor system has a remarkable ability to quickly and efficiently learn movements from sensory experience. A prominent example is sensorimotor adaptation, learning that characterizes the sensorimotor system's response to persistent sensory errors by adjusting future movements to compensate for those errors. Despite being essential for maintaining and fine-tuning motor control, the mechanisms underlying sensorimotor adaptation remain unclear. A component of sensorimotor adaptation is implicit (i.e., the learner is unaware of the learning process), and it has been suggested to result from sensory prediction errors: the discrepancies between predicted sensory consequences of motor commands and actual sensory feedback. However, to date no direct neurophysiological evidence that sensory prediction errors drive adaptation has been reported. Here, we examined prediction errors via magnetoencephalography (MEG) imaging of the auditory cortex (n = 34) during sensorimotor adaptation of speech to altered auditory feedback, an entirely implicit adaptation task. Specifically, we measured how speaking-induced suppression (SIS), a neural representation of auditory prediction errors, changed over the trials of the adaptation experiment. SIS refers to the suppression of the auditory cortical response to speech onset (in particular, the M100 response) for self-produced speech when compared to the response during passive listening to identical playback of that speech. SIS was reduced (reflecting larger prediction errors) during the early learning phase compared to the initial unaltered feedback phase. Furthermore, the reduction in SIS positively correlated with the extent of behavioral adaptation, suggesting that larger prediction errors were associated with more learning. In contrast, such a reduction in SIS was not found in a control experiment in which participants heard unaltered feedback and thus did not adapt. In addition, in some participants who reached a plateau in the late learning phase, SIS increased (reflecting smaller prediction errors), demonstrating that prediction errors were minimal when there was no further adaptation. Together, these findings provide the first neurophysiological evidence for the hypothesis that sensory prediction errors drive human sensorimotor adaptation.
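As defined in this abstract, SIS is the difference between the M100 response to passive playback and to self-produced speech, and the reported result is a correlation between the reduction in SIS and behavioral adaptation. The sketch below illustrates that contrast and correlation on simulated data; it assumes single-trial M100 amplitudes are already available and is not the authors' analysis pipeline.

```python
import numpy as np
from scipy import stats

def sis(m100_listen, m100_speak):
    """Speaking-induced suppression for one participant and phase:
    mean M100 to playback minus mean M100 to self-produced speech."""
    return np.mean(m100_listen) - np.mean(m100_speak)

rng = np.random.default_rng(2)
n_participants, n_trials = 34, 60
delta_sis, adaptation = [], []
for _ in range(n_participants):
    listen = rng.normal(20.0, 3.0, n_trials)                # playback trials
    speak_base = listen - rng.normal(8.0, 1.0, n_trials)    # suppressed at baseline
    speak_early = listen - rng.normal(6.0, 1.0, n_trials)   # less suppressed early in learning
    drop = sis(listen, speak_base) - sis(listen, speak_early)
    delta_sis.append(drop)
    adaptation.append(4.0 * drop + rng.normal(0, 2.0))      # toy: larger error, more learning

r, p = stats.pearsonr(delta_sis, adaptation)
print(f"SIS reduction vs. behavioral adaptation: r = {r:.2f}, p = {p:.3g}")
```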
Affiliation(s)
- Kwang S. Kim: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Leighton B. Hinkley: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Kurtis Brent: UC Berkeley - UCSF Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA
- Jessica L. Gaines: UC Berkeley - UCSF Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA
- Alvincé L. Pongos: UC Berkeley - UCSF Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA
- Saloni Gupta: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Corby L. Dale: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Srikantan S. Nagarajan: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- John F. Houde: UC Berkeley - UCSF Graduate Program in Bioengineering, University of California San Francisco, San Francisco, CA, USA
5. Ozker M, Yu L, Dugan P, Doyle W, Friedman D, Devinsky O, Flinker A. Speech-induced suppression and vocal feedback sensitivity in human cortex. eLife 2024;13:RP94198. PMID: 39255194. PMCID: PMC11386952. DOI: 10.7554/elife.94198.
Abstract
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has been previously confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
Affiliation(s)
- Muge Ozker: Neurology Department, New York University, New York, United States; Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Leyao Yu: Neurology Department, New York University, New York, United States; Biomedical Engineering Department, New York University, New York, United States
- Patricia Dugan: Neurology Department, New York University, New York, United States
- Werner Doyle: Neurosurgery Department, New York University, New York, United States
- Daniel Friedman: Neurology Department, New York University, New York, United States
- Orrin Devinsky: Neurology Department, New York University, New York, United States
- Adeen Flinker: Neurology Department, New York University, New York, United States; Biomedical Engineering Department, New York University, New York, United States
6. Bach P, Frank C, Kunde W. Why motor imagery is not really motoric: towards a re-conceptualization in terms of effect-based action control. Psychol Res 2024;88:1790-1804. PMID: 36515699. PMCID: PMC11315751. DOI: 10.1007/s00426-022-01773-w.
Abstract
Overt and imagined action seem inextricably linked. Both have similar timing, activate shared brain circuits, and motor imagery influences overt action and vice versa. Motor imagery is, therefore, often assumed to recruit the same motor processes that govern action execution, and which allow one to play through or simulate actions offline. Here, we advance a very different conceptualization. On this view, the links between imagery and overt action do not arise because action imagery is intrinsically motoric, but because action planning is intrinsically imaginistic and occurs in terms of the perceptual effects one wants to achieve. Seen this way, the term 'motor imagery' is a misnomer for what is more appropriately portrayed as 'effect imagery'. In this article, we review the long-standing arguments for effect-based accounts of action, which are often ignored in motor imagery research. We show that such views provide a straightforward account of motor imagery. We review the evidence for imagery-execution overlaps through this new lens and argue that they indeed emerge because every action we execute is planned, initiated and controlled through an imagery-like process. We highlight findings that this new view can now explain and point out open questions.
Affiliation(s)
- Patric Bach: School of Psychology, University of Aberdeen, William Guild Building, Kings College, Aberdeen, UK
- Cornelia Frank: Department of Sports and Movement Science, School of Educational and Cultural Studies, Osnabrück University, Osnabrück, Germany
- Wilfried Kunde: Department of Psychology, Julius-Maximilians-Universität Würzburg, Röntgenring 11, Würzburg, Germany
7. Subrahmanya A, Ranasinghe KG, Kothare H, Raharjo I, Kim KS, Houde JF, Nagarajan SS. Pitch corrections occur in natural speech and are abnormal in patients with Alzheimer's disease. Front Hum Neurosci 2024;18:1424920. PMID: 39234407. PMCID: PMC11371567. DOI: 10.3389/fnhum.2024.1424920.
Abstract
Past studies have explored formant centering, a corrective behavior in which productions converge over the duration of an utterance toward the formants of a putative target vowel. In this study, we establish the existence of a similar centering phenomenon for pitch in healthy elderly controls and examine how such corrective behavior is altered in Alzheimer's disease (AD). We found that the pitch centering response in healthy elderly controls was similar when correcting pitch errors below and above the target (median) pitch. In contrast, patients with AD showed an asymmetry, with a larger correction for pitch errors below the target phonation than above it. These findings indicate that pitch centering is a robust compensatory behavior in human speech. Our findings also point to potential impacts of neurodegenerative processes that affect speech in AD on pitch centering.
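Pitch centering parallels the formant centering described above: utterances that begin above or below the speaker's median pitch drift back toward it. The sketch below is a rough, generic illustration that assumes per-utterance pitch deviations (in cents from the median) have already been extracted in an early and a middle window; splitting trials by starting direction yields the asymmetry discussed in the abstract. Names and windows are illustrative only.

```python
import numpy as np

def pitch_centering(onset_cents, mid_cents):
    """Mean correction toward the median pitch, split by starting direction.

    onset_cents, mid_cents: per-trial deviations (cents) from the median pitch
    in an early and a middle window of the utterance. Positive values mean the
    production moved toward the target."""
    onset = np.asarray(onset_cents, float)
    mid = np.asarray(mid_cents, float)
    correction = np.abs(onset) - np.abs(mid)
    below, above = onset < 0, onset > 0
    return correction[below].mean(), correction[above].mean()

rng = np.random.default_rng(3)
onset = rng.normal(0, 40, 200)                 # initial deviation from median (cents)
mid = onset * 0.6 + rng.normal(0, 10, 200)     # partial correction by mid-utterance
low, high = pitch_centering(onset, mid)
print(f"centering below target: {low:.1f} cents, above target: {high:.1f} cents")
```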
Affiliation(s)
- Anantajit Subrahmanya: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Kamalini G Ranasinghe: Department of Neurology, University of California, San Francisco, San Francisco, CA, United States
- Hardik Kothare: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Inez Raharjo: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Kwang S Kim: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- John F Houde: Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Srikantan S Nagarajan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
8. Parrell B, Naber C, Kim OA, Nizolek CA, McDougle SD. Audiomotor prediction errors drive speech adaptation even in the absence of overt movement. bioRxiv 2024:2024.08.13.607718 [preprint]. PMID: 39185222. PMCID: PMC11343123. DOI: 10.1101/2024.08.13.607718.
Abstract
Observed outcomes of our movements sometimes differ from our expectations. These sensory prediction errors recalibrate the brain's internal models for motor control, reflected in alterations to subsequent movements that counteract these errors (motor adaptation). While leading theories suggest that all forms of motor adaptation are driven by learning from sensory prediction errors, dominant models of speech adaptation argue that adaptation results from integrating time-advanced copies of corrective feedback commands into feedforward motor programs. Here, we tested these competing theories of speech adaptation by inducing planned, but not executed, speech. Human speakers (male and female) were prompted to speak a word and, on a subset of trials, were rapidly cued to withhold the prompted speech. On standard trials, speakers were exposed to real-time playback of their own speech with an auditory perturbation of the first formant to induce single-trial speech adaptation. Speakers experienced a similar sensory error on movement cancelation trials, hearing a perturbation applied to a recording of their speech from a previous trial at the time they would have spoken. Speakers adapted to auditory prediction errors in both contexts, altering the spectral content of spoken vowels to counteract formant perturbations even when no actual movement coincided with the perturbed feedback. These results build upon recent findings in reaching, and suggest that prediction errors, rather than corrective motor commands, drive adaptation in speech.
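Single-trial adaptation of the kind described here is typically quantified as the change in produced F1 on the trial following a perturbed trial, signed so that changes opposing the perturbation count as adaptation. The sketch below illustrates that logic on simulated data; the sign convention, variable names, and toy parameters are mine, not the authors'.

```python
import numpy as np

def single_trial_adaptation(f1_produced_hz, f1_shift_hz):
    """Mean opposing change in produced F1 on trials following a perturbation.

    f1_produced_hz: produced F1 per trial.
    f1_shift_hz: feedback perturbation applied on each trial (0 if unperturbed).
    Returns the mean change from perturbed trial t to trial t+1, signed so that
    positive values oppose the perturbation (adaptation)."""
    f1 = np.asarray(f1_produced_hz, float)
    shift = np.asarray(f1_shift_hz, float)
    perturbed = np.flatnonzero(shift[:-1] != 0)       # trials t with a shift
    change = f1[perturbed + 1] - f1[perturbed]        # response on trial t+1
    return np.mean(-np.sign(shift[perturbed]) * change)

# Toy series: upward F1 shifts on random trials, small opposing responses afterwards
rng = np.random.default_rng(4)
n = 200
shift = np.where(rng.random(n) < 0.3, 100.0, 0.0)
f1 = 700 + rng.normal(0, 8, n)
f1[1:] -= 0.1 * shift[:-1]                            # oppose the previous trial's shift
print(f"mean single-trial adaptation: {single_trial_adaptation(f1, shift):.1f} Hz")
```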
Affiliation(s)
- Benjamin Parrell: Waisman Center, University of Wisconsin-Madison; Department of Communication Sciences and Disorders, University of Wisconsin-Madison
- Chris Naber: Waisman Center, University of Wisconsin-Madison
- Caroline A Nizolek: Waisman Center, University of Wisconsin-Madison; Department of Communication Sciences and Disorders, University of Wisconsin-Madison
- Samuel D McDougle: Department of Psychology, Yale University; Wu Tsai Institute, Yale University
9. Li JJ, Daliri A, Kim KS, Max L. Does pre-speech auditory modulation reflect processes related to feedback monitoring or speech movement planning? bioRxiv 2024:2024.07.13.603344 [preprint]. PMID: 39026879. PMCID: PMC11257623. DOI: 10.1101/2024.07.13.603344.
Abstract
Previous studies have revealed that auditory processing is modulated during the planning phase immediately prior to speech onset. To date, the functional relevance of this pre-speech auditory modulation (PSAM) remains unknown. Here, we investigated whether PSAM reflects neuronal processes that are associated with preparing auditory cortex for optimized feedback monitoring as reflected in online speech corrections. Combining electroencephalographic PSAM data from a previous data set with new acoustic measures of the same participants' speech, we asked whether individual speakers' extent of PSAM is correlated with the implementation of within-vowel articulatory adjustments during /b/-vowel-/d/ word productions. Online articulatory adjustments were quantified as the extent of change in inter-trial formant variability from vowel onset to vowel midpoint (a phenomenon known as centering). This approach allowed us to also consider inter-trial variability in formant production and its possible relation to PSAM at vowel onset and midpoint separately. Results showed that inter-trial formant variability was significantly smaller at vowel midpoint than at vowel onset. PSAM was not significantly correlated with this amount of change in variability as an index of within-vowel adjustments. Surprisingly, PSAM was negatively correlated with inter-trial formant variability not only in the middle but also at the very onset of the vowels. Thus, speakers with more PSAM produced formants that were already less variable at vowel onset. Findings suggest that PSAM may reflect processes that influence speech acoustics as early as vowel onset and, thus, that are directly involved in motor command preparation (feedforward control) rather than output monitoring (feedback control).
Affiliation(s)
- Ludo Max: University of Washington, Seattle, WA, USA
10. Ozker M, Yu L, Dugan P, Doyle W, Friedman D, Devinsky O, Flinker A. Speech-induced suppression and vocal feedback sensitivity in human cortex. bioRxiv 2024:2023.12.08.570736 [preprint]. PMID: 38370843. PMCID: PMC10871232. DOI: 10.1101/2023.12.08.570736.
Abstract
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has been previously confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
Affiliation(s)
- Muge Ozker: Neurology Department, New York University, New York, 10016, NY, USA; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Leyao Yu: Neurology Department, New York University, New York, 10016, NY, USA; Biomedical Engineering Department, New York University, Brooklyn, 11201, NY, USA
- Patricia Dugan: Neurology Department, New York University, New York, 10016, NY, USA
- Werner Doyle: Neurosurgery Department, New York University, New York, 10016, NY, USA
- Daniel Friedman: Neurology Department, New York University, New York, 10016, NY, USA
- Orrin Devinsky: Neurology Department, New York University, New York, 10016, NY, USA
- Adeen Flinker: Neurology Department, New York University, New York, 10016, NY, USA; Biomedical Engineering Department, New York University, Brooklyn, 11201, NY, USA
11. Beach SD, Tang DL, Kiran S, Niziolek CA. Pars Opercularis Underlies Efferent Predictions and Successful Auditory Feedback Processing in Speech: Evidence From Left-Hemisphere Stroke. Neurobiol Lang 2024;5:454-483. PMID: 38911464. PMCID: PMC11192514. DOI: 10.1162/nol_a_00139.
Abstract
Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere motor planning regions are thought to give rise to efferent predictions that can be compared to true feedback in sensory cortices, resulting in neural suppression commensurate with the degree of overlap between predicted and actual sensations. Sensory prediction errors thus serve as a possible mechanism of detection of deviant speech sounds, which can then feed back into corrective action, allowing for online control of speech acoustics. The goal of this study was to assess the integrity of this detection-correction circuit in persons with aphasia (PWA) whose left-hemisphere lesions may limit their ability to control variability in speech output. We recorded magnetoencephalography (MEG) while 15 PWA and age-matched controls spoke monosyllabic words and listened to playback of their utterances. From this, we measured speaking-induced suppression of the M100 neural response and related it to lesion profiles and speech behavior. Both speaking-induced suppression and cortical sensitivity to deviance were preserved at the group level in PWA. PWA with more spared tissue in pars opercularis had greater left-hemisphere neural suppression and greater behavioral correction of acoustically deviant pronunciations, whereas sparing of superior temporal gyrus was not related to neural suppression or acoustic behavior. In turn, PWA who made greater corrections had fewer overt speech errors in the MEG task. Thus, the motor planning regions that generate the efferent prediction are integral to performing corrections when that prediction is violated.
Affiliation(s)
- Ding-lan Tang: Waisman Center, The University of Wisconsin–Madison; Academic Unit of Human Communication, Development, and Information Sciences, University of Hong Kong, Hong Kong, SAR China
- Swathi Kiran: Department of Speech, Language & Hearing Sciences, Boston University
- Caroline A. Niziolek: Waisman Center, The University of Wisconsin–Madison; Department of Communication Sciences and Disorders, The University of Wisconsin–Madison
12. Tremblay P, Sato M. Movement-related cortical potential and speech-induced suppression during speech production in younger and older adults. Brain Lang 2024;253:105415. PMID: 38692095. DOI: 10.1016/j.bandl.2024.105415.
Abstract
With age, the speech system undergoes important changes that render speech production more laborious, slower, and often less intelligible. Yet the neural mechanisms that underlie these age-related changes remain unclear. In this EEG study, we examined two important mechanisms in speech motor control: the pre-speech movement-related cortical potential (MRCP), which reflects speech motor planning, and speaking-induced suppression (SIS), which indexes auditory predictions of speech motor commands, in 20 healthy young and 20 healthy older adults. Participants undertook a vowel production task, which was followed by passive listening to their own recorded vowels. Our results revealed extensive differences in the MRCP in older compared to younger adults. Further, while longer N1 and P2 latencies were observed in older adults, SIS was preserved. The reduced MRCP appears to be a potential explanatory mechanism for the known age-related slowing of speech production, while preserved SIS suggests intact motor-to-auditory integration.
Affiliation(s)
- Pascale Tremblay: Université Laval, Faculté de Médecine, Département de Réadaptation, Quebec City G1V 0A6, Canada; CERVO Brain Research Center, Quebec City G1J 2G3, Canada
- Marc Sato: Laboratoire Parole et Langage, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France
13. Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Processing of auditory feedback in perisylvian and insular cortex. bioRxiv 2024:2024.05.14.593257 [preprint]. PMID: 38798574. PMCID: PMC11118286. DOI: 10.1101/2024.05.14.593257.
Abstract
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how other auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
Affiliation(s)
- Garret Lynn Kurteff: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Alyssa M. Field: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Saman Asghar: Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Elizabeth C. Tyler-Kabara: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Dave Clarke: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Howard L. Weiner: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Anne E. Anderson: Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Andrew J. Watrous: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Robert J. Buchanan: Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Pradeep N. Modur: Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Liberty S. Hamilton (lead contact): Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA; Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
14. Tsunada J, Wang X, Eliades SJ. Multiple processes of vocal sensory-motor interaction in primate auditory cortex. Nat Commun 2024;15:3093. PMID: 38600118. PMCID: PMC11006904. DOI: 10.1038/s41467-024-47510-2.
Abstract
Sensory-motor interactions in the auditory system play an important role in vocal self-monitoring and control. These result from top-down corollary discharges, relaying predictions about vocal timing and acoustics. Recent evidence suggests that such signals may reflect two distinct processes rather than a single mechanism: one suppressing neural activity during vocalization and another enhancing sensitivity to sensory feedback. Single-neuron recordings have been unable to disambiguate these processes because motor signals overlap with sensory inputs. Here, we sought to disentangle these processes in marmoset auditory cortex during production of multi-phrased 'twitter' vocalizations. Temporal responses revealed two timescales of vocal suppression: temporally precise phasic suppression during phrases and sustained tonic suppression. Both components were present within individual neurons; however, phasic suppression presented broadly regardless of frequency tuning (gating), while tonic suppression was selective for vocal frequencies and feedback (prediction). This suggests that auditory cortex is modulated by concurrent corollary discharges during vocalization, with different computational mechanisms.
Affiliation(s)
- Joji Tsunada: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; Chinese Institute for Brain Research, Beijing, China
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Steven J Eliades: Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA; Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC, USA
15. Tan S, Jia Y, Jariwala N, Zhang Z, Brent K, Houde J, Nagarajan S, Subramaniam K. A randomised controlled trial investigating the causal role of the medial prefrontal cortex in mediating self-agency during speech monitoring and reality monitoring. Sci Rep 2024;14:5108. PMID: 38429404. PMCID: PMC10907680. DOI: 10.1038/s41598-024-55275-3.
Abstract
Self-agency is the awareness of being the agent of one's own thoughts and actions. Self-agency is essential for interacting with the outside world (reality-monitoring). The medial prefrontal cortex (mPFC) is thought to be one neural correlate of self-agency. We investigated whether mPFC activity can causally modulate self-agency on two different tasks of speech-monitoring and reality-monitoring. The experience of self-agency is thought to result from making reliable predictions about the expected outcomes of one's own actions. This self-prediction ability is necessary for the encoding and memory retrieval of one's own thoughts during reality-monitoring to enable accurate judgments of self-agency. This self-prediction ability is also necessary for speech-monitoring, where speakers consistently compare auditory feedback (what we hear ourselves say) with what we expect to hear while speaking. In this study, 30 healthy participants were assigned to either 10 Hz repetitive transcranial magnetic stimulation (rTMS) to enhance mPFC excitability (N = 15) or 10 Hz rTMS targeting a distal temporoparietal site (N = 15). High-frequency rTMS to mPFC enhanced self-predictions during speech-monitoring, which in turn predicted improved self-agency judgments during reality-monitoring. This is the first study to provide robust evidence that the mPFC plays a causal role in self-agency, a role that results from the fundamental ability to improve self-predictions across two different tasks.
Affiliation(s)
- Songyuan Tan: Department of Psychiatry, University of California, 513 Parnassus Avenue, HSE604, San Francisco, CA, 94143, USA
- Yingxin Jia: Department of Psychiatry, University of California, 513 Parnassus Avenue, HSE604, San Francisco, CA, 94143, USA
- Namasvi Jariwala: Department of Psychology, Palo Alto University, Palo Alto, CA, USA
- Zoey Zhang: Department of Otolaryngology, University of California, San Francisco, San Francisco, CA, USA
- Kurtis Brent: Department of Otolaryngology, University of California, San Francisco, San Francisco, CA, USA
- John Houde: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Srikantan Nagarajan: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Karuna Subramaniam: Department of Psychiatry, University of California, 513 Parnassus Avenue, HSE604, San Francisco, CA, 94143, USA
16. Borjigin A, Bakst S, Anderson K, Litovsky RY, Niziolek CA. Discrimination and sensorimotor adaptation of self-produced vowels in cochlear implant users. J Acoust Soc Am 2024;155:1895-1908. PMID: 38456732. PMCID: PMC11527478. DOI: 10.1121/10.0025063.
Abstract
Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have helped over a million people restore access to auditory feedback, which significantly improves speech production. However, there is substantial variability in outcomes. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and make adjustments to their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested can detect small, within-channel differences in their self-produced vowels, and they can utilize this auditory feedback towards speech adaptation. An acoustic hearing control group showed better sensitivity to the shifts in vowels, even in CI-simulated speech, and elicited more robust speech adaptation behavior than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.
Affiliation(s)
- Agudemu Borjigin: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Sarah Bakst: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Katla Anderson: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Ruth Y Litovsky: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA; Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Caroline A Niziolek: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA; Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
17. Mårup SH, Kleber BA, Møller C, Vuust P. When direction matters: Neural correlates of interlimb coordination of rhythm and beat. Cortex 2024;172:86-108. PMID: 38241757. DOI: 10.1016/j.cortex.2023.11.019.
Abstract
In a previous experiment, we found evidence for a bodily hierarchy governing interlimb coordination of rhythm and beat, using five effectors: 1) Left foot, 2) Right foot, 3) Left hand, 4) Right hand and 5) Voice. The hierarchy implies that, during simultaneous rhythm and beat performance and using combinations of two of these effectors, executing the task by performing the rhythm with an effector that has a higher number than the beat effector is significantly easier than vice versa. To investigate the neural underpinnings of this proposed bodily hierarchy, we here scanned 46 professional musicians using fMRI as they performed a rhythmic pattern with one effector while keeping the beat with another. The conditions combined the voice and the right hand (V + RH), the right hand and the left hand (RH + LH), and the left hand and the right foot (LH + RF). Each effector combination was performed with and against the bodily hierarchy. Going against the bodily hierarchy increased tapping errors significantly and also increased activity in key brain areas functionally associated with top-down sensorimotor control and bottom-up feedback processing, such as the cerebellum and SMA. Conversely, going with the bodily hierarchy engaged areas functionally associated with the default mode network and regions involved in emotion processing. Theories of general brain function that hold prediction as a key principle propose that action and perception are governed by the brain's attempt to minimise prediction error at different levels in the brain. Following this viewpoint, our results indicate that going against the hierarchy induces stronger prediction errors, while going with the hierarchy allows for a higher degree of automatization. Our results also support the notion of a bodily hierarchy in motor control that prioritizes certain conductive and supportive tapping roles in specific effector combinations.
Affiliation(s)
- Signe H Mårup: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Boris A Kleber: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Cecilie Møller: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Peter Vuust: Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
18. Jia Y, Kudo K, Jariwala N, Tarapore P, Nagarajan S, Subramaniam K. Causal role of medial superior frontal cortex on enhancing neural information flow and self-agency judgments in the self-agency network. medRxiv 2024:2024.02.13.24302764 [preprint]. PMID: 38405834. PMCID: PMC10888992. DOI: 10.1101/2024.02.13.24302764.
Abstract
Self-agency is being aware of oneself as the agent of one's thoughts and actions. Self-agency is necessary for successful interactions with the outside world (reality-monitoring). Prior research has shown that the medial superior prefrontal gyri (mPFC/SFG) may represent one neural correlate underlying self-agency judgments. However, the causal relationship remains unknown. Here, we applied high-frequency 10Hz repetitive transcranial magnetic stimulation (rTMS) to modulate the excitability of the mPFC/SFG site that we have previously shown to mediate self-agency. For the first time, we delineate causal neural mechanisms, revealing precisely how rTMS modulates SFG excitability and impacts directional neural information flow in the self-agency network by implementing innovative magnetoencephalography (MEG) phase-transfer entropy (PTE) metrics, measured from pre-to-post rTMS. We found that, compared to control rTMS, enhancing SFG excitability by rTMS induced significant increases in information flow between SFG and specific cingulate and paracentral regions in the self-agency network in delta-theta, alpha, and gamma bands, which predicted improved self-agency judgments. This is the first multimodal imaging study in which we implement MEG PTE metrics of 5D imaging of space, frequency and time, to provide cutting-edge analyses of the causal neural mechanisms of how rTMS enhances SFG excitability and improves neural information flow between distinct regions in the self-agency network to potentiate improved self-agency judgments. Our findings provide a novel perspective for investigating causal neural mechanisms underlying self-agency and create a path towards developing novel neuromodulation interventions to improve self-agency that will be particularly useful for patients with psychosis who exhibit severe impairments in self-agency.
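Phase-transfer entropy (PTE), the directed connectivity metric named in this abstract, quantifies how much the phase of one signal improves prediction of another signal's future phase. The authors' implementation is not described in the abstract; the sketch below is only a coarse, generic histogram-based estimator for two band-limited signals, with arbitrary bin count, delay, and toy data.

```python
import numpy as np
from scipy.signal import hilbert

def phase_transfer_entropy(x, y, delay=1, n_bins=8):
    """Histogram-based transfer entropy between instantaneous phase series,
    from x to y (in bits). A coarse estimator for illustration only."""
    px = np.angle(hilbert(x))                  # instantaneous phase of source
    py = np.angle(hilbert(y))                  # instantaneous phase of target
    yf, yp, xp = py[delay:], py[:-delay], px[:-delay]
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bin_of = lambda a: np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
    yf, yp, xp = bin_of(yf), bin_of(yp), bin_of(xp)

    joint = np.zeros((n_bins,) * 3)            # p(y_future, y_past, x_past)
    np.add.at(joint, (yf, yp, xp), 1)
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0)
    p_yf_yp = joint.sum(axis=2)
    p_yp = joint.sum(axis=(0, 2))

    nz = joint > 0
    yf_i, yp_i, xp_i = np.nonzero(nz)
    num = joint[nz] * p_yp[yp_i]
    den = p_yp_xp[yp_i, xp_i] * p_yf_yp[yf_i, yp_i]
    return float(np.sum(joint[nz] * np.log2(num / den)))

# Toy example: y is a delayed, noisy copy of x, so information flows x -> y
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 2000)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
y = np.roll(x, 5) + 0.5 * rng.standard_normal(t.size)
print(f"PTE x->y: {phase_transfer_entropy(x, y):.3f} bits")
print(f"PTE y->x: {phase_transfer_entropy(y, x):.3f} bits")
```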
19. Arjmandi MK, Behroozmand R. On the interplay between speech perception and production: insights from research and theories. Front Neurosci 2024;18:1347614. PMID: 38332858. PMCID: PMC10850291. DOI: 10.3389/fnins.2024.1347614.
Abstract
The study of spoken communication has long been entrenched in a debate surrounding the interdependence of speech production and perception. This mini review summarizes findings from prior studies to elucidate the reciprocal relationships between speech production and perception. We also discuss key theoretical perspectives relevant to speech perception-production loop, including hyper-articulation and hypo-articulation (H&H) theory, speech motor theory, direct realism theory, articulatory phonology, the Directions into Velocities of Articulators (DIVA) and Gradient Order DIVA (GODIVA) models, and predictive coding. Building on prior findings, we propose a revised auditory-motor integration model of speech and provide insights for future research in speech perception and production, focusing on the effects of impaired peripheral auditory systems.
Affiliation(s)
- Meisam K. Arjmandi: Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
- Roozbeh Behroozmand: Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
20. Chung LKH, Jack BN, Griffiths O, Pearson D, Luque D, Harris AWF, Spencer KM, Le Pelley ME, So SHW, Whitford TJ. Neurophysiological evidence of motor preparation in inner speech and the effect of content predictability. Cereb Cortex 2023;33:11556-11569. PMID: 37943760. PMCID: PMC10751289. DOI: 10.1093/cercor/bhad389.
Abstract
Self-generated overt actions are preceded by a slow negativity as measured by electroencephalogram, which has been associated with motor preparation. Recent studies have shown that this neural activity is modulated by the predictability of action outcomes. It is unclear whether inner speech is also preceded by a motor-related negativity and influenced by the same factor. In three experiments, we compared the contingent negative variation elicited in a cue paradigm in an active vs. passive condition. In Experiment 1, participants produced an inner phoneme, at which an audible phoneme whose identity was unpredictable was concurrently presented. We found that while passive listening elicited a late contingent negative variation, inner speech production generated a more negative late contingent negative variation. In Experiment 2, the same pattern of results was found when participants were instead asked to overtly vocalize the phoneme. In Experiment 3, the identity of the audible phoneme was made predictable by establishing probabilistic expectations. We observed a smaller late contingent negative variation in the inner speech condition when the identity of the audible phoneme was predictable, but not in the passive condition. These findings suggest that inner speech is associated with motor preparatory activity that may also represent the predicted action-effects of covert actions.
Affiliation(s)
- Lawrence K-h Chung: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China
- Bradley N Jack: Research School of Psychology, Australian National University, Building 39, Science Road, Canberra ACT 2601, Australia
- Oren Griffiths: School of Psychological Sciences, University of Newcastle, Behavioural Sciences Building, University Drive, Callaghan NSW 2308, Australia
- Daniel Pearson: School of Psychology, University of Sydney, Griffith Taylor Building, Manning Road, Camperdown NSW 2006, Australia
- David Luque: Department of Basic Psychology and Speech Therapy, University of Malaga, Faculty of Psychology, Dr Ortiz Ramos Street, 29010 Malaga, Spain
- Anthony W F Harris: Westmead Clinical School, University of Sydney, 176 Hawkesbury Road, Westmead NSW 2145, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia
- Kevin M Spencer: Research Service, Veterans Affairs Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, 150 South Huntington Avenue, Boston MA 02130, United States
- Mike E Le Pelley: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia
- Suzanne H-w So: Department of Psychology, The Chinese University of Hong Kong, 3/F Sino Building, Chung Chi Road, Shatin, New Territories, Hong Kong SAR, China
- Thomas J Whitford: School of Psychology, University of New South Wales (UNSW Sydney), Mathews Building, Library Walk, Kensington NSW 2052, Australia; Brain Dynamics Centre, Westmead Institute for Medical Research, 176 Hawkesbury Road, Westmead NSW 2145, Australia
21. Kurteff GL, Lester-Smith RA, Martinez A, Currens N, Holder J, Villarreal C, Mercado VR, Truong C, Huber C, Pokharel P, Hamilton LS. Speaker-induced Suppression in EEG during a Naturalistic Reading and Listening Task. J Cogn Neurosci 2023;35:1538-1556. PMID: 37584593. DOI: 10.1162/jocn_a_02037.
Abstract
Speaking elicits a suppressed neural response when compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has focused on investigating SIS at constrained levels of linguistic representation, such as the individual phoneme and word level. Here, we present scalp EEG data from a dual speech perception and production task where participants read sentences aloud and then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of a former trial to investigate whether forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG based on phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly between production and perception. However, this similarity was only observed when controlling for movement by using the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from higher order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations when analyzing EEG during continuous speech production.
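The encoding-model idea can be illustrated with a small sketch: a ridge regression that predicts a single EEG channel from time-lagged regressors (phonological features plus an EMG channel). The lag range, feature counts, and simulated data below are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch of a linear encoding model in the spirit described above:
# predict one EEG channel from time-lagged phonological-feature and EMG
# regressors with ridge regression. Data, lags, and shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_features = 20_000, 15      # e.g., 14 phonological features + 1 EMG envelope
FS = 128
lags = range(0, int(0.4 * FS))          # 0-400 ms of lags (assumed)

stim = rng.normal(size=(n_samples, n_features))

def lagged_design(stim: np.ndarray, lags) -> np.ndarray:
    """Stack time-lagged copies of the regressors into one design matrix."""
    cols = []
    for lag in lags:
        shifted = np.zeros_like(stim)
        if lag == 0:
            shifted[:] = stim
        else:
            shifted[lag:] = stim[:-lag]
        cols.append(shifted)
    return np.hstack(cols)

X = lagged_design(stim, lags)
true_w = rng.normal(size=X.shape[1]) * (rng.random(X.shape[1]) < 0.05)
eeg = X @ true_w + rng.normal(scale=5.0, size=n_samples)   # simulated channel

X_tr, X_te, y_tr, y_te = train_test_split(X, eeg, test_size=0.25, shuffle=False)
model = Ridge(alpha=1e3).fit(X_tr, y_tr)
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```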
22
Tan S, Jia Y, Jariwala N, Zhang Z, Brent K, Houde J, Nagarajan S, Subramaniam K. A randomised controlled trial investigating the causal role of the medial prefrontal cortex in mediating self-agency during speech monitoring and reality monitoring. RESEARCH SQUARE 2023:rs.3.rs-3280599. [PMID: 37790323 PMCID: PMC10543504 DOI: 10.21203/rs.3.rs-3280599/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Self-agency is the awareness of oneself as the agent of one's thoughts and actions. Self-agency is necessary for successful interactions with the external world (reality-monitoring). The medial prefrontal cortex (mPFC) is considered to represent one neural correlate underlying self-agency. We investigated whether mPFC activity can causally modulate self-agency on two different tasks involving speech-monitoring and reality-monitoring. The experience of self-agency is thought to result from being able to reliably predict the sensory outcomes of one's own actions. This self-prediction ability is necessary for successfully encoding and recalling one's own thoughts to enable accurate self-agency judgments during reality-monitoring tasks. This self-prediction ability is also necessary during speech-monitoring tasks, where speakers compare what they hear themselves say in auditory feedback with what they predict they will hear while speaking. In this randomised-controlled study, healthy controls (HC) were assigned to either high-frequency transcranial magnetic stimulation (TMS) to enhance mPFC excitability or TMS targeting a control site. After TMS to the mPFC, HC showed improved self-predictions during speech-monitoring tasks, which predicted improved self-agency judgments during separate reality-monitoring tasks. These first-of-their-kind findings demonstrate how the mPFC plays a causal role in self-agency, which results from the fundamental ability to improve self-predictions across two different tasks.
Affiliation(s)
- Songyuan Tan
  University of California San Francisco Medical Center
- Yingxin Jia
  University of California San Francisco Medical Center
- Zoey Zhang
  University of California San Francisco Medical Center
- Kurtis Brent
  University of California San Francisco Medical Center
- John Houde
  University of California San Francisco Medical Center
23
Cuadros J, Z-Rivera L, Castro C, Whitaker G, Otero M, Weinstein A, Martínez-Montes E, Prado P, Zañartu M. DIVA Meets EEG: Model Validation Using Formant-Shift Reflex. APPLIED SCIENCES (BASEL, SWITZERLAND) 2023; 13:7512. [PMID: 38435340 PMCID: PMC10906992 DOI: 10.3390/app13137512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 03/05/2024]
Abstract
The neurocomputational model 'Directions into Velocities of Articulators' (DIVA) was developed to account for various aspects of normal and disordered speech production and acquisition. The neural substrates of DIVA were established through functional magnetic resonance imaging (fMRI), providing physiological validation of the model. This study introduces DIVA_EEG, an extension of DIVA that utilizes electroencephalography (EEG) to leverage the high temporal resolution and broad availability of EEG over fMRI. For the development of DIVA_EEG, EEG-like signals were derived from the original equations describing the activity of the different DIVA maps. Synthetic EEG associated with the utterance of syllables was generated when both unperturbed and perturbed auditory feedback (first formant perturbations) were simulated. The cortical activation maps derived from synthetic EEG closely resembled those of the original DIVA model. To validate DIVA_EEG, the EEG of individuals with typical voices (N = 30) was acquired during an altered auditory feedback paradigm. The resulting empirical brain activity maps significantly overlapped with those predicted by DIVA_EEG. In conjunction with other recent model extensions, DIVA_EEG lays the foundations for constructing a complete neurocomputational framework to tackle vocal and speech disorders, which can guide model-driven personalized interventions.
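As a rough intuition for how map activity can be turned into channel-level signals, the sketch below projects a few simulated source time courses through a generic linear forward (leadfield) model. It is not the published DIVA_EEG formulation; all matrices, source labels, and dynamics are invented for illustration.

```python
# Toy illustration (not the published DIVA_EEG equations): project simulated
# source activity from a few model "maps" to scalp channels through a generic
# linear forward model. Source locations, leadfield, and dynamics are made up.
import numpy as np

rng = np.random.default_rng(2)
FS, DUR = 250, 2.0
t = np.arange(0, DUR, 1 / FS)

n_sources, n_channels = 4, 32           # e.g., auditory/somato/motor map sources (assumed)
leadfield = rng.normal(size=(n_channels, n_sources))   # stand-in gain matrix

# Source time courses: a transient "error map" response after a formant shift
# at t = 1.0 s, plus ongoing activity in the other maps.
sources = 0.5 * rng.normal(size=(n_sources, t.size))
shift_onset = t >= 1.0
sources[0] += 3.0 * shift_onset * np.exp(-(t - 1.0).clip(min=0) / 0.15)

noise = rng.normal(scale=0.2, size=(n_channels, t.size))
synthetic_eeg = leadfield @ sources + noise             # channels x samples

print("synthetic EEG shape:", synthetic_eeg.shape)      # (32, 500)
```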
Affiliation(s)
- Jhosmary Cuadros
  Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Grupo de Bioingeniería, Decanato de Investigación, Universidad Nacional Experimental del Táchira, San Cristóbal 5001, Venezuela
- Lucía Z-Rivera
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Christian Castro
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Grace Whitaker
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Mónica Otero
  Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Santiago 8420524, Chile
  Centro Basal Ciencia & Vida, Universidad San Sebastián, Santiago 8580000, Chile
- Alejandro Weinstein
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Pavel Prado
  Escuela de Fonoaudiología, Facultad de Odontología y Ciencias de la Rehabilitación, Universidad San Sebastián, Santiago 7510602, Chile
- Matías Zañartu
  Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
24
Kim KX, Dale CL, Ranasinghe KG, Kothare H, Beagle AJ, Lerner H, Mizuiri D, Gorno-Tempini ML, Vossel K, Nagarajan SS, Houde JF. Impaired Speaking-Induced Suppression in Alzheimer's Disease. eNeuro 2023; 10:ENEURO.0056-23.2023. [PMID: 37221089 PMCID: PMC10249944 DOI: 10.1523/eneuro.0056-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 04/04/2023] [Indexed: 05/25/2023] Open
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease involving cognitive impairment and abnormalities in speech and language. Here, we examine how AD affects the fidelity of auditory feedback predictions during speaking. We focus on the phenomenon of speaking-induced suppression (SIS), the suppression of auditory cortical responses during processing of auditory feedback. SIS is determined by subtracting the magnitude of auditory cortical responses during speaking from the magnitude during listening to playback of the same speech. Our state feedback control (SFC) model of speech motor control explains SIS as arising from the onset of auditory feedback matching a prediction of that feedback onset during speaking, a prediction that is absent during passive listening to playback of the auditory feedback. Our model hypothesizes that the auditory cortical response to auditory feedback reflects the mismatch with the prediction: small during speaking, large during listening, with the difference being SIS. Normally, during speaking, auditory feedback matches its prediction, so SIS will be large. Any reduction in SIS indicates inaccurate auditory feedback predictions that fail to match the actual feedback. We investigated SIS in AD patients [n = 20; mean (SD) age, 60.77 (10.04); female (%), 55.00] and healthy controls [n = 12; mean (SD) age, 63.68 (6.07); female (%), 83.33] through magnetoencephalography (MEG)-based functional imaging. We found a significant reduction in SIS at ∼100 ms in AD patients compared with healthy controls (linear mixed effects model, F(1,57.5) = 6.849, p = 0.011). The results suggest that AD patients generate inaccurate auditory feedback predictions, contributing to abnormalities in AD speech.
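The subtraction that defines SIS can be sketched in a few lines; the response magnitudes, group sizes, and the t-test used here in place of the study's linear mixed-effects model are illustrative assumptions.

```python
# Minimal sketch of the SIS computation described above: per-participant
# suppression is the auditory response magnitude during listening minus the
# magnitude during speaking, compared between groups. Values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical M100 response magnitudes (arbitrary units) per participant.
listen_ad, speak_ad = rng.normal(10, 2, 20), rng.normal(9, 2, 20)   # AD: little suppression
listen_hc, speak_hc = rng.normal(10, 2, 12), rng.normal(6, 2, 12)   # controls: larger suppression

sis_ad = listen_ad - speak_ad
sis_hc = listen_hc - speak_hc

t_val, p_val = stats.ttest_ind(sis_hc, sis_ad)
print(f"mean SIS  AD: {sis_ad.mean():.2f}   HC: {sis_hc.mean():.2f}")
print(f"group difference (t-test as a stand-in for the mixed model): t = {t_val:.2f}, p = {p_val:.3f}")
```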
Affiliation(s)
- Kyunghee X Kim
  Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA 94117
- Corby L Dale
  Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94117
- Kamalini G Ranasinghe
  Department of Neurology, University of California San Francisco, San Francisco, CA 94158
- Hardik Kothare
  Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94117
- Alexander J Beagle
  Department of Neurology, University of California San Francisco, San Francisco, CA 94158
- Hannah Lerner
  Department of Neurology, University of California San Francisco, San Francisco, CA 94158
- Danielle Mizuiri
  Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94117
- Keith Vossel
  Department of Neurology, University of California San Francisco, San Francisco, CA 94158
- Srikantan S Nagarajan
  Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94117
- John F Houde
  Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, San Francisco, CA 94117
25
Terband H, van Brenk F. Modeling Responses to Auditory Feedback Perturbations in Adults, Children, and Children With Complex Speech Sound Disorders: Evidence for Impaired Auditory Self-Monitoring? JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:1563-1587. [PMID: 37071803 DOI: 10.1044/2023_jslhr-22-00379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
PURPOSE Previous studies have found that typically developing (TD) children were able to compensate and adapt to auditory feedback perturbations to a similar or larger degree compared to young adults, while children with speech sound disorder (SSD) were found to produce predominantly following responses. However, large individual differences lie beneath the group-level results. This study investigates possible mechanisms in responses to formant shifts by modeling parameters of feedback and feedforward control of speech production based on behavioral data. METHOD SimpleDIVA was used to model an existing dataset of compensation/adaptation behavior to auditory feedback perturbations collected from three groups of Dutch speakers: 50 young adults, twenty-three 4- to 8-year-old children with TD speech, and seven 4- to 8-year-old children with SSD. Between-group and individual within-group differences in model outcome measures representing auditory and somatosensory feedback control gain and feedforward learning rate were assessed. RESULTS Notable between-group and within-group variation was found for all outcome measures. Data modeled for individual speakers yielded model fits with varying reliability. Auditory feedback control gain was negative in children with SSD and positive in both other groups. Somatosensory feedback control gain was negative for both groups of children and marginally negative for adults. Feedforward learning rate measures were highest in the children with TD speech, followed by children with SSD, compared to adults. CONCLUSIONS The SimpleDIVA model was able to account for responses to the perturbation of auditory feedback other than corrective ones, as negative auditory feedback control gains were associated with following responses to vowel shifts. These preliminary findings are suggestive of impaired auditory self-monitoring in children with complex SSD. Possible mechanisms underlying the nature of following responses are discussed.
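The parameter-fitting idea can be illustrated with a toy discrete-trial model in which an auditory feedback gain and a feedforward learning rate are estimated from a perturbation time series. This is a simplified stand-in, not the published SimpleDIVA equations; the update rule and the data are assumptions.

```python
# Toy discrete-trial model in the spirit of fitting feedback/feedforward
# parameters to perturbation responses (this is NOT the published SimpleDIVA
# equations; gains, update rule, and data are illustrative only).
import numpy as np
from scipy.optimize import least_squares

def simulate(params, perturb):
    """Produced formant deviation from baseline on each trial.

    alpha_a : auditory feedback gain (negative -> 'following' responses)
    lam     : feedforward learning rate
    """
    alpha_a, lam = params
    ff, out = 0.0, []
    for p in perturb:
        produced = ff - alpha_a * p          # within-trial feedback correction
        heard_error = produced + p           # what auditory feedback reports
        ff = ff - lam * heard_error          # feedforward update for next trial
        out.append(produced)
    return np.asarray(out)

rng = np.random.default_rng(4)
perturb = np.r_[np.zeros(20), np.full(60, 100.0), np.zeros(20)]   # hold phase of a 100-unit shift
observed = simulate((0.3, 0.1), perturb) + rng.normal(0, 5, perturb.size)

fit = least_squares(lambda p: simulate(p, perturb) - observed,
                    x0=[0.1, 0.05], bounds=([-1, 0], [1, 1]))
print("estimated auditory gain, learning rate:", np.round(fit.x, 3))
```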
Affiliation(s)
- Hayo Terband
  Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Frits van Brenk
  Faculty of Humanities, Department of Languages, Literature and Communication & Institute for Language Sciences, Utrecht University, the Netherlands
  Department of Communicative Disorders and Sciences, University at Buffalo, NY
26
Eliades SJ, Tsunada J. Effects of Cortical Stimulation on Feedback-Dependent Vocal Control in Non-Human Primates. Laryngoscope 2023; 133 Suppl 2:S1-S10. [PMID: 35538859 PMCID: PMC9649833 DOI: 10.1002/lary.30175] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 04/16/2022] [Accepted: 04/24/2022] [Indexed: 11/07/2022]
Abstract
OBJECTIVES Hearing plays an important role in our ability to control voice, and perturbations in auditory feedback result in compensatory changes in vocal production. The auditory cortex (AC) has been proposed as an important mediator of this behavior, but causal evidence is lacking. We tested this in an animal model, hypothesizing that AC is necessary for vocal self-monitoring and feedback-dependent control, and that altering activity in AC during vocalization will interfere with vocal control. METHODS We implanted two marmoset monkeys (Callithrix jacchus) with bilateral AC electrode arrays. Acoustic signals were recorded from vocalizing marmosets while altering vocal feedback or electrically stimulating AC during random subsets of vocalizations. Feedback was altered by real-time frequency shifts and presented through headphones, while electrical stimulation was delivered to individual electrodes. We analyzed the recordings to measure changes in vocal acoustics during shifted feedback and stimulation, and to determine their interaction. Results were correlated with the location and frequency tuning of stimulation sites. RESULTS Consistent with previous results, we found that electrical stimulation alone evoked changes in vocal production. Effects were stronger in the right hemisphere but decreased with lower currents or repeated stimulation. Simultaneous stimulation and shifted feedback significantly altered vocal control for a subset of sites, decreasing feedback compensation at some and increasing it at others. Inhibited compensation was more likely at sites closer to vocal frequencies. CONCLUSIONS Results provide causal evidence that the AC is involved in feedback-dependent vocal control, and that it is sufficient and may also be necessary to drive changes in vocal production. LEVEL OF EVIDENCE N/A Laryngoscope, 133:1-10, 2023.
Affiliation(s)
- Steven J Eliades
  Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
  Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Joji Tsunada
  Auditory and Communication Systems Laboratory, Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA
  Chinese Institute for Brain Research, Beijing, China
27
Floegel M, Kasper J, Perrier P, Kell CA. How the conception of control influences our understanding of actions. Nat Rev Neurosci 2023; 24:313-329. [PMID: 36997716 DOI: 10.1038/s41583-023-00691-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/28/2023] [Indexed: 04/01/2023]
Abstract
Wilful movement requires neural control. Commonly, neural computations are thought to generate motor commands that bring the musculoskeletal system - that is, the plant - from its current physical state into a desired physical state. The current state can be estimated from past motor commands and from sensory information. Modelling movement on the basis of this concept of plant control strives to explain behaviour by identifying the computational principles for control signals that can reproduce the observed features of movements. From an alternative perspective, movements emerge in a dynamically coupled agent-environment system from the pursuit of subjective perceptual goals. Modelling movement on the basis of this concept of perceptual control aims to identify the controlled percepts and their coupling rules that can give rise to the observed characteristics of behaviour. In this Perspective, we discuss a broad spectrum of approaches to modelling human motor control and their notions of control signals, internal models, handling of sensory feedback delays and learning. We focus on the influence that the plant control and the perceptual control perspective may have on decisions when modelling empirical data, which may in turn shape our understanding of actions.
Affiliation(s)
- Mareike Floegel
  Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany
- Johannes Kasper
  Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany
- Pascal Perrier
  Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Christian A Kell
  Department of Neurology and Brain Imaging Center, Goethe University Frankfurt, Frankfurt, Germany
28
Davis M, Redford MA. Learning and change in a dual lexicon model of speech production. Front Hum Neurosci 2023; 17:893785. [PMID: 36875228 PMCID: PMC9975561 DOI: 10.3389/fnhum.2023.893785] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 01/26/2023] [Indexed: 02/17/2023] Open
Abstract
Speech motor processes and phonological forms influence one another because speech and language are acquired and used together. This hypothesis underpins the Computational Core (CC) model, which provides a framework for understanding the limitations of perceptually-driven changes to production. The model assumes a lexicon of motor and perceptual wordforms linked to concepts and whole-word production based on these forms. Motor wordforms are built up with speech practice. Perceptual wordforms encode ambient language patterns in detail. Speech production is the integration of the two forms. Integration results in an output trajectory through perceptual-motor space that guides articulation. Assuming successful communication of the intended concept, the output trajectory is incorporated into the existing motor wordform for that concept. Novel word production exploits existing motor wordforms to define a perceptually-acceptable path through motor space that is further modified by the perceptual wordform during integration. Simulation results show that, by preserving a distinction between motor and perceptual wordforms in the lexicon, the CC model can account for practice-based changes in the production of known words and for the effect of expressive vocabulary size on production accuracy of novel words.
Affiliation(s)
- Maya Davis
  Department of Linguistics, University of Oregon, Eugene, OR, United States
- Melissa A Redford
  Department of Linguistics, University of Oregon, Eugene, OR, United States
29
Koçak OM, Ceran S, Üney PK, Hacıyev C. Obsessive-Compulsive Disorder from an Embodied Cognition Perspective. Noro Psikiyatr Ars 2022; 59:S50-S56. [PMID: 36578983 PMCID: PMC9767127 DOI: 10.29399/npa.28151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 05/27/2022] [Indexed: 12/31/2022] Open
Abstract
Obsessive Compulsive Disorder (OCD) is characterized by problems of control over behavior and cognition. Although almost all studies on the pathogenesis of OCD point to fronto-striatal dysfunction, these circuits alone have not yet yielded mechanisms that explain the entire clinical course of OCD. A more holistic explanation can be offered through the Embodied Cognition (EC) perspective, which suggests that alteration or dysfunction of low-level sensory-motor processes may appear as wide-ranging dysfunction of high-level cognitive processes. Fronto-striatal circuits play a fundamental role in behavioral control. These circuits also have a central role in feed-forward motor control (FFMC). In FFMC, the internal model of movement is driven by efference copies, which serve as templates for motor behavior without being adjusted by sensory information. If impairment of low-level sensory-motor processing is crucial to the occurrence of compulsions, one possible hypothesis is that the problem emerges in the generation of the efference copy within FFMC. The efference copy also plays a pivotal role in a subject's feeling of agency over an action. Therefore, a failure to successfully generate the efference copy may underlie subjects' experience of losing control over compulsive behaviors. In this paper, we discuss how the EC perspective, which can provide a biological basis for computationalism by bringing neuroscientific explanations of nervous system function closer to a symbolic perspective, may contribute to our understanding of the etiopathogenesis of OCD. Our approach is to integrate the theoretical basis provided by the EC perspective into current models of OCD rather than to falsify them.
Affiliation(s)
- Orhan Murat Koçak
  Başkent University Faculty of Medicine, Department of Psychiatry, Ankara, Turkey
  Correspondence: Orhan Murat Koçak, Başkent University Faculty of Medicine, Department of Psychiatry, Taşkent Cad. Şht. H. Temel Kuğuoğlu Sokak No: 30, 06490 Bahçelievler, Ankara, Turkey
- Selvi Ceran
  Başkent University Faculty of Medicine, Department of Psychiatry, Ankara, Turkey
- Pelin Kutlutürk Üney
  Başkent University Faculty of Medicine, Department of Psychiatry, Ankara, Turkey
- Ceyhun Hacıyev
  Başkent University Faculty of Medicine, Department of Psychiatry, Ankara, Turkey
30
Patel S. Towards a conative account of mental imagery. PHILOSOPHICAL PSYCHOLOGY 2022. [DOI: 10.1080/09515089.2022.2148521] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Affiliation(s)
- Shivam Patel
  Department of Philosophy, Florida State University, Tallahassee, Florida, USA
31
Frankford SA, Cai S, Nieto-Castañón A, Guenther FH. Auditory feedback control in adults who stutter during metronome-paced speech II. Formant Perturbation. JOURNAL OF FLUENCY DISORDERS 2022; 74:105928. [PMID: 36063640 PMCID: PMC9930613 DOI: 10.1016/j.jfludis.2022.105928] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 07/11/2022] [Accepted: 08/19/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE Prior work has shown that adults who stutter (AWS) have reduced and delayed responses to auditory feedback perturbations. This study aimed to determine whether external timing cues, which increase fluency, resolve auditory feedback processing disruptions. METHODS Fifteen AWS and sixteen adults who do not stutter (ANS) read aloud a multisyllabic sentence either with natural stress and timing or with each syllable paced at the rate of a metronome. On random trials, an auditory feedback formant perturbation was applied, and formant responses were compared between groups and pacing conditions. RESULTS During normally paced speech, ANS showed a significant compensatory response to the perturbation by the end of the perturbed vowel, while AWS did not. In the metronome-paced condition, which significantly reduced the disfluency rate, the opposite was true: AWS showed a significant response by the end of the vowel, while ANS did not. CONCLUSION These findings indicate a potential link between the reduction in stuttering found during metronome-paced speech and changes in auditory motor integration in AWS.
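A compensatory response of this kind is typically quantified as the difference between perturbed and control formant tracks, expressed relative to the applied shift; the sketch below shows one way to do this on simulated data, with the sampling rate, analysis window, and shift size as assumptions rather than the study's settings.

```python
# Sketch of quantifying the compensatory response: compare F1 on perturbed
# vs. unperturbed trials, express the difference as a percentage of the applied
# shift, and test the end-of-vowel window against zero. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
FS = 200                                  # formant-track frame rate (assumed)
t = np.arange(0, 0.4, 1 / FS)             # 400-ms vowel
shift_hz = 100.0                          # upward F1 perturbation applied to feedback

# Simulated F1 tracks (trials x samples): perturbed trials drift opposite the shift.
base = 600 + rng.normal(0, 15, (40, t.size))
comp = -0.25 * shift_hz * np.clip((t - 0.15) / 0.2, 0, 1)      # ramps in late in the vowel
perturbed = base + comp + rng.normal(0, 15, (40, t.size))
control = base + rng.normal(0, 15, (40, t.size))

end_window = t >= 0.35
end_response = (perturbed[:, end_window].mean(axis=1)
                - control[:, end_window].mean(axis=1)) / shift_hz * 100   # % of shift

t_val, p_val = stats.ttest_1samp(end_response, 0.0)
print(f"end-of-vowel response: {end_response.mean():.1f}% of the shift "
      f"(t = {t_val:.2f}, p = {p_val:.3f})")
```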
Affiliation(s)
- Saul A Frankford
  Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Shanqing Cai
  Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Alfonso Nieto-Castañón
  Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
- Frank H Guenther
  Department of Speech, Language, & Hearing Sciences, Boston University, Boston, MA 02215, USA
  Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
32
Railo H, Varjonen A, Lehtonen M, Sikka P. Event-Related Potential Correlates of Learning to Produce Novel Foreign Phonemes. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:599-614. [PMID: 37215343 PMCID: PMC10158638 DOI: 10.1162/nol_a_00080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 08/31/2022] [Indexed: 05/24/2023]
Abstract
Learning to pronounce a foreign phoneme requires an individual to acquire a motor program that enables the reproduction of the new acoustic target sound. This process is largely based on the use of auditory feedback to detect pronunciation errors and adjust vocalization accordingly. While early auditory evoked neural activity underlies automatic detection of and adaptation to vocalization errors, little is known about the neural correlates of acquiring novel speech targets. To investigate the neural processes that mediate the learning of foreign phoneme pronunciation, we recorded event-related potentials when participants (N = 19) pronounced native or foreign phonemes. Behavioral results indicated that the participants' pronunciation of the foreign phoneme improved during the experiment. Early auditory responses (N1 and P2 waves, approximately 85-290 ms after the sound onset) revealed no differences between foreign and native phonemes. In contrast, the amplitude of the frontocentrally distributed late slow wave (LSW, 320-440 ms) was modulated by the pronunciation of the foreign phonemes, and the effect changed during the experiment, paralleling the improvement in pronunciation. These results suggest that the LSW may reflect higher-order monitoring processes that signal successful pronunciation and help learn novel phonemes.
Affiliation(s)
- Henry Railo
  Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
  Turku Brain and Mind Centre, University of Turku, Turku, Finland
- Anni Varjonen
  Turku Brain and Mind Centre, University of Turku, Turku, Finland
- Minna Lehtonen
  Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
  Turku Brain and Mind Centre, University of Turku, Turku, Finland
  Center for Multilingualism in Society across the Lifespan, Department of Linguistics and Scandinavian Studies, University of Oslo, Oslo, Norway
- Pilleriin Sikka
  Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland
  Turku Brain and Mind Centre, University of Turku, Turku, Finland
  Department of Cognitive Neuroscience and Philosophy, School of Bioscience, University of Skövde, Skövde, Sweden
  Department of Psychology, Stanford University, Stanford, California, USA
33
Kitchen NM, Kim KS, Wang PZ, Hermosillo RJ, Max L. Individual sensorimotor adaptation characteristics are independent across orofacial speech movements and limb reaching movements. J Neurophysiol 2022; 128:696-710. [PMID: 35946809 PMCID: PMC9484989 DOI: 10.1152/jn.00167.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 07/20/2022] [Accepted: 08/08/2022] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor adaptation is critical for human motor control but shows considerable interindividual variability. Efforts are underway to identify factors accounting for individual differences in specific adaptation tasks. However, a fundamental question has remained unaddressed: Is an individual's capability for adaptation effector system specific or does it reflect a generalized adaptation ability? We therefore tested the same participants in analogous adaptation paradigms focusing on distinct sensorimotor systems: speaking with perturbed auditory feedback and reaching with perturbed visual feedback. Each task was completed once with the perturbation introduced gradually (ramped up over 60 trials) and, on a different day, once with the perturbation introduced suddenly. Consistent with studies of each system separately, visuomotor reach adaptation was more complete than auditory-motor speech adaptation (80% vs. 29% of the perturbation). Adaptation was not significantly correlated between the speech and reach tasks. Moreover, considered within tasks, 1) adaptation extent was correlated between the gradual and sudden conditions for reaching but not for speaking, 2) adaptation extent was correlated with additional measures of performance (e.g., trial duration, within-trial corrections) only for reaching and not for speaking, and 3) fitting individual participant adaptation profiles with exponential rather than linear functions offered a larger benefit [lower root mean square error (RMSE)] for the reach task than for the speech task. Combined, results suggest that the ability for sensorimotor adaptation relies on neural plasticity mechanisms that are effector system specific rather than generalized. This finding has important implications for ongoing efforts seeking to identify cognitive, behavioral, and neurochemical predictors of individual sensorimotor adaptation.NEW & NOTEWORTHY This study provides the first detailed demonstration that individual sensorimotor adaptation characteristics are independent across articulatory speech movements and limb reaching movements. Thus, individual sensorimotor learning abilities are effector system specific rather than generalized. Findings regarding one effector system do not necessarily apply to other systems, different underlying mechanisms may be involved, and implications for clinical rehabilitation or performance training also cannot be generalized.
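The exponential-versus-linear comparison mentioned above can be sketched directly: fit both functions to an adaptation profile and compare RMSE. The profile below is simulated and the functional forms are generic choices, not the study's exact fitting procedure.

```python
# Sketch of the model-comparison idea: fit each participant's adaptation
# profile with a linear and an exponential function and compare RMSE.
# The adaptation data here are simulated, not from the study.
import numpy as np
from scipy.optimize import curve_fit

def linear(trial, a, b):
    return a * trial + b

def exponential(trial, amp, rate, offset):
    return amp * (1 - np.exp(-rate * trial)) + offset

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

rng = np.random.default_rng(6)
trials = np.arange(60, dtype=float)
profile = 35 * (1 - np.exp(-0.08 * trials)) + rng.normal(0, 4, trials.size)  # % adaptation

lin_p, _ = curve_fit(linear, trials, profile)
exp_p, _ = curve_fit(exponential, trials, profile, p0=[30, 0.1, 0], maxfev=5000)

print("linear RMSE:     ", round(rmse(profile, linear(trials, *lin_p)), 2))
print("exponential RMSE:", round(rmse(profile, exponential(trials, *exp_p)), 2))
```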
Affiliation(s)
- Nick M Kitchen
  Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
- Kwang S Kim
  Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
- Prince Z Wang
  Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
- Robert J Hermosillo
  Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
- Ludo Max
  Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
  Haskins Laboratories, New Haven, Connecticut
34
Reinvestigating the Neural Bases Involved in Speech Production of Stutterers: An ALE Meta-Analysis. Brain Sci 2022; 12:brainsci12081030. [PMID: 36009093 PMCID: PMC9406059 DOI: 10.3390/brainsci12081030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Revised: 07/25/2022] [Accepted: 08/02/2022] [Indexed: 02/04/2023] Open
Abstract
Background: Stuttering is characterized by dysfluency and difficulty in speech production. Previous research has found abnormalities in the neural function of various brain areas during speech production tasks. However, the cognitive neural mechanism of stuttering has still not been fully determined. Method: An activation likelihood estimation (ALE) analysis of published studies was performed to provide neuroimaging evidence on the neural bases of stuttering. Results: Our analysis revealed overactivation in the bilateral posterior superior temporal gyrus, inferior frontal gyrus, medial frontal gyrus, precentral gyrus, postcentral gyrus, basal ganglia, and cerebellum, and deactivation in the anterior superior temporal gyrus and middle temporal gyrus among the stutterers. The overactivated regions might indicate a greater demand on feedforward planning in speech production, while the deactivated regions might indicate dysfunction in the auditory feedback system among stutterers. Conclusions: Our findings provide updated and direct evidence on the multi-level impairment (feedforward and feedback systems) of stutterers during speech production and show that the corresponding neural bases are dissociable.
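The core ALE idea, modeling each reported focus as a Gaussian probability blob and combining studies as a probabilistic union, can be illustrated on a toy grid. Real ALE implementations (e.g., GingerALE) add sample-size-dependent kernels, MNI space, and permutation-based thresholding, none of which are included in this sketch; all foci and parameters below are invented.

```python
# Toy illustration of the ALE idea on a tiny grid: each reported focus is
# modeled as a Gaussian probability blob, blobs are combined within a study,
# and studies are combined as a probabilistic union.
import numpy as np

GRID = 40                                       # toy 40x40x40 grid, arbitrary units
SIGMA = 2.0                                     # kernel width (assumed)

def modeled_activation(foci):
    """Per-study modeled-activation map: max over Gaussian blobs at the foci."""
    idx = np.indices((GRID, GRID, GRID)).astype(float)
    ma = np.zeros((GRID, GRID, GRID))
    for focus in foci:
        d2 = sum((idx[i] - focus[i]) ** 2 for i in range(3))
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA ** 2)))
    return ma

# Hypothetical foci (grid coordinates) for three studies.
studies = [[(10, 12, 20), (30, 12, 20)],
           [(11, 13, 21)],
           [(29, 11, 19), (20, 30, 25)]]

ale = 1.0 - np.prod([1.0 - modeled_activation(f) for f in studies], axis=0)
peak = np.unravel_index(np.argmax(ale), ale.shape)
print("peak ALE value:", round(float(ale.max()), 3), "at voxel", peak)
```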
35
Wang H, Max L. Inter-Trial Formant Variability in Speech Production Is Actively Controlled but Does Not Affect Subsequent Adaptation to a Predictable Formant Perturbation. Front Hum Neurosci 2022; 16:890065. [PMID: 35874163 PMCID: PMC9300893 DOI: 10.3389/fnhum.2022.890065] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Accepted: 06/14/2022] [Indexed: 11/13/2022] Open
Abstract
Despite ample evidence that speech production is associated with extensive trial-to-trial variability, it remains unclear whether this variability represents merely unwanted system noise or an actively regulated mechanism that is fundamental for maintaining and adapting accurate speech movements. Recent work on upper limb movements suggests that inter-trial variability may be not only actively regulated based on sensory feedback, but may also provide a type of workspace exploration that facilitates sensorimotor learning. We therefore investigated whether experimentally reducing or magnifying inter-trial formant variability in the real-time auditory feedback during speech production (a) leads to adjustments in formant production variability that compensate for the manipulation, (b) changes the temporal structure of formant adjustments across productions, and (c) enhances learning in a subsequent adaptation task in which a predictable formant-shift perturbation is applied to the feedback signal. Results show that subjects gradually increased formant variability in their productions when hearing auditory feedback with reduced variability, but subsequent formant-shift adaptation was not affected by either reducing or magnifying the perceived variability. Thus, the findings provide evidence for speakers' active control of inter-trial formant variability based on auditory feedback from previous trials but, at least for the current short-term experimental manipulation of feedback variability, not for a role of this variability regulation mechanism in subsequent auditory-motor learning.
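The feedback manipulation can be sketched as scaling each trial's deviation from a running across-trial mean before playback; the window length, gain values, and simulated productions below are illustrative assumptions rather than the study's implementation.

```python
# Sketch of the feedback-variability manipulation described above: each trial's
# formant deviation from a running across-trial mean is scaled by a gain before
# being played back (gain < 1 reduces, gain > 1 magnifies perceived variability).
import numpy as np

def manipulated_feedback(produced_f1, gain, window=10):
    """Return the F1 value to synthesize in the headphones on each trial."""
    fb = np.empty_like(produced_f1, dtype=float)
    for i, f1 in enumerate(produced_f1):
        history = produced_f1[max(0, i - window):i]
        center = history.mean() if history.size else f1
        fb[i] = center + gain * (f1 - center)
    return fb

rng = np.random.default_rng(7)
produced = 600 + rng.normal(0, 20, 120)          # produced F1 per trial (Hz), simulated

for gain in (0.25, 1.0, 2.0):
    fb = manipulated_feedback(produced, gain)
    print(f"gain {gain:>4}: produced SD = {produced.std():5.1f} Hz, "
          f"feedback SD = {fb.std():5.1f} Hz")
```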
Affiliation(s)
- Hantao Wang
  Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
- Ludo Max
  Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, United States
  Haskins Laboratories, New Haven, CT, United States
  Correspondence: Ludo Max
36
Neural correlates of impaired vocal feedback control in post-stroke aphasia. Neuroimage 2022; 250:118938. [PMID: 35092839 PMCID: PMC8920755 DOI: 10.1016/j.neuroimage.2022.118938] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 12/31/2021] [Accepted: 01/25/2022] [Indexed: 01/16/2023] Open
Abstract
We used left-hemisphere stroke as a model to examine how damage to sensorimotor brain networks impairs vocal auditory feedback processing and control. Individuals with post-stroke aphasia and matched neurotypical control subjects vocalized speech vowel sounds and listened to the playback of their self-produced vocalizations under normal (NAF) and pitch-shifted altered auditory feedback (AAF) while their brain activity was recorded using electroencephalography (EEG) signals. Event-related potentials (ERPs) were utilized as a neural index to probe the effect of vocal production on auditory feedback processing with high temporal resolution, while lesion data in the stroke group was used to determine how brain abnormality accounted for the impairment of such mechanisms. Results revealed that ERP activity was aberrantly modulated during vocalization vs. listening in aphasia, and this effect was accompanied by the reduced magnitude of compensatory vocal responses to pitch-shift alterations in the auditory feedback compared with control subjects. Lesion-mapping revealed that the aberrant pattern of ERP modulation in response to NAF was accounted for by damage to sensorimotor networks within the left-hemisphere inferior frontal, precentral, inferior parietal, and superior temporal cortices. For responses to AAF, neural deficits were predicted by damage to a distinguishable network within the inferior frontal and parietal cortices. These findings define the left-hemisphere sensorimotor networks implicated in auditory feedback processing, error detection, and vocal motor control. Our results provide translational synergy to inform the theoretical models of sensorimotor integration while having clinical applications for diagnosis and treatment of communication disabilities in individuals with stroke and other neurological conditions.
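Compensatory responses to pitch-shifted feedback are commonly summarized as the peak deviation of the f0 trace, in cents relative to a pre-shift baseline, in the direction opposing the shift. The sketch below illustrates that computation on simulated data; the frame rate, shift size, and response shape are all assumptions.

```python
# Sketch of quantifying compensatory vocal responses to a pitch-shifted
# feedback perturbation: convert the f0 trace to cents relative to the
# pre-shift baseline and take the peak deviation. Data are simulated.
import numpy as np

FS = 100                                   # f0 frames per second (assumed)
t = np.arange(0, 1.0, 1 / FS)              # 1-s analysis window, shift at t = 0
shift_cents = +100                         # upward pitch shift applied to feedback

rng = np.random.default_rng(8)
baseline_f0 = 120.0
# Simulated opposing (downward) response that peaks ~300 ms after shift onset.
response_shape = -np.exp(-((t - 0.3) ** 2) / (2 * 0.1 ** 2))
f0 = baseline_f0 * 2 ** ((15 * response_shape + rng.normal(0, 2, t.size)) / 1200)

cents = 1200 * np.log2(f0 / baseline_f0)
peak_idx = np.argmax(np.abs(cents))
print(f"peak compensation: {cents[peak_idx]:.1f} cents at {t[peak_idx]*1000:.0f} ms "
      f"(opposing the {shift_cents:+d}-cent shift)")
```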
37
Sato M. Motor and visual influences on auditory neural processing during speaking and listening. Cortex 2022; 152:21-35. [DOI: 10.1016/j.cortex.2022.03.013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Revised: 02/02/2022] [Accepted: 03/15/2022] [Indexed: 11/03/2022]
38
Tomassi NE, Weerathunge HR, Cushman MR, Bohland JW, Stepp CE. Assessing Ecologically Valid Methods of Auditory Feedback Measurement in Individuals With Typical Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:121-135. [PMID: 34941381 PMCID: PMC9153919 DOI: 10.1044/2021_jslhr-21-00377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 09/15/2021] [Accepted: 09/16/2021] [Indexed: 06/14/2023]
Abstract
PURPOSE Auditory feedback is thought to contribute to the online control of speech production. Yet, the standard method of estimating auditory feedback control (i.e., reflexive responses to auditory-motor perturbations), although sound, requires specialized instrumentation, meticulous calibration, unnatural tasks, and specific acoustic environments. The purpose of this study was to explore more ecologically valid features of speech production to determine their relationships with auditory feedback mechanisms. METHOD Two previously proposed measures of within-utterance variability (centering and baseline variability) were compared with reflexive response magnitudes in 30 adults with typical speech. These three measures were estimated for both the laryngeal and articulatory subsystems of speech. RESULTS Regardless of the speech subsystem, neither centering nor baseline variability was shown to be related to reflexive response magnitudes. Likewise, no relationships were found between centering and baseline variability. CONCLUSIONS Despite previous suggestions that centering and baseline variability may be related to auditory feedback mechanisms, this study did not support these assertions. However, the detection of such relationships may have required a larger degree of variability in responses, relative to that found in those with typical speech. Future research on these relationships is warranted in populations with more heterogeneous responses, such as children or clinical populations. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.17330546.
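Centering-style measures compare a token's distance from the talker's median formant values at vowel onset versus at midpoint; the sketch below illustrates one such computation on simulated F1/F2 data and is not the authors' exact formula.

```python
# Sketch of a "centering" measure: for each token, compute its distance to the
# talker's median formant values at vowel onset and at midpoint; centering is
# the onset distance minus the midpoint distance (positive values mean tokens
# move toward the median). Data are simulated.
import numpy as np

rng = np.random.default_rng(9)
n_tokens = 60
# F1/F2 (Hz) at vowel onset and midpoint for repeated productions of one vowel.
onset = rng.normal([600, 1800], [35, 70], size=(n_tokens, 2))
mid = onset * 0.6 + rng.normal([600, 1800], [20, 40], size=(n_tokens, 2)) * 0.4

def dist_to_median(points):
    return np.linalg.norm(points - np.median(points, axis=0), axis=1)

centering = dist_to_median(onset) - dist_to_median(mid)
print(f"mean centering: {centering.mean():.1f} Hz "
      f"({(centering > 0).mean() * 100:.0f}% of tokens moved toward the median)")
```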
Affiliation(s)
- Nicole E. Tomassi
  Graduate Program for Neuroscience, Boston University, MA
  Department of Speech, Language & Hearing Sciences, Boston University, MA
- Hasini R. Weerathunge
  Department of Speech, Language & Hearing Sciences, Boston University, MA
  Department of Biomedical Engineering, Boston University, MA
- Megan R. Cushman
  Department of Speech, Language & Hearing Sciences, Boston University, MA
- Jason W. Bohland
  Department of Communication Science and Disorders, University of Pittsburgh, PA
- Cara E. Stepp
  Graduate Program for Neuroscience, Boston University, MA
  Department of Speech, Language & Hearing Sciences, Boston University, MA
  Department of Biomedical Engineering, Boston University, MA
  Department of Otolaryngology-Head & Neck Surgery, Boston University School of Medicine, MA
39
Parthasharathy M, Mantini D, Orban de Xivry JJ. Increased upper-limb sensory attenuation with age. J Neurophysiol 2021; 127:474-492. [PMID: 34936521 DOI: 10.1152/jn.00558.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The pressure of our own finger on the arm feels different from the same pressure exerted by an external agent: the latter involves just touch, whereas the former involves a combination of touch and predictive output from the internal model of the body. This internal model predicts the movement of our own finger, and hence the intensity of the sensation of the finger press is decreased. A decrease in intensity of the self-produced stimulus is called sensory attenuation. It has been reported that, due to decreased proprioception with age and an increased reliance on the prediction of the internal model, sensory attenuation is increased in older adults. In this study, we used a force-matching paradigm to test whether sensory attenuation is also present over the arm and whether aging increases it. We demonstrated that, while both young and older adults overestimate a self-produced force, older adults overestimate it even more, showing increased sensory attenuation. We also found that both younger and older adults self-produce higher forces when activating the homologous muscles of the upper limb. While this is traditionally viewed as evidence for an increased reliance on internal model function in older adults because of decreased proprioception, proprioception appeared unimpaired in our older participants. This begs the question of whether an age-related decrease in proprioception is really responsible for the increased sensory attenuation observed in older people.
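The force-matching readout reduces to a simple overshoot computation, with attenuation estimated as reproduced force minus target force and compared across groups; the values and the simple t-test below are illustrative assumptions.

```python
# Sketch of the force-matching readout: sensory attenuation is estimated as the
# overshoot of the self-reproduced force relative to the externally applied
# target force, compared across age groups. All values are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
target = 2.0                                            # target force (N), assumed
matched_young = target + rng.normal(0.4, 0.3, 30)       # reproduced force (N)
matched_older = target + rng.normal(0.8, 0.3, 30)       # larger overshoot (assumed)

overshoot_young = matched_young - target
overshoot_older = matched_older - target
t_val, p_val = stats.ttest_ind(overshoot_older, overshoot_young)
print(f"mean overshoot  young: {overshoot_young.mean():.2f} N   "
      f"older: {overshoot_older.mean():.2f} N   (t = {t_val:.2f}, p = {p_val:.3g})")
```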
Affiliation(s)
- Manasa Parthasharathy
  Motor Control and Neuroplasticity Research group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
  Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Dante Mantini
  Motor Control and Neuroplasticity Research group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
  Brain Imaging and Neural Dynamics Research Group, IRCCS San Camillo Hospital, Venice, Italy
- Jean-Jacques Orban de Xivry
  Motor Control and Neuroplasticity Research group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
  Leuven Brain Institute, KU Leuven, Leuven, Belgium
40
The Role of the Medial Prefrontal Cortex in Self-Agency in Schizophrenia. JOURNAL OF PSYCHIATRY AND BRAIN SCIENCE 2021; 6. [PMID: 34761121 PMCID: PMC8577427 DOI: 10.20900/jpbs.20210017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Schizophrenia is a disorder of the self. In particular, patients show cardinal deficits in self-agency (i.e., the experience and awareness of being the agent of one's own thoughts and actions) that directly contribute to positive psychotic symptoms of hallucinations and delusions and distort reality monitoring (defined as distinguishing self-generated information from externally-derived information). Predictive coding models suggest that the experience of self-agency results from a minimal prediction error between the predicted sensory consequence of a self-generated action and the actual outcome. In other words, the experience of self-agency is thought to be driven by making reliable predictions about the expected outcomes of one's own actions. Most of the agency literature has focused on the motor system; here we present a novel viewpoint that examines agency through a different lens, using distinct tasks of reality monitoring and speech monitoring. The self-prediction mechanism that leads to self-agency is necessary for reality monitoring in that self-predictions represent a critical precursor for the successful encoding and memory retrieval of one's own thoughts and actions during reality monitoring to enable accurate self-agency judgments (i.e., accurate identification of self-generated information). This self-prediction mechanism is also critical for speech monitoring, where we continually compare auditory feedback (i.e., what we hear ourselves say) with what we expect to hear. Prior research has shown that the medial prefrontal cortex (mPFC) may represent one potential neural substrate of this self-prediction mechanism. Unfortunately, patients with schizophrenia (SZ) show mPFC hypoactivity associated with self-agency impairments on reality and speech monitoring tasks, as well as aberrant mPFC functional connectivity during intrinsic measures of agency in resting states, which predicted worsening psychotic symptoms. Causal neurostimulation and neurofeedback techniques can move the frontiers of schizophrenia research into a new era where we implement techniques to manipulate excitability in key neural regions, such as the mPFC, to modulate patients' reliance on self-prediction mechanisms on distinct tasks of reality and speech monitoring. We hypothesize that these findings will show that the mPFC provides a unitary basis for self-agency, driven by reliance on self-prediction mechanisms, which will facilitate the development of new targeted treatments in patients with schizophrenia.
41
Skandalis DA, Lunsford ET, Liao JC. Corollary discharge enables proprioception from lateral line sensory feedback. PLoS Biol 2021; 19:e3001420. [PMID: 34634044 PMCID: PMC8530527 DOI: 10.1371/journal.pbio.3001420] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 10/21/2021] [Accepted: 09/21/2021] [Indexed: 11/26/2022] Open
Abstract
Animals modulate sensory processing in concert with motor actions. Parallel copies of motor signals, called corollary discharge (CD), prepare the nervous system to process the mixture of externally and self-generated (reafferent) feedback that arises during locomotion. Commonly, CD in the peripheral nervous system cancels reafference to protect sensors and the central nervous system from being fatigued and overwhelmed by self-generated feedback. However, cancellation also limits the feedback that contributes to an animal's awareness of its body position and motion within the environment, the sense of proprioception. We propose that, rather than cancellation, CD to the fish lateral line organ restructures reafference to maximize proprioceptive information content. Fishes' undulatory body motions induce reafferent feedback that can encode the body's instantaneous configuration with respect to fluid flows. We combined experimental and computational analyses of swimming biomechanics and hair cell physiology to develop a neuromechanical model of how fish can track peak body curvature, a key signature of axial undulatory locomotion. Without CD, this computation would be challenged by sensory adaptation, typified by decaying sensitivity and phase distortions with respect to an input stimulus. We find that CD interacts synergistically with sensor polarization to sharpen sensitivity along sensors' preferred axes. The sharpening of sensitivity regulates spiking to a narrow interval coinciding with peak reafferent stimulation, which prevents adaptation and homogenizes the otherwise variable sensor output. Our integrative model reveals a vital role of CD for ensuring precise proprioceptive feedback during undulatory locomotion, which we term external proprioception.
Affiliation(s)
- Dimitri A. Skandalis
  Department of Biology & Whitney Laboratory for Marine Bioscience, University of Florida, St. Augustine, Florida, United States of America
  Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Elias T. Lunsford
  Department of Biology & Whitney Laboratory for Marine Bioscience, University of Florida, St. Augustine, Florida, United States of America
- James C. Liao
  Department of Biology & Whitney Laboratory for Marine Bioscience, University of Florida, St. Augustine, Florida, United States of America
42
Barbier G, Merzouki R, Bal M, Baum SR, Shiller DM. Visual feedback of the tongue influences speech adaptation to a physical modification of the oral cavity. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:718. [PMID: 34470311 DOI: 10.1121/10.0005520] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 06/15/2021] [Indexed: 06/13/2023]
Abstract
Studies examining sensorimotor adaptation of speech to changing sensory conditions have demonstrated a central role for both auditory and somatosensory feedback in speech motor learning. The potential influence of visual feedback of oral articulators, which is not typically available during speech production but may nonetheless enhance oral motor control, remains poorly understood. The present study explores the influence of ultrasound visual feedback of the tongue on adaptation of speech production (focusing on the sound /s/) to a physical perturbation of the oral articulators (a prosthesis altering the shape of the hard palate). Two visual feedback groups, which differed in the two-dimensional plane being imaged (coronal or sagittal) during practice producing /s/ words, were tested along with a no-visual-feedback control group. Participants in the coronal condition were found to adapt their speech production across a broader range of acoustic spectral moments and syllable contexts than the no-feedback controls. In contrast, the sagittal group showed reduced adaptation compared to no-feedback controls. The results indicate that real-time visual feedback of the tongue is spontaneously integrated during speech motor adaptation, with effects that can enhance or interfere with oral motor learning depending on the compatibility of the visual articulatory information with the requirements of the speaking task.
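Adaptation of /s/ is often tracked through spectral moments of the frication noise; the sketch below computes the first four moments from a power spectrum of a synthetic noise burst, with the band-pass shaping and sampling rate as assumptions rather than the study's analysis settings.

```python
# Sketch of computing the first four spectral moments (centroid, SD, skewness,
# kurtosis) of an /s/-like frication noise from its power spectrum. The signal
# here is synthetic.
import numpy as np

FS = 44_100
rng = np.random.default_rng(11)

# Synthetic /s/-like noise: white noise spectrally shaped around ~7 kHz.
noise = rng.normal(size=FS // 10)                      # 100 ms of noise
freqs = np.fft.rfftfreq(noise.size, 1 / FS)
spectrum = np.fft.rfft(noise) * np.exp(-((freqs - 7000) / 2500) ** 2)
power = np.abs(spectrum) ** 2
p = power / power.sum()                                # treat spectrum as a distribution

centroid = np.sum(freqs * p)
sd = np.sqrt(np.sum((freqs - centroid) ** 2 * p))
skew = np.sum(((freqs - centroid) / sd) ** 3 * p)
kurt = np.sum(((freqs - centroid) / sd) ** 4 * p) - 3

print(f"centroid = {centroid:.0f} Hz, SD = {sd:.0f} Hz, "
      f"skewness = {skew:.2f}, kurtosis = {kurt:.2f}")
```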
Affiliation(s)
- Guillaume Barbier
  École d'Orthophonie et d'Audiologie, Université de Montréal, Case Postale 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada
- Ryme Merzouki
  École d'Orthophonie et d'Audiologie, Université de Montréal, Case Postale 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada
- Mathilde Bal
  École d'Orthophonie et d'Audiologie, Université de Montréal, Case Postale 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada
- Shari R Baum
  School of Communication Sciences and Disorders, McGill University, 2001 McGill College Avenue, Suite 800, Montréal, Québec H3A 1G1, Canada
- Douglas M Shiller
  École d'Orthophonie et d'Audiologie, Université de Montréal, Case Postale 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada
43
Parrell B, Ivry RB, Nagarajan SS, Houde JF. Intact Correction for Self-Produced Vowel Formant Variability in Individuals With Cerebellar Ataxia Regardless of Auditory Feedback Availability. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2234-2247. [PMID: 33900786 PMCID: PMC8740698 DOI: 10.1044/2021_jslhr-20-00270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 11/02/2020] [Accepted: 01/19/2021] [Indexed: 06/12/2023]
Abstract
Purpose Individuals with cerebellar ataxia (CA) caused by cerebellar degeneration exhibit larger reactive compensatory responses to unexpected auditory feedback perturbations than neurobiologically typical speakers, suggesting they may rely more on feedback control during speech. We test this hypothesis by examining variability in unaltered speech. Previous studies of typical speakers have demonstrated a reduction in formant variability (centering) observed during the initial phase of vowel production from vowel onset to vowel midpoint. Centering is hypothesized to reflect feedback-based corrections for self-produced variability and thus may provide a behavioral assay of feedback control in unperturbed speech in the same manner as the compensatory response does for feedback perturbations. Method To comprehensively compare centering in individuals with CA and controls, we examine centering in two vowels (/i/ and /ɛ/) under two contexts (isolated words and connected speech). As a control, we examine speech produced both with and without noise to mask auditory feedback. Results Individuals with CA do not show increased centering compared to age-matched controls, regardless of vowel, context, or masking. Contrary to previous results in neurobiologically typical speakers, centering was not affected by the presence of masking noise in either group. Conclusions The similar magnitude of centering seen with and without masking noise questions whether centering is driven by auditory feedback. However, if centering is at least partially driven by auditory/somatosensory feedback, these results indicate that the larger compensatory response to altered auditory feedback observed in individuals with CA may not reflect typical motor control processes during normal, unaltered speech production.
Collapse
Affiliation(s)
- Benjamin Parrell
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
| | - Richard B. Ivry
- Department of Psychology, University of California, Berkeley
| | | | - John F. Houde
- Department of Otolaryngology, University of California, San Francisco
| |
Collapse
|
44
|
Niziolek CA, Parrell B. Responses to Auditory Feedback Manipulations in Speech May Be Affected by Previous Exposure to Auditory Errors. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2169-2181. [PMID: 33705674 PMCID: PMC8740748 DOI: 10.1044/2020_jslhr-20-00263] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Purpose Speakers use auditory feedback to guide their speech output, although individuals differ in the magnitude of their compensatory response to perceived errors in feedback. Little is known about the factors that contribute to the compensatory response or how fixed or flexible they are within an individual. Here, we test whether manipulating the perceived reliability of auditory feedback modulates speakers' compensation to auditory perturbations, as predicted by optimal models of sensorimotor control. Method Forty participants produced monosyllabic words in two separate sessions, which differed in the auditory feedback given during an initial exposure phase. In the veridical session exposure phase, feedback was normal. In the noisy session exposure phase, small, random formant perturbations were applied, reducing the reliability of auditory feedback. In each session, a subsequent test phase introduced larger unpredictable formant perturbations. We assessed whether the magnitude of within-trial compensation for these larger perturbations differed across the two sessions. Results Compensatory responses to downward (though not upward) formant perturbations were larger in the veridical session than in the noisy session. However, in post hoc testing, we found that the magnitude of this effect was highly dependent on the choice of analysis procedures. Compensation magnitude was not predicted by other production measures, such as formant variability, and was not reliably correlated across sessions. Conclusions Our results, though mixed, provide tentative support for the idea that the feedback control system monitors the reliability of sensory feedback. These results must be interpreted cautiously given the potentially limited stability of auditory feedback compensation measures across analysis choices and across sessions. Supplemental Material https://doi.org/10.23641/asha.14167136.
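The key dependent measure is the within-trial compensatory response to an unpredictable formant shift. As a rough, hedged sketch (not the authors' pipeline; the windowing and sign convention are assumptions), the response can be summarized as the late-vowel deviation from the speaker's baseline formant, signed so that changes opposing the perturbation count as compensation:

```python
import numpy as np

def compensation_magnitude(f1_track, baseline_f1, perturbation_hz):
    """Summarize the within-trial response to an unpredictable F1 shift.
    f1_track: produced F1 across the vowel (Hz); baseline_f1: the speaker's
    mean F1 from unperturbed trials; perturbation_hz: applied shift
    (positive = upward).  Returns the late-vowel deviation from baseline,
    signed so that changes opposing the perturbation are positive."""
    late_window = f1_track[len(f1_track) // 2:]    # e.g., second half of the vowel
    deviation = np.mean(late_window) - baseline_f1
    return -np.sign(perturbation_hz) * deviation
```

As the abstract notes, conclusions can hinge on exactly these analysis choices (window, baseline, normalization), which is why the reported effect was sensitive to the analysis procedure.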
Collapse
Affiliation(s)
- Caroline A. Niziolek
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
| | - Benjamin Parrell
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
| |
Collapse
|
45
|
Cheng HS, Niziolek CA, Buchwald A, McAllister T. Examining the Relationship Between Speech Perception, Production Distinctness, and Production Variability. Front Hum Neurosci 2021; 15:660948. [PMID: 34122028 PMCID: PMC8192800 DOI: 10.3389/fnhum.2021.660948] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Accepted: 04/30/2021] [Indexed: 11/13/2022] Open
Abstract
Several studies have demonstrated that individuals' ability to perceive a speech sound contrast is related to the production of that contrast in their native language. The theoretical account for this relationship is that speech perception and production share a multimodal representation in the relevant sensory spaces (e.g., auditory and somatosensory domains). This gives rise to the prediction that individuals with more narrowly defined targets will produce greater separation between contrasting sounds, as well as lower variability in the production of each sound. However, empirical studies that have tested this hypothesis, particularly with regard to variability, have reported mixed outcomes. The current study investigates the relationship between perceptual ability and production ability, focusing on the auditory domain. We examined whether individuals' categorical labeling consistency for the American English /ε/-/æ/ contrast, measured using a perceptual identification task, is related to the distance between the centroids of the vowel categories in acoustic space (i.e., vowel contrast distance) and to two measures of production variability: the overall distribution of repeated tokens for each vowel (i.e., area of the ellipse) and the proportional within-trial decrease in variability, defined as the magnitude of self-correction relative to the initial acoustic variation of each token (i.e., centering ratio). No significant associations were found between categorical labeling consistency and vowel contrast distance, between categorical labeling consistency and area of the ellipse, or between categorical labeling consistency and centering ratio. These null results suggest that the perception-production relation may not be as robust as suggested by a widely adopted theoretical framing in terms of the size of auditory target regions. However, the present results may also be attributable to choices in implementation (e.g., the use of model talkers instead of continua derived from the participants' own productions) that should be subject to further investigation.
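Two of the production measures, vowel contrast distance and area of the ellipse, have straightforward geometric definitions in F1-F2 space. A minimal sketch (assuming tokens are supplied as [F1, F2] pairs; the study's normalization and ellipse criterion may differ) is:

```python
import numpy as np

def contrast_distance(vowel_a, vowel_b):
    """Euclidean distance between the centroids of two vowel categories.
    vowel_a, vowel_b: arrays of shape (n_tokens, 2) of [F1, F2] values."""
    return np.linalg.norm(vowel_a.mean(axis=0) - vowel_b.mean(axis=0))

def ellipse_area(tokens, n_std=1.0):
    """Area of the covariance ellipse of repeated tokens of one vowel,
    used as an index of production variability."""
    eigvals = np.linalg.eigvalsh(np.cov(tokens.T))   # principal variances
    return np.pi * n_std ** 2 * np.sqrt(eigvals[0] * eigvals[1])
```

The centering ratio follows the same logic as the centering measure sketched for the previous entry, expressed as a proportion of the initial (onset) deviation.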
Collapse
Affiliation(s)
- Hung-Shao Cheng
- Department of Communicative Sciences and Disorders, New York University, New York City, NY, United States
| | - Caroline A Niziolek
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
| | - Adam Buchwald
- Department of Communicative Sciences and Disorders, New York University, New York City, NY, United States
| | - Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, New York City, NY, United States
| |
Collapse
|
46
|
Interhemispheric Auditory Cortical Synchronization in Asymmetric Hearing Loss. Ear Hear 2021; 42:1253-1262. [PMID: 33974786 PMCID: PMC8378543 DOI: 10.1097/aud.0000000000001027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Objectives: Auditory cortical activation of the two hemispheres to monaurally presented tonal stimuli has been shown to be asynchronous in normal hearing (NH) but synchronous in the extreme case of adult-onset asymmetric hearing loss (AHL) with single-sided deafness. We addressed the wide knowledge gap between these two anchoring states of interhemispheric temporal organization. The objectives of this study were as follows: (1) to map the trajectory of interhemispheric temporal reorganization from asynchrony to synchrony, using the magnitude of the interaural threshold difference as the independent variable in a cross-sectional study, and (2) to evaluate the reversibility of interhemispheric synchrony, in association with hearing-in-noise performance, by amplifying the aidable poorer ear in a repeated measures, longitudinal study. Design: The cross-sectional and longitudinal cohorts together comprised 49 subjects: AHL (N = 21; 11 male, 10 female; mean age = 48 years) and NH (N = 28; 16 male, 12 female; mean age = 45 years). The maximum interaural threshold difference of the two cohorts spanned 0 to 65 dB. Magnetoencephalography analyses focused on the latency of the M100 peak response from auditory cortex in both hemispheres between 50 msec and 150 msec following monaural tonal stimulation at the frequency (0.5, 1, 2, 3, or 4 kHz) corresponding to the maximum and minimum interaural threshold difference, for better and poorer ears separately. The longitudinal AHL cohort was drawn from three subjects in the cross-sectional AHL cohort (all male; ages 49 to 60 years; varied AHL etiologies; no amplification for at least 2 years). All longitudinal study subjects were treated by monaural amplification of the poorer ear and underwent repeated measures examination of M100 response latency and hearing-in-noise performance (Quick Speech-in-Noise test) at baseline and at postamplification months 3, 6, and 12. Results: The M100 response peak latency values in the ipsilateral hemisphere lagged those in the contralateral hemisphere for all stimulation conditions. The mean (SD) interhemispheric latency difference values (ipsilateral minus contralateral) to better ear stimulation for three categories of maximum interaural threshold difference were as follows: NH (≤ 10 dB): 8.6 (3.0) msec; AHL (15 to 40 dB): 3.0 (1.2) msec; AHL (≥ 45 dB): 1.4 (1.3) msec. In turn, the magnitudes of these difference values were used to define interhemispheric temporal organization states of asynchrony, mixed asynchrony and synchrony, and synchrony, respectively. Amplification of the poorer ear in the longitudinal subjects drove interhemispheric organization change from baseline synchrony to postamplification asynchrony, together with improvement in hearing-in-noise performance in those with baseline impairment, over a 12-month period. Conclusions: Interhemispheric temporal organization in AHL was anchored between the states of asynchrony in NH and synchrony in single-sided deafness. For asymmetry magnitudes between 15 and 40 dB, the intermediate mixed state of asynchrony and synchrony was continuous and reversible. Amplification of the poorer ear in AHL improved hearing-in-noise performance and restored normal temporal organization of the auditory cortices in the two hemispheres. The return to normal interhemispheric asynchrony from baseline synchrony and the improvement in hearing following monaural amplification of the poorer ear evolved progressively over a 12-month period.
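The grouping of subjects by maximum interaural threshold difference, and the latency measure used to define interhemispheric organization states, can be summarized in a short sketch (cutoffs taken from the abstract; function names and the handling of intermediate values are illustrative assumptions):

```python
def asymmetry_category(max_itd_db):
    """Group a subject by maximum interaural threshold difference (dB),
    using the brackets reported in the abstract (<= 10, 15-40, >= 45 dB);
    intermediate values are assigned to the nearest bracket in this sketch."""
    if max_itd_db <= 10:
        return "NH: interhemispheric asynchrony"
    if max_itd_db <= 40:
        return "AHL: mixed asynchrony and synchrony"
    return "AHL: interhemispheric synchrony"

def interhemispheric_lag_ms(ipsi_latency_ms, contra_latency_ms):
    """Ipsilateral-minus-contralateral M100 peak latency difference (msec);
    larger values indicate greater interhemispheric asynchrony."""
    return ipsi_latency_ms - contra_latency_ms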
Collapse
|
47
|
Roach BJ, Ford JM, Loewy RL, Stuart BK, Mathalon DH. Theta Phase Synchrony Is Sensitive to Corollary Discharge Abnormalities in Early Illness Schizophrenia but Not in the Psychosis Risk Syndrome. Schizophr Bull 2021; 47:415-423. [PMID: 32793958 PMCID: PMC7965080 DOI: 10.1093/schbul/sbaa110] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
BACKGROUND Prior studies have shown that the auditory N1 event-related potential component elicited by self-generated vocalizations is reduced relative to played back vocalizations, putatively reflecting a corollary discharge mechanism. Schizophrenia patients and psychosis risk syndrome (PRS) youth show deficient N1 suppression during vocalization, consistent with corollary discharge dysfunction. Because N1 is an admixture of theta (4-7 Hz) power and phase synchrony, we examined their contributions to N1 suppression during vocalization, as well as their sensitivity, relative to N1, to corollary discharge dysfunction in schizophrenia and PRS individuals. METHODS Theta phase and power values were extracted from electroencephalography data acquired from PRS youth (n = 71), early illness schizophrenia patients (ESZ; n = 84), and healthy controls (HCs; n = 103) as they said "ah" (Talk) and then listened to the playback of their vocalizations (Listen). A principal component analysis extracted theta intertrial coherence (ITC; phase consistency) and event-related spectral power, peaking in the N1 latency range. Talk-Listen suppression scores were analyzed. RESULTS Talk-Listen suppression was greater for theta ITC (Cohen's d = 1.46) than for N1 in HC (d = 0.63). Both were deficient in ESZ, but only N1 suppression was deficient in PRS. When deprived of variance shared with theta ITC suppression, N1 suppression no longer differentiated ESZ and PRS individuals from HC. Deficits in theta ITC suppression were correlated with delusions (P = .007) in ESZ. Theta power suppression did not differentiate groups. CONCLUSIONS Theta ITC-suppression during vocalization is a more sensitive index of corollary discharge-mediated auditory cortical suppression than N1 suppression and is more sensitive to corollary discharge dysfunction in ESZ than in PRS individuals.
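Intertrial coherence (ITC) is the phase-consistency measure underlying the main finding. A minimal sketch of the standard ITC computation at one time-frequency point, together with a Talk-Listen suppression score, is given below; the paper's exact sign convention and PCA-based extraction are not reproduced here:

```python
import numpy as np

def intertrial_coherence(phases):
    """Standard ITC: length of the mean resultant vector of single-trial
    phases (radians) at one time-frequency point; 0 = random phase across
    trials, 1 = perfect phase alignment."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

def talk_listen_suppression(talk_value, listen_value):
    """Suppression score under one common convention (Listen minus Talk,
    so larger positive values mean more suppression during vocalization);
    the abstract does not state the exact convention used."""
    return listen_value - talk_value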
Collapse
Affiliation(s)
- Brian J Roach
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA
| | - Judith M Ford
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA
- Department of Psychiatry, University of California, San Francisco, CA
| | - Rachel L Loewy
- Department of Psychiatry, University of California, San Francisco, CA
| | - Barbara K Stuart
- Department of Psychiatry, University of California, San Francisco, CA
| | - Daniel H Mathalon
- Psychiatry Service, San Francisco VA Medical Center, San Francisco, CA
- Department of Psychiatry, University of California, San Francisco, CA
| |
Collapse
|
48
|
Kornisch M. Bilinguals who stutter: A cognitive perspective. JOURNAL OF FLUENCY DISORDERS 2021; 67:105819. [PMID: 33296800 DOI: 10.1016/j.jfludis.2020.105819] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2019] [Revised: 10/05/2020] [Accepted: 11/19/2020] [Indexed: 06/12/2023]
Abstract
PURPOSE Brain differences, both in structure and in executive functioning, have been found in both developmental stuttering and bilingualism. However, the etiology of stuttering remains unknown. The early suggestion that stuttering is a result of brain dysfunction has since received support from various behavioral and neuroimaging studies that have revealed functional and structural brain changes in monolinguals who stutter (MWS). In addition, MWS appear to show deficits in executive control. However, there is a lack of data on bilinguals who stutter (BWS). This literature review is intended to provide an overview of both stuttering and bilingualism, to synthesize areas of overlap between the two lines of research, and to highlight knowledge gaps in the current literature. METHODS A systematic literature review of both stuttering and bilingualism studies was conducted, searching the PubMed database for articles containing "stuttering" and/or "bilingualism" and either "brain", "executive functions", "executive control", "motor control", "cognitive reserve", or "brain reserve". Additional studies were found by examining the reference lists of studies that met the inclusion criteria. RESULTS A total of 148 references met the criteria for inclusion and were used in the review. A comparison of the impacts of stuttering and bilingualism on the brain is discussed. CONCLUSION Previous research examining a potential bilingual advantage for BWS has yielded mixed findings. However, if such an advantage does exist, it appears to offset potential deficits in executive functioning that may be associated with stuttering.
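The Methods describe a boolean PubMed search combining the stuttering/bilingualism terms with cognitive and neural modifier terms. A hypothetical rendering of that search as a single query string (the authors' exact syntax and field tags are not given in the abstract) is:

```python
# Hypothetical sketch of the boolean PubMed query implied by the Methods;
# the authors' exact search strings are not reported in the abstract.
topic_terms = ["stuttering", "bilingualism"]
modifier_terms = ["brain", "executive functions", "executive control",
                  "motor control", "cognitive reserve", "brain reserve"]
query = "({}) AND ({})".format(
    " OR ".join(topic_terms),
    " OR ".join('"{}"'.format(term) for term in modifier_terms),
)
print(query)
# (stuttering OR bilingualism) AND ("brain" OR "executive functions" OR ...)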
Collapse
Affiliation(s)
- Myriam Kornisch
- The University of Mississippi, School of Applied Sciences, Department of Communication Sciences & Disorders, 2301 South Lamar Blvd, Oxford, MS 38655, United States.
| |
Collapse
|
49
|
Endo N, Ito T, Mochida T, Ijiri T, Watanabe K, Nakazawa K. Precise force controls enhance loudness discrimination of self-generated sound. Exp Brain Res 2021; 239:1141-1149. [PMID: 33555383 DOI: 10.1007/s00221-020-05993-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Accepted: 11/19/2020] [Indexed: 10/22/2022]
Abstract
Motor execution alters sensory processing. Studies have shown that loudness perception changes when a sound is generated by active movement. However, it is still unknown whether and how motor-related changes in loudness perception depend on the task demands of motor execution. We examined whether different levels of precision demand in motor control affect loudness perception. We carried out a loudness discrimination test in which the sound stimulus was produced in conjunction with a force generation task. We tested three target force amplitude levels. The force target was presented on a monitor as a fixed visual target, and the generated force was presented on the same monitor as a moving visual cursor. Participants adjusted their force amplitude within a predetermined range, without overshooting, using the visual target and the moving cursor. In the control condition, the sound and visual stimuli were generated externally (without a force generation task). We found that discrimination performance was significantly improved when the sound was produced by the force generation task compared to the control condition, in which the sound was produced externally, although we found no evidence that this improvement depended on the target force amplitude level. The results suggest that the demand for precise control to produce a fixed amount of force may be key to obtaining the facilitatory effect of motor execution on auditory processing.
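Discrimination performance in such tasks is often summarized with a sensitivity index. The abstract does not specify the exact psychophysical measure used, so the following is only a generic sketch of one common option, d' computed from hit and false-alarm rates:

```python
import numpy as np
from scipy.stats import norm

def dprime(hit_rate, false_alarm_rate, n_trials=None, correction=0.5):
    """Sensitivity index d' for a loudness-discrimination judgment.
    When n_trials is given, rates of exactly 0 or 1 are nudged inward to
    avoid infinite z-scores."""
    if n_trials is not None:
        lo, hi = correction / n_trials, 1 - correction / n_trials
        hit_rate = np.clip(hit_rate, lo, hi)
        false_alarm_rate = np.clip(false_alarm_rate, lo, hi)
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: 85% hits and 30% false alarms over 40 trials each
print(dprime(0.85, 0.30, n_trials=40))  # approx. 1.56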
Collapse
Affiliation(s)
- Nozomi Endo
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan.,Faculty of Science and Engineering, Waseda University, 3-4-1, Ohkubo, Shinjuku-ku, Tokyo, 169-8555, Japan.,Japan Society for the Promotion of Science, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan
| | - Takayuki Ito
- Univ. Grenoble Alps, Grenoble-INP, CNRS, GIPSA-Lab, 11 rue des Mathématiques, Grenoble Campus BP46, 38402, Saint Martin D'heres Cedex, France.,Haskins Laboratories, 300 George Street, New Haven, CT, 06511, USA
| | - Takemi Mochida
- NTT Communication Science Laboratories, 3-1, Morinosato Wakamiya, Atsugi-shi, Kanagawa, 243-0198, Japan
| | - Tetsuya Ijiri
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan
| | - Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, 3-4-1, Ohkubo, Shinjuku-ku, Tokyo, 169-8555, Japan.,Art & Design, University of New South Wales, Oxford St & Greens Rd, Paddington, NSW 202, Australia
| | - Kimitaka Nakazawa
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, 3-8-1, Komaba, Meguro-ku, Tokyo, 153-8902, Japan.
| |
Collapse
|
50
|
Shamma S, Patel P, Mukherjee S, Marion G, Khalighinejad B, Han C, Herrero J, Bickel S, Mehta A, Mesgarani N. Learning Speech Production and Perception through Sensorimotor Interactions. Cereb Cortex Commun 2020; 2:tgaa091. [PMID: 33506209 PMCID: PMC7811190 DOI: 10.1093/texcom/tgaa091] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 11/19/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022] Open
Abstract
Action and perception are closely linked in many behaviors, necessitating close coordination between sensory and motor neural processes to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in the learning, rather than just the control, of skilled sensorimotor tasks.
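The neural signal of interest is the high-γ (70-150 Hz) response amplitude. A minimal sketch of the standard band-pass-plus-Hilbert envelope extraction for one ECoG channel is shown below; this is a common first step before estimating sensory-motor projections, not necessarily the authors' exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(ecog, fs, band=(70.0, 150.0), order=4):
    """Band-pass one ECoG channel to the high-gamma range and take the
    analytic-amplitude envelope (Hilbert transform).  fs must exceed
    2 * band[1] (e.g., >= 1000 Hz for a 70-150 Hz band)."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    return np.abs(hilbert(filtfilt(b, a, ecog)))
```

Forward (motor-to-sensory) and inverse projections would then be estimated between such envelopes recorded over motor and higher auditory sites during the Talk, Listen, and silent-articulation conditions.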
Collapse
Affiliation(s)
- Shihab Shamma
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
- Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Shoutik Mukherjee
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
| | - Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Jose Herrero
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Stephan Bickel
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Ashesh Mehta
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
- The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| |
Collapse
|