1. Pérez-Valenzuela C, Vicencio-Jiménez S, Caballero M, Delano PH, Elgueda D. Wireless electrocochleography in awake chinchillas: A model to study crossmodal modulations at the peripheral level. Hear Res 2024; 451:109093. PMID: 39094370. DOI: 10.1016/j.heares.2024.109093.
Abstract
The discovery and development of electrocochleography (ECochG) in animal models has been fundamental for its implementation in clinical audiology and neurotology. In our laboratory, the use of round-window ECochG recordings in chinchillas has allowed a better understanding of auditory efferent functioning. In previous work, we provided evidence of corticofugal modulation of auditory-nerve and cochlear responses during visual attention and working memory. However, whether these top-down cognitive mechanisms also act on the most peripheral structures of the auditory pathway during audiovisual crossmodal stimulation is unknown. Here, we introduce a new technique, wireless ECochG, to record compound action potentials of the auditory nerve (CAP), cochlear microphonics (CM), and round-window noise (RWN) in awake chinchillas during a paradigm of crossmodal (visual and auditory) stimulation. We compared ECochG data obtained from four awake chinchillas recorded with a wireless ECochG system with wired ECochG recordings from six anesthetized animals. Although ECochG experiments with the wireless system had a lower signal-to-noise ratio than wired recordings, their quality was sufficient to compare ECochG potentials in awake crossmodal conditions. We found non-significant differences in CAP and CM amplitudes in response to audiovisual stimulation compared to auditory stimulation alone (clicks and tones). On the other hand, spontaneous auditory-nerve activity (RWN) was modulated by visual crossmodal stimulation, suggesting that visual crossmodal stimulation can modulate spontaneous but not evoked auditory-nerve activity. However, given the limited sample of 10 animals (4 wireless and 6 wired), these results should be interpreted cautiously. Future experiments are required to substantiate these conclusions. In addition, we introduce the use of wireless ECochG in animal models as a useful tool for translational research.
Affiliation(s)
- Sergio Vicencio-Jiménez
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Johns Hopkins School of Medicine, Otolaryngology-Head and Neck Surgery Department, Baltimore, MD 21231, USA; Biomedical Neuroscience Institute, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Mia Caballero
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Paul H Delano
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Servicio Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile; Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaíso, Chile; Biomedical Neuroscience Institute, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Diego Elgueda
- Departamento de Patología Animal, Facultad de Ciencias Veterinarias y Pecuarias, Universidad de Chile 8820808, Santiago, Chile
2. Bonnet C, Poulin-Charronnat B, Michel-Colent C. Aftereffects of visuomanual prism adaptation in auditory modality: Review and perspectives. Neurosci Biobehav Rev 2024; 164:105814. PMID: 39032842. DOI: 10.1016/j.neubiorev.2024.105814.
Abstract
Visuomanual prism adaptation (PA), which consists of pointing to visual targets while wearing prisms that shift the visual field, is one of the oldest experimental paradigms used to investigate sensorimotor plasticity. Since the 2000s, growing scientific interest has emerged in extending PA to cognitive functions across several sensory modalities. The present work focuses on the aftereffects of PA within the auditory modality. Recent studies showed changes in the mental representation of auditory frequencies and a shift of divided auditory attention following PA. Moreover, one study demonstrated benefits of PA in a patient suffering from tinnitus. In light of these results, we address the following question: how can audition be modulated by inducing sensorimotor plasticity with prism glasses? Based on the literature, we suggest a bottom-up attentional mechanism involving cerebellar, parietal, and temporal structures to explain the crossmodal aftereffects of PA. This review opens promising new avenues of research on the aftereffects of PA in audition and their implications for the treatment of auditory disorders.
Affiliation(s)
- Clémence Bonnet
- LEAD - CNRS UMR5022, Université de Bourgogne, Pôle AAFE, 11 Esplanade Erasme, Dijon 21000, France.
- Carine Michel-Colent
- CAPS, Inserm U1093, Université de Bourgogne, UFR des Sciences du Sport, Dijon F-21000, France
3. Debiève C, Rosenzweig F, Wathour J. Standardization of Three Familiar Sound Recognition Tests in Hearing and Deaf Adult Populations. Otol Neurotol 2024; 45:656-661. PMID: 38769085. DOI: 10.1097/mao.0000000000004215.
Abstract
OBJECTIVE: Recognition of familiar noises is crucial for understanding and reacting appropriately to our auditory environment. Its improvement is one of the benefits expected after cochlear implantation. The aim of this study was to standardize three environmental sound recognition tests and to illustrate their application in a population of deaf adults with cochlear implants. METHOD: Norms were established on a sample of 126 normal-hearing adults divided into 6 age groups. Three familiar sound recognition tests were used: 1) the Blue Mouse "First Familiar Sounds" (BM), 2) the UCL-IRSA test (TI), and 3) the Bernadette Piérart Familiar Sounds Test (TBF). These tests were also administered to 61 implanted ears of deaf adults. RESULTS: We observed a significant effect of age on the accuracy scores of the TI and TBF tests for the hearing group and on the time scores of the TI and BM tests. Overall, the performance of the deaf participants was poorer and more variable than that of the hearing participants. CONCLUSION: These three tests can be used in practice to measure the performance of deaf people with cochlear implants at different stages of their pre- and post-implant rehabilitation.
4. Paromov D, Moïn-Darbari K, Cedras AM, Maheu M, Bacon BA, Champoux F. Body representation drives auditory spatial perception. iScience 2024; 27:109196. PMID: 38433911. PMCID: PMC10906536. DOI: 10.1016/j.isci.2024.109196.
Abstract
In contrast to the large body of findings confirming the influence of auditory cues on body perception and movement-related activity, the influence of body representation on spatial hearing remains essentially unexplored. Here, we use a disorientation task to assess whether a change in the body's orientation in space could lead to an illusory shift in the localization of a sound source. While most of the participants were initially able to locate the sound source with great precision, they all made substantial errors in judging the position of the same sound source following the body orientation-altering task. These results demonstrate that a change in body orientation can have a significant impact on the auditory processes underlying sound localization. The illusory errors not only confirm the strong connection between the auditory system and the representation of the body in space but also raise questions about the importance of hearing in determining spatial position.
Affiliation(s)
- Daniel Paromov
- Université de Montréal, Montréal, QC, Canada
- Centre de recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, QC, Canada
- Karina Moïn-Darbari
- Université de Montréal, Montréal, QC, Canada
- Centre de recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, QC, Canada
- Benoit-Antoine Bacon
- Department of Psychology, The University of British Columbia, Vancouver, BC, Canada
- François Champoux
- Université de Montréal, Montréal, QC, Canada
- Centre de recherche de l’Institut Universitaire de Gériatrie de Montréal, Montréal, QC, Canada
5. Fivel L, Mondino M, Brunelin J, Haesebaert F. Basic auditory processing and its relationship with symptoms in patients with schizophrenia: A systematic review. Psychiatry Res 2023; 323:115144. PMID: 36940586. DOI: 10.1016/j.psychres.2023.115144.
Abstract
Processing of basic auditory features, one of the earliest stages of auditory perception, has been the focus of considerable investigation in schizophrenia. Although numerous studies have shown abnormalities in pitch perception in schizophrenia, other basic auditory features such as intensity, duration, and sound localization have been less explored. Additionally, research on the relationship between basic auditory features and symptom severity has yielded inconsistent results, preventing firm conclusions. Our aim was to present a comprehensive overview of basic auditory processing in schizophrenia and its relationship with symptoms. We conducted a systematic review according to the PRISMA guidelines. The PubMed, Embase, and PsycINFO databases were searched for studies exploring auditory perception in schizophrenia compared to controls, with at least one behavioral task investigating basic auditory processing using pure tones. Forty-one studies were included. The majority investigated pitch processing, while the others investigated intensity, duration, and sound localization. The results revealed that patients have a significant deficit in the processing of all basic auditory features. Although the search for a relationship with symptoms was limited, the experience of auditory hallucinations appears to affect basic auditory processing. Further research may examine correlations with clinical symptoms to explore the performance of patient subgroups and possibly to implement remediation strategies.
Affiliation(s)
- Laure Fivel
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France
- Marine Mondino
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France
- Jerome Brunelin
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France
- Frédéric Haesebaert
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France
6. Varuzza C, D’Aiello B, Lazzaro G, Quarin F, De Rose P, Bergonzini P, Menghini D, Marini A, Vicari S. Gross, Fine and Visual-Motor Skills in Children with Language Disorder, Speech Sound Disorder and Their Combination. Brain Sci 2022; 13(1):59. PMID: 36672041. PMCID: PMC9856286. DOI: 10.3390/brainsci13010059.
Abstract
Increasing evidence shows that children with Communication Disorders (CDs) may show gross, fine, and visual-motor difficulties compared to children with typical development. Accordingly, the present study aims to characterize gross, fine, and visual-motor skills in children with CDs, dividing them into three subgroups: children with Language Disorders (LD), with Speech Sound Disorders (SSD), and with LD + SSD. In Experiment 1, around 60% of children with CDs (4 to 7 years; 21 with LD, 36 with SSD, and 90 with LD + SSD) showed clinical/borderline scores in balance skills, regardless of the type of communication deficit. However, children with LD, SSD, and LD + SSD did not differ in gross and fine motor skills. In Experiment 2, a higher percentage of children with CDs (4 to 7 years; 34 with LD, 62 with SSD, 148 with LD + SSD) obtained clinical/borderline scores in Visual Perception skills. Moreover, children with LD + SSD performed significantly worse in Visual Perception and Fine Motor Coordination skills than children with SSD only. Our results underline that CDs are generally associated with gross motor difficulties and that visual-motor difficulties are related to the type of communication deficit. Paying early attention to the motor skills of children with CDs could help clinicians design effective interventions.
Affiliation(s)
- Cristiana Varuzza
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Barbara D’Aiello
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Department of Human Science, LUMSA University, 00193 Rome, Italy
- Giulia Lazzaro
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Fabio Quarin
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Paola De Rose
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Paola Bergonzini
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Deny Menghini
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Andrea Marini
- Department of Language and Literatures, Communication, Education and Society, University of Udine, 33100 Udine, Italy
- Stefano Vicari
- Child and Adolescent Neuropsychiatry Unit, Department of Neuroscience, Bambino Gesù Children’s Hospital, IRCCS, 00146 Rome, Italy
- Department of Life Science and Public Health, Catholic University, 00168 Rome, Italy
7. Lohse M, Zimmer-Harwood P, Dahmen JC, King AJ. Integration of somatosensory and motor-related information in the auditory system. Front Neurosci 2022; 16:1010211. PMID: 36330342. PMCID: PMC9622781. DOI: 10.3389/fnins.2022.1010211.
Abstract
An ability to integrate information provided by different sensory modalities is a fundamental feature of neurons in many brain areas. Because visual and auditory inputs often originate from the same external object, which may be located some distance away from the observer, the synthesis of these cues can improve localization accuracy and speed up behavioral responses. By contrast, multisensory interactions occurring close to the body typically involve a combination of tactile stimuli with other sensory modalities. Moreover, most activities involving active touch generate sound, indicating that stimuli in these modalities are frequently experienced together. In this review, we examine the basis for determining sound-source distance and the contribution of auditory inputs to the neural encoding of space around the body. We then consider the perceptual consequences of combining auditory and tactile inputs in humans and discuss recent evidence from animal studies demonstrating how cortical and subcortical areas work together to mediate communication between these senses. This research has shown that somatosensory inputs interface with and modulate sound processing at multiple levels of the auditory pathway, from the cochlear nucleus in the brainstem to the cortex. Circuits involving inputs from the primary somatosensory cortex to the auditory midbrain have been identified that mediate suppressive effects of whisker stimulation on auditory thalamocortical processing, providing a possible basis for prioritizing the processing of tactile cues from nearby objects. Close links also exist between audition and movement, and auditory responses are typically suppressed by locomotion and other actions. These movement-related signals are thought to cancel out self-generated sounds, but they may also affect auditory responses via the associated somatosensory stimulation or as a result of changes in brain state. Together, these studies highlight the importance of considering both multisensory context and movement-related activity in order to understand how the auditory cortex operates during natural behaviors, paving the way for future work to investigate auditory-somatosensory interactions in more ecological situations.
8. Bernstein LE, Jordan N, Auer ET, Eberhardt SP. Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training. Am J Audiol 2022; 31:453-469. PMID: 35316072. PMCID: PMC9524756. DOI: 10.1044/2021_aja-21-00112.
Abstract
PURPOSE: The goal of this review article is to reinvigorate interest in lipreading and lipreading training for adults with acquired hearing loss. Most adults benefit from being able to see the talker when speech is degraded; however, the effect size is related to their lipreading ability, which is typically poor in adults who have experienced normal hearing through most of their lives. Lipreading training has been viewed as a possible avenue for rehabilitation of adults with an acquired hearing loss, but most training approaches have not been particularly successful. Here, we describe lipreading and theoretically motivated approaches to its training, as well as examples of successful training paradigms. We discuss some extensions to auditory-only (AO) and audiovisual (AV) speech recognition. METHOD: Visual speech perception and word recognition are described. Traditional and contemporary views of training and perceptual learning are outlined. We focus on the roles of external and internal feedback and the training task in perceptual learning, and we describe results of lipreading training experiments. RESULTS: Lipreading is commonly characterized as limited to viseme perception. However, evidence demonstrates subvisemic perception of visual phonetic information. Lipreading words also relies on lexical constraints, not unlike auditory spoken word recognition. Lipreading has been shown to be difficult to improve through training, but under specific feedback and task conditions, training can be successful, and learning can generalize to untrained materials, including AV sentence stimuli in noise. The results on lipreading have implications for AO and AV training and for use of acoustically processed speech in face-to-face communication. CONCLUSION: Given its importance for speech recognition with a hearing loss, we suggest that the research and clinical communities integrate lipreading in their efforts to improve speech recognition in adults with acquired hearing loss.
Affiliation(s)
- Lynne E. Bernstein
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Nicole Jordan
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Edward T. Auer
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Silvio P. Eberhardt
- Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
9. Kulkarni A, Kegler M, Reichenbach T. Effect of visual input on syllable parsing in a computational model of a neural microcircuit for speech processing. J Neural Eng 2021; 18. PMID: 34547737. DOI: 10.1088/1741-2552/ac28d3.
Abstract
Objective. Seeing a person talking can help us understand them, particularly in a noisy environment. However, how the brain integrates the visual information with the auditory signal to enhance speech comprehension remains poorly understood. Approach. Here we address this question in a computational model of a cortical microcircuit for speech processing. The model consists of an excitatory and an inhibitory neural population that together create oscillations in the theta frequency range. When stimulated with speech, the theta rhythm becomes entrained to the onsets of syllables, such that the onsets can be inferred from the network activity. We investigate how well the obtained syllable parsing performs when different types of visual stimuli are added. In particular, we consider currents related to the rate of syllables as well as currents related to the mouth-opening area of the talking faces. Main results. We find that currents that target the excitatory neuronal population can influence speech comprehension, either boosting or impeding it, depending on the temporal delay and on whether the currents are excitatory or inhibitory. In contrast, currents that act on the inhibitory neurons do not impact speech comprehension significantly. Significance. Our results suggest neural mechanisms for the integration of visual information with the acoustic information in speech and make experimentally testable predictions.
Affiliation(s)
- Anirudh Kulkarni
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Mikolaj Kegler
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Tobias Reichenbach
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Konrad-Zuse-Strasse 3/5, Erlangen, 91056, Germany
10. Narzisi A, Muccio R. A Neuro-Phenomenological Perspective on the Autism Phenotype. Brain Sci 2021; 11:914. PMID: 34356148. PMCID: PMC8307909. DOI: 10.3390/brainsci11070914.
Abstract
In the current paper, we present a view of autism spectrum disorder (ASD) that avoids the typical relational issues, drawing instead on philosophy, in particular Husserlian phenomenology. We begin by following recent etiological perspectives suggesting that a subset of individuals with ASD has a natural predisposition towards hypersensitivity and a reduced influence of cognitive priors (i.e., event schemas). From this perspective, these two characteristics should be considered a sort of phenomenological a priori that, importantly, could predispose people with ASD towards a spiritual experience, not in its religious meaning, but as an attribute of consciousness that consists of being aware of and attentive to what is occurring in the present moment. Potential clinical implications are discussed.