26. Jeng FC, Matzdorf K, Hickman KL, Bauer SW, Carriero AE, McDonald K, Lin TH, Wang CY. Advancing Auditory Processing by Detecting Frequency-Following Responses Through a Specialized Machine Learning Model. Percept Mot Skills 2024; 131:417-431. [PMID: 38153030] [DOI: 10.1177/00315125231225767]
Abstract
In this study, we explore the feasibility and performance of detecting scalp-recorded frequency-following responses (FFRs) with a specialized machine learning (ML) model. By leveraging the feature-extraction strengths of the source separation non-negative matrix factorization (SSNMF) algorithm and its adeptness in handling limited training data, we adapted the SSNMF algorithm into a specialized ML model with a hybrid architecture to enhance FFR detection amidst background noise. We recruited 40 adults with normal hearing and evoked their scalp-recorded FFRs using the English vowel /i/ with a rising pitch contour. The model was trained on FFR-present and FFR-absent conditions, and its performance was evaluated using sensitivity, specificity, efficiency, false-positive rate, and false-negative rate metrics. This study revealed that the specialized SSNMF model achieved heightened sensitivity, specificity, and efficiency in detecting FFRs as the number of recording sweeps increased. Sensitivity exceeded 80% at 500 sweeps and remained above 89% from 1000 sweeps onwards. Similarly, specificity and efficiency also improved rapidly with increasing sweeps. The progressively enhanced sensitivity, specificity, and efficiency of this specialized ML model underscore its practicality and potential for broader applications. These findings have immediate implications for FFR research and clinical use, while paving the way for further advancements in the assessment of auditory processing.
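The detection metrics named in this abstract (sensitivity, specificity, efficiency, false-positive rate, false-negative rate) all derive from a confusion matrix. A minimal sketch with hypothetical counts (not the study's data), assuming "efficiency" denotes overall accuracy:

```python
def detection_metrics(tp, fn, tn, fp):
    """Standard detection metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    efficiency = (tp + tn) / (tp + fn + tn + fp)   # overall accuracy
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "efficiency": efficiency,
        "false_positive_rate": 1 - specificity,
        "false_negative_rate": 1 - sensitivity,
    }

# Hypothetical counts for 40 FFR-present and 40 FFR-absent recordings
m = detection_metrics(tp=36, fn=4, tn=38, fp=2)
print(m["sensitivity"])  # 0.9
```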
27. Greenlee ET, Hess LJ, Simpson BD, Finomore VS. Vigilance to Spatialized Auditory Displays: Initial Assessment of Performance and Workload. Hum Factors 2024; 66:987-1003. [PMID: 36455164] [DOI: 10.1177/00187208221139744]
Abstract
OBJECTIVE The present study was designed to evaluate human performance and workload associated with an auditory vigilance task that required spatial discrimination of auditory stimuli. BACKGROUND Spatial auditory displays have been increasingly developed and implemented in settings that require vigilance toward auditory spatial discrimination and localization (e.g., collision avoidance warnings). Research has yet to determine whether a vigilance decrement could impede performance in such applications. METHOD Participants completed a 40-minute auditory vigilance task in either a spatial discrimination condition or a temporal discrimination condition. In the spatial discrimination condition, participants differentiated sounds based on differences in spatial location. In the temporal discrimination condition, participants differentiated sounds based on differences in stimulus duration. RESULTS Correct detections and false alarms declined during the vigilance task, and each did so at a similar rate in both conditions. The overall level of correct detections did not differ significantly between conditions, but false alarms occurred more frequently in the spatial discrimination condition than in the temporal discrimination condition. NASA-TLX ratings and pupil diameter measurements indicated no differences in workload. CONCLUSION Results indicated that tasks requiring auditory spatial discrimination can induce a vigilance decrement and may result in inferior vigilance performance compared with tasks requiring discrimination of auditory duration. APPLICATION Vigilance decrements may impede performance and safety in settings that depend on sustained attention to spatial auditory displays. Display designers should also be aware that auditory displays that require users to discriminate differences in spatial location may result in poorer discrimination performance than non-spatial displays.
28. Rimmer C, Dahary H, Quintin EM. Links between musical beat perception and phonological skills for autistic children. Child Neuropsychol 2024; 30:361-380. [PMID: 37104762] [DOI: 10.1080/09297049.2023.2202902]
Abstract
Exploring non-linguistic predictors of phonological awareness, such as musical beat perception, is valuable for children who present with language difficulties and diverse support needs. Studies on the musical abilities of children on the autism spectrum show that they have average or above-average musical production and auditory processing abilities. This study aimed to explore the relationship between musical beat perception and the phonological awareness skills of children on the autism spectrum with a wide range of cognitive abilities. A total of 21 autistic children between 6 and 11 years of age (M = 8.9, SD = 1.5) with full scale IQs ranging from 52 to 105 (M = 74, SD = 16) completed a beat perception and a phonological awareness task. Results revealed that phonological awareness and beat perception are positively correlated for children on the autism spectrum. Findings lend support to the potential use of beat and rhythm perception as a screening tool for early literacy skills, specifically for phonological awareness, for children with diverse support needs as an alternative to traditional verbal tasks that tend to underestimate the potential of children on the autism spectrum.
29. Meral Çetinkaya M, Konukseven Ö, İralı AE. World of sounds (Seslerin Dünyası): A mobile auditory training game for children with cochlear implants. Int J Pediatr Otorhinolaryngol 2024; 179:111908. [PMID: 38461681] [DOI: 10.1016/j.ijporl.2024.111908]
Abstract
OBJECTIVES The aim of this study is to develop a gaming-based mobile auditory training application for children aged 3-5 years using cochlear implants and to evaluate its usability. METHODS Four games were developed within the application World of Sounds: the crucible sound for auditory awareness, mole hunting for auditory discrimination, find the sound for auditory recognition, and choo-choo for auditory comprehension. The prototype was applied to 20 children with normal hearing and 20 children with cochlear implants, all aged 3-5. The participants were asked to fill out the Game Evaluation Form for Children. Moreover, 40 parents were included in the study and completed the Evaluation Form for the Application. RESULTS According to the form, at least 80% of children using cochlear implants, and all children in the normal-hearing group, responded well to the usability factors. Parents of the children using cochlear implants rated all factors as highly usable. In the normal-hearing group, the usefulness and motivation factors were rated above moderate and the other factors as highly usable. In the mole-hunting game, there was no significant difference between the groups at the easy level of the first sub-section (p > 0.05). There was a significant difference between the groups in the other sub-sections of the mole-hunting game and in all sub-sections of the crucible sound, find the sound, and choo-choo games (p < 0.05). While there was no correlation between duration of cochlear implant use and ADSI scores or the third sub-section of the crucible sound game (p > 0.05), a correlation was found in the other sub-sections of crucible sound and all sub-sections of the mole hunting, find the sound, and choo-choo games (p < 0.05). CONCLUSION It is thought that the application World of Sounds can serve as an accessible option to support traditional auditory rehabilitation for children with cochlear implants.
30. Gu J, Deng K, Luo X, Ma W, Tang X. Investigating the different mechanisms in related neural activities: a focus on auditory perception and imagery. Cereb Cortex 2024; 34:bhae139. [PMID: 38629796] [DOI: 10.1093/cercor/bhae139]
Abstract
Neuroimaging studies have shown that the neural representation of imagery is closely related to the perception modality. However, the clearly different experiences of perception and imagery indicate obvious differences in their neural mechanisms, which cannot be explained by the simple theory that imagery is a weak form of perception. Considering the importance of functional integration of brain regions in neural activities, we conducted correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception, and obtained brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in the imagery modality. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception can be reliably distinguished. Subsequently, voxel-level FC analysis further identified the regions containing voxels with significant connectivity differences between the two modalities. This study clarifies both the commonalities and the differences between auditory imagery and perception in terms of brain information interaction, and it provides a new perspective for investigating the neural mechanisms of different modal information representations.
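Functional connectivity networks of the kind analysed in this study are commonly estimated as pairwise correlations between regional time series. A generic sketch with random data (not the authors' pipeline; region count and time points are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 200                    # illustrative sizes
ts = rng.standard_normal((n_regions, n_timepoints))  # one row per region

# FC matrix: Pearson correlation between every pair of regional time series
fc = np.corrcoef(ts)

# Symmetric with unit diagonal; off-diagonal entries are the connection values
assert np.allclose(fc, fc.T)
assert np.allclose(np.diag(fc), 1.0)
```

Such matrices (one per modality) are what decoding analyses like the one described would take as input features.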
31. Zhu H, Beierholm U, Shams L. The overlooked role of unisensory precision in multisensory research. Curr Biol 2024; 34:R229-R231. [PMID: 38531310] [DOI: 10.1016/j.cub.2024.01.057]
Abstract
Zhu et al. present an alternative explanation for the weaker multisensory illusions in football goalkeepers compared with outfielders and non-athletes, showing that better unisensory precision in goalkeepers can also account for this effect.
32. Tune S, Obleser J. Neural attentional filters and behavioural outcome follow independent individual trajectories over the adult lifespan. eLife 2024; 12:RP92079. [PMID: 38470243] [DOI: 10.7554/elife.92079]
Abstract
Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? We here show, modelling electroencephalographic and behavioural data of N = 105 ageing individuals (39-82 y), that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results raise caution on attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: neither neural filtering at T1 nor its 2-year change could predict individual 2-year behavioural change, under a combination of modelling strategies.
33. Roark CL, Thakkar V, Chandrasekaran B, Centanni TM. Auditory Category Learning in Children With Dyslexia. J Speech Lang Hear Res 2024; 67:974-988. [PMID: 38354099] [PMCID: PMC11001431] [DOI: 10.1044/2023_jslhr-23-00361]
Abstract
PURPOSE Developmental dyslexia is proposed to involve selective procedural memory deficits with intact declarative memory. Recent research in the domain of category learning has demonstrated that adults with dyslexia have selective deficits in Information-Integration (II) category learning, which is proposed to rely on procedural learning mechanisms, and unaffected Rule-Based (RB) category learning, which is proposed to rely on declarative, hypothesis-testing mechanisms. Importantly, learning mechanisms also change across development, with distinct developmental trajectories in both procedural and declarative learning mechanisms. It is unclear how dyslexia in childhood should influence auditory category learning, a critical skill for speech perception and reading development. METHOD We examined auditory category learning performance and strategies in 7- to 12-year-old children with dyslexia (n = 25; nine females, 16 males) and typically developing controls (n = 25; 13 females, 12 males). Participants learned nonspeech auditory categories of spectrotemporal ripples that could be optimally learned with either RB selective attention to the temporal modulation dimension or procedural integration of information across spectral and temporal dimensions. We statistically compared performance using mixed-model analyses of variance and identified strategies using decision-bound computational models. RESULTS We found that children with dyslexia have an apparent selective RB category learning deficit, rather than the selective II learning deficit observed in prior work in adults with dyslexia. CONCLUSION These results suggest that the important skill of auditory category learning is impacted in children with dyslexia and that, across development, individuals with dyslexia may develop compensatory strategies that preserve declarative learning while difficulties in procedural learning emerge. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25148519.
34. Crespo-Bojorque P, Cauvet E, Pallier C, Toro JM. Recognizing structure in novel tunes: differences between human and rats. Anim Cogn 2024; 27:17. [PMID: 38429431] [PMCID: PMC10907461] [DOI: 10.1007/s10071-024-01848-8]
Abstract
A central feature of music is the hierarchical organization of its components. Musical pieces are not a simple concatenation of chords, but are characterized by rhythmic and harmonic structures. Here, we explore whether sensitivity to music structure might emerge in the absence of any experience with musical stimuli. For this, we tested whether rats detect the difference between structured and unstructured musical excerpts and compared their performance with that of humans. Structured melodies were excerpts of Mozart's sonatas. Unstructured melodies were created by the recombination of fragments of different sonatas. We trained listeners (both human participants and Long-Evans rats) with a set of structured and unstructured excerpts, and tested them with completely novel excerpts they had not heard before. After hundreds of training trials, rats were able to tell apart novel structured from unstructured melodies. Human listeners required only a few trials to reach better performance than rats. Interestingly, human performance increased when tonality changes were included, while rat performance decreased to chance. Our results suggest that, with enough training, rats might learn to discriminate the acoustic differences that distinguish hierarchically structured music from unstructured excerpts. More importantly, the results point toward species-specific adaptations in how tonality is processed.
35. Jiam NT, Formeister EJ, Chari DA, David AP, Alsoudi AF, Purnell S, Jiradejvong P, Limb CJ. Music Perception in Bone-Anchored Hearing Implant Users. Laryngoscope 2024; 134:1381-1387. [PMID: 37665102] [DOI: 10.1002/lary.30919]
Abstract
OBJECTIVE Music is a highly complex acoustic stimulus in both its spectral and temporal content. Accurate representation and delivery of high-fidelity information are essential for music perception. However, it is unclear how well bone-anchored hearing implants (BAHIs) transmit music. The study objective is to establish music perception performance baselines for BAHI users and normal hearing (NH) listeners and compare outcomes between the cohorts. METHODS A case-control, cross-sectional study was conducted among 18 BAHI users and 11 NH controls. Music perception was assessed via performance on seven major musical element tasks: pitch discrimination, melodic contour identification, rhythmic clocking, basic tempo discrimination, timbre identification, polyphonic pitch detection, and harmonic chord discrimination. RESULTS BAHI users performed comparably well on all music perception tasks with their device compared with the unilateral condition with their better-hearing ear. BAHI performance was not statistically significantly different from that of NH listeners. BAHI users also performed just as well as, if not better than, NH listeners when using their non-implanted contralateral ear; there was no significant difference between the two groups except for the rhythmic timing task (BAHI non-implanted ear 69% [95% CI: 62%-75%] vs. NH 56% [95% CI: 49%-63%], p = 0.02) and the basic tempo task (BAHI non-implanted ear 80% [95% CI: 65%-95%] vs. NH 75% [95% CI: 68%-82%], p = 0.03). CONCLUSIONS This study represents the first comprehensive study of basic music perception performance in BAHI users. Our results demonstrate that BAHI users perform as well with their implanted ear as with their contralateral better-hearing ear and NH controls in the major elements of music perception. LEVEL OF EVIDENCE 3.
36. Alnes SL, Bächlin LZM, Schindler K, Tzovara A. Neural complexity and the spectral slope characterise auditory processing in wakefulness and sleep. Eur J Neurosci 2024; 59:822-841. [PMID: 38100263] [DOI: 10.1111/ejn.16203]
Abstract
Auditory processing and the complexity of neural activity can both indicate residual consciousness levels and differentiate states of arousal. However, how measures of neural signal complexity manifest in neural activity following environmental stimulation and, more generally, how the electrophysiological characteristics of auditory responses change in states of reduced consciousness remain under-explored. Here, we tested the hypothesis that measures of neural complexity and the spectral slope would discriminate stages of sleep and wakefulness not only in baseline electroencephalography (EEG) activity but also in EEG signals following auditory stimulation. High-density EEG was recorded in 21 participants to determine the spatial relationship between these measures and between EEG recorded pre- and post-auditory stimulation. Results showed that the complexity and the spectral slope in the 2-20 Hz range discriminated between sleep stages and had a high correlation in sleep. In wakefulness, complexity was strongly correlated to the 20-40 Hz spectral slope. Auditory stimulation resulted in reduced complexity in sleep compared to the pre-stimulation EEG activity and modulated the spectral slope in wakefulness. These findings confirm our hypothesis that electrophysiological markers of arousal are sensitive to sleep/wake states in EEG activity during baseline and following auditory stimulation. Our results have direct applications to studies using auditory stimulation to probe neural functions in states of reduced consciousness.
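The spectral slope referred to in this abstract is typically estimated as the slope of a linear fit to log power versus log frequency within a band (e.g. 2-20 Hz or 20-40 Hz). A schematic sketch on a synthetic 1/f² spectrum (the band limits and synthetic PSD are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

def spectral_slope(freqs, psd, fmin, fmax):
    """Slope of log10(power) vs. log10(frequency) within [fmin, fmax] Hz."""
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _intercept = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

freqs = np.arange(1.0, 45.0, 0.5)
psd = freqs ** -2.0   # synthetic 1/f^2 spectrum, so the true slope is -2
print(spectral_slope(freqs, psd, 2.0, 20.0))  # ≈ -2.0 (steeper = more negative)
```

In practice the PSD would come from the EEG itself (e.g. a Welch estimate) rather than a synthetic power law.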
37. Takasago M, Kunii N, Fujitani S, Ishishita Y, Tada M, Kirihara K, Komatsu M, Uka T, Shimada S, Nagata K, Kasai K, Saito N. Auditory prediction errors in sound frequency and duration generated different cortical activation patterns in the human brain: an ECoG study. Cereb Cortex 2024; 34:bhae072. [PMID: 38466116] [DOI: 10.1093/cercor/bhae072]
Abstract
Sound frequency and duration are essential auditory components. The brain perceives deviations from the preceding sound context as prediction errors, allowing efficient reactions to the environment. Additionally, the prediction error response to duration change is reduced in the initial stages of psychotic disorders. To compare the spatiotemporal profiles of responses to prediction errors, we conducted a human electrocorticography study, with special attention to high gamma power, in 13 participants who completed both frequency and duration oddball tasks. Remarkable activation in the bilateral superior temporal gyri was observed in both the frequency and duration oddball tasks, suggesting its association with prediction errors. However, the response to deviant stimuli in the duration oddball task exhibited a second peak, resulting in a bimodal response. Furthermore, deviant stimuli in the frequency oddball task elicited a significant response in the inferior frontal gyrus that was not observed in the duration oddball task. These spatiotemporal differences within the Parasylvian cortical network could account for our efficient reactions to changes in sound properties. The findings of this study may contribute to unveiling auditory processing and elucidating the pathophysiology of psychiatric disorders.
38. Bureš Z, Profant O, Sommerhalder N, Skarnitzl R, Fuksa J, Meyer M. Speech intelligibility and its relation to auditory temporal processing in Czech and Swiss German subjects with and without tinnitus. Eur Arch Otorhinolaryngol 2024; 281:1589-1595. [PMID: 38175264] [DOI: 10.1007/s00405-023-08398-8]
Abstract
PURPOSE Previous studies have shown that levels for 50% speech intelligibility in quiet and in noise differ for different languages. Here, we aimed to find out whether these differences may relate to different auditory processing of temporal sound features in different languages, and to determine the influence of tinnitus on speech comprehension in different languages. METHODS We measured speech intelligibility under various conditions (words in quiet, sentences in babble noise, interrupted sentences) along with tone detection thresholds in quiet [PTA] and in noise [PTAnoise], gap detection thresholds [GDT], and detection thresholds for frequency modulation [FMT], and compared them between Czech and Swiss subjects matched in mean age and PTA. RESULTS The Swiss subjects exhibited higher speech reception thresholds in quiet, a higher threshold speech-to-noise ratio, and a shallower slope of the performance-intensity function for the words in quiet. Importantly, the intelligibility of temporally gated speech was similar in the Czech and Swiss subjects. The PTAnoise, GDT, and FMT were similar in the two groups. The Czech subjects exhibited correlations of the speech tests with GDT and FMT, which was not the case in the Swiss group. Qualitatively, the results of comparisons between the Swiss and Czech populations were not influenced by the presence of subjective tinnitus. CONCLUSION The results support the notion of language-specific differences in speech comprehension, which persist also in tinnitus subjects, and indicate different associations with the elementary measures of auditory temporal processing.
39. Temboury-Gutierrez M, Encina-Llamas G, Dau T. Predicting early auditory evoked potentials using a computational model of auditory-nerve processing. J Acoust Soc Am 2024; 155:1799-1812. [PMID: 38445986] [DOI: 10.1121/10.0025136]
Abstract
Non-invasive electrophysiological measures, such as auditory evoked potentials (AEPs), play a crucial role in diagnosing auditory pathology. However, the relationship between AEP morphology and cochlear degeneration remains complex and not well understood. Dau [J. Acoust. Soc. Am. 113, 936-950 (2003)] proposed a computational framework for modeling AEPs that utilized a nonlinear auditory-nerve (AN) model followed by a linear unitary response function. While the model captured some important features of the measured AEPs, it also exhibited several discrepancies in response patterns compared to the actual measurements. In this study, an enhanced AEP modeling framework is presented, incorporating an improved AN model, and the conclusions from the original study were reevaluated. Simulation results with transient and sustained stimuli demonstrated accurate auditory brainstem responses (ABRs) and frequency-following responses (FFRs) as a function of stimulation level, although wave-V latencies remained too short, similar to the original study. When compared to physiological responses in animals, the revised model framework showed a more accurate balance between the contributions of auditory-nerve fibers (ANFs) at on- and off-frequency regions to the predicted FFRs. These findings emphasize the importance of cochlear processing in brainstem potentials. This framework may provide a valuable tool for assessing human AN models and simulating AEPs for various subtypes of peripheral pathologies, offering opportunities for research and clinical applications.
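In the framework described (following Dau, 2003), the far-field potential is predicted by passing the summed instantaneous firing rate of the simulated AN fibers through a linear unitary response, i.e. a convolution. A schematic sketch in which both waveforms are illustrative stand-ins, not the published model:

```python
import numpy as np

fs = 100_000                      # sampling rate in Hz (illustrative)
t = np.arange(0, 0.005, 1 / fs)   # 5 ms time axis

# Stand-in for the summed AN instantaneous firing rate (spikes/s):
# an onset burst that decays exponentially after a short latency
an_rate = 200.0 * np.exp(-t / 0.001) * (t > 0.0005)

# Stand-in unitary response: a damped sinusoid (arbitrary shape and units)
ur = np.exp(-t / 0.0005) * np.sin(2 * np.pi * 900 * t)

# Predicted potential = linear convolution of the AN drive with the
# unitary response, truncated to the analysis window
aep = np.convolve(an_rate, ur)[: len(t)] / fs
```

The substance of the modeling work lies in the nonlinear AN stage that produces `an_rate`; the convolution stage itself is linear, which is why the balance of on- and off-frequency ANF contributions matters for the predicted FFRs.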
40. Parnas J, Yttri JE, Urfer-Parnas A. Phenomenology of auditory verbal hallucination in schizophrenia: An erroneous perception or something else? Schizophr Res 2024; 265:83-88. [PMID: 37024418] [DOI: 10.1016/j.schres.2023.03.045]
Abstract
This study presents the phenomenological features of auditory verbal hallucinations (AVH) in schizophrenia and associated anomalies of experience. The purpose is to compare the lived experience of AVH with the official definition of hallucination as a perception without object. Furthermore, we wish to explore the clinical and research implications of the phenomenological approach to AVH. Our exposition is based on classic texts on AVH, recent phenomenological studies, and our clinical experience. AVH differ on several dimensions from ordinary perception. Only a minority of schizophrenia patients experience AVH localized externally. Thus, the official definition of hallucination does not fit the AVH of schizophrenia. AVH are associated with several anomalies of subjective experience (self-disorders), and AVH must be considered a product of self-fragmentation. We discuss the implications with respect to the definition of hallucination, the clinical interview, the conceptualization of a psychotic state, and potential targets of pathogenetic research.
41. Borjigin A, Bakst S, Anderson K, Litovsky RY, Niziolek CA. Discrimination and sensorimotor adaptation of self-produced vowels in cochlear implant users. J Acoust Soc Am 2024; 155:1895-1908. [PMID: 38456732] [DOI: 10.1121/10.0025063]
Abstract
Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have helped over a million people restore access to auditory feedback, which significantly improves speech production. However, there is substantial variability in outcomes. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and make adjustments to their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested can detect small, within-channel differences in their self-produced vowels, and they can utilize this auditory feedback towards speech adaptation. An acoustic hearing control group showed better sensitivity to the shifts in vowels, even in CI-simulated speech, and elicited more robust speech adaptation behavior than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.
42. Liu J, Stohl J, Overath T. Hidden hearing loss: Fifteen years at a glance. Hear Res 2024; 443:108967. [PMID: 38335624] [DOI: 10.1016/j.heares.2024.108967]
Abstract
Hearing loss affects approximately 18% of the population worldwide. Hearing difficulties in noisy environments without accompanying audiometric threshold shifts likely affect an even larger percentage of the global population. One of the potential causes of hidden hearing loss is cochlear synaptopathy, the loss of synapses between inner hair cells (IHC) and auditory nerve fibers (ANF). These synapses are the most vulnerable structures in the cochlea to noise exposure or aging. The loss of synapses causes auditory deafferentation, i.e., the loss of auditory afferent information, whose downstream effect is the loss of information that is sent to higher-order auditory processing stages. Understanding the physiological and perceptual effects of this early auditory deafferentation might inform interventions to prevent later, more severe hearing loss. In the past decade, a large body of work has been devoted to better understand hidden hearing loss, including the causes of hidden hearing loss, their corresponding impact on the auditory pathway, and the use of auditory physiological measures for clinical diagnosis of auditory deafferentation. This review synthesizes the findings from studies in humans and animals to answer some of the key questions in the field, and it points to gaps in knowledge that warrant more investigation. Specifically, recent studies suggest that some electrophysiological measures have the potential to function as indicators of hidden hearing loss in humans, but more research is needed for these measures to be included as part of a clinical test battery.
43
Zhao S, Ma F, Xie J, Zhou Y, Feng C, Feng W. The stimulus-driven and representation-driven cross-modal attentional spreading are both modulated by audiovisual temporal synchrony. Psychophysiology 2024; 61:e14527. [PMID: 38243583] [DOI: 10.1111/psyp.14527]
Abstract
Multisensory integration and attention can interact in a way that attention to the visual constituent of a multisensory object results in an attentional spreading to its ignored auditory constituent, which can be either stimulus-driven or representation-driven depending on whether the object's visual constituent receives extra representation-based selective attention. Previous research using simple unrelated audiovisual combinations has shown that the stimulus-driven attentional spreading is contingent on audiovisual temporal simultaneity. However, little is known about whether this temporal constraint applies also to the representation-driven attentional spreading, and whether it holds for the stimulus-driven process elicited by real-life multisensory objects. The current event-related potential study investigated these questions by systematically manipulating the visual-to-auditory stimulus onset asynchrony (SOA: 0/100/300 ms) in an object-selective visual recognition task wherein the representation-driven and stimulus-driven spreading processes, measured as two distinct auditory negative difference (Nd) components, could be isolated independently. Our results showed that both the representation-driven and stimulus-driven Nds decreased as the SOA increased. Interestingly, the representation-driven Nd was completely absent, whereas the stimulus-driven Nd was still robust, when the auditory constituents were delayed by 300 ms. These findings not only indicate that the role of audiovisual simultaneity in the representation-driven attentional spreading has been underestimated, but also suggest that learned associations between the unisensory constituents of real-life objects render the stimulus-driven attentional spreading more tolerant of audiovisual asynchrony.
44
Noyce AL, Varghese L, Mathias SR, Shinn-Cunningham BG. Perceptual organization and task demands jointly shape auditory working memory capacity. JASA Express Lett 2024; 4:034402. [PMID: 38526127] [PMCID: PMC10966505] [DOI: 10.1121/10.0025392]
Abstract
Listeners performed two different tasks in which they remembered short sequences comprising either complex tones (generally heard as one melody) or everyday sounds (generally heard as separate objects). In one, listeners judged whether a probe item had been present in the preceding sequence. In the other, they judged whether a second sequence of the same items was identical in order to the preceding sequence. Performance on the first task was higher for everyday sounds; performance on the second was higher for complex tones. Perceptual organization strongly shapes listeners' memory for sounds, with implications for real-world communication.
45
Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. [PMID: 38160651] [DOI: 10.1016/j.cognition.2023.105696]
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech in noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
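Psychophysical thresholds like those compared across groups here are commonly estimated with adaptive tracking. A minimal sketch of a 2-down/1-up staircase (the function name and all parameter values are illustrative, not those used in the study):

```python
def two_down_one_up(trial_correct, start=0.0, step=2.0, n_reversals=8):
    """2-down/1-up adaptive staircase (converges near 70.7% correct).

    trial_correct(level) -> bool collects or simulates one trial at
    the given stimulus level (e.g., SNR in dB).  The track gets
    harder after two consecutive correct responses and easier after
    any error; the threshold estimate is the mean of the second half
    of the reversal levels.
    """
    level = start
    streak = 0
    direction = None
    reversals = []
    while len(reversals) < n_reversals:
        if trial_correct(level):
            streak += 1
            if streak == 2:            # two in a row: make it harder
                streak = 0
                if direction == "up":  # turning point
                    reversals.append(level)
                direction = "down"
                level -= step
        else:                          # any error: make it easier
            streak = 0
            if direction == "down":    # turning point
                reversals.append(level)
            direction = "up"
            level += step
    half = n_reversals // 2
    return sum(reversals[half:]) / (n_reversals - half)
```

With an idealized listener who is always correct at levels at or above 0, the track oscillates around 0 and the estimate lands just below it.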
46
Lankinen K, Wang R, Tian Q, Wang QM, Perry BJ, Green JR, Kimberley TJ, Ahveninen J, Li S. Individualized white matter connectivity of the articulatory pathway: An ultra-high field study. Brain Lang 2024; 250:105391. [PMID: 38354542] [PMCID: PMC10940181] [DOI: 10.1016/j.bandl.2024.105391]
Abstract
In current sensorimotor theories pertaining to speech perception, there is a notable emphasis on the involvement of the articulatory-motor system in the processing of speech sounds. Using ultra-high field diffusion-weighted imaging at 7 Tesla, we visualized the white matter tracts connected to areas activated during a simple speech-sound production task in 18 healthy right-handed adults. Regions of interest for white matter tractography were individually determined through 7T functional MRI (fMRI) analyses, based on activations during silent vocalization tasks. These precentral seed regions, activated during the silent production of a lip-vowel sound, demonstrated anatomical connectivity with posterior superior temporal gyrus areas linked to the auditory perception of phonetic sounds. Our study provides a macrostructural foundation for understanding connections in speech production and underscores the central role of the articulatory motor system in speech perception. These findings highlight the value of ultra-high field 7T MR acquisition in unraveling the neural underpinnings of speech.
47
Deschamps ML, Sanderson P, Waxenegger H, Mohamed I, Loeb RG. Auditory Sequences Presented With Spearcons Support Better Multiple Patient Monitoring Than Single-Patient Alarms: A Preclinical Simulation. Hum Factors 2024; 66:872-890. [PMID: 35934986] [DOI: 10.1177/00187208221116949]
Abstract
OBJECTIVE A study of auditory displays for simulated patient monitoring compared the effectiveness of two sound categories (alarm sounds indicating general risk categories from international alarm standard IEC 60601-1-8 versus event-specific sounds matched to the type of nursing unit) and two configurations (single-patient alarms versus multi-patient sequences). BACKGROUND Fieldwork in speciality-focused high dependency units (HDUs) indicated that auditory alarms are ambiguous and do not identify which patient has a problem. We tested whether participants perform better using auditory displays that identify the relevant patient and problem. METHOD During simulated monitoring of four patients in a respiratory HDU, 60 non-clinicians heard either (a) IEC risk categories as single-patient alarm sounds, (b) event-specific categories as single-patient alarm sounds, (c) IEC risk categories in multi-patient sequences, or (d) event-specific categories in multi-patient sequences. Participants performed a perceptual-motor task while monitoring patients; after detecting abnormal events, they identified the patient and the event. RESULTS Participants hearing multi-patient sequences made fewer wrong patient identifications than participants hearing single-patient alarms. Advantages of event-specific categories emerged when IEC risk category sounds indicated more than one potential event. Even when IEC and event-specific sounds indicated the same unique event, spearcons supported better event identification than auditory icon sounds did. CONCLUSION Auditory displays that unambiguously convey which patient is having what problem dramatically improve monitoring performance in a preclinical HDU simulation. APPLICATION Time-compressed speech assists the development of the detailed risk categories needed in specific HDU contexts, and multi-patient sound sequences allow the wellbeing of multiple patients to be monitored.
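Spearcons are created by time-compressing a spoken phrase until it is no longer heard as speech. As a toy illustration only, naive compression by resampling a waveform (note that, unlike the pitch-preserving time-scale-modification algorithms used to make real spearcons, this simple approach also raises the pitch by the compression factor):

```python
import numpy as np

def naive_compress(signal, factor):
    """Shorten a waveform to 1/factor of its duration by resampling.

    Linear interpolation onto a shorter time grid; a production
    spearcon generator would instead use pitch-preserving
    time-scale modification.
    """
    signal = np.asarray(signal, dtype=float)
    n_out = max(1, int(round(len(signal) / factor)))
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_t, old_t, signal)
```

A factor of 1.0 leaves the waveform unchanged; a factor of 2.5, for example, reduces a 1000-sample clip to 400 samples.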
48
De Souza J, Overy K. Embodied playfulness in musical synchrony: Comment on "musical engagement as a duet of tight synchrony and loose interpretability" by Tal-Chen Rabinowitch. Phys Life Rev 2024; 48:167-168. [PMID: 38244477] [DOI: 10.1016/j.plrev.2023.11.009]
49
Chang YJ, Han JY, Chu WC, Li LPH, Lai YH. Enhancing music recognition using deep learning-powered source separation technology for cochlear implant users. J Acoust Soc Am 2024; 155:1694-1703. [PMID: 38426839] [DOI: 10.1121/10.0025057]
Abstract
The cochlear implant (CI) is currently the vital technological device for helping deaf patients hear sounds and greatly enhances their listening experience. Unfortunately, it performs poorly for music listening because of the insufficient number of electrodes and inaccurate identification of music features. Therefore, this study applied source separation technology with a self-adjustment function to enhance the music listening benefits for CI users. In the objective analysis, the source-to-distortion, source-to-interference, and source-to-artifact ratios were 4.88, 5.92, and 15.28 dB, respectively, significantly better than those of the Demucs baseline model. In the subjective analysis, the proposed method scored higher than the traditional baseline method VIR6 (vocal-to-instrument ratio, 6 dB) by approximately 28.1 and 26.4 points (out of 100) in the multi-stimulus test with hidden reference and anchor (MUSHRA), respectively. The experimental results showed that the proposed method can help CI users identify music in a live concert, and the personal self-fitting signal separation method produced better results than either default baseline (vocal-to-instrument ratio of 6 dB or 0 dB). These findings suggest that the proposed system is a promising method for enhancing the music listening benefits for CI users.
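The source-to-distortion ratio reported above belongs to the BSS-Eval family of separation metrics. As a simplified sketch (a scale-invariant variant; the function name is ours, and this is not necessarily the exact toolkit the authors used), an SDR for a time-aligned reference/estimate pair:

```python
import numpy as np

def sdr(reference, estimate, eps=1e-12):
    """Scale-invariant source-to-distortion ratio in dB.

    Projects the estimate onto the (time-aligned) reference to find
    the target component; everything left over counts as distortion.
    """
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    distortion = estimate - target
    return 10.0 * np.log10((np.sum(target**2) + eps) / (np.sum(distortion**2) + eps))
```

A near-perfect estimate yields a very large SDR; added noise or interference lowers it, which is why higher SDR/SIR/SAR values indicate better separation.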
50
Rogers LW, Yeebo M, Collerton D, Moseley P, Dudley R. Non-clinical hallucinations and mental imagery across sensory modalities. Cogn Neuropsychiatry 2024; 29:87-102. [PMID: 38363282] [DOI: 10.1080/13546805.2024.2313467]
Abstract
INTRODUCTION Vivid mental imagery has been proposed to increase the likelihood of experiencing hallucinations. Typically, studies have employed a modality general approach to mental imagery which compares imagery across multiple domains (e.g., visual, auditory and tactile) to hallucinations in multiple senses. However, modality specific imagery may be a better predictor of hallucinations in the same domain. The study examined the contribution of imagery to hallucinations in a non-clinical sample and specifically whether imagery best predicted hallucinations at a modality general or modality specific level. METHODS In study one, modality general and modality specific accounts of the imagery-hallucination relationship were contrasted through application of self-report measures in a sample of 434 students. Study two used a subsample (n = 103) to extend exploration of the imagery-hallucinations relationship using a performance-based imagery task. RESULTS A small to moderate modality general relationship was observed between self-report imagery and hallucination proneness. There was only evidence of a modality specific relationship in the tactile domain. Performance-based imagery measures were unrelated to hallucinations and self-report imagery. CONCLUSIONS Mental imagery may act as a modality general process increasing hallucination proneness. The observed distinction between self-report and performance-based imagery highlights the difficulty of accurately measuring internal processes.