1. Chai C, Lei Y, Wei H, Wu C, Zhang W, Hansen P, Fan H, Shi J. The effects of various auditory takeover requests: A simulated driving study considering the modality of non-driving-related tasks. Appl Ergon 2024; 118:104252. [PMID: 38417230] [DOI: 10.1016/j.apergo.2024.104252]
Abstract
With the era of automated driving approaching, designing an effective auditory takeover request (TOR) is critical to ensuring automated driving safety. The present study investigated the effects of speech-based (speech and spearcon) and non-speech-based (earcon and auditory icon) TORs on takeover performance and subjective preferences. The potential impact of the modality of the non-driving-related task (NDRT) on auditory TORs was also considered. Thirty-two participants were recruited and assigned to two groups: one performed a visual N-back task and the other an auditory N-back task during automated driving. Each participant completed four simulated driving blocks, one per auditory TOR type. The earcon TOR proved most suitable for alerting drivers to return to the control loop because of its advantageous takeover time, lane change time, and minimum time to collision. Although participants preferred the speech TOR, it led to relatively poor takeover performance. In addition, the auditory NDRT had a detrimental impact on auditory TORs: when drivers were engaged in the auditory NDRT, the takeover time and lane change time advantages of earcon TORs disappeared. These findings highlight the importance of considering auditory NDRTs when designing an auditory takeover interface, and offer practical guidance for researchers and designers of auditory takeover systems in automated vehicles.
Affiliation(s)
- Chunlei Chai: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yu Lei: School of Software Technology, Zhejiang University, Hangzhou, China
- Haoran Wei: School of Software Technology, Zhejiang University, Hangzhou, China
- Changxu Wu: Department of Industrial Engineering, Tsinghua University, Beijing, China
- Wei Zhang: Department of Industrial Engineering, Tsinghua University, Beijing, China
- Preben Hansen: Department of Computer and System Sciences, Stockholm University, Stockholm, Sweden
- Hao Fan: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Jinlei Shi: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
2. Zhou H, Cai S, Zhang X, Chen Y, Wang A. Cross-modal conflict deficit in children with attention-deficit/hyperactivity disorder. J Exp Child Psychol 2024; 243:105917. [PMID: 38579588] [DOI: 10.1016/j.jecp.2024.105917]
Abstract
The difference between the audiovisual incongruent and audiovisual congruent conditions is known as cross-modal conflict, an important behavioral index of conflict control. Previous studies have found conflict control deficits in children with attention-deficit/hyperactivity disorder (ADHD), but it remains unclear whether and how cross-modal conflict occurs in children with ADHD at different processing levels. The current study used the cross-modal matching paradigm with 25 children with ADHD (19 boys and 6 girls) and 24 typically developing (TD) children (17 boys and 7 girls) to investigate the cross-modal conflict effect at the perception and response levels in children with ADHD. Both groups showed significant cross-modal conflict, with no significant difference between the ADHD and TD groups in the number of error trials or mean response time. However, the cross-modal conflict effect caused by auditory distractors differed between groups: the TD group showed stronger auditory conflict at the response level, whereas the ADHD group showed weaker auditory conflict. This indicates that the ADHD group had a deficit in auditory conflict at the response level.
Affiliation(s)
- Heng Zhou: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 101408, China; Chinese Institute for Brain Research, Beijing 102206, China
- Shizhong Cai: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Xianghui Zhang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
- Yan Chen: Department of Child and Adolescent Healthcare, Children's Hospital of Soochow University, Suzhou 215025, China
- Aijun Wang: Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou 215123, China
3. Yang L, Wang S, Chen Y, Liang Y, Chen T, Wang Y, Fu X, Wang S. Effects of Age on the Auditory Cortex During Speech Perception in Noise: Evidence From Functional Near-Infrared Spectroscopy. Ear Hear 2024; 45:742-752. [PMID: 38268081] [PMCID: PMC11008455] [DOI: 10.1097/aud.0000000000001460]
Abstract
OBJECTIVES Age-related speech perception difficulties may reflect a decline in central auditory processing abilities, particularly in noisy or challenging environments. However, how activation patterns elicited by speech stimulation under different noise conditions change with normal aging has yet to be elucidated. In this study, we aimed to investigate the effects of noisy environments and aging on patterns of auditory cortical activation. DESIGN We analyzed functional near-infrared spectroscopy signals from 20 young adults, 21 middle-aged adults, and 21 elderly adults, and evaluated their cortical response patterns to speech stimuli under five signal-to-noise ratios (SNRs). In addition, we analyzed behavioral scores, activation intensity, oxyhemoglobin variability, and hemispheric dominance to investigate the effects of aging and noisy environments on auditory cortical activation. RESULTS Activation intensity and oxyhemoglobin variability both decreased with aging at an SNR of 0 dB, and we identified a strong correlation between activation intensity and age under this condition. However, we observed an inconsistent activation pattern at an SNR of 5 dB. Furthermore, our analysis revealed that the left hemisphere may be more susceptible to aging than the right: activation in older adults was more evident in the right hemisphere than in the left, whereas younger adults showed leftward lateralization. CONCLUSIONS With aging, auditory cortical regions gradually become inflexible in noisy environments. Changes in cortical activation patterns with aging may depend on SNR conditions, and speech at a low SNR that is nonetheless still understandable may induce the highest level of activation. We also found that the left hemisphere was more affected by aging than the right in speech perception tasks; the left-sided dominance observed in younger individuals gradually shifted to the right hemisphere with aging.
Affiliation(s)
- Liu Yang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China (contributed equally to this work)
- Songjian Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China (contributed equally to this work)
- Younuo Chen: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ying Liang: School of Biomedical Engineering, Capital Medical University, Beijing, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Ting Chen: School of Biomedical Engineering, Capital Medical University, Beijing, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Yuan Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xinxing Fu: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shuo Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
4. Paraskevopoulos E, Anagnostopoulou A, Chalas N, Karagianni M, Bamidis P. Unravelling the multisensory learning advantage: Different patterns of within and across frequency-specific interactions drive uni- and multisensory neuroplasticity. Neuroimage 2024; 291:120582. [PMID: 38521212] [DOI: 10.1016/j.neuroimage.2024.120582]
Abstract
In learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with the behavioural and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
Affiliation(s)
- Alexandra Anagnostopoulou: School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas: Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany
- Maria Karagianni: School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis Bamidis: School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
5. Contreras-Ruston F, Guzman M, Castillo-Allendes A, Cantor-Cutiva L, Behlau M. Auditory-perceptual Assessment of Healthy and Disordered Voices Using the Voice Deviation Scale. J Voice 2024; 38:654-659. [PMID: 34903393] [DOI: 10.1016/j.jvoice.2021.10.017]
Abstract
OBJECTIVE This study aims to (1) determine cut-off values for the global severity of vocal deviation on the Visual Analog Scale (VAS) from numerical scale ratings, and (2) identify cut-off values for the different degrees of vocal deviation used by voice-specialized speech-language pathologists (SLPs). STUDY DESIGN Prospective study. METHODS Four SLPs performed the auditory-perceptual assessment using two protocols with different scales: the VAS and a 4-point numerical scale. Of the 211 voices analyzed, 147 were from female participants and 64 from males, plus 20% repeated voice samples. Participants were between 19 and 60 years of age. All were asked to count from 1 to 10 and were recorded in a sound-proof booth. For both protocols, the judges scored overall severity. One SLP was excluded from the analysis due to inconsistency during the perceptual assessment. RESULTS For normal voices and mild deviations, the overall severity cut-off score on the VAS was 21; for mild-moderate deviations, 55; and for moderate and severe deviations, 81 points. The corresponding area under the curve values were 0.725, 0.905, and 0.851. CONCLUSIONS Our results suggest that the VAS is a good instrument for voice assessment performed by Chilean SLPs. However, they point to possible differences from cut-off scores obtained in other countries, which future studies can examine.
Affiliation(s)
- Francisco Contreras-Ruston: Speech-Language Pathology and Audiology Department, Universidad de Valparaíso, San Felipe, Chile; Parlab - Perception, Attention and Representation Lab, Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Marco Guzman: Universidad de los Andes, Chile, Santiago, Chile
- Adrián Castillo-Allendes: Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, Michigan
- Lady Cantor-Cutiva: Department of Collective Health, Universidad Nacional de Colombia, Bogotá, Colombia; Program of Speech and Language Pathology, Universidad Manuela Beltrán, Bogotá, Colombia
- Mara Behlau: CEV - Centro de Estudos da Voz, São Paulo, Brazil; Speech-Language Pathology and Audiology Department, Escola Paulista de Medicina, Federal University of São Paulo, São Paulo, Brazil
6. Mahalingam S, Venkatraman Y, Boominathan P. Cross-Cultural Adaptation and Validation of Consensus Auditory Perceptual Evaluation of Voice (CAPE-V): A Systematic Review. J Voice 2024; 38:630-640. [PMID: 34879984] [DOI: 10.1016/j.jvoice.2021.10.022]
Abstract
INTRODUCTION The Consensus Auditory Perceptual Evaluation of Voice (CAPE-V) is a widely used perceptual scale for voice assessment and has been adapted into many regional languages worldwide. This systematic review critically evaluates the methodologies used to adapt and establish the CAPE-V as a valid and reliable tool. METHOD The authors searched Scopus, Google Scholar, and PubMed for studies published in English between 2002 and 2020. Studies in which the CAPE-V was translated and adapted for linguistic or cultural variations were included. Records were compiled in the Mendeley Reference Manager and screened by title and abstract before shortlisting. RESULTS The initial search yielded 3459 results; after duplicate removal, 1535 articles were analysed. Thirteen studies were shortlisted based on title/abstract screening, and a final set of ten studies was selected for review. DISCUSSION/CONCLUSION This review provides a comprehensive understanding of the challenges encountered during cross-cultural adaptation and will help future researchers choose a suitable adaptation method.
Affiliation(s)
- Shenbagavalli Mahalingam: Department of Speech Language and Hearing Sciences, Sri Ramachandra Institute of Higher Education and Research (DU), Chennai, India
- Prakash Boominathan: Department of Speech Language and Hearing Sciences, Sri Ramachandra Institute of Higher Education and Research (DU), Chennai, India
7. Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. [PMID: 37747452] [DOI: 10.1111/bjop.12683]
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Because of this direct behavioural relevance, cross-modal integration retains a degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model with a perceptual prior from empirical data to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration reported by Scheller and Sui (2022, JEP:HPP) using a lower number of events, thereby favouring the opposite illusion through an enhanced influence of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (the one more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
Affiliation(s)
- Meike Scheller: Department of Psychology, University of Aberdeen, Aberdeen, UK; Department of Psychology, Durham University, Durham, UK
- Huilin Fang: Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui: Department of Psychology, University of Aberdeen, Aberdeen, UK
8. Spiech C, Danielsen A, Laeng B, Endestad T. Oscillatory attention in groove. Cortex 2024; 174:137-148. [PMID: 38547812] [DOI: 10.1016/j.cortex.2024.02.013]
Abstract
Attention is not constant but fluctuates over time, and these attentional fluctuations may prioritize the processing of certain events over others. In music listening, the pleasurable urge to move to music (termed 'groove' by music psychologists) offers a particularly convenient case study of oscillatory attention because it engenders synchronous, oscillatory movements that vary predictably with stimulus complexity. In this study, we simultaneously recorded pupillometry and scalp electroencephalography (EEG) while participants listened to drumbeats of varying complexity, which they rated for groove afterwards. Using the intertrial phase coherence at the beat frequency, we found that pupil activity became entrained to the beat of the drumbeats during listening, and this entrained attention persisted in the EEG even as subjects imagined the drumbeats continuing through subsequent silent periods. Entrainment in both pupillometry and EEG worsened with increasing rhythmic complexity, indicating poorer sensory precision as the beat became more obscured. Additionally, sustained pupil dilations revealed the expected inverted U-shaped relationship between rhythmic complexity and groove ratings. Taken together, this work links oscillatory attention and rhythmic complexity to musical groove.
Affiliation(s)
- Connor Spiech: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway
- Anne Danielsen: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Musicology, University of Oslo, Norway
- Bruno Laeng: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway
- Tor Endestad: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Norway; Department of Psychology, University of Oslo, Norway
9. Gößwein JA, Rennies J, Winneke A, Hildebrandt A, Kollmeier B. Evaluation of adjustment behaviour in a semi-supervised self-adjustment fine-tuning procedure for hearing aids. Int J Audiol 2024; 63:313-325. [PMID: 37079087] [DOI: 10.1080/14992027.2023.2196601]
Abstract
OBJECTIVE This study investigated the adjustment behaviour of hearing aid (HA) users participating in a semi-supervised self-adjustment fine-tuning procedure for HAs, with the aim of linking behaviour to the reproducibility and duration of the adjustments. DESIGN Participants used a two-dimensional user interface to identify their preferred HA gain while listening to realistic sound scenes presented in a laboratory environment. The interface allowed simultaneous adjustment of amplitude (vertical axis) and spectral slope (horizontal axis). Participants were clustered according to their interaction with the user interface, and their search directions were analysed. STUDY SAMPLE Twenty experienced older HA users participated in this study. RESULTS By analysing the trace points of all measurements for each participant, we identified four archetypes of adjustment behaviour (curious, cautious, semi-browsing, and full-on browsing). Furthermore, participants used predominantly horizontal or vertical paths when searching for their preference. Neither the archetype, the search directions, nor the participants' technology commitment predicted the reproducibility or the duration of the adjustments. CONCLUSIONS The findings suggest that enforcing a specific adjustment behaviour or search direction is not necessary to obtain fast, reliable self-adjustments, and that no strict requirements with respect to technology commitment are needed.
Affiliation(s)
- Jonathan Albert Gößwein: Fraunhofer Institute for Digital Media Technology (IDMT), Oldenburg Branch for Hearing, Speech and Audio Technology (HSA) and Cluster of Excellence "Hearing4All", Oldenburg, Germany
- Jan Rennies: Fraunhofer Institute for Digital Media Technology (IDMT), Oldenburg Branch for Hearing, Speech and Audio Technology (HSA) and Cluster of Excellence "Hearing4All", Oldenburg, Germany
- Axel Winneke: Fraunhofer Institute for Digital Media Technology (IDMT), Oldenburg Branch for Hearing, Speech and Audio Technology (HSA) and Cluster of Excellence "Hearing4All", Oldenburg, Germany
- Andrea Hildebrandt: Department of Psychology, Carl von Ossietzky Universität Oldenburg, and Cluster of Excellence "Hearing4All", Oldenburg, Germany
- Birger Kollmeier: Fraunhofer Institute for Digital Media Technology (IDMT), Oldenburg Branch for Hearing, Speech and Audio Technology (HSA) and Cluster of Excellence "Hearing4All", Oldenburg, Germany; Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
10. Visram AS, Jackson IR, Guest H, Plack CJ, Brij S, Chaudhuri N, Munro KJ. Pre-registered controlled comparison of auditory function reveals no difference between hospitalised adults with and without COVID-19. Int J Audiol 2024; 63:300-312. [PMID: 37363933] [DOI: 10.1080/14992027.2023.2213841]
Abstract
OBJECTIVE Several viruses are known to have a negative impact on hearing health. The global prevalence of COVID-19 makes it crucial to understand whether and how SARS-CoV-2 affects hearing. Evidence to date is mixed, with studies frequently exhibiting limitations in their methodological approaches or the populations sampled, leading to a substantial risk of bias. This study addressed many of these limitations. DESIGN A comprehensive battery of measures was administered, including lab-based behavioural and physiological measures as well as self-report instruments. Performance was thoroughly assessed across the auditory system, including measures of cochlear function, neural function, and auditory perception. Hypotheses and analyses were pre-registered. STUDY SAMPLE Participants hospitalised as a result of COVID-19 (n = 57) were compared with a well-matched control group (n = 40) who had also been hospitalised but had never had COVID-19. RESULTS We found no evidence to support the hypothesis that COVID-19 is associated with deficits in auditory function on any auditory test measure. Of all the confirmatory analyses, only the self-report measure of hearing decline indicated any difference between groups. CONCLUSION The results do not support the hypothesis that COVID-19 infection has a significant long-term impact on the auditory system.
Affiliation(s)
- A S Visram: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- I R Jackson: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- H Guest: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- C J Plack: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK; Department of Psychology, Lancaster University, Lancaster, UK
- S Brij: Department of Respiratory Medicine, Manchester Royal Infirmary, Manchester University NHS Foundation Trust, Manchester, UK
- N Chaudhuri: Magee Medical School, The University of Ulster, Londonderry, UK
- K J Munro: Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK; University of Manchester NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
11. Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. [PMID: 37851369] [DOI: 10.1111/bjop.12684]
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues such as fundamental frequency (F0) and timbre. Yet how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure, or sadness either in all acoustic cues or selectively in F0 or timbre only. We then compared vocal emotion perception performance between professional or semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Musicians classified vocal emotions more accurately than non-musicians. This advantage appeared in the full and F0-modulated conditions but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
12
|
Wang X, Chen J, Zhang R, Wu Q, Fan M, Shi W, Lou G, Zhang Q. [Assessment of auditory perception of children with single-sided deafness after cochlear implantation]. Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi 2024; 38:436-441. [PMID: 38686484 DOI: 10.13201/j.issn.2096-7993.2024.05.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Indexed: 05/02/2024]
Abstract
Unilateral deafness leads to reduced speech recognition, delayed language development, and impaired spatial localization in children, with many adverse effects on their daily life and schooling. Cochlear implantation can help these children rebuild binaural hearing, and systematic postoperative audiological evaluation is particularly important for clinicians assessing hearing recovery. This study describes in detail a variety of commonly used audiological evaluations, testing procedures, and methods after cochlear implantation in children with unilateral deafness, and summarizes the related research status and findings.
Affiliation(s)
- Xuemei Wang
- Department of Otorhinolaryngology, Hangzhou Children's Hospital, Hangzhou, 310014, China
- Jiahui Chen
- Hangzhou Ren'ai Deaf Rehabilitation Research Institute
- Rui Zhang
- Department of Otolaryngology Head and Neck Surgery, Xi'an Children's Hospital
- Qiong Wu
- Department of Otolaryngology Head and Neck Surgery, Xinhua Hospital, Shanghai Jiaotong University School of Medicine
- Mengyun Fan
- Department of Otolaryngology Head and Neck Surgery, Xi'an Children's Hospital
- Wendi Shi
- Hangzhou Ren'ai Deaf Rehabilitation Research Institute
- Gaozhong Lou
- Department of Otorhinolaryngology, Hangzhou Children's Hospital, Hangzhou, 310014, China
- Qing Zhang
- Department of Otolaryngology Head and Neck Surgery, Xinhua Hospital, Shanghai Jiaotong University School of Medicine
13
Inguscio BMS, Cartocci G, Sciaraffa N, Nicastri M, Giallini I, Aricò P, Greco A, Babiloni F, Mancini P. Two are better than one: Differences in cortical EEG patterns during auditory and visual verbal working memory processing between Unilateral and Bilateral Cochlear Implanted children. Hear Res 2024; 446:109007. [PMID: 38608331 DOI: 10.1016/j.heares.2024.109007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/26/2023] [Revised: 03/28/2024] [Accepted: 04/04/2024] [Indexed: 04/14/2024]
Abstract
Despite the proven effectiveness of cochlear implant (CI) in the hearing restoration of deaf or hard-of-hearing (DHH) children, to date, extreme variability in verbal working memory (VWM) abilities is observed in both unilateral and bilateral CI user children (CIs). Although clinical experience has long observed deficits in this fundamental executive function in CIs, the cause to date is still unknown. Here, we have set out to investigate differences in brain functioning regarding the impact of monaural and binaural listening in CIs compared with normal hearing (NH) peers during a three-level difficulty n-back task undertaken in two sensory modalities (auditory and visual). The objective of this pioneering study was to identify electroencephalographic (EEG) marker pattern differences in visual and auditory VWM performances in CIs compared to NH peers and possible differences between unilateral cochlear implant (UCI) and bilateral cochlear implant (BCI) users. The main results revealed differences in theta and gamma EEG bands. Compared with hearing controls and BCIs, UCIs showed hypoactivation of theta in the frontal area during the most complex condition of the auditory task and a correlation of the same activation with VWM performance. Hypoactivation in theta was also observed, again for UCIs, in the left hemisphere when compared to BCIs and in the gamma band in UCIs compared to both BCIs and NHs. For the latter two, a correlation was found between left hemispheric gamma oscillation and performance in the audio task. These findings, discussed in the light of recent research, suggest that unilateral CI is deficient in supporting auditory VWM in DHH. At the same time, bilateral CI would allow the DHH child to approach the VWM benchmark for NH children. The present study suggests the possible effectiveness of EEG in supporting, through a targeted approach, the diagnosis and rehabilitation of VWM in DHH children.
Affiliation(s)
- Bianca Maria Serena Inguscio
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy
- Maria Nicastri
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Pietro Aricò
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Via Ariosto 125, Rome 00185, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, Rome 00161, Italy; BrainSigns Srl, Via Tirso, 14, Rome 00198, Italy; Department of Computer Science, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou 310018, China
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Viale dell'Università 31, Rome 00161, Italy
14
Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024; 38:277-295. [PMID: 37997898 PMCID: PMC11057848 DOI: 10.1080/02699931.2023.2285854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 11/13/2023] [Indexed: 11/25/2023]
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
Affiliation(s)
- Roza G. Kamiloğlu
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
15
Ciralli B, Malfatti T, Hilscher MM, Leao RN, Cederroth CR, Leao KE, Kullander K. Unraveling the role of Slc10a4 in auditory processing and sensory motor gating: Implications for neuropsychiatric disorders? Prog Neuropsychopharmacol Biol Psychiatry 2024; 131:110930. [PMID: 38160852 DOI: 10.1016/j.pnpbp.2023.110930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 11/28/2023] [Accepted: 12/23/2023] [Indexed: 01/03/2024]
Abstract
BACKGROUND Psychiatric disorders such as schizophrenia are complex and challenging to study, partly due to the lack of suitable animal models that accurately represent them. The absence of the Slc10a4 gene, which codes for a monoaminergic- and cholinergic-associated vesicular transporter protein, in knockout mice (Slc10a4-/-) leads to the accumulation of extracellular dopamine. This makes Slc10a4-/- mice a potential animal model for schizophrenia, a disorder known to be associated with altered dopamine signaling in the brain. METHODS The locomotion, auditory sensory filtering, and prepulse inhibition (PPI) of Slc10a4-/- mice were quantified and compared to wildtype (WT) littermates. Intrahippocampal electrodes were used to record auditory event-related potentials (aERPs) for quantifying sensory filtering in response to paired clicks. The channel above the aERP phase reversal was chosen for reliable comparison between animals, and aERP amplitude and latency of click responses were quantified. WT and Slc10a4-/- mice were also administered subanesthetic doses of ketamine to provoke psychomimetic behavior. RESULTS Baseline locomotion during auditory stimulation was similar between Slc10a4-/- mice and WT littermates. In WT animals, normal auditory processing was observed after i.p. saline injections; it was maintained under 5 mg/kg ketamine but disrupted by 20 mg/kg ketamine. In contrast, Slc10a4-/- mice did not show significant differences between N40 S1 and S2 amplitude responses under saline or low-dose ketamine treatment. Auditory gating was considered preserved, since the second N40 peak was consistently suppressed, but with increased latency. The P80 component showed higher amplitude, with shorter S2 latency, under saline and 5 mg/kg ketamine treatment in Slc10a4-/- mice, which was not observed in WT littermates. Prepulse inhibition was also decreased in Slc10a4-/- mice, compared to WT littermates, when the longer interstimulus interval of 100 ms was applied. CONCLUSION The responses of Slc10a4-/- mice indicate that cholinergic and monoaminergic systems participate in PPI magnitude, in the temporal coding (response latency) of the auditory sensory gating component N40, and in the amplitude of the aERP P80 component. These results suggest that Slc10a4-/- mice can be considered potential models for neuropsychiatric conditions.
Affiliation(s)
- Barbara Ciralli
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil; Department of Immunology, Genetics and Pathology, Programme in Genomics and Neurobiology, Uppsala University, Uppsala, Sweden
- Thawann Malfatti
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil; Department of Immunology, Genetics and Pathology, Programme in Genomics and Neurobiology, Uppsala University, Uppsala, Sweden; Experimental Audiology, Department of Physiology and Pharmacology, Karolinska Institutet, 171 77 Stockholm, Sweden
- Markus M Hilscher
- Institute for Analysis and Scientific Computing, Vienna University of Technology, Vienna, Austria
- Richardson N Leao
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil; Department of Immunology, Genetics and Pathology, Programme in Genomics and Neurobiology, Uppsala University, Uppsala, Sweden
- Christopher R Cederroth
- Experimental Audiology, Department of Physiology and Pharmacology, Karolinska Institutet, 171 77 Stockholm, Sweden
- Katarina E Leao
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil; Department of Immunology, Genetics and Pathology, Programme in Genomics and Neurobiology, Uppsala University, Uppsala, Sweden
- Klas Kullander
- Department of Immunology, Genetics and Pathology, Programme in Genomics and Neurobiology, Uppsala University, Uppsala, Sweden
16
Pounder Z, Eardley AF, Loveday C, Evans S. No clear evidence of a difference between individuals who self-report an absence of auditory imagery and typical imagers on auditory imagery tasks. PLoS One 2024; 19:e0300219. [PMID: 38568916 PMCID: PMC10990234 DOI: 10.1371/journal.pone.0300219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 02/25/2024] [Indexed: 04/05/2024] Open
Abstract
Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating impairments in imagery typically focus on the visual domain. However, it is possible to generate many other forms of imagery, including imagined auditory, kinesthetic, tactile, motor, taste and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements in tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure observation of imagery deficits in auditory imagery tasks in people who report reduced auditory imagery.
Affiliation(s)
- Zoë Pounder
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Alison F. Eardley
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Catherine Loveday
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Samuel Evans
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Neuroimaging, King’s College London, London, United Kingdom
17
Zhang Z. Frequency effects can modulate the neural correlates of prosodic processing in Mandarin. Neuroreport 2024; 35:399-405. [PMID: 38526973 DOI: 10.1097/wnr.0000000000002021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/27/2024]
Abstract
In tonal languages, tone perception involves the processing of both acoustic and phonological information conveyed by tonal signals. In Mandarin, in addition to four canonical full tones, there exists a group of weak syllables known as neutral tones. This study aims to investigate the impact of lexical frequency effects and prosodic information associated with neutral tones on the auditory representation of Mandarin compounds. We initially selected disyllabic compounds as targets, manipulating their lexical frequencies and prosodic structures. Subsequently, these target compounds were embedded into selected sentences and auditorily presented to native speakers. During the experiments, participants engaged in lexical decision tasks while their event-related potentials were recorded. The results showed that the auditory lexical representation of disyllabic compounds was modulated by lexical frequency effects. Rare compounds and compounds with rare first constituents elicited larger N400 effects compared to frequent compounds. Furthermore, neutral tones were found to play a role in the processing, resulting in larger N400 effects. Our findings showed significantly increased amplitudes of the N400 component, suggesting that the processing of rare compounds and compounds with neutral tones may require more cognitive resources. Additionally, we observed an interaction effect between lexical frequency and neutral tones, indicating that they could serve as determining cues in the auditory processing of disyllabic compounds.
Affiliation(s)
- Zhongpei Zhang
- Laboratory Models, Dynamics, Corpora, CNRS, University Paris Nanterre, Nanterre, France
18
Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024; 59:1770-1788. [PMID: 38230578 DOI: 10.1111/ejn.16251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2023] [Revised: 12/22/2023] [Accepted: 12/25/2023] [Indexed: 01/18/2024]
Abstract
Studies on multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own relative spatio-temporal alignment to the sound but also on that of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
19
Casilio M, Kasdan AV, Schneck SM, Entrup JL, Levy DF, Crouch K, Wilson SM. Situating word deafness within aphasia recovery: A case report. Cortex 2024; 173:96-119. [PMID: 38387377 PMCID: PMC11073474 DOI: 10.1016/j.cortex.2023.12.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 10/02/2023] [Accepted: 12/26/2023] [Indexed: 02/24/2024]
Abstract
Word deafness is a rare neurological disorder often observed following bilateral damage to superior temporal cortex and canonically defined as an auditory modality-specific deficit in word comprehension. The extent to which word deafness is dissociable from aphasia remains unclear given its heterogeneous presentation, and some have consequently posited that word deafness instead represents a stage in recovery from aphasia, where auditory and linguistic processing are affected to varying degrees and improve at differing rates. Here, we report a case of an individual (Mr. C) with bilateral temporal lobe lesions whose presentation evolved from a severe aphasia to an atypical form of word deafness, where auditory linguistic processing was impaired at the sentence level and beyond. We first reconstructed in detail Mr. C's stroke recovery through medical record review and supplemental interviewing. Then, using behavioral testing and multimodal neuroimaging, we documented a predominant auditory linguistic deficit in sentence and narrative comprehension, with markedly reduced behavioral performance and absent brain activation in the language network in the spoken modality exclusively. In contrast, Mr. C displayed near-unimpaired behavioral performance and robust brain activations in the language network for the linguistic processing of words, irrespective of modality. We argue that these findings not only support the view of word deafness as a stage in aphasia recovery but also further instantiate the important role of left superior temporal cortex in auditory linguistic processing.
Affiliation(s)
- Anna V Kasdan
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, TN, USA
- Deborah F Levy
- Vanderbilt University Medical Center, Nashville, TN, USA
- Kelly Crouch
- Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Vanderbilt University Medical Center, Nashville, TN, USA; School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, QLD, Australia
20
Jalalkamali H, Tajik A, Hatami R, Nezamabadipour H. Detecting how time is subjectively perceived based on event-related potentials (ERPs): a machine learning approach. Int J Neurosci 2024; 134:372-380. [PMID: 35848165 DOI: 10.1080/00207454.2022.2103413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 07/07/2022] [Accepted: 07/11/2022] [Indexed: 10/17/2022]
Abstract
Background and objective: Time perception is essential for the precise performance of many of our activities and for the coordination between different modalities, but it is distorted in many diseases and disorders. Event-related potentials (ERPs) have long been used to better understand how the human brain perceives time, but machine learning methods have rarely been used to detect a person's time perception from their ERPs. Methods: In this study, EEG signals were recorded while individuals performed an auditory oddball time discrimination task. After feature extraction from the ERPs, data balancing, and feature selection, machine learning models were used to distinguish the oddball durations of 400 ms and 600 ms from the standard duration of 500 ms. ERP results showed that the P3 evoked by the 600 ms oddball stimuli appeared about 200 ms later than that of the 400 ms oddball tones. Classification performance results indicated that the support vector machine (SVM) outperformed K-nearest neighbors (KNN), Random Forest, and Logistic regression models. Results: The accuracy of the SVM was 91.24%, 92.96%, and 89.9% for the three labeling modes used, respectively. Another important finding was that most features selected for classification lay in the P3 component range, supporting the observed significant effect of duration on the P3, although all N1, P2, N2, and P3 components contributed to detecting the target durations. Conclusion: These results suggest the P3 component as a potential candidate for detecting sub-second durations in future research on brain-computer interface (BCI) applications.
Affiliation(s)
- Hoda Jalalkamali
- Computer Engineering Group, Higher Education Complex of Zarand, Kerman, Iran
- Amirhossein Tajik
- Department of Electrical Engineering, College of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
- Rashid Hatami
- ICT Group, National Iranian Copper Industries Co. (NICICO), Sarcheshme, Kerman, Iran
- Hossein Nezamabadipour
- Department of Electrical Engineering, College of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
21
Strivens A, Koch I, Lavric A. Does preparation help to switch auditory attention between simultaneous voices: Effects of switch probability and prevalence of conflict. Atten Percept Psychophys 2024; 86:750-767. [PMID: 38212478 PMCID: PMC11062987 DOI: 10.3758/s13414-023-02841-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/22/2023] [Indexed: 01/13/2024]
Abstract
Switching auditory attention to one of two (or more) simultaneous voices incurs a substantial performance overhead. Whether and when this voice 'switch cost' is reduced when the listener has the opportunity to prepare in silence is not clear: findings on the effect of preparation on the switch cost range from (near) null to substantial. We sought to determine which factors are crucial for encouraging preparation and detecting its effect on the switch cost in a paradigm where participants categorized the number spoken by one of two simultaneous voices; the target voice, which changed unpredictably, was specified by a visual cue depicting the target's gender. First, we manipulated the probability of a voice switch. When 25% of trials were switches, increasing the preparation interval (50/800/1,400 ms) resulted in a substantial (~50%) reduction in switch cost. No reduction was observed when 75% of trials were switches. Second, we examined the relative prevalence of low-conflict, 'congruent' trials (where the numbers spoken by the two voices were mapped onto the same response) and high-conflict, 'incongruent' trials (where the voices afforded different responses). 'Conflict prevalence' had a strong effect on selectivity: the incongruent-congruent difference ('congruence effect') was reduced in the 66%-incongruent condition relative to the 66%-congruent condition. However, conflict prevalence did not discernibly interact with preparation or its effect on the switch cost. Thus, conditions where switches of target voice are relatively rare are especially conducive to preparation, possibly because attention is committed more strongly to (and/or disengaged less rapidly from) the perceptual features of the target voice.
Affiliation(s)
- Amy Strivens
- Institute for Psychology, RWTH Aachen University, Jägerstraße 17-19, 52066, Aachen, Germany
- Iring Koch
- Institute for Psychology, RWTH Aachen University, Jägerstraße 17-19, 52066, Aachen, Germany
- Aureliu Lavric
- Department of Psychology, University of Exeter, Exeter, UK
22
Li Y, Xia J, Zhan Y, Yang J, Naman A, Mo L, Zhou H, Zhang J, Xu G. Modality-dependent distortion effects of temporal frequency on time perception. Q J Exp Psychol (Hove) 2024; 77:846-855. [PMID: 37232399 DOI: 10.1177/17470218231181011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Time perception has been known to depend on the temporal frequency of the stimulus. Previously, the effect of temporal frequency modulation was assumed to be monotonically lengthening or shortening. However, this study shows that temporal frequency affects time perception in a non-monotonic and modality-dependent manner. Four experiments investigated the time distortion effects induced by modulation of temporal frequency across auditory and visual modalities. Critically, the temporal frequency was parametrically manipulated across four levels (steady stimulus, 10-, 20-, and 30/40-Hz intermittent auditory/visual stimulus). Experiments 1, 2, and 3 consistently showed that a 10-Hz auditory stimulus was perceived as shorter than a steady auditory stimulus. Meanwhile, as the temporal frequency increased, the perceived duration of the intermittent auditory stimulus lengthened: a 40-Hz auditory stimulus was perceived as longer than a 10-Hz auditory stimulus, but did not differ significantly from a steady one. Experiment 4 showed that, for the visual modality, a 10-Hz visual stimulus was perceived as longer than a steady stimulus, and the perceived duration lengthened as temporal frequency increased. Within the range of temporal frequencies examined, this study thus demonstrated differential distortion effects across sensory modalities.
Collapse
Affiliation(s)
- You Li
- College of Chinese Language and Culture, Jinan University, Guangzhou, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jing Xia
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yang Zhan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Juanhua Yang
- School of Entrepreneurship Education, Guangdong University of Finance & Economics, Guangzhou, China
- Abuzha Naman
- School of Psychology, South China Normal University, Guangzhou, China
- Lei Mo
- School of Psychology, South China Normal University, Guangzhou, China
- Huihui Zhou
- The Research Center for Artificial Intelligence, Peng Cheng Laboratory, Shenzhen, China
- Jinqiao Zhang
- College of Chinese Language and Culture, Jinan University, Guangzhou, China
- Guiping Xu
- College of Chinese Language and Culture, Jinan University, Guangzhou, China
23
Abstract
Many autistic children show musical interests and good musical skills, including pitch and melodic memory. Autistic children may also perceive temporal regularities in music, such as the primary beat underlying its rhythmic structure, given work showing preserved rhythm processing in the context of basic, nonverbal auditory stimuli. The temporal regularity and prediction of musical beats can potentially serve as an excellent framework for building skills in non-musical areas of growth for autistic children. We examined whether autistic children are perceptually sensitive to the primary beat of music by comparing the musical beat perception skills of autistic and neurotypical children. Twenty-three autistic children and 23 neurotypical children aged 6-13 years, with no group differences in chronological age or verbal and nonverbal mental ages, completed a musical beat perception task in which they identified whether beeps superimposed on musical excerpts were on or off the musical beat. Overall task performance was above the theoretical chance threshold of 50% but not the statistical chance threshold of 70% across groups. On-beat (versus off-beat) accuracy was higher for the autistic group but not the neurotypical group. The autistic group was as accurate as the neurotypical group at detecting beat alignments (on-beat) but less precise at detecting beat misalignments (off-beat). Perceptual sensitivity to beat alignments provides support for spared music processing among autistic children and speaks to the accessibility of using musical beats and rhythm to cultivate related skills and behaviours (e.g., language and motor abilities).
Affiliation(s)
- Hadas Dahary
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Charlotte Rimmer
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Eve-Marie Quintin
- Department of Educational and Counselling Psychology, McGill University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
24
Jeng FC, Matzdorf K, Hickman KL, Bauer SW, Carriero AE, McDonald K, Lin TH, Wang CY. Advancing Auditory Processing by Detecting Frequency-Following Responses Through a Specialized Machine Learning Model. Percept Mot Skills 2024; 131:417-431. [PMID: 38153030] [DOI: 10.1177/00315125231225767]
Abstract
In this study, we explore the feasibility and performance of detecting scalp-recorded frequency-following responses (FFRs) with a specialized machine learning (ML) model. By leveraging the feature-extraction strengths of the source separation non-negative matrix factorization (SSNMF) algorithm and its adeptness at handling limited training data, we adapted the SSNMF algorithm into a specialized ML model with a hybrid architecture to enhance FFR detection amidst background noise. We recruited 40 adults with normal hearing and evoked their scalp-recorded FFRs using the English vowel /i/ with a rising pitch contour. The model was trained on FFR-present and FFR-absent conditions, and its performance was evaluated using sensitivity, specificity, efficiency, false-positive rate, and false-negative rate metrics. This study revealed that the specialized SSNMF model achieved heightened sensitivity, specificity, and efficiency in detecting FFRs as the number of recording sweeps increased. Sensitivity exceeded 80% at 500 sweeps and remained above 89% from 1000 sweeps onwards. Specificity and efficiency likewise improved rapidly with increasing sweeps. The progressively enhanced sensitivity, specificity, and efficiency of this specialized ML model underscore its practicality and potential for broader applications. These findings have immediate implications for FFR research and clinical use, while paving the way for further advancements in the assessment of auditory processing.
Affiliation(s)
- Fuh-Cherng Jeng
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Communication Sciences and Disorders, Asia University, Taichung, Taiwan
- Katie Matzdorf
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Kassy L Hickman
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Sydney W Bauer
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Amanda E Carriero
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Kalyn McDonald
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Tzu-Hao Lin
- Biodiversity Research Center, Academia Sinica, Taipei, Taiwan
- Ching-Yuan Wang
- Department of Otolaryngology-HNS, China Medical University Hospital, Taichung, Taiwan
25
Greenlee ET, Hess LJ, Simpson BD, Finomore VS. Vigilance to Spatialized Auditory Displays: Initial Assessment of Performance and Workload. Hum Factors 2024; 66:987-1003. [PMID: 36455164] [DOI: 10.1177/00187208221139744]
Abstract
OBJECTIVE The present study was designed to evaluate human performance and workload associated with an auditory vigilance task that required spatial discrimination of auditory stimuli. BACKGROUND Spatial auditory displays have been increasingly developed and implemented in settings that require vigilance toward auditory spatial discrimination and localization (e.g., collision avoidance warnings). Research has yet to determine whether a vigilance decrement could impede performance in such applications. METHOD Participants completed a 40-minute auditory vigilance task in either a spatial discrimination condition or a temporal discrimination condition. In the spatial discrimination condition, participants differentiated sounds based on differences in spatial location. In the temporal discrimination condition, participants differentiated sounds based on differences in stimulus duration. RESULTS Correct detections and false alarms declined during the vigilance task, and each did so at a similar rate in both conditions. The overall level of correct detections did not differ significantly between conditions, but false alarms occurred more frequently in the spatial discrimination condition than in the temporal discrimination condition. NASA-TLX ratings and pupil diameter measurements indicated no differences in workload. CONCLUSION Results indicated that tasks requiring auditory spatial discrimination can induce a vigilance decrement and may result in inferior vigilance performance compared to tasks requiring discrimination of auditory duration. APPLICATION Vigilance decrements may impede performance and safety in settings that depend on sustained attention to spatial auditory displays. Display designers should also be aware that auditory displays requiring users to discriminate differences in spatial location may yield poorer discrimination performance than non-spatial displays.
Affiliation(s)
- Brian D Simpson
- Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
26
Rimmer C, Dahary H, Quintin EM. Links between musical beat perception and phonological skills for autistic children. Child Neuropsychol 2024; 30:361-380. [PMID: 37104762] [DOI: 10.1080/09297049.2023.2202902]
Abstract
Exploring non-linguistic predictors of phonological awareness, such as musical beat perception, is valuable for children who present with language difficulties and diverse support needs. Studies of the musical abilities of children on the autism spectrum show that they have average or above-average musical production and auditory processing abilities. This study aimed to explore the relationship between musical beat perception and the phonological awareness skills of children on the autism spectrum with a wide range of cognitive abilities. A total of 21 autistic children aged 6 to 11 years (M = 8.9, SD = 1.5), with full-scale IQs ranging from 52 to 105 (M = 74, SD = 16), completed a beat perception task and a phonological awareness task. Results revealed that phonological awareness and beat perception are positively correlated for children on the autism spectrum. Findings lend support to the potential use of beat and rhythm perception as a screening tool for early literacy skills, specifically phonological awareness, for children with diverse support needs, as an alternative to traditional verbal tasks that tend to underestimate the potential of children on the autism spectrum.
Affiliation(s)
- Charlotte Rimmer
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Hadas Dahary
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Eve-Marie Quintin
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- The Centre for Research on Brain, Language and Music, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
27
Meral Çetinkaya M, Konukseven Ö, İralı AE. World of sounds (Seslerin Dünyası): A mobile auditory training game for children with cochlear implants. Int J Pediatr Otorhinolaryngol 2024; 179:111908. [PMID: 38461681] [DOI: 10.1016/j.ijporl.2024.111908]
Abstract
OBJECTIVES The aim of this study was to develop a game-based mobile auditory training application for children aged 3-5 years using cochlear implants and to evaluate its usability. METHODS Four games were developed within the World of Sounds application: the crucible sound for auditory awareness, mole hunting for auditory discrimination, find the sound for auditory recognition, and choo-choo for auditory comprehension. The prototype was administered to 20 children with normal hearing and 20 children with cochlear implants, all aged 3-5 years. The participants were asked to fill out the Game Evaluation Form for Children. In addition, 40 parents were included in the study and completed the Evaluation Form for the Application. RESULTS According to the form, at least 80% of children using cochlear implants, and all children in the normal hearing group, responded well to the usability factors. All factors were rated highly usable by the parents of the children using cochlear implants. In the normal hearing group, the usefulness and motivation factors were rated above moderate, and the other factors were rated highly usable. In the mole-hunting game, there was no significant difference between the groups at the easy level of the first sub-section (p > 0.05). There was a significant difference between the groups in the other sub-sections of the mole-hunting game and in all sub-sections of the crucible sound, find the sound, and choo-choo games (p < 0.05). While there was no correlation between duration of cochlear implant use and ADSI scores or the third sub-section of the crucible sound game (p > 0.05), a correlation was found in the other sub-sections of the crucible sound game and in all sub-sections of the mole hunting, find the sound, and choo-choo games (p < 0.05). CONCLUSION The World of Sounds application can serve as an accessible option to support traditional auditory rehabilitation for children with cochlear implants.
Affiliation(s)
- Merve Meral Çetinkaya
- Department of Audiology, Faculty of Health Sciences, Istanbul Aydin University, Istanbul, Turkey
- Özlem Konukseven
- Department of Audiology, Faculty of Health Sciences, Istanbul Aydin University, Istanbul, Turkey
- Ali Efe İralı
- Department of Cartoon and Animation, Faculty of Fine Arts, Istanbul Aydin University, Istanbul, Turkey
28
Gu J, Deng K, Luo X, Ma W, Tang X. Investigating the different mechanisms in related neural activities: a focus on auditory perception and imagery. Cereb Cortex 2024; 34:bhae139. [PMID: 38629796] [DOI: 10.1093/cercor/bhae139]
Abstract
Neuroimaging studies have shown that the neural representation of imagery is closely related to the perception modality. However, the clear experiential differences between perception and imagery indicate substantial differences in their neural mechanisms, which cannot be explained by the simple view that imagery is a form of weak perception. Given the importance of functional integration across brain regions in neural activity, we conducted correlation analyses of neural activity in brain regions jointly activated by auditory imagery and perception and obtained brain functional connectivity (FC) networks with a consistent structure. However, the connection values between areas in the superior temporal gyrus and the right precentral cortex were significantly higher in auditory perception than in the imagery modality. In addition, modality decoding based on FC patterns showed that the FC networks of auditory imagery and perception were significantly distinguishable. Subsequently, voxel-level FC analysis further identified the regions containing voxels with significant connectivity differences between the two modalities. This study clarifies the correlations and differences between auditory imagery and perception in terms of brain information interaction and provides a new perspective for investigating the neural mechanisms of different modal information representations.
Affiliation(s)
- Jin Gu
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Kexin Deng
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xiaoqi Luo
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Wanli Ma
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
- Xuegang Tang
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, No. 999, Xi'an Road, Pidu District, Chengdu, China
29
Tune S, Obleser J. Neural attentional filters and behavioural outcome follow independent individual trajectories over the adult lifespan. eLife 2024; 12:RP92079. [PMID: 38470243] [DOI: 10.7554/elife.92079]
Abstract
Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? Modelling electroencephalographic and behavioural data from N = 105 ageing individuals (39-82 y), we here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories. First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results urge caution in using attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: under a combination of modelling strategies, neither neural filtering at T1 nor its 2-year change predicted individual 2-year behavioural change.
Affiliation(s)
- Sarah Tune
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
30
Roark CL, Thakkar V, Chandrasekaran B, Centanni TM. Auditory Category Learning in Children With Dyslexia. J Speech Lang Hear Res 2024; 67:974-988. [PMID: 38354099] [PMCID: PMC11001431] [DOI: 10.1044/2023_jslhr-23-00361]
Abstract
PURPOSE Developmental dyslexia is proposed to involve selective procedural memory deficits with intact declarative memory. Recent research in the domain of category learning has demonstrated that adults with dyslexia have selective deficits in Information-Integration (II) category learning, which is proposed to rely on procedural learning mechanisms, and unaffected Rule-Based (RB) category learning, which is proposed to rely on declarative, hypothesis-testing mechanisms. Importantly, learning mechanisms also change across development, with distinct developmental trajectories for procedural and declarative learning. It is unclear how dyslexia in childhood should influence auditory category learning, a critical skill for speech perception and reading development. METHOD We examined auditory category learning performance and strategies in 7- to 12-year-old children with dyslexia (n = 25; nine females, 16 males) and typically developing controls (n = 25; 13 females, 12 males). Participants learned nonspeech auditory categories of spectrotemporal ripples that could be optimally learned either with RB selective attention to the temporal modulation dimension or with procedural integration of information across spectral and temporal dimensions. We statistically compared performance using mixed-model analyses of variance and identified strategies using decision-bound computational models. RESULTS We found that children with dyslexia have an apparent selective RB category learning deficit, rather than the selective II learning deficit observed in prior work with adults with dyslexia. CONCLUSION These results suggest that the important skill of auditory category learning is impacted in children with dyslexia and that, throughout development, individuals with dyslexia may develop compensatory strategies that preserve declarative learning while difficulties in procedural learning emerge. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25148519.
Affiliation(s)
- Casey L. Roark
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University, PA
- Vishal Thakkar
- Department of Psychology, Texas Christian University, Fort Worth
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University, PA
31
Crespo-Bojorque P, Cauvet E, Pallier C, Toro JM. Recognizing structure in novel tunes: differences between human and rats. Anim Cogn 2024; 27:17. [PMID: 38429431] [PMCID: PMC10907461] [DOI: 10.1007/s10071-024-01848-8]
Abstract
A central feature of music is the hierarchical organization of its components. Musical pieces are not a simple concatenation of chords but are characterized by rhythmic and harmonic structures. Here, we explored whether sensitivity to musical structure might emerge in the absence of any experience with musical stimuli. To this end, we tested whether rats detect the difference between structured and unstructured musical excerpts and compared their performance with that of humans. Structured melodies were excerpts of Mozart's sonatas. Unstructured melodies were created by recombining fragments of different sonatas. We trained listeners (both human participants and Long-Evans rats) with a set of structured and unstructured excerpts, and tested them with completely novel excerpts they had not heard before. After hundreds of training trials, rats were able to tell apart novel structured from unstructured melodies. Human listeners required only a few trials to reach better performance than rats. Interestingly, human performance increased when tonality changes were included, while rat performance decreased to chance. Our results suggest that, with enough training, rats might learn to discriminate the acoustic differences that distinguish hierarchical music structures from unstructured excerpts. More importantly, the results point toward species-specific adaptations in how tonality is processed.
Affiliation(s)
- Elodie Cauvet
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
- DIS Study Abroad in Scandinavia, Stockholm, Sweden
- Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
- Juan M Toro
- Universitat Pompeu Fabra, C. Ramon Trias Fargas, 25-27, CP. 08005, Barcelona, Spain
- Institució Catalana de Recerca I Estudis Avançats (ICREA), Barcelona, Spain
32
Jiam NT, Formeister EJ, Chari DA, David AP, Alsoudi AF, Purnell S, Jiradejvong P, Limb CJ. Music Perception in Bone-Anchored Hearing Implant Users. Laryngoscope 2024; 134:1381-1387. [PMID: 37665102] [DOI: 10.1002/lary.30919]
Abstract
OBJECTIVE Music is a highly complex acoustic stimulus in both spectral and temporal content. Accurate representation and delivery of high-fidelity information are essential for music perception. However, it is unclear how well bone-anchored hearing implants (BAHIs) transmit music. The study objective was to establish music perception performance baselines for BAHI users and normal hearing (NH) listeners and compare outcomes between the cohorts. METHODS A case-controlled, cross-sectional study was conducted among 18 BAHI users and 11 NH controls. Music perception was assessed via performance on seven major musical element tasks: pitch discrimination, melodic contour identification, rhythmic clocking, basic tempo discrimination, timbre identification, polyphonic pitch detection, and harmonic chord discrimination. RESULTS BAHI users performed comparably well on all music perception tasks with their device relative to the unilateral condition with their better-hearing ear, and BAHI performance did not differ statistically significantly from that of NH listeners. BAHI users performed as well as, if not better than, NH listeners when using their control contralateral ear; there was no significant difference between the two groups except for the rhythmic timing task (BAHI non-implanted ear 69% [95% CI: 62%-75%] vs. NH 56% [95% CI: 49%-63%], p = 0.02) and the basic tempo task (BAHI non-implanted ear 80% [95% CI: 65%-95%] vs. NH 75% [95% CI: 68%-82%], p = 0.03). CONCLUSIONS This study represents the first comprehensive study of basic music perception performance in BAHI users. Our results demonstrate that BAHI users perform as well with their implanted ear as with their contralateral better-hearing ear and NH controls in the major elements of music perception. LEVEL OF EVIDENCE 3 Laryngoscope, 134:1381-1387, 2024.
Affiliation(s)
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Eric J Formeister
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Divya A Chari
- Department of Otolaryngology, University of Massachusetts School of Medicine, Worcester, Massachusetts, USA
- Abel P David
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Amer F Alsoudi
- Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA
- Stephanie Purnell
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Patpong Jiradejvong
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
33
Alnes SL, Bächlin LZM, Schindler K, Tzovara A. Neural complexity and the spectral slope characterise auditory processing in wakefulness and sleep. Eur J Neurosci 2024; 59:822-841. [PMID: 38100263] [DOI: 10.1111/ejn.16203]
Abstract
Auditory processing and the complexity of neural activity can both indicate residual consciousness levels and differentiate states of arousal. However, how measures of neural signal complexity manifest in neural activity following environmental stimulation and, more generally, how the electrophysiological characteristics of auditory responses change in states of reduced consciousness remain under-explored. Here, we tested the hypothesis that measures of neural complexity and the spectral slope would discriminate stages of sleep and wakefulness not only in baseline electroencephalography (EEG) activity but also in EEG signals following auditory stimulation. High-density EEG was recorded in 21 participants to determine the spatial relationship between these measures and between EEG recorded pre- and post-auditory stimulation. Results showed that the complexity and the spectral slope in the 2-20 Hz range discriminated between sleep stages and had a high correlation in sleep. In wakefulness, complexity was strongly correlated to the 20-40 Hz spectral slope. Auditory stimulation resulted in reduced complexity in sleep compared to the pre-stimulation EEG activity and modulated the spectral slope in wakefulness. These findings confirm our hypothesis that electrophysiological markers of arousal are sensitive to sleep/wake states in EEG activity during baseline and following auditory stimulation. Our results have direct applications to studies using auditory stimulation to probe neural functions in states of reduced consciousness.
Affiliation(s)
- Sigurd L Alnes
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Zentrum für Experimentelle Neurologie, Department of Neurology, Inselspital University Hospital Bern, Bern, Switzerland
- Lea Z M Bächlin
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Kaspar Schindler
- Sleep-Wake-Epilepsy Center, NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Athina Tzovara
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Zentrum für Experimentelle Neurologie, Department of Neurology, Inselspital University Hospital Bern, Bern, Switzerland
- Sleep-Wake-Epilepsy Center, NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
34
Takasago M, Kunii N, Fujitani S, Ishishita Y, Tada M, Kirihara K, Komatsu M, Uka T, Shimada S, Nagata K, Kasai K, Saito N. Auditory prediction errors in sound frequency and duration generated different cortical activation patterns in the human brain: an ECoG study. Cereb Cortex 2024; 34:bhae072. [PMID: 38466116] [DOI: 10.1093/cercor/bhae072]
Abstract
Sound frequency and duration are essential auditory components. The brain perceives deviations from the preceding sound context as prediction errors, allowing efficient reactions to the environment. Additionally, the prediction error response to duration change is reduced in the initial stages of psychotic disorders. To compare the spatiotemporal profiles of responses to prediction errors, we conducted a human electrocorticography study, with special attention to high gamma power, in 13 participants who completed both frequency and duration oddball tasks. Remarkable activation in the bilateral superior temporal gyri was observed in both the frequency and duration oddball tasks, suggesting their association with prediction errors. However, the response to deviant stimuli in the duration oddball task exhibited a second peak, which resulted in a bimodal response. Furthermore, deviant stimuli in the frequency oddball task elicited a significant response in the inferior frontal gyrus that was not observed in the duration oddball task. These spatiotemporal differences within the parasylvian cortical network could account for our efficient reactions to changes in sound properties. The findings of this study may contribute to unveiling auditory processing and elucidating the pathophysiology of psychiatric disorders.
Affiliation(s)
- Megumi Takasago
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Naoto Kunii
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
  - Department of Neurosurgery, Jichi Medical University, Shimotsuke 329-0498, Japan
- Shigeta Fujitani
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Yohei Ishishita
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
  - Department of Neurosurgery, Jichi Medical University, Shimotsuke 329-0498, Japan
- Mariko Tada
  - Department of Neuropsychiatry, The University of Tokyo, Tokyo 113-0033, Japan
  - Office for Mental Health Support, Center for Research on Counseling and Support Services, The University of Tokyo, Tokyo 113-0033, Japan
- Kenji Kirihara
  - Department of Neuropsychiatry, The University of Tokyo, Tokyo 113-0033, Japan
  - Disability Services Office, The University of Tokyo, Tokyo 113-0033, Japan
- Misako Komatsu
  - Institution of Innovative Research, Tokyo Institute of Technology, Tokyo 226-8503, Japan
  - Laboratory for Molecular Analysis of Higher Brain Function, Center for Brain Science, RIKEN, Saitama 351-0198, Japan
- Takanori Uka
  - Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, Yamanashi 409-3898, Japan
- Seijiro Shimada
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Keisuke Nagata
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
- Kiyoto Kasai
  - Department of Neuropsychiatry, The University of Tokyo, Tokyo 113-0033, Japan
  - The International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo Institutes for Advanced Study (UTIAS), Tokyo 113-0033, Japan
- Nobuhito Saito
  - Department of Neurosurgery, The University of Tokyo, Tokyo 113-0033, Japan
35
Bureš Z, Profant O, Sommerhalder N, Skarnitzl R, Fuksa J, Meyer M. Speech intelligibility and its relation to auditory temporal processing in Czech and Swiss German subjects with and without tinnitus. Eur Arch Otorhinolaryngol 2024; 281:1589-1595. [PMID: 38175264] [DOI: 10.1007/s00405-023-08398-8]
Abstract
PURPOSE Previous studies have shown that levels for 50% speech intelligibility in quiet and in noise differ across languages. Here, we aimed to find out whether these differences may relate to different auditory processing of temporal sound features in different languages, and to determine the influence of tinnitus on speech comprehension in different languages. METHODS We measured speech intelligibility under various conditions (words in quiet, sentences in babble noise, interrupted sentences) along with tone detection thresholds in quiet (PTA) and in noise (PTAnoise), gap detection thresholds (GDT), and detection thresholds for frequency modulation (FMT), and compared them between Czech and Swiss subjects matched in mean age and PTA. RESULTS The Swiss subjects exhibited higher speech reception thresholds in quiet, a higher threshold speech-to-noise ratio, and a shallower slope of the performance-intensity function for the words in quiet. Importantly, the intelligibility of temporally gated speech was similar in the Czech and Swiss subjects. The PTAnoise, GDT, and FMT were similar in the two groups. The Czech subjects exhibited correlations of the speech tests with GDT and FMT, which was not the case in the Swiss group. Qualitatively, the results of the comparisons between the Swiss and Czech populations were not influenced by the presence of subjective tinnitus. CONCLUSION The results support the notion of language-specific differences in speech comprehension, which persist also in tinnitus subjects and show different associations with elementary measures of auditory temporal processing.
Affiliation(s)
- Zbyněk Bureš
  - Department of Otorhinolaryngology, Third Faculty of Medicine, University Hospital Královské Vinohrady, Charles University, Prague, Czech Republic
  - Department of Cognitive Systems and Neurosciences, Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague, Jugoslávských partyzánů 1580/3, 160 00, Prague 6, Czech Republic
- Oliver Profant
  - Department of Otorhinolaryngology, Third Faculty of Medicine, University Hospital Královské Vinohrady, Charles University, Prague, Czech Republic
  - Department of Auditory Neuroscience, Institute of Experimental Medicine, Czech Academy of Sciences, Prague, Czech Republic
- Nick Sommerhalder
  - Evolutionary Neuroscience of Language, Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Radek Skarnitzl
  - Institute of Phonetics, Faculty of Arts, Charles University, Prague, Czech Republic
- Jakub Fuksa
  - Department of Otorhinolaryngology, Third Faculty of Medicine, University Hospital Královské Vinohrady, Charles University, Prague, Czech Republic
  - Department of Auditory Neuroscience, Institute of Experimental Medicine, Czech Academy of Sciences, Prague, Czech Republic
- Martin Meyer
  - Evolutionary Neuroscience of Language, Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
36
Temboury-Gutierrez M, Encina-Llamas G, Dau T. Predicting early auditory evoked potentials using a computational model of auditory-nerve processing. J Acoust Soc Am 2024; 155:1799-1812. [PMID: 38445986] [DOI: 10.1121/10.0025136]
Abstract
Non-invasive electrophysiological measures, such as auditory evoked potentials (AEPs), play a crucial role in diagnosing auditory pathology. However, the relationship between AEP morphology and cochlear degeneration remains complex and not well understood. Dau [J. Acoust. Soc. Am. 113, 936-950 (2003)] proposed a computational framework for modeling AEPs that utilized a nonlinear auditory-nerve (AN) model followed by a linear unitary response function. While the model captured some important features of the measured AEPs, it also exhibited several discrepancies in response patterns compared to the actual measurements. In this study, an enhanced AEP modeling framework is presented, incorporating an improved AN model, and the conclusions from the original study were reevaluated. Simulation results with transient and sustained stimuli demonstrated accurate auditory brainstem responses (ABRs) and frequency-following responses (FFRs) as a function of stimulation level, although wave-V latencies remained too short, similar to the original study. When compared to physiological responses in animals, the revised model framework showed a more accurate balance between the contributions of auditory-nerve fibers (ANFs) at on- and off-frequency regions to the predicted FFRs. These findings emphasize the importance of cochlear processing in brainstem potentials. This framework may provide a valuable tool for assessing human AN models and simulating AEPs for various subtypes of peripheral pathologies, offering opportunities for research and clinical applications.
Affiliation(s)
- Miguel Temboury-Gutierrez
  - Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, DK-2800, Denmark
- Gerard Encina-Llamas
  - Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Copenhagen, DK-2100, Denmark
  - Faculty of Medicine, University of Vic-Central University of Catalonia (UVic-UCC), Vic, 08500, Catalonia, Spain
- Torsten Dau
  - Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, DK-2800, Denmark
  - Copenhagen Hearing and Balance Center, Ear, Nose and Throat (ENT) and Audiology Clinic, Rigshospitalet, Copenhagen University Hospital, Copenhagen, DK-2100, Denmark
37
Parnas J, Yttri JE, Urfer-Parnas A. Phenomenology of auditory verbal hallucination in schizophrenia: An erroneous perception or something else? Schizophr Res 2024; 265:83-88. [PMID: 37024418] [DOI: 10.1016/j.schres.2023.03.045]
Abstract
This study presents phenomenological features of auditory verbal hallucinations (AVH) in schizophrenia and associated anomalies of experience. The purpose is to compare the lived experience of AVH to the official definition of hallucinations as a perception without an object. Furthermore, we wish to explore the clinical and research implications of the phenomenological approach to AVH. Our exposition is based on classic texts on AVH, recent phenomenological studies, and our clinical experience. AVH differ on several dimensions from ordinary perception. Only a minority of schizophrenia patients experience AVH localized externally. Thus, the official definition of hallucinations does not fit the AVH in schizophrenia. AVH are associated with several anomalies of subjective experience (self-disorders), and the AVH must be considered a product of self-fragmentation. We discuss the implications with respect to the definition of hallucination, the clinical interview, the conceptualization of a psychotic state, and potential targets of pathogenetic research.
Affiliation(s)
- Josef Parnas
  - Center for Subjectivity Research, University of Copenhagen, DK-2300 Copenhagen S, Denmark
  - Mental Health Centre Glostrup, University Hospital of Copenhagen, DK-2605 Brøndby, Denmark
  - Faculty of Health and Medical Sciences, University of Copenhagen, DK-2200 Copenhagen N, Denmark
- Janne-Elin Yttri
  - Mental Health Centre Amager, University Hospital of Copenhagen, DK-1610 Copenhagen V, Denmark
- Annick Urfer-Parnas
  - Faculty of Health and Medical Sciences, University of Copenhagen, DK-2200 Copenhagen N, Denmark
  - Mental Health Centre Amager, University Hospital of Copenhagen, DK-1610 Copenhagen V, Denmark
38
Borjigin A, Bakst S, Anderson K, Litovsky RY, Niziolek CA. Discrimination and sensorimotor adaptation of self-produced vowels in cochlear implant users. J Acoust Soc Am 2024; 155:1895-1908. [PMID: 38456732] [DOI: 10.1121/10.0025063]
Abstract
Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have helped over a million people restore access to auditory feedback, which significantly improves speech production. However, there is substantial variability in outcomes. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and make adjustments to their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested can detect small, within-channel differences in their self-produced vowels, and they can utilize this auditory feedback towards speech adaptation. An acoustic hearing control group showed better sensitivity to the shifts in vowels, even in CI-simulated speech, and elicited more robust speech adaptation behavior than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.
Affiliation(s)
- Agudemu Borjigin
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Sarah Bakst
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Katla Anderson
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Ruth Y Litovsky
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Caroline A Niziolek
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
39
Liu J, Stohl J, Overath T. Hidden hearing loss: Fifteen years at a glance. Hear Res 2024; 443:108967. [PMID: 38335624] [DOI: 10.1016/j.heares.2024.108967]
Abstract
Hearing loss affects approximately 18% of the population worldwide. Hearing difficulties in noisy environments without accompanying audiometric threshold shifts likely affect an even larger percentage of the global population. One of the potential causes of hidden hearing loss is cochlear synaptopathy, the loss of synapses between inner hair cells (IHC) and auditory nerve fibers (ANF). These synapses are the most vulnerable structures in the cochlea to noise exposure or aging. The loss of synapses causes auditory deafferentation, i.e., the loss of auditory afferent information, whose downstream effect is the loss of information that is sent to higher-order auditory processing stages. Understanding the physiological and perceptual effects of this early auditory deafferentation might inform interventions to prevent later, more severe hearing loss. In the past decade, a large body of work has been devoted to better understand hidden hearing loss, including the causes of hidden hearing loss, their corresponding impact on the auditory pathway, and the use of auditory physiological measures for clinical diagnosis of auditory deafferentation. This review synthesizes the findings from studies in humans and animals to answer some of the key questions in the field, and it points to gaps in knowledge that warrant more investigation. Specifically, recent studies suggest that some electrophysiological measures have the potential to function as indicators of hidden hearing loss in humans, but more research is needed for these measures to be included as part of a clinical test battery.
Affiliation(s)
- Jiayue Liu
  - Department of Psychology and Neuroscience, Duke University, Durham, USA
- Joshua Stohl
  - North American Research Laboratory, MED-EL Corporation, Durham, USA
- Tobias Overath
  - Department of Psychology and Neuroscience, Duke University, Durham, USA
40
Zhao S, Ma F, Xie J, Zhou Y, Feng C, Feng W. The stimulus-driven and representation-driven cross-modal attentional spreading are both modulated by audiovisual temporal synchrony. Psychophysiology 2024; 61:e14527. [PMID: 38243583] [DOI: 10.1111/psyp.14527]
Abstract
Multisensory integration and attention can interact in a way that attention to the visual constituent of a multisensory object results in an attentional spreading to its ignored auditory constituent, which can be either stimulus-driven or representation-driven depending on whether the object's visual constituent receives extra representation-based selective attention. Previous research using simple unrelated audiovisual combinations has shown that the stimulus-driven attentional spreading is contingent on audiovisual temporal simultaneity. However, little is known about whether this temporal constraint applies also to the representation-driven attentional spreading, and whether it holds for the stimulus-driven process elicited by real-life multisensory objects. The current event-related potential study investigated these questions by systematically manipulating the visual-to-auditory stimulus onset asynchrony (SOA: 0/100/300 ms) in an object-selective visual recognition task wherein the representation-driven and stimulus-driven spreading processes, measured as two distinct auditory negative difference (Nd) components, could be isolated independently. Our results showed that both the representation-driven and stimulus-driven Nds decreased as the SOA increased. Interestingly, the representation-driven Nd was completely absent, whereas the stimulus-driven Nd was still robust, when the auditory constituents were delayed by 300 ms. These findings not only indicate that the role of audiovisual simultaneity in the representation-driven attentional spreading has been underestimated, but also suggest that learned associations between the unisensory constituents of real-life objects render the stimulus-driven attentional spreading more tolerant of audiovisual asynchrony.
Affiliation(s)
- Song Zhao
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Fangfang Ma
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Jimei Xie
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Yuxin Zhou
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Chengzhi Feng
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
- Wenfeng Feng
  - Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu, China
  - Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, Jiangsu, China
41
Noyce AL, Varghese L, Mathias SR, Shinn-Cunningham BG. Perceptual organization and task demands jointly shape auditory working memory capacity. JASA Express Lett 2024; 4:034402. [PMID: 38526127] [PMCID: PMC10966505] [DOI: 10.1121/10.0025392]
Abstract
Listeners performed two different tasks in which they remembered short sequences comprising either complex tones (generally heard as one melody) or everyday sounds (generally heard as separate objects). In one, listeners judged whether a probe item had been present in the preceding sequence. In the other, they judged whether a second sequence of the same items was identical in order to the preceding sequence. Performance on the first task was higher for everyday sounds; performance on the second was higher for complex tones. Perceptual organization strongly shapes listeners' memory for sounds, with implications for real-world communication.
Affiliation(s)
- Abigail L Noyce
  - Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
- Leonard Varghese
  - Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Samuel R Mathias
  - Department of Psychiatry, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts 02115
42
Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. [PMID: 38160651] [DOI: 10.1016/j.cognition.2023.105696]
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech in noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
Affiliation(s)
- Francesco Caprini
  - Department of Psychological Sciences, Birkbeck, University of London, UK
- Sijia Zhao
  - Department of Experimental Psychology, University of Oxford, UK
- Maria Chait
  - University College London (UCL) Ear Institute, UK
- Trevor Agus
  - School of Arts, English and Languages, Queen's University Belfast, UK
- Ulrich Pomper
  - Department of Cognition, Emotion, and Methods in Psychology, Universität Wien, Austria
- Adam Tierney
  - Department of Psychological Sciences, Birkbeck, University of London, UK
- Fred Dick
  - Department of Experimental Psychology, University College London (UCL), UK
43
Lankinen K, Wang R, Tian Q, Wang QM, Perry BJ, Green JR, Kimberley TJ, Ahveninen J, Li S. Individualized white matter connectivity of the articulatory pathway: An ultra-high field study. Brain Lang 2024; 250:105391. [PMID: 38354542] [PMCID: PMC10940181] [DOI: 10.1016/j.bandl.2024.105391]
Abstract
In current sensorimotor theories pertaining to speech perception, there is a notable emphasis on the involvement of the articulatory-motor system in the processing of speech sounds. Using ultra-high field diffusion-weighted imaging at 7 Tesla, we visualized the white matter tracts connected to areas activated during a simple speech-sound production task in 18 healthy right-handed adults. Regions of interest for white matter tractography were individually determined through 7T functional MRI (fMRI) analyses, based on activations during silent vocalization tasks. These precentral seed regions, activated during the silent production of a lip-vowel sound, demonstrated anatomical connectivity with posterior superior temporal gyrus areas linked to the auditory perception of phonetic sounds. Our study provides a macrostructural foundation for understanding connections in speech production and underscores the central role of the articulatory motor system in speech perception. These findings highlight the value of ultra-high field 7T MR acquisition in unraveling the neural underpinnings of speech.
Affiliation(s)
- Kaisu Lankinen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
  - Harvard Medical School, Boston, MA, United States
- Ruopeng Wang
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Qiyuan Tian
  - Harvard Medical School, Boston, MA, United States
- Qing Mei Wang
  - Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, the teaching affiliate of Harvard Medical School, Charlestown, MA, United States
- Bridget J Perry
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, United States
- Jordan R Green
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, United States
- Teresa J Kimberley
  - Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA, United States
- Jyrki Ahveninen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
  - Harvard Medical School, Boston, MA, United States
- Shasha Li
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
  - Harvard Medical School, Boston, MA, United States
44
Deschamps ML, Sanderson P, Waxenegger H, Mohamed I, Loeb RG. Auditory Sequences Presented With Spearcons Support Better Multiple Patient Monitoring Than Single-Patient Alarms: A Preclinical Simulation. Hum Factors 2024; 66:872-890. [PMID: 35934986] [DOI: 10.1177/00187208221116949]
Abstract
OBJECTIVE A study of auditory displays for simulated patient monitoring compared the effectiveness of two sound categories (alarm sounds indicating general risk categories from the international alarm standard IEC 60601-1-8 versus event-specific sounds according to the type of nursing unit) and two configurations (single-patient alarms versus multi-patient sequences). BACKGROUND Fieldwork in speciality-focused high dependency units (HDUs) indicated that auditory alarms are ambiguous and do not identify which patient has a problem. We tested whether participants perform better using auditory displays that identify the relevant patient and problem. METHOD During simulated monitoring of four patients in a respiratory HDU, 60 non-clinicians heard either (a) IEC risk categories as single-patient alarm sounds, (b) event-specific categories as single-patient alarm sounds, (c) IEC risk categories in multi-patient sequences, or (d) event-specific categories in multi-patient sequences. Participants performed a perceptual-motor task while monitoring patients; after detecting abnormal events, they identified the patient and the event. RESULTS Participants hearing multi-patient sequences made fewer wrong patient identifications than participants hearing single-patient alarms. Advantages of event-specific categories emerged when IEC risk category sounds indicated more than one potential event. Even when IEC and event-specific sounds indicated the same unique event, spearcons supported better event identification than auditory icon sounds did. CONCLUSION Auditory displays that unambiguously convey which patient is having what problem dramatically improve monitoring performance in a preclinical HDU simulation. APPLICATION Time-compressed speech supports the development of the detailed risk categories needed in specific HDU contexts, and multi-patient sound sequences allow the wellbeing of multiple patients to be monitored.
Affiliation(s)
- Robert G Loeb
  - The University of Queensland, Brisbane, Australia
  - University of Florida, Gainesville, USA
45
De Souza J, Overy K. Embodied playfulness in musical synchrony: Comment on "musical engagement as a duet of tight synchrony and loose interpretability" by Tal-Chen Rabinowitch. Phys Life Rev 2024; 48:167-168. [PMID: 38244477] [DOI: 10.1016/j.plrev.2023.11.009]
Affiliation(s)
- Jonathan De Souza
  - Don Wright Faculty of Music, University of Western Ontario
  - Brain and Mind Institute, University of Western Ontario
- Katie Overy
  - Reid School of Music, Edinburgh College of Art, University of Edinburgh
  - Edinburgh Neuroscience, University of Edinburgh
46
Chang YJ, Han JY, Chu WC, Li LPH, Lai YH. Enhancing music recognition using deep learning-powered source separation technology for cochlear implant users. J Acoust Soc Am 2024; 155:1694-1703. [PMID: 38426839] [DOI: 10.1121/10.0025057]
Abstract
The cochlear implant (CI) is currently the vital technological device for restoring hearing to deaf patients and greatly enhances their listening experience. Unfortunately, it performs poorly for music listening because of the insufficient number of electrodes and inaccurate identification of music features. Therefore, this study applied source separation technology with a self-adjustment function to enhance the music listening benefits for CI users. In the objective analysis, the proposed method achieved source-to-distortion, source-to-interference, and source-to-artifact ratios of 4.88, 5.92, and 15.28 dB, respectively, significantly better than the Demucs baseline model. In the subjective analysis, it scored higher than the traditional baseline method VIR6 (vocal-to-instrument ratio, 6 dB) by approximately 28.1 and 26.4 points (out of 100) in the multi-stimulus test with hidden reference and anchor (MUSHRA), respectively. The experimental results showed that the proposed method can help CI users identify music in a live concert, and the personal self-fitting signal separation method outperformed all default baselines (vocal-to-instrument ratio of 6 dB or 0 dB). This finding suggests that the proposed system is a promising method for enhancing the music listening benefits for CI users.
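For orientation, the source-to-distortion ratio (SDR) quoted above is, in its plainest form, the ratio of reference-signal power to residual-error power in decibels. The sketch below shows that plain definition only; the BSS Eval metrics used in this literature further decompose the error into interference and artifact terms, and the function name here is illustrative, not from the paper's code.

```python
import numpy as np

def sdr_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Plain source-to-distortion ratio: 10*log10(signal power / error power)."""
    error = estimate - reference
    return float(10.0 * np.log10(np.sum(reference**2) / np.sum(error**2)))

# Example: an estimate whose error power is 1/100 of the signal power
# scores exactly 20 dB.
ref = np.array([3.0, 4.0])         # signal power 25
est = np.array([3.3, 4.4])         # error [0.3, 0.4], power 0.25
print(round(sdr_db(ref, est), 2))  # → 20.0
```

Higher is better: the 4.88 dB SDR reported above means the separated vocal track's energy exceeds its residual distortion by roughly a factor of three.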
Affiliation(s)
- Yuh-Jer Chang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ji-Yan Han
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Chung Chu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lieber Po-Hung Li
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Otolaryngology, Cheng Hsin General Hospital, Taipei, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
- Institute of Brain Science, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Medical Device Innovation Translation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan
47
Mårup SH, Kleber BA, Møller C, Vuust P. When direction matters: Neural correlates of interlimb coordination of rhythm and beat. Cortex 2024; 172:86-108. [PMID: 38241757 DOI: 10.1016/j.cortex.2023.11.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Revised: 04/11/2023] [Accepted: 11/09/2023] [Indexed: 01/21/2024]
Abstract
In a previous experiment, we found evidence for a bodily hierarchy governing interlimb coordination of rhythm and beat, using five effectors: 1) Left foot, 2) Right foot, 3) Left hand, 4) Right hand and 5) Voice. The hierarchy implies that, during simultaneous rhythm and beat performance using combinations of two of these effectors, performing the rhythm with an effector that has a higher number than the beat effector is significantly easier than vice versa. To investigate the neural underpinnings of this proposed bodily hierarchy, we here scanned 46 professional musicians using fMRI as they performed a rhythmic pattern with one effector while keeping the beat with another. The conditions combined the voice and the right hand (V + RH), the right hand and the left hand (RH + LH), and the left hand and the right foot (LH + RF). Each effector combination was performed with and against the bodily hierarchy. Going against the bodily hierarchy increased tapping errors significantly and also increased activity in key brain areas functionally associated with top-down sensorimotor control and bottom-up feedback processing, such as the cerebellum and the supplementary motor area (SMA). Conversely, going with the bodily hierarchy engaged areas functionally associated with the default mode network and regions involved in emotion processing. Theories of general brain function that hold prediction as a key principle propose that action and perception are governed by the brain's attempt to minimise prediction error at different levels in the brain. Following this viewpoint, our results indicate that going against the hierarchy induces stronger prediction errors, while going with the hierarchy allows for a higher degree of automatization. Our results also support the notion of a bodily hierarchy in motor control that prioritizes certain conductive and supportive tapping roles in specific effector combinations.
Affiliation(s)
- Signe H Mårup
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark.
- Boris A Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark.
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark.
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark.
48
Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Perception of temporal synchrony not a prerequisite for multisensory integration. Sci Rep 2024; 14:4982. [PMID: 38424118 PMCID: PMC10904801 DOI: 10.1038/s41598-024-55572-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Accepted: 02/25/2024] [Indexed: 03/02/2024] Open
Abstract
Temporal alignment is often viewed as the most essential cue the brain can use to integrate information from across sensory modalities. However, the importance of conscious perception of synchrony to multisensory integration is a controversial topic. Conversely, the influence of cross-modal incongruence of higher level stimulus features such as phonetics on temporal processing is poorly understood. To explore the nuances of this relationship between temporal processing and multisensory integration, we presented 101 participants (ranging from 19 to 73 years of age) with stimuli designed to elicit the McGurk/MacDonald illusion (either matched or mismatched pairs of phonemes and visemes) with varying degrees of stimulus onset asynchrony between the visual and auditory streams. We asked them to indicate which syllable they perceived and whether the video and audio were synchronized on each trial. We found that participants often experienced the illusion despite not perceiving the stimuli as synchronous, and the same phonetic incongruence that produced the illusion also led to significant interference in simultaneity judgments. These findings challenge the longstanding assumption that perception of synchrony is a prerequisite to multisensory integration, support a more flexible view of multisensory integration, and suggest a complex, reciprocal relationship between temporal and multisensory processing.
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, The Netherlands and Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, The Netherlands and Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Hilde M Geurts
- Brain and Cognition, Department of Psychology, Dutch Autism and ADHD Research Center (d'Arc), Universiteit van Amsterdam, Amsterdam, The Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK.
- India Autism Center, Kolkata, India.
- Department of Psychology, Ashoka University, Sonipat, India.
- Erik Van der Burg
- Brain and Cognition, Department of Psychology, Dutch Autism and ADHD Research Center (d'Arc), Universiteit van Amsterdam, Amsterdam, The Netherlands
49
Marjieh R, Harrison PMC, Lee H, Deligiannaki F, Jacoby N. Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nat Commun 2024; 15:1482. [PMID: 38369535 DOI: 10.1038/s41467-024-45812-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 12/11/2023] [Indexed: 02/20/2024] Open
Abstract
The phenomenon of musical consonance is an essential feature of diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems, ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
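The roughness mechanism mentioned in this abstract is conventionally modeled with a Plomp-Levelt-style curve over the frequency separation of interacting partials. The sketch below uses Sethares' (1993) widely cited parameterization of that curve for a single pure-tone pair; it is illustrative only and is not the model fitted in the paper, which combines harmonicity and beat-rate terms estimated from the behavioral data.

```python
import math

# Sethares' parameterization of the Plomp-Levelt roughness curve.
A, B = 3.5, 5.75                      # curve shape constants
D_STAR, S1, S2 = 0.24, 0.0207, 18.96  # critical-bandwidth scaling constants

def roughness(f1: float, f2: float, a1: float = 1.0, a2: float = 1.0) -> float:
    """Roughness contribution of two partials with frequencies f1, f2 (Hz)
    and amplitudes a1, a2; zero at unison, peaking at a fraction of the
    critical bandwidth around the lower frequency."""
    s = D_STAR / (S1 * min(f1, f2) + S2)
    x = s * abs(f2 - f1)
    return a1 * a2 * (math.exp(-A * x) - math.exp(-B * x))

# A semitone (440 vs 466 Hz) is far rougher than a fifth (440 vs 660 Hz):
print(roughness(440, 466) > roughness(440, 660))  # → True
```

Summing this pairwise term over all partials of two complex tones is what makes the model timbre-sensitive: stretching or compressing the partials moves the roughness minima, which is the lever the timbral manipulations in the study exploit.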
Affiliation(s)
- Raja Marjieh
- Department of Psychology, Princeton University, Princeton, NJ, USA.
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Peter M C Harrison
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Centre for Music and Science, University of Cambridge, Cambridge, UK.
- Harin Lee
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Fotini Deligiannaki
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- German Aerospace Center (DLR), Institute for AI Safety and Security, Bonn, Germany
- Nori Jacoby
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
50
Lin H, Liu P, Sun Y, Yu X, Qian J, Chi X, Hong Q. [Association between auditory processing and problem behaviors in preschool children: the mediating role of executive function]. Zhongguo Dang Dai Er Ke Za Zhi 2024; 26:174-180. [PMID: 38436316 PMCID: PMC10921876 DOI: 10.7499/j.issn.1008-8830.2309067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Accepted: 12/25/2023] [Indexed: 03/05/2024]
Abstract
OBJECTIVES To investigate the association between auditory processing and problem behaviors in preschool children, as well as the mediating role of executive function. METHODS A total of 2,342 preschool children were recruited from 7 kindergartens in Nanjing, China from June to August 2021. They were evaluated using the Preschool Auditory Processing Assessment Scale, the Conners Parent Symptom Questionnaire, and the Behavior Rating Inventory of Executive Function-Preschool version. Scores and abnormality rates for auditory processing, problem behaviors, and executive function were compared across children with different demographic characteristics. Factors influencing the total scores of auditory processing, problem behaviors, and executive function were evaluated using multiple linear regression analysis, and whether executive function mediated the association between auditory processing and problem behaviors was examined. RESULTS Sex and grade were the main influencing factors for the total score of auditory processing (P<0.05), and sex, grade, parental education level, and family economic status were the main influencing factors for the total scores of problem behaviors and executive function (P<0.05). The auditory processing score (rs=0.458, P<0.05) and the problem behavior score (rs=0.185, P<0.05) were significantly positively correlated with the executive function score, and the auditory processing score was significantly positively correlated with the problem behavior score (rs=0.423, P<0.05). Executive function played a partial mediating role between auditory processing and problem behaviors, with the mediating effect accounting for 33.44% of the total effect. CONCLUSIONS Auditory processing can directly affect problem behaviors in preschool children and can indirectly affect them through executive function.
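The "proportion of total effect" statistic reported in this abstract (33.44%) follows the standard decomposition of a simple mediation model: the indirect effect is the product of the X→M and M→Y path coefficients, divided by the total effect (direct plus indirect). A minimal sketch with hypothetical path coefficients, not the study's fitted values:

```python
def proportion_mediated(a: float, b: float, c_prime: float) -> float:
    """Share of the total effect carried by the mediator in a simple
    X -> M -> Y model: indirect effect (a*b) over total effect (c' + a*b)."""
    indirect = a * b
    return indirect / (c_prime + indirect)

# Hypothetical paths: a = 0.5 (X -> M), b = 0.4 (M -> Y), direct c' = 0.8
print(proportion_mediated(0.5, 0.4, 0.8))  # → 0.2
```

In the study's terms, X is auditory processing, M is executive function, and Y is problem behaviors; the reported 33.44% means roughly a third of the auditory-processing effect on problem behaviors flows through executive function.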