101. Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023; 146:4809-4825. PMID: 37503725; PMCID: PMC10690027; DOI: 10.1093/brain/awad255.
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus-as the prime example of auditory phantom perception-we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing based on adaptive stochastic resonance. The neural noise increase can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) assimilates a prior prediction (brain's expectations) and likelihood (bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
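The Gaussian form of the Bayesian update described above can be sketched in a few lines. This is an illustrative toy example, not code or parameter values from the paper: the posterior mean is a precision-weighted average of the prior and likelihood means, so a likelihood with a higher mean and lower variance (enhanced precision) pulls the percept away from the "silence" prior, mimicking how amplified intrinsic noise could be read as a phantom sound.

```python
def gaussian_posterior(prior_mean, prior_var, like_mean, like_var):
    """Posterior of two Gaussians: a precision-weighted average of the means."""
    prior_prec = 1.0 / prior_var
    like_prec = 1.0 / like_var
    post_var = 1.0 / (prior_prec + like_prec)
    post_mean = post_var * (prior_prec * prior_mean + like_prec * like_mean)
    return post_mean, post_var

# Baseline: the prior expects silence (mean 0); the bottom-up signal is
# weak and imprecise, so the posterior percept stays near zero.
baseline_mean, _ = gaussian_posterior(0.0, 1.0, 0.2, 4.0)

# After noise amplification: the likelihood mean rises and its variance
# shrinks (enhanced precision), so sensory evidence dominates the percept.
tinnitus_mean, _ = gaussian_posterior(0.0, 1.0, 1.0, 0.25)

print(baseline_mean)  # stays close to the "silence" prior
print(tinnitus_mean)  # shifts strongly toward a phantom percept
```

All numbers here are hypothetical; the point is only that raising the likelihood's mean and precision shifts the posterior toward "sound present" even with an unchanged prior.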

102. Keller PE. Integrating theory and models of musical group interaction. Trends Cogn Sci 2023; 27:1105-1106. PMID: 37739920; DOI: 10.1016/j.tics.2023.07.008.

103. Petrides M. On the evolution of polysensory superior temporal sulcus and middle temporal gyrus: A key component of the semantic system in the human brain. J Comp Neurol 2023; 531:1987-1995. PMID: 37434287; DOI: 10.1002/cne.25521.
Abstract
The primary auditory cortex and other early auditory cortical areas lie on Heschl's gyrus within the Sylvian fissure. On the adjacent lateral surface of the superior temporal gyrus, the cortex processes higher order auditory information leading to auditory perception. On the ventral surface of the temporal lobe in the primate brain, there are areas that process higher order visual information leading to visual perception. These sensory-specific auditory and visual processing regions are separated by areas that integrate multisensory information within the deep superior temporal sulcus in both the macaque monkey and human brains. In the human brain, the multisensory integration cortex expands and forms the adjacent middle temporal gyrus. The expansion of this multisensory region in the language-dominant hemisphere of the human brain is critical for the emergence of semantic processing, namely, the processing of conceptual information that is not sensory specific but rather relies on multisensory integration.

104. Yaman H, Yılmaz O, Hanoğlu L, Bayazıt Y. fNIRS-based evaluation of the impact of SARS-CoV-2 infection on central auditory processing. Brain Behav 2023; 13:e3303. PMID: 37908040; PMCID: PMC10726898; DOI: 10.1002/brb3.3303.
Abstract
OBJECTIVES Coronavirus disease 2019, caused by SARS-CoV-2 infection, has been associated with neurological and neuropsychiatric illnesses as well as auditory system problems. In this study, we aimed to evaluate the impact of SARS-CoV-2 infection on the central auditory system by assessing hemodynamic activation changes using functional near-infrared spectroscopy (fNIRS). METHODS Three participants who had had SARS-CoV-2 infection (study group) and four participants who had not (control group) were included in the study. During an auditory oddball task in which tonal stimuli at two different frequencies were presented at 80 dB HL, participants were asked to attend to the rare tonal stimulus and mentally count these target stimuli throughout the task. Oxygenated hemodynamic response functions were evaluated with fNIRS during the task. RESULTS Significantly increased oxygenated hemodynamic responses were observed in both groups during the task (p < .05), and these were significantly higher in the study group (p < .05). Significantly more HbO activation was observed in the vmPFC, superior temporal gyrus, and medial temporal gyrus in the study group compared to controls (p < .05). Significantly higher hemodynamic activation was observed in the right hemisphere in both groups, and this too was greater in the study group (p < .05). CONCLUSION SARS-CoV-2 infection may affect central auditory processing or auditory attention, as reflected in changes in oxyhemoglobin levels in frontal and temporal brain regions. SARS-CoV-2 infection appears to be associated with an additional load on neural activity and with difficulties focusing auditory attention, following speech, and hearing in noise, as well as increased effort to perceive auditory cues.

105. Sendesen İ, Sendesen E, Yücel E. Evaluation of musical emotion perception and language development in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2023; 175:111753. PMID: 37839291; DOI: 10.1016/j.ijporl.2023.111753.
Abstract
OBJECTIVES While the primary purpose of cochlear implant (CI) fitting is to improve individuals' receptive and expressive skills, musical emotion perception (MEP) is generally ignored. This study assesses the MEP and language skills (LS) of children using CI. METHODS 26 CI users and 26 matched healthy controls between the ages of 6 and 9 were included in the study. The Test of Language Development (TOLD) was applied to evaluate the LS of the participants, and the Montreal Emotion Identification Test (MEI) was applied to evaluate the MEP. RESULTS MEI test scores and all subtests of TOLD were statistically significantly lower in the CI group. Also, there was a statistically significant and moderate correlation between the listening subtest of TOLD and the MEI test. CONCLUSIONS MEP and language skills are poor in children with CI. Although language skills are primarily targeted in CI performance, improving MEP should also be included in rehabilitation programs. The relationship between music and the TOLD's listening subtest may provide evidence that listening skills can be improved by paying attention to the MEP, which is frequently ignored in rehabilitation programs.

106. Schmitter CV, Kufer K, Steinsträter O, Sommer J, Kircher T, Straube B. Neural correlates of temporal recalibration to delayed auditory feedback of active and passive movements. Hum Brain Mapp 2023; 44:6227-6244. PMID: 37818950; PMCID: PMC10619381; DOI: 10.1002/hbm.26508.
Abstract
When we perform an action, its sensory outcomes usually follow shortly after. This characteristic temporal relationship aids in distinguishing self- from externally generated sensory input. To preserve this ability under dynamically changing environmental conditions, our expectation of the timing between action and outcome must be able to recalibrate, for example, when the outcome is consistently delayed. Until now, it remains unclear whether this process, known as sensorimotor temporal recalibration, can be specifically attributed to recalibration of sensorimotor (action-outcome) predictions, or whether it may be partly due to the recalibration of expectations about the intersensory (e.g., audio-tactile) timing. Therefore, we investigated the behavioral and neural correlates of temporal recalibration and differences in sensorimotor and intersensory contexts. During fMRI, subjects were exposed to delayed or undelayed tones elicited by actively or passively generated button presses. While recalibration of the expected intersensory timing (i.e., between the tactile sensation during the button movement and the tones) can be expected to occur during both active and passive movements, recalibration of sensorimotor predictions should be limited to active movement conditions. Effects of this procedure on auditory temporal perception and the modality-transfer to visual perception were tested in a delay detection task. Across both contexts, we found recalibration to be associated with activations in hippocampus and cerebellum. Context-dependent differences emerged in terms of stronger behavioral recalibration effects in sensorimotor conditions and were captured by differential activation pattern in frontal cortices, cerebellum, and sensory processing regions. These findings highlight the role of the hippocampus in encoding and retrieving newly acquired temporal stimulus associations during temporal recalibration. 
Furthermore, recalibration-related activations in the cerebellum may reflect the retention of multiple representations of temporal stimulus associations across both contexts. Finally, we showed that sensorimotor predictions modulate recalibration-related processes in frontal, cerebellar, and sensory regions, which potentially account for the perceptual advantage of sensorimotor versus intersensory temporal recalibration.

107. Timmers R, Tzanaki P, Christensen J. Coordinating actions as active agents in a dynamic musical environment: Comment on "Musical engagement as a duet of tight synchrony and loose interpretability" by Tal-Chen Rabinowitch. Phys Life Rev 2023; 47:104-107. PMID: 37812985; DOI: 10.1016/j.plrev.2023.09.020.

108. Xu Z, Xu Q. Students' Psychological State, Creative Development, and Music Appreciation: The Influence of Different Musical Act Modes (Exemplified by a Video Clip, an Audio Recording, and a Video Concert). J Psycholinguist Res 2023; 52:3001-3017. PMID: 37962821; DOI: 10.1007/s10936-023-10035-8.
Abstract
This paper aims to study how different musical act modes influence students' psychological state, creative development, and music appreciation. In particular, the research focuses on concert videos, video clips, and audio recordings. Based on the Likert scale, the authors determined that video clips had the strongest influence on students' learning process, since they combined visual and sound effects; video concerts were less influential. Concerts are mainly staged actions with frequent use of pre-recorded music, which affects the accuracy of singing techniques. The authors concluded that the most effective approach is systematic learning that uses the effects of colors and sounds together with a preliminary analysis of musical compositions. The results showed that most students substantially improved their knowledge (87%, with an average score of 0.92) and that the elements of a musical act (rhythm, color scheme, text, and performance) influenced their development. The practical significance of the paper lies in the use of learning approaches that employ colors and sound effects with an emphasis on the development of particular elements. Future work will determine how effectively the elements of a musical act influence the psychological state by comparing music genres.

109. Elhakeem ES, Mustafa RMAM, Talaat MAM, Radwan AMA, Eldeeb M. The relation between long latency cortical auditory evoked potentials and stuttering severity in stuttering school-age children. Int J Pediatr Otorhinolaryngol 2023; 175:111766. PMID: 37875046; DOI: 10.1016/j.ijporl.2023.111766.
Abstract
BACKGROUND Disturbances in auditory processing and feedback have been suggested to play a role in the pathogenesis of developmental stuttering. Long latency cortical auditory evoked potentials in response to non-linguistic and linguistic stimuli can be used to investigate these disturbances. Differences between patients with developmental stuttering and controls have been reported; however, there is no solid evidence for these differences to date. OBJECTIVE This study aims to determine whether there is a statistically significant difference in the P1-N1-P2 components of long latency cortical auditory evoked potentials between stuttering school-aged children and non-stuttering children. In addition, the study aims to investigate the relationship between these potentials and objective quantitative measures of stuttering. METHOD The study included two groups, patients and controls, consisting of 40 subjects aged 6-12 years. For the cases group, the severity of stuttering symptoms and the P1-N1-P2 responses to a non-linguistic stimulus were evaluated; the P1-N1-P2 responses of the matched control group were evaluated as well. RESULTS The P1-N1 responses were similar in the two groups, while the P2 response was shorter in the patient group, although the difference from the control group was not statistically significant. N1 latency showed the only statistically significant correlation with the percentage of repetitions, prolongations, and blocks. Female cases had shorter latencies than male cases, but the difference was not statistically significant. CONCLUSION In contrast to previous findings, the study revealed no statistically significant P1-N1 difference and only a non-significant reduction of the P2 response to a non-linguistic stimulus in children who stutter, providing no clear evidence of a basic auditory processing deficit. The study did reveal a significant correlation between N1 latency and the proportion of repetition symptoms.

110. von der Linde M, Herbster C, Dobel C, Festag S, Thielsch MT. Creating safe environments: optimal acoustic alarming of laypeople in fire prevention. Ergonomics 2023; 66:2193-2211. PMID: 36927322; DOI: 10.1080/00140139.2023.2191915.
Abstract
Hazards like fires occur regularly and can cost people's lives. Optimal auditory alarm signals enable laypeople to recognise dangers and to protect themselves. Existing fire alarm sound research focuses on alarm sounds and voice alerts presented singularly. We explored a combination of both and aimed to identify alarm signals that work optimally in everyday life. Thus, we conducted two online experiments: In Study 1 (N = 379), we tested eight alarm sounds regarding their typicality, their familiarity, their arousal, their valence, and their dominance. Siren-like alarm sounds were the most effective. In Study 2 (N = 206), we combined the four most effective alarm sounds with a voice alert. The voice alert reinforced ambiguity reduction, action motivation, and action intention. Hence, we suggest using alarm sounds with siren-like patterns. They should be combined with a voice alert to foster a quick and specific (target task-oriented) reaction.
Practitioner summary: Warning laypeople is of great importance in time-critical hazards. In two remote testing studies (NTotal = 585), auditory alarm sounds with siren-like patterns resulted in the most distinct and emotional perception. Combining the alarm sound with a voice alert adds meaning to the alarm and fosters action intention.
Abbreviations: DIN: Deutsches Institut für Normung [German Institute for Standardization]; ISO: International Organization for Standardization; Mixed MANOVA: mixed measures multivariate analysis of variance; rmMANOVA: repeated measures multivariate analysis of variance.

111. Abdulbaki H, Mo J, Limb CJ, Jiam NT. The Impact of Musical Rehabilitation on Complex Sound Perception in Cochlear Implant Users: A Systematic Review. Otol Neurotol 2023; 44:965-977. PMID: 37758325; DOI: 10.1097/mao.0000000000004025.
Abstract
OBJECTIVE Musical rehabilitation has been used in clinical and nonclinical contexts to improve postimplantation auditory processing in implanted individuals. This systematic review aimed to evaluate the efficacy of music rehabilitation in controlled experimental and quasi-experimental studies on cochlear implant (CI) user speech and music perception. DATABASES REVIEWED PubMed/MEDLINE, EMBASE, Web of Science, PsycARTICLES, and PsycINFO databases through July 2022. METHODS Controlled experimental trials and prospective studies were included if they compared pretest and posttest data and excluded hearing aid-only users. Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were then used to extract data from 11 included studies with a total of 206 pediatric and adult participants. Interventions included group music therapy, melodic contour identification training, auditory-motor instruction, or structured digital music training. Studies used heterogeneous outcome measures evaluating speech and music perception. Risk of bias was assessed using the National Heart, Lung, and Blood Institute Quality Assessment Tool. RESULTS A total of 735 studies were screened, and 11 met the inclusion criteria. Six trials reported both speech and music outcomes, whereas five reported only music perception outcomes after the intervention relative to control. For music perception outcomes, significant findings included improvements in melodic contour identification (five studies, p < 0.05), timbre recognition (three studies, p < 0.05), and song appraisal (three studies, p < 0.05) in their respective trials. For speech prosody outcomes, only vocal emotion identification demonstrated significant improvements (two studies, p < 0.05). CONCLUSION Music rehabilitation improves performance on multiple measures of music perception, as well as tone-based characteristics of speech (i.e., emotional prosody). 
This suggests that rehabilitation may facilitate improvements in the discrimination of spectrally complex signals.

112. Fram NR, Berger J. Syncopation as Probabilistic Expectation: Conceptual, Computational, and Experimental Evidence. Cogn Sci 2023; 47:e13390. PMID: 38043104; DOI: 10.1111/cogs.13390.
Abstract
Definitions of syncopation share two characteristics: the presence of a meter or analogous hierarchical rhythmic structure and a displacement or contradiction of that structure. These attributes are translated in terms of a Bayesian theory of syncopation, where the syncopation of a rhythm is inferred based on a hierarchical structure that is, in turn, learned from the ongoing musical stimulus. Several experiments tested its simplest possible implementation, with equally weighted priors associated with different meters and independence of auditory events, which can be decomposed into two terms representing note density and deviation from a metric hierarchy. A computational simulation demonstrated that extant measures of syncopation fall into two distinct factors analogous to the terms in the simple Bayesian model. Next, a series of behavioral experiments found that perceived syncopation is significantly related to both terms, offering support for the general Bayesian construction of syncopation. However, we also found that the prior expectations associated with different metric structures are not equal across meters and that there is an interaction between density and hierarchical deviation, implying that auditory events are not independent from each other. Together, these findings provide evidence that syncopation is a manifestation of a form of temporal expectation that can be directly represented in Bayesian terms and offer a complementary, feature-driven approach to recent Bayesian models of temporal prediction.
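The two-term decomposition described above can be illustrated with a toy rhythm encoding. Everything below — the metric weight profile, the scoring of the two terms, and the example patterns — is a hedged sketch of the idea, not the authors' model:

```python
# A conventional 4/4 metric weight profile on a sixteenth-note grid:
# higher values mark hierarchically stronger positions.
METRIC_WEIGHT = [4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def syncopation_terms(onsets):
    """onsets: list of 0/1 over 16 grid positions.

    Returns (density, deviation): note density, and the average shortfall
    of metric weight at onset positions relative to the strongest position,
    so onsets on weak positions raise the deviation term.
    """
    density = sum(onsets) / len(onsets)
    hit = [METRIC_WEIGHT[i] for i, x in enumerate(onsets) if x]
    deviation = sum(max(METRIC_WEIGHT) - w for w in hit) / max(1, len(hit))
    return density, deviation

on_beat  = [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]  # onsets on strong beats
off_beat = [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]  # onsets shifted to weak positions
print(syncopation_terms(on_beat))   # same density, low hierarchical deviation
print(syncopation_terms(off_beat))  # same density, higher deviation
```

The two patterns have identical note density, so only the deviation-from-hierarchy term distinguishes them — the separation the abstract's computational simulation reports between the two factors.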

113. Shen L, Lu X, Wang Y, Jiang Y. Audiovisual correspondence facilitates the visual search for biological motion. Psychon Bull Rev 2023; 30:2272-2281. PMID: 37231177; PMCID: PMC10728268; DOI: 10.3758/s13423-023-02308-z.
Abstract
Hearing synchronous sounds may facilitate the visual search for the concurrently changed visual targets. Evidence for this audiovisual attentional facilitation effect mainly comes from studies using artificial stimuli with relatively simple temporal dynamics, indicating a stimulus-driven mechanism whereby synchronous audiovisual cues create a salient object to capture attention. Here, we investigated the crossmodal attentional facilitation effect on biological motion (BM), a natural, biologically significant stimulus with complex and unique dynamic profiles. We found that listening to temporally congruent sounds, compared with incongruent sounds, enhanced the visual search for BM targets. More intriguingly, such a facilitation effect requires the presence of distinctive local motion cues (especially the accelerations in feet movement) independent of the global BM configuration, suggesting a crossmodal mechanism triggered by specific biological features to enhance the salience of BM signals. These findings provide novel insights into how audiovisual integration boosts attention to biologically relevant motion stimuli and extend the function of a proposed life detection system driven by local kinematics of BM to multisensory life motion perception.

114. Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023; 47:e13389. PMID: 38038624; DOI: 10.1111/cogs.13389.
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events, and, computationally, such musical interpretation represents an analogous combinatorial task to syntactic processing in language. While this perspective has been primarily addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from the dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while listening to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was found to be a significant predictor of the observed displacement between the reported and the objective location of the flashes. Overall, this study presents a theoretical approach and a first empirical proof-of-concept for modeling the cognitive process resulting in such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. 
Results from the present small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.

115. Baykan C, Zhu X, Allenmark F, Shi Z. Influences of temporal order in temporal reproduction. Psychon Bull Rev 2023; 30:2210-2218. PMID: 37291447; PMCID: PMC10728249; DOI: 10.3758/s13423-023-02310-5.
Abstract
Despite the crucial role of complex temporal sequences, such as speech and music, in our everyday lives, our ability to acquire and reproduce these patterns is prone to various contextual biases. In this study, we examined how the temporal order of auditory sequences affects temporal reproduction. Participants were asked to reproduce accelerating, decelerating or random sequences, each consisting of four intervals, by tapping their fingers. Our results showed that the reproduction and the reproduction variability were influenced by the sequential structure and interval orders. The mean reproduced interval was assimilated by the first interval of the sequence, with the lowest mean for decelerating and the highest for accelerating sequences. Additionally, the central tendency bias was affected by the volatility and the last interval of the sequence, resulting in a stronger central tendency in the random and decelerating sequences than the accelerating sequence. Using Bayesian integration between the ensemble mean of the sequence and individual durations and considering the perceptual uncertainty associated with the sequential structure and position, we were able to accurately predict the behavioral results. The findings highlight the critical role of the temporal order of a sequence in temporal pattern reproduction, with the first interval exerting greater influence on mean reproduction and the volatility and the last interval contributing to the perceptual uncertainty of individual intervals and the central tendency bias.
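The Bayesian integration the authors describe — each interval pulled toward the ensemble mean in proportion to perceptual uncertainty — can be sketched as a precision-weighted average. The function and all parameter values below are illustrative assumptions, not the fitted model from the study:

```python
def reproduce(intervals_ms, sensory_var, ensemble_var):
    """Predict reproduced intervals as a precision-weighted average of each
    physical interval and the sequence's ensemble mean (central tendency)."""
    mean = sum(intervals_ms) / len(intervals_ms)
    # Weight on the sensory evidence: high when the sensory estimate is
    # precise, low when it is uncertain (e.g. in a volatile sequence).
    w = (1 / sensory_var) / (1 / sensory_var + 1 / ensemble_var)
    return [w * d + (1 - w) * mean for d in intervals_ms]

decelerating = [400, 500, 600, 700]  # hypothetical intervals, mean 550 ms
# Greater sensory uncertainty yields a stronger pull toward the mean:
precise = reproduce(decelerating, sensory_var=1e2, ensemble_var=1e4)
noisy   = reproduce(decelerating, sensory_var=1e4, ensemble_var=1e4)
print(precise)  # near-veridical reproduction
print(noisy)    # compressed toward the 550 ms ensemble mean
```

Under this sketch, a sequence whose structure makes individual intervals harder to estimate (higher sensory variance) produces stronger regression to the mean — the pattern the abstract reports for random and decelerating sequences.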

116. Bingham MA, Cummins ML, Tong A, Purcell P, Sangari A, Sood A, Schlesinger JJ. Effects of altering harmonic structure on the recognition of simulated auditory arterial pressure alarms. Br J Anaesth 2023; 131:e178-e180. PMID: 37758624; DOI: 10.1016/j.bja.2023.08.037.

117. O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. PMID: 37992516; DOI: 10.1016/j.heares.2023.108918.
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.

118. Alluri V, Toiviainen P. The naturalistic paradigm: An approach to studying individual variability in neural underpinnings of music perception. Ann N Y Acad Sci 2023; 1530:18-22. PMID: 37847675; DOI: 10.1111/nyas.15075.
Abstract
Music listening is a dynamic process that entails complex interactions between sensory, cognitive, and emotional processes. The naturalistic paradigm provides a means to investigate these processes in an ecologically valid manner by allowing experimental settings that mimic real-life musical experiences. In this paper, we highlight the importance of the naturalistic paradigm in studying dynamic music processing and discuss how it allows for investigating both the segregation and integration of brain processes using model-based and model-free methods. We further suggest that studying individual difference-modulated music processing in this paradigm can provide insights into the mechanisms of brain plasticity, which can have implications for the development of interventions and therapies in a personalized way. Finally, despite the challenges that the naturalistic paradigm poses, we end with a discussion on future prospects of music and neuroscience research, especially with the continued development and refinement of naturalistic paradigms and the adoption of open science practices.
|
119
|
Vasilkov V, Caswell-Midwinter B, Zhao Y, de Gruttola V, Jung DH, Liberman MC, Maison SF. Evidence of cochlear neural degeneration in normal-hearing subjects with tinnitus. Sci Rep 2023; 13:19870. [PMID: 38036538 PMCID: PMC10689483 DOI: 10.1038/s41598-023-46741-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Accepted: 11/04/2023] [Indexed: 12/02/2023] Open
Abstract
Tinnitus, reduced sound-level tolerance, and difficulties hearing in noisy environments are the most common complaints associated with sensorineural hearing loss in adult populations. This study aims to clarify whether cochlear neural degeneration, estimated in a large pool of participants with normal audiograms, is associated with self-report of tinnitus, using a test battery that probes the different stages of auditory processing, from hair-cell responses to the auditory reflexes of the brainstem. Self-report of chronic tinnitus was significantly associated with (1) reduced cochlear nerve responses, (2) weaker middle-ear muscle reflexes, (3) stronger medial olivocochlear efferent reflexes and (4) hyperactivity in the central auditory pathways. These results support the model of tinnitus generation whereby decreased neural activity from a damaged cochlea can elicit hyperactivity, via decreased inhibition, in the central nervous system.
|
120
|
Del Gatto C, Indraccolo A, Pedale T, Brunetti R. Crossmodal interference on counting performance: Evidence for shared attentional resources. PLoS One 2023; 18:e0294057. [PMID: 37948407 PMCID: PMC10637692 DOI: 10.1371/journal.pone.0294057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 10/16/2023] [Indexed: 11/12/2023] Open
Abstract
During the act of counting, our perceptual system may rely on information coming from different sensory channels. However, when the information coming from different sources is discordant, such as in the case of a de-synchronization between visual stimuli to be counted and irrelevant auditory stimuli, performance in a sequential counting task might deteriorate. Such deterioration may originate from two different mechanisms, both linked to exogenous attention attracted by auditory stimuli. Indeed, exogenous auditory triggers may infiltrate our internal "counter", interfering with the counting process and resulting in an overcount; alternatively, the exogenous auditory triggers may disrupt the internal "counter" by deviating participants' attention from the visual stimuli, resulting in an undercount. We tested these hypotheses by asking participants to count visual discs sequentially appearing on the screen while listening to task-irrelevant sounds, in systematically varied conditions: visual stimuli could be synchronized or de-synchronized with sounds; they could feature regular or irregular pacing; and their presentation speed could be fast (approx. 3/sec), moderate (approx. 2/sec), or slow (approx. 1.5/sec). Our results support the second hypothesis, since participants tended to undercount visual stimuli in all harder conditions (de-synchronized, irregular, fast sequences). We discuss these results in detail, adding novel elements to the study of crossmodal interference.
|
121
|
Brannick S, Vibell JF. Motion aftereffects in vision, audition, and touch, and their crossmodal interactions. Neuropsychologia 2023; 190:108696. [PMID: 37793544 DOI: 10.1016/j.neuropsychologia.2023.108696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 09/26/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023]
|
122
|
Van Hedger SC, Bongiovanni NR, Heald SLM, Nusbaum HC. Absolute pitch judgments of familiar melodies generalize across timbre and octave. Mem Cognit 2023; 51:1898-1910. [PMID: 37165298 DOI: 10.3758/s13421-023-01429-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/24/2023] [Indexed: 05/12/2023]
Abstract
Most listeners can determine when a familiar recording of music has been shifted in musical key by as little as one semitone (e.g., from B to C major). These findings appear to suggest that absolute pitch memory is widespread in the general population. However, the use of familiar recordings makes it unclear whether these findings genuinely reflect absolute melody-key associations for at least two reasons. First, listeners may be able to use spectral cues from the familiar instrumentation of the recordings to determine when a familiar recording has been shifted in pitch. Second, listeners may be able to rely solely on pitch height cues (e.g., relying on a feeling that an incorrect recording sounds "too high" or "too low"). Neither of these strategies would require an understanding of pitch chroma or musical key. The present experiments thus assessed whether listeners could make accurate absolute melody-key judgments when listening to novel versions of these melodies, differing from the iconic recording in timbre (Experiment 1) or timbre and octave (Experiment 2). Listeners in both experiments were able to select the correct-key version of the familiar melody at rates that were well above chance. These results fit within a growing body of research supporting the idea that most listeners, regardless of formal musical training, have robust representations of absolute pitch - based on pitch chroma - that generalize to novel listening situations. Implications for theories of auditory pitch memory are discussed.
|
123
|
Simmons JA, Hom KN, Simmons AM. Temporal coherence of harmonic frequencies affects echo detection in the big brown bat, Eptesicus fuscus. J Acoust Soc Am 2023; 154:3321-3327. [PMID: 37983295 DOI: 10.1121/10.0022444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Accepted: 11/01/2023] [Indexed: 11/22/2023]
Abstract
Echolocating big brown bats (Eptesicus fuscus) broadcast frequency modulated (FM) ultrasonic pulses containing two prominent harmonic sweeps (FM1, FM2). Both harmonics typically return as echoes at the same absolute time delay following the broadcast, making them coherent. Electronically splitting FM1 and FM2 allows their time delays to be controlled separately, making them non-coherent. Earlier work shows that big brown bats discriminate coherent from split harmonic, non-coherent echoes and that disruptions of harmonic coherence produce blurry acoustic images. A psychophysical experiment on two trained big brown bats tested the hypothesis that detection thresholds for split harmonic, non-coherent echoes are higher than those for coherent echoes. Thresholds of the two bats for detecting 1-glint echoes with coherent harmonics were around 35 and 36 dB sound pressure level, respectively, while thresholds for split harmonic echoes were about 10 dB higher. When the delay of FM2 in split harmonic echoes was shortened by 75 μs to offset neural amplitude-latency trading and restore coherence in the auditory representation, thresholds decreased to those estimated for coherent echoes. These results show that echo detection is affected by loss of harmonic coherence, consistent with the proposed broader role of coherence across frequencies in auditory perception.
|
124
|
Bolzenius JD, Goodkin K. Variability in the relationships between auditory processing and neurocognitive status among older adults with HIV. AIDS 2023; 37:2091-2093. [PMID: 37755426 DOI: 10.1097/qad.0000000000003668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/28/2023]
|
125
|
Guedes D, Prada M, Garrido MV, Caeiro I, Simões C, Lamy E. Sensitive to music? Examining the crossmodal effect of audition on sweet taste sensitivity. Food Res Int 2023; 173:113256. [PMID: 37803571 DOI: 10.1016/j.foodres.2023.113256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 07/06/2023] [Accepted: 07/07/2023] [Indexed: 10/08/2023]
Abstract
Previous research has shown that music can influence taste perception. While most studies to date have focused on taste intensity ratings, less is known about the influence of musical stimuli on other parameters of taste function. In this within-subjects experiment (N = 73), we tested the effects of three sound conditions (High Sweetness soundtrack - HS; Low Sweetness soundtrack - LS; and Silence - S) on sweet taste sensitivity, namely, detection and recognition. Each participant tasted nine samples of sucrose solutions (from 0 g/L to 20 g/L) under each of the three sound conditions in counterbalanced order. We assessed the lowest concentrations at which participants were able to detect (detection threshold) and correctly identify (recognition threshold) a taste sensation. Additionally, the intensity and hedonic ratings of samples above the recognition threshold (7.20 g/L) were analyzed. Affective variations (valence and arousal) in response to the sound conditions were also assessed. Although music did not lead to significant differences in mean detection and recognition thresholds, a larger proportion of sweet taste recognitions was observed at a near-threshold level (2.59 g/L) in the HS condition. The intensity and hedonic ratings of supra-threshold samples were unaffected by the music condition. Significant differences in self-reported mood in response to the sound conditions were also observed. The present study suggests that the influence of music on the sweet taste perception of basic solutions may depend on the parameter under consideration.
|