51
|
Castaldi E, Tinelli F, Filippo G, Bartoli M, Anobile G. Auditory time perception impairment in children with developmental dyscalculia. Res Dev Disabil 2024; 149:104733. [PMID: 38663331] [PMCID: PMC11155440] [DOI: 10.1016/j.ridd.2024.104733]
Abstract
Developmental dyscalculia (DD) is a specific learning disability that prevents children from acquiring adequate numerical and arithmetical competences. We investigated whether the difficulties of children with DD spread beyond the numerical domain and also impact their ability to perceive time. A group of 37 children/adolescents with and without DD was tested with an auditory categorization task measuring time perception thresholds in the sub-second (0.25-1 s) and supra-second (0.75-3 s) ranges. Results showed that auditory time perception was strongly impaired in children with DD at both time scales. The impairment remained even when age, non-verbal reasoning, and gender were regressed out. Overall, our results show that the difficulties of DD can affect magnitudes other than number and contribute to the increasing evidence that frames dyscalculia as a disorder affecting multiple neurocognitive and perceptual systems.
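Time-perception thresholds from a categorization task like this are typically estimated by fitting a psychometric function to the proportion of "long" responses. A minimal sketch with a cumulative Gaussian; the durations and response proportions below are hypothetical, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    # Probability of categorizing duration x as "long"
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical sub-second durations (s) and proportions of "long" responses
durations = np.array([0.25, 0.40, 0.55, 0.70, 0.85, 1.00])
p_long    = np.array([0.02, 0.10, 0.35, 0.70, 0.93, 0.99])

(mu, sigma), _ = curve_fit(cum_gauss, durations, p_long, p0=[0.6, 0.1])
# mu: point of subjective equality; sigma: discrimination threshold
print(f"PSE = {mu:.3f} s, threshold (sigma) = {sigma:.3f} s")
```

A higher fitted `sigma` (a shallower psychometric slope) is what "impaired time perception threshold" means operationally in such tasks.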
|
52
|
Sendesen E, Turkyilmaz D. Investigation of the behavior of tinnitus patients under varying listening conditions with simultaneous electroencephalography and pupillometry. Brain Behav 2024; 14:e3571. [PMID: 38841736] [PMCID: PMC11154813] [DOI: 10.1002/brb3.3571]
Abstract
OBJECTIVE This study aims to control for all hearing thresholds, including extended high frequencies (EHFs), to present stimuli of varying difficulty levels, and to measure electroencephalography (EEG) and pupillometry responses in order to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS Twenty-one chronic tinnitus patients and 26 matched healthy controls with normal pure-tone averages and symmetrical hearing thresholds were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS Pupil dilation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients across all listening conditions (p < .05). Also, there was no statistically significant relationship between the EEG and pupillometry components and THI or MoCA in any listening condition (p > .05). CONCLUSION EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry should be interpreted with caution in autonomic nervous system-related conditions such as tinnitus.
|
53
|
Lialiou M, Grice M, Röhr CT, Schumacher PB. Auditory Processing of Intonational Rises and Falls in German: Rises Are Special in Attention Orienting. J Cogn Neurosci 2024; 36:1099-1122. [PMID: 38358004] [DOI: 10.1162/jocn_a_02129]
Abstract
This article investigates the processing of intonational rises and falls when presented unexpectedly in a stream of repetitive auditory stimuli. It examines the neurophysiological correlates (ERPs) of attention to these unexpected stimuli through the use of an oddball paradigm where sequences of repetitive stimuli are occasionally interspersed with a deviant stimulus, allowing for elicitation of an MMN. Whereas previous oddball studies on attention toward unexpected sounds involving pitch rises were conducted on nonlinguistic stimuli, the present study uses as stimuli lexical items in German with naturalistic intonation contours. Results indicate that rising intonation plays a special role in attention orienting at a pre-attentive processing stage, whereas contextual meaning (here a list of items) is essential for activating attentional resources at a conscious processing stage. This is reflected in the activation of distinct brain responses: Rising intonation evokes the largest MMN, whereas falling intonation elicits a less pronounced MMN followed by a P3 (reflecting a conscious processing stage). Subsequently, we also find a complex interplay between the phonological status (i.e., accent/head marking vs. boundary/edge marking) and the direction of pitch change in their contribution to attention orienting: Attention is not oriented necessarily toward a specific position in prosodic structure (head or edge). Rather, we find that the intonation contour itself and the appropriateness of the contour in the linguistic context are the primary cues to two core mechanisms of attention orienting, pre-attentive and conscious orientation respectively, whereas the phonological status of the pitch event plays only a supplementary role.
|
54
|
Coy N, Bendixen A, Grimm S, Roeber U, Schröger E. Conditional deviant repetition in the oddball paradigm modulates processing at the level of P3a but not MMN. Psychophysiology 2024; 61:e14545. [PMID: 38366704] [DOI: 10.1111/psyp.14545]
Abstract
The auditory system has an amazing ability to rapidly encode auditory regularities. Evidence comes from the popular oddball paradigm, in which frequent (standard) sounds are occasionally exchanged for rare deviant sounds, which then elicit signs of prediction error based on their unexpectedness (e.g., MMN and P3a). Here, we examine the widely neglected possibility that deviants are bearers of predictive information themselves. Naive participants listened to sound sequences constructed according to a new, modified version of the oddball paradigm including two types of deviants that followed diametrically opposed rules: one deviant sound occurred mostly in pairs (repetition rule), the other mostly in isolation (non-repetition rule). Due to this manipulation, the sound following a first deviant (either the same deviant or a standard) was either predictable or unpredictable based on the conditional probability associated with the preceding deviant sound. Our behavioral results from an active deviant detection task replicate previous findings that deviant repetition rules (based on conditional probability) can be extracted when behaviorally relevant. Our electrophysiological findings, obtained in a passive listening setting, indicate that conditional probability also translates into differential processing at the level of P3a. However, MMN was confined to global deviants and was not sensitive to conditional probability. This suggests that higher-level processing concerned with stimulus selection and/or evaluation (reflected in P3a), but not lower-level sensory processing (reflected in MMN), considers rarely encountered rules.
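The manipulation can be illustrated with a toy sequence generator: after a first deviant, one deviant type repeats with high conditional probability and the other almost never does. The probabilities below are illustrative assumptions, not the study's values:

```python
import random

def make_sequence(n, p_dev=0.1, p_repeat={"A": 0.9, "B": 0.1}, seed=1):
    """Oddball stream of standards ('S') with two deviant types ('A', 'B').

    After a first deviant, the next sound repeats that deviant with the
    conditional probability in p_repeat; otherwise a standard follows.
    """
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        if rng.random() < p_dev:
            d = rng.choice("AB")
            seq.append(d)
            # Conditional repetition: the first deviant predicts its successor
            seq.append(d if rng.random() < p_repeat[d] else "S")
        else:
            seq.append("S")
    return seq[:n]

seq = make_sequence(2000)
aa = sum(seq[i] == seq[i + 1] == "A" for i in range(len(seq) - 1))
bb = sum(seq[i] == seq[i + 1] == "B" for i in range(len(seq) - 1))
print(f"AA pairs: {aa}, BB pairs: {bb}")  # 'A' mostly paired, 'B' mostly isolated
```

Both deviant types are globally rare (hence both violate the standard's regularity), but only the conditional probability of the second position differs, which is the contrast the P3a tracked.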
|
55
|
Kimura T, Kawashima T. The influence of peripheral information on a proactive process during multitasking. Q J Exp Psychol (Hove) 2024; 77:1352-1362. [PMID: 37542429] [DOI: 10.1177/17470218231195198]
Abstract
The aim of this study was to examine whether peripheral information facilitates proactive processes during multitasking. For this purpose, peripheral information was presented regularly during multitasking and its effects on the performance of a tracking task (main task: reactive process) and a discrimination task (sub-task: proactive process) were examined. Experiment 1 presented peripheral information (white circles) in the same sensory modality (visual) as the information used for multitasking, and the number of circle presentations was manipulated. In Experiment 2, a pure tone (auditory) was presented as peripheral information. We found that, in both experiments, the difficulty of the tracking task influenced discrimination performance: as the difficulty of the tracking task (reactive process) increased, more cognitive resources were consumed by the tracking task, leaving fewer cognitive resources available for the discrimination task (proactive process). In addition, regular presentation of peripheral information facilitated discrimination performance in both experiments. Interestingly, this peripheral information also facilitated tracking performance (reactive process) even when the tracking task was difficult. Moreover, this promoting effect occurred regardless of sensory modality. This study revealed that processing of peripheral information facilitates the proactive process even when more cognitive resources are consumed, and that this facilitating effect does not conflict with multitasking but rather frees up cognitive resources and also facilitates the reactive process. Our results provide evidence of how peripheral information and cognitive resources are used during multitasking.
|
56
|
Wang P, Zhang X, Ai X, Wang S. Modulation of EEG Signals by Visual and Auditory Distractors in Virtual Reality-Based Continuous Performance Tests. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2049-2059. [PMID: 38801679] [DOI: 10.1109/tnsre.2024.3405549]
Abstract
Compared to traditional continuous performance tasks, virtual reality-based continuous performance tests (VR-CPT) offer higher ecological validity. While previous studies have primarily focused on behavioral outcomes in VR-CPT and incorporated various distractors to enhance ecological realism, little attention has been paid to the effects of distractors on EEG. Therefore, our study aimed to investigate the influence of distractors on EEG during VR-CPT. We studied visual and auditory distractors separately, recruiting 68 subjects (M = 20.82, SD = 1.72) and asking each to complete four tasks, categorized into four groups according to the presence or absence of visual and auditory distractors. We conducted paired t-tests on the mean relative power of the five electrodes in the region of interest (ROI) across different frequency bands. Significant differences were found in theta waves between Group 3 (M = 2.49, SD = 2.02) and Group 4 (M = 2.68, SD = 2.39) (p < 0.05); in alpha waves between Group 3 (M = 2.08, SD = 3.73) and Group 4 (M = 3.03, SD = 4.60) (p < 0.001); and in beta waves between Group 1 (M = -4.44, SD = 2.29) and Group 2 (M = -5.03, SD = 2.48) (p < 0.001), as well as between Group 3 (M = -4.48, SD = 2.03) and Group 4 (M = -4.67, SD = 2.23) (p < 0.05). The incorporation of distractors in VR-CPT modulates EEG signals across different frequency bands, with visual distractors attenuating theta band activity, auditory distractors enhancing alpha band activity, and both types of distractors reducing beta oscillations following target stimuli. This insight holds significant promise for the rehabilitation of children and adolescents with attention deficits.
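The analysis pipeline above (mean relative band power over ROI electrodes, then a paired t-test between conditions) can be sketched as follows. The sampling rate, band edges, channel count, and synthetic "alpha-enhanced" condition are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_rel

FS = 250                                   # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_band_power(eeg, band, fs=FS):
    """Mean relative power of one band for a (channels, samples) epoch."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    total = psd[:, (freqs >= 1) & (freqs <= 40)].sum(axis=1)
    lo, hi = band
    part = psd[:, (freqs >= lo) & (freqs <= hi)].sum(axis=1)
    return float(np.mean(part / total))    # averaged over ROI channels

rng = np.random.default_rng(0)
t_axis = np.arange(FS * 4) / FS
alpha_osc = 0.8 * np.sin(2 * np.pi * 10 * t_axis)  # synthetic 10 Hz component

# Hypothetical 5-channel, 4 s epochs for 30 subjects in two conditions;
# condition B adds an alpha oscillation to mimic an alpha-power increase.
cond_a = [relative_band_power(rng.standard_normal((5, FS * 4)),
                              BANDS["alpha"]) for _ in range(30)]
cond_b = [relative_band_power(rng.standard_normal((5, FS * 4)) + alpha_osc,
                              BANDS["alpha"]) for _ in range(30)]
t, p = ttest_rel(cond_a, cond_b)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```

Normalizing band power by the 1-40 Hz total, as sketched here, makes the measure comparable across subjects with different overall signal amplitudes.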
|
57
|
Zeng X, Cai S, Xie L. Attention-guided graph structure learning network for EEG-enabled auditory attention detection. J Neural Eng 2024; 21:036025. [PMID: 38776893] [DOI: 10.1088/1741-2552/ad4f1a]
Abstract
Objective: Decoding auditory attention from brain signals is essential for the development of neuro-steered hearing aids. This study aims to overcome the challenges of extracting discriminative feature representations from electroencephalography (EEG) signals for auditory attention detection (AAD) tasks, particularly focusing on the intrinsic relationships between different EEG channels. Approach: We propose a novel attention-guided graph structure learning network, AGSLnet, which leverages potential relationships between EEG channels to improve AAD performance. Specifically, AGSLnet is designed to dynamically capture latent relationships between channels and construct a graph structure of EEG signals. Main results: We evaluated AGSLnet on two publicly available AAD datasets and demonstrated its superiority and robustness over state-of-the-art models. Visualization of the graph structure trained by AGSLnet supports previous neuroscience findings, enhancing our understanding of the underlying neural mechanisms. Significance: This study presents a novel approach for examining brain functional connections, improving AAD performance in low-latency settings, and supporting the development of neuro-steered hearing aids.
|
58
|
Schiller IS, Breuer C, Aspöck L, Ehret J, Bönsch A, Kuhlen TW, Fels J, Schlittmeier SJ. A lecturer's voice quality and its effect on memory, listening effort, and perception in a VR environment. Sci Rep 2024; 14:12407. [PMID: 38811832] [PMCID: PMC11137055] [DOI: 10.1038/s41598-024-63097-6]
Abstract
Many lecturers develop voice problems, such as hoarseness. Nevertheless, research on how voice quality influences listeners' perception, comprehension, and retention of spoken language is limited to a small number of audio-only experiments. We aimed to address this gap by using audio-visual virtual reality (VR) to investigate the impact of a lecturer's hoarseness on university students' heard text recall, listening effort, and listening impression. Fifty participants were immersed in a virtual seminar room, where they engaged in a Dual-Task Paradigm. They listened to narratives presented by a virtual female professor, who spoke in either a typical or hoarse voice. Simultaneously, participants performed a secondary task. Results revealed significantly prolonged secondary-task response times with the hoarse voice compared to the typical voice, indicating increased listening effort. Subjectively, participants rated the hoarse voice as more annoying, effortful to listen to, and impeding for their cognitive performance. No effect of voice quality was found on heard text recall, suggesting that, while hoarseness may compromise certain aspects of spoken language processing, this might not necessarily result in reduced information retention. In summary, our findings underscore the importance of promoting vocal health among lecturers, which may contribute to enhanced listening conditions in learning spaces.
|
59
|
Carrillo C, Chang A, Armstrong H, Cairney J, McAuley JD, Trainor LJ. Auditory rhythm facilitates perception and action in children at risk for developmental coordination disorder. Sci Rep 2024; 14:12203. [PMID: 38806554] [PMCID: PMC11133375] [DOI: 10.1038/s41598-024-62322-6]
Abstract
Developmental Coordination Disorder (DCD) is a common neurodevelopmental disorder featuring deficits in motor coordination and motor timing among children. Deficits in rhythmic tracking, including perceptually tracking and synchronizing action with auditory rhythms, have been studied in a wide range of motor disorders, providing a foundation for developing rehabilitation programs incorporating auditory rhythms. We tested whether DCD also features these auditory-motor deficits among 7-10 year-old children. In a speech recognition task with no overt motor component, modulating the speech rhythm interfered more with the performance of children at risk for DCD than typically developing (TD) children. A set of auditory-motor tapping tasks further showed that, although children at risk for DCD performed worse than TD children in general, the presence of an auditory rhythmic cue (isochronous metronome or music) facilitated the temporal consistency of tapping. Finally, accuracy in the recognition of rhythmically modulated speech and tapping consistency correlated with performance on the standardized motor assessment. Together, the results show auditory rhythmic regularity benefits auditory perception and auditory-motor coordination in children at risk for DCD. This provides a foundation for future clinical studies to develop evidence-based interventions involving auditory-motor rhythmic coordination for children with DCD.
|
60
|
Nitta J, Kondoh S, Okanoya K, Tachibana RO. Spectral consistency in sound sequence affects perceptual accuracy in discriminating subdivided rhythmic patterns. PLoS One 2024; 19:e0303347. [PMID: 38805449] [PMCID: PMC11132482] [DOI: 10.1371/journal.pone.0303347]
Abstract
Musical compositions are distinguished by their unique rhythmic patterns, determined by subtle differences in how regular beats are subdivided. Precise perception of these subdivisions is essential for discerning nuances in rhythmic patterns. While musical rhythm typically comprises sound elements with a variety of timbres or spectral cues, the impact of such spectral variations on the perception of rhythmic patterns remains unclear. Here, we show that consistency in spectral cues affects perceptual accuracy in discriminating subdivided rhythmic patterns. We conducted online experiments using rhythmic sound sequences consisting of band-passed noise bursts to measure discrimination accuracy. Participants were asked to discriminate between a swing-like rhythm sequence, characterized by a 2:1 interval ratio, and its more or less exaggerated version. This task was also performed under two additional rhythm conditions: inversed-swing rhythm (1:2 ratio) and regular subdivision (1:1 ratio). The center frequency of the band noises was either held constant or alternated between two values. Our results revealed a significant decrease in discrimination accuracy when the center frequency was alternated, irrespective of the rhythm ratio condition. This suggests that rhythm perception is shaped by temporal structure and affected by spectral properties.
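Stimuli of the kind described (band-passed noise bursts subdividing each beat in a 2:1 "swing" ratio, with the centre frequency either held constant or alternating) can be synthesized roughly as below. Burst duration, bandwidth, tempo, and the centre frequencies are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # audio sampling rate (Hz)

def noise_burst(center_hz, dur=0.05, bw_oct=0.5):
    """Band-passed white-noise burst around a centre frequency."""
    lo = center_hz * 2 ** (-bw_oct / 2)
    hi = center_hz * 2 ** (bw_oct / 2)
    sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
    return sosfilt(sos, np.random.default_rng(0).standard_normal(int(FS * dur)))

def swing_sequence(ratio=2.0, beat=0.6, n_beats=4, centers=(1000, 1000)):
    """Each beat is subdivided into long:short = ratio:1 inter-onset intervals.

    Alternating `centers`, e.g. (500, 2000), reproduces the alternating-
    frequency condition; equal values keep the spectrum constant.
    """
    long_ioi = beat * ratio / (ratio + 1)
    short_ioi = beat - long_ioi
    out = np.zeros(int(FS * beat * n_beats) + FS)
    t, i = 0.0, 0
    for _ in range(n_beats):
        for ioi in (long_ioi, short_ioi):
            burst = noise_burst(centers[i % 2])
            start = int(t * FS)
            out[start:start + burst.size] += burst
            t += ioi
            i += 1
    return out[:int(FS * beat * n_beats)]

seq = swing_sequence(centers=(500, 2000))  # alternating-frequency condition
```

Discrimination then compares such a 2:1 sequence against versions with a more or less exaggerated ratio (e.g. `ratio=2.3` vs. `ratio=1.7`), which is where the spectral-consistency effect on accuracy was measured.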
|
61
|
Wang M, Jendrichovsky P, Kanold PO. Auditory discrimination learning differentially modulates neural representation in auditory cortex subregions and inter-areal connectivity. Cell Rep 2024; 43:114172. [PMID: 38703366] [DOI: 10.1016/j.celrep.2024.114172]
Abstract
Changes in sound-evoked responses in the auditory cortex (ACtx) occur during learning, but how learning alters neural responses in different ACtx subregions and changes their interactions is unclear. To address these questions, we developed an automated training and widefield imaging system to longitudinally track the neural activity of all mouse ACtx subregions during a tone discrimination task. We find that responses in primary ACtx are highly informative of learned stimuli and behavioral outcomes throughout training. In contrast, representations of behavioral outcomes in the dorsal posterior auditory field, learned stimuli in the dorsal anterior auditory field, and inter-regional correlations between primary and higher-order areas are enhanced with training. Moreover, ACtx response changes vary between stimuli, and such differences display lag synchronization with the learning rate. These results indicate that learning alters functional connections between ACtx subregions, inducing region-specific modulations by propagating behavioral information from primary to higher-order areas.
|
62
|
Rabelo ECDS, Dassie-Leite AP, Ribeiro VV, Madazio G, Behlau MS. Cepstral Peak Prominence Smoothed (CPPS) and Acoustic Voice Quality Index (AVQI) in healthy and altered children's voices: comparison, relationship with auditory-perceptual judgment, and cut-off points. Codas 2024; 36:e20230047. [PMID: 38808777] [DOI: 10.1590/2317-1782/20242023047pt]
Abstract
PURPOSE To compare the acoustic measurements of Cepstral Peak Prominence Smoothed (CPPS) and the Acoustic Voice Quality Index (AVQI) in children with normal and altered voices, to examine their relationship with auditory-perceptual judgment (APJ), and to establish cut-off points. METHODS Vocal recordings of sustained-vowel and number-counting tasks from 185 children were selected from a database and submitted to acoustic analysis, with extraction of CPPS and AVQI measurements, and to APJ. APJ was performed separately for each task, classified as normal or altered, and for both tasks together to define whether the child would pass or fail a vocal screening. RESULTS Children with altered APJ who failed the screening had lower CPPS values and higher AVQI values than those with normal APJ who passed. APJ of the sustained-vowel task was related to CPPS and AVQI, whereas APJ of the number-counting task was related only to AVQI and the number-counting CPPS. The cut-off points that differentiate children with and without vocal deviation are 14.07 for the vowel CPPS, 7.62 for the number-counting CPPS, and 2.01 for the AVQI. CONCLUSION Children with altered voices have higher AVQI values and lower CPPS values than children with voices within the normal range. The acoustic measurements were related to the auditory-perceptual judgment of vocal quality in the sustained-vowel task, whereas the number-counting task was related only to the AVQI and the number-counting CPPS. All three measures performed similarly in distinguishing non-deviant from dysphonic voices.
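Applied as a screening rule, the reported cut-offs translate directly into a decision function. This is a sketch with our own variable names; the abstract does not specify how the three cut-offs are combined, so the any-measure-deviant (OR) rule below is an assumption:

```python
def fails_vocal_screening(cpps_vowel, cpps_numbers, avqi):
    """Flag a child's voice as deviant using the reported cut-off points.

    Lower CPPS and higher AVQI indicate greater vocal deviation, so a voice
    is flagged when CPPS falls below, or AVQI rises above, its cut-off.
    Combining the three criteria with OR is a hypothetical choice here.
    """
    return cpps_vowel < 14.07 or cpps_numbers < 7.62 or avqi > 2.01

print(fails_vocal_screening(15.0, 8.0, 1.5))   # all within cut-offs
print(fails_vocal_screening(12.3, 8.0, 1.5))   # low vowel CPPS flags the voice
```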
|
63
|
Kojima S, Kanoh S. An auditory brain-computer interface based on selective attention to multiple tone streams. PLoS One 2024; 19:e0303565. [PMID: 38781127] [PMCID: PMC11115270] [DOI: 10.1371/journal.pone.0303565]
Abstract
In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as sequences of different tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear. Subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded from participants at a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five out of eleven subjects, P300 activity was elicited only by the target stimuli included in the attended stream. In a 10-fold cross-validation test, a classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either the P300 was also elicited for non-attended streams or the amplitude of the P300 was small. It was concluded that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected via a single ear without the aid of any visual modality.
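Riemannian-geometry classification of EEG epochs typically means representing each trial by its channel covariance matrix and assigning it to the class whose mean covariance is nearest under a Riemannian metric. A minimal minimum-distance-to-mean sketch on synthetic data, using the log-Euclidean metric for simplicity (the study's exact pipeline and metric are not specified in the abstract):

```python
import numpy as np
from scipy.linalg import logm

def cov_feat(epoch):
    """Channel covariance of a (channels, samples) epoch, mapped to the
    tangent space via the matrix logarithm (log-Euclidean metric)."""
    c = np.cov(epoch)
    c += 1e-6 * np.eye(c.shape[0])          # regularize for numerical stability
    return logm(c).real

def mdm_classify(train_epochs, train_labels, test_epoch):
    """Minimum distance to mean in the log-Euclidean sense."""
    feats = np.array([cov_feat(e) for e in train_epochs])
    labels = np.asarray(train_labels)
    means = {y: feats[labels == y].mean(axis=0) for y in np.unique(labels)}
    f = cov_feat(test_epoch)
    return min(means, key=lambda y: np.linalg.norm(f - means[y]))

rng = np.random.default_rng(2)

def epoch(cls):
    # Two synthetic classes: attention boosts the variance of one channel
    e = rng.standard_normal((4, 256))
    e[cls] *= 3.0
    return e

X = [epoch(c) for c in (0, 1) * 20]
y = [c for c in (0, 1) * 20]
pred = mdm_classify(X, y, epoch(0))
print(f"predicted class: {pred}")
```

Libraries such as pyRiemann implement this family of classifiers with the full affine-invariant metric; the sketch above only conveys the idea.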
|
64
|
Bae AJ, Ferger R, Peña JL. Auditory Competition and Coding of Relative Stimulus Strength across Midbrain Space Maps of Barn Owls. J Neurosci 2024; 44:e2081232024. [PMID: 38664010] [PMCID: PMC11112643] [DOI: 10.1523/jneurosci.2081-23.2024]
Abstract
The natural environment challenges the brain to prioritize the processing of salient stimuli. The barn owl, a sound localization specialist, exhibits a circuit called the midbrain stimulus selection network, dedicated to representing locations of the most salient stimulus in circumstances of concurrent stimuli. Previous competition studies using unimodal (visual) and bimodal (visual and auditory) stimuli have shown that relative strength is encoded in spike response rates. However, open questions remain concerning auditory-auditory competition on coding. To this end, we present diverse auditory competitors (concurrent flat noise and amplitude-modulated noise) and record neural responses of awake barn owls of both sexes in subsequent midbrain space maps, the external nucleus of the inferior colliculus (ICx) and optic tectum (OT). While both ICx and OT exhibit a topographic map of auditory space, OT also integrates visual input and is part of the global-inhibitory midbrain stimulus selection network. Through comparative investigation of these regions, we show that while increasing strength of a competitor sound decreases spike response rates of spatially distant neurons in both regions, relative strength determines spike train synchrony of nearby units only in the OT. Furthermore, changes in synchrony by sound competition in the OT are correlated to gamma range oscillations of local field potentials associated with input from the midbrain stimulus selection network. The results of this investigation suggest that modulations in spiking synchrony between units by gamma oscillations are an emergent coding scheme representing relative strength of concurrent stimuli, which may have relevant implications for downstream readout.
|
65
|
Ishizu K, Nishimoto S, Ueoka Y, Funamizu A. Localized and global representation of prior value, sensory evidence, and choice in male mouse cerebral cortex. Nat Commun 2024; 15:4071. [PMID: 38778078] [PMCID: PMC11111702] [DOI: 10.1038/s41467-024-48338-6]
Abstract
Adaptive behavior requires integrating prior knowledge of action outcomes and sensory evidence for making decisions while maintaining prior knowledge for future actions. As outcome- and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. The sensory inputs and choices were selectively decoded from the auditory cortex irrespective of reward priors and the secondary motor cortex, respectively, suggesting localized computations of task variables are required within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables in different time scales in the cerebral cortex.
|
66
|
Tanveer MA, Skoglund MA, Bernhardsson B, Alickovic E. Deep learning-based auditory attention decoding in listeners with hearing impairment. J Neural Eng 2024; 21:036022. [PMID: 38729132] [DOI: 10.1088/1741-2552/ad49d7]
Abstract
Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area-under-curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, under the inter-trial strategy. Under the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. The models performed well on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. The DCNN models successfully addressed all three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
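The inter-trial versus intra-trial distinction above is, at bottom, a data-splitting policy for classification windows. A minimal sketch of the two policies (a hypothetical helper, not code from the paper) could look like:

```python
import numpy as np

def split_windows(n_trials, windows_per_trial, strategy, test_frac=0.2, seed=0):
    """Assign (trial, window) classification windows to train/test sets.

    inter-trial: held-out windows come from whole trials never seen in training.
    intra-trial: held-out windows are unseen, but other windows from the same
    trials appear in training (risks temporal leakage and inflated scores).
    """
    rng = np.random.default_rng(seed)
    # one (trial, window) index pair per classification window
    idx = [(t, w) for t in range(n_trials) for w in range(windows_per_trial)]
    if strategy == "inter":
        held_out = set(rng.choice(n_trials, int(n_trials * test_frac), replace=False))
        test = [pair for pair in idx if pair[0] in held_out]
    elif strategy == "intra":
        chosen = rng.choice(len(idx), int(len(idx) * test_frac), replace=False)
        test = [idx[i] for i in chosen]
    else:
        raise ValueError(strategy)
    train = [pair for pair in idx if pair not in set(test)]
    return train, test
```

Under the inter-trial policy no test window shares a trial with any training window, which is why it yields the more honest (and typically lower) accuracy estimate.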
|
67
|
Bonetti L, Fernández-Rubio G, Carlomagno F, Dietz M, Pantazis D, Vuust P, Kringelbach ML. Spatiotemporal brain hierarchies of auditory memory recognition and predictive coding. Nat Commun 2024; 15:4313. [PMID: 38773109 PMCID: PMC11109219 DOI: 10.1038/s41467-024-48302-4] [Received: 06/06/2023] [Accepted: 04/25/2024] [Indexed: 05/23/2024]
Abstract
Our brain is constantly extracting, predicting, and recognising key spatiotemporal features of the physical world in order to survive. While neural processing of visuospatial patterns has been extensively studied, the hierarchical brain mechanisms underlying conscious recognition of auditory sequences and the associated prediction errors remain elusive. Using magnetoencephalography (MEG), we describe the brain functioning of 83 participants during recognition of previously memorised musical sequences and systematic variations. The results show feedforward connections originating from auditory cortices, and extending to the hippocampus, anterior cingulate gyrus, and medial cingulate gyrus. Simultaneously, we observe backward connections operating in the opposite direction. Throughout the sequences, the hippocampus and cingulate gyrus maintain the same hierarchical level, except for the final tone, where the cingulate gyrus assumes the top position within the hierarchy. The evoked responses of memorised sequences and variations engage the same hierarchical brain network but systematically differ in terms of temporal dynamics, strength, and polarity. Furthermore, induced-response analysis shows that alpha and beta power is stronger for the variations, while gamma power is enhanced for the memorised sequences. This study expands on the predictive coding theory by providing quantitative evidence of hierarchical brain mechanisms during conscious memory and predictive processing of auditory sequences.
|
68
|
Undurraga JA, Luke R, Van Yper L, Monaghan JJM, McAlpine D. The neural representation of an auditory spatial cue in the primate cortex. Curr Biol 2024; 34:2162-2174.e5. [PMID: 38718798 DOI: 10.1016/j.cub.2024.04.034] [Received: 11/13/2023] [Revised: 02/14/2024] [Accepted: 04/12/2024] [Indexed: 05/23/2024]
Abstract
Humans make use of small differences in the timing of sounds at the two ears, known as interaural time differences (ITDs), to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD, within and beyond auditory cortical regions, and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
|
69
|
Steinfeld R, Tacão-Monteiro A, Renart A. Differential representation of sensory information and behavioral choice across layers of the mouse auditory cortex. Curr Biol 2024; 34:2200-2211.e6. [PMID: 38733991 DOI: 10.1016/j.cub.2024.04.040] [Received: 12/05/2023] [Revised: 03/22/2024] [Accepted: 04/18/2024] [Indexed: 05/13/2024]
Abstract
The activity of neurons in sensory areas sometimes covaries with upcoming choices in decision-making tasks. However, the prevalence, causal origin, and functional role of choice-related activity remain controversial. Understanding the circuit logic of decision signals in sensory areas will require understanding their laminar specificity, but simultaneous recordings of neural activity across the cortical layers in forced-choice discrimination tasks have not yet been performed. Here, we describe neural activity from such recordings in the auditory cortex of mice during a frequency discrimination task with delayed report, which, as we show, requires the auditory cortex. Stimulus-related information was widely distributed across layers but disappeared very quickly after stimulus offset. Choice selectivity emerged toward the end of the delay period, suggesting a top-down origin, but only in the deep layers. Early stimulus-selective and late choice-selective deep neural ensembles were correlated, suggesting that the choice-selective signal fed back to the auditory cortex is not just action specific but develops as a consequence of the sensory-motor contingency imposed by the task.
|
70
|
Mizuguchi D, Sánchez-Valpuesta M, Kim Y, Dos Santos EB, Kang H, Mori C, Wada K, Kojima S. Daily singing of adult songbirds functions to maintain song performance independently of auditory feedback and age. Commun Biol 2024; 7:598. [PMID: 38762691 PMCID: PMC11102546 DOI: 10.1038/s42003-024-06311-5] [Received: 04/14/2023] [Accepted: 05/08/2024] [Indexed: 05/20/2024]
Abstract
Many songbirds learn to produce songs through vocal practice in early life and continue to sing daily throughout their lifetime. While it is well known that adult songbirds sing as part of their mating rituals, the functions of singing behavior outside of reproductive contexts remain unclear. Here, we investigated this issue in adult male zebra finches by suppressing their daily singing for two weeks and examining the effects on song performance. We found that singing suppression decreased the pitch, amplitude, and duration of songs, and that those song features substantially recovered through subsequent free singing. These reversible song changes were not dependent on auditory feedback or the age of the birds, contrasting with the adult song plasticity that has been reported previously. These results demonstrate that adult song structure is not stable without daily singing, and suggest that adult songbirds maintain song performance by preventing song changes through the physical act of daily singing throughout their lives. Such daily singing likely functions as vocal training to maintain the song production system in optimal condition for song performance in reproductive contexts, similar to how human singers and athletes practice daily to maintain their performance.
|
71
|
Ishida K, Ishida T, Nittono H. Decoding predicted musical notes from omitted stimulus potentials. Sci Rep 2024; 14:11164. [PMID: 38750185 PMCID: PMC11096333 DOI: 10.1038/s41598-024-61989-1] [Received: 01/22/2024] [Accepted: 05/13/2024] [Indexed: 05/18/2024]
Abstract
Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs) to avoid contamination of bottom-up sensory processing with top-down predictive processing. Decoding of the omitted content was attempted using a support vector machine, a type of machine learning. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy of the four omitted notes was significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information, and that higher predictability yields a more specific representation of the expected note.
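As an illustration of this kind of decoding pipeline, a linear support vector machine cross-validated over flattened ERP epochs might be set up as follows (random placeholder data; shapes, labels, and parameters are assumptions, not the study's actual configuration):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: 100 omission epochs, each flattened from
# 32 channels x 50 time points, labelled with one of four omitted notes.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32 * 50))   # flattened channel-by-time features
y = rng.integers(0, 4, size=100)          # 0..3 standing in for E, F, A, C

# Linear SVM with feature standardisation, scored by stratified 5-fold CV;
# chance level for four balanced classes is 0.25.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With random features, as here, mean accuracy should hover near chance; above-chance decoding on real OSP epochs is what would indicate note-specific predictive information.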
|
72
|
Bechtold TA, Curry B, Witek M. The perceived catchiness of music affects the experience of groove. PLoS One 2024; 19:e0303309. [PMID: 38748741 PMCID: PMC11095763 DOI: 10.1371/journal.pone.0303309] [Received: 11/15/2023] [Accepted: 04/23/2024] [Indexed: 05/19/2024]
Abstract
Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a potential factor for experiencing groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of music; we showed that perceived catchiness is multi-dimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex, as further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.
|
73
|
Yasoda-Mohan A, Faubert J, Ost J, Kropotov JD, Vanneste S. Investigating sensitivity to multi-domain prediction errors in chronic auditory phantom perception. Sci Rep 2024; 14:11036. [PMID: 38744906 PMCID: PMC11094085 DOI: 10.1038/s41598-024-61045-y] [Received: 03/04/2024] [Accepted: 04/29/2024] [Indexed: 05/16/2024]
Abstract
The perception of a continuous phantom in a sensory domain in the absence of an external stimulus is explained as a maladaptive compensation of aberrant predictive coding, a proposed unified theory of brain functioning. If this were true, these changes would occur not only in the domain of the phantom percept but in other sensory domains as well. We confirm this hypothesis by using tinnitus (continuous phantom sound) as a model and probe the predictive coding mechanism using the established local-global oddball paradigm in both the auditory and visual domains. We observe that tinnitus patients are sensitive to changes in predictive coding not only in the auditory but also in the visual domain. We report changes in well-established components of event-related EEG such as the mismatch negativity. Furthermore, deviations in stimulus characteristics were correlated with the subjective tinnitus distress. These results provide an empirical confirmation that aberrant perceptions are a symptom of a higher-order systemic disorder transcending the domain of the percept.
|
74
|
Ludwig S, Bakas S, Adamos DA, Laskaris N, Panagakis Y, Zafeiriou S. EEGminer: discovering interpretable features of brain activity with learnable filters. J Neural Eng 2024; 21:036010. [PMID: 38684154 DOI: 10.1088/1741-2552/ad44d7] [Received: 07/28/2023] [Accepted: 04/29/2024] [Indexed: 05/02/2024]
Abstract
Objective. The patterns of brain activity associated with different brain processes can be used to identify different brain states and make behavioural predictions. However, the relevant features are not readily apparent and accessible. Our aim is to design a system for learning informative latent representations from multichannel recordings of ongoing EEG activity. Approach. We propose a novel differentiable decoding pipeline consisting of learnable filters and a pre-determined feature extraction module. Specifically, we introduce filters parameterized by generalized Gaussian functions that offer a smooth derivative for stable end-to-end model training and allow for learning interpretable features. For the feature module, we use signal magnitude and functional connectivity estimates. Main results. We demonstrate the utility of our model on a new EEG dataset of unprecedented size (i.e. 721 subjects), where we identify consistent trends of music perception and related individual differences. Furthermore, we train and apply our model in two additional datasets, specifically for emotion recognition on SEED and workload classification on simultaneous task EEG workload. The discovered features align well with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening. This agrees with the specialisation of the temporal lobes regarding music perception proposed in the literature. Significance. The proposed method offers strong interpretability of learned features while reaching similar levels of accuracy achieved by black-box deep learning models. This improved trustworthiness may promote the use of deep learning models in real-world applications. The model code is available at https://github.com/SMLudwig/EEGminer/.
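As a rough illustration of the filter family described above, a generalized Gaussian magnitude response applied multiplicatively in the frequency domain could be sketched as follows (function names, parameters, and the toy signal are assumptions for illustration, not the EEGminer implementation):

```python
import numpy as np

def generalized_gaussian(freqs, center, width, shape):
    """Generalized Gaussian magnitude response over frequency.

    shape = 2 gives an ordinary Gaussian; larger values flatten the passband
    toward a near-rectangular band-pass while remaining smooth and
    differentiable -- the property that makes such filters learnable end-to-end.
    """
    return np.exp(-np.abs((freqs - center) / width) ** shape)

def bandpass(signal, fs, center, width, shape):
    """Apply the filter by scaling the signal's spectrum pointwise."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    gain = generalized_gaussian(freqs, center, width, shape)
    return np.fft.irfft(spectrum * gain, n=len(signal))

# Toy check: a filter centred at 10 Hz keeps a 10 Hz tone
# and strongly attenuates a 40 Hz tone.
fs = 250
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
y = bandpass(x, fs, center=10.0, width=4.0, shape=4.0)
```

In a learnable setting, `center`, `width`, and `shape` would be trainable parameters optimized jointly with the downstream feature module.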
|
75
|
Nakajima Y, Ashida H. Revisiting the Deviation Effects of Irrelevant Sound on Serial and Nonserial Tasks. Multisens Res 2024; 37:261-273. [PMID: 38724023 DOI: 10.1163/22134808-bja10123] [Received: 07/14/2023] [Accepted: 04/24/2024] [Indexed: 06/06/2024]
Abstract
Two types of disruptive effects of irrelevant sound on visual tasks have been reported: the changing-state effect and the deviation effect. The idea that the deviation effect, which arises from attentional capture, is independent of task requirements, whereas the changing-state effect is specific to tasks that require serial processing, has been examined by comparing tasks that do or do not require serial-order processing. While many previous studies used the missing-item task as the nonserial task, it is unclear whether other cognitive tasks lead to similar results regarding the different task specificity of both effects. Kattner et al. (Memory and Cognition, 2023) used the mental-arithmetic task as the nonserial task, and failed to demonstrate the deviation effect. However, there were several procedural factors that could account for the lack of deviation effect, such as differences in design and procedures (e.g., conducted online, intermixed conditions). In the present study, we aimed to investigate whether the deviation effect could be observed in both the serial-recall and mental-arithmetic tasks when these procedural factors were modified. We found strong evidence of the deviation effect in both the serial-recall and the mental-arithmetic tasks when stimulus presentation and experimental design were aligned with previous studies that demonstrated the deviation effect (e.g., conducted in-person, blockwise presentation of sound, etc.). The results support the idea that the deviation effect is not task-specific.
|