1
Uemura M, Katagiri Y, Imai E, Kawahara Y, Otani Y, Ichinose T, Kondo K, Kowa H. Dorsal Anterior Cingulate Cortex Coordinates Contextual Mental Imagery for Single-Beat Manipulation during Rhythmic Sensorimotor Synchronization. Brain Sci 2024; 14:757. PMID: 39199452; PMCID: PMC11352649; DOI: 10.3390/brainsci14080757.
Abstract
Flexible pulse-by-pulse regulation of sensorimotor synchronization is crucial for voluntarily producing rhythmic behaviors in synchrony with external cueing; however, the underpinning neurophysiological mechanisms remain unclear. We hypothesized that the dorsal anterior cingulate cortex (dACC) plays a key role by coordinating both proactive and reactive motor outcomes based on contextual mental imagery. To test our hypothesis, a missing-oddball task in finger-tapping paradigms was conducted in 33 healthy young volunteers. The dynamic properties of the dACC were evaluated by event-related deep-brain activity (ER-DBA), supported by event-related potential (ERP) analysis and behavioral evaluation based on signal detection theory. We found that ER-DBA activation/deactivation reflected a strategic choice of motor control modality in accordance with mental imagery. Reverse ERP traces, as omission responses, confirmed that the imagery was contextual. We found that mental imagery was updated only by environmental changes, via perceptual evidence and response-based abductive reasoning. Moreover, stable on-pulse tapping was achievable by maintaining proactive control while creating imagery of syncopated rhythms from simple beat trains, whereas accuracy was degraded by frequent erroneous tapping for missing pulses. We conclude that the dACC voluntarily regulates rhythmic sensorimotor synchronization by utilizing contextual mental imagery based on experience and by creating novel rhythms.
Affiliation(s)
- Maho Uemura
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Yoshitada Katagiri
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo 113-8655, Japan
- Emiko Imai
- Department of Biophysics, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Yasuhiro Kawahara
- Department of Human Life and Health Sciences, Division of Arts and Sciences, The Open University of Japan, Chiba 261-8586, Japan
- Yoshitaka Otani
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Faculty of Rehabilitation, Kobe International University, Kobe 658-0032, Japan
- Tomoko Ichinose
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Hisatomo Kowa
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
2
te Rietmolen N, Mercier MR, Trébuchon A, Morillon B, Schön D. Speech and music recruit frequency-specific distributed and overlapping cortical networks. eLife 2024; 13:RP94509. PMID: 39038076; PMCID: PMC11262799; DOI: 10.7554/elife.94509.
Abstract
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined this with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
Affiliation(s)
- Noémie te Rietmolen
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Manuel R Mercier
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Agnès Trébuchon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille, France
- Benjamin Morillon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Daniele Schön
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
3
Sugiyama S, Inui K, Ohi K, Shioiri T. The influence of novelty detection on the 40-Hz auditory steady-state response in schizophrenia: A novel hypothesis from meta-analysis. Prog Neuropsychopharmacol Biol Psychiatry 2024; 135:111096. PMID: 39029650; DOI: 10.1016/j.pnpbp.2024.111096.
Abstract
The 40-Hz auditory steady-state response (ASSR) is influenced not only by parameters such as attention, stimulus type, and analysis level but also by stimulus duration and inter-stimulus interval (ISI). In this meta-analysis, we examined these parameters in 33 studies that investigated 40-Hz ASSRs in patients with schizophrenia. The average Hedges' g random effect sizes were −0.47 and −0.43 for spectral power and phase-locking, respectively. We also found differences in ASSR measures based on stimulus duration and ISI. In particular, ISI was shown to significantly influence differences in the 40-Hz ASSR between healthy controls and patients with schizophrenia. We proposed a novel hypothesis focusing on the role of novelty detection, dependent on stimulus duration and ISI, as a critical factor in determining these differences. Specifically, longer stimulus durations and shorter ISIs under random presentation, or shorter stimulus durations and longer ISIs under repetitive presentation, decrease the 40-Hz ASSR in healthy controls. Patients with schizophrenia show minimal changes in response to stimulus duration and ISI, thus reducing the difference between controls and patients. This hypothesis can consistently explain most of the studies that have failed to show a reduction in 40-Hz ASSR in patients with schizophrenia. Increased novelty-related activity, reflected as an increase in auditory evoked potential components at stimulus onset, such as the N1, could suppress the 40-Hz ASSR, potentially reducing the peak measures of spectral power and phase-locking. To establish the 40-Hz ASSR as a truly valuable biomarker for schizophrenia, further systematic research using paradigms with various stimulus durations and ISIs is needed.
Affiliation(s)
- Shunsuke Sugiyama
- Department of Psychiatry, Gifu University Graduate School of Medicine, Gifu, Japan
- Koji Inui
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Kasugai, Japan
- Section of Brain Function Information, National Institute for Physiological Sciences, Okazaki, Japan
- Kazutaka Ohi
- Department of Psychiatry, Gifu University Graduate School of Medicine, Gifu, Japan
- Toshiki Shioiri
- Department of Psychiatry, Gifu University Graduate School of Medicine, Gifu, Japan
4
Robert P, Zatorre R, Gupta A, Sein J, Anton JL, Belin P, Thoret E, Morillon B. Auditory hemispheric asymmetry for actions and objects. Cereb Cortex 2024; 34:bhae292. PMID: 39051660; DOI: 10.1093/cercor/bhae292.
Abstract
What is the function of auditory hemispheric asymmetry? We propose that the identification of sound sources relies on the asymmetric processing of two complementary and perceptually relevant acoustic invariants: actions and objects. In a large dataset of environmental sounds, we observed that temporal and spectral modulations display only weak covariation. We then synthesized auditory stimuli by simulating various actions (frictions) occurring on different objects (solid surfaces). Behaviorally, discrimination of actions relies on temporal modulations, while discrimination of objects relies on spectral modulations. Functional magnetic resonance imaging data showed that actions and objects are decoded in the left and right hemispheres, respectively, in bilateral superior temporal and left inferior frontal regions. This asymmetry reflects a generic differential processing (through differential neural sensitivity to the temporal and spectral modulations present in environmental sounds) that supports the efficient categorization of actions and objects. These results support an ecologically valid framework of the functional role of auditory brain asymmetry.
Affiliation(s)
- Paul Robert
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Robert Zatorre
- Montreal Neurological Institute (MNI), Cognitive Neuroscience Unit, McGill University, 3801 Rue University, Montréal, QC H3A 2B4, Canada
- Centre for Research in Brain, Language, and Music (CRBLM), McGill University, Faculty of Medicine, 3640 de la Montagne, Montreal, QC H3G 2A8, Canada
- Akanksha Gupta
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Julien Sein
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Jean-Luc Anton
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Pascal Belin
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- Etienne Thoret
- Institut de Neurosciences de la Timone (INT), CNRS/UMR7289, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
- PRISM Laboratory, CNRS/UMR7061, Aix Marseille University, 31 Chemin Joseph Aiguier, Marseille, 13402 Cedex 20, France
- Laboratoire d'Informatique et Systèmes (LIS), CNRS/UMR7020, Aix Marseille University, 52 Av Escadrille Normandie Niemen, Marseille, 13397 Cedex 20, France
- Institute of Language, Communication, and the Brain (ILCB), Aix Marseille University, 5 avenue Pasteur, Aix-en-Provence, 13604 Cedex 1, France
- Benjamin Morillon
- Institut de Neurosciences des Systèmes (INS), Inserm/UMR1106, Aix Marseille University, 27 Bd Jean Moulin, Marseille 13005, France
5
de Zubicaray GI, Hinojosa JA. Statistical Relationships Between Phonological Form, Emotional Valence and Arousal of Spanish Words. J Cogn 2024; 7:42. PMID: 38737820; PMCID: PMC11086587; DOI: 10.5334/joc.366.
Abstract
A number of studies have provided evidence of limited non-arbitrary associations between the phonological forms and meanings of affective words, a finding referred to as affective sound symbolism. Here, we explored whether the affective connotations of Spanish words might have more extensive statistical relationships with phonological/phonetic features, or affective form typicality. After eliminating words with poor affective rating agreement and morphophonological redundancies (e.g., negating prefixes), we found evidence of significant form typicality for emotional valence, emotionality, and arousal in a large sample of monosyllabic and polysyllabic words. These affective form-meaning mappings remained significant even when controlling for a range of lexico-semantic variables. We show that affective variables and their corresponding form typicality measures are able to significantly predict lexical decision performance using a megastudy dataset. Overall, our findings provide new evidence that affective form typicality is a statistical property of the Spanish lexicon.
Affiliation(s)
- Greig I. de Zubicaray
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology (QUT), Brisbane, Australia
- José A. Hinojosa
- Departamento de Psicología Experimental, Procesos Cognitivos y Logopedia, Universidad Complutense de Madrid, Madrid, Spain
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, Madrid, Spain
- Centro de Investigación Nebrija en Cognición (CINC), Universidad Nebrija, Madrid, Spain
6
Norena A. Did Kant suffer from misophonia? Front Psychol 2024; 15:1242516. PMID: 38420172; PMCID: PMC10899398; DOI: 10.3389/fpsyg.2024.1242516.
Abstract
Misophonia is a disorder of decreased tolerance to specific sounds, often (but not always) produced by humans, which can trigger intense emotional reactions (anger, disgust, etc.). This relatively prevalent disorder can reduce quality of life. The causes of misophonia are still unclear. In this article, we develop a hypothesis suggesting that misophonia can be caused by a failure in the organization of the perceived world. The perceived world is the result of both the structure of human thought and the many conditioning factors that punctuate human life, particularly social conditioning. It is made up of abstract symbols that map the world and help humans orient themselves in a potentially dangerous environment. In this context, the role of social rules acquired throughout life is considerable. Table manners, for example, are a set of deeply regulated and controlled behaviors (it is considered impolite to eat with the mouth open and to make noise while eating), which help shape the way the perceived world is organized. It is therefore not surprising to find sounds from the mouth (chewing, etc.) among the most common misophonic sound triggers. Politeness can be seen as an act of obedience to moral rules or courtesy, which is a prerequisite for peaceful social relations. Beyond this example, we also argue that any sound can become a misophonic trigger as long as it is not integrated into the perceived ordered and harmonious world, because it is considered an "anomaly," i.e., a disorder, an immorality or a vulgarity.
Affiliation(s)
- Arnaud Norena
- Centre de recherche en Psychologie et Neuroscience, UMR7077, Aix-Marseille Université, CNRS, Marseille, France
7
Kathios N, Patel AD, Loui P. Musical anhedonia, timbre, and the rewards of music listening. Cognition 2024; 243:105672. PMID: 38086279; DOI: 10.1016/j.cognition.2023.105672.
Abstract
Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research suggests that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with isolated short pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.
Affiliation(s)
- Nicholas Kathios
- Dept. of Psychology, Northeastern University, United States of America
- Aniruddh D Patel
- Dept. of Psychology, Tufts University, United States of America
- Program in Brain Mind and Consciousness, Canadian Institute for Advanced Research, Canada
- Psyche Loui
- Dept. of Psychology, Northeastern University, United States of America
- Dept. of Music, Northeastern University, United States of America
8
Humphrey J, Brophy E, Kosoy R, Zeng B, Coccia E, Mattei D, Ravi A, Efthymiou AG, Navarro E, Muller BZ, Snijders GJLJ, Allan A, Münch A, Kitata RB, Kleopoulos SP, Argyriou S, Shao Z, Francoeur N, Tsai CF, Gritsenko MA, Monroe ME, Paurus VL, Weitz KK, Shi T, Sebra R, Liu T, de Witte LD, Goate AM, Bennett DA, Haroutunian V, Hoffman GE, Fullard JF, Roussos P, Raj T. Long-read RNA-seq atlas of novel microglia isoforms elucidates disease-associated genetic regulation of splicing. medRxiv 2023:2023.12.01.23299073. PMID: 38076956; PMCID: PMC10705658; DOI: 10.1101/2023.12.01.23299073.
Abstract
Microglia, the innate immune cells of the central nervous system, have been genetically implicated in multiple neurodegenerative diseases. We previously mapped the genetic regulation of gene expression and mRNA splicing in human microglia, identifying several loci where common genetic variants in microglia-specific regulatory elements explain disease risk loci identified by GWAS. However, identifying genetic effects on splicing has been challenging due to the use of short sequencing reads to identify causal isoforms. Here we present the isoform-centric microglia genomic atlas (isoMiGA) which leverages the power of long-read RNA-seq to identify 35,879 novel microglia isoforms. We show that the novel microglia isoforms are involved in stimulation response and brain region specificity. We then quantified the expression of both known and novel isoforms in a multi-ethnic meta-analysis of 555 human microglia short-read RNA-seq samples from 391 donors, the largest to date, and found associations with genetic risk loci in Alzheimer's disease and Parkinson's disease. We nominate several loci that may act through complex changes in isoform and splice site usage.
Affiliation(s)
- Jack Humphrey
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Erica Brophy
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Roman Kosoy
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Biao Zeng
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Elena Coccia
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Daniele Mattei
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ashvin Ravi
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anastasia G. Efthymiou
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Elisa Navarro
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Biochemistry and Molecular Biology, Faculty of Medicine (Universidad Complutense de Madrid), Madrid, Spain
- Centro de Investigación Biomédica en Red sobre Enfermedades Neurodegenerativas (CIBERNED), Madrid, Spain
- Instituto Ramon y Cajal de Investigacion Sanitaria (IRYCIS), Madrid, Spain
- Benjamin Z. Muller
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Gijsje JLJ Snijders
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Amanda Allan
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Alexandra Münch
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Reta Birhanu Kitata
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Steven P Kleopoulos
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Stathis Argyriou
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Zhiping Shao
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Nancy Francoeur
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Chia-Feng Tsai
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Marina A Gritsenko
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Matthew E Monroe
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Vanessa L Paurus
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Karl K Weitz
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Tujin Shi
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Robert Sebra
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Black Family Stem Cell Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Global Health and Emerging Pathogens Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Tao Liu
- Biological Sciences Division, Pacific Northwest National Laboratory, Richland, Washington, USA
- Lot D. de Witte
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, USA
- Alison M. Goate
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - David A. Bennett
- Rush Alzheimer’s Disease Center, Rush University Medical Center, Chicago, Illinois, USA
| | - Vahram Haroutunian
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Mental Illness Research, Education, and Clinical Center (VISN 2 South), James J. Peters VA Medical Center, Bronx, NY, USA
| | - Gabriel E. Hoffman
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - John F. Fullard
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Panos Roussos
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Disease Neurogenomics, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Mental Illness Research, Education, and Clinical Center (VISN 2 South), James J. Peters VA Medical Center, Bronx, NY, USA
| | - Towfique Raj
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Ronald M. Loeb Center for Alzheimer’s Disease, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Icahn Genomics Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Estelle and Daniel Maggin Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| |
|
9
|
Kothinti SR, Elhilali M. Are acoustics enough? Semantic effects on auditory salience in natural scenes. Front Psychol 2023; 14:1276237. [PMID: 38098516 PMCID: PMC10720592 DOI: 10.3389/fpsyg.2023.1276237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Accepted: 11/10/2023] [Indexed: 12/17/2023] Open
Abstract
Auditory salience is a fundamental property of a sound that allows it to grab a listener's attention regardless of their attentional state or behavioral goals. While previous research has shed light on acoustic factors influencing auditory salience, the semantic dimensions of this phenomenon have remained relatively unexplored, owing both to the complexity of measuring salience in audition and to the limited focus on complex natural scenes. In this study, we examine the relationship between acoustic, contextual, and semantic attributes and their impact on the auditory salience of natural audio scenes using a dichotic listening paradigm. The experiments present acoustic scenes in forward and backward directions; the latter diminishes semantic effects, providing a counterpoint to the effects observed in forward scenes. The behavioral data collected from a crowd-sourced platform reveal a striking convergence in temporal salience maps for certain sound events, while marked disparities emerge in others. Our main hypothesis posits that differences in the perceptual salience of events are predominantly driven by semantic and contextual cues, particularly evident in those cases displaying substantial disparities between forward and backward presentations. Conversely, events exhibiting a high degree of alignment can largely be attributed to low-level acoustic attributes. To evaluate this hypothesis, we employ analytical techniques that combine rich low-level mappings from acoustic profiles with high-level embeddings extracted from a deep neural network. This integrated approach captures both the acoustic and semantic attributes of acoustic scenes along with their temporal trajectories. The results demonstrate that perceptual salience arises from a careful interplay between low-level and high-level attributes that shapes which moments stand out in a natural soundscape. Furthermore, our findings underscore the important role of longer-term context as a critical component of auditory salience, enabling listeners to discern and adapt to temporal regularities within an acoustic scene. The experimental and model-based validation of semantic factors of salience paves the way for a more complete understanding of auditory salience. Ultimately, the empirical and computational analyses have implications for developing large-scale models of auditory salience and audio analytics.
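The forward/backward comparison described above can be quantified in a simple way, for instance by correlating the two temporal salience maps for each event: high agreement points to low-level acoustic drivers, low agreement flags candidate semantic/contextual effects. A minimal sketch with synthetic salience traces (hypothetical illustration, not the study's data or analysis code):

```python
import numpy as np

def salience_agreement(forward, backward):
    """Pearson correlation between forward and backward salience maps.

    The backward trace is time-reversed before comparison so that both
    maps are aligned to the same moments of the underlying scene.
    High agreement suggests acoustically driven salience; low agreement
    flags candidate semantic/contextual effects.
    """
    return np.corrcoef(forward, backward[::-1])[0, 1]

# Synthetic example: one event is salient at the same scene moment in
# both playback directions; another is salient only in forward playback.
t = np.linspace(0.0, 1.0, 200)
acoustic_peak = np.exp(-((t - 0.5) ** 2) / 0.005)

forward = acoustic_peak
backward_aligned = acoustic_peak[::-1]                       # same event, reversed time
backward_shifted = np.exp(-((t - 0.2) ** 2) / 0.005)[::-1]   # different moment

print(salience_agreement(forward, backward_aligned))   # high agreement
print(salience_agreement(forward, backward_shifted))   # low agreement
```

The choice of Pearson correlation here is only one plausible agreement measure; rank correlation or peak-overlap statistics would serve the same screening purpose.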
Affiliation(s)
| | - Mounya Elhilali
- Department of Electrical and Computer Engineering, Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, United States
| |
|
10
|
Dejean C, Dupont T, Verpy E, Gonçalves N, Coqueran S, Michalski N, Pucheu S, Bourgeron T, Gourévitch B. Detecting Central Auditory Processing Disorders in Awake Mice. Brain Sci 2023; 13:1539. [PMID: 38002499 PMCID: PMC10669832 DOI: 10.3390/brainsci13111539] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2023] [Revised: 10/24/2023] [Accepted: 10/28/2023] [Indexed: 11/26/2023] Open
Abstract
Mice are increasingly used as models of human acquired neurological or neurodevelopmental conditions, such as autism, schizophrenia, and Alzheimer's disease. All these conditions involve central auditory processing disorders, which have been little investigated despite their potential for providing interesting insights into the mechanisms behind such disorders. Alterations of the auditory steady-state response to 40 Hz click trains are associated with an imbalance between neuronal excitation and inhibition, a mechanism thought to be common to many neurological disorders. Here, we demonstrate the value of presenting click trains at various rates to mice with chronically implanted pins above the inferior colliculus and the auditory cortex for obtaining easy, reliable, and long-lasting access to subcortical and cortical complex auditory processing in awake mice. Using this protocol on a mutant mouse model of autism with a defect of the Shank3 gene, we show that the neural response is impaired at high click rates (above 60 Hz) and that this impairment is visible subcortically: two results that cannot be obtained with classical protocols for cortical EEG recordings in response to stimulation at 40 Hz. These results demonstrate the value and necessity of a more complete investigation of central auditory processing disorders in mouse models of neurological or neurodevelopmental disorders.
Affiliation(s)
- Camille Dejean
- Institut Pasteur, Université Paris Cité, INSERM, Institut de l’Audition, Plasticity of Central Auditory Circuits, F-75012 Paris, France
- Cilcare Company, F-34080 Montpellier, France
- Sorbonne Université, Ecole Doctorale Complexité du Vivant, F-75005 Paris, France
| | - Typhaine Dupont
- Institut Pasteur, Université Paris Cité, INSERM, Institut de l’Audition, Plasticity of Central Auditory Circuits, F-75012 Paris, France
| | - Elisabeth Verpy
- Institut Pasteur, Université Paris Cité, CNRS, IUF, Human Genetics and Cognitive Functions, F-75015 Paris, France
| | - Noémi Gonçalves
- Institut Pasteur, Université Paris Cité, INSERM, Institut de l’Audition, Plasticity of Central Auditory Circuits, F-75012 Paris, France
| | - Sabrina Coqueran
- Institut Pasteur, Université Paris Cité, CNRS, IUF, Human Genetics and Cognitive Functions, F-75015 Paris, France
| | - Nicolas Michalski
- Institut Pasteur, Université Paris Cité, INSERM, Institut de l’Audition, Plasticity of Central Auditory Circuits, F-75012 Paris, France
| | | | - Thomas Bourgeron
- Institut Pasteur, Université Paris Cité, CNRS, IUF, Human Genetics and Cognitive Functions, F-75015 Paris, France
| | - Boris Gourévitch
- Institut Pasteur, Université Paris Cité, INSERM, Institut de l’Audition, Plasticity of Central Auditory Circuits, F-75012 Paris, France
- CNRS, F-75016 Paris, France
| |
|
11
|
Riegel M, Granja D, Amer T, Vuilleumier P, Rimmele U. Opposite effects of emotion and event segmentation on temporal order memory and object-context binding. Cogn Emot 2023:1-19. [PMID: 37882239 DOI: 10.1080/02699931.2023.2270195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 09/04/2023] [Indexed: 10/27/2023]
Abstract
Our daily lives unfold continuously, yet our memories are organised into distinct events, situated in a specific context of space and time, and chunked when this context changes (at event boundaries). Previous research showed that this process, termed event segmentation, enhances object-context binding but impairs temporal order memory. Physiologically, peaks in pupil dilation index event segmentation, similar to emotion-induced bursts of autonomic arousal. Emotional arousal also modulates object-context binding and temporal order memory. Yet, these two critical factors have not been systematically studied together. To address this gap, we ran a behavioural experiment using a paradigm validated to study event segmentation and extended it with an emotion manipulation. During encoding, we sequentially presented greyscale objects embedded in coloured frames (colour changes defining events), each with a neutral or aversive sound. During retrieval, we tested participants' temporal order memory and object-colour binding. We found opposite effects of emotion and event segmentation on episodic memory. While event segmentation enhanced object-context binding, emotion impaired it. Conversely, event segmentation impaired temporal order memory, but emotion enhanced it. These findings increase our understanding of episodic memory organisation in laboratory settings and, potentially, in real life, where perceptual changes and emotion fluctuations constantly interact.
Affiliation(s)
- Monika Riegel
- Emotion and Memory Laboratory, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Swiss Center of Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Center for Interdisciplinary Study of Gerontology and Vulnerability (CIGEV), University of Geneva, Geneva, Switzerland
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
| | - Daniel Granja
- Emotion and Memory Laboratory, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Center for Interdisciplinary Study of Gerontology and Vulnerability (CIGEV), University of Geneva, Geneva, Switzerland
- Neurocenter, University of Geneva, Geneva, Switzerland
| | - Tarek Amer
- Psychology Department, University of Victoria, BC, Victoria, Canada
| | - Patrik Vuilleumier
- Swiss Center of Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
- Neurocenter, University of Geneva, Geneva, Switzerland
| | - Ulrike Rimmele
- Emotion and Memory Laboratory, Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Swiss Center of Affective Sciences (CISA), University of Geneva, Geneva, Switzerland
- Center for Interdisciplinary Study of Gerontology and Vulnerability (CIGEV), University of Geneva, Geneva, Switzerland
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Basic Neurosciences, University of Geneva, Geneva, Switzerland
| |
|
12
|
Burdick KJ, Yang S, Lopez AE, Wessel C, Schutz M, Schlesinger JJ. Auditory roughness: a delicate balance. Br J Anaesth 2023; 131:649-652. [PMID: 37537119 DOI: 10.1016/j.bja.2023.07.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 07/03/2023] [Accepted: 07/04/2023] [Indexed: 08/05/2023] Open
Abstract
Auditory roughness in medical alarm sounds is an important design attribute that has been shown to impact user performance and perception. While roughness can help alarms remain detectable at decreased signal-to-noise ratios (through perceived loudness) and can communicate urgency, it might also impact patient recovery. Therefore, neuroscience correlates, music theory, and patient impact are critical aspects to investigate in order to optimise alarm design.
Affiliation(s)
- Kendall J Burdick
- Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA.
| | - Sean Yang
- Blair School of Music, Vanderbilt University, Nashville, TN, USA
| | | | | | | | - Joseph J Schlesinger
- Department of Anesthesiology, Division of Critical Care Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
| |
|
13
|
Huang Y, Lv B, Ni K, Jiang W. Discomfort estimation for aircraft cabin noise using linear regression and modified psychoacoustic annoyance approaches. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 154:1963-1976. [PMID: 37782118 DOI: 10.1121/10.0020838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 08/15/2023] [Indexed: 10/03/2023]
Abstract
Appropriate sound quality models for noise-induced discomfort are necessary for a better acoustic comfort design in the aircraft cabin. This study investigates the acoustic discomfort in two large passenger aeroplanes (i.e., planes A and B). We recorded the noise at 21 positions in each aircraft cabin and selected 42 stimuli ranging from 72 to 81 dB(A) during the cruising flights. Twenty-four participants rated the noise discomfort by the absolute magnitude estimation method. The discomfort values in the middle section of the aircraft cabin are 10 percentage points higher than in the front or rear section. The discomfort magnitude was dominated by loudness and influenced by roughness and sharpness. A multiple linear (MA) discomfort model was established, accounting for the relationship between discomfort and the sound quality metrics (i.e., loudness, sharpness, and roughness). The MA model estimated noise discomfort better than the Zwicker and other (i.e., More and Di) psychoacoustic annoyance (PA) models. We modified the coefficients of the independent variables in the formulations of the Zwicker, Di, and More PA models according to the present experimental results. The correlation coefficients between the estimated and measured values of the modified models were at least 20 percentage points higher than those of the original models.
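The multiple-linear approach described above amounts to ordinary least squares regression of discomfort ratings on the three sound quality metrics. A minimal sketch with synthetic data and hypothetical coefficients (illustrative only, not the paper's data or fitted weights):

```python
import numpy as np

# Synthetic stand-ins for the measured quantities; units in comments
# follow standard psychoacoustics (sone, acum, asper).
rng = np.random.default_rng(2)
n = 42  # one row per stimulus, matching the 42-stimulus design above
loudness = rng.uniform(20, 40, n)     # sone
sharpness = rng.uniform(1.0, 2.5, n)  # acum
roughness = rng.uniform(0.1, 0.6, n)  # asper
# Hypothetical "true" weights plus rating noise:
discomfort = (0.8 * loudness + 5.0 * sharpness
              + 12.0 * roughness + rng.standard_normal(n))

# Ordinary least squares: discomfort ~ 1 + loudness + sharpness + roughness
X = np.column_stack([np.ones(n), loudness, sharpness, roughness])
coefs, *_ = np.linalg.lstsq(X, discomfort, rcond=None)
print(coefs)  # recovered intercept and per-metric weights
```

With enough stimuli spanning the metric ranges, the recovered weights approximate the generating ones; comparing such a fit against Zwicker-style PA formulas is then a matter of comparing predicted and rated discomfort.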
Affiliation(s)
- Yu Huang
- Institute of Vibration, Shock and Noise, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Bingcong Lv
- Institute of Vibration, Shock and Noise, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Ke Ni
- Institute of Vibration, Shock and Noise, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Weikang Jiang
- Institute of Vibration, Shock and Noise, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| |
|
14
|
Bowling DL. Vocal similarity theory and the biology of musical tonality. Phys Life Rev 2023; 46:46-51. [PMID: 37244152 PMCID: PMC10528872 DOI: 10.1016/j.plrev.2023.05.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 05/15/2023] [Indexed: 05/29/2023]
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford School of Medicine, United States of America; Center for Computer Research in Music and Acoustics, Stanford School of Humanities and Sciences, United States of America.
| |
|
15
|
Li G, Jiang S, Meng J, Wu Z, Jiang H, Fan Z, Hu J, Sheng X, Zhang D, Schalk G, Chen L, Zhu X. Spatio-temporal evolution of human neural activity during visually cued hand movements. Cereb Cortex 2023; 33:9764-9777. [PMID: 37464883 DOI: 10.1093/cercor/bhad242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 06/14/2023] [Accepted: 06/15/2023] [Indexed: 07/20/2023] Open
Abstract
Making hand movements in response to visual cues is common in daily life. It is well known that this process activates multiple areas in the brain, but how these neural activations progress across space and time remains largely unknown. Taking advantage of intracranial electroencephalographic (iEEG) recordings using depth and subdural electrodes from 36 human subjects performing the same task, we applied single-trial and cross-trial analyses to high-frequency iEEG activity. The results show that the neural activation was widely distributed across the human brain, both within and on the surface of the brain, but concentrated in certain areas of the parietal, frontal, and occipital lobes, with the parietal lobes presenting significant left lateralization of the activation. We also demonstrate temporal differences across these brain regions. Finally, we evaluated the degree to which the timing of activity within these regions was related to sensory or motor function. The findings of this study advance the understanding of task-related neural processing in the human brain and may provide important insights for translational applications.
Affiliation(s)
- Guangye Li
- Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Shize Jiang
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Jianjun Meng
- Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Zehan Wu
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Haiteng Jiang
- Department of Neurobiology, Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou 310013, China
- MOE Frontier Science Center for Brain Science & Brain-Machine Integration, Zhejiang University, Hangzhou 310058, China
| | - Zhen Fan
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Jie Hu
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Xinjun Sheng
- Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Dingguo Zhang
- Department of Electronic and Electrical Engineering, University of Bath, Bath BA2 7AY, United Kingdom
| | - Gerwin Schalk
- Chen Frontier Lab for Applied Neurotechnology, Tianqiao and Chrissy Chen Institute, Shanghai 200052, China
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Liang Chen
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai 200040, China
| | - Xiangyang Zhu
- Institute of Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
| |
|
16
|
Dowdall JR, Schneider M, Vinck M. Attentional modulation of inter-areal coherence explained by frequency shifts. Neuroimage 2023:120256. [PMID: 37392809 DOI: 10.1016/j.neuroimage.2023.120256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 06/22/2023] [Accepted: 06/28/2023] [Indexed: 07/03/2023] Open
Abstract
Inter-areal coherence has been hypothesized as a mechanism for inter-areal communication. Indeed, empirical studies have observed an increase in inter-areal coherence with attention. Yet, the mechanisms underlying changes in coherence remain largely unknown. Both attention and stimulus salience are associated with shifts in the peak frequency of gamma oscillations in V1, which suggests that the frequency of oscillations may play a role in facilitating changes in inter-areal communication and coherence. In this study, we used computational modeling to investigate how the peak frequency of a sender influences inter-areal coherence. We show that changes in the magnitude of coherence are largely determined by the peak frequency of the sender. However, the pattern of coherence depends on the intrinsic properties of the receiver, specifically whether the receiver integrates or resonates with its synaptic inputs. Because resonant receivers are frequency-selective, resonance has been proposed as a mechanism for selective communication. However, the pattern of coherence changes produced by a resonant receiver is inconsistent with empirical studies. By contrast, an integrator receiver does produce the pattern of coherence with frequency shifts in the sender observed in empirical studies. These results indicate that coherence can be a misleading measure of inter-areal interactions. This led us to develop a new measure of inter-areal interactions, which we refer to as Explained Power. We show that Explained Power maps directly to the signal transmitted by the sender filtered by the receiver, and thus provides a method to quantify the true signals transmitted between the sender and receiver. Together, these findings provide a model of changes in inter-areal coherence and Granger-causality as a result of frequency shifts.
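The inter-areal coherence at issue above is, empirically, the magnitude-squared coherence between a sender and a receiver signal, which standard spectral tools estimate directly. A minimal sketch using synthetic signals (a hypothetical 60 Hz "sender" rhythm partially transmitted to a "receiver"; not the study's network model):

```python
import numpy as np
from scipy import signal

fs = 1000.0                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)  # 10 s of data
rng = np.random.default_rng(1)

# "Sender": a gamma-band (60 Hz) oscillation buried in noise.
sender = np.sin(2 * np.pi * 60 * t) + rng.standard_normal(t.size)
# "Receiver": a delayed, attenuated copy of the sender plus its own noise.
receiver = 0.5 * np.roll(sender, 5) + rng.standard_normal(t.size)

# Magnitude-squared coherence via Welch's method.
f, coh = signal.coherence(sender, receiver, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(coh)]
print(f"coherence peaks near {peak_freq:.1f} Hz")
```

Shifting the sender's oscillation frequency in this toy setup moves the coherence peak with it, which is the basic observation the modeling work above builds on when asking whether an integrator or resonator receiver reproduces the empirical pattern.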
Affiliation(s)
- Jarrod Robert Dowdall
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Robarts Research Institute, Western University, London, Ontario, Canada.
| | - Marius Schneider
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, Nijmegen, Netherlands
| | - Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany; Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University, Nijmegen, Netherlands.
| |
|
17
|
Lahdelma I, Eerola T. Data-driven theory formulation or theory-driven data interpretation?: Comment on "Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses" by Di Stefano et al. Phys Life Rev 2023; 45:56-59. [PMID: 37148786 DOI: 10.1016/j.plrev.2023.04.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 04/20/2023] [Indexed: 05/08/2023]
|
18
|
Di Stefano N, Vuust P, Brattico E. Consonance and dissonance perception. A critical review of the historical sources, multidisciplinary findings, and main hypotheses. Phys Life Rev 2022; 43:273-304. [PMID: 36372030 DOI: 10.1016/j.plrev.2022.10.004] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 10/17/2022] [Indexed: 11/05/2022]
Abstract
Revealed more than two millennia ago by Pythagoras, consonance and dissonance (C/D) are foundational concepts in music theory, perception, and aesthetics. The search for the biological, acoustical, and cultural factors that affect C/D perception has resulted in descriptive accounts inspired by arithmetic, musicological, psychoacoustical or neurobiological frameworks without reaching a consensus. Here, we review the key historical sources and modern multidisciplinary findings on C/D and integrate them into three main hypotheses: the vocal similarity hypothesis (VSH), the psychocultural hypothesis (PH), and the sensorimotor hypothesis (SH). By illustrating the hypotheses-related findings, we highlight their major conceptual, methodological, and terminological shortcomings. Trying to provide a unitary framework for C/D understanding, we put together multidisciplinary research on human and animal vocalizations, which converges to suggest that auditory roughness is associated with distress/danger and, therefore, elicits defensive behavioral reactions and neural responses that indicate aversion. We therefore stress the primacy of vocality and roughness as key factors in the explanation of the C/D phenomenon, and we explore the (neuro)biological underpinnings of the attraction-aversion mechanisms that are triggered by C/D stimuli. Based on the reviewed evidence, while the aversive nature of dissonance appears solidly rooted in the multidisciplinary findings, the attractive nature of consonance remains a somewhat speculative claim that needs further investigation. Finally, we outline future directions for empirical research in C/D, especially regarding cross-modal and cross-cultural approaches.
Affiliation(s)
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies (ISTC), National Research Council of Italy (CNR), Via San Martino della Battaglia 44, 00185 Rome, Italy.
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark.
| | - Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Royal Academy of Music Aarhus/Aalborg (RAMA), 8000 Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, 70122 Bari, Italy.
| |
|
19
|
Di Stefano N, Spence C. Roughness perception: A multisensory/crossmodal perspective. Atten Percept Psychophys 2022; 84:2087-2114. [PMID: 36028614 PMCID: PMC9481510 DOI: 10.3758/s13414-022-02550-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2022] [Indexed: 11/08/2022]
Abstract
Roughness is a perceptual attribute typically associated with certain stimuli that are presented in one of the spatial senses. In auditory research, the term is typically used to describe the harsh effects that are induced by particular sound qualities (i.e., dissonance) and human/animal vocalizations (e.g., screams, distress cries). In the tactile domain, roughness is a crucial factor determining the perceptual features of a surface. The same feature can also be ascertained visually, by means of the extraction of pattern features that determine the haptic quality of surfaces, such as grain size and density. By contrast, the term roughness has rarely been applied to the description of those stimuli perceived via the chemical senses. In this review, we take a critical look at the putative meaning(s) of the term roughness, when used in both unisensory and multisensory contexts, in an attempt to answer two key questions: (1) Is the use of the term 'roughness' the same in each modality when considered individually? and (2) Do crossmodal correspondences involving roughness match distinct perceptual features or (at least on certain occasions) do they merely pick up on an amodal property? We start by examining the use of the term in the auditory domain. Next, we summarize the ways in which the term roughness has been used in the literature on tactile and visual perception, and in the domain of olfaction and gustation. Then, we move on to the crossmodal context, reviewing the literature on the perception of roughness in the audiovisual, audiotactile, and auditory-gustatory/olfactory domains. Finally, we highlight some limitations of the reviewed literature and we outline a number of key directions for future empirical research in roughness perception.
Affiliation(s)
- Nicola Di Stefano
- National Research Council, Institute for Cognitive Sciences and Technologies, Rome, Italy.
| | | |
|
20
|
Nakamura T, Dinh TH, Asai M, Nishimaru H, Matsumoto J, Setogawa T, Ichijo H, Honda S, Yamada H, Mihara T, Nishijo H. Characteristics of auditory steady-state responses to different click frequencies in awake intact macaques. BMC Neurosci 2022; 23:57. [PMID: 36180823 PMCID: PMC9524006 DOI: 10.1186/s12868-022-00741-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Accepted: 09/13/2022] [Indexed: 11/28/2022] Open
Abstract
Background Auditory steady-state responses (ASSRs) are periodic evoked responses to constant periodic auditory stimuli, such as click trains, and are suggested to be associated with higher cognitive functions in humans. Since ASSRs are disturbed in human psychiatric disorders, recording ASSRs from awake intact macaques would be beneficial to translational research as well as to the understanding of human brain function and its pathology. However, ASSRs have not been reported in awake macaques. Results Electroencephalograms (EEGs) were recorded from awake intact macaques while click trains at 20–83.3 Hz were binaurally presented. EEGs were quantified based on event-related spectral perturbation (ERSP) and inter-trial coherence (ITC), and ASSRs were significantly demonstrated in terms of ERSP and ITC in awake intact macaques. A comparison of ASSRs among different click train frequencies indicated that ASSRs were maximal at 83.3 Hz. Furthermore, analyses of laterality indices of ASSRs showed that no laterality dominance of ASSRs was observed. Conclusions The present results demonstrated ASSRs, comparable to those in humans, in awake intact macaques. However, there were some differences in ASSRs between macaques and humans: macaques showed maximal ASSR responses to click frequencies higher than the 40 Hz reported to elicit maximal responses in humans, and showed no dominant laterality of ASSRs under the electrode montage used in this study, in contrast to the right-hemisphere dominance observed in humans. Future ASSR studies using awake intact macaques should be aware of these differences, and possible factors underlying these differences are discussed. Supplementary Information The online version contains supplementary material available at 10.1186/s12868-022-00741-9.
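Of the two measures named above, inter-trial coherence (ITC) has a particularly compact definition: the magnitude of the mean unit phase vector across trials at a given time-frequency point. A minimal numpy sketch with synthetic phases (illustrative of the standard definition, not the study's analysis pipeline):

```python
import numpy as np

def inter_trial_coherence(tf_coeffs):
    """ITC from complex time-frequency coefficients.

    tf_coeffs: complex array with trials on axis 0, e.g. wavelet or STFT
    coefficients of each EEG trial at the click-train frequency.
    Returns values in [0, 1]; 1 means perfect phase locking across trials.
    """
    unit_phasors = tf_coeffs / np.abs(tf_coeffs)  # keep phase, discard amplitude
    return np.abs(unit_phasors.mean(axis=0))

# Sanity check: phase-locked trials vs. random-phase trials.
rng = np.random.default_rng(0)
locked = 2.0 * np.exp(1j * np.full(50, 0.3))            # identical phases
scrambled = np.exp(1j * rng.uniform(0, 2 * np.pi, 50))  # random phases
print(inter_trial_coherence(locked))     # perfect locking -> 1
print(inter_trial_coherence(scrambled))  # near 0
```

Note that the amplitude of each trial is deliberately discarded, which is what distinguishes ITC (phase consistency) from ERSP (power change).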
Affiliation(s)
- Tomoya Nakamura
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Department of Anatomy, Faculty of Medicine, University of Toyama, Toyama, 930-0194, Japan
- Trong Ha Dinh
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Department of Physiology, Vietnam Military Medical University, Hanoi, 100000, Vietnam
- Makoto Asai
- Candidate Discovery Science Labs, Drug Discovery Research, Astellas Pharma Inc., Tsukuba, Ibaraki, 305-8585, Japan
- Hiroshi Nishimaru
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama, 930-0194, Japan
- Jumpei Matsumoto
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama, 930-0194, Japan
- Tsuyoshi Setogawa
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama, 930-0194, Japan
- Hiroyuki Ichijo
- Department of Anatomy, Faculty of Medicine, University of Toyama, Toyama, 930-0194, Japan
- Sokichi Honda
- Candidate Discovery Science Labs, Drug Discovery Research, Astellas Pharma Inc., Tsukuba, Ibaraki, 305-8585, Japan
- Hiroshi Yamada
- Candidate Discovery Science Labs, Drug Discovery Research, Astellas Pharma Inc., Tsukuba, Ibaraki, 305-8585, Japan
- Takuma Mihara
- Candidate Discovery Science Labs, Drug Discovery Research, Astellas Pharma Inc., Tsukuba, Ibaraki, 305-8585, Japan
- Hisao Nishijo
- System Emotional Science, Faculty of Medicine, University of Toyama, Sugitani 2630, Toyama, 930-0194, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama, 930-0194, Japan
21
Abstract
Categorising voices is crucial for auditory-based social interactions. This Primer explores a PLOS Biology study that capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex.
Affiliation(s)
- Benjamin Morillon
- Aix Marseille University, Inserm, Institut de Neurosciences des Systèmes (INS), Marseille, France
- Luc H. Arnal
- Institut de l’Audition, Inserm unit 1120, Institut Pasteur, Paris, France
- Pascal Belin
- Aix Marseille University, CNRS, La Timone Neuroscience Institute (INT), Marseille, France
22
Savard MA, Sares AG, Coffey EBJ, Deroche MLD. Specificity of Affective Responses in Misophonia Depends on Trigger Identification. Front Neurosci 2022; 16:879583. PMID: 35692416; PMCID: PMC9179422; DOI: 10.3389/fnins.2022.879583. Received 19 Feb 2022; accepted 26 Apr 2022. Open access.
Abstract
Individuals with misophonia, a disorder involving extreme sound sensitivity, report significant anger, disgust, and anxiety in response to select but usually common sounds. While estimates of prevalence within certain populations such as college students have approached 20%, it is currently unknown what percentage of people experience misophonic responses to such “trigger” sounds. Furthermore, there is little understanding of the fundamental processes involved. In this study, we aimed to characterize the distribution of misophonic symptoms in a general population, as well as clarify whether the aversive emotional responses to trigger sounds are partly caused by acoustic salience of the sound itself, or by recognition of the sound. Using multi-talker babble as masking noise to decrease participants' ability to identify sounds, we assessed how identification of common trigger sounds related to subjective emotional responses in 300 adults who participated in an online study. Participants were asked to listen to and identify neutral, unpleasant and trigger sounds embedded in different levels of the masking noise (signal-to-noise ratios: −30, −20, −10, 0, +10 dB), and then to evaluate their subjective judgment of the sounds (pleasantness) and emotional reactions to them (anxiety, anger, and disgust). Using participants' scores on a scale quantifying misophonia sensitivity, we selected the top and bottom 20% scorers from the distribution to form a Most-Misophonic subgroup (N = 66) and Least-Misophonic subgroup (N = 68). Both groups were better at identifying triggers than unpleasant sounds, which themselves were identified better than neutral sounds. Both groups also recognized the aversiveness of the unpleasant and trigger sounds, yet for the Most-Misophonic group, there was a greater increase in subjective ratings of negative emotions once the sounds became identifiable, especially for trigger sounds. These results highlight the heightened salience of trigger sounds, but furthermore suggest that learning and higher-order evaluation of sounds play an important role in misophonia.
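Embedding target sounds in masking babble at a fixed signal-to-noise ratio, as in the levels listed above, comes down to scaling the masker against the target's RMS level. A generic sketch with synthetic signals (the tone and Gaussian noise stand in for the study's actual recordings):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Return target + masker, with the masker scaled so that the
    target-to-masker RMS ratio equals snr_db (in dB)."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(masker) * 10 ** (snr_db / 20))
    return target + gain * masker

fs = 16000
tone = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # stand-in "trigger" sound
rng = np.random.default_rng(0)
babble = rng.standard_normal(fs)                     # stand-in masking noise

# The five masking levels used in the study
mixtures = {snr: mix_at_snr(tone, babble, snr) for snr in (-30, -20, -10, 0, 10)}
```

At −30 dB the masker is scaled to be roughly 31.6 times the target's RMS, which is why identification collapses at the most negative SNRs.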
Affiliation(s)
- Marie-Anick Savard
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Correspondence: Marie-Anick Savard
- Anastasia G. Sares
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Emily B. J. Coffey
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
- Mickael L. D. Deroche
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music (CRBLM), Montreal, QC, Canada
23
Kothinti SR, Huang N, Elhilali M. Auditory salience using natural scenes: An online study. J Acoust Soc Am 2021; 150:2952. PMID: 34717500; PMCID: PMC8528551; DOI: 10.1121/10.0006750. Received 15 Apr 2021; revised 13 Aug 2021; accepted 29 Sep 2021.
Abstract
Salience is the quality of a sensory signal that attracts involuntary attention in humans. While it primarily reflects conspicuous physical attributes of a scene, our understanding of processes underlying what makes a certain object or event salient remains limited. In the vision literature, experimental results, theoretical accounts, and large amounts of eye-tracking data using rich stimuli have shed light on some of the underpinnings of visual salience in the brain. In contrast, studies of auditory salience have lagged behind due to limitations in both experimental designs and stimulus datasets used to probe the question of salience in complex everyday soundscapes. In this work, we deploy an online platform to study salience using a dichotic listening paradigm with natural auditory stimuli. The study validates crowd-sourcing as a reliable platform to collect behavioral responses to auditory salience by comparing experimental outcomes to findings acquired in a controlled laboratory setting. A model-based analysis demonstrates the benefits of extending behavioral measures of salience to a broader selection of auditory scenes and larger pools of subjects. Overall, this effort extends our current knowledge of auditory salience in everyday soundscapes and highlights the limitations of low-level acoustic attributes in capturing the richness of natural soundscapes.
Affiliation(s)
- Sandeep Reddy Kothinti
- Department of Electrical and Computer Engineering, Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, Maryland 21218, USA
- Nicholas Huang
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, Maryland 21218, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, Maryland 21218, USA
24
Quon RJ, Leslie GA, Camp EJ, Meisenhelter S, Steimel SA, Song Y, Ettinger AB, Bujarski KA, Casey MA, Jobst BC. 40-Hz auditory stimulation for intracranial interictal activity: A pilot study. Acta Neurol Scand 2021; 144:192-201. PMID: 33893999; DOI: 10.1111/ane.13437. Received 13 Jan 2021; revised 8 Mar 2021; accepted 11 Apr 2021.
Abstract
OBJECTIVES: To study the effects of auditory stimuli on interictal epileptiform discharge (IED) rates evident with intracranial monitoring. MATERIALS AND METHODS: Eight subjects undergoing intracranial EEG monitoring for refractory epilepsy participated in this study. Auditory stimuli consisted of a 40-Hz tone, a 440-Hz tone modulated by a 40-Hz sinusoid, Mozart's Sonata for Two Pianos in D Major (K448), and K448 modulated by a 40-Hz sinusoid (modK448). Subjects were stratified into high- and low-IED rate groups defined by baseline IED rates. Subject-level analyses identified individual responses to auditory stimuli, discerned specific brain regions with significant reductions in IED rates, and examined the influence auditory stimuli had on whole-brain sigma power (12-16 Hz). RESULTS: All subjects in the high baseline IED group had a significant 35.25% average reduction in IEDs during the 40-Hz tone; subject-level reductions localized to mesial and lateral temporal regions. Exposure to Mozart K448 showed significant yet less homogeneous responses. A post hoc analysis demonstrated two of the four subjects with positive IED responses had increased whole-brain power at the sigma frequency band during 40-Hz stimulation. CONCLUSIONS: Our study is the first to evaluate the relationship between 40-Hz auditory stimulation and IED rates in refractory epilepsy. We reveal that 40-Hz auditory stimuli may be a noninvasive adjunctive intervention to reduce IED burden. Our pilot study supports the future examination of 40-Hz auditory stimuli in a larger population of subjects with high baseline IED rates.
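The whole-brain sigma power (12-16 Hz) examined above is conventionally computed as the mean spectral power within that band. A minimal sketch on synthetic data (the FFT-based estimator and the toy 14-Hz component are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def band_power(signal, fs, lo=12.0, hi=16.0):
    """Mean power in the [lo, hi] Hz band (sigma band, 12-16 Hz, by default)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size  # periodogram
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

fs = 256
t = np.arange(30 * fs) / fs                       # 30 s of synthetic "EEG"
rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)
eeg = noise + 2.0 * np.sin(2 * np.pi * 14.0 * t)  # add a 14-Hz (sigma) component

# The signal carrying a 14-Hz component has far higher sigma-band power
print(band_power(eeg, fs) > 10 * band_power(noise, fs))  # → True
```

In practice a Welch-style averaged spectrum over shorter windows is usually preferred over a single periodogram for noisy clinical recordings.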
Affiliation(s)
- Robert J. Quon
- Department of Neurology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Grace A. Leslie
- Department of Music, Georgia Institute of Technology, Atlanta, GA, USA
- Edward J. Camp
- Department of Neurology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Sarah A. Steimel
- Department of Neurology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Yinchen Song
- Department of Neurology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Department of Neurology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Krzysztof A. Bujarski
- Department of Neurology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Department of Neurology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
- Michael A. Casey
- Department of Music, Dartmouth College, Hanover, NH, USA
- Department of Computer Science, Dartmouth College, Hanover, NH, USA
- Barbara C. Jobst
- Department of Neurology, Geisel School of Medicine at Dartmouth, Hanover, NH, USA
- Department of Neurology, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
25
Armitage J, Lahdelma I, Eerola T. Automatic responses to musical intervals: Contrasts in acoustic roughness predict affective priming in Western listeners. J Acoust Soc Am 2021; 150:551. PMID: 34340511; DOI: 10.1121/10.0005623. Received 4 Feb 2021; accepted 24 Jun 2021.
Abstract
The aim of the present study is to determine which acoustic components of harmonic consonance and dissonance influence automatic responses in a simple cognitive task. In a series of affective priming experiments, eight pairs of musical intervals were used to measure the influence of acoustic roughness and harmonicity on response times in a word-classification task conducted online. Interval pairs that contrasted in roughness induced a greater degree of affective priming than pairs that did not contrast in terms of their roughness. Contrasts in harmonicity did not induce affective priming. A follow-up experiment used detuned intervals to create higher levels of roughness contrasts. However, the detuning did not lead to any further increase in the size of the priming effect. More detailed analysis suggests that the presence of priming in intervals is binary: in the negative primes that create congruency effects, the intervals' fundamentals and overtones coincide within the same equivalent rectangular bandwidth (i.e., the minor and major seconds). Intervals that fall outside this equivalent rectangular bandwidth do not elicit priming effects, regardless of their dissonance or negative affect. The results are discussed in the context of recent developments in consonance/dissonance research and vocal similarity.
Affiliation(s)
- James Armitage
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Imre Lahdelma
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, DH1 3RL, United Kingdom
26
Holz N, Larrouy-Maestri P, Poeppel D. The paradoxical role of emotional intensity in the perception of vocal affect. Sci Rep 2021; 11:9663. PMID: 33958630; PMCID: PMC8102532; DOI: 10.1038/s41598-021-88431-0. Received 5 Jan 2021; accepted 9 Apr 2021. Open access.
Abstract
Vocalizations including laughter, cries, moans, or screams constitute a potent source of information about the affective states of others. It is typically conjectured that the higher the intensity of the expressed emotion, the better the classification of affective information. However, attempts to map the relation between affective intensity and inferred meaning are controversial. Based on a newly developed stimulus database of carefully validated non-speech expressions ranging across the entire intensity spectrum from low to peak, we show that the intuition is false. Based on three experiments (N = 90), we demonstrate that intensity in fact has a paradoxical role. Participants were asked to rate and classify the authenticity, intensity and emotion, as well as valence and arousal of the wide range of vocalizations. Listeners are clearly able to infer expressed intensity and arousal; in contrast, and surprisingly, emotion category and valence have a perceptual sweet spot: moderate and strong emotions are clearly categorized, but peak emotions are maximally ambiguous. This finding, which converges with related observations from visual experiments, raises interesting theoretical challenges for the emotion communication literature.
Affiliation(s)
- N Holz
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- P Larrouy-Maestri
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany
- D Poeppel
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
- Max Planck NYU Center for Language, Music, and Emotion, Frankfurt/M, Germany
- Department of Psychology, New York University, New York, NY, USA
27
Farahani ED, Wouters J, van Wieringen A. Brain mapping of auditory steady-state responses: A broad view of cortical and subcortical sources. Hum Brain Mapp 2021; 42:780-796. PMID: 33166050; PMCID: PMC7814770; DOI: 10.1002/hbm.25262. Received 2 Jul 2020; revised 13 Oct 2020; accepted 15 Oct 2020. Open access.
Abstract
Auditory steady-state responses (ASSRs) are evoked brain responses to modulated or repetitive acoustic stimuli. Investigating the underlying neural generators of ASSRs is important to gain in-depth insight into the mechanisms of auditory temporal processing. The aim of this study is to reconstruct an extensive range of neural generators, that is, cortical and subcortical, as well as primary and non-primary ones. This extensive overview of neural generators provides an appropriate basis for studying functional connectivity. To this end, a minimum-norm imaging (MNI) technique is employed. We also present a novel extension to MNI which facilitates source analysis by quantifying the ASSR for each dipole. Results demonstrate that the proposed MNI approach is successful in reconstructing sources located both within (primary) and outside (non-primary) of the auditory cortex (AC). Primary sources are detected in different stimulation conditions (four modulation frequencies and two sides of stimulation), thereby demonstrating the robustness of the approach. This study is one of the first investigations to identify non-primary sources. Moreover, we show that the MNI approach is also capable of reconstructing the subcortical activities of ASSRs. Finally, the results obtained using the MNI approach outperform the group-independent component analysis method on the same data, in terms of detection of sources in the AC, reconstructing the subcortical activities and reducing computational load.
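The minimum-norm imaging technique described above resolves the underdetermined sensor-to-source problem by choosing the source distribution with the smallest norm that still explains the measured data. The core regularized formula, shown here on a toy leadfield (the dimensions, random leadfield, and single active dipole are invented for illustration and bear no relation to the study's head model):

```python
import numpy as np

def minimum_norm_estimate(leadfield, sensor_data, lam=1e-6):
    """Regularized minimum-norm estimate:
    s_hat = L.T @ (L @ L.T + lam * I)^(-1) @ y
    """
    n_sensors = leadfield.shape[0]
    gram = leadfield @ leadfield.T + lam * np.eye(n_sensors)
    return leadfield.T @ np.linalg.solve(gram, sensor_data)

rng = np.random.default_rng(2)
L = rng.standard_normal((64, 200))  # toy leadfield: 64 sensors, 200 dipoles
s_true = np.zeros(200)
s_true[123] = 1.0                   # one active source
y = L @ s_true                      # noiseless sensor measurement

s_hat = minimum_norm_estimate(L, y)
# Minimum-norm solutions reproduce the measured sensor data almost exactly
print(np.allclose(L @ s_hat, y, atol=1e-6))  # → True
```

Because the solution is smeared across many dipoles, per-dipole quantification of the ASSR (as in the authors' extension) is needed to decide which reconstructed sources are genuinely responding at the modulation frequency.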
Affiliation(s)
- Ehsan Darestani Farahani
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Jan Wouters
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
- Astrid van Wieringen
- Research Group Experimental ORL, Department of Neurosciences, Katholieke Universiteit Leuven, Leuven, Belgium
28
Williams ZJ, He JL, Cascio CJ, Woynaroski TG. A review of decreased sound tolerance in autism: Definitions, phenomenology, and potential mechanisms. Neurosci Biobehav Rev 2021; 121:1-17. PMID: 33285160; PMCID: PMC7855558; DOI: 10.1016/j.neubiorev.2020.11.030. Received 11 Sep 2020; revised 11 Nov 2020; accepted 12 Nov 2020.
Abstract
Atypical behavioral responses to environmental sounds are common in autistic children and adults, with 50-70% of this population exhibiting decreased sound tolerance (DST) at some point in their lives. This symptom is a source of significant distress and impairment across the lifespan, contributing to anxiety, challenging behaviors, reduced community participation, and school/workplace difficulties. However, relatively little is known about its phenomenology or neurocognitive underpinnings. The present article synthesizes a large body of literature on the phenomenology and pathophysiology of DST-related conditions to generate a comprehensive theoretical account of DST in autism. Notably, we argue against conceptualizing DST as a unified construct, suggesting that it be separated into three phenomenologically distinct conditions: hyperacusis (the perception of everyday sounds as excessively loud or painful), misophonia (an acquired aversive reaction to specific sounds), and phonophobia (a specific phobia of sound), each responsible for a portion of observed DST behaviors. We further elaborate our framework by proposing preliminary neurocognitive models of hyperacusis, misophonia, and phonophobia that incorporate neurophysiologic findings from studies of autism.
Affiliation(s)
- Zachary J Williams
- Medical Scientist Training Program, Vanderbilt University School of Medicine, 221 Eskind Biomedical Library and Learning Center, 2209 Garland Ave., Nashville, TN, 37240, United States; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, Room 8310, Nashville, TN, 37232, United States; Vanderbilt Brain Institute, Vanderbilt University, 7203 Medical Research Building III, 465 21st Avenue South, Nashville, TN, 37232, United States; Frist Center for Autism and Innovation, Vanderbilt University, 2414 Highland Avenue, Suite 115, Nashville, TN, 37212, United States
- Jason L He
- Department of Forensic and Neurodevelopmental Sciences, Sackler Institute for Translational Neurodevelopment, Institute of Psychiatry, Psychology and Neuroscience, King's College London, Strand Building, Strand Campus, Strand, London, WC2R 2LS, United Kingdom
- Carissa J Cascio
- Vanderbilt Brain Institute, Vanderbilt University, 7203 Medical Research Building III, 465 21st Avenue South, Nashville, TN, 37232, United States; Frist Center for Autism and Innovation, Vanderbilt University, 2414 Highland Avenue, Suite 115, Nashville, TN, 37212, United States; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, 2254 Village at Vanderbilt, 1500 21st Ave South, Nashville, TN, 37212, United States; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, United States
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Medical Center East, Room 8310, Nashville, TN, 37232, United States; Vanderbilt Brain Institute, Vanderbilt University, 7203 Medical Research Building III, 465 21st Avenue South, Nashville, TN, 37232, United States; Frist Center for Autism and Innovation, Vanderbilt University, 2414 Highland Avenue, Suite 115, Nashville, TN, 37212, United States; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, United States
29
Abstract
Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
30
Taffou M, Suied C, Viaud-Delmon I. Auditory roughness elicits defense reactions. Sci Rep 2021; 11:956. PMID: 33441758; PMCID: PMC7806762; DOI: 10.1038/s41598-020-79767-0. Received 30 Apr 2020; accepted 9 Dec 2020. Open access.
Abstract
Auditory roughness elicits aversion, and higher activation in cerebral areas involved in threat processing, but its link with defensive behavior is unknown. Defensive behaviors are triggered by intrusions into the space immediately surrounding the body, called peripersonal space (PPS). Integrating multisensory information in PPS is crucial to assure the protection of the body. Here, we assessed the behavioral effects of roughness on auditory-tactile integration, which reflects the monitoring of this multisensory region of space. Healthy human participants had to detect as fast as possible a tactile stimulation delivered on their hand while an irrelevant sound was approaching them from the rear hemifield. The sound was either a simple harmonic sound or a rough sound, processed through binaural rendering so that the virtual sound source was looming towards participants. The rough sound speeded tactile reaction times at a farther distance from the body than the non-rough sound. This indicates that PPS, as estimated here via auditory-tactile integration, is sensitive to auditory roughness. Auditory roughness modifies the behavioral relevance of simple auditory events in relation to the body. Even without emotional or social contextual information, auditory roughness constitutes an innate threat cue that elicits defensive responses.
Affiliation(s)
- Marine Taffou
- Institut de Recherche Biomédicale des Armées, 91220, Brétigny-sur-Orge, France
- Clara Suied
- Institut de Recherche Biomédicale des Armées, 91220, Brétigny-sur-Orge, France
- Isabelle Viaud-Delmon
- CNRS, Ircam, Sorbonne Université, Ministère de la Culture, Sciences et Technologies de la Musique et du son, STMS, 75004, Paris, France
31
Anikin A, Pisanski K, Reby D. Do nonlinear vocal phenomena signal negative valence or high emotion intensity? R Soc Open Sci 2020; 7:201306. PMID: 33489278; PMCID: PMC7813245; DOI: 10.1098/rsos.201306. Received 23 Jul 2020; accepted 5 Nov 2020.
Abstract
Nonlinear vocal phenomena (NLPs) are commonly reported in animal calls and, increasingly, in human vocalizations. These perceptually harsh and chaotic voice features function to attract attention and convey urgency, but they may also signal aversive states. To test whether NLPs enhance the perception of negative affect or only signal high arousal, we added subharmonics, sidebands or deterministic chaos to 48 synthetic human nonverbal vocalizations of ambiguous valence: gasps of fright/surprise, moans of pain/pleasure, roars of frustration/achievement and screams of fear/delight. In playback experiments (N = 900 listeners), we compared their perceived valence and emotion intensity in positive or negative contexts or in the absence of any contextual cues. Primarily, NLPs increased the perceived aversiveness of vocalizations regardless of context. To a smaller extent, they also increased the perceived emotion intensity, particularly when the context was negative or absent. However, NLPs also enhanced the perceived intensity of roars of achievement, indicating that their effects can generalize to positive emotions. In sum, a harsh voice with NLPs strongly tips the balance towards negative emotions when a vocalization is ambiguous, but with sufficiently informative contextual cues, NLPs may be re-evaluated as expressions of intense positive affect, underlining the importance of context in nonverbal communication.
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Lund University, Lund, Sweden
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- Author for correspondence: Andrey Anikin
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle (ENES) / Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
32
Mégevand P, Mercier MR, Groppe DM, Zion Golumbic E, Mesgarani N, Beauchamp MS, Schroeder CE, Mehta AD. Crossmodal Phase Reset and Evoked Responses Provide Complementary Mechanisms for the Influence of Visual Speech in Auditory Cortex. J Neurosci 2020; 40:8530-8542. PMID: 33023923; PMCID: PMC7605423; DOI: 10.1523/jneurosci.0555-20.2020. Received 6 Mar 2020; revised 27 Jul 2020; accepted 31 Aug 2020. Open access.
Abstract
Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs. SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
Affiliation(s)
- Pierre Mégevand
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1211 Geneva, Switzerland
- Manuel R Mercier
- Department of Neurology, Montefiore Medical Center, Bronx, New York 10467
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Institut de Neurosciences des Systèmes, Aix Marseille University, INSERM, 13005 Marseille, France
- David M Groppe
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
- The Krembil Neuroscience Centre, University Health Network, Toronto, Ontario M5T 1M8, Canada
- Elana Zion Golumbic
- The Gonda Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
- Michael S Beauchamp
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030
- Charles E Schroeder
- Nathan S. Kline Institute, Orangeburg, New York 10962
- Department of Psychiatry, Columbia University, New York, New York 10032
- Ashesh D Mehta
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
33
Postal O, Dupont T, Bakay W, Dominique N, Petit C, Michalski N, Gourévitch B. Spontaneous Mouse Behavior in Presence of Dissonance and Acoustic Roughness. Front Behav Neurosci 2020; 14:588834. [PMID: 33132864 PMCID: PMC7578920 DOI: 10.3389/fnbeh.2020.588834] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 09/08/2020] [Indexed: 11/13/2022] Open
Abstract
According to a novel hypothesis (Arnal et al., 2015, Current Biology 25:2051-2056), auditory roughness, i.e., temporal envelope modulations between 30 and 150 Hz, is present in both natural and artificial human alarm signals and boosts the detection of these alarms in various tasks. These results also shed new light on the unpleasantness of dissonant sounds to humans, which may stem from the high level of roughness present in such sounds. However, it is not clear whether this hypothesis also applies to other species, such as rodents. In particular, whether consonant/dissonant chords, and auditory roughness specifically, can trigger unpleasant sensations in mice remains unknown. Using an autonomous behavioral system, which allows mouse behavior to be monitored over a period of weeks, we observed that C57BL/6J mice did not show any preference for consonant chords. In addition, we found that mice preferred rough sounds over sounds with temporal-envelope amplitude modulations outside the "rough" range. These results suggest that some emotional features carried by the acoustic temporal envelope are likely to be species-specific.
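As a concrete illustration of the acoustic definition used above, a "rough" stimulus can be synthesized by amplitude-modulating a pure tone at a rate inside the 30-150 Hz band. The following is a minimal Python sketch; the carrier and modulation frequencies are illustrative values, not parameters taken from the study:

```python
import numpy as np

def rough_tone(carrier_hz=500.0, am_hz=70.0, dur_s=1.0, fs=44100, depth=1.0):
    """Sinusoidal carrier with sinusoidal amplitude modulation.
    AM rates in the ~30-150 Hz band are perceived as 'rough'."""
    t = np.arange(int(dur_s * fs)) / fs
    # Temporal envelope oscillating at the modulation rate
    envelope = 1.0 + depth * np.sin(2 * np.pi * am_hz * t)
    # Normalize so the waveform stays within [-1, 1]
    return envelope * np.sin(2 * np.pi * carrier_hz * t) / (1.0 + depth)
```

Setting `am_hz` well below the rough range (e.g., ~10 Hz) produces a slowly fluttering, non-rough tone, so the same generator can also sketch comparison sounds with envelope modulations outside the "rough" range.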
Affiliation(s)
- Olivier Postal
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- Sorbonne Université, Collège Doctoral, Paris, France
- Typhaine Dupont
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- Warren Bakay
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- Noémi Dominique
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- Christine Petit
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- Syndrome de Usher et Autres Atteintes Rétino-Cochléaires, Institut de la Vision, Paris, France
- Collège de France, Paris, France
- Boris Gourévitch
- Institut de l’Audition, Institut Pasteur, INSERM, Paris, France
- CNRS, Paris, France
34
A novel approach to investigate subcortical and cortical sensitivity to temporal structure simultaneously. Hear Res 2020; 398:108080. [PMID: 33038827 DOI: 10.1016/j.heares.2020.108080] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/23/2020] [Revised: 09/11/2020] [Accepted: 09/20/2020] [Indexed: 11/24/2022]
Abstract
Hearing loss is associated with changes at the peripheral, subcortical, and cortical stages of the auditory system. Research often focuses on these stages in isolation, but peripheral damage has cascading effects on central processing, and the stages are interconnected through extensive feedforward and feedback projections. Accordingly, assessment of the entire auditory system is needed to understand auditory pathology. Using a novel stimulus paired with electroencephalography in young, normal-hearing adults, we assessed neural function at multiple stages of the auditory pathway simultaneously. We employed click trains that repeatedly accelerate and then decelerate (3.5 Hz click-rate modulation), introducing varying inter-click intervals (4 to 40 ms). We measured the amplitude of cortical potentials, as well as the latencies and amplitudes of Waves III and V of the auditory brainstem response (ABR), to clicks as a function of the preceding inter-click interval. This allowed us to assess cortical processing of the click-rate modulation, as well as adaptation and neural recovery time in subcortical structures (probably the cochlear nuclei and inferior colliculi). Subcortical adaptation to inter-click intervals was reflected in longer latencies. Cortical responses to the 3.5 Hz modulation included phase-locking, probably originating in auditory cortex, and sustained activity likely originating in higher-level cortices. We did not observe any correlations between subcortical and cortical responses. By recording neural responses from different stages of the auditory system simultaneously, we can study functional relationships among levels of the auditory system, which may provide a new and helpful window onto hearing and hearing impairment.
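The click-rate-modulated stimulus described above can be sketched in a few lines of Python. This is a hedged illustration: the abstract specifies only the 3.5 Hz modulation rate and the 4-40 ms range of inter-click intervals, so the sampling rate and the sinusoidal shape of the modulation are assumptions:

```python
import numpy as np

def click_train(duration_s=2.0, fs=48000, mod_hz=3.5,
                ici_min_ms=4.0, ici_max_ms=40.0):
    """Click train whose inter-click interval (ICI) sweeps sinusoidally
    between ici_min_ms and ici_max_ms at mod_hz, so the click rate
    repeatedly accelerates and then decelerates."""
    times = []
    t = 0.0
    while t < duration_s:
        times.append(t)
        # ICI follows a raised cosine at the modulation rate
        phase = 2 * np.pi * mod_hz * t
        ici_ms = ici_min_ms + (ici_max_ms - ici_min_ms) * 0.5 * (1 + np.cos(phase))
        t += ici_ms / 1000.0
    # Render each click as a single-sample unit impulse
    wave = np.zeros(int(duration_s * fs))
    for ct in times:
        wave[int(ct * fs)] = 1.0
    return np.array(times), wave
```

Because every ICI stays within 4-40 ms, the instantaneous click rate cycles between 25 and 250 clicks/s at the 3.5 Hz modulation frequency.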
35
Wang M, Li G, Jiang S, Wei Z, Hu J, Chen L, Zhang D. Enhancing gesture decoding performance using signals from posterior parietal cortex: a stereo-electroencephalography (SEEG) study. J Neural Eng 2020; 17:046043. [DOI: 10.1088/1741-2552/ab9987] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
36
Weissman YA, Demartsev V, Ilany A, Barocas A, Bar-Ziv E, Koren L, Geffen E. A crescendo in the inner structure of snorts: a reflection of increasing arousal in rock hyrax songs? Anim Behav 2020. [DOI: 10.1016/j.anbehav.2020.06.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
37
Beaurenaut M, Tokarski E, Dezecache G, Grèzes J. The 'Threat of Scream' paradigm: a tool for studying sustained physiological and subjective anxiety. Sci Rep 2020; 10:12496. [PMID: 32719491 PMCID: PMC7385655 DOI: 10.1038/s41598-020-68889-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2019] [Accepted: 07/02/2020] [Indexed: 12/22/2022] Open
Abstract
Progress in understanding the emergence of pathological anxiety depends on the availability of paradigms that induce anxiety in a simple, consistent, and sustained manner. The Threat-of-Shock paradigm has typically been used to elicit anxiety, but it poses ethical issues when testing vulnerable populations. Moreover, it is not clear from past studies whether anxiety can be sustained in longer experiments. Here, we present empirical support for an alternative approach, the 'Threat-of-Scream' paradigm, in which shocks are replaced by screams. In two studies, participants were repeatedly exposed to blocks in which they were at risk of hearing aversive screams at any time vs. blocks in which they were safe from screams. In contrast to previous 'Threat-of-Scream' studies, we ensured that our screams were neither harmful nor intolerable by presenting them at low intensity. We found higher subjective reports of anxiety, higher skin conductance levels, and a positive correlation between the two measures in threat blocks compared to safe blocks. These results were reproducible, and we found no significant change over time. The unpredictable delivery of low-intensity screams could become an essential part of the psychology toolkit, particularly for investigating the impact of anxiety on a variety of cognitive functions and populations.
Affiliation(s)
- Morgan Beaurenaut
- Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
- Elliot Tokarski
- Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
- Guillaume Dezecache
- Department of Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Université Clermont Auvergne, CNRS, LAPSCO, Clermont-Ferrand, France
- Julie Grèzes
- Laboratoire de Neurosciences Cognitives et Computationnelles, ENS, PSL Research University, INSERM, Département d'études Cognitives, Paris, France
38
Trevor C, Arnal LH, Frühholz S. Terrifying film music mimics alarming acoustic feature of human screams. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:EL540. [PMID: 32611175 DOI: 10.1121/10.0001459] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 06/02/2020] [Indexed: 06/11/2023]
Abstract
One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses of four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. The results support the hypothesis that a key acoustic feature of human screams (roughness) is imitated by scream-like music and may signal danger through both music and the voice.
Affiliation(s)
- Caitlyn Trevor
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland
- Luc H Arnal
- Department of Fundamental Neuroscience, University of Geneva, Biotech Campus, Geneva 7, CH-1202, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14, 8050 Zurich, Switzerland
39
Hechavarría JC, Jerome Beetz M, García-Rosales F, Kössl M. Bats distress vocalizations carry fast amplitude modulations that could represent an acoustic correlate of roughness. Sci Rep 2020; 10:7332. [PMID: 32355293 PMCID: PMC7192923 DOI: 10.1038/s41598-020-64323-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Accepted: 03/04/2020] [Indexed: 02/07/2023] Open
Abstract
Communication sounds are ubiquitous in the animal kingdom, where they advertise physiological states and/or socio-contextual scenarios. Human screams, for example, are typically uttered in fearful contexts and have a distinctive feature termed "roughness", which denotes amplitude fluctuations at rates of 30-150 Hz. In this article, we report that the occurrence of fast acoustic periodicities in harsh-sounding vocalizations is not unique to humans. A roughness-like structure is also present in vocalizations emitted by bats (species Carollia perspicillata) in distressful contexts. We report that 47.7% of distress calls produced by bats carry amplitude fluctuations at rates of ~1.7 kHz (>10 times faster than the temporal modulations found in human screams). In bats, rough-like vocalizations entrain brain potentials and are more effective at accelerating the bats' heart rate than slowly amplitude-modulated sounds. Our results are consistent with a putative role of fast amplitude modulations (roughness, in humans) in grabbing the listener's attention when the emitter is in distressful, potentially dangerous contexts.
Affiliation(s)
- Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
- Zoology II Emmy-Noether Animal Navigation Group, Biocenter, University of Würzburg, Würzburg, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
40
Li G, Jiang S, Chen C, Brunner P, Wu Z, Schalk G, Chen L, Zhang D. iEEGview: an open-source multifunction GUI-based Matlab toolbox for localization and visualization of human intracranial electrodes. J Neural Eng 2019; 17:016016. [PMID: 31658449 DOI: 10.1088/1741-2552/ab51a5] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
OBJECTIVE The precise localization of intracranial electrodes is a fundamental step in the analysis of intracranial electroencephalography (iEEG) recordings across many fields. As iEEG studies in human neuroscience proliferate, the localization process must meet higher requirements, creating demand for more integrated, easier-to-operate, and versatile tools for electrode localization and visualization. To address this need, we developed an easy-to-use, multifunctional toolbox called iEEGview for the localization and visualization of human intracranial electrodes. APPROACH iEEGview is written in Matlab and implemented with a GUI. Taking only a pre-implant MRI and a post-implant CT image as input, users can run the full localization pipeline directly from the GUI, including brain segmentation, image co-registration, electrode reconstruction, anatomical information identification, activation map generation, and electrode projection from native brain space into common brain space for group analysis. Additionally, iEEGview implements methods for brain-shift correction, visual inspection of electrode locations on MRI slices, and computation of a certainty index for anatomical label assignment. MAIN RESULTS All functions of iEEGview worked reliably and were tested on images from 28 human subjects implanted with depth and/or subdural electrodes. SIGNIFICANCE iEEGview is the first public Matlab GUI-based software for intracranial electrode localization and visualization that integrates these capabilities within one pipeline. It streamlines the localization process, provides rich localization information for further analysis, and offers solutions to the technical challenges noted above. It can therefore serve as a useful tool for facilitating iEEG studies.
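The pipeline the abstract enumerates can be summarized as an ordered sequence of stages. The sketch below is purely illustrative: the function and stage names are hypothetical placeholders, not iEEGview's actual Matlab API, and each stage merely records itself so the data flow is explicit:

```python
# Hypothetical skeleton of the localization workflow described in the abstract.
# Stage names mirror the abstract's wording; none of these are real iEEGview calls.

def run_localization(pre_mri, post_ct, log):
    def stage(name, *inputs):
        log.append(name)                      # stand-in for the real processing step
        return {"stage": name, "inputs": inputs}

    seg    = stage("brain_segmentation", pre_mri)
    coreg  = stage("coregister_ct_to_mri", post_ct, seg)
    elecs  = stage("reconstruct_electrodes", coreg)
    elecs  = stage("brain_shift_correction", elecs)
    labels = stage("assign_anatomical_labels", elecs, seg)  # with a certainty index
    group  = stage("project_to_common_space", elecs)        # for group analysis
    act    = stage("generate_activation_map", elecs)
    return labels, group, act
```

The ordering encodes the key dependency: electrodes can only be labeled and projected after the CT has been co-registered to the segmented MRI and brain shift has been corrected.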
Affiliation(s)
- Guangye Li
- State Key Laboratory of Mechanical Systems and Vibrations, Institute of Robotics, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- National Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, United States of America