1
Retsa C, Turpin H, Geiser E, Ansermet F, Müller-Nix C, Murray MM. Longstanding Auditory Sensory and Semantic Differences in Preterm Born Children. Brain Topogr 2024;37:536-551. PMID: 38010487; PMCID: PMC11199270; DOI: 10.1007/s10548-023-01022-2.
Abstract
More than 10% of births are preterm, and the long-term consequences for sensory and semantic processing of non-linguistic information remain poorly understood. Seventeen very preterm-born children (born at < 33 weeks gestational age) and 15 full-term controls were tested at 10 years of age with an auditory object recognition task while 64-channel auditory evoked potentials (AEPs) were recorded. Sounds consisted of living objects (animal and human vocalizations) and manmade objects (e.g., household objects, instruments, and tools). Despite similar recognition behavior, AEPs strikingly differed between full-term and preterm children. Starting at 50 ms post-stimulus onset, AEPs from preterm children differed topographically from those of their full-term counterparts. Over the 108-224 ms post-stimulus period, full-term children showed stronger AEPs in response to living objects, whereas preterm-born children showed the reverse pattern, i.e., stronger AEPs in response to manmade objects. Differential brain activity between semantic categories could reliably classify children according to their preterm status. Moreover, this opposing pattern of differential responses to semantic categories of sounds was also observed in source estimations within a network of occipital, temporal, and frontal regions. This study highlights how early life experience, in terms of preterm birth, shapes sensory and object processing later in life.
Affiliation(s)
- Chrysa Retsa
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Hélène Turpin
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Eveline Geiser
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- François Ansermet
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Department of Child and Adolescent Psychiatry, University Hospital, Geneva, Switzerland
- Carole Müller-Nix
- University Service of Child and Adolescent Psychiatry, University Hospital of Lausanne and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
- The Radiology Department, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
2
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023;33:6257-6272. PMID: 36562994; PMCID: PMC10183742; DOI: 10.1093/cercor/bhac501.
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that the perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and on manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception, as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to the sequences rated most and least musical and to the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA-manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as to models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
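The Representational Similarity Analysis step described in this abstract, correlating each ROI's functional profile with model dissimilarity structures, can be sketched roughly as follows. This is a minimal generic-RSA illustration, not the authors' pipeline; the correlation-distance metric and Spearman comparison are standard RSA conventions assumed here, not details taken from the study.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(roi_patterns, model_rdm):
    """Correlate an ROI's representational geometry with a model RDM.

    roi_patterns: (n_conditions, n_voxels) activity patterns.
    model_rdm:    (n_conditions, n_conditions) model dissimilarity matrix.
    Returns the Spearman correlation between the neural RDM and the
    model RDM's upper triangle (pair order matches pdist's ordering).
    """
    neural_rdm = pdist(roi_patterns, metric='correlation')  # 1 - Pearson r per condition pair
    iu = np.triu_indices(model_rdm.shape[0], k=1)
    rho, _ = spearmanr(neural_rdm, model_rdm[iu])
    return rho
```

A high rho means the ROI's pattern similarities rank-order the same way the model predicts; comparing rho across candidate models (e.g., behavioral ratings vs. low-level features) is what adjudicates between them.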
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
3
Lorenzi C, Apoux F, Grinfeder E, Krause B, Miller-Viacava N, Sueur J. Human Auditory Ecology: Extending Hearing Research to the Perception of Natural Soundscapes by Humans in Rapidly Changing Environments. Trends Hear 2023;27:23312165231212032. PMID: 37981813; PMCID: PMC10658775; DOI: 10.1177/23312165231212032.
Abstract
Research in hearing sciences has provided extensive knowledge about how the human auditory system processes speech and assists communication. In contrast, little is known about how this system processes "natural soundscapes," that is, the complex arrangements of biological and geophysical sounds shaped by sound propagation through non-anthropogenic habitats [Grinfeder et al. (2022). Frontiers in Ecology and Evolution. 10: 894232]. This is surprising given that, for many species, the capacity to process natural soundscapes determines survival and reproduction through the ability to represent and monitor the immediate environment. Here we propose a framework to encourage research programmes in the field of "human auditory ecology," focusing on the study of human auditory perception of ecological processes at work in natural habitats. Based on large acoustic databases with high ecological validity, these programmes should investigate the extent to which this presumably ancestral monitoring function of the human auditory system is adapted to specific information conveyed by natural soundscapes, whether it operates throughout the life span, and whether it emerges through individual learning or cultural transmission. Beyond fundamental knowledge of human hearing, these programmes should yield a better understanding of how normal-hearing and hearing-impaired listeners monitor rural and city green and blue spaces and benefit from them, and whether rehabilitation devices (hearing aids and cochlear implants) restore natural soundscape perception and emotional responses back to normal. Importantly, they should also reveal whether and how humans hear the rapid changes in the environment brought about by human activity.
Affiliation(s)
- Christian Lorenzi
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Frédéric Apoux
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Elie Grinfeder
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Institut de Systématique, Évolution, Biodiversité (ISYEB), Muséum national d’Histoire naturelle, CNRS, Sorbonne Université, EPHE, Université des Antilles, Paris, France
- Nicole Miller-Viacava
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), Paris, France
- Jérôme Sueur
- Institut de Systématique, Évolution, Biodiversité (ISYEB), Muséum national d’Histoire naturelle, CNRS, Sorbonne Université, EPHE, Université des Antilles, Paris, France
4
Biondi M, Hirshkowitz A, Stotler J, Wilcox T. Cortical Activation to Social and Mechanical Stimuli in the Infant Brain. Front Syst Neurosci 2021;15:510030. PMID: 34248512; PMCID: PMC8264292; DOI: 10.3389/fnsys.2021.510030.
Abstract
From the early days of life, infants distinguish between social and non-social physical entities and have different expectations for the way these two kinds of entities should move and interact. At the same time, we know very little about the cortical systems that support this early emerging ability. The goal of the current research was to assess the extent to which infants' processing of social and non-social physical entities is mediated by distinct information processing systems in the temporal cortex. Using a cross-sectional design, infants aged 6-9 months (Experiment 1) and 11-18 months (Experiment 2) were presented with two types of events: social interaction and mechanical interaction. In the social interaction event (patterned after Hamlin et al., 2007), an entity with googly eyes, hair tufts, and an implied goal of moving up the hill was either helped up, or pushed down, a hill through the actions of another social entity. In the mechanical interaction event, the googly eyes and hair tufts were replaced with vertical black dots and a hook and clasp, and the objects moved up or down the hill via mechanical interactions. fNIRS was used to measure activation from temporal cortex while infants viewed the test events. In both age groups, viewing social and mechanical interaction events elicited different patterns of activation in the right temporal cortex, although responses were more specialized in the older age group. Activation was not obtained in these areas when the objects moved in synchrony without interacting, suggesting that the causal nature of the interaction events may be responsible, in part, for the results obtained. This is one of the few fNIRS studies to have investigated age-related patterns of cortical activation and the first to provide insight into the functional development of networks specialized for processing of social and non-social physical entities engaged in interaction events.
Affiliation(s)
- Marisa Biondi
- Tobii Pro, College Station, TX, United States; Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, United States
- Amy Hirshkowitz
- Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, United States; Baylor College of Medicine, Houston, TX, United States
- Jacqueline Stotler
- Department of Psychology, Florida Atlantic University, Boca Raton, FL, United States
- Teresa Wilcox
- Department of Psychological & Brain Sciences, Texas A&M University, College Station, TX, United States; Department of Psychology, Florida Atlantic University, Boca Raton, FL, United States
5
Describing the sounds of nature: Using onomatopoeia to classify bird calls for citizen science. PLoS One 2021;16:e0250363. PMID: 33979330; PMCID: PMC8115837; DOI: 10.1371/journal.pone.0250363.
Abstract
Bird call libraries are difficult to collect yet vital for bio-acoustics studies. A potential solution is citizen science labelling of calls. However, acoustic annotation techniques are still relatively undeveloped, and in parallel, citizen science initiatives struggle with maintaining participant engagement while increasing efficiency and accuracy. This study explores an under-utilised, yet theoretically engaging and intuitive, means of sound categorisation: onomatopoeia. To learn whether onomatopoeia was a reliable means of categorisation, an online experiment was conducted. Participants sourced from Amazon mTurk (N = 104) ranked how well twelve onomatopoeic words described acoustic recordings of ten native Australian bird calls. Of the ten bird calls, repeated measures ANOVA revealed that five had a single descriptor ranked significantly higher than all others, while the remaining calls had multiple descriptors rated significantly higher than the rest. Agreement as assessed by Kendall's W shows that, overall, raters agreed regarding the suitability and unsuitability of the descriptors used across all bird calls. Further analysis of the spread of responses using frequency charts confirms this and indicates that agreement on which descriptors were unsuitable was pronounced throughout, and that stronger agreement on suitable singular descriptions was matched with greater rater confidence. This demonstrates that onomatopoeia may be reliably used to classify bird calls by non-expert listeners, adding to the suite of methods used in classification of biological sounds. Interface design implications for acoustic annotation are discussed.
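Kendall's W, the agreement statistic used in this study, can be computed from a raters-by-items score matrix. A minimal sketch follows; note it is a generic textbook formulation, not the authors' analysis code, and it omits the usual correction term for ties, so W is slightly underestimated when many scores tie.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an (m raters x n items) matrix.

    Each rater's scores are converted to within-rater ranks (ties receive
    average ranks); W = 12*S / (m^2 * (n^3 - n)), where S is the sum of
    squared deviations of the item rank totals from their mean.
    Simplification: the tie-correction term is omitted.
    """
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within each rater
    rank_sums = ranks.sum(axis=0)                      # item rank totals
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))
```

W ranges from 0 (no agreement among raters) to 1 (all raters rank the items identically).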
6
Valencia GN, Khoo S, Wong T, Ta J, Hou B, Barsalou LW, Hazen K, Lin HH, Wang S, Brefczynski-Lewis JA, Frum CA, Lewis JW. Chinese-English bilinguals show linguistic-perceptual links in the brain associating short spoken phrases with corresponding real-world natural action sounds by semantic category. Lang Cogn Neurosci 2021;36:773-790. PMID: 34568509; PMCID: PMC8462789; DOI: 10.1080/23273798.2021.1883073.
Abstract
Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why this relationship forms remains unclear. The different brain networks that mediate perception when hearing real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or through the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.
Affiliation(s)
- Gabriela N. Valencia
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Stephanie Khoo
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Ting Wong
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Joseph Ta
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Bob Hou
- Department of Radiology, Center for Advanced Imaging
- Kirk Hazen
- Department of English, West Virginia University
- Shuo Wang
- Department of Chemical and Biomedical Engineering
- Julie A. Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Chris A. Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- James W. Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
7
Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021;2:tgab002. PMID: 33718874; PMCID: PMC7941256; DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
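The activation likelihood estimation (ALE) procedure behind these maps combines, per voxel, Gaussian-blurred "modeled activation" maps across experiments. The toy one-dimensional sketch below is only illustrative: standard ALE operates on 3-D brain volumes with sample-size-dependent kernel widths, neither of which is reproduced here.

```python
import numpy as np

def ale_map(foci_per_experiment, shape, fwhm_vox):
    """Toy 1-D activation likelihood estimate over a voxel grid.

    Each experiment contributes a 'modeled activation' (MA) map: the maximum
    of Gaussian kernels centered on its reported foci (max, not sum, so one
    experiment cannot inflate a voxel via clustered foci). Experiment maps
    are combined as ALE = 1 - prod(1 - MA), treating them as independent.
    """
    sigma = fwhm_vox / 2.3548  # convert FWHM to Gaussian standard deviation
    grid = np.arange(shape)
    prod_term = np.ones(shape)
    for foci in foci_per_experiment:
        ma = np.zeros(shape)
        for f in foci:
            kernel = np.exp(-0.5 * ((grid - f) / sigma) ** 2)
            ma = np.maximum(ma, kernel)
        prod_term *= (1.0 - ma)
    return 1.0 - prod_term
```

Voxels where many experiments report nearby foci accumulate ALE values near 1; in the real procedure these values are then tested against a null distribution of randomly placed foci.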
Affiliation(s)
- Matt Csonka
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Nadia Mardmomen
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Paula J Webster
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- Chris Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
- James W Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
8
Abstract
This volume has highlighted the many recent advances in tinnitus theory, models, diagnostics, therapies, and therapeutics. But tinnitus knowledge is far from complete. In this chapter, contributors to the Behavioral Neuroscience of Tinnitus consider emerging topics and areas of research needed in light of recent findings. New research avenues and methods to explore are discussed, issues pertaining to current assessment, treatment, and research methods are outlined, and recommendations for future research are offered.
9
Talkington WJ, Donai J, Kadner AS, Layne ML, Forino A, Wen S, Gao S, Gray MM, Ashraf AJ, Valencia GN, Smith BD, Khoo SK, Gray SJ, Lass N, Brefczynski-Lewis JA, Engdahl S, Graham D, Frum CA, Lewis JW. Electrophysiological Evidence of Early Cortical Sensitivity to Human Conspecific Mimic Voice as a Distinct Category of Natural Sound. J Speech Lang Hear Res 2020;63:3539-3559. PMID: 32936717; PMCID: PMC8060013; DOI: 10.1044/2020_jslhr-20-00063.
Abstract
Purpose: From an anthropological perspective of hominin communication, the human auditory system likely evolved to enable special sensitivity to sounds produced by the vocal tracts of human conspecifics, whether attended or passively heard. While numerous electrophysiological studies have used stereotypical human-produced verbal (speech voice and singing voice) and nonverbal vocalizations to identify human voice-sensitive responses, controversy remains as to when (and where) processing of the acoustic signal attributes characteristic of "human voiceness" per se initiates in the brain. Method: To explore this, we used animal vocalizations and human-mimicked versions of those calls ("mimic voice") to examine late auditory evoked potential responses in humans. Results: We revealed an N1b component (96-120 ms poststimulus) during a nonattending listening condition showing significantly greater magnitude in response to mimics, beginning as early as primary auditory cortices and preceding the time window reported in previous studies, which found species-specific vocalization processing initiating in the range of 147-219 ms. During a sound discrimination task, a P600 component (500-700 ms poststimulus) showed specificity for accurate discrimination of human mimic voice. Distinct acoustic signal attributes and features of the stimuli were used in a classifier model, which could distinguish most human from animal voices comparably to the behavioral data, though none of these single features could adequately distinguish human voiceness. Conclusions: These results provide novel ideas for algorithms used in neuromimetic hearing aids, as well as direct electrophysiological support for a neurocognitive model of natural sound processing that informs both neurodevelopmental and anthropological models regarding the establishment of auditory communication systems in humans. Supplemental material: https://doi.org/10.23641/asha.12903839.
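A cross-validated classifier over per-stimulus acoustic features, in the spirit of the model described, might look as follows. The feature names and all data here are synthetic placeholders invented for illustration, not the study's stimuli, features, or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-stimulus acoustic features; columns are
# hypothetical attributes (e.g., harmonics-to-noise ratio, spectral
# centroid, pitch variability) with invented class means and spreads.
rng = np.random.default_rng(0)
human = rng.normal([12.0, 1500.0, 40.0], [2.0, 200.0, 8.0], size=(40, 3))
animal = rng.normal([8.0, 2200.0, 70.0], [2.0, 300.0, 15.0], size=(40, 3))
X = np.vstack([human, animal])
y = np.array([1] * 40 + [0] * 40)  # 1 = human mimic voice, 0 = animal call

# Standardize features, fit logistic regression, and score with
# 5-fold (stratified) cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(clf, X, y, cv=5).mean()
```

With cleanly separated synthetic features the cross-validated accuracy approaches ceiling; on real stimuli, as the abstract notes, the classes overlap on every single feature, so the multivariate combination is what carries the discrimination.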
Affiliation(s)
- William J. Talkington
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Jeremy Donai
- Department of Communication Sciences and Disorders, College of Education and Human Services, West Virginia University, Morgantown
- Alexandra S. Kadner
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Molly L. Layne
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Andrew Forino
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Sijin Wen
- Department of Biostatistics, West Virginia University, Morgantown
- Si Gao
- Department of Biostatistics, West Virginia University, Morgantown
- Margeaux M. Gray
- Department of Biology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Alexandria J. Ashraf
- Department of Biology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Gabriela N. Valencia
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Brandon D. Smith
- Department of Biology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Stephanie K. Khoo
- Department of Biology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Stephen J. Gray
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- Norman Lass
- Department of Communication Sciences and Disorders, College of Education and Human Services, West Virginia University, Morgantown
- Susannah Engdahl
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- David Graham
- Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown
- Chris A. Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
- James W. Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown
10
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020;5:37. PMID: 32770416; PMCID: PMC7415050; DOI: 10.1186/s41235-020-00240-7.
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
11
Lezama-Espinosa C, Hernandez-Montiel HL. Neuroscience of the auditory-motor system: How does sound interact with movement? Behav Brain Res 2020;384:112535. PMID: 32044405; DOI: 10.1016/j.bbr.2020.112535.
Abstract
Human musicality is a complex problem because it involves the coupling of multiple exogenous and endogenous signals with different physical properties. The synchronization of these signals translates into specific behaviors. The study of this synchronization, based on the physical properties of two oscillatory bodies, is the first step in understanding the behaviors associated with rhythmic auditory stimuli. In recent years, different neurorehabilitation therapies involving music have emerged for motor pathologies. However, the neurophysiological bases that describe the coupling phenomenon are not yet fully understood. In this article, two theories are addressed that attempt to explain the convergence of the auditory and motor systems in light of new neuroanatomical, neurophysiological, and artificial neural network findings. The article also reflects on the different approaches to a complex problem in cognitive neuroscience and on the need for a study model of the different motor behaviors evoked by auditory stimuli.
Affiliation(s)
- C Lezama-Espinosa
- Autonomous University of Queretaro (UAQ), Faculty of Medicine, Nervous System Clinic, Clavel 200, Prados de la Capilla, CP 76176, Santiago de Querétaro, Qro., México
- H L Hernandez-Montiel
- Autonomous University of Queretaro (UAQ), Faculty of Medicine, Nervous System Clinic, Clavel 200, Prados de la Capilla, CP 76176, Santiago de Querétaro, Qro., México
12
Hu M, Wang D, Ji X, Yu T, Shan Y, Fan X, Du J, Zhang X, Zhao G, Wang Y, Ren L, Liégeois-Chauvel C. Neural processes of auditory perception in Heschl's gyrus for upcoming acoustic stimuli in humans. Hear Res 2020;388:107895. PMID: 31982643; DOI: 10.1016/j.heares.2020.107895.
Abstract
In the natural environment, attended sounds tend to be perceived much better than unattended sounds. However, the physiological mechanism of how our neural systems direct the state of perceptual attention to prepare for the detection of upcoming acoustic stimuli before auditory stream segregation remains elusive. In this study, based on the direct intracerebral recordings from the auditory cortex in eight epileptic patients with refractory focal seizures, we investigated the neural processing of auditory attention by comparing the local field potentials before 'attentional' and 'distracted' conditions. Here we first showed a distinct build-up of slow, negative cortical potential in Heschl's gyrus. The amplitude increased steadily, starting from 600 to 800 ms before presentation of the tone until the onset of the evoked component P/N 60-80 when the patients were in the attentional condition. Because of their specific topographical distribution and modality-specific properties, we named these 'auditory preparatory potentials', which are also associated with increased gamma oscillations (30-150 Hz) and desynchronized low frequency activity (below 30 Hz). Thus, our findings suggest that the auditory cortex is pre-activated to facilitate the perception of forthcoming sound events, and contribute to the understanding of the neurophysiological mechanisms of auditory perception from a new perspective.
Affiliation(s)
- Minjing Hu
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China; Department of Neurology, Affiliated Hospital of Nantong University, Nantong, China
- Di Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xuanxiu Ji
- Second Department of Geriatric Division, General Hospital of Jinan Military Region, Jinan, China
- Tao Yu
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yongzhi Shan
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaotong Fan
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Jialin Du
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Xiaohua Zhang
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Guoguang Zhao
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yuping Wang
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Liankun Ren
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
- Catherine Liégeois-Chauvel
- Aix Marseille Université, Inserm, Institut des Neurosciences des Systèmes, Marseille, France; Cleveland Clinic Neurological Institute, Epilepsy Center, Cleveland, OH, USA
13
Sense and Sensibility: A Review of the Behavioral Neuroscience of Tinnitus Sound Therapy and a New Typology. Curr Top Behav Neurosci 2020; 51:213-247. [PMID: 33547596 DOI: 10.1007/7854_2020_183] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Tinnitus Sound Therapy is not a single strategy. It consists of many different sound types, targeting many different mechanisms. Therapies that use sound to cover, reduce attention to, or facilitate habituation of tinnitus are among the most common tinnitus treatment paradigms. Recent history has seen a proliferation of sound therapies, but they have each been criticized for having limited empirical support. In this review, Sound Therapy's modern history will be described, and a typology will be introduced and discussed in light of current behavioral neuroscience research. It will be argued that contributing factors to the limited evidence for the efficacy of Sound Therapy are its diversity, plural modes of action, and absence of a clear typology. Despite gaps in understanding the efficacy of sound's effects on tinnitus, there is compelling evidence for its multiple, but related, neurophysiological mechanisms. Evidence suggests that sound may reduce tinnitus through its presence, context, reaction, and potentially adaptation. This review provides insights into the neurocognitive basis of these tinnitus Sound Therapy modes. It concludes that a unifying classification is needed to secure and advance arguments in favor of Sound Therapy.
14
Salvari V, Paraskevopoulos E, Chalas N, Müller K, Wollbrink A, Dobel C, Korth D, Pantev C. Auditory Categorization of Man-Made Sounds Versus Natural Sounds by Means of MEG Functional Brain Connectivity. Front Neurosci 2019; 13:1052. [PMID: 31636532 PMCID: PMC6787283 DOI: 10.3389/fnins.2019.01052] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Accepted: 09/19/2019] [Indexed: 01/27/2023] Open
Abstract
Previous neuroimaging studies have shown that sounds can be discriminated by living-related or man-made-related characteristics, and that this discrimination involves different brain regions. However, these studies have mainly provided source-space analyses, which offer simple maps of activated brain regions but do not explain how the regions of a distributed system are functionally organized under a specific task. In the present study, we used MEG to further examine the functional connectivity of the auditory processing pathway across different categories of non-speech sounds in healthy adults. Our analyses demonstrated significant activation and interconnection differences between living and man-made object sounds in prefrontal areas, the anterior superior temporal gyrus (aSTG), the posterior cingulate cortex (PCC), and the supramarginal gyrus (SMG), occurring within the 80-120 ms post-stimulus interval. The current findings replicate previous ones in showing that regions beyond the auditory cortex are involved in auditory processing. The functional connectivity analysis revealed differential brain networks across the categories, suggesting that sound-category discrimination relies on distinct cortical networks, a notion that has also been strongly argued in the literature in relation to the visual system.
Affiliation(s)
- Vasiliki Salvari
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Evangelos Paraskevopoulos
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany; School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Kilian Müller
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Andreas Wollbrink
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christian Dobel
- Department of Otorhinolaryngology, Friedrich-Schiller University of Jena, Jena, Germany
- Daniela Korth
- Department of Otorhinolaryngology, Friedrich-Schiller University of Jena, Jena, Germany
- Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
15
Preferential activation for emotional Western classical music versus emotional environmental sounds in motor, interoceptive, and language brain areas. Brain Cogn 2019; 136:103593. [PMID: 31404816 DOI: 10.1016/j.bandc.2019.103593] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Revised: 07/30/2019] [Accepted: 08/02/2019] [Indexed: 01/11/2023]
Abstract
Recent meta-analyses suggest there is a common brain network involved in processing emotion in music and sounds. However, no studies have directly compared the neural substrates of equivalent emotional Western classical music and emotional environmental sounds. Using functional magnetic resonance imaging, we investigated whether brain activation in the motor cortex, interoceptive cortex, and Broca's language area during an auditory emotional-appraisal task differed as a function of stimulus type. Activation was relatively greater to music in motor and interoceptive cortex, areas associated with movement and internal physical feelings, and relatively greater to emotional environmental sounds in Broca's area. We conclude that emotional environmental sounds are appraised through verbal identification of their source, whereas emotional Western classical music is appraised through evaluation of bodily feelings. While there is clearly a common core emotion-processing network underlying all emotional appraisal, modality-specific contextual information may be important for understanding the contribution of voluntary versus automatic appraisal mechanisms.
16
Lewis JW, Silberman MJ, Donai JJ, Frum CA, Brefczynski-Lewis JA. Hearing and orally mimicking different acoustic-semantic categories of natural sound engage distinct left hemisphere cortical regions. BRAIN AND LANGUAGE 2018; 183:64-78. [PMID: 29966815 PMCID: PMC6461214 DOI: 10.1016/j.bandl.2018.05.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2017] [Revised: 03/22/2018] [Accepted: 05/06/2018] [Indexed: 05/10/2023]
Abstract
Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants, the evolution of language in hominins, and a process that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.
Affiliation(s)
- James W Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA.
- Magenta J Silberman
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Jeremy J Donai
- Rockefeller Neurosciences Institute, Department of Communication Sciences and Disorders, West Virginia University, Morgantown, WV 26506, USA
- Chris A Frum
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
17
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 03/16/2018] [Accepted: 04/24/2018] [Indexed: 01/05/2023]
18
Muhammed L, Hardy CJD, Russell LL, Marshall CR, Clark CN, Bond RL, Warrington EK, Warren JD. Agnosia for bird calls. Neuropsychologia 2018; 113:61-67. [PMID: 29572063 PMCID: PMC5946901 DOI: 10.1016/j.neuropsychologia.2018.03.024] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2017] [Revised: 01/18/2018] [Accepted: 03/19/2018] [Indexed: 12/02/2022]
Abstract
The cognitive organisation of nonverbal auditory knowledge remains poorly defined. Deficits of environmental sound as well as word and visual object knowledge are well-recognised in semantic dementia. However, it is unclear how auditory cognition breaks down in this disorder and how this relates to deficits in other knowledge modalities. We had the opportunity to study a patient with a typical syndrome of semantic dementia who had extensive premorbid knowledge of birds, allowing us to assess the impact of the disease on the processing of auditory in relation to visual and verbal attributes of this specific knowledge category. We designed a novel neuropsychological test to probe knowledge of particular avian characteristics (size, behaviour [migratory or nonmigratory], habitat [whether or not primarily water-dwelling]) in the nonverbal auditory, visual and verbal modalities, based on a uniform two-alternative-forced-choice procedure. The patient's performance was compared to healthy older individuals of similar birding experience. We further compared his performance on this test of bird knowledge with his knowledge of familiar human voices and faces. Relative to healthy birder controls, the patient showed marked deficits of bird call and bird name knowledge but relatively preserved knowledge of avian visual attributes and retained knowledge of human voices and faces. In both the auditory and visual modalities, his knowledge of the avian characteristics of size and behaviour was intact whereas his knowledge of the associated characteristic of habitat was deficient. This case provides further evidence that nonverbal auditory knowledge has a fractionated organisation that can be differentially targeted in semantic dementia.
Highlights: The cognitive organisation of auditory semantics is poorly understood. We assessed multimodal avian knowledge in a birder with semantic dementia. The patient had auditory (but not visual) agnosia for birds versus healthy birders. Auditory knowledge of avian attributes and human voices were differentially affected. This case illuminates the fractionated organisation of nonverbal auditory knowledge.
Affiliation(s)
- Louwai Muhammed
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Chris J D Hardy
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Lucy L Russell
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Charles R Marshall
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Rebecca L Bond
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Elizabeth K Warrington
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
19
Dormal G, Pelland M, Rezk M, Yakobov E, Lepore F, Collignon O. Functional Preference for Object Sounds and Voices in the Brain of Early Blind and Sighted Individuals. J Cogn Neurosci 2018; 30:86-106. [DOI: 10.1162/jocn_a_01186] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Sounds activate occipital regions in early blind individuals. However, how different sound categories map onto specific regions of the occipital cortex remains a matter of debate. We used fMRI to characterize brain responses of early blind and sighted individuals to familiar object sounds, human voices, and their respective low-level control sounds. In addition, sighted participants were tested while viewing pictures of faces, objects, and phase-scrambled control pictures. In both early blind and sighted, a double dissociation was evidenced in bilateral auditory cortices between responses to voices and object sounds: Voices elicited categorical responses in bilateral superior temporal sulci, whereas object sounds elicited categorical responses along the lateral fissure bilaterally, including the primary auditory cortex and planum temporale. Outside the auditory regions, object sounds also elicited categorical responses in the left lateral and in the ventral occipitotemporal regions in both groups. These regions also showed response preference for images of objects in the sighted group, thus suggesting a functional specialization that is independent of sensory input and visual experience. Between-group comparisons revealed that, only in the blind group, categorical responses to object sounds extended more posteriorly into the occipital cortex. Functional connectivity analyses evidenced a selective increase in the functional coupling between these reorganized regions and regions of the ventral occipitotemporal cortex in the blind group. In contrast, vocal sounds did not elicit preferential responses in the occipital cortex in either group. Nevertheless, enhanced voice-selective connectivity between the left temporal voice area and the right fusiform gyrus were found in the blind group. 
Altogether, these findings suggest that, in the absence of developmental vision, separate auditory categories are not equipotent in driving selective auditory recruitment of occipitotemporal regions and highlight the presence of domain-selective constraints on the expression of cross-modal plasticity.
Affiliation(s)
- Olivier Collignon
- University of Montreal
- University of Louvain
- McGill University, Montreal, Canada
20
Peelen MV, Caramazza A. Concepts, actions, and objects: Functional and neural perspectives. Neuropsychologia 2017; 105:1-3. [DOI: 10.1016/j.neuropsychologia.2017.10.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]