1. Luzum NR, Hamel BL, Shafiro V, Harris MS. Identification Accuracy of Safety-Relevant Environmental Sounds in Adult Cochlear Implant Users. Laryngoscope 2023; 133:2388-2393. PMID: 36317721; PMCID: PMC10149563; DOI: 10.1002/lary.30475.
Abstract
OBJECTIVE: To examine cochlear implant (CI) users' ability to identify safety-relevant environmental sounds, which is imperative for safety, independence, and personal well-being. METHODS: Twenty-one experienced adult CI users completed an Environmental Sound Identification (ESI) test consisting of 42 common environmental sounds: 28 relevant to personal safety and 14 control sounds. Prior to sound identification, participants were shown sound names and asked to rate the familiarity and, separately, the relevance to safety of each corresponding sound on a 1-5 scale. RESULTS: Overall ESI accuracy was 57% correct for the safety-relevant sounds and 55% correct for the control sounds. Participants rated safety-relevant sounds as more important to safety and more familiar than the non-safety sounds. ESI accuracy correlated significantly with familiarity ratings. CONCLUSION: The present findings suggest mediocre ESI accuracy in postlingual adult CI users for safety-relevant and other environmental sounds. Deficits in the identification of these sounds may put CI listeners at increased risk of accidents or injuries and may warrant a specific rehabilitation program to improve CI outcomes. LEVEL OF EVIDENCE: 4.
Affiliation(s)
- Benjamin L. Hamel
- Department of Pediatric and Adolescent Medicine, Mayo Clinic, Rochester, MN, USA
- Valeriy Shafiro
- Department of Communication Disorders & Sciences, College of Health Sciences & Graduate College, Rush University, Chicago, IL, USA
- Michael S. Harris
- Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI, USA
- Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI, USA
2. Brownsett SLE, Mascelloni M, Gowlett G, McMahon KL, de Zubicaray GI. Neighing dogs: Semantic context effects of environmental sounds in spoken word production - a replication and extension. Q J Exp Psychol (Hove) 2023; 76:1990-2000. PMID: 36301012; DOI: 10.1177/17470218221137007.
Abstract
Semantic context effects are well established using both words and pictures as stimuli. One such effect, semantic interference, is observed in naming latencies when a categorically related distractor word or picture is presented together with a target picture (e.g., dog-LION). Recently, this effect has also been shown to occur when an environmental sound (e.g., a dog barking) is presented as an auditory distractor during picture naming, and when a distractor picture is presented with a target sound for naming. The purpose of the current study was twofold: (1) to replicate the semantic interference effect in the picture-sound interference (PSI) paradigm, and (2) to determine whether a semantic interference effect is also observable when distractor words are presented with environmental sounds as target auditory objects for naming, using a novel sound-word interference (SWI) paradigm. We replicated the semantic interference effect in Experiment 1 with environmental sound distractors. Experiment 2 demonstrated significant semantic interference during an SWI paradigm for the first time. We discuss the implications of these results for our understanding of the origin and locus of the semantic interference effect according to current theories of lexical selection.
Affiliation(s)
- Sonia LE Brownsett
- School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, Australia
- NHMRC Centre of Research Excellence in Aphasia Recovery and Rehabilitation, Melbourne, VIC, Australia
- Matteo Mascelloni
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Georgia Gowlett
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Katie L McMahon
- School of Clinical Sciences and Centre for Biomedical Technologies, Queensland University of Technology, Brisbane, QLD, Australia
- Herston Imaging Research Facility, Royal Brisbane & Women's Hospital, Herston, QLD, Australia
- Greig I de Zubicaray
- School of Psychology and Counselling, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
3. Ma W, Zhou P, Liang X, Thompson WF. Children across cultures respond emotionally to the acoustic environment. Cogn Emot 2023; 37:1144-1152. PMID: 37338002; DOI: 10.1080/02699931.2023.2225850.
Abstract
Among human and non-human animals, the ability to respond rapidly to biologically significant events in the environment is essential for survival and development. Research has confirmed that human adult listeners respond emotionally to environmental sounds by relying on the same acoustic cues that signal emotionality in speech prosody and music. However, it is unknown whether young children also respond emotionally to environmental sounds. Here, we report that changes in pitch, rate (i.e., playback speed), and intensity (i.e., amplitude) of environmental sounds trigger emotional responses in 3- to 6-year-old American and Chinese children, across four sound types: sounds of human actions, animal calls, machinery, and natural phenomena such as wind and waves. Children's responses did not differ across the four sound types but developed with age - a finding observed in both American and Chinese children. Thus, the ability to respond emotionally to non-linguistic, non-musical environmental sounds is evident at three years of age - an age when the ability to decode emotional prosody in language and music emerges. We argue that general mechanisms supporting emotional prosody decoding are engaged by all sounds, as reflected in emotional responses to non-linguistic acoustic input such as music and environmental sounds.
Affiliation(s)
- Weiyi Ma
- School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Peng Zhou
- School of International Studies, Zhejiang University, Hangzhou, People's Republic of China
- Xinya Liang
- Department of Counseling, Leadership, and Research Methods, University of Arkansas, Fayetteville, AR, USA
4. Renvall H, Seol J, Tuominen R, Sorger B, Riecke L, Salmelin R. Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds. Eur J Neurosci 2021; 54:7626-7641. PMID: 34697833; PMCID: PMC9298413; DOI: 10.1111/ejn.15504.
Abstract
Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part of the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.
Affiliation(s)
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Aalto NeuroImaging, Aalto University, Espoo, Finland
- BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, University of Helsinki and Aalto University School of Science, Helsinki, Finland
- Jaeho Seol
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Aalto NeuroImaging, Aalto University, Espoo, Finland
- Riku Tuominen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Aalto NeuroImaging, Aalto University, Espoo, Finland
- Bettina Sorger
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Lars Riecke
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Aalto NeuroImaging, Aalto University, Espoo, Finland
5. Marian V, Hayakawa S, Schroeder SR. Cross-Modal Interaction Between Auditory and Visual Input Impacts Memory Retrieval. Front Neurosci 2021; 15:661477. PMID: 34381328; PMCID: PMC8350348; DOI: 10.3389/fnins.2021.661477.
Abstract
How we perceive and learn about our environment is influenced by our prior experiences and existing representations of the world. Top-down cognitive processes, such as attention and expectations, can alter how we process sensory stimuli, both within a modality (e.g., effects of auditory experience on auditory perception), as well as across modalities (e.g., effects of visual feedback on sound localization). Here, we demonstrate that experience with different types of auditory input (spoken words vs. environmental sounds) modulates how humans remember concurrently-presented visual objects. Participants viewed a series of line drawings (e.g., picture of a cat) displayed in one of four quadrants while listening to a word or sound that was congruent (e.g., "cat" or <meow>), incongruent (e.g., "motorcycle" or <vroom-vroom>), or neutral (e.g., a meaningless pseudoword or a tonal beep) relative to the picture. Following the encoding phase, participants were presented with the original drawings plus new drawings and asked to indicate whether each one was "old" or "new." If a drawing was designated as "old," participants then reported where it had been displayed. We find that words and sounds both elicit more accurate memory for what objects were previously seen, but only congruent environmental sounds enhance memory for where objects were positioned - this, despite the fact that the auditory stimuli were not meaningful spatial cues of the objects' locations on the screen. Given that during real-world listening conditions, environmental sounds, but not words, reliably originate from the location of their referents, listening to sounds may attune the visual dorsal pathway to facilitate attention and memory for objects' locations. We propose that audio-visual associations in the environment and in our previous experience jointly contribute to visual memory, strengthening visual memory through exposure to auditory input.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Scott R Schroeder
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, NY, United States
6. Durai M, Doborjeh Z, Sanders PJ, Vajsakovic D, Wendt A, Searchfield GD. Behavioral Outcomes and Neural Network Modeling of a Novel, Putative, Recategorization Sound Therapy. Brain Sci 2021; 11:554. PMID: 33925762; PMCID: PMC8146945; DOI: 10.3390/brainsci11050554.
Abstract
The mechanisms underlying sound's effect on tinnitus perception are unclear. Tinnitus activity appears to conflict with perceptual expectations of "real" sound, resulting in it being a salient signal. Attention diverted towards tinnitus during the later stages of object processing potentially disrupts high-order auditory streaming, and its uncertain nature results in negative psychological responses. This study investigated the benefits and neurophysiological basis of passive perceptual training and informational counseling intended to recategorize the phantom perception as a more real auditory object. Specifically, it examined the underlying psychoacoustic correlates of tinnitus, the neural activities associated with tinnitus auditory streaming, and how malleable these are to change with targeted intervention. Eighteen participants (8 females, 10 males, mean age = 61.6 years) completed the study, which consisted of two parts: (1) an acute exposure over 30 min to a sound that matched the person's tinnitus (a "tinnitus avatar") cross-faded to a selected nature sound (cicadas, fan, water sound/rain, birds, water and birds); and (2) a chronic exposure for 3 months to the same "morphed" sound. A brain-inspired spiking neural network (SNN) architecture was used to model and compare differences between electroencephalography (EEG) patterns recorded prior to morphed sound presentation, during it, after it (3 months), and at follow-up. Results showed that the tinnitus avatar generated was a good match to an individual's tinnitus, as rated on likeness scales, and was not rated as unpleasant. The five environmental sounds selected for this study were also rated as appropriate matches to individuals' tinnitus and largely pleasant to listen to. There was a significant reduction in the Tinnitus Functional Index score, and in its subscales of intrusiveness of the tinnitus signal and ability to concentrate, at trial end compared with baseline. There was also a significant decrease in ratings of how strong the tinnitus signal was, as well as in ratings of how easy it was to ignore the tinnitus signal, on severity rating scales. Qualitative analysis found that the environmental sound interacted with the tinnitus in a positive way; participants did not experience a change in severity, but characteristics of the tinnitus, including pitch and uniformity of sound, were reported to change. The results indicate the feasibility of the computational SNN method and provide preliminary evidence that the sound exposure may change activation of neural tinnitus networks, with greater bilateral hemispheric involvement as the sound morphs over time into a natural environmental sound, particularly in areas relating to attention and discriminatory judgments (dorsal attention network, precentral gyrus, ventral anterior network). This is the first study that attempts to recategorize tinnitus using passive auditory training with a sound that morphs from resembling the person's tinnitus to a natural sound. These findings will be used to design future controlled trials to elucidate whether this approach differs in effect and mechanism from conventional broadband noise (BBN) sound therapy.
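The "morphing" manipulation described above, cross-fading a tinnitus-matched avatar into a nature sound, amounts to a time-varying mix of two signals. A minimal sketch in Python/NumPy, using a synthetic tone and noise as stand-ins for the study's actual stimuli:

```python
import numpy as np

def crossfade(a, b, sr, fade_s):
    """Linearly morph from signal a to signal b over the first fade_s seconds."""
    n = min(len(a), len(b))
    ramp = np.clip(np.arange(n) / (fade_s * sr), 0.0, 1.0)  # mixing weight 0 -> 1
    return (1.0 - ramp) * a[:n] + ramp * b[:n]

sr = 16000
t = np.arange(2 * sr) / sr  # 2 s of audio
# Hypothetical stand-ins: a high tone for the "tinnitus avatar", noise for the nature sound.
avatar = 0.3 * np.sin(2 * np.pi * 6000 * t)
nature = 0.3 * np.random.default_rng(1).standard_normal(len(t))

morphed = crossfade(avatar, nature, sr, fade_s=2.0)
# The output begins as pure avatar and ends almost entirely as the nature sound.
```

The study's chronic phase morphed the sound gradually over 3 months rather than in a single 2-second fade; this sketch only illustrates the signal operation, not the clinical schedule.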
Affiliation(s)
- Mithila Durai
- Section of Audiology, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Zohreh Doborjeh
- Section of Audiology, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Philip J. Sanders
- Section of Audiology, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Dunja Vajsakovic
- Section of Audiology, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Anne Wendt
- Knowledge Engineering & Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand
- Grant D. Searchfield
- Section of Audiology, School of Population Health, The University of Auckland, Auckland 1023, New Zealand
- Eisdell Moore Centre, Auckland 1023, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1023, New Zealand
- Brain Research New Zealand—Rangahau Roro Aotearoa, The University of Auckland, Auckland 1142, New Zealand
7. Halbur M, Kodak T, Williams X, Reidy J, Halbur C. Comparison of sounds and words as sample stimuli for discrimination training. J Appl Behav Anal 2021; 54:1126-1138. PMID: 33759461; DOI: 10.1002/jaba.830.
Abstract
A portion of children diagnosed with autism spectrum disorder (ASD) have difficulty acquiring conditional discriminations. However, previous researchers have suggested that discriminations of nonverbal auditory stimuli may be acquired more efficiently (Eikeseth & Hayward, 2009; Uwer et al., 2002). For example, a child may learn to touch a picture of a piano more quickly after hearing the musical instrument than when the auditory stimulus is the spoken word "piano." The purpose of the present study was to extend previous research by assessing the acquisition of conditional discriminations with sample stimuli presented as either automated spoken words or high- and low-disparity nonverbal auditory stimuli (i.e., environmental sounds). Conditional discriminations with high-disparity environmental sounds as sample stimuli were acquired when those trained with low-disparity environmental sounds and words as sample stimuli were not, or were acquired more efficiently.
Affiliation(s)
- Mary Halbur
- University of Nebraska Medical Center's Munroe-Meyer Institute
8. Wöhner S, Jescheniak JD, Mädebach A. Semantic interference is not modality specific: Evidence from sound naming with distractor pictures. Q J Exp Psychol (Hove) 2020; 73:2290-2308. PMID: 32640868; DOI: 10.1177/1747021820943130.
Abstract
In three experiments, participants named environmental sounds (e.g., the bleating of a sheep by producing the word "sheep") in the presence of distractor pictures. In Experiment 1, we observed faster responses in sound naming with congruent pictures (e.g., sheep; congruency facilitation) and slower responses with semantically related pictures (e.g., donkey; semantic interference), each compared with unrelated pictures (e.g., violin). In Experiments 2 and 3, we replicated these effects and used a psychological refractory period approach (combining an arrow decision or letter rotation task as Task 1 with sound naming as Task 2) to investigate the locus of the effects. Congruency facilitation was underadditive with dual-task interference suggesting that it arises, in part, during pre-central processing stages in sound naming (i.e., sound identification). In contrast, semantic interference was additive with dual-task interference suggesting that it arises during central (or post-central) processing stages in sound naming (i.e., response selection or later processes). These results demonstrate the feasibility of sound naming tasks for chronometric investigations of word production. Furthermore, they highlight that semantic interference is not restricted to the use of target pictures and distractor words but can be observed with quite different target-distractor configurations. The experiments support the view that congruency facilitation and semantic interference reflect some general cognitive mechanism involved in word production. These results are discussed in the context of the debate about semantic-lexical selection mechanisms in word production.
Affiliation(s)
- Stefan Wöhner
- Institut für Psychologie - Wilhelm Wundt, Universität Leipzig, Leipzig, Germany
- Jörg D Jescheniak
- Institut für Psychologie - Wilhelm Wundt, Universität Leipzig, Leipzig, Germany
- Andreas Mädebach
- Institut für Psychologie - Wilhelm Wundt, Universität Leipzig, Leipzig, Germany
- Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain
9. Burns T, Rajan R. A Mathematical Approach to Correlating Objective Spectro-Temporal Features of Non-linguistic Sounds With Their Subjective Perceptions in Humans. Front Neurosci 2019; 13:794. PMID: 31417350; PMCID: PMC6685481; DOI: 10.3389/fnins.2019.00794.
Abstract
Non-linguistic sounds (NLSs) are a core feature of our everyday life, and many evoke powerful cognitive and emotional outcomes. The subjective perception of NLSs by humans has occasionally been defined for single percepts, e.g., their pleasantness, whereas many NLSs evoke multiple perceptions. There has also been little attempt to determine whether NLS perceptions can be predicted from objective spectro-temporal features. We therefore examined three human perceptions well established in previous NLS studies ("Complexity," "Pleasantness," and "Familiarity"), and the accuracy of identification, for a large NLS database, and related these four measures to objective spectro-temporal NLS features defined using rigorous mathematical descriptors, including stimulus entropic and algorithmic complexity measures, peaks-related measures, fractal dimension estimates, and various spectral measures (mean spectral centroid, power in discrete frequency ranges, harmonicity, spectral flatness, and spectral structure). We mapped the perceptions to the spectro-temporal measures individually and in combinations, using complex multivariate analyses including principal component analyses and agglomerative hierarchical clustering.
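Two of the spectral measures named above, mean spectral centroid and spectral flatness, have standard definitions that can be sketched directly; this is a generic illustration on synthetic signals, not the authors' exact pipeline:

```python
import numpy as np

def spectral_centroid(signal, sr):
    """Magnitude-weighted mean frequency of the spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return np.sum(freqs * mag) / np.sum(mag)

def spectral_flatness(signal):
    """Geometric mean over arithmetic mean of the power spectrum:
    near 0 for tonal sounds, much higher for noise-like sounds."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    power = power[power > 0]  # drop empty bins so the log is defined
    return np.exp(np.mean(np.log(power))) / np.mean(power)

sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)                    # tonal NLS stand-in
noise = np.random.default_rng(0).standard_normal(sr)  # noise-like NLS stand-in

# The tone's centroid sits at its frequency (440 Hz); its flatness is far
# below that of the broadband noise.
print(spectral_centroid(tone, sr), spectral_flatness(tone), spectral_flatness(noise))
```

Measures like these, computed per stimulus, form the feature matrix that analyses such as PCA or hierarchical clustering then operate on.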
Affiliation(s)
- Ramesh Rajan
- Biomedicine Discovery Institute, Monash University, Melbourne, VIC, Australia
10. Hendrickson K, Love T, Walenski M, Friend M. The organization of words and environmental sounds in the second year: Behavioral and electrophysiological evidence. Dev Sci 2019; 22:e12746. PMID: 30159958; PMCID: PMC6294716; DOI: 10.1111/desc.12746.
Abstract
The majority of research examining early auditory-semantic processing and organization is based on studies of meaningful relations between words and referents. However, a thorough investigation into the fundamental relation between acoustic signals and meaning requires an understanding of how meaning is associated with both lexical and non-lexical sounds. Indeed, it is unknown how meaningful auditory information that is not lexical (e.g., environmental sounds) is processed and organized in the young brain. To capture the structure of semantic organization for words and environmental sounds, we record event-related potentials as 20-month-olds view images of common nouns (e.g., dog) while hearing words or environmental sounds that match the picture (e.g., "dog" or barking), that are within-category violations (e.g., "cat" or meowing), or that are between-category violations (e.g., "pen" or scribbling). Results show both words and environmental sounds exhibit larger negative amplitudes to between-category violations relative to matches. Unlike words, which show a greater negative response early and consistently to within-category violations, such an effect for environmental sounds occurs late in semantic processing. Thus, as in adults, the young brain represents semantic relations between words and between environmental sounds, though it more readily differentiates semantically similar words compared to environmental sounds.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences & Disorders, University of Iowa, USA
- Tracy Love
- Center for Research in Language, University of California, San Diego, USA
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
11. Hanna-Pladdy B, Choi H, Herman B, Haffey S. Audiovisual Lexical Retrieval Deficits Following Left Hemisphere Stroke. Brain Sci 2018; 8:E206. PMID: 30486517; DOI: 10.3390/brainsci8120206.
Abstract
Binding sensory features of multiple modalities of what we hear and see allows formation of a coherent percept to access semantics. Previous work on object naming has focused on visual confrontation naming with limited research in nonverbal auditory or multisensory processing. To investigate neural substrates and sensory effects of lexical retrieval, we evaluated healthy adults (n = 118) and left hemisphere stroke patients (LHD, n = 42) in naming manipulable objects across auditory (sound), visual (picture), and multisensory (audiovisual) conditions. LHD patients were divided into cortical, cortical–subcortical, or subcortical lesions (CO, CO–SC, SC), and specific lesion location investigated in a predictive model. Subjects produced lower accuracy in auditory naming relative to other conditions. Controls demonstrated greater naming accuracy and faster reaction times across all conditions compared to LHD patients. Naming across conditions was most severely impaired in CO patients. Both auditory and visual naming accuracy were impacted by temporal lobe involvement, although auditory naming was sensitive to lesions extending subcortically. Only controls demonstrated significant improvement over visual naming with the addition of auditory cues (i.e., multisensory condition). Results support overlapping neural networks for visual and auditory modalities related to semantic integration in lexical retrieval and temporal lobe involvement, while multisensory integration was impacted by both occipital and temporal lobe lesion involvement. The findings support modality specificity in naming and suggest that auditory naming is mediated by a distributed cortical–subcortical network overlapping with networks mediating spatiotemporal aspects of skilled movements producing sound.
12. Aletta F, Kang J. Towards an Urban Vibrancy Model: A Soundscape Approach. Int J Environ Res Public Health 2018; 15:1712. PMID: 30103394; PMCID: PMC6122032; DOI: 10.3390/ijerph15081712.
Abstract
Soundscape research needs to develop predictive tools for environmental design. A number of descriptor-indicator(s) models have been proposed so far, particularly for the “tranquility” dimension to manage “quiet areas” in urban contexts. However, there is a current lack of models addressing environments offering actively engaging soundscapes, i.e., the “vibrancy” dimension. The main aim of this study was to establish a predictive model for a vibrancy descriptor based on physical parameters, which could be used by designers and practitioners. A group interview was carried out to formulate a hypothesis on what elements would be influential for vibrancy perception. Afterwards, data on vibrancy perception were collected for different locations in the UK and China through a laboratory experiment and their physical parameters were used as indicators to establish a predictive model. Such indicators included both aural and visual parameters. The model, based on Roughness, Presence of People, Fluctuation Strength, Loudness and Presence of Music as predictors, explained 76% of the variance in the mean individual vibrancy scores. A statistically significant correlation was found between vibrancy scores and eventfulness scores, but not between vibrancy scores and pleasantness scores. Overall results showed that vibrancy is contextual and depends both on the soundscape and on the visual scenery.
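The descriptor-indicator model above is, in effect, a multiple regression of mean vibrancy scores on aural and visual indicators. A sketch with ordinary least squares on made-up data; the indicator names follow the abstract, but the coefficients and values are hypothetical, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 60  # hypothetical number of assessed locations

# Hypothetical per-location indicator values (the study measured these
# from recordings and scenes): Roughness, Presence of People,
# Fluctuation Strength, Loudness, Presence of Music.
X = rng.uniform(0.0, 1.0, size=(n, 5))

# Hypothetical "true" relation plus rating noise, standing in for the
# mean individual vibrancy scores collected in the laboratory experiment.
y = X @ np.array([0.8, 0.6, 0.4, -0.3, 0.5]) + rng.normal(0.0, 0.15, n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2 = share of variance in the scores explained by the indicators
# (the paper reports 76% for its model).
resid = y - A @ coef
r2 = 1.0 - resid.var() / y.var()
print(f"R^2 = {r2:.2f}")
```

The "explained 76% of the variance" figure in the abstract is exactly this kind of R^2 statistic, computed for the authors' chosen five predictors.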
Affiliation(s)
- Francesco Aletta
- UCL Institute for Environmental Design and Engineering, The Bartlett, University College London (UCL), Central House, 14 Upper Woburn Place, London WC1H 0NN, UK.
- Jian Kang
- UCL Institute for Environmental Design and Engineering, The Bartlett, University College London (UCL), Central House, 14 Upper Woburn Place, London WC1H 0NN, UK.
13
Cornell Kärnekull S, Arshamian A, Nilsson ME, Larsson M. The Effect of Blindness on Long-Term Episodic Memory for Odors and Sounds. Front Psychol 2018; 9:1003. [PMID: 29973898] [PMCID: PMC6020764] [DOI: 10.3389/fpsyg.2018.01003]
Abstract
We recently showed that, compared with sighted individuals, early blind individuals have better episodic memory for environmental sounds, but not odors, after a short retention interval (~8–9 min). Few studies have investigated potential effects of blindness on memory across long time frames, such as months or years, so it was unclear whether compensatory effects vary as a function of retention interval. In this study, we followed up participants (N = 57 out of 60) approximately 1 year after the initial testing and retested episodic recognition of environmental sounds and odors, as well as identification ability. In contrast to our previous findings, the early blind participants (n = 14) performed at a level similar to the late blind (n = 13) and sighted (n = 30) participants for sound recognition. Moreover, the groups had similar recognition performance for odors and similar identification ability for odors and sounds. These findings suggest that episodic odor memory is unaffected by blindness after both short and long retention intervals. However, the effect of blindness on episodic memory for sounds may vary as a function of retention interval, such that early blind individuals have an advantage over sighted individuals across short but not long time frames. We speculate that this differential effect across retention intervals may be related to different memory strategies at the initial and follow-up assessments. In conclusion, this study suggests that blindness does not influence auditory or olfactory episodic memory as assessed after a long retention interval.
Affiliation(s)
- Artin Arshamian
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden; Division of Psychology, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Center for Language Studies, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Mats E Nilsson
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
- Maria Larsson
- Gösta Ekman Laboratory, Department of Psychology, Stockholm University, Stockholm, Sweden
14
Delogu F, Lilla CC. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory. Memory 2017; 25:1340-1346. [PMID: 28287018] [DOI: 10.1080/09658211.2017.1300668]
Abstract
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the locations of a sequence of items. In four separate blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to distinguish old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli had been presented during encoding. In the first block, participants were not aware of the spatial requirement, while in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of the spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and
Affiliation(s)
- Franco Delogu
- Department of Humanities, Social Sciences, and Communication, College of Arts and Sciences, Lawrence Technological University, Southfield, MI, USA
- Christopher C Lilla
- Department of Humanities, Social Sciences, and Communication, College of Arts and Sciences, Lawrence Technological University, Southfield, MI, USA
15
Abstract
Actions that produce sounds infuse our daily lives. Some of these sounds are a natural consequence of physical interactions (such as a clang resulting from dropping a pan), but others are artificially designed (such as a beep resulting from a keypress). Although the relationship between actions and sounds has previously been examined, the frame of reference of these associations is still unknown, despite it being a fundamental property of a psychological representation. For example, when an association is created between a keypress and a tone, it is unclear whether the frame of reference is egocentric (gesture-sound association) or exocentric (key-sound association). This question is especially important for artificially created associations, which occur in technology that pairs sounds with actions, such as gestural interfaces, virtual or augmented reality, and simple buttons that produce tones. The frame of reference could directly influence the learnability, the ease of use, the extent of immersion, and many other factors of the interaction. To explore whether action-sound associations are egocentric or exocentric, an experiment was implemented using a computer keyboard’s number pad wherein moving a finger from one key to another produced a sound, thus creating an action-sound association. Half of the participants received egocentric instructions to move their finger with a particular gesture. The other half of the participants received exocentric instructions to move their finger to a particular number on the keypad. All participants were performing the same actions, and only the framing of the action varied between conditions by altering task instructions. Participants in the egocentric condition learned the gesture-sound association, as revealed by a priming paradigm. However, the exocentric condition showed no priming effects. This finding suggests that action-sound associations are egocentric in nature. 
A second part of the same session further confirmed the egocentric nature of these associations by showing no change in the priming effect after moving to a different starting location. Our findings are consistent with an egocentric representation of action-sound associations, which could have implications for applications that utilize these associations.
Affiliation(s)
- Nicole Navolio
- Auditory Perception Lab, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Human-Computer Interaction, Carnegie Mellon University, Pittsburgh, PA, USA
- Guillaume Lemaitre
- Auditory Perception Lab, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Alain Forget
- CyLab Usable Privacy and Security Research Group, Carnegie Mellon University, Pittsburgh, PA, USA
- Laurie M Heller
- Auditory Perception Lab, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
16
Abstract
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena, such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves, they also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin's hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.
17
Tomasino B, Canderan C, Marin D, Maieron M, Gremese M, D'Agostini S, Fabbro F, Skrap M. Identifying environmental sounds: a multimodal mapping study. Front Hum Neurosci 2015; 9:567. [PMID: 26539096] [PMCID: PMC4612670] [DOI: 10.3389/fnhum.2015.00567]
Abstract
Our environment is full of auditory events, such as warnings or hazards, and their correct recognition is essential. We explored environmental sound (ES) recognition in a series of studies. In study 1 we performed an Activation Likelihood Estimation (ALE) meta-analysis of neuroimaging experiments addressing ES processing to delineate the network of areas consistently involved. Areas consistently activated in the ALE meta-analysis were the STG/MTG, insula/rolandic operculum, parahippocampal gyrus and inferior frontal gyrus, bilaterally. Some of these areas truly reflect ES processing, whereas others are related to design choices, e.g., type of task, type of control condition, or type of stimulus. In study 2 we report on seven neurosurgical patients with lesions involving the areas identified by the ALE meta-analysis. We tested their ES recognition abilities and found an impairment of ES recognition. These results indicate that deficits of ES recognition do not exclusively reflect lesions to the right or to the left hemisphere; both hemispheres are involved. The most frequently lesioned area was the hippocampus/insula/STG. We made sure that any impairment in ES recognition was not related to language problems but reflected impaired ES processing. In study 3 we carried out an fMRI study on patients (vs. healthy controls) to investigate how the areas involved in ES processing might be functionally deregulated by a lesion. The fMRI showed that controls activated the right IFG, the STG bilaterally and the left insula. Applying a multimodal mapping approach, we found that, although the meta-analysis suggested that part of the left and right STG/MTG activation during ES processing might be related to design choices, this area was one of the most frequently lesioned in our patients, highlighting its causal role in ES processing. The ROIs we drew on the two clusters of activation in the left and right STG overlapped with the lesions of at least 4 of the 7 patients, indicating that the lack of STG activation in patients is related to brain damage and that this region is crucial for explaining the ES deficit.
Affiliation(s)
- Barbara Tomasino
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Cinzia Canderan
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Dario Marin
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Marta Maieron
- Fisica Medica, A.O.S. Maria della Misericordia, Udine, Italy
- Michele Gremese
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Serena D'Agostini
- Unità Operativa di Neuroradiologia, A.O.S. Maria della Misericordia, Udine, Italy
- Franco Fabbro
- Istituto di Ricovero e Cura a Carattere Scientifico “E. Medea”, Polo Regionale del Friuli Venezia Giulia, Udine, Italy
- Miran Skrap
- Unità Operativa di Neurochirurgia, A.O.S. Maria della Misericordia, Udine, Italy
18
Abstract
Music is a complex acoustic signal that relies on a number of different brain and cognitive processes to create the sensation of hearing. Changes in hearing function are generally not a major focus of concern for persons with a majority of neurodegenerative diseases associated with dementia, such as Alzheimer disease (AD). However, changes in the processing of sounds may be an early, and possibly preclinical, feature of AD and other neurodegenerative diseases. The aim of this chapter is to review the current state of knowledge concerning hearing and music perception in persons who have a dementia as a result of a neurodegenerative disease. The review focuses on both peripheral and central auditory processing in common neurodegenerative diseases, with a particular focus on the processing of music and other non-verbal sounds. The chapter also reviews music interventions used for persons with neurodegenerative diseases.
Affiliation(s)
- Julene K Johnson
- Institute for Health and Aging, University of California, San Francisco, CA, USA
- Maggie L Chow
- School of Medicine, University of California, San Francisco, CA, USA
19
Cossy N, Tzovara A, Simonin A, Rossetti AO, De Lucia M. Robust discrimination between EEG responses to categories of environmental sounds in early coma. Front Psychol 2014; 5:155. [PMID: 24611061] [PMCID: PMC3933775] [DOI: 10.3389/fpsyg.2014.00155]
Abstract
Humans can recognize categories of environmental sounds, including vocalizations produced by humans and animals and the sounds of man-made objects. Most neuroimaging investigations of environmental sound discrimination have studied subjects while they consciously perceived, and often explicitly recognized, the stimuli. Consequently, it remains unclear to what extent auditory object processing occurs independently of task demands and consciousness. Studies in animal models have shown that environmental sound discrimination at a neural level persists even in anesthetized preparations, whereas data from anesthetized humans have thus far provided null results. Here, we studied comatose patients as a model of environmental sound discrimination capacities during unconsciousness. We included 19 comatose patients treated with therapeutic hypothermia (TH), recording nineteen-channel electroencephalography (EEG) during the first 2 days of coma. At the level of each individual patient, we applied a decoding algorithm to quantify the differential EEG responses to human vs. animal vocalizations, as well as to sounds of living sources vs. man-made objects. Discrimination between vocalization types was accurate in 11 patients, and discrimination between sounds from living and man-made sources in 10 patients. At the group level, the results were significant only for the comparison between vocalization types. These results lay the groundwork for disentangling truly preferential activations in response to auditory categories from the contribution of awareness to auditory category discrimination.
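The patient-level analysis rests on single-trial EEG decoding between two sound categories. As a minimal sketch of that general idea — synthetic 19-channel trials, leave-one-out cross-validation, and a simple nearest-class-mean rule standing in for the authors' actual classifier — the procedure could look like:

```python
import numpy as np

# Generic single-trial decoding sketch (NOT the authors' algorithm): classify
# synthetic EEG trials as one of two sound categories (e.g., human vs. animal
# vocalizations) with a nearest-class-mean rule and leave-one-out validation.

rng = np.random.default_rng(1)
n_trials, n_channels, n_samples = 80, 19, 50

# Synthetic trials: class 1 carries a small evoked deflection on a few channels
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :4, 20:30] += 0.8  # class-specific response, channels 0-3

feats = X.reshape(n_trials, -1)  # flatten channels x time into one vector

# Leave-one-out cross-validation
correct = 0
for i in range(n_trials):
    train = np.delete(np.arange(n_trials), i)
    m0 = feats[train][y[train] == 0].mean(axis=0)
    m1 = feats[train][y[train] == 1].mean(axis=0)
    pred = int(np.linalg.norm(feats[i] - m1) < np.linalg.norm(feats[i] - m0))
    correct += pred == y[i]

accuracy = correct / n_trials
print(f"decoding accuracy: {accuracy:.2f}")
```

In the study, per-patient accuracies of this kind were then tested against chance to decide whether discrimination was preserved during coma.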
Affiliation(s)
- Natacha Cossy
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Athina Tzovara
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Alexandre Simonin
- Department of Clinical Neurosciences, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Andrea O Rossetti
- Department of Clinical Neurosciences, University Hospital Center, University of Lausanne, Lausanne, Switzerland
- Marzia De Lucia
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM), University Hospital Center, University of Lausanne, Lausanne, Switzerland; Department of Radiology, University Hospital Center, University of Lausanne, Lausanne, Switzerland
20
Abstract
Through evaluative conditioning (EC), a stimulus can acquire an affective value by being paired with another affective stimulus. While many sounds we encounter daily have acquired an affective value over a lifetime, EC has hardly been tested in the auditory domain. To gain a more complete understanding of affective processing in the auditory domain, we examined EC of sound. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruency effects on an affective priming task for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether extinction occurs, i.e., whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results provide clear evidence for EC effects in the auditory domain. We argue that both associative and propositional processes are likely to underlie these effects.
Affiliation(s)
- Anna C Bolders
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
21
Cummings A, Ceponiene R. Verbal and nonverbal semantic processing in children with developmental language impairment. Neuropsychologia 2010; 48:77-85. [PMID: 19698728] [PMCID: PMC2794944] [DOI: 10.1016/j.neuropsychologia.2009.08.012]
Abstract
In an effort to clarify whether semantic integration is impaired in verbal and nonverbal auditory domains in children with developmental language impairment (a.k.a., LI and SLI), the present study obtained behavioral and neural responses to words and environmental sounds in children with language impairment and their typically developing age-matched controls (ages 7-15 years). Event-related brain potentials (ERPs) were recorded while children performed a forced-choice matching task on semantically matching and mismatching visual-auditory, picture-word and picture-environmental sound pairs. Behavioral accuracy and reaction time measures were similar for both groups of children, with environmental sounds eliciting more accurate responses than words. In picture-environmental sound trials, behavioral performance and the brain's response to semantic incongruency (i.e., the N400 effect) of the children with language impairment were comparable to those of their typically developing peers. However, in picture-word trials, children with LI tended to be less accurate than their controls and their N400 effect was significantly delayed in latency. Thus, the children with LI demonstrated a semantic integration deficit that was somewhat specific to the verbal domain. The particular finding of a delayed N400 effect is consistent with the storage deficit hypothesis of language impairment (Kail & Leonard, 1986) suggesting weakened and/or less efficient connections within the language networks of children with LI.
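The N400 effect discussed above is conventionally quantified as the amplitude difference between ERPs to mismatching and matching pairs within a post-stimulus time window. A sketch on synthetic data — the window, sampling rate, trial counts, and amplitudes below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Illustration of quantifying an N400 congruency effect: the difference
# between average ERPs to semantically mismatching vs. matching trials,
# averaged over a 300-500 ms window. All numbers here are synthetic.

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.8, 1 / fs)           # epoch from -100 ms to 800 ms

def erp(n400_amp, n_trials=60, seed=0):
    """Average ERP with a negative deflection peaking near 400 ms."""
    rng = np.random.default_rng(seed)
    component = -n400_amp * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    trials = component + rng.standard_normal((n_trials, t.size))
    return trials.mean(axis=0)

match_erp = erp(n400_amp=1.0, seed=1)      # small N400 to congruent pairs
mismatch_erp = erp(n400_amp=4.0, seed=2)   # larger N400 to incongruent pairs

win = (t >= 0.3) & (t <= 0.5)              # 300-500 ms analysis window
n400_effect = (mismatch_erp - match_erp)[win].mean()
print(f"N400 effect: {n400_effect:.2f} (negative = typical effect)")
```

A delayed N400 effect, as reported for the picture-word trials in children with LI, would show up in such an analysis as the difference wave reaching its peak later, shifting the window in which the effect is maximal.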
Affiliation(s)
- Alycia Cummings
- San Diego State University/University of California, San Diego Joint Doctoral Program in Language and Communicative Disorders, San Diego, CA, USA
22
Bidet-Caulet A, Ye XL, Bouchet P, Guénot M, Fischer C, Bertrand O. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy. Front Hum Neurosci 2009; 3:42. [PMID: 20011222] [PMCID: PMC2791036] [DOI: 10.3389/neuro.09.042.2009]
Abstract
For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) – i.e., surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri – is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, which allowed us to evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention and in the identification and short-term memorisation of environmental sounds, but were not impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.