1
Cosper SH, Männel C, Mueller JL. Auditory associative word learning in adults: The effects of musical experience and stimulus ordering. Brain Cogn 2024;180:106207. PMID: 39053199. DOI: 10.1016/j.bandc.2024.106207.
Abstract
Evidence for sequential associative word learning in the auditory domain has been identified in infants, whereas adults have shown difficulties. To better understand which factors may facilitate auditory associative word learning in adults, we assessed the role of auditory expertise as a learner-related property and of stimulus order as a stimulus-related manipulation in the association of auditory objects and novel labels. In the first experiment, we tested auditorily trained musicians against athletes (a high-level control group); in the second experiment, we manipulated stimulus ordering, contrasting object-label with label-object presentation. Learning was evaluated from event-related potentials (ERPs) during training and subsequent testing phases using a cluster-based permutation approach, as well as from accuracy-judgement responses during test. For musicians, results revealed a late positive component in the ERP during testing, but neither an N400 (400-800 ms) nor behavioral effects were found at test, while athletes did not show any effect of learning. Moreover, the object-label-ordering group exhibited only emerging association effects during training, while the label-object-ordering group showed a trend-level late ERP effect (800-1200 ms) during test as well as above-chance accuracy-judgement scores. Our results thus suggest that the learner-related property of auditory expertise and the stimulus-related manipulation of stimulus ordering modulate auditory associative word learning in adults.
Affiliation(s)
- Samuel H Cosper, Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Claudia Männel, Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Berlin, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jutta L Mueller, Department of Linguistics, University of Vienna, Vienna, Austria
2
Bechtold L, Cosper SH, Malyshevskaya A, Montefinese M, Morucci P, Niccolai V, Repetto C, Zappa A, Shtyrov Y. Brain Signatures of Embodied Semantics and Language: A Consensus Paper. J Cogn 2023;6:61. PMID: 37841669. PMCID: PMC10573703. DOI: 10.5334/joc.237.
Abstract
According to embodied theories (including embodied, embedded, extended, enacted, situated, and grounded approaches to cognition), language representation is intrinsically linked to our interactions with the world around us, which is reflected in specific brain signatures during language processing and learning. Moving on from the original rivalry of embodied vs. amodal theories, this consensus paper addresses a series of carefully selected questions that aim at determining when and how rather than whether motor and perceptual processes are involved in language processes. We cover a wide range of research areas, from the neurophysiological signatures of embodied semantics, e.g., event-related potentials and fields as well as neural oscillations, to semantic processing and semantic priming effects on concrete and abstract words, to first and second language learning and, finally, the use of virtual reality for examining embodied semantics. Our common aim is to better understand the role of motor and perceptual processes in language representation as indexed by language comprehension and learning. We come to the consensus that, based on seminal research conducted in the field, future directions now call for enhancing the external validity of findings by acknowledging the multimodality, multidimensionality, flexibility and idiosyncrasy of embodied and situated language and semantic processes.
Affiliation(s)
- Laura Bechtold, Institute for Experimental Psychology, Department for Biological Psychology, Heinrich-Heine University Düsseldorf, Germany
- Samuel H. Cosper, Institute of Cognitive Science, University of Osnabrück, Germany
- Anastasia Malyshevskaya, Centre for Cognition and Decision making, Institute for Cognitive Neuroscience, HSE University, Russian Federation; Potsdam Embodied Cognition Group, Cognitive Sciences, University of Potsdam, Germany
- Valentina Niccolai, Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University Düsseldorf, Germany
- Claudia Repetto, Department of Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Ana Zappa, Laboratoire parole et langage, Aix-Marseille Université, Aix-en-Provence, France
- Yury Shtyrov, Centre for Cognition and Decision making, Institute for Cognitive Neuroscience, HSE University, Russian Federation; Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Denmark
3
Online mouse cursor trajectories distinguish phonological activation by linguistic and nonlinguistic sounds. Psychon Bull Rev 2023;30:362-372. PMID: 35882722. PMCID: PMC9971122. DOI: 10.3758/s13423-022-02153-6.
Abstract
Four online mouse cursor tracking experiments (total N = 208) examined the activation of phonological representations by linguistic and nonlinguistic auditory stimuli. Participants hearing spoken words (e.g., "bell") produced less direct mouse cursor trajectories toward corresponding pictures or text when visual arrays also included phonologically related competitors (e.g., belt) rather than unrelated distractors (e.g., hose), but no such phonological competition was observed for environmental sounds (e.g., the ring of a bell). While important similarities have been observed between spoken words and environmental sounds, these experiments provide novel mouse cursor evidence that, in contrast to spoken words, environmental sounds activate conceptual knowledge directly, without needing to engage linguistic knowledge. Implications for theories of conceptual knowledge are discussed.
4
Chow J, Angulo-Chavira AQ, Spangenberg M, Hentrup L, Plunkett K. Bottom-up processes dominate early word recognition in toddlers. Cognition 2022;228:105214. PMID: 35810512. DOI: 10.1016/j.cognition.2022.105214.
Abstract
This study set out to investigate whether the 'phonological onset preference effect' often reported in adult studies using the visual world task (i.e., increased attention to an object that is phonologically related to a spoken target word, such as boat-bear) is also contingent upon toddler participants having sufficient preview time to inspect the picture stimuli. Picture preview is thought to support the activation of phonological codes, which can then be matched to the phonological representations extracted from incoming speech signals, supporting the 'phonological mapping hypothesis'. We found that both toddlers and adults showed an early phonological onset preference in short preview conditions, although adults' early phonological onset preference in the short preview condition was extinguished by the presence of a semantic competitor, replicating previous adult findings (Huettig & McQueen, 2007). Removal of the semantic competitor reinstated the phonological onset preference effect under short preview conditions for adults. Our findings indicate that toddlers are driven more by bottom-up, phonological information when selecting a referent in a visual world task, whereas adults are more inclined to exploit top-down, semantic information when directing their attention to a visual object, especially when preview time is insufficient. We propose that, when implicit naming is improbable in short preview conditions, the phonological onset preference effect is driven by mapping at the visual-semantic level, which is more susceptible to top-down influences.
Affiliation(s)
- Janette Chow, Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Armando Q Angulo-Chavira, Laboratorio de Psicolingüística, Facultad de Psicología, Universidad Nacional Autónoma de Mexico, Mexico
- Marlene Spangenberg, Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Leonie Hentrup, Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
- Kim Plunkett, Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG, UK
5
Röders D, Klepp A, Schnitzler A, Biermann-Ruben K, Niccolai V. Induced and Evoked Brain Activation Related to the Processing of Onomatopoetic Verbs. Brain Sci 2022;12:481. PMID: 35448012. PMCID: PMC9029984. DOI: 10.3390/brainsci12040481.
Abstract
Grounded cognition theory postulates that cognitive processes with motor or sensory content are handled by the brain networks involved in motor execution and perception, respectively. Processing words with auditory features has been shown to activate the auditory cortex. Our study aimed to determine whether onomatopoetic verbs (e.g., "tröpfeln", to drip), whose articulation reproduces the sound of the respective actions, engage the auditory cortex more than non-onomatopoetic verbs. Alpha and beta brain frequencies as well as event-related fields (ERFs) were targeted as potential neurophysiological correlates of this linguistic auditory quality. Twenty participants were measured with magnetoencephalography (MEG) while semantically processing visually presented onomatopoetic and non-onomatopoetic German verbs. While a descriptively stronger left temporal alpha desynchronization for onomatopoetic verbs did not reach statistical significance, a larger ERF for onomatopoetic verbs emerged at about 240 ms in the centro-parietal area. The findings suggest increased cortical activation related to onomatopoeias in linguistically relevant areas.
Affiliation(s)
- Dorian Röders (corresponding author), Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany; Neural Basis of Learning Lab, Institute for Cognitive Neuroscience, Faculty of Psychology, Ruhr University, 44801 Bochum, Germany
- Anne Klepp, Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany
- Alfons Schnitzler, Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany
- Katja Biermann-Ruben, Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany
- Valentina Niccolai, Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich-Heine University, 40225 Duesseldorf, Germany
6
Mechanisms of associative word learning: Benefits from the visual modality and synchrony of labeled objects. Cortex 2022;152:36-52. DOI: 10.1016/j.cortex.2022.03.020.
7
Implicit auditory perception of local and global irregularities in passive listening condition. Neuropsychologia 2021;165:108129. PMID: 34929262. DOI: 10.1016/j.neuropsychologia.2021.108129.
Abstract
The auditory system detects differences between sounds at an implicit level, but this information may not be sufficient for explicit discrimination. Biomarkers of implicit auditory memory for ambiguous stimuli could shed light on unconscious auditory processing and implicit auditory learning. Mismatch negativity (MMN) and P3a, components of event-related potentials (ERPs) reflecting stimulus discrimination without direct attention, have previously been detected in response to local (short-term) irregularity in an auditory sequence even in an unconscious state. In contrast, P3b was elicited in response to global (long-term) irregularity only under direct attention. In this study, we applied the local-global auditory paradigm to obtain possible electrophysiological signatures of implicit detection of hardly distinguishable auditory stimuli. ERPs were recorded from 20 healthy volunteers during active discrimination of deviant sounds in an oddball sequence and passive listening to the same sounds in a sequence with local-global irregularity. The discrimination task consisted of two blocks with different deviant sounds as targets. Sound discrimination accuracy averaged 40%, implying that explicit sound recognition was difficult. Comparing ERPs to standard and deviant sounds, we found a posterior negativity around 450-600 ms in response to target deviant sounds. MMN was significant only in response to non-target deviants. In the passive local-global paradigm, we observed an anterior positivity (284-412 ms), compatible with P3a, in response to violations of local regularity. Violations of global regularity elicited an anterior negative response (228-586 ms), resembling the N400 component of the ERP. Importantly, the other indices of auditory discrimination, MMN and P3b, were not significant in ERPs to either regularity violation. The observed P3a and N400 components may reflect prediction-error signals in the implicit perception of sound patterns even when behavioral recognition is poor.
8
Manfredi M, Sanchez Mello de Pinho P, Murrins Marques L, de Oliveira Ribeiro B, Boggio PS. Crossmodal processing of environmental sounds and everyday life actions: An ERP study. Heliyon 2021;7:e07937. PMID: 34541349. PMCID: PMC8436072. DOI: 10.1016/j.heliyon.2021.e07937.
Abstract
To investigate the processing of environmental sounds, previous researchers have compared the semantic processing of words and sounds, yielding mixed results. This study aimed to investigate the electrophysiological mechanisms underlying the semantic processing of environmental sounds presented in a naturalistic visual scene. We recorded event-related brain potentials in a group of young adults during the presentation of everyday life actions that were either congruent or incongruent with environmental sounds. Our results showed that incongruent environmental sounds evoked both a P400 and an N400 effect, reflecting sensitivity to physical and semantic violations of environmental sounds' properties, respectively. In addition, our findings showed an enhanced late positivity in response to incongruous environmental sounds, probably reflecting additional reanalysis costs. In conclusion, these results indicate that crossmodal processing of environmental sounds may require the simultaneous involvement of different cognitive processes.
Affiliation(s)
- Mirella Manfredi (corresponding author), Department of Psychology, University of Zurich, Zurich, Switzerland
- Pamella Sanchez Mello de Pinho, Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Lucas Murrins Marques, Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Beatriz de Oliveira Ribeiro, Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Paulo Sergio Boggio (corresponding author), Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
9
Coebergh JAF, McDowell S, van Woerkom TCAM, Koopman JP, Mulder J, Bruijn SFTM. Auditory Agnosia for Environmental Sounds in Alzheimer's Disease: Not Hearing and Not Listening? J Alzheimers Dis 2021;73:1407-1419. PMID: 31958091. DOI: 10.3233/JAD-190431.
Abstract
Auditory agnosia for environmental sounds (AES) is an example of central auditory dysfunction. It is presumed to occur independently of language deficits and in the presence of normal hearing. We undertook a detailed neuropsychological assessment, including environmental sound naming and recognition, in 34 clinically mild Alzheimer's disease (AD) patients and 29 age-matched healthy control subjects. In patients with AD, audiometry was performed to assess the impact of hearing on test performance; in normal controls, the Hearing Handicap Inventory for the Elderly - Screening Version was used to exclude more than mild hearing loss. We adapted a validated environmental sound battery and found near-perfect scores in controls. We found that environmental sound agnosia is common in mild AD. Mean pure-tone audiometry in the best ear differed significantly by 11.3 dB (p = 0.010) between patients with and without naming deficits, and by 14.7 dB (p < 0.001) between those with and without recognition deficits. Statistical significance remained after correcting for age, aphasia, Mini-Mental State Examination score, and working memory. Slight and moderate peripheral hearing loss increased the odds of recognition deficits by a factor of 13.75 (confidence interval 2.3-81.5) compared to normal-hearing patients. We did not find evidence for different forms of AES. This work suggests that an interaction between peripheral hearing loss and AD pathology produces problems with environmental sound recognition. It confirms that the relationship between hearing and dementia is complex, but also suggests that interventions to prevent and treat hearing loss could affect the clinical expression of AD.
Affiliation(s)
- Jan A F Coebergh, Department of Neurology, HagaHospital, The Hague, The Netherlands; Department of Neurology, Ashford and St. Peter's Hospital, Chertsey, United Kingdom; Department of Neurology, St. George's Hospital, Tooting, United Kingdom
- Steven McDowell, Department of Neurology, HagaHospital, The Hague, The Netherlands
- Jan P Koopman, Department of Ear, Nose and Throat Surgery, HagaHospital, The Hague, The Netherlands
- Jacqueline Mulder, Department of Neuropsychology, HagaHospital, The Hague, The Netherlands
10
Olszewska J, Hodel A, Falkowski A, Woldt B, Bednarek H, Luttenberger D. Meaningful Versus Meaningless Sounds and Words. Exp Psychol 2021;68:4-17. PMID: 33843255. DOI: 10.1027/1618-3169/a000506.
Abstract
The current study assessed memory performance for perceptually similar environmental sounds and speech-based material after short and long delays. In two studies, we demonstrated a similar pattern of memory performance for sounds and words in short-term memory, yet in long-term memory, the performance patterns differed. Experiment 1 examined the effects of two different types of sounds: meaningful (MFUL) and meaningless (MLESS), whereas Experiment 2 assessed memory performance for words and nonwords. We utilized a modified version of the classical Deese-Roediger-McDermott (Deese, 1959; Roediger & McDermott, 1995) procedure and adjusted it to test the effects of acoustic similarities between auditorily presented stimuli. Our findings revealed no difference in memory performance between MFUL and MLESS sounds, and between words and nonwords after short delays. However, following long delays, greater reliance on meaning was noticed for MFUL sounds than MLESS sounds, while performance for linguistic material did not differ between words and nonwords. Importantly, participants' memory performance for words and nonwords was accompanied by a more lenient response strategy. The results are discussed in terms of perceptual and semantic similarities between MLESS and MFUL sounds, as well as between words and nonwords.
Affiliation(s)
- Amy Hodel, Department of Psychology, University of Wisconsin Oshkosh, WI, USA
- Bernadette Woldt, Department of Psychology, University of Wisconsin Oshkosh, WI, USA
- Hanna Bednarek, SWPS University of Social Sciences and Humanities, Warsaw, Poland
11
Cosper SH, Männel C, Mueller JL. In the absence of visual input: Electrophysiological evidence of infants' mapping of labels onto auditory objects. Dev Cogn Neurosci 2020;45:100821. PMID: 32658761. PMCID: PMC7358178. DOI: 10.1016/j.dcn.2020.100821.
Abstract
Despite the prominence of non-visual semantic features for some words (e.g., siren or thunder), little is known about when and how the meanings of words that refer to auditory objects can be acquired in early infancy. With associative learning being an important mechanism of word learning, we asked whether associations between sounds and words lead to learning effects similar to those of associations between visual objects and words. In an event-related potential (ERP) study, 10- to 12-month-old infants were presented with pairs of environmental sounds and pseudowords in either a consistent (where sound-word mapping can occur) or inconsistent manner. Subsequently, the infants were presented with sound-pseudoword combinations either matching or violating the consistent pairs from the training phase. In the training phase, we observed word-form familiarity effects and pairing-consistency effects for ERPs time-locked to word onset. The test phase revealed N400-like effects for violated pairs as compared to matching pairs. These results indicate that associative word learning is also possible for auditory objects before infants' first birthday. The specific temporal occurrence of the N400-like effect and the topographical distribution of the ERPs suggest that the object's modality has an impact on how novel words are processed.
Affiliation(s)
- Samuel H Cosper, Institute of Cognitive Science, University of Osnabrück, Germany
- Claudia Männel, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Germany
- Jutta L Mueller, Institute of Cognitive Science, University of Osnabrück, Germany; Department of Linguistics, University of Vienna, Austria
12
Adamson LB, Bakeman R, Suma K, Robins DL. Autism Adversely Affects Auditory Joint Engagement During Parent-toddler Interactions. Autism Res 2020;14:301-314. PMID: 32809260. DOI: 10.1002/aur.2355.
Abstract
This study documents the early adverse effects of autism spectrum disorder (ASD) on auditory joint engagement-the sharing of sounds during interactions. A total of 141 toddlers (49 typically developing [TD], 46 with ASD, and 46 with non-ASD developmental disorders [DD]; average age 22.6 months) were observed during a semi-naturalistic play session with a parent. Reactions to four types of sounds-speech about the child, instrumental music, animal calls, and mechanical noises-were observed before and as parents tried to scaffold joint engagement with the sound. Toddlers with ASD usually appeared aware of a new sound, often alerting to and orienting toward it. But compared to TD toddlers and toddlers with DD, they alerted and oriented less often to speech, a difference not found with the other sounds. Furthermore, toddlers with ASD were far less likely to spontaneously try to share the sound with the parents and to engage with the parent and the sound when parents tried to share it with them. These findings reveal how ASD can have significant effects on shared experiences with nonvisible targets in the environment that attract toddlers' attention. Future studies should address the association between auditory joint engagement difficulties and variations in multimodal joint engagement, sensory profiles, and ASD severity and the reciprocal influence over time of auditory joint engagement experience and language development. LAY SUMMARY: Like most toddlers, toddlers with autism spectrum disorder often alert when they hear sounds like a cat's meow or a train's rumble. But they are less likely to alert when they hear their own name, and they are far less likely to share new sounds with their parents. These findings raise important questions about how toddlers with autism spectrum disorder experience their everyday auditory world, including how they share it with parents who can enrich this experience.
Affiliation(s)
- Lauren B Adamson, Department of Psychology, Georgia State University, Atlanta, Georgia, USA
- Roger Bakeman, Department of Psychology, Georgia State University, Atlanta, Georgia, USA
- Katharine Suma, Department of Psychology, Georgia State University, Atlanta, Georgia, USA
- Diana L Robins, The A.J. Drexel Autism Institute, Drexel University, Philadelphia, Pennsylvania, USA
13
Calma-Roddin N, Drury JE. Music, Language, and The N400: ERP Interference Patterns Across Cognitive Domains. Sci Rep 2020;10:11222. PMID: 32641708. PMCID: PMC7343814. DOI: 10.1038/s41598-020-66732-0.
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
Affiliation(s)
- Nicole Calma-Roddin, Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA; Department of Psychology, Stony Brook University, New York, USA
- John E Drury, School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
14
Multimodal feature binding in object memory retrieval using event-related potentials: Implications for models of semantic memory. Int J Psychophysiol 2020;153:116-126. PMID: 32389620. DOI: 10.1016/j.ijpsycho.2020.04.024.
Abstract
To test the hypothesis that semantic processes are represented in multiple subsystems, we recorded the electroencephalogram (EEG) while eliciting object memories using the modified Semantic Object Retrieval Test, in which an object feature, presented as a visual word [VW], an auditory word [AW], or a picture [Pic], was followed by a second feature always presented as a visual word. We performed both hypothesis-driven and data-driven analyses using event-related potentials (ERPs) time-locked to the second stimulus. We replicated a previously reported left fronto-temporal ERP effect (750-1000 ms post-stimulus) in the VW task, and also found that this ERP component was only present during object memory retrieval in verbal (VW, AW) as opposed to non-verbal (Pic) stimulus types. We also found a right temporal ERP effect (850-1000 ms post-stimulus) that was present in auditory (AW) but not in visual (VW, Pic) stimulus types. In addition, we found an earlier left temporo-parietal ERP effect between 350 and 700 ms post-stimulus and a later midline parietal ERP effect between 700 and 1100 ms post-stimulus, present in all stimulus types, suggesting common neural mechanisms for object retrieval processes and object activation, respectively. These findings support multiple semantic subsystems that respond to varying stimulus modalities, and argue against an ultimate unitary amodal semantic analysis.
15
Adamson LB, Bakeman R, Suma K, Robins DL. Sharing sounds: The development of auditory joint engagement during early parent-child interaction. Dev Psychol 2019; 55:2491-2504. [PMID: 31524417] [PMCID: PMC6861634] [DOI: 10.1037/dev0000822]
Abstract
Joint engagement, the sharing of events during social interactions, is an important context for early learning. To date, sharing topics that are only heard has not been systematically documented. To describe the development of auditory joint engagement, 48 child-parent dyads were observed 5 times from 12 to 30 months during seminaturalistic play. Reactions to 4 types of sounds (overheard speech about the child, instrumental music, animal calls, and mechanical noises) were observed before and as parents scaffolded shared listening and after the sound ceased. Before parents reacted, even 12-month-old infants readily alerted and oriented to the sounds; over time they increasingly tried to share new sounds with their parents. When parents then joined in sharing a sound, periods of auditory joint engagement often ensued, increasing from two thirds of 12-month observations to almost ceiling level at the 18- through 30-month observations. Overall, the developmental course and structure of auditory joint engagement and joint engagement with multimodal objects and events are remarkably similar. Symbol-infused auditory joint engagement occurred rarely at first but increased steadily. Children's labeling of the sound and parents' language scaffolding also increased linearly, while child pointing toward the sound rose until 18 months and then declined. Future studies should address variations in the development of auditory joint engagement, whether autism spectrum disorder affects how toddlers share sounds, and the role auditory joint engagement may play in gestural and language development.
Affiliation(s)
- Diana L. Robins
- The A.J. Drexel Autism Institute, Drexel University, Philadelphia, PA, USA
16
Delatorre P, Salguero A, León C, Tapscott A. The Impact of Context on Affective Norms: A Case of Study With Suspense. Front Psychol 2019; 10:1988. [PMID: 31543851] [PMCID: PMC6728922] [DOI: 10.3389/fpsyg.2019.01988]
Abstract
The emotional response to a stimulus is typically measured in three variables called valence, arousal and dominance. Based on such dimensions, Bradley and Lang (1999) published the Affective Norms for English Words (ANEW), a corpus of affective ratings for 1,034 non-contextualized words. Expanded and adapted to many languages, ANEW provides a corpus to evaluate and to predict human responses to different stimuli, and it has been used in a number of studies involving analysis of emotions. However, ANEW seems not to appropriately predict affective responses to concepts when these are contextualized in certain situational backgrounds, in which words can have different connotations from those in non-contextualized scenarios. These contextualized affective norms have not been sufficiently contrasted yet because the literature does not provide a corpus of the ANEW list in specific contexts. On this basis, this paper reports on the creation of a new corpus of affective norms for the original 1,034 ANEW words in a particular context (a fictional scene of suspense). An extensive quantitative data analysis comparing both corpora was carried out, confirming that the affective ratings are highly influenced by the context. The corpus can be downloaded as Supplementary Material.
Affiliation(s)
- Pablo Delatorre
- Department of Computer Science, University of Cadiz, Cádiz, Spain
- Alberto Salguero
- Department of Computer Science, University of Cadiz, Cádiz, Spain
- Carlos León
- Department of Software Engineering and Artificial Intelligence, Instituto de Tecnología del Conocimiento, Universidad Complutense de Madrid, Madrid, Spain
- Alan Tapscott
- Department of Software Engineering and Artificial Intelligence, Instituto de Tecnología del Conocimiento, Universidad Complutense de Madrid, Madrid, Spain
17
Rapid Ocular Responses Are Modulated by Bottom-up-Driven Auditory Salience. J Neurosci 2019; 39:7703-7714. [PMID: 31391262] [PMCID: PMC6764203] [DOI: 10.1523/jneurosci.0776-19.2019]
Abstract
Despite the prevalent use of alerting sounds in alarms and human-machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience (the automatic attraction of attention by sound) and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality.
SIGNIFICANCE STATEMENT: Microsaccades are small, rapid, fixational eye movements that are measurable with sensitive eye-tracking equipment. We reveal a novel, robust link between microsaccade dynamics and the subjective salience of brief sounds (salience rankings obtained from a large number of participants in an online experiment): within 300 ms of sound onset, the eyes of naive, passively listening participants demonstrate different microsaccade patterns as a function of the sound's crowd-sourced salience. These results position the superior colliculus (hypothesized to underlie microsaccade generation) as an important brain area to investigate in the context of a putative multimodal salience hub. They also demonstrate an objective means for quantifying auditory salience.
18
Abstract
Human information processing is incredibly fast and flexible. In order to survive, the human brain has to integrate information from various sources and derive a coherent interpretation, ideally leading to adequate behavior. In experimental setups, such integration phenomena are often investigated in terms of cross-modal association effects. Interestingly, to date, most of these cross-modal association effects using linguistic stimuli have shown that single words can influence the processing of non-linguistic stimuli, and vice versa. In the present study, we were particularly interested in the extent to which linguistic input beyond single words influences the processing of non-linguistic stimuli; in our case, environmental sounds. Participants read sentences in either an affirmative or a negated version, for example: "The dog does (not) bark". Subsequently, participants listened to a sound either matching or mismatching the affirmative version of the sentence ('woof' vs. 'meow', respectively). In line with previous studies, we found a clear N400-like effect during sound perception following affirmative sentences. Interestingly, this effect was identically present following negated sentences, and the negation operator did not modulate the cross-modal association effect observed between the content words of the sentence and the sound. In summary, these results suggest that negation is not incorporated during information processing in a way that would influence word-sound association effects.
19
Fritz TH, Schütte F, Steixner A, Contier O, Obrig H, Villringer A. Musical meaning modulates word acquisition. Brain Lang 2019; 190:10-15. [PMID: 30665002] [DOI: 10.1016/j.bandl.2018.12.001]
Abstract
Musical excerpts have been shown to have the capacity to prime the processing of target words and vice versa, strongly suggesting that music can convey concepts. However, to date no study has investigated an influence of musical semantics on novel word acquisition, which would behaviourally corroborate the similarity of the semantic processing underlying music and words. The current study investigates whether the semantic content of music can assist the acquisition of novel words. Forty novel words and their German translations were visually presented to 26 participants, accompanied by either semantically congruent or incongruent music. Semantic congruence between music and words was expected to increase performance in the subsequent forced-choice recognition test. Participants performed significantly better on the retention of novel words presented with semantically congruent music compared to those presented with semantically incongruent music. This provides the first evidence that semantic "enrichment" by music during novel word learning can augment novel word acquisition. This finding may lead to novel approaches in foreign language acquisition and language rehabilitation, and further supports the idea that music has a strong capacity to iconically convey meaning.
Affiliation(s)
- Thomas Hans Fritz
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany; Institute for Psychoacoustics and Electronic Music (IPEM), Blandijnberg 2, B-9000 Ghent, Belgium
- Friederike Schütte
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany
- Agnes Steixner
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany
- Oliver Contier
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany
- Hellmuth Obrig
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany
- Arno Villringer
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, 04103 Leipzig, Germany
20
Hendrickson K, Love T, Walenski M, Friend M. The organization of words and environmental sounds in the second year: Behavioral and electrophysiological evidence. Dev Sci 2019; 22:e12746. [PMID: 30159958] [PMCID: PMC6294716] [DOI: 10.1111/desc.12746]
Abstract
The majority of research examining early auditory-semantic processing and organization is based on studies of meaningful relations between words and referents. However, a thorough investigation into the fundamental relation between acoustic signals and meaning requires an understanding of how meaning is associated with both lexical and non-lexical sounds. Indeed, it is unknown how meaningful auditory information that is not lexical (e.g., environmental sounds) is processed and organized in the young brain. To capture the structure of semantic organization for words and environmental sounds, we record event-related potentials as 20-month-olds view images of common nouns (e.g., dog) while hearing words or environmental sounds that match the picture (e.g., "dog" or barking), that are within-category violations (e.g., "cat" or meowing), or that are between-category violations (e.g., "pen" or scribbling). Results show both words and environmental sounds exhibit larger negative amplitudes to between-category violations relative to matches. Unlike words, which show a greater negative response early and consistently to within-category violations, such an effect for environmental sounds occurs late in semantic processing. Thus, as in adults, the young brain represents semantic relations between words and between environmental sounds, though it more readily differentiates semantically similar words compared to environmental sounds.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences & Disorders, University of Iowa, USA
- Tracy Love
- Center for Research in Language, University of California, San Diego, USA
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
21
Manfredi M, Cohn N, De Araújo Andreoli M, Boggio PS. Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative. Brain Lang 2018; 185:1-8. [PMID: 29986168] [DOI: 10.1016/j.bandl.2018.06.008]
Abstract
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated with events in visual narratives (i.e., seeing images of someone spitting while hearing either a word, "Spitting!", or the sound of spitting) which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect; however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.
Affiliation(s)
- Mirella Manfredi
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Neil Cohn
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Mariana De Araújo Andreoli
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Paulo Sergio Boggio
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
22
Chen YC, Spence C. Dissociating the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words. Psychon Bull Rev 2018; 25:1138-1146. [PMID: 28600716] [PMCID: PMC5990551] [DOI: 10.3758/s13423-017-1324-6]
Abstract
The present study compared the time courses of the cross-modal semantic priming effects elicited by naturalistic sounds and spoken words on visual picture processing. Following an auditory prime, a picture (or blank frame) was briefly presented and then immediately masked. The participants had to judge whether or not a picture had been presented. Naturalistic sounds consistently elicited a cross-modal semantic priming effect on visual sensitivity (d') for pictures (higher d' in the congruent than in the incongruent condition) at the 350-ms rather than at the 1,000-ms stimulus onset asynchrony (SOA). Spoken words mainly elicited a cross-modal semantic priming effect at the 1,000-ms rather than at the 350-ms SOA, but this effect was modulated by the order of testing these two SOAs. It would therefore appear that visual picture processing can be rapidly primed by naturalistic sounds via cross-modal associations, and this effect is short lived. In contrast, spoken words prime visual picture processing over a wider range of prime-target intervals, though this effect was conditioned by the prior context.
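The visual sensitivity measure (d') above comes from signal detection theory: it contrasts the z-transformed hit and false-alarm rates. A minimal sketch using only the Python standard library; the trial counts below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction to keep rates away from exactly 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for congruent vs. incongruent prime trials:
congruent = d_prime(hits=45, misses=5, fas=10, crs=40)
incongruent = d_prime(hits=38, misses=12, fas=10, crs=40)
print(congruent > incongruent)  # True: higher sensitivity when congruent
```

A cross-modal priming effect of the kind reported corresponds to d' being reliably higher in the congruent than in the incongruent condition at a given SOA.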
Affiliation(s)
- Yi-Chuan Chen
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, 9 South Parks Road, Oxford, OX1 3UD, UK
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, 9 South Parks Road, Oxford, OX1 3UD, UK
23
Liu X, Xu Y, Alter K, Tuomainen J. Emotional Connotations of Musical Instrument Timbre in Comparison With Emotional Speech Prosody: Evidence From Acoustics and Event-Related Potentials. Front Psychol 2018; 9:737. [PMID: 29867690] [PMCID: PMC5962697] [DOI: 10.3389/fpsyg.2018.00737]
Abstract
Music and speech both communicate emotional meanings in addition to their domain-specific contents, but it is not clear whether and how the two kinds of emotional meanings are linked. The present study explores the emotional connotations of the musical timbre of isolated instrument sounds through the perspective of emotional speech prosody. The stimuli were isolated instrument sounds and emotional speech prosody that listeners had categorized as conveying anger, happiness, or sadness. We first analyzed the timbral features of the stimuli, which showed that the relations between the three emotions were relatively consistent across those features for speech and music. The results further echo the size-code hypothesis, under which differences in sound timbre signal different projected body sizes. We then conducted an ERP experiment using a priming paradigm with isolated instrument sounds as primes and emotional speech prosody as targets. The results showed that emotionally incongruent instrument-speech pairs triggered a larger N400 response than emotionally congruent pairs. Taken together, this is the first study to provide evidence that the timbre of simple and isolated musical instrument sounds can convey emotion in a way similar to emotional speech prosody.
Affiliation(s)
- Xiaoluan Liu
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Yi Xu
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Kai Alter
- Faculty of Linguistics, Philology and Phonetics, University of Oxford, Oxford, United Kingdom; Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Jyrki Tuomainen
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
24
Uddin S, Heald SLM, Van Hedger SC, Klos S, Nusbaum HC. Understanding environmental sounds in sentence context. Cognition 2018; 172:134-143. [PMID: 29272740] [PMCID: PMC6309373] [DOI: 10.1016/j.cognition.2017.12.009]
Abstract
There is debate about how individuals use context to successfully predict and recognize words. One view argues that context supports neural predictions that make use of the speech motor system, whereas other views argue for a sensory or conceptual level of prediction. While environmental sounds can convey clear referential meaning, they are not linguistic signals, and are thus neither produced with the vocal tract nor typically encountered in sentence context. We compared the effect of spoken sentence context on recognition and comprehension of spoken words versus nonspeech, environmental sounds. In Experiment 1, sentence context decreased the amount of signal needed for recognition of spoken words and environmental sounds in similar fashion. In Experiment 2, listeners judged sentence meaning in both high and low contextually constraining sentence frames, when the final word was present or replaced with a matching environmental sound. Results showed that sentence constraint affected decision time similarly for speech and nonspeech, such that high constraint sentences (i.e., frame plus completion) were processed faster than low constraint sentences for speech and nonspeech. Linguistic context facilitates the recognition and understanding of nonspeech sounds in much the same way as for spoken words. This argues against a simple form of a speech-motor explanation of predictive coding in spoken language understanding, and suggests support for conceptual-level predictions.
Affiliation(s)
- Sophia Uddin
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Shannon L M Heald
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Stephen C Van Hedger
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Serena Klos
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
- Howard C Nusbaum
- Department of Psychology, The University of Chicago, 5848 S. University Ave., Chicago, IL 60637, USA
25
Huang M, Jin J, Zhang Y, Hu D, Wang X. Usage of drip drops as stimuli in an auditory P300 BCI paradigm. Cogn Neurodyn 2017; 12:85-94. [PMID: 29435089] [DOI: 10.1007/s11571-017-9456-y]
Abstract
Recently, many auditory BCIs have used beeps as auditory stimuli, but beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. Drip drops are natural sounds that make humans feel relaxed and comfortable. In this work, three kinds of drip drops were used as stimuli in an auditory BCI system to improve its user-friendliness, and the study explored whether drip drops could serve as stimuli in such a system. The auditory paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and scores on likability and difficulty. DP obtained significantly higher online accuracy and information transfer rate than BP (both p < 0.05, Wilcoxon signed-rank test). DP also obtained higher likability scores, with no significant difference in difficulty (p < 0.05, Wilcoxon signed-rank test). The results showed that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.
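The paired comparisons above rest on the Wilcoxon signed-rank test, the standard non-parametric test for matched samples. A small SciPy sketch on invented per-subject accuracy values (not the study's data) shows the shape of such a comparison:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-subject online accuracies for the two paradigms
# (all values invented for illustration):
dp = np.array([0.92, 0.88, 0.95, 0.85, 0.90, 0.93, 0.87, 0.91, 0.89, 0.94])
bp = dp - np.array([0.01, 0.02, 0.03, 0.04, 0.05,
                    0.06, 0.07, 0.08, 0.09, 0.10])

# Paired, non-parametric test on the within-subject differences.
stat, p = wilcoxon(dp, bp)
print(p < 0.05)  # True: every subject is more accurate under DP
```

Because every difference here is positive, the signed-rank statistic is at its extreme and the test rejects the null of no paradigm difference.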
Affiliation(s)
- Minqiang Huang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Jing Jin
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Yu Zhang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Dewen Hu
- College of Mechatronics and Automation, National University of Defense Technology, Changsha, Hunan 410073, People's Republic of China
- Xingyu Wang
- Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
26
Alday PM, Schlesewsky M, Bornkessel-Schlesewsky I. Electrophysiology Reveals the Neural Dynamics of Naturalistic Auditory Language Processing: Event-Related Potentials Reflect Continuous Model Updates. eNeuro 2017; 4:ENEURO.0311-16.2017. [PMID: 29379867] [PMCID: PMC5779117] [DOI: 10.1523/eneuro.0311-16.2017]
Abstract
The recent trend away from ANOVA-based analyses places experimental investigations into the neurobiology of cognition within reach of more naturalistic and ecologically valid designs. Using mixed-effects models for epoch-based regression, we demonstrate the feasibility of examining event-related potentials (ERPs), and in particular the N400, to study the neural dynamics of human auditory language processing in a naturalistic setting. Despite the large variability between trials during naturalistic stimulation, we replicated previously reported effects of frequency, animacy, and word order, and found previously unexplored interaction effects. This suggests a new perspective on ERPs: as a continuous modulation reflecting continuous stimulation, instead of a series of discrete and essentially sequential processes locked to discrete events.
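Epoch-based regression treats each single-trial (epoch) ERP amplitude as one observation and regresses it on continuous predictors rather than averaging into factorial cells. The study used mixed-effects models; the sketch below simplifies to ordinary least squares on simulated data (all predictor names and effect sizes are invented) just to illustrate the single-trial design matrix:

```python
import numpy as np

# Simulated single-trial N400-window amplitudes: more negative for
# low-frequency and inanimate words (effects invented for illustration).
rng = np.random.default_rng(1)
n = 400
log_freq = rng.normal(3.0, 1.0, n)              # log word frequency
animacy = rng.integers(0, 2, n).astype(float)   # 1 = animate
amplitude = -4.0 + 1.2 * log_freq + 1.5 * animacy + rng.normal(0, 2.0, n)

# One design-matrix row per epoch: intercept, frequency, animacy.
X = np.column_stack([np.ones(n), log_freq, animacy])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
print(np.round(beta, 2))  # recovers roughly [-4.0, 1.2, 1.5]
```

A full analysis in the spirit of the paper would add random intercepts and slopes per participant (and item), e.g. via a mixed-effects package, instead of pooled least squares.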
Affiliation(s)
- Phillip M. Alday
- Department of the Psychology of Language, Max-Planck-Institute for Psycholinguistics, Nijmegen 6500AH, The Netherlands
- Matthias Schlesewsky
- Cognitive Neuroscience Laboratory, School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide SA 5001, Australia
- Ina Bornkessel-Schlesewsky
- Cognitive Neuroscience Laboratory, School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide SA 5001, Australia
27
Anderson JD, Wagovich SA. Explicit and Implicit Verbal Response Inhibition in Preschool-Age Children Who Stutter. J Speech Lang Hear Res 2017; 60:836-852. [PMID: 28384673] [PMCID: PMC5548080] [DOI: 10.1044/2016_jslhr-s-16-0135]
Abstract
PURPOSE: The purpose of this study was to examine (a) explicit and implicit verbal response inhibition in preschool children who stutter (CWS) and children who do not stutter (CWNS) and (b) the relationship between response inhibition and language skills. METHOD: Participants were 41 CWS and 41 CWNS between the ages of 3;1 and 6;1 (years;months). Explicit verbal response inhibition was measured using a computerized version of the grass-snow task (Carlson & Moses, 2001), and implicit verbal response inhibition was measured using the baa-meow task. The main dependent variables were reaction time and accuracy. RESULTS: The CWS were significantly less accurate than the CWNS on the implicit task, but not the explicit task. The CWS also exhibited slower reaction times than the CWNS on both tasks. Between-group differences in performance could not be attributed to working memory demands. Overall, children's performance on the inhibition tasks corresponded with parents' perceptions of their children's inhibition skills in daily life. CONCLUSIONS: CWS are less effective and efficient than CWNS in suppressing a dominant response while executing a conflicting response in the verbal domain.
Affiliation(s)
- Julie D. Anderson
- Department of Speech and Hearing Sciences, Indiana University, Bloomington
- Stacy A. Wagovich
- Department of Communication Science and Disorders, University of Missouri, Columbia
28
The Sounds of Sentences: Differentiating the Influence of Physical Sound, Sound Imagery, and Linguistically Implied Sounds on Physical Sound Processing. Cogn Affect Behav Neurosci 2016; 16:940-61. [DOI: 10.3758/s13415-016-0444-1]
29
The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio–visual motion in depth. Neuropsychologia 2015; 78:51-62. [DOI: 10.1016/j.neuropsychologia.2015.09.023]
30
Roux FE, Minkin K, Durand JB, Sacko O, Réhault E, Tanova R, Démonet JF. Electrostimulation mapping of comprehension of auditory and visual words. Cortex 2015; 71:398-408. [DOI: 10.1016/j.cortex.2015.07.001]
31
The P600 as a correlate of ventral attention network reorientation. Cortex 2015; 66:A3-A20. [DOI: 10.1016/j.cortex.2014.12.019]
32
Hendrickson K, Walenski M, Friend M, Love T. The organization of words and environmental sounds in memory. Neuropsychologia 2015; 69:67-76. [PMID: 25624059] [DOI: 10.1016/j.neuropsychologia.2015.01.035]
Abstract
In the present study we used event-related potentials to compare the organization of linguistic and meaningful nonlinguistic sounds in memory. We examined N400 amplitudes as adults viewed pictures presented with words or environmental sounds that matched the picture (Match), that shared semantic features with the expected match (Near Violation), or that shared relatively few semantic features with the expected match (Far Violation). Words demonstrated incremental N400 amplitudes based on featural similarity from 300 to 700 ms, such that both Near and Far Violations exhibited significant N400 effects; however, Far Violations exhibited greater N400 effects than Near Violations. For environmental sounds, Far Violations but not Near Violations elicited significant N400 effects, in both early (300-400 ms) and late (500-700 ms) time windows, though a graded pattern similar to that of words was seen in the mid-latency time window (400-500 ms). These results indicate that the organization of words and environmental sounds in memory is differentially influenced by featural similarity, with a consistently fine-grained graded structure for words but not sounds.
Affiliation(s)
- Kristi Hendrickson
- Center for Research in Language, University of California, San Diego, USA; School of Speech, Language, and Hearing Sciences, San Diego State University, USA; Joint Doctoral Program in Language and Communicative Disorders, San Diego State University, USA.
- Matthew Walenski
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA.
- Tracy Love
- Center for Research in Language, University of California, San Diego, USA; School of Speech, Language, and Hearing Sciences, San Diego State University, USA.
33
Sassenhagen J, Schlesewsky M, Bornkessel-Schlesewsky I. The P600-as-P3 hypothesis revisited: single-trial analyses reveal that the late EEG positivity following linguistically deviant material is reaction time aligned. Brain Lang 2014; 137:29-39. [PMID: 25151545] [DOI: 10.1016/j.bandl.2014.07.010]
Abstract
The P600, a late positive ERP component following linguistically deviant stimuli, is commonly seen as indexing structural, high-level processes, e.g. of linguistic (re)analysis. It has also been identified with the P3 (P600-as-P3 hypothesis), which is thought to reflect a systemic neuromodulator release facilitating behavioural shifts and is usually response time aligned. We investigated single-trial alignment of the P600 to response, a critical prediction of the P600-as-P3 hypothesis. Participants heard sentences containing morphosyntactic and semantic violations and responded via a button press. The elicited P600 was perfectly response aligned, while an N400 following semantic deviations was stimulus aligned. This is, to our knowledge, the first single-trial analysis of language processing data using within-sentence behavioural responses as temporal covariates. Results support the P600-as-P3 perspective and thus constitute a step towards a neurophysiological grounding of language-related ERPs.
Affiliation(s)
- Jona Sassenhagen
- Department of Germanic Linguistics, University of Marburg, Marburg, Germany; Department of English and Linguistics, Johannes Gutenberg-University, Mainz, Germany
- Matthias Schlesewsky
- Department of English and Linguistics, Johannes Gutenberg-University, Mainz, Germany
- Ina Bornkessel-Schlesewsky
- Department of Germanic Linguistics, University of Marburg, Marburg, Germany; School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia.
34
Zhou L, Jiang C, Delogu F, Yang Y. Spatial conceptual associations between music and pictures as revealed by N400 effect. Psychophysiology 2014; 51:520-8. [DOI: 10.1111/psyp.12195]
Affiliation(s)
- Linshu Zhou
- Key Laboratory of Behavioral Science, Institute of Psychology; Chinese Academy of Sciences; Beijing China
- University of Chinese Academy of Sciences; Beijing China
- Cunmei Jiang
- Music College; Shanghai Normal University; Shanghai China
- Franco Delogu
- College of Arts and Sciences; Lawrence Technological University; Southfield Michigan USA
- Yufang Yang
- Key Laboratory of Behavioral Science, Institute of Psychology; Chinese Academy of Sciences; Beijing China
35
Frey A, Aramaki M, Besson M. Conceptual priming for realistic auditory scenes and for auditory words. Brain Cogn 2013; 84:141-52. [PMID: 24378910] [DOI: 10.1016/j.bandc.2013.11.013]
Abstract
Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words.
Affiliation(s)
- Aline Frey
- Laboratoire Cognitions Humaine & Artificielle, Université Paris 8, Saint-Denis, France.
- Mitsuko Aramaki
- Laboratoire de Mécanique et d'Acoustique, CNRS, Marseille, France
- Mireille Besson
- Laboratoire de Neurosciences Cognitives, CNRS & Aix-Marseille Université, Marseille, France; Cuban Neuroscience Center, Habana, Cuba
36
Sherwin J, Sajda P. Musical experts recruit action-related neural structures in harmonic anomaly detection: evidence for embodied cognition in expertise. Brain Cogn 2013; 83:190-202. [PMID: 24056235] [DOI: 10.1016/j.bandc.2013.07.002]
Abstract
Humans are extremely good at detecting anomalies in sensory input. For example, while listening to a piece of Western-style music, an anomalous key change or an out-of-key pitch is readily apparent, even to the non-musician. In this paper we investigate differences between musical experts and non-experts during musical anomaly detection. Specifically, we analyzed the electroencephalograms (EEG) of five expert cello players and five non-musicians while they listened to excerpts of J.S. Bach's Prelude from Cello Suite No. 1. All subjects were familiar with the piece, though experts also had extensive experience playing the piece. Subjects were told that anomalous musical events (AMEs) could occur at random within the excerpts of the piece and were told to report the number of AMEs after each excerpt. Furthermore, subjects were instructed to remain still while listening to the excerpts and their lack of movement was verified via visual and EEG monitoring. Experts had significantly better behavioral performance (i.e. correctly reporting AME counts) than non-experts, though both groups had mean accuracies greater than 80%. These group differences were also reflected in the EEG correlates of key-change detection post-stimulus, with experts showing more significant, greater magnitude, longer periods of, and earlier peaks in condition-discriminating EEG activity than novices. Using the timing of the maximum discriminating neural correlates, we performed source reconstruction and compared significant differences between cellists and non-musicians. We found significant differences that included a slightly right lateralized motor and frontal source distribution. The right lateralized motor activation is consistent with the cortical representation of the left hand - i.e. the hand a cellist would use, while playing, to generate the anomalous key-changes. 
In general, these results suggest that sensory anomalies detected by experts may in fact be partially a result of an embodied cognition, with a model of the action for generating the anomaly playing a role in its detection.
Affiliation(s)
- Jason Sherwin
- Department of Biomedical Engineering, Columbia University, New York, NY 10027, USA; Human Research and Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, MD 21001, USA.
37
Krishnan S, Leech R, Aydelott J, Dick F. School-age children's environmental object identification in natural auditory scenes: Effects of masking and contextual congruence. Hear Res 2013; 300:46-55. [DOI: 10.1016/j.heares.2013.03.003]
38
Meyer GF, Harrison NR, Wuerger SM. The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing. Neuropsychologia 2013; 51:1716-25. [PMID: 23727570] [DOI: 10.1016/j.neuropsychologia.2013.05.014]
Abstract
An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and by the semantic congruency of the auditory and visual component signals, even when semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole-body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: (1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. (2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms to process auditory-visual action sequences. Early activation (before 120 ms) can be explained by activity in mainly sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal, and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions.
Affiliation(s)
- Georg F Meyer
- Department of Psychological Sciences, University of Liverpool, Liverpool L697ZA, UK.
39
Evidence for a basic level in a taxonomy of everyday action sounds. Exp Brain Res 2013; 226:253-64. [PMID: 23411674] [DOI: 10.1007/s00221-013-3430-7]
Abstract
We searched for evidence that the auditory organization of categories of sounds produced by actions includes a privileged or "basic" level of description. The sound events consisted of single objects (or substances) undergoing simple actions. Performance on sound events was measured in two ways: sounds were directly verified as belonging to a category, or sounds were used to create lexical priming. The category verification experiment measured the accuracy and reaction time to brief excerpts of these sounds. The lexical priming experiment measured reaction time benefits and costs caused by the presentation of these sounds prior to a lexical decision. The level of description of a sound varied in how specifically it described the physical properties of the action producing the sound. Both identification and priming effects were superior when a label described the specific interaction causing the sound (e.g. trickling) in comparison to the following: (1) more general descriptions (e.g. pour, liquid: trickling is a specific manner of pouring liquid), (2) more detailed descriptions using adverbs to provide detail regarding the manner of the action (e.g. trickling evenly). These results are consistent with neuroimaging studies showing that auditory representations of sounds produced by actions familiar to the listener activate motor representations of the gestures involved in sound production.
40
Yoo S, Chung JY, Jeon HA, Lee KM, Kim YB, Cho ZH. Dual routes for verbal repetition: articulation-based and acoustic-phonetic codes for pseudoword and word repetition, respectively. Brain Lang 2012; 122:1-10. [PMID: 22632812] [DOI: 10.1016/j.bandl.2012.04.011]
Abstract
Speech production is inextricably linked to speech perception, yet they are usually investigated in isolation. In this study, we employed a verbal-repetition task to identify the neural substrates of speech processing with two ends active simultaneously using functional MRI. Subjects verbally repeated auditory stimuli containing an ambiguous vowel sound that could be perceived as either a word or a pseudoword depending on the interpretation of the vowel. We found verbal repetition commonly activated the audition-articulation interface bilaterally at Sylvian fissures and superior temporal sulci. Contrasting word-versus-pseudoword trials revealed neural activities unique to word repetition in the left posterior middle temporal areas and activities unique to pseudoword repetition in the left inferior frontal gyrus. These findings imply that the tasks are carried out using different speech codes: an articulation-based code of pseudowords and an acoustic-phonetic code of words. It also supports the dual-stream model and imitative learning of vocabulary.
Affiliation(s)
- Sejin Yoo
- Interdisciplinary Program in Cognitive Science, Seoul National University, Republic of Korea
41
Wang W, Li X, Ning N, Zhang JX. The nature of the homophone density effect: An ERP study with Chinese spoken monosyllable homophones. Neurosci Lett 2012; 516:67-71. [DOI: 10.1016/j.neulet.2012.03.059]
42
Lemaitre G, Dessein A, Susini P, Aura K. Vocal Imitations and the Identification of Sound Events. Ecol Psychol 2011. [DOI: 10.1080/10407413.2011.617225]
43
Gygi B, Shafiro V. The incongruency advantage for environmental sounds presented in natural auditory scenes. J Exp Psychol Hum Percept Perform 2011; 37:551-65. [PMID: 21355664] [DOI: 10.1037/a0020671]
Abstract
The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five percentage points better accuracy for sounds that were contextually incongruous with the background scene (e.g., a rooster crowing in a hospital). Further studies with naive (untrained) listeners showed that this incongruency advantage (IA) is level-dependent: there is no advantage for incongruent sounds lower than a Sound/Scene ratio (So/Sc) of -7.5 dB, but there is about five percentage points better accuracy for sounds with greater So/Sc. Testing a new group of trained listeners on a larger corpus of sounds and scenes showed that the effect is robust and not confined to a specific stimulus set. Modeling using spectral-temporal measures showed that neither analyses based on acoustic features, nor semantic assessments of sound-scene congruency can account for this difference, indicating the IA is a complex effect, possibly arising from the sensitivity of the auditory system to new and unexpected events, under particular listening conditions.
Affiliation(s)
- Brian Gygi
- Speech and Hearing Research, Veterans Affairs Northern California Health Care System, 150 Muir Road, Martinez, CA 94553, USA.
44
Wu YJ, Athanassiou S, Dorjee D, Roberts M, Thierry G. Brain Potentials Dissociate Emotional and Conceptual Cross-Modal Priming of Environmental Sounds. Cereb Cortex 2011; 22:577-83. [DOI: 10.1093/cercor/bhr128]
45
Renvall H, Formisano E, Parviainen T, Bonte M, Vihla M, Salmelin R. Parametric Merging of MEG and fMRI Reveals Spatiotemporal Differences in Cortical Processing of Spoken Words and Environmental Sounds in Background Noise. Cereb Cortex 2011; 22:132-43. [DOI: 10.1093/cercor/bhr095]
46
Koelsch S. Towards a neural basis of processing musical semantics. Phys Life Rev 2011; 8:89-105. [PMID: 21601541] [DOI: 10.1016/j.plrev.2011.04.004]
Abstract
Processing of meaning is critical for language perception, and therefore the majority of research on meaning processing has focused on the semantic, lexical, conceptual, and propositional processing of language. However, music is another means of communication, and meaning also emerges from the interpretation of musical information. This article provides a framework for the investigation of the processing of musical meaning, and reviews neuroscience studies investigating this issue. These studies reveal two neural correlates of meaning processing, the N400 and the N5 (which are both components of the event-related electric brain potential). Here I argue that the N400 can be elicited by musical stimuli due to the processing of extra-musical meaning, whereas the N5 can be elicited due to the processing of intra-musical meaning. Notably, whereas the N400 can be elicited by both linguistic and musical stimuli, the N5 has so far only been observed for the processing of meaning in music. Thus, knowledge about both the N400 and the N5 can advance our understanding of how the human brain processes meaning information.
Affiliation(s)
- Stefan Koelsch
- Cluster of Excellence der Freien Universität Berlin, Languages of Emotion, Habelschwerdter Allee 45, 14195 Berlin, Germany
47
Schirmer A, Soh YH, Penney TB, Wyse L. Perceptual and conceptual priming of environmental sounds. J Cogn Neurosci 2011; 23:3241-53. [PMID: 21281092] [DOI: 10.1162/jocn.2011.21623]
Abstract
It is still unknown whether sonic environments influence the processing of individual sounds in a similar way as discourse or sentence context influences the processing of individual words. One obstacle to answering this question has been the failure to dissociate perceptual (i.e., how similar are sonic environment and target sound?) and conceptual (i.e., how related are sonic environment and target?) priming effects. In this study, we dissociate these effects by creating prime-target pairs with a purely perceptual or both a perceptual and conceptual relationship. Perceptual prime-target pairs were derived from perceptual-conceptual pairs (i.e., meaningful environmental sounds) by shuffling the spectral composition of primes and targets so as to preserve their perceptual relationship while making them unrecognizable. Hearing both original and shuffled targets elicited a more positive N1/P2 complex in the ERP when targets were related to a preceding prime as compared with unrelated. Only related original targets reduced the N400 amplitude. Related shuffled targets tended to decrease the amplitude of a late temporo-parietal positivity. Taken together, these effects indicate that sonic environments influence first the perceptual and then the conceptual processing of individual sounds. Moreover, the influence on conceptual processing is comparable to the influence linguistic context has on the processing of individual words.
48
Aramaki M, Marie C, Kronland-Martinet R, Ystad S, Besson M. Sound categorization and conceptual priming for nonlinguistic and linguistic sounds. J Cogn Neurosci 2010; 22:2555-69. [PMID: 19929328] [DOI: 10.1162/jocn.2009.21398]
Abstract
The aim of these experiments was to compare conceptual priming for linguistic and for a homogeneous class of nonlinguistic sounds, impact sounds, by using both behavioral (percentage errors and RTs) and electrophysiological measures (ERPs). Experiment 1 aimed at studying the neural basis of impact sound categorization by creating typical and ambiguous sounds from different material categories (wood, metal, and glass). Ambiguous sounds were associated with slower RTs and larger N280, smaller P350/P550 components, and larger negative slow wave than typical impact sounds. Thus, ambiguous sounds were more difficult to categorize than typical sounds. A category membership task was used in Experiment 2. Typical sounds were followed by sounds from the same or from a different category or by ambiguous sounds. Words were followed by words, pseudowords, or nonwords. Error rate was highest for ambiguous sounds and for pseudowords and both elicited larger N400-like components than same typical sounds and words. Moreover, both different typical sounds and nonwords elicited P300 components. These results are discussed in terms of similar conceptual priming effects for nonlinguistic and linguistic stimuli.
Affiliation(s)
- Mitsuko Aramaki
- CNRS-Institut de Neurosciences Cognitives de la Méditerranée, Marseille Cedex, France.
49
Painter JG, Koelsch S. Can out-of-context musical sounds convey meaning? An ERP study on the processing of meaning in music. Psychophysiology 2010; 48:645-55. [DOI: 10.1111/j.1469-8986.2010.01134.x]
50
Schön D, Ystad S, Kronland-Martinet R, Besson M. The evocative power of sounds: conceptual priming between words and nonverbal sounds. J Cogn Neurosci 2010; 22:1026-35. [PMID: 19583472] [DOI: 10.1162/jocn.2009.21302]
Abstract
Two experiments were conducted to examine the conceptual relation between words and nonmeaningful sounds. In order to reduce the role of linguistic mediation, sounds were recorded in such a way that it was highly unlikely to identify the source that produced them. Related and unrelated sound-word pairs were presented in Experiment 1 and the order of presentation was reversed in Experiment 2 (word-sound). Results showed that, in both experiments, participants were sensitive to the conceptual relation between the two items. They were able to correctly categorize items as related or unrelated with good accuracy. Moreover, a relatedness effect developed in the event-related brain potentials between 250 and 600 msec, although with a slightly different scalp topography for word and sound targets. Results are discussed in terms of similar conceptual processing networks and we propose a tentative model of the semiotics of sounds.
Affiliation(s)
- Daniele Schön
- CNRS & Université de la Méditerranée, Marseille, France.