1. Raynal L, Clément E, Goyet L, Rämä P, Sander E. Neural correlates of unconventional verb extensions reveal preschoolers' analogical abilities. J Exp Child Psychol 2024; 246:105984. PMID: 38879929. DOI: 10.1016/j.jecp.2024.105984.
Abstract
In the current event-related potential (ERP) study, we assessed 4-year-olds' ability to extend verbs to new action events on the basis of abstract similarities. Participants were presented with images of actions (e.g., peeling an orange) while hearing sentences containing a conventional verb (e.g., peeling), a verb sharing an abstract relation with the action (i.e., an analogical verb, e.g., undressing), a verb sharing an object type with the action (i.e., an object-related verb, e.g., pressing), or a pseudoverb (e.g., kebraying). The amplitude of the N400 gradually increased as a function of verb type: from conventional verbs to analogical verbs, object-related verbs, and pseudoverbs. These findings suggest that accessing the meaning of a verb is easier when it shares abstract relations with the expected verb. Our results illustrate that measuring brain signals in response to analogical word extensions provides a useful tool to investigate preschoolers' analogical abilities.
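The graded N400 amplitudes reported above are typically quantified as the mean voltage of the trial-averaged ERP within a post-stimulus window (roughly 300-500 ms). The abstract does not describe the authors' analysis pipeline; the sketch below is a generic illustration on simulated data, and every amplitude, trial count, and noise level in it is invented for the example.

```python
import numpy as np

def n400_mean_amplitude(epochs, times, window=(0.3, 0.5)):
    """Mean amplitude of the trial-averaged ERP within a time window.

    epochs : (n_trials, n_samples) array of single-trial voltages (µV)
    times  : (n_samples,) array of sample times in seconds
    """
    erp = epochs.mean(axis=0)                        # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Simulated data: a Gaussian "N400" deflection peaking at 400 ms plus trial noise.
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 501)

def simulate_condition(peak_uv, n_trials=40):
    signal = peak_uv * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return signal + rng.normal(0.0, 2.0, size=(n_trials, times.size))

conventional = simulate_condition(-1.0)   # small N400 for conventional verbs
pseudoverb = simulate_condition(-6.0)     # large N400 for pseudoverbs
effect = n400_mean_amplitude(pseudoverb, times) - n400_mean_amplitude(conventional, times)
print(effect < 0)  # → True: pseudoverbs elicit a larger (more negative) N400
```

The same window-mean computed per condition (conventional, analogical, object-related, pseudoverb) would reproduce the graded pattern the study describes.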
Affiliation(s)
- Lucas Raynal
- Université Paris Cité, Laboratoire INCC UMR 8002, CNRS, F-75006 Paris, France; Université CY Cergy Paris, Laboratoire Paragraphe, EA 349, 92230 Gennevilliers, France; Université de Genève, Faculté de Psychologie et Sciences de l'Education, Equipe IDEA, 1211 Genève, Switzerland.
- Evelyne Clément
- Université CY Cergy Paris, Laboratoire Paragraphe, EA 349, 92230 Gennevilliers, France
- Louise Goyet
- Université Paris VIII-Vincennes, Laboratoire DysCo, 93200 Saint-Denis, France
- Pia Rämä
- Université Paris Cité, Laboratoire INCC UMR 8002, CNRS, F-75006 Paris, France
- Emmanuel Sander
- Université de Genève, Faculté de Psychologie et Sciences de l'Education, Equipe IDEA, 1211 Genève, Switzerland

2. Lasting effects of the COVID-19 pandemic on language processing. PLoS One 2022; 17:e0269242. PMID: 35704594. PMCID: PMC9200165. DOI: 10.1371/journal.pone.0269242.
Abstract
A central question in understanding human language is how people store, access, and comprehend words. The ongoing COVID-19 pandemic presented a natural experiment to investigate whether language comprehension can be changed in a lasting way by external experiences. We leveraged the sudden increase in the frequency of certain words (mask, isolation, lockdown) to investigate the effects of rapid contextual changes on word comprehension, measured over 10 months within the first year of the pandemic. Using the phonemic restoration paradigm, in which listeners are presented with ambiguous auditory input and report which word they hear, we conducted four online experiments with adult participants across the United States (combined N = 899). We find that the pandemic has reshaped language processing for the long term, changing how listeners process speech and what they expect from ambiguous input. These results show that abrupt changes in linguistic exposure can cause enduring changes to the language system.

3. Olszewska J, Hodel A, Falkowski A, Woldt B, Bednarek H, Luttenberger D. Meaningful Versus Meaningless Sounds and Words. Exp Psychol 2021; 68:4-17. PMID: 33843255. DOI: 10.1027/1618-3169/a000506.
Abstract
The current study assessed memory performance for perceptually similar environmental sounds and speech-based material after short and long delays. In two studies, we demonstrated a similar pattern of memory performance for sounds and words in short-term memory, yet in long-term memory the performance patterns differed. Experiment 1 examined the effects of two different types of sounds, meaningful (MFUL) and meaningless (MLESS), whereas Experiment 2 assessed memory performance for words and nonwords. We utilized a modified version of the classical Deese-Roediger-McDermott (Deese, 1959; Roediger & McDermott, 1995) procedure, adjusted to test the effects of acoustic similarities between auditorily presented stimuli. Our findings revealed no difference in memory performance between MFUL and MLESS sounds, or between words and nonwords, after short delays. However, following long delays, greater reliance on meaning was observed for MFUL sounds than MLESS sounds, while performance for linguistic material did not differ between words and nonwords. Importantly, participants' memory performance for words and nonwords was accompanied by a more lenient response strategy. The results are discussed in terms of perceptual and semantic similarities between MLESS and MFUL sounds, as well as between words and nonwords.
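The "more lenient response strategy" reported for words and nonwords is the kind of effect signal detection theory separates from sensitivity: d′ indexes discrimination, while the criterion c indexes bias, with negative c marking a lenient, "yes"-prone strategy. A minimal sketch follows; the hit and false-alarm rates are made up for illustration, not taken from the study.

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (c).

    Negative c indicates a lenient ("yes"-prone) response strategy.
    Rates are clipped away from 0 and 1 to keep z-scores finite.
    """
    clip = lambda p: min(max(p, 1e-3), 1 - 1e-3)
    z = NormalDist().inv_cdf
    zh, zf = z(clip(hit_rate)), z(clip(fa_rate))
    return zh - zf, -(zh + zf) / 2

# Invented example rates: 85% hits, 40% false alarms
d, c = dprime_and_criterion(0.85, 0.40)
print(round(d, 2), round(c, 2))  # → 1.29 -0.39
```

Here c comes out negative, the signature of the lenient responding the abstract describes for linguistic material.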
Affiliation(s)
- Amy Hodel
- Department of Psychology, University of Wisconsin Oshkosh, WI, USA
- Bernadette Woldt
- Department of Psychology, University of Wisconsin Oshkosh, WI, USA
- Hanna Bednarek
- SWPS University of Social Sciences and Humanities, Warsaw, Poland

4. Rassili O, Ordin M. The effect of regular rhythm on the perception of linguistic and non-linguistic auditory input. Eur J Neurosci 2020; 55:3365-3372. PMID: 33125787. DOI: 10.1111/ejn.15029.
Abstract
Regular distribution of auditory stimuli over time can facilitate perception and attention. However, such effects have to date only been observed in separate studies using either linguistic or non-linguistic materials. This has made it difficult to compare the effects of rhythmic regularity on attention across domains. The current study was designed to provide an explicit within-subject comparison of reaction times and accuracy in an auditory target-detection task using sequences of regularly and irregularly distributed syllables (linguistic material) and environmental sounds (non-linguistic material). We explored how reaction times and accuracy were modulated by regular and irregular rhythms in a sound-monitoring (non-linguistic) and syllable-monitoring (linguistic) task performed by native Spanish speakers (N = 25). Surprisingly, we did not observe a facilitatory effect of regular rhythm on reaction times or accuracy overall. Further exploratory analysis showed that targets appearing later in sequences of syllables and sounds were identified more quickly. For late targets, reaction times were shorter for stimuli with a regular rhythm than for stimuli with an irregular rhythm in linguistic material, but not in non-linguistic material. The difference in reaction times between regular and irregular rhythms for late targets was also larger for linguistic than for non-linguistic material. This suggests a modulatory effect of rhythm on linguistic stimuli only once the percept of temporal isochrony has been established. We suggest that temporal isochrony modulates attention to linguistic more than to non-linguistic stimuli because the human auditory system is tuned to process speech. The results, however, need to be further tested in confirmatory studies.
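The key pattern, a regularity benefit confined to late targets in linguistic material, amounts to an interaction between rhythm and target position. A minimal sketch of that comparison; the condition means below are invented for illustration, not the study's data.

```python
# Hypothetical mean reaction times (ms) per condition; values are invented.
mean_rt = {
    ("regular", "late"): 420.0,
    ("irregular", "late"): 460.0,
    ("regular", "early"): 480.0,
    ("irregular", "early"): 485.0,
}

# Regularity benefit = irregular RT minus regular RT, separately by position
benefit_late = mean_rt[("irregular", "late")] - mean_rt[("regular", "late")]
benefit_early = mean_rt[("irregular", "early")] - mean_rt[("regular", "early")]
interaction = benefit_late - benefit_early

# The reported pattern: a sizeable regularity benefit only for late targets
print(benefit_late, benefit_early, interaction)  # → 40.0 5.0 35.0
```

A confirmatory analysis would test this interaction term statistically rather than eyeball the difference of differences.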
Affiliation(s)
- Outhmane Rassili
- BCBL - Basque Centre on Cognition, Brain and Language, San Sebastián, Spain
- Mikhail Ordin
- BCBL - Basque Centre on Cognition, Brain and Language, San Sebastián, Spain; Ikerbasque - Basque Foundation for Science, Bilbao, Spain

5. Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020; 7:ENEURO.0475-19.2020. PMID: 32513662. PMCID: PMC7470935. DOI: 10.1523/eneuro.0475-19.2020.
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically-inspired machine-learning models. We aimed at determining how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of the cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.

6. Calma-Roddin N, Drury JE. Music, Language, and The N400: ERP Interference Patterns Across Cognitive Domains. Sci Rep 2020; 10:11222. PMID: 32641708. PMCID: PMC7343814. DOI: 10.1038/s41598-020-66732-0.
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
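"Sub-additive" here means the double-violation N400 is smaller in magnitude than the two single-violation effects summed. A one-line check makes the logic concrete; the effect sizes (in µV, negative-going as N400 effects are) are invented for illustration.

```python
def is_subadditive(language_effect, music_effect, double_effect):
    """True when the double-violation effect is smaller in magnitude
    than the sum of the two single-violation effects."""
    return abs(double_effect) < abs(language_effect + music_effect)

# Invented amplitudes (µV): each violation alone, then both at once
print(is_subadditive(-2.0, -1.5, -2.8))  # → True: |-2.8| < |-3.5|
```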
Affiliation(s)
- Nicole Calma-Roddin
- Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA
- Department of Psychology, Stony Brook University, New York, USA
- John E Drury
- School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China

7. Bartolotti J, Schroeder SR, Hayakawa S, Rochanavibhata S, Chen P, Marian V. Listening to speech and non-speech sounds activates phonological and semantic knowledge differently. Q J Exp Psychol (Hove) 2020; 73:1135-1149. PMID: 32338572. DOI: 10.1177/1747021820923944.
Abstract
How does the mind process linguistic and non-linguistic sounds? The current study assessed the different ways that spoken words (e.g., "dog") and characteristic sounds (e.g., <barking>) provide access to phonological information (e.g., word-form of "dog") and semantic information (e.g., knowledge that a dog is associated with a leash). Using an eye-tracking paradigm, we found that listening to words prompted rapid phonological activation, which was then followed by semantic access. The opposite pattern emerged for sounds, with early semantic access followed by later retrieval of phonological information. Despite differences in the time courses of conceptual access, both words and sounds elicited robust activation of phonological and semantic knowledge. These findings inform models of auditory processing by revealing the pathways between speech and non-speech input and their corresponding word forms and concepts, which influence the speed, magnitude, and duration of linguistic and nonlinguistic activation.
Affiliation(s)
- James Bartolotti
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Department of Psychology, The University of Kansas, Lawrence, KS, USA
- Scott R Schroeder
- Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, NY, USA
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Sirada Rochanavibhata
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Peiyao Chen
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA

8. Toon J, Kukona A. Activating Semantic Knowledge During Spoken Words and Environmental Sounds: Evidence From the Visual World Paradigm. Cogn Sci 2020; 44:e12810. PMID: 31960505. DOI: 10.1111/cogs.12810.
Abstract
Two visual world experiments investigated the activation of semantically related concepts during the processing of environmental sounds and spoken words. Participants heard environmental sounds such as barking or spoken words such as "puppy" while viewing visual arrays with objects such as a bone (semantically related competitor) and candle (unrelated distractor). In Experiment 1, a puppy (target) was also included in the visual array; in Experiment 2, it was not. During both types of auditory stimuli, competitors were fixated significantly more than distractors, supporting the coactivation of semantically related concepts in both cases; comparisons of the two types of auditory stimuli also revealed significantly larger effects with environmental sounds than spoken words. We discuss implications of these results for theories of semantic knowledge.
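Visual world analyses of this kind typically compare the proportion of fixations landing on each interest area (target, competitor, distractor) across a trial. A minimal sketch on invented fixation samples; the counts are illustrative, not the study's data.

```python
from collections import Counter

# Invented fixation samples: one interest-area label per eye-tracker time bin
samples = ["target"] * 45 + ["competitor"] * 34 + ["distractor"] * 21

counts = Counter(samples)
total = sum(counts.values())
proportions = {area: n / total for area, n in counts.items()}

# The semantic competitor (e.g., bone) draws more looks than the distractor (candle)
print(proportions["competitor"], proportions["distractor"])  # → 0.34 0.21
```

In the real analysis these proportions are computed per time bin and compared statistically, which is how the competitor-over-distractor advantage reported above is established.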
Affiliation(s)
- Josef Toon
- Division of Psychology, De Montfort University

9.
Abstract
Human information processing is incredibly fast and flexible. In order to survive, the human brain has to integrate information from various sources and derive a coherent interpretation, ideally leading to adequate behavior. In experimental setups, such integration phenomena are often investigated in terms of cross-modal association effects. Interestingly, to date, most studies of cross-modal association effects using linguistic stimuli have shown that single words can influence the processing of non-linguistic stimuli, and vice versa. In the present study, we were particularly interested in the extent to which linguistic input beyond single words influences the processing of non-linguistic stimuli; in our case, environmental sounds. Participants read sentences in either an affirmative or a negated version, for example: "The dog does (not) bark". Subsequently, participants listened to a sound either matching or mismatching the affirmative version of the sentence ('woof' vs. 'meow', respectively). In line with previous studies, we found a clear N400-like effect during sound perception following affirmative sentences. Interestingly, this effect was identically present following negated sentences, and the negation operator did not modulate the cross-modal association effect observed between the content words of the sentence and the sound. In summary, these results suggest that negation is not incorporated during information processing in a way that influences word-sound association effects.

10. Hendrickson K, Love T, Walenski M, Friend M. The organization of words and environmental sounds in the second year: Behavioral and electrophysiological evidence. Dev Sci 2019; 22:e12746. PMID: 30159958. PMCID: PMC6294716. DOI: 10.1111/desc.12746.
Abstract
The majority of research examining early auditory-semantic processing and organization is based on studies of meaningful relations between words and referents. However, a thorough investigation into the fundamental relation between acoustic signals and meaning requires an understanding of how meaning is associated with both lexical and non-lexical sounds. Indeed, it is unknown how meaningful auditory information that is not lexical (e.g., environmental sounds) is processed and organized in the young brain. To capture the structure of semantic organization for words and environmental sounds, we record event-related potentials as 20-month-olds view images of common nouns (e.g., dog) while hearing words or environmental sounds that match the picture (e.g., "dog" or barking), that are within-category violations (e.g., "cat" or meowing), or that are between-category violations (e.g., "pen" or scribbling). Results show both words and environmental sounds exhibit larger negative amplitudes to between-category violations relative to matches. Unlike words, which show a greater negative response early and consistently to within-category violations, such an effect for environmental sounds occurs late in semantic processing. Thus, as in adults, the young brain represents semantic relations between words and between environmental sounds, though it more readily differentiates semantically similar words compared to environmental sounds.
Affiliation(s)
- Kristi Hendrickson
- Department of Communication Sciences & Disorders, University of Iowa, USA
- Tracy Love
- Center for Research in Language, University of California, San Diego, USA
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA

11. Manfredi M, Cohn N, De Araújo Andreoli M, Boggio PS. Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative. Brain Lang 2018; 185:1-8. PMID: 29986168. DOI: 10.1016/j.bandl.2018.06.008.
Abstract
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated with events in visual narratives, i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting), which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect; however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier N400 latency than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.
Affiliation(s)
- Mirella Manfredi
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Neil Cohn
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Mariana De Araújo Andreoli
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
- Paulo Sergio Boggio
- Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
|
12
|
Foreign-accented speech modulates linguistic anticipatory processes. Neuropsychologia 2016; 85:245-55. [DOI: 10.1016/j.neuropsychologia.2016.03.022] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2015] [Revised: 03/18/2016] [Accepted: 03/21/2016] [Indexed: 11/23/2022]
|