1. Pomiechowska B, Takács S, Volein Á, Parise E. The nature of label-induced categories: preverbal infants represent surface features and category symbols. Proc Biol Sci 2024; 291:20241433. [PMID: 39561796] [PMCID: PMC11576112] [DOI: 10.1098/rspb.2024.1433]
Abstract
Humans categorize objects not only based on perceptual features (e.g. red, rounded) but also function (e.g. used to transport people). Category membership can be communicated via labelling (e.g. 'apple', 'vehicle'). While it is well established that even preverbal infants rely on labels to learn categories, the nature of those categories remains unclear: whether they simply contain sets of visual features diagnostic of category membership, or whether they additionally contain abstract category markers or symbols (e.g. linguistic, in the form of category labels, or non-linguistic). To address this question, we first used labelling to teach two novel object categories, each composed of unfamiliar, visually unrelated objects, to adults and nine-month-olds. Then, we assessed categorization in an electroencephalography (EEG) category-oddball task. Both adults and infants displayed stronger neural responses to the infrequent category, which, in the absence of visual features shared by all category members, indicates that the categories they set up contained feature-independent category markers. Well before language production starts, labels help infants discover categories without relying on perceptual similarities across objects and build category representations with summary elements that may be critical for the development of abstract thought.
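As a concrete illustration of the category-oddball logic described in this abstract, here is a minimal numpy sketch that computes a frequent-vs-infrequent difference wave from simulated epoched EEG; the array shapes, trial counts, and 80/20 split are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical epoched EEG: (trials, channels, timepoints), with roughly
# 80% "frequent"-category and 20% "infrequent"-category trials, as in an
# oddball design.
n_trials, n_channels, n_times = 200, 32, 300
epochs = rng.normal(size=(n_trials, n_channels, n_times))
is_oddball = rng.random(n_trials) < 0.2

# Average trials within each condition to obtain ERPs, then take the
# difference wave (infrequent minus frequent).
erp_frequent = epochs[~is_oddball].mean(axis=0)
erp_oddball = epochs[is_oddball].mean(axis=0)
difference_wave = erp_oddball - erp_frequent  # (channels, timepoints)

# A stronger neural response to the infrequent category appears as a larger
# deflection in the difference wave within the analysis window.
print(np.abs(difference_wave).max())
```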
Affiliation(s)
- Barbara Pomiechowska
- Centre for Developmental Science & Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
- Cognitive Development Center, Department of Cognitive Science, Central European University, Quellenstrasse 51, Vienna 1100, Austria
- Szilvia Takács
- Cognitive Development Center, Department of Cognitive Science, Central European University, Quellenstrasse 51, Vienna 1100, Austria
- Ágnes Volein
- Cognitive Development Center, Department of Cognitive Science, Central European University, Quellenstrasse 51, Vienna 1100, Austria
- Eugenio Parise
- Cognitive Development Center, Department of Cognitive Science, Central European University, Quellenstrasse 51, Vienna 1100, Austria
- CIMeC - Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Department of Psychology, Lancaster University, Bailrigg, Lancaster LA1 4YF, UK
2. Suffill E, van Paridon J, Lupyan G. Mind Melds: Verbal Labels Induce Greater Representational Alignment. Open Mind (Camb) 2024; 8:950-971. [PMID: 39170795] [PMCID: PMC11338297] [DOI: 10.1162/opmi_a_00153]
Abstract
What determines whether two people represent something in a similar way? We examined the role of verbal labels in promoting representational alignment. Across two experiments, three groups of participants sorted novel shapes from two visually dissimilar categories. Prior to sorting, participants in two of the groups were pre-exposed to the shapes using a simple visual matching task designed to reinforce the visual category structure. In one of these groups, participants additionally heard one of two nonsense category labels accompanying the shapes. Exposure to these redundant labels led people to represent the shapes in a more categorical way, which in turn produced greater alignment between sorters. We found this label-induced alignment despite the two categories being highly visually distinct and despite participants in both pre-exposure conditions receiving identical visual experience with the shapes. Experiment 2 replicated this basic result under even more stringent testing conditions. The results hint at the possibly extensive role that labels may play in aligning people's mental representations.
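One way to make "representational alignment" concrete is to correlate participants' item co-assignment matrices from the sorting task. The sketch below uses invented pile assignments and mean pairwise Spearman correlation; the paper's own alignment measure may differ.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def coassignment(labels):
    """Item-by-item matrix: 1 where two items were sorted into the same pile."""
    labels = np.asarray(labels)
    return (labels[:, None] == labels[None, :]).astype(float)

def alignment(sorts):
    """Mean pairwise Spearman correlation of the upper triangles of the
    sorters' co-assignment matrices."""
    iu = np.triu_indices(len(sorts[0]), k=1)
    vecs = [coassignment(s)[iu] for s in sorts]
    return float(np.mean([spearmanr(a, b)[0] for a, b in combinations(vecs, 2)]))

# Hypothetical sorts of six shapes by three participants (pile IDs arbitrary).
print(alignment([[0, 0, 0, 1, 1, 1],    # two clean piles
                 [2, 2, 2, 5, 5, 5],    # same partition, different IDs
                 [0, 0, 1, 1, 1, 0]]))  # partially misaligned sorter
```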
3. Frugarello P, Rusconi E, Job R. The label-feedback effect is influenced by target category in visual search. PLoS One 2024; 19:e0306736. [PMID: 39088399] [PMCID: PMC11293709] [DOI: 10.1371/journal.pone.0306736]
Abstract
The label-feedback hypothesis states that language can modulate visual processing. In particular, hearing or reading aloud target names (labels) speeds up performance in visual search tasks by facilitating target detection, and this advantage is often measured against a condition where the target name is shown visually (i.e. via the same modality as the search task). The current study conceptually complements and expands previous investigations. The effect of a multimodal label presentation (audio+visual, AV) in a visual search task is compared to that of a multimodal control (white noise+visual, NV) and two unimodal controls (audio-only, A; visual-only, V). The name of a category (i.e. a label at the superordinate level) is used as a cue, instead of the more commonly used target name (a basic-level label), with targets belonging to one of three categories: garments, improper weapons, and proper weapons. These categories vary in their structure, improper weapons being an ad hoc (i.e. context-dependent) category, unlike proper weapons and garments. The preregistered analysis shows an overall facilitation of visual search performance in the AV condition compared to the NV condition, confirming that the label-feedback effect cannot be explained away by multimodal stimulation alone and that it extends to superordinate labels. Moreover, exploratory analyses show that this facilitation is driven by the garments and proper weapons categories rather than by improper weapons. Thus, the superordinate label-feedback effect is modulated by the structural properties of a category. These findings are consistent with the idea that the AV condition prompts an "up-regulation" of the label, a requirement for enhancing the label's beneficial effects, but not when the label refers to an ad hoc category. They also highlight the peculiar status of the category of improper weapons and set it apart from that of proper weapons.
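To illustrate the central AV-vs-NV contrast, here is a simulated within-participant comparison of mean search RTs. The study's preregistered analysis is more elaborate than a paired t-test, and every number below is made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-participant mean correct RTs (ms) with an audio+visual
# (AV) label versus a white-noise+visual (NV) control.
rt_nv = rng.normal(650, 60, size=40)
rt_av = rt_nv - rng.normal(25, 15, size=40)  # simulate AV facilitation

# Within-participant contrast: positive NV-AV differences mean faster
# search with the multimodal label.
t, p = stats.ttest_rel(rt_av, rt_nv)
print(f"mean facilitation = {np.mean(rt_nv - rt_av):.1f} ms, t = {t:.2f}, p = {p:.2g}")
```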
Affiliation(s)
- Paolo Frugarello
- Department of Psychology and Cognitive Science, University of Trento, Rovereto (Trento), Italy
- Centre of Security and Crime Sciences, University of Trento – University of Verona, Trento, Italy
- Elena Rusconi
- Department of Psychology and Cognitive Science, University of Trento, Rovereto (Trento), Italy
- Centre of Security and Crime Sciences, University of Trento – University of Verona, Trento, Italy
- Remo Job
- Department of Psychology and Cognitive Science, University of Trento, Rovereto (Trento), Italy
4. Gao M, Zhu W, Drewes J. The temporal dynamics of conscious and unconscious audio-visual semantic integration. Heliyon 2024; 10:e33828. [PMID: 39055801] [PMCID: PMC11269866] [DOI: 10.1016/j.heliyon.2024.e33828]
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli that were either visible or suppressed from awareness through continuous flash suppression. We found that, under visible conditions, spoken words elicited audio-visual semantic effects over a longer range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so with congruent stimuli. Spoken words presented in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite was true for (near-)simultaneous presentations. Congruent spoken words also yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect still occurred with suppressed visual stimuli, although without significant variation in temporal pattern between the auditory types. These findings indicate that: 1. Semantically congruent auditory input can enhance visual processing performance, even when the visual stimulus is imperceptible to conscious awareness. 2. The temporal dynamics are contingent on the type of auditory input only when the visual stimulus is visible. 3. Audio-visual semantic integration requires sufficient time for processing auditory information.
Affiliation(s)
- Mingjie Gao
- School of Information Science, Yunnan University, Kunming, China
- Weina Zhu
- School of Information Science, Yunnan University, Kunming, China
- Jan Drewes
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
5. Williams JR, Störmer VS. Cutting Through the Noise: Auditory Scenes and Their Effects on Visual Object Processing. Psychol Sci 2024; 35:814-824. [PMID: 38889285] [DOI: 10.1177/09567976241237737]
Abstract
Despite the intuitive feeling that our visual experience is coherent and comprehensive, the world is full of ambiguous and indeterminate information. Here we explore how the visual system might take advantage of ambient sounds to resolve this ambiguity. Young adults (ns = 20-30) were tasked with identifying an object slowly fading in through visual noise while a task-irrelevant sound played. We found that participants demanded more visual information when the auditory object was incongruent with the visual object compared to when it was not. Auditory scenes, which are only probabilistically related to specific objects, produced similar facilitation even for unheard objects (e.g., a bench). Notably, these effects traverse categorical and specific auditory and visual-processing domains as participants performed across-category and within-category visual tasks, underscoring cross-modal integration across multiple levels of perceptual processing. To summarize, our study reveals the importance of audiovisual interactions to support meaningful perceptual experiences in naturalistic settings.
Affiliation(s)
- Viola S Störmer
- Department of Psychology, University of California, San Diego
- Department of Psychological and Brain Sciences, Dartmouth College
6. Yu L, Wang W, Li Z, Ren Y, Liu J, Jiao L, Xu Q. Alexithymia modulates emotion concept activation during facial expression processing. Cereb Cortex 2024; 34:bhae071. [PMID: 38466112] [DOI: 10.1093/cercor/bhae071]
Abstract
Alexithymia is characterized by difficulties in emotional information processing. However, the underlying reasons for emotional processing deficits in alexithymia are not fully understood. The present study aimed to investigate the mechanism underlying emotional deficits in alexithymia. Using the Toronto Alexithymia Scale-20, we recruited college students with high (n = 24) or low (n = 24) alexithymia. Participants judged the emotional consistency of facial expressions and contextual sentences while their event-related potentials were recorded. Behaviorally, the high alexithymia group showed longer response times than the low alexithymia group in processing facial expressions. The event-related potential results showed that the high alexithymia group had more negative-going N400 amplitudes than the low alexithymia group in the incongruent condition. More negative N400 amplitudes were also associated with slower responses to facial expressions. Furthermore, machine learning analyses based on N400 amplitudes could distinguish the high alexithymia group from the low alexithymia group in the incongruent condition. Overall, these findings suggest worse facial emotion perception in the high alexithymia group, potentially due to difficulty in spontaneously activating emotion concepts. Our findings have important implications for affective science and for clinical intervention in alexithymia-related affective disorders.
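A sketch of the kind of cross-validated decoding the machine-learning analysis implies, run on simulated N400 amplitudes; the electrode count, classifier, and group means are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Hypothetical features: mean N400 amplitude (microvolts) at nine assumed
# centro-parietal electrodes in the incongruent condition, per participant.
X_high = rng.normal(-2.0, 1.0, size=(24, 9))  # more negative-going N400
X_low = rng.normal(-1.2, 1.0, size=(24, 9))
X = np.vstack([X_high, X_low])
y = np.array([1] * 24 + [0] * 24)  # 1 = high alexithymia, 0 = low

# Cross-validated classification of group membership from N400 features.
clf = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```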
Affiliation(s)
- Linwei Yu
- Department of Psychology, Ningbo University, Ningbo 315211, China
- Weihan Wang
- Department of Psychology, Ningbo University, Ningbo 315211, China
- Zhiwei Li
- Department of Psychology, Ningbo University, Ningbo 315211, China
- Yi Ren
- Department of Psychology, Ningbo University, Ningbo 315211, China
- Jiabin Liu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing 100875, China
- Lan Jiao
- Department of Psychology, Ningbo University, Ningbo 315211, China
- Qiang Xu
- Department of Psychology, Ningbo University, Ningbo 315211, China
7. Bothe R, Trouillet L, Elsner B, Mani N. Words and arbitrary actions in early object categorization: weak evidence for a word advantage. R Soc Open Sci 2024; 11:230648. [PMID: 38384782] [PMCID: PMC10878798] [DOI: 10.1098/rsos.230648]
Abstract
Both words and gestures have been shown to influence object categorization, often even overriding perceptual similarities to cue category membership. However, gestures are often meaningful to infants, whereas words are arbitrarily related to the objects they refer to, more similar to arbitrary actions that can be performed on objects. In this study, we examine how words and arbitrary actions shape category formation. Across three conditions (word cue, action cue, word-action cue), we presented infants (N = 90) with eight videos of single-category objects that varied in colour and other perceptual features. The objects were accompanied by a word and/or an action performed on the object. Infants in the word and action conditions showed a decrease in looking over the course of the familiarization phase, indicating habituation to the category, but infants in the word-action condition did not. At test, infants saw a novel object of the just-learned category and a novel object from another category side-by-side on the screen. There was some evidence for an advantage for words in shaping early object categorization, although we note that this was not robust across analyses.
Affiliation(s)
- Ricarda Bothe
- Psychology of Language, Georg-August University Goettingen, Goettingen, Niedersachsen, Germany
- Leibniz Science Campus 'Primate Cognition', Goettingen, Germany
- Léonie Trouillet
- Developmental Psychology, University of Potsdam, Potsdam, Germany
- Birgit Elsner
- Developmental Psychology, University of Potsdam, Potsdam, Germany
- Nivedita Mani
- Psychology of Language, Georg-August University Goettingen, Goettingen, Niedersachsen, Germany
- Leibniz Science Campus 'Primate Cognition', Goettingen, Germany
8. Yuan L, Novack M, Uttal D, Franconeri S. Language systematizes attention: How relational language enhances relational representation by guiding attention. Cognition 2024; 243:105671. [PMID: 38039798] [DOI: 10.1016/j.cognition.2023.105671]
Abstract
Language can affect cognition, but through what mechanism? Substantial past research has focused on how labeling can elicit categorical representation during online processing. We focus here on a particularly powerful type of language, relational language, and show that relational language can enhance relational representation in children through an embodied attention mechanism. Four-year-old children were given a color-location conjunction task, in which they were asked to encode a two-color square, split either vertically or horizontally (e.g., red on the left, blue on the right), and later recall the same configuration from its mirror reflection. During the encoding phase, children in the experimental condition heard relational language (e.g., "Red is on the left of blue"), while those in the control condition heard generic non-relational language (e.g., "Look at this one, look at it closely"). At recall, children in the experimental condition were more successful at choosing the correct relational representation between the two colors compared to the control group. Moreover, they exhibited different attention patterns, as predicted by the attention-shift account of relational representation (Franconeri et al., 2012). To test the sustained effect of language and the role of attention, during the second half of the study the experimental condition was given generic non-relational language. The advantage of the experimental condition was sustained in both behavioral accuracy and signature attention patterns. Overall, our findings suggest that relational language enhances relational representation by guiding learners' attention, and this facilitative effect persists over time even in the absence of language. Implications for how relational language can enhance the learning of relational systems (e.g., mathematics, spatial cognition) by guiding attention are discussed.
Affiliation(s)
- Lei Yuan
- Department of Psychology and Neuroscience, University of Colorado Boulder, USA
- Miriam Novack
- Department of Medical Social Sciences, Northwestern University Feinberg School of Medicine, USA
- David Uttal
- Department of Psychology, Northwestern University, USA
9. Gervits F, Johanson M, Papafragou A. Relevance and the Role of Labels in Categorization. Cogn Sci 2023; 47:e13395. [PMID: 38148613] [DOI: 10.1111/cogs.13395]
Abstract
Language has been shown to influence the ability to form categories. Nevertheless, in most prior work, the effects of language could have been bolstered by the fact that linguistic labels were introduced by the experimenter prior to the categorization task in ways that could have highlighted their relevance for the task. Here, we compared the potency of labels to that of non-linguistic cues in how people categorized novel, perceptually ambiguous natural kinds (e.g., flowers or birds). Importantly, we varied whether these cues were explicitly presented as relevant to the categorization task. In Experiment 1, we compared labels, numbers, and symbols: one group of participants was told to pay attention to these cues because they would be helpful (Relevant condition), a second group was told that the cues were irrelevant and should be ignored (Irrelevant condition), and a third group was told nothing about the cues (Neutral condition). Even though task relevance affected overall reliance on cues during categorization, participants were more likely to use labels to determine category boundaries, compared to numbers or symbols. In Experiments 2 and 3, we replicated and fine-tuned the advantage of labels in more stringent categorization tasks. These results offer novel evidence for the position that labels offer unique indications of category membership compared to non-linguistic cues.
Affiliation(s)
- Felix Gervits
- Department of Linguistics and Cognitive Science, University of Delaware
- Megan Johanson
- Department of Psychological and Brain Sciences, University of Delaware
- Practice Analytics Team, Mayo Clinic, Mankato
10. van Hoef R, Connell L, Lynott D. The effects of sensorimotor and linguistic information on the basic-level advantage. Cognition 2023; 241:105606. [PMID: 37722237] [DOI: 10.1016/j.cognition.2023.105606]
Abstract
The basic-level advantage is one of the best-known effects in human categorisation. Traditional accounts argue that basic-level categories present a maximally informative, or entry, level into a taxonomic organisation of concepts in semantic memory. However, these explanations are not fully compatible with the most recent views on the structure of the conceptual system, such as linguistic-simulation accounts, which emphasise the dual role of sensorimotor information (i.e., perception-action experience of the world) and linguistic distributional information (i.e., the statistical distribution of words in language) in conceptual processing. In four preregistered word→picture categorisation studies, we examined whether novel measures of sensorimotor and linguistic distance contribute to the basic-level advantage in categorical decision-making. Results showed that overlap in sensorimotor experience between a category concept and a member concept (e.g., animal→dog) predicted RT and accuracy at least as well as a traditional division into discrete subordinate, basic, and superordinate taxonomic levels. Furthermore, linguistic distributional information contributed to capturing effects of graded category structure where typicality ratings did not. Finally, when image label production frequency was taken into account (i.e., how often people actually produced specific labels for images), linguistic distributional information predicted RT and accuracy above and beyond sensorimotor information. These findings add to our understanding of how sensorimotor-linguistic theories of the conceptual system can explain categorisation behaviour.
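The regression logic (continuous sensorimotor and linguistic distances predicting categorisation RT) can be sketched with ordinary least squares, as below; the paper's analyses are richer (e.g., mixed-effects models with accuracy as well as RT), and all coefficients here are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical predictors for category->member pairs (e.g., animal->dog):
# sensorimotor distance and linguistic distributional distance, plus RTs.
n = 500
sensorimotor = rng.uniform(0, 1, n)
linguistic = rng.uniform(0, 1, n)
rt = 600 + 120 * sensorimotor + 80 * linguistic + rng.normal(0, 50, n)

# Ordinary least squares with both continuous predictors, in place of a
# discrete subordinate/basic/superordinate factor.
X = np.column_stack([np.ones(n), sensorimotor, linguistic])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
print(dict(zip(["intercept", "sensorimotor", "linguistic"], beta.round(1))))
```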
Affiliation(s)
- Rens van Hoef
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Louise Connell
- Department of Psychology, Lancaster University, Lancaster, United Kingdom; Department of Psychology, Maynooth University, Maynooth, Ireland
- Dermot Lynott
- Department of Psychology, Maynooth University, Maynooth, Ireland
11. Niimi R, Saiki T, Yokosawa K. Auditory scene context facilitates visual recognition of objects in consistent visual scenes. Atten Percept Psychophys 2023; 85:1267-1275. [PMID: 36977906] [DOI: 10.3758/s13414-023-02699-0]
Abstract
Visual object recognition is facilitated by contextually consistent scenes in which the object is embedded. Scene gist representations extracted from the scenery backgrounds yield this scene consistency effect. Here we examined whether the scene consistency effect is specific to the visual domain or if it is crossmodal. Through four experiments, the accuracy of the naming of briefly presented visual objects was assessed. In each trial, a 4-s sound clip was presented and a visual scene containing the target object was briefly shown at the end of the sound clip. In a consistent sound condition, an environmental sound associated with the scene in which the target object typically appears was presented (e.g., forest noise for a bear target object). In an inconsistent sound condition, a sound clip contextually inconsistent with the target object was presented (e.g., city noise for a bear). In a control sound condition, a nonsensical sound (sawtooth wave) was presented. When target objects were embedded in contextually consistent visual scenes (Experiment 1: a bear in a forest background), consistent sounds increased object-naming accuracy. In contrast, sound conditions did not show a significant effect when target objects were embedded in contextually inconsistent visual scenes (Experiment 2: a bear in a pedestrian crossing background) or in a blank background (Experiments 3 and 4). These results suggested that auditory scene context has weak or no direct influence on visual object recognition. It seems likely that consistent auditory scenes indirectly facilitate visual object recognition by promoting visual scene processing.
12. Marti L, Wu S, Piantadosi ST, Kidd C. Latent Diversity in Human Concepts. Open Mind (Camb) 2023; 7:79-92. [PMID: 37416074] [PMCID: PMC10320827] [DOI: 10.1162/opmi_a_00072]
Abstract
Many social and legal conflicts hinge on semantic disagreements. Understanding the origins and implications of these disagreements necessitates novel methods for identifying and quantifying variation in semantic cognition between individuals. We collected conceptual similarity ratings and feature judgements for a variety of words in two domains. We analyzed these data using a non-parametric clustering scheme, as well as an ecological statistical estimator, in order to infer the number of different variants of common concepts that exist in the population. Our results show that at least ten to thirty quantifiably different variants of word meaning exist even for common nouns. Further, people are unaware of this variation and exhibit a strong bias to erroneously believe that other people share their semantics. This highlights conceptual factors that likely interfere with productive political and social discourse.
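The abstract does not name its ecological estimator; Chao1 is a standard richness estimator of this kind and is sketched below on made-up clustering output, purely to show how observed variants can be extrapolated to the population.

```python
import numpy as np
from collections import Counter

def chao1(variant_counts):
    """Chao1 lower-bound estimate of population richness from the number of
    participants exhibiting each observed variant."""
    counts = np.asarray(variant_counts)
    s_obs = int((counts > 0).sum())  # variants observed at least once
    f1 = int((counts == 1).sum())    # singletons
    f2 = int((counts == 2).sum())    # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected form
    return s_obs + f1 ** 2 / (2.0 * f2)

# Hypothetical cluster assignments of 30 participants' concepts of one word;
# each cluster is a candidate "variant" of the word's meaning.
assignments = [0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 4, 4, 5, 0, 1,
               6, 0, 2, 1, 7, 4, 0, 8, 2, 1, 0, 9, 2, 1, 0]
counts = list(Counter(assignments).values())
print(f"observed variants: {len(counts)}, Chao1 estimate: {chao1(counts):.1f}")
```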
Affiliation(s)
- Louis Marti
- University of California, Berkeley, Berkeley, CA
- Shengyi Wu
- University of California, Berkeley, Berkeley, CA
- Celeste Kidd
- University of California, Berkeley, Berkeley, CA
13. Meyer AM, Snider SF, Tippett DC, Saloma R, Turkeltaub PE, Hillis AE, Friedman RB. Baseline Conceptual-Semantic Impairment Predicts Longitudinal Treatment Effects for Anomia in Primary Progressive Aphasia and Alzheimer's Disease. Aphasiology 2023; 38:205-236. [PMID: 38283767] [PMCID: PMC10809875] [DOI: 10.1080/02687038.2023.2183075]
Abstract
Background: An individual's diagnostic subtype may fail to predict the efficacy of a given type of treatment for anomia. Classification by conceptual-semantic impairment may be more informative.
Aims: This study examined the effects of conceptual-semantic impairment and diagnostic subtype on anomia treatment effects in primary progressive aphasia (PPA) and Alzheimer's disease (AD).
Methods & Procedures: At baseline, the picture and word versions of the Pyramids and Palm Trees and Kissing and Dancing tests were used to measure conceptual-semantic processing. Based on norming conducted with unimpaired older adults, participants were classified as impaired on both the picture and word versions (i.e., modality-general conceptual-semantic impairment), the picture version (Objects or Actions) only (i.e., visual-conceptual impairment), the word version (Nouns or Verbs) only (i.e., lexical-semantic impairment), or neither version (i.e., no impairment). Following baseline testing, a lexical treatment and a semantic treatment were administered to all participants. The treatment stimuli consisted of nouns and verbs that were consistently named correctly at baseline (prophylaxis items) and/or nouns and verbs that were consistently named incorrectly at baseline (remediation items). Naming accuracy was measured at baseline and at 3, 7, 11, 14, 18, and 21 months.
Outcomes & Results: Compared to baseline naming performance, lexical and semantic treatments both improved naming accuracy for treated remediation nouns and verbs. For prophylaxis items, lexical treatment was effective for both nouns and verbs, and semantic treatment was effective for verbs, but the pattern of results was different for nouns: the effect of semantic treatment was initially nonsignificant or marginally significant, but it was significant beginning at 11 months, suggesting that the effects of prophylactic semantic treatment may become more apparent as the disorder progresses. Furthermore, the interaction between baseline conceptual-semantic impairment and treatment condition (lexical vs. semantic) was significant for verb prophylaxis items at 3 and 18 months, and for noun prophylaxis items at 14 and 18 months.
Conclusions: The pattern of results suggested that individuals with modality-general conceptual-semantic impairment at baseline are more likely to benefit from lexical treatment, while individuals with unimpaired conceptual-semantic processing at baseline are more likely to benefit from semantic treatment as the disorder progresses. In contrast to conceptual-semantic impairment, diagnostic subtype did not typically predict the treatment effects.
Affiliation(s)
- Aaron M. Meyer
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Sarah F. Snider
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Ryan Saloma
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Peter E. Turkeltaub
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Rhonda B. Friedman
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
14. Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. [PMID: 36100821] [PMCID: PMC9950240] [DOI: 10.3758/s13421-022-01355-6]
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or a summation of independent retrieval cues. We tested two predictions for audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis shows that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditory encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory. Whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
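The independence benchmark that the modeling approach tests against can be written in two lines: if auditory and visual retrieval cues operate independently, audio-visual recognition is predicted by probability summation of the unimodal rates. All numbers below are made up.

```python
# Hypothetical unimodal recognition rates for a set of scenes.
p_visual = 0.70    # visual-only recognition
p_auditory = 0.50  # auditory-only recognition

# Probability summation: recognition succeeds if either independent cue does.
p_av_independent = p_visual + p_auditory - p_visual * p_auditory
print(f"independence prediction: {p_av_independent:.2f}")

# Only an observed audio-visual rate reliably above this benchmark would
# require integrated (rather than independent) memory representations.
p_av_observed = 0.84  # made-up observation
print("exceeds independence benchmark:", p_av_observed > p_av_independent)
```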
15. Online mouse cursor trajectories distinguish phonological activation by linguistic and nonlinguistic sounds. Psychon Bull Rev 2023; 30:362-372. [PMID: 35882722] [PMCID: PMC9971122] [DOI: 10.3758/s13423-022-02153-6]
Abstract
Four online mouse cursor tracking experiments (total N = 208) examined the activation of phonological representations by linguistic and nonlinguistic auditory stimuli. Participants hearing spoken words (e.g., "bell") produced less direct mouse cursor trajectories toward corresponding pictures or text when visual arrays also included phonologically related competitors (e.g., belt) as compared with unrelated distractors (e.g., hose), but no such phonological competition was observed during environmental sounds (e.g., the ring of a bell). While important similarities have been observed between spoken words and environmental sounds, these experiments provide novel mouse cursor evidence that environmental sounds directly activate conceptual knowledge without needing to engage linguistic knowledge, contrasting with spoken words. Implications for theories of conceptual knowledge are discussed.
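Trajectory directness of the kind measured here is commonly quantified as the maximum perpendicular deviation from the straight start-to-end path; below is a sketch with invented coordinates (the paper's exact trajectory measure may differ).

```python
import numpy as np

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a 2D cursor path from the straight
    line joining its start and end; larger values mean a less direct movement
    (e.g., attraction toward a phonological competitor)."""
    traj = np.asarray(trajectory, dtype=float)
    start, end = traj[0], traj[-1]
    dx, dy = end - start
    # Perpendicular distance of each sample from the start-end line
    # (z-component of the 2D cross product, normalized by the line length).
    d = np.abs(dx * (traj[:, 1] - start[1]) - dy * (traj[:, 0] - start[0]))
    return float(d.max() / np.hypot(dx, dy))

# Hypothetical cursor paths from screen bottom-center to a top-corner target.
direct = [(0, 0), (-25, 50), (-50, 100), (-75, 150), (-100, 200)]
curved = [(0, 0), (20, 50), (10, 100), (-40, 150), (-100, 200)]
print(max_deviation(direct), max_deviation(curved))  # 0.0 vs > 0
```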
16. What makes foods and flavours fit? Consumer perception of (un)usual product combinations. Food Qual Prefer 2022. [DOI: 10.1016/j.foodqual.2022.104680]
17. Suffill E, Schonberg C, Vlach HA, Lupyan G. Children's knowledge of superordinate words predicts subsequent inductive reasoning. J Exp Child Psychol 2022; 221:105449. [PMID: 35550281] [PMCID: PMC10078766] [DOI: 10.1016/j.jecp.2022.105449]
Abstract
Children's early language knowledge, typically assessed using standardized word comprehension tests or through parental reports, has been positively linked to a variety of later outcomes, from reasoning tests to academic performance to income and health. To better understand the mechanisms behind these links, we examined whether knowledge of certain "seed words", words with high inductive potential, is positively associated with inductive reasoning. This hypothesis stems from prior work on the effects of language on categorization suggesting that certain words may be important for helping people to deploy categorical hypotheses. Using a longitudinal design, we assessed 2- to 4-year-old children's (N = 36) knowledge of 333 words of varying levels of generality (e.g., toy vs. pinwheel, number vs. five). We predicted that, adjusting for overall vocabulary, knowledge of more general words (e.g., toy, number) would predict children's performance on inductive reasoning tasks administered 6 months later (i.e., a subset of the Stanford-Binet Intelligence Scales for Early Childhood-Fifth Edition [SB-5] and the Woodcock-Johnson Tests of Cognitive Abilities [WJ] concept formation tasks). This prediction was confirmed for one of the measures of inductive reasoning (the SB-5 but not the WJ), notably the task considered to be less reliant on language. Although our experimental design demonstrates only a correlational relationship between seed word knowledge and inductive reasoning ability, our results are consistent with the possibility that early knowledge of certain seed words facilitates performance on putatively nonverbal reasoning tasks.
Affiliation(s)
- Ellise Suffill
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA; Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA; Department of Psychology, University of Vienna, Vienna, 1010, Austria
- Christina Schonberg
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA; Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
- Haley A Vlach
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706, USA
18. Emotional violation of faces, emojis, and words: Evidence from N400. Biol Psychol 2022; 173:108405. [DOI: 10.1016/j.biopsycho.2022.108405]
19. Wing EA, Burles F, Ryan JD, Gilboa A. The structure of prior knowledge enhances memory in experts by reducing interference. Proc Natl Acad Sci U S A 2022; 119:e2204172119. [PMID: 35737844] [PMCID: PMC9245613] [DOI: 10.1073/pnas.2204172119]
Abstract
The influence of prior knowledge on memory is ubiquitous, making the specific mechanisms of this relationship difficult to disentangle. Here, we show that expert knowledge produces a fundamental shift in the way that interitem similarity (i.e., the perceived resemblance between items in a set) biases episodic recognition. Within a group of expert birdwatchers and matched controls, we characterized the psychological similarity space for a set of well-known local species and a set of less familiar, nonlocal species. In experts, interitem similarity was influenced most strongly by taxonomic features, whereas in controls, similarity judgments reflected bird color. In controls, perceived episodic oldness during a recognition memory task increased along with measures of global similarity between items, consistent with classic models of episodic recognition. Surprisingly, for experts, high global similarity did not drive oldness signals. Instead, for local birds, memory tracked the availability of species-level name knowledge, whereas for nonlocal birds, it was mediated by the organization of generalized conceptual space. These findings demonstrate that episodic memory in experts can benefit from detailed subcategory knowledge or, lacking that, from the overall relational structure of concepts. Expertise reshapes psychological similarity space, helping to resolve mnemonic separation challenges arising from high interitem overlap. Thus, even in the absence of knowledge about item-specific details or labels, the presence of generalized knowledge appears to support episodic recognition in domains of expertise by altering the typical relationship between psychological similarity and memory.
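The "classic models" alluded to here are global-matching models, in which perceived oldness grows with summed similarity between a probe and all studied items. A toy sketch in an assumed 2D psychological space (the parameters and geometry are invented):

```python
import numpy as np

def familiarity(probe, studied, c=2.0):
    """GCM-style global match: summed exponential similarity of a probe to
    all studied items in a psychological feature space."""
    d = np.linalg.norm(np.asarray(studied) - np.asarray(probe), axis=1)
    return float(np.exp(-c * d).sum())

rng = np.random.default_rng(4)

# Hypothetical space in which studied birds cluster (e.g., by color for
# controls); a new probe near the cluster accrues high global similarity
# and therefore feels "old".
studied = rng.normal(0.0, 0.3, size=(20, 2))
near_probe = np.array([0.1, 0.0])
far_probe = np.array([2.0, 2.0])
print(familiarity(near_probe, studied) > familiarity(far_probe, studied))  # True

# Expertise reshapes this space (taxonomic axes stretch, color shrinks),
# changing which probes land near studied items, and hence what feels old.
```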
Affiliation(s)
- Erik A. Wing
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Ford Burles
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Asaf Gilboa
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, ON M5S 3G3, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
20. Predictive Feedback, Early Sensory Representations, and Fast Responses to Predicted Stimuli Depend on NMDA Receptors. J Neurosci 2021; 41:10130-10147. [PMID: 34732525] [DOI: 10.1523/jneurosci.1311-21.2021]
Abstract
Learned associations between stimuli allow us to model the world and make predictions, crucial for efficient behavior (e.g., hearing a siren, we expect to see an ambulance and quickly make way). While there are theoretical and computational frameworks for prediction, the circuit- and receptor-level mechanisms are unclear. Using high-density EEG, Bayesian modeling, and machine learning, we show that inferred "causal" relationships between stimuli and frontal alpha activity account for reaction times (a proxy for predictions) on a trial-by-trial basis in an audiovisual delayed match-to-sample task which elicited predictions. Predictive β feedback activated sensory representations in advance of predicted stimuli. Low-dose ketamine, an NMDAR blocker, but not the control drug dexmedetomidine, perturbed behavioral indices of predictions, their representation in higher-order cortex, feedback to posterior cortex, and pre-activation of sensory templates in higher-order sensory cortex. This study suggests that predictions depend on alpha activity in higher-order cortex, β feedback, and NMDARs, and that ketamine blocks access to learned predictive information.
SIGNIFICANCE STATEMENT: We learn the statistical regularities around us, creating associations between sensory stimuli. These associations can be exploited by generating predictions, which enable fast and efficient behavior. When predictions are perturbed, it can negatively influence perception and even contribute to psychiatric disorders, such as schizophrenia. Here we show that the frontal lobe generates predictions and sends them to posterior brain areas, to activate representations of predicted sensory stimuli before their appearance. Oscillations in neural activity (α and β waves) are vital for these predictive mechanisms. The drug ketamine blocks predictions and the underlying mechanisms. This suggests that the generation of predictions in the frontal lobe, and the feedback pre-activating sensory representations in advance of stimuli, depend on NMDARs.
21. Linguistic labels cue biological motion perception and misperception. Sci Rep 2021; 11:17239. [PMID: 34446746] [PMCID: PMC8390742] [DOI: 10.1038/s41598-021-96649-1]
Abstract
Linguistic labels exert a particularly strong top-down influence on perception. The potency of this influence has been ascribed to their ability to evoke category-diagnostic features of concepts. In doing this, they facilitate the formation of a perceptual template concordant with those features, effectively biasing perceptual activation towards the labelled category. In this study, we employ a cueing paradigm with moving, point-light stimuli across three experiments, in order to examine how the number of biological motion features (form and kinematics) encoded in lexical cues modulates the efficacy of lexical top-down influence on perception. We find that the magnitude of lexical influence on biological motion perception rises as a function of the number of biological motion-relevant features carried by both cue and target. When lexical cues encode multiple biological motion features, this influence is robust enough to mislead participants into reporting erroneous percepts, even when a masking level yielding high performance is used.
22. Marian V, Hayakawa S, Schroeder SR. Cross-Modal Interaction Between Auditory and Visual Input Impacts Memory Retrieval. Front Neurosci 2021; 15:661477. [PMID: 34381328] [PMCID: PMC8350348] [DOI: 10.3389/fnins.2021.661477]
Abstract
How we perceive and learn about our environment is influenced by our prior experiences and existing representations of the world. Top-down cognitive processes, such as attention and expectations, can alter how we process sensory stimuli, both within a modality (e.g., effects of auditory experience on auditory perception) and across modalities (e.g., effects of visual feedback on sound localization). Here, we demonstrate that experience with different types of auditory input (spoken words vs. environmental sounds) modulates how humans remember concurrently presented visual objects. Participants viewed a series of line drawings (e.g., a picture of a cat) displayed in one of four quadrants while listening to a word or sound that was congruent (e.g., "cat" or the sound of a cat), incongruent (e.g., "motorcycle" or the sound of a motorcycle), or neutral (e.g., a meaningless pseudoword or a tonal beep) relative to the picture. Following the encoding phase, participants were presented with the original drawings plus new drawings and asked to indicate whether each one was "old" or "new." If a drawing was designated as "old," participants then reported where it had been displayed. We find that words and sounds both elicit more accurate memory for which objects were previously seen, but only congruent environmental sounds enhance memory for where objects were positioned, despite the fact that the auditory stimuli were not meaningful spatial cues to the objects' locations on the screen. Given that during real-world listening conditions environmental sounds, but not words, reliably originate from the location of their referents, listening to sounds may attune the visual dorsal pathway to facilitate attention and memory for objects' locations. We propose that audio-visual associations in the environment and in our previous experience jointly contribute to visual memory, strengthening visual memory through exposure to auditory input.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Scott R. Schroeder
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, NY, United States
23. Satpute AB, Lindquist KA. At the Neural Intersection Between Language and Emotion. Affect Sci 2021; 2:207-220. [PMID: 36043170] [PMCID: PMC9382959] [DOI: 10.1007/s42761-021-00032-2]
Abstract
What role does language play in emotion? Behavioral research shows that emotion words such as "anger" and "fear" alter emotion experience, but questions remain about mechanism. Here, we review the neuroscience literature to examine whether neural processes associated with semantics are also involved in emotion. Our review suggests that brain regions involved in the semantic processing of words: (i) are engaged during experiences of emotion, (ii) coordinate with brain regions involved in affect to create emotions, (iii) hold representational content for emotion, and (iv) may be necessary for constructing emotional experience. We relate these findings to four theoretical views of the relationship between language and emotion, which we refer to as "non-interactive," "interactive," "constitutive," and "deterministic." We conclude that the findings are most consistent with the interactive and constitutive views, with initial evidence particularly suggestive of the constitutive view. We close with several future directions that may help test hypotheses of the constitutive view.
Affiliation(s)
- Ajay B. Satpute
- Department of Psychology, Northeastern University, 360 Huntington Ave, 125 NI, Boston, MA 02115 USA
- Kristen A. Lindquist
- Department of Psychology and Neuroscience, University of North Carolina, Chapel Hill, NC USA
24. Viganò S, Rubino V, Soccio AD, Buiatti M, Piazza M. Grid-like and distance codes for representing word meaning in the human brain. Neuroimage 2021; 232:117876. [PMID: 33636346] [DOI: 10.1016/j.neuroimage.2021.117876]
Abstract
Relational information about items in memory is thought to be represented in our brain thanks to an internal comprehensive model, also referred to as a "cognitive map". In the human neuroimaging literature, two signatures of bi-dimensional cognitive maps have been reported: the grid-like code and the distance-dependent code. While these kinds of representation were previously observed during spatial navigation and, more recently, during processing of perceptual stimuli, it is still an open question whether they also underlie the representation of the most basic items of language: words. Here we taught human participants the meaning of novel words as arbitrary labels for a set of audiovisual objects varying orthogonally in size and sound. The novel words were therefore conceivable as points in a navigable 2D map of meaning. While subjects performed a word comparison task, we recorded their brain activity using functional magnetic resonance imaging (fMRI). By applying a combination of representational similarity and fMRI-adaptation analyses, we found evidence of (i) a grid-like code, in the right postero-medial entorhinal cortex, representing the relative angular positions of words in the word space, and (ii) a distance-dependent code, in medial prefrontal, orbitofrontal, and mid-cingulate cortices, representing the Euclidean distance between words. Additionally, we found evidence that the brain also separately represents the single dimensions of word meaning: their implied size, encoded in visual areas, and their implied sound, in Heschl's gyrus/insula. These results support the idea that the meaning of words, when they are organized along two dimensions, is represented in the human brain across multiple maps of different dimensionality.
SIGNIFICANCE STATEMENT: How do we represent the meaning of words and perform comparative judgements on them in our brain? According to influential theories, concepts are conceivable as points on an internal map (where distance represents similarity) that, like physical space, can be mentally navigated. Here we use fMRI to show that when humans compare newly learnt words, they recruit a grid-like and a distance code, the same types of neural codes that, in mammals, represent relations between locations in the environment and support physical navigation between them.
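The grid-like code is conventionally tested as six-fold (60°-periodic) modulation of the signal by movement direction through the stimulus space. Below is a self-contained numpy sketch of the estimation step with a simulated entorhinal signal; real analyses estimate the orientation and test the modulation on independent data halves.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated trials: each word comparison implies a "movement" through the
# 2D word space at angle theta; the signal is modulated with 60-degree
# periodicity around an unknown grid orientation phi.
n = 240
theta = rng.uniform(0, 2 * np.pi, n)
phi_true = np.deg2rad(17.0)
signal = 0.5 * np.cos(6 * (theta - phi_true)) + rng.normal(0, 0.3, n)

# Regress on cos(6*theta) and sin(6*theta), then recover the orientation
# and amplitude of the hexadirectional modulation from the two betas.
X = np.column_stack([np.ones(n), np.cos(6 * theta), np.sin(6 * theta)])
b0, b_cos, b_sin = np.linalg.lstsq(X, signal, rcond=None)[0]
phi_hat = np.arctan2(b_sin, b_cos) / 6.0
amplitude = np.hypot(b_cos, b_sin)
print(f"estimated grid orientation: {np.rad2deg(phi_hat):.1f} deg, "
      f"amplitude: {amplitude:.2f}")
```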
Affiliation(s)
- Simone Viganò
- CIMEC - Center for Mind/Brain Sciences, University of Trento, Italy
- Valerio Rubino
- CIMEC - Center for Mind/Brain Sciences, University of Trento, Italy
- Marco Buiatti
- CIMEC - Center for Mind/Brain Sciences, University of Trento, Italy
- Manuela Piazza
- CIMEC - Center for Mind/Brain Sciences, University of Trento, Italy
25. Consistent verbal labels promote odor category learning. Cognition 2021; 206:104485. [DOI: 10.1016/j.cognition.2020.104485]
26. Lupyan G, Abdel Rahman R, Boroditsky L, Clark A. Effects of Language on Visual Perception. Trends Cogn Sci 2020; 24:930-944. [PMID: 33012687] [DOI: 10.1016/j.tics.2020.08.005]
Abstract
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, we discuss how effects of language on perception naturally arise from the interactive and predictive nature of perception.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA
- Andy Clark
- University of Sussex, Brighton, UK; Macquarie University, Sydney, Australia
27.
Abstract
A foundation of human cognition is the flexibility with which we can represent any object as either a unique individual (my dog Fred) or a member of an object category (dog, animal). This conceptual flexibility is supported by language; the way we name an object is instrumental to our construal of that object as an individual or a category member. Evidence from a new recognition memory task reveals that infants are sensitive to this principled link between naming and object representation by age 12 mo. During training, all infants (n = 77) viewed four distinct objects from the same object category, each introduced in conjunction with either the same novel noun (Consistent Name condition), a distinct novel noun for each object (Distinct Names condition), or the same sine-wave tone sequence (Consistent Tone condition). At test, infants saw each training object again, presented in silence along with a new object from the same category. Infants in the Consistent Name condition showed poor recognition memory at test, suggesting that consistently applied names focused them primarily on commonalities among the named objects at the expense of distinctions among them. Infants in the Distinct Names condition recognized three of the four objects, suggesting that applying distinct names enhanced infants' encoding of the distinctions among the objects. Infants in the control Consistent Tone condition recognized only the object they had most recently seen. Thus, even for infants just beginning to speak their first words, the way in which an object is named guides infants' encoding, representation, and memory for that object.
Collapse
|
28
|
Sirri L, Guerra E, Linnert S, Smith ES, Reid V, Parise E. Infants' conceptual representations of meaningful verbal and nonverbal sounds. PLoS One 2020; 15:e0233968. [PMID: 32512583 PMCID: PMC7279894 DOI: 10.1371/journal.pone.0233968] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Accepted: 05/15/2020] [Indexed: 11/17/2022] Open
Abstract
In adults, words are more effective than sounds at activating conceptual representations. We aimed to replicate these findings and extend them to infants. In a series of experiments using an eye-tracking object-recognition task suitable for both adults and infants, participants heard either a word (e.g. cow) or an associated sound (e.g. mooing) followed by an image illustrating a target (e.g. cow) and a distracter (e.g. telephone). The results showed that adults reacted faster when the visual object matched the auditory stimulus, and faster still in the word condition relative to the associated-sound condition. Infants, however, did not show a similar pattern of eye movements: only 18-month-olds, but not 9- or 12-month-olds, were equally fast at recognizing the target object in both conditions. Looking times, however, were longer for associated sounds, suggesting that processing sounds elicits greater allocation of attention. Our findings suggest that the advantage of words over associated sounds in activating conceptual representations emerges at a later stage during language development.
Collapse
Affiliation(s)
- Louah Sirri
- Department of Education, Manchester Metropolitan University, Manchester, United Kingdom; Department of Psychology, Lancaster University, Lancaster, United Kingdom
| | - Ernesto Guerra
- Institute of Education and Center for Advanced Research in Education, Universidad de Chile, Santiago, Chile
| | - Szilvia Linnert
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
| | - Eleanor S Smith
- Department of Experimental Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Vincent Reid
- Department of Psychology, Lancaster University, Lancaster, United Kingdom; School of Psychology, University of Waikato, Waikato, New Zealand
| | - Eugenio Parise
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
| |
Collapse
|
29
|
Abstract
Seeing an object is a natural source for learning about the object's configuration. We show that language can also shape our knowledge about visual objects. We investigated sign language, which enables deaf individuals to communicate through hand movements with as much expressive power as any other natural language. A few signs represent objects in a specific orientation. Sign-language users (signers) recognized visual objects faster when oriented as in the sign, and this match in orientation elicited specific brain responses in signers, as measured by event-related potentials (ERPs). Further analyses suggested that signers' responsiveness to object orientation derived from changes in the visual object representations induced by the signs. Our results also show that language facilitates discrimination between objects of the same kind (e.g., different cars), an effect never before reported with spoken languages. By focusing on sign language, we could better characterize the impact of language (a uniquely human ability) on visual object processing.
Collapse
|
30
|
Andrä C, Mathias B, Schwager A, Macedonia M, von Kriegstein K. Learning Foreign Language Vocabulary with Gestures and Pictures Enhances Vocabulary Memory for Several Months Post-Learning in Eight-Year-Old School Children. EDUCATIONAL PSYCHOLOGY REVIEW 2020. [DOI: 10.1007/s10648-020-09527-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
The integration of gestures and pictures into pedagogy has demonstrated potential for improving adults’ learning of foreign language (L2) vocabulary. However, the relative benefits of gestures and pictures on children’s L2 vocabulary learning have not been formally evaluated. In three experiments, we investigated the effects of gesture-based and picture-based learning on 8-year-old primary school children’s acquisition of novel L2 vocabulary. In each experiment, German children were trained over 5 consecutive days on auditorily presented, concrete and abstract, English vocabulary. In Experiments 1 and 2, gesture enrichment (auditorily presented L2 words accompanied with self-performed gestures) was compared with a non-enriched baseline condition. In Experiment 3, gesture enrichment was compared with picture enrichment (auditorily presented words accompanied with pictures). Children performed vocabulary recall and translation tests at 3 days, 2 months, and 6 months post-learning. Both gesture and picture enrichment enhanced children’s test performance compared with non-enriched learning. Benefits of gesture and picture enrichment persisted up to 6 months after training and occurred for both concrete and abstract words. Gesture-enriched learning was hypothesized to boost learning outcomes more than picture-enriched learning on the basis of previous findings in adults. Unexpectedly, however, we observed similar benefits of gesture and picture enrichment on children’s L2 learning. These findings suggest that both gestures and pictures enhance children’s L2 learning and that performance benefits are robust over long timescales.
Collapse
|
31
|
Abstract
Does the format in which we experience our moment-to-moment thoughts vary from person to person? Many people claim that their thinking takes place in an inner voice and that using language outside of interpersonal communication is a regular experience for them. Other people disagree. We present a novel measure, the Internal Representation Questionnaire (IRQ), designed to assess people's subjective mode of internal representations, and to quantify individual differences in "modes of thinking" along multiple factors in a single questionnaire. Exploratory factor analysis identified four factors: Internal Verbalization, Visual Imagery, Orthographic Imagery, and Representational Manipulation. All four factors were positively correlated with one another, but accounted for unique predictions. We describe the properties of the IRQ and report a test of its ability to predict patterns of interference in a speeded word-picture verification task. Taken together, the results suggest that self-reported differences in how people internally represent their thoughts relate to differences in processing familiar images and written words.
Collapse
|
32
|
Fourtassi A, Frank MC. How optimal is word recognition under multimodal uncertainty? Cognition 2020; 199:104092. [PMID: 32135386 DOI: 10.1016/j.cognition.2019.104092] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Revised: 09/23/2019] [Accepted: 10/03/2019] [Indexed: 11/16/2022]
Abstract
Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities, and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments where recognition is made under various sources of uncertainty. We found that participants use both auditory and visual cues to recognize novel words. When the signal is not distorted by environmental noise, participants weight the auditory and visual cues optimally, that is, according to the relative reliability of each modality. In contrast, when one modality has noise added to it, human perceivers systematically prefer the unperturbed modality to a greater extent than the optimal model does. This work extends the literature on perceptual cue combination to the case of word recognition in a referential context. In addition, this context offers a link to the study of multimodal information in word meaning learning.
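The "optimal" weighting referred to here is the standard inverse-variance (reliability-weighted) rule for cue combination. Below is a minimal sketch of that rule, offered as an illustration rather than as the authors' actual ideal observer model; all numbers are invented.

```python
def combine_cues(x_aud, var_aud, x_vis, var_vis):
    """Reliability-weighted (inverse-variance) cue combination: each cue's
    weight is proportional to its reliability, 1/variance."""
    w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_vis)
    w_vis = 1 - w_aud
    estimate = w_aud * x_aud + w_vis * x_vis
    # The combined variance never exceeds that of the better single cue.
    combined_var = 1 / (1 / var_aud + 1 / var_vis)
    return estimate, combined_var

# A noisy auditory cue is down-weighted relative to a clean visual one:
est, var = combine_cues(x_aud=0.8, var_aud=4.0, x_vis=0.2, var_vis=1.0)
print(est, var)  # 0.32 0.8 -- the estimate sits closer to the visual cue
```

On this rule, the paper's finding is that listeners weight like the model when neither channel is artificially degraded, but over-trust the clean channel when one is.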
Collapse
Affiliation(s)
| | - Michael C Frank
- Department of Psychology, Stanford University, United States
| |
Collapse
|
33
|
Toon J, Kukona A. Activating Semantic Knowledge During Spoken Words and Environmental Sounds: Evidence From the Visual World Paradigm. Cogn Sci 2020; 44:e12810. [PMID: 31960505 DOI: 10.1111/cogs.12810] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Revised: 10/09/2019] [Accepted: 11/11/2019] [Indexed: 11/30/2022]
Abstract
Two visual world experiments investigated the activation of semantically related concepts during the processing of environmental sounds and spoken words. Participants heard environmental sounds such as barking or spoken words such as "puppy" while viewing visual arrays with objects such as a bone (semantically related competitor) and candle (unrelated distractor). In Experiment 1, a puppy (target) was also included in the visual array; in Experiment 2, it was not. During both types of auditory stimuli, competitors were fixated significantly more than distractors, supporting the coactivation of semantically related concepts in both cases; comparisons of the two types of auditory stimuli also revealed significantly larger effects with environmental sounds than spoken words. We discuss implications of these results for theories of semantic knowledge.
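In visual world studies of this kind, the dependent measure is typically the proportion of gaze samples falling on each object type in successive time bins. A hypothetical sketch of that computation (data layout and sampling rate are assumptions, not from the paper):

```python
import numpy as np

def fixation_proportions(samples, labels=('competitor', 'distractor'),
                         bin_ms=50, rate_hz=500):
    """Proportion of gaze samples on each object type per time bin.

    samples: array (n_trials, n_samples) of strings naming the fixated object
    """
    step = int(rate_hz * bin_ms / 1000)          # samples per bin
    n_bins = samples.shape[1] // step
    props = {}
    for lab in labels:
        hits = (samples == lab)[:, :n_bins * step]
        # Average over trials and over samples within each bin.
        props[lab] = hits.reshape(hits.shape[0], n_bins, step).mean(axis=(0, 2))
    return props  # e.g. the competitor curve rising above the distractor curve
```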
Collapse
Affiliation(s)
- Josef Toon
- Division of Psychology, De Montfort University
| | | |
Collapse
|
34
|
Gandolfo M, Downing PE. Causal Evidence for Expression of Perceptual Expectations in Category-Selective Extrastriate Regions. Curr Biol 2019; 29:2496-2500.e3. [DOI: 10.1016/j.cub.2019.06.024] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Revised: 06/06/2019] [Accepted: 06/10/2019] [Indexed: 11/15/2022]
|
35
|
Abstract
Some research suggests typicality is stable, other research suggests it is malleable, and some suggests it is unstable. The two ends of this continuum-stability and instability-make somewhat contradictory claims. Stability claims that typicality is determined by our experience of decontextualized feature correlations in the world and is therefore fairly consistent. Instability claims that typicality depends on context and is therefore extremely inconsistent. After reviewing evidence for these two claims, we argue that typicality's stability and instability are not contradictory but rather complementary when they are understood as operating on two different levels. Stability reflects how information gets encoded into semantic memory-what we call structural typicality. Instability reflects the task-dependent recruitment of semantic knowledge-what we call functional typicality. Finally, we speculate on potential factors that may mediate between the recruitment of structural or functional typicality.
Collapse
|
36
|
van Bergen G, Flecken M, Wu R. Rapid target selection of object categories based on verbs: Implications for language-categorization interactions. Psychophysiology 2019; 56:e13395. [PMID: 31115079 DOI: 10.1111/psyp.13395] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Revised: 03/13/2019] [Accepted: 04/26/2019] [Indexed: 11/29/2022]
Abstract
Although much is known about how nouns facilitate object categorization, very little is known about how verbs (e.g., posture verbs such as stand or lie) facilitate object categorization. Native Dutch speakers are a unique population with which to investigate this issue, because the configurational categories distinguished by staan (to stand) and liggen (to lie) are inherent in everyday Dutch. Using an ERP component (N2pc), four experiments demonstrate that selection of posture-verb categories is rapid (between 220 and 320 ms). The effect was attenuated, though still present, when the perceptual distinction between categories was removed. A similar attenuated effect was obtained in native English speakers, for whom the category distinction is less familiar, and when category labels were implicit for native Dutch speakers. Our results are among the first to demonstrate that category search based on verbs can be rapid, although extensive linguistic experience and explicit labels may not be necessary to facilitate categorization in this case.
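For readers unfamiliar with the N2pc: it is conventionally computed as the contralateral-minus-ipsilateral voltage difference at lateral posterior electrodes, often PO7/PO8, in roughly the window reported above. A sketch under those conventions; the electrode pair and data layout are assumptions, not taken from the paper.

```python
import numpy as np

def n2pc_amplitude(epochs, times, target_side, window=(0.220, 0.320)):
    """Illustrative N2pc: mean contralateral-minus-ipsilateral voltage
    at lateral posterior electrodes within a time window.

    epochs:      dict with 'PO7' (left) and 'PO8' (right) arrays,
                 each of shape (n_trials, n_times)
    times:       1-D array of sample times in seconds
    target_side: array of 'left'/'right' strings, one per trial
    """
    in_window = (times >= window[0]) & (times <= window[1])
    left = (np.asarray(target_side) == 'left')[:, None]
    contra = np.where(left, epochs['PO8'], epochs['PO7'])  # opposite hemisphere
    ipsi = np.where(left, epochs['PO7'], epochs['PO8'])
    return (contra - ipsi)[:, in_window].mean()  # negative values indicate an N2pc
```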
Collapse
Affiliation(s)
- Geertje van Bergen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Monique Flecken
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Rachel Wu
- Department of Psychology, University of California, Riverside, Riverside, California
| |
Collapse
|
37
|
Flecken M, van Bergen G. Can the English stand the bottle like the Dutch? Effects of relational categories on object perception. Cogn Neuropsychol 2019; 37:271-287. [DOI: 10.1080/02643294.2019.1607272] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Monique Flecken
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| | - Geertje van Bergen
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
| |
Collapse
|
38
|
Dove G. Language as a disruptive technology: abstract concepts, embodiment and the flexible mind. Philos Trans R Soc Lond B Biol Sci 2019; 373:rstb.2017.0135. [PMID: 29915003 DOI: 10.1098/rstb.2017.0135] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/12/2017] [Indexed: 11/12/2022] Open
Abstract
A growing body of evidence suggests that cognition is embodied and grounded. Abstract concepts, though, remain a significant theoretical challenge. A number of researchers have proposed that language makes an important contribution to our capacity to acquire and employ concepts, particularly abstract ones. In this essay, I critically examine this suggestion and ultimately defend a version of it. I argue that a successful account of how language augments cognition should emphasize its symbolic properties and incorporate a view of embodiment that recognizes the flexible, multimodal and task-related nature of action, emotion and perception systems. On this view, language is an ontogenetically disruptive cognitive technology that expands our conceptual reach. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
Collapse
Affiliation(s)
- Guy Dove
- Department of Philosophy, University of Louisville, Louisville, KY, USA
| |
Collapse
|
39
|
Fini C, Borghi AM. Sociality to Reach Objects and to Catch Meaning. Front Psychol 2019; 10:838. [PMID: 31068854 PMCID: PMC6491622 DOI: 10.3389/fpsyg.2019.00838] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Accepted: 03/29/2019] [Indexed: 11/13/2022] Open
Abstract
Sociality influences the acquisition and representation of both concrete and abstract concepts, but in different ways. Here we propose that sociality is crucial during the acquisition of abstract concepts but less so for concrete concepts, which have a bounded perceptual referent and can be learned more autonomously. For abstract concepts, by contrast, human relations are pivotal for mastering complex meanings. Once acquired, concrete words can act as tools that modify our sensorimotor representation of the surrounding environment. Indeed, when pronouncing a word whose referent is distant from us, we implicitly assume that, thanks to the contribution of others, the object becomes reachable; this expands our perception of near bodily space. Abstract concepts modify our sensorimotor representation of space only in the earlier phases of their acquisition, specifically when the child represents an interlocutor as a real, physical "ready to help" actor who can assist her in forming categories and in explaining the meanings of words that lack a concrete referent. Once abstract concepts are acquired, they can work as social tools: the mechanism of social metacognition (awareness of our concepts and of our need for the help of others) can implicitly evoke the presence of a "ready to help" actor, as a predisposition to ask others for information to fill knowledge gaps.
Collapse
Affiliation(s)
- Chiara Fini
- Department of Dynamic and Clinical Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
| | - Anna M. Borghi
- Department of Dynamic and Clinical Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
| |
Collapse
|
40
|
Zhou Z, Firestone C. Humans can decipher adversarial images. Nat Commun 2019; 10:1334. [PMID: 30902973 PMCID: PMC6430776 DOI: 10.1038/s41467-019-08931-6] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Accepted: 01/21/2019] [Indexed: 02/04/2023] Open
Abstract
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; but beyond this, they serve as candidate models for human vision itself. However, unlike humans, CNNs are “fooled” by adversarial examples—nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine’s classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images is robustly related: In 8 experiments on 5 prominent and diverse adversarial image sets, human subjects correctly anticipated the machine’s preferred label over relevant foils—even for images described as “totally unrecognizable to human eyes”. Human intuition may be a surprisingly reliable guide to machine (mis)classification—with consequences for minds and machines alike.
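"Above chance" here is the kind of claim a simple binomial test against the forced-choice guessing rate makes precise. The numbers below are invented for illustration; they are not the paper's data.

```python
from scipy.stats import binomtest

# Hypothetical: 88 of 120 subjects pick the CNN's preferred label in a
# 2-alternative forced choice, where guessing predicts p = 0.5.
result = binomtest(k=88, n=120, p=0.5, alternative='greater')
print(result.pvalue)  # far below .05: agreement with the machine exceeds chance
```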
Collapse
Affiliation(s)
- Zhenglong Zhou
- Department of Psychological & Brain Sciences, Johns Hopkins University, 3400 N Charles St., Baltimore, MD, 21218, USA
| | - Chaz Firestone
- Department of Psychological & Brain Sciences, Johns Hopkins University, 3400 N Charles St., Baltimore, MD, 21218, USA.
| |
Collapse
|
41
|
Schmidt TT, Miller TM, Blankenburg F, Pulvermüller F. Neuronal correlates of label facilitated tactile perception. Sci Rep 2019; 9:1606. [PMID: 30733578 PMCID: PMC6367477 DOI: 10.1038/s41598-018-37877-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2018] [Accepted: 12/14/2018] [Indexed: 11/30/2022] Open
Abstract
It is a long-standing question in neurolinguistics to what extent language can have a causal effect on perception. A recent behavioural study reported that participants improved their discrimination ability for Braille-like tactile stimuli after one week of implicit association training in which language stimuli were co-presented redundantly with the tactile stimuli. In that experiment, subjects were exposed twice a day for 1 h to the joint presentation of tactile stimuli delivered to the fingertip and auditorily presented pseudowords. Their discrimination ability improved only for those tactile stimuli that were consistently paired with pseudowords, but not for those that were discordantly paired with different pseudowords. A causal effect of verbal labels on tactile perception has thereby been demonstrated under controlled laboratory conditions. This raises the question of what the neuronal mechanisms underlying this implicit learning effect are. Here, we present fMRI data collected before and after the aforementioned behavioural learning to test for changes in brain connectivity as the underlying mechanism of the observed behavioural effects. The comparison of pre- and post-training data revealed a language-driven increase in connectivity strength between auditory and secondary somatosensory cortices and the hippocampus, a region related to association learning.
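As a generic illustration of how such a pre/post connectivity change is commonly quantified (assumed here; the paper's exact pipeline may differ): correlate ROI time series, Fisher-z transform, and test the post-minus-pre difference across subjects.

```python
import numpy as np

def fisher_z_connectivity(ts_a, ts_b):
    """Fisher-z transformed Pearson correlation between two ROI time
    series, the usual summary statistic in seed-based connectivity."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

# Placeholder time series standing in for auditory-cortex and hippocampus ROIs.
rng = np.random.default_rng(0)
pre_a, pre_b = rng.standard_normal((2, 200))
post_a, post_b = rng.standard_normal((2, 200))

# Training-related change for one subject; a group-level test on these
# differences asks whether connectivity increased after learning.
delta_z = fisher_z_connectivity(post_a, post_b) - fisher_z_connectivity(pre_a, pre_b)
```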
Collapse
Affiliation(s)
- Timo Torsten Schmidt
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195, Berlin, Germany.
| | - Tally McCormick Miller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099, Berlin, Germany
| | - Felix Blankenburg
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099, Berlin, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099, Berlin, Germany
| |
Collapse
|
42
|
Hoemann K, Barrett LF. Concepts dissolve artificial boundaries in the study of emotion and cognition, uniting body, brain, and mind. Cogn Emot 2019; 33:67-76. [PMID: 30336722 PMCID: PMC6399041 DOI: 10.1080/02699931.2018.1535428] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Revised: 09/17/2018] [Accepted: 10/09/2018] [Indexed: 12/14/2022]
Abstract
Theories of emotion have often maintained artificial boundaries: for instance, that cognition and emotion are separable, and that an emotion concept is separable from the emotional events that comprise its category (e.g. "fear" is distinct from instances of fear). Over the past several years, research has dissolved these artificial boundaries, suggesting instead that conceptual construction is a domain-general process: a process by which the brain makes meaning of the world. The brain constructs emotion concepts, but also cognitions and perceptions, all in the service of guiding action. In this view, concepts are multimodal constructions, dynamically prepared from a set of highly variable instances. This approach obviates old questions (e.g. how does cognition regulate emotion?) but generates new ones (e.g. how does a brain learn emotion concepts?). In this paper, we review this constructionist, predictive coding account of emotion, considering its implications for health and well-being, culture and development.
Collapse
Affiliation(s)
- Katie Hoemann
- Department of Psychology, Northeastern University, Boston, MA, USA
| | - Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, USA
- Massachusetts General Hospital/Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| |
Collapse
|
43
|
Abstract
People have long pondered the evolution of language and the origin of words. Here, we investigate how conventional spoken words might emerge from imitations of environmental sounds. Does the repeated imitation of an environmental sound gradually give rise to more word-like forms? In what ways do these forms resemble the original sounds that motivated them (i.e. exhibit iconicity)? Participants played a version of the children's game 'Telephone'. The first generation of participants imitated recognizable environmental sounds (e.g. glass breaking, water splashing). Subsequent generations imitated the previous generation of imitations for a maximum of eight generations. The results showed that the imitations became more stable and word-like, and later imitations were easier to learn as category labels. At the same time, even after eight generations, both spoken imitations and their written transcriptions could be matched above chance to the category of environmental sound that motivated them. These results show how repeated imitation can create progressively more word-like forms while continuing to retain a resemblance to the original sound that motivated them, and speak to the possible role of human vocal imitation in explaining the origins of at least some spoken words.
Collapse
Affiliation(s)
- Pierce Edmiston
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
| | - Marcus Perlman
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
| | - Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, 1202 West Johnson Street, Madison, WI 53703, USA
| |
Collapse
|
44
|
No matter how: Top-down effects of verbal and semantic category knowledge on early visual perception. Cogn Affect Behav Neurosci 2019; 19:859-876. [DOI: 10.3758/s13415-018-00679-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
45
|
Brady TF, Störmer VS, Shafer-Skelton A, Williams JR, Chapman AF, Schill HM. Scaling up visual attention and visual working memory to the real world. Psychol Learn Motiv 2019. [DOI: 10.1016/bs.plm.2019.03.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
|
46
|
Peterson MA. Past experience and meaning affect object detection: A hierarchical Bayesian approach. Psychol Learn Motiv 2019. [DOI: 10.1016/bs.plm.2019.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
47
|
Greene MR. The information content of scene categories. Psychol Learn Motiv 2019. [DOI: 10.1016/bs.plm.2019.03.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
48
|
Mirković J, Altmann GTM. Unfolding meaning in context: The dynamics of conceptual similarity. Cognition 2018; 183:19-43. [PMID: 30408707 DOI: 10.1016/j.cognition.2018.10.018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 10/22/2018] [Accepted: 10/25/2018] [Indexed: 11/19/2022]
Abstract
How are relationships between concepts affected by the interplay between short-term contextual constraints and long-term conceptual knowledge? Across two studies we investigate the consequences of changes in visual context for the dynamics of conceptual processing. Participants' eye movements were tracked as they viewed a visual depiction of, e.g., a canary in a birdcage (Experiment 1), or a canary and three unrelated objects, each in its own quadrant (Experiment 2). In both studies participants heard either a semantically and contextually similar "robin" (a bird; similar size), an equally semantically similar but not contextually similar "stork" (a bird; bigger than a canary, incompatible with the birdcage), or unrelated "tent". The changing patterns of fixations across time indicated, first, that the visual context strongly influenced the eye movements: in the context of a birdcage, early on (by word offset) hearing "robin" engendered more looks to the canary than hearing "stork" or "tent" (which engendered the same number of looks), unlike in the context of unrelated objects (in which case "robin" and "stork" engendered equivalent looks to the canary, and more than did "tent"). Second, within 500 ms post-word-offset, eye movements in both experiments converged onto a common pattern (more looks to the canary after "robin" than after "stork", and for both more than after "tent"). We interpret these findings as indicative of the dynamics of activation within semantic memory accessed via pictures and via words, reflecting the complex interaction between systems representing context-independent and context-dependent conceptual knowledge, driven by predictive processing.
Collapse
Affiliation(s)
- Jelena Mirković
- York St John University, School of Psychological and Social Sciences, Lord Mayor's Walk, York YO31 7EX, UK; University of York, Heslington, York YO10 5DD, UK.
| | - Gerry T M Altmann
- University of Connecticut, Department of Psychological Sciences, 406 Babbidge Road, Unit 1020, Storrs, CT 06269-1020, USA
| |
Collapse
|
49
|
|
50
|
Words affect visual perception by activating object shape representations. Sci Rep 2018; 8:14156. [PMID: 30237542 PMCID: PMC6148044 DOI: 10.1038/s41598-018-32483-2] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Accepted: 09/07/2018] [Indexed: 11/08/2022] Open
Abstract
Linguistic labels are known to facilitate object recognition, yet the mechanism of this facilitation is not well understood. Previous psychophysical studies have suggested that words guide visual perception by activating information about visual object shape. Here we aimed to test this hypothesis at the neural level, and to tease apart the visual and semantic contributions of words to visual object recognition. We created a set of object pictures from two semantic categories with varying shapes, and obtained subjective ratings of their shape and category similarity. We then conducted a word-picture matching experiment, while recording participants’ EEG, and tested whether the shape or the category similarity between the word’s referent and the target picture explained the spatiotemporal pattern of the picture-evoked responses. The results show that hearing a word activates representations of its referent’s shape, which interact with the visual processing of a subsequent picture within 100 ms of its onset. Furthermore, non-visual categorical information carried by the word affects visual processing at later stages. These findings advance our understanding of the interaction between language and visual perception and provide insights into how the meanings of words are represented in the brain.
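Testing whether shape or category similarity "explains the spatiotemporal pattern" is in the spirit of representational similarity analysis; the sketch below is a generic version under that assumption, not necessarily the authors' exact method.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist

def rsa_timecourse(eeg, shape_rdm, category_rdm):
    """At each time point, correlate the neural dissimilarity structure
    across picture conditions with model RDMs built from shape and
    category similarity ratings.

    eeg:        array (n_conditions, n_channels, n_times)
    shape_rdm, category_rdm: condensed model RDMs of length
                n_conditions * (n_conditions - 1) / 2
    """
    n_times = eeg.shape[-1]
    rs_shape, rs_cat = np.empty(n_times), np.empty(n_times)
    for t in range(n_times):
        # Pairwise dissimilarity of condition-wise scalp patterns at time t.
        neural_rdm = pdist(eeg[:, :, t], metric='correlation')
        rs_shape[t] = spearmanr(neural_rdm, shape_rdm).correlation
        rs_cat[t] = spearmanr(neural_rdm, category_rdm).correlation
    return rs_shape, rs_cat  # early shape effects vs. later category effects
```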
Collapse
|