1
Mao W, Li G, Li X. Improved Re-Parameterized Convolution for Wildlife Detection in Neighboring Regions of Southwest China. Animals (Basel) 2024; 14:1152. [PMID: 38672300] [PMCID: PMC11047598] [DOI: 10.3390/ani14081152]
Abstract
To autonomously detect wildlife in images captured by camera traps on a resource-limited platform, and to address challenges such as filtering out photos that contain no target objects and classifying and localizing species in photos that do, we introduce a specialized wildlife object detector tailored for camera traps. This detector is developed using a dataset acquired by the Saola Working Group (SWG) through camera traps deployed in Vietnam and Laos. Utilizing the YOLOv6-N object detection algorithm as its foundation, the detector is enhanced by a tailored optimizer for improved model performance. We deliberately introduce asymmetric convolutional branches to enhance the feature characterization capability of the Backbone network. Additionally, we streamline the Neck and use CIoU loss to improve detection performance. For quantitative deployment, we refine the RepOptimizer to train a pure VGG-style network. Experimental results demonstrate that our proposed method enables the model to achieve 88.3% detection accuracy on the wildlife dataset in this paper. This accuracy is 3.1% higher than YOLOv6-N, and surpasses YOLOv7-T and YOLOv8-N by 5.5% and 2.8%, respectively. The model maintains its detection performance even after quantization to INT8 precision, achieving an inference time of only 6.15 ms per image on the NVIDIA Jetson Xavier NX device. The improvements we introduce excel at recognizing and localizing wildlife in camera-trap images, providing practical solutions to enhance wildlife monitoring and facilitate efficient data acquisition. Our current work represents a significant stride toward a fully automated animal observation system for real-time in-field applications.
Affiliation(s)
- Gang Li
- School of Mathematics and Computer Science, Dali University, Dali 671003, China; (W.M.); (X.L.)
2
Hauptman M, Elli G, Pant R, Bedny M. Neural specialization for 'visual' concepts emerges in the absence of vision. bioRxiv [Preprint] 2024:2023.08.23.552701. [PMID: 37662234] [PMCID: PMC10473738] [DOI: 10.1101/2023.08.23.552701]
Abstract
Vision provides a key source of information about many concepts, including 'living things' (e.g., tiger) and visual events (e.g., sparkle). According to a prominent theoretical framework, neural specialization for different conceptual categories is shaped by sensory features, e.g., living things are neurally dissociable from navigable places because living things concepts depend more on visual features. We tested this framework by comparing the neural basis of 'visual' concepts across sighted (n=22) and congenitally blind (n=21) adults. Participants judged the similarity of words varying in their reliance on vision while undergoing fMRI. We compared neural responses to living things nouns (birds, mammals) and place nouns (natural, manmade). In addition, we compared visual event verbs (e.g., 'sparkle') to non-visual events (sound emission, hand motion, mouth motion). People born blind exhibited distinctive univariate and multivariate responses to living things in a temporo-parietal semantic network activated by nouns, including the precuneus (PC). To our knowledge, this is the first demonstration that neural selectivity for living things does not require vision. We additionally observed preserved neural signatures of 'visual' light events in the left middle temporal gyrus (LMTG+). Across a wide range of semantic types, neural representations of sensory concepts develop independent of sensory experience.
Affiliation(s)
- Miriam Hauptman
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Giulia Elli
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Rashi Pant
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Department of Biological Psychology & Neuropsychology, Universität Hamburg, Germany
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
3
Canessa E, Chaigneau SE, Moreno S. Using agreement probability to study differences in types of concepts and conceptualizers. Behav Res Methods 2024; 56:93-112. [PMID: 36471211] [DOI: 10.3758/s13428-022-02030-z]
Abstract
Agreement probability p(a) is a homogeneity measure of lists of properties produced by participants in a Property Listing Task (PLT) for a concept. Agreement probability's mathematical properties allow a rich analysis of property-based descriptions. To illustrate, we use p(a) to delve into the differences between concrete and abstract concepts in sighted and blind populations. Results show that concrete concepts are more homogeneous within the sighted and blind groups than abstract ones (i.e., they exhibit a higher p(a)) and that concrete concepts in the blind group are less homogeneous than in the sighted sample. This supports the idea that listed properties for concrete concepts should be more similar across subjects due to the influence of visual/perceptual information on the learning process. In contrast, abstract concepts are learned based mainly on social and linguistic information, which exhibits more variability among people, thus making the listed properties more dissimilar across subjects. For abstract concepts, the difference in p(a) between the sighted and blind groups is not statistically significant. Though this null result should be interpreted with care, it is expected: abstract concepts should be learned from the same social and linguistic input in both blind and sighted groups, so there is no reason to expect the respective property lists to differ. Finally, we used p(a) to classify concrete and abstract concepts with a good level of certainty. All these analyses suggest that p(a) can be fruitfully used to study data obtained in a PLT.
Affiliation(s)
- Enrique Canessa
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile.
- Faculty of Engineering and Science, Universidad Adolfo Ibáñez, Av. P. Hurtado 750, Lote H, Viña del Mar, Chile.
- Sergio E Chaigneau
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Av. Presidente Errázuriz 3328, Las Condes, Santiago, Chile
- Sebastián Moreno
- Faculty of Engineering and Science, Universidad Adolfo Ibáñez, Av. P. Hurtado 750, Lote H, Viña del Mar, Chile
4
Fernandino L, Conant LL. The Primacy of Experience in Language Processing: Semantic Priming Is Driven Primarily by Experiential Similarity. bioRxiv [Preprint] 2023:2023.03.21.533703. [PMID: 36993310] [PMCID: PMC10055357] [DOI: 10.1101/2023.03.21.533703]
Abstract
The organization of semantic memory, including memory for word meanings, has long been a central question in cognitive science. Although there is general agreement that word meaning representations must make contact with sensory-motor and affective experiences in a non-arbitrary fashion, the nature of this relationship remains controversial. One prominent view proposes that word meanings are represented directly in terms of their experiential content (i.e., sensory-motor and affective representations). Opponents of this view argue that the representation of word meanings reflects primarily taxonomic structure, that is, their relationships to natural categories. In addition, the recent success of language models based on word co-occurrence (i.e., distributional) information in emulating human linguistic behavior has led to proposals that this kind of information may play an important role in the representation of lexical concepts. We used a semantic priming paradigm designed for representational similarity analysis (RSA) to quantitatively assess how well each of these theories explains the representational similarity pattern for a large set of words. Crucially, we used partial correlation RSA to account for intercorrelations between model predictions, which allowed us to assess, for the first time, the unique effect of each model. Semantic priming was driven primarily by experiential similarity between prime and target, with no evidence of an independent effect of distributional or taxonomic similarity. Furthermore, only the experiential models accounted for unique variance in priming after partialling out explicit similarity ratings. These results support experiential accounts of semantic representation and indicate that, despite their good performance at some linguistic tasks, the distributional models evaluated here do not encode the same kind of information used by the human semantic system.
Affiliation(s)
- Leonardo Fernandino
- Department of Neurology, Medical College of Wisconsin
- Department of Biomedical Engineering, Medical College of Wisconsin
5
Kauf C, Ivanova AA, Rambelli G, Chersoni E, She JS, Chowdhury Z, Fedorenko E, Lenci A. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Cogn Sci 2023; 47:e13386. [PMID: 38009752] [DOI: 10.1111/cogs.13386]
Abstract
Word co-occurrence patterns in language corpora contain a surprising amount of conceptual knowledge. Large language models (LLMs), trained to predict words in context, leverage these patterns to achieve impressive performance on diverse semantic tasks requiring world knowledge. An important but understudied question about LLMs' semantic abilities is whether they acquire generalized knowledge of common events. Here, we test whether five pretrained LLMs (from 2018's BERT to 2023's MPT) assign a higher likelihood to plausible descriptions of agent-patient interactions than to minimally different implausible versions of the same event. Using three curated sets of minimal sentence pairs (total n = 1215), we found that pretrained LLMs possess substantial event knowledge, outperforming other distributional language models. In particular, they almost always assign a higher likelihood to possible versus impossible events (The teacher bought the laptop vs. The laptop bought the teacher). However, LLMs show less consistent preferences for likely versus unlikely events (The nanny tutored the boy vs. The boy tutored the nanny). In follow-up analyses, we show that (i) LLM scores are driven by both plausibility and surface-level sentence features, (ii) LLM scores generalize well across syntactic variants (active vs. passive constructions) but less well across semantic variants (synonymous sentences), (iii) some LLM errors mirror human judgment ambiguity, and (iv) sentence plausibility serves as an organizing dimension in internal LLM representations. Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.
Affiliation(s)
- Carina Kauf
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Anna A Ivanova
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology
- Giulia Rambelli
- Department of Modern Languages, Literatures and Cultures, University of Bologna
- Emmanuele Chersoni
- Department of Chinese and Bilingual Studies, Hong Kong Polytechnic University
- Jingyuan Selena She
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
- McGovern Institute for Brain Research, Massachusetts Institute of Technology
- Alessandro Lenci
- Department of Philology, Literature, and Linguistics, University of Pisa
6
Körner A, Castillo M, Drijvers L, Fischer MH, Günther F, Marelli M, Platonova O, Rinaldi L, Shaki S, Trujillo JP, Tsaregorodtseva O, Glenberg AM. Embodied Processing at Six Linguistic Granularity Levels: A Consensus Paper. J Cogn 2023; 6:60. [PMID: 37841668] [PMCID: PMC10573585] [DOI: 10.5334/joc.231]
Abstract
Language processing is influenced by sensorimotor experiences. Here, we review behavioral evidence for embodied and grounded influences in language processing across six linguistic levels of granularity. We examine (a) sub-word features, discussing grounded influences on iconicity (systematic associations between word form and meaning); (b) words, discussing boundary conditions and generalizations for the simulation of color, sensory modality, and spatial position; (c) sentences, discussing boundary conditions and applications of action direction simulation; (d) texts, discussing how the teaching of simulation can improve comprehension in beginning readers; (e) conversations, discussing how multi-modal cues improve turn taking and alignment; and (f) text corpora, discussing how distributional semantic models can reveal how grounded and embodied knowledge is encoded in texts. These approaches are converging on a convincing account of the psychology of language, but at the same time, there are important criticisms of the embodied approach and of specific experimental paradigms. The surest way forward requires the adoption of a wide array of scientific methods. By providing complementary evidence, a combination of multiple methods on various levels of granularity can help us gain a more complete understanding of the role of embodiment and grounding in language processing.
Affiliation(s)
- Anita Körner
- Department of Psychology, University of Kassel, DE
- Mauricio Castillo
- Center for Basic Research in Psychology, University of the Republic of Uruguay, UY
- Fritz Günther
- Department of Psychology, Humboldt-Universität zu Berlin, DE
- Marco Marelli
- Department of Psychology, University of Milano-Bicocca, IT
- Luca Rinaldi
- Department of Brain and Behavioral Sciences, University of Pavia, IT
- Samuel Shaki
- Department of Behavioral Sciences, Ariel University, IL
- James P. Trujillo
- Max Planck Institute for Psycholinguistics, NL
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
- Oksana Tsaregorodtseva
- Department of Psychology, University of Tübingen, DE
- Linguistic Anthropology Laboratory, Tomsk State University, RU
- Arthur M. Glenberg
- Department of Psychology, Arizona State University, US
- Department of Psychology, University of Wisconsin-Madison, US
- INICO, Universidad de Salamanca, ES
7
Dove G. Concepts require flexible grounding. Brain Lang 2023; 245:105322. [PMID: 37713771] [DOI: 10.1016/j.bandl.2023.105322]
Abstract
Research on semantic memory has a problem. On the one hand, a robust body of evidence implicates sensorimotor regions in conceptual processing. On the other hand, a different body of evidence implicates a modality-independent semantic system. The standard solution to this tension is to posit a hub-and-spoke system with modality-independent hubs and modality-specific spokes. In this paper, I argue in support of an alternative view of grounding which remains committed to neural reenactment but emphasizes the multimodal and multilevel nature of the semantic system. This view is built upon the recognition that abstraction is a design feature of concepts. Semantic memory employs hierarchically structured representations to capture different degrees of abstraction. Grounding does not work the way that many embodied approaches have assumed.
Affiliation(s)
- Guy Dove
- Department of Philosophy, University of Louisville, United States.
8
Diachek E, Brown-Schmidt S, Polyn SM. Items Outperform Adjectives in a Computational Model of Binary Semantic Classification. Cogn Sci 2023; 47:e13336. [PMID: 37695844] [DOI: 10.1111/cogs.13336]
Abstract
Semantic memory encompasses one's knowledge about the world. Distributional semantic models, which construct vector spaces with embedded words, are a proposed framework for understanding the representational structure of human semantic knowledge. Unlike some classic semantic models, distributional semantic models lack a mechanism for specifying the properties of concepts, which raises questions regarding their utility for a general theory of semantic knowledge. Here, we develop a computational model of a binary semantic classification task, in which participants judged target words for the referent's size or animacy. We created a family of models, evaluating multiple distributional semantic models, and mechanisms for performing the classification. The most successful model constructed two composite representations for each extreme of the decision axis (e.g., one averaging together representations of characteristically big things and another of characteristically small things). Next, the target item was compared to each composite representation, allowing the model to classify more than 1,500 words with human-range performance and to predict response times. We propose that when making a decision on a binary semantic classification task, humans use task prompts to retrieve instances representative of the extremes on that semantic dimension and compare the probe to those instances. This proposal is consistent with the principles of the instance theory of semantic memory.
Affiliation(s)
- Evgeniia Diachek
- Department of Psychology and Human Development, Peabody College, Vanderbilt University
- Sarah Brown-Schmidt
- Department of Psychology and Human Development, Peabody College, Vanderbilt University
- Sean M Polyn
- Department of Psychology, College of Arts and Sciences, Vanderbilt University
9
Giraud M, Marelli M, Nava E. Embodied language of emotions: Predicting human intuitions with linguistic distributions in blind and sighted individuals. Heliyon 2023; 9:e17864. [PMID: 37539291] [PMCID: PMC10395297] [DOI: 10.1016/j.heliyon.2023.e17864]
Abstract
Recent constructionist theories have suggested that language and sensory experience play a crucial role not only in how individuals categorise emotions but also in how they experience and shape them, helping to acquire abstract concepts that are used to make sense of bodily perceptions associated with specific emotions. Here, we aimed to investigate the role of sensory experience in conceptualising bodily felt emotions by asking 126 Italian blind participants to freely recall in which part of the body they commonly feel specific emotions (N = 15). Participants varied concerning visual experience in terms of blindness onset (i.e., congenital vs late) and degree of visual experience (i.e., total vs partial sensory loss). Using an Italian semantic model to estimate to what extent discrete emotions are associated with body parts in language experience, we found that all participants' reports correlated with the model predictions. Interestingly, blind - and especially congenitally blind - participants' responses were more strongly correlated with the model, suggesting that language might be one of the possible compensative mechanisms for the lack of visual feedback in constructing bodily felt emotions. Our findings present theoretical implications for the study of emotions, as well as potential real-world applications for blind individuals, by revealing, on the one hand, that vision plays an essential role in the construction of felt emotions and the way we talk about our related bodily (emotional) experiences. On the other hand, evidence that blind individuals rely more strongly on linguistic cues suggests that vision is a strong cue to acquire emotional information from the surrounding world, influencing how we experience emotions. 
While our findings do not suggest that blind individuals experience emotions in an atypical or dysfunctional way, they nonetheless support the view that promoting the use of non-visual emotional signs and body language from early on might help blind children develop good emotional awareness as well as good emotion-regulation abilities.
10
Liu Q, Lupyan G. Cross-domain semantic alignment: concrete concepts are more abstract than you think. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210372. [PMID: 36571138] [PMCID: PMC9791493] [DOI: 10.1098/rstb.2021.0372]
Abstract
We can easily evaluate similarities between concepts within semantic domains, e.g. doctor and nurse, or violin and piano. Here, we show that people are also able to evaluate similarities across domains, e.g. aligning doctors with pianos and nurses with violins. We argue that understanding how people do this is important for understanding conceptual organization and the ubiquity of metaphorical language. We asked people to answer questions of the form 'If a nurse were an animal, they would be a(n) …' (Experiments 1 and 2) and asked them to explain the basis for their response (Experiment 1). People converged to a surprising degree (e.g. 20% answered 'cat'). In Experiment 3, we presented people with cross-domain mappings of the form 'If a nurse were an animal, they would be a cat' and asked them to indicate how good each mapping was. The results showed that the targets people chose and their goodness ratings of a given response were predicted by similarity along abstract semantic dimensions such as valence, speed and genderedness. Reliance on such dimensions was also the most common explanation for their responses. Altogether, we show that people can evaluate similarity between very different domains in predictable ways, suggesting either that seemingly concrete concepts are represented along relatively abstract dimensions (e.g. weak-strong) or that they can be readily projected onto these dimensions. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
Affiliation(s)
- Qiawen Liu
- Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
- Gary Lupyan
- Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
11
Dove GO. Rethinking the role of language in embodied cognition. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210375. [PMID: 36571130] [PMCID: PMC9791473] [DOI: 10.1098/rstb.2021.0375]
Abstract
There has been a lot of recent interest in the way that language might enhance embodied cognition. This interest is driven in large part by a growing body of evidence implicating the language system in various aspects of semantic memory-including, but not limited to, its apparent contribution to abstract concepts. In this essay, I develop and defend a novel account of the cognitive role played by language in our concepts. This account relies on the embodied nature of the language system itself, diverges in significant ways from traditional accounts, and is part of a flexible, multimodal and multilevel view of our conceptual system. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
Affiliation(s)
- Guy O. Dove
- Department of Philosophy, University of Louisville, 313 Humanities Building, Louisville, KY 40292, USA
12
Fu Z, Wang X, Wang X, Yang H, Wang J, Wei T, Liao X, Liu Z, Chen H, Bi Y. Different computational relations in language are captured by distinct brain systems. Cereb Cortex 2023; 33:997-1013. [PMID: 35332914] [DOI: 10.1093/cercor/bhac117]
Abstract
A critical way for humans to acquire information is through language, yet whether and how language experience drives specific neural semantic representations is still poorly understood. We considered statistical properties captured by 3 different computational principles of language (simple co-occurrence, network-(graph)-topological relations, and neural-network-vector-embedding relations) and tested the extent to which they can explain the neural patterns of semantic representations, measured by 2 functional magnetic resonance imaging experiments that shared common semantic processes. Distinct graph-topological word relations, and not simple co-occurrence or neural-network-vector-embedding relations, had unique explanatory power for the neural patterns in the anterior temporal lobe (capturing graph-common-neighbors), inferior frontal gyrus, and posterior middle/inferior temporal gyrus (capturing graph-shortest-path). These results were relatively specific to language: they were not explained by sensory-motor similarities and the same computational relations of visual objects (based on visual image database) showed effects in the visual cortex in the picture naming experiment. That is, different topological properties within language and the same topological computations (common-neighbors) for language and visual inputs are captured by different brain regions. These findings reveal the specific neural semantic representations along graph-topological properties of language, highlighting the information type-specific and statistical property-specific manner of semantic representations in the human brain.
Affiliation(s)
- Ze Fu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Huichao Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- School of Systems Science, Beijing Normal University, Beijing 100875, China
- Jiahuan Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Tao Wei
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Xuhong Liao
- School of Systems Science, Beijing Normal University, Beijing 100875, China
- Zhiyuan Liu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Huimin Chen
- School of Journalism and Communication, Tsinghua University, Beijing 100084, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Chinese Institute for Brain Research, Beijing 102206, China
13
Mamus E, Speed LJ, Rissman L, Majid A, Özyürek A. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People. Cogn Sci 2023; 47:e13228. [PMID: 36607157] [PMCID: PMC10078191] [DOI: 10.1111/cogs.13228]
Abstract
The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
Affiliation(s)
- Ezgi Mamus
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics
- Lilia Rissman
- Department of Psychology, University of Wisconsin - Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
- Aslı Özyürek
- Centre for Language Studies, Radboud University; Max Planck Institute for Psycholinguistics; Donders Center for Cognition, Radboud University
14
Speed LJ, Iravani B, Lundström JN, Majid A. Losing the sense of smell does not disrupt processing of odor words. Brain Lang 2022; 235:105200. [PMID: 36347207] [DOI: 10.1016/j.bandl.2022.105200]
Abstract
Whether language is grounded in action and perception has been a key question in cognitive science, yet little attention has been given to the sense of smell. We directly test whether smell is necessary for comprehension of odor language, by comparing language processing in a group of participants with no sense of smell (anosmics) to a group of control participants. We found no evidence for a difference in online comprehension of odor and taste language between anosmics and controls using a lexical decision task and a semantic similarity judgment task, suggesting olfaction is not critical to the comprehension of odor language. Contrary to predictions, anosmics were better at remembering odor words, and rated odor and taste words as more positively valenced than control participants. This study finds no detriment to odor language after losing the sense of smell, supporting the proposal that odor language is not grounded in odor perception.
Affiliation(s)
- Laura J Speed
- Centre for Language Studies, Radboud University, Nijmegen, Netherlands.
- Behzad Iravani
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Johan N Lundström
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Stockholm University Brain Imaging Centre, Stockholm University, Stockholm, Sweden
- Asifa Majid
- Department of Experimental Psychology, University of Oxford, Oxford, UK
15
Roy AM, Bhaduri J, Kumar T, Raj K. WilDect-YOLO: An efficient and robust computer vision-based accurate object localization model for automated endangered wildlife detection. Ecol Inform 2022. [DOI: 10.1016/j.ecoinf.2022.101919]
16
Campbell EE, Bergelson E. Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia 2022; 174:108320. [PMID: 35842021] [DOI: 10.1016/j.neuropsychologia.2022.108320]
Abstract
The present article provides a narrative review on how language communicates sensory information and how knowledge of sight and sound develops in individuals born deaf or blind. Studying knowledge of the perceptually inaccessible sensory domain for these populations offers a lens into how humans learn about that which they cannot perceive. We first review the linguistic strategies within language that communicate sensory information. Highlighting the power of language to shape knowledge, we next review the detailed knowledge of sensory information by individuals with congenital sensory impairments, limitations therein, and neural representations of imperceptible phenomena. We suggest that the acquisition of sensory knowledge is supported by language, experience with multiple perceptual domains, and cognitive and social abilities which mature over the first years of life, both in individuals with and without sensory impairment. We conclude by proposing a developmental trajectory for acquiring sensory knowledge in the absence of sensory perception.
Affiliation(s)
- Erin E Campbell
- Duke University, Department of Psychology and Neuroscience, USA
- Elika Bergelson
- Duke University, Department of Psychology and Neuroscience, USA
17
Rissman L, van Putten S, Majid A. Evidence for a Shared Instrument Prototype from English, Dutch, and German. Cogn Sci 2022; 46:e13140. [PMID: 35523145] [PMCID: PMC9285710] [DOI: 10.1111/cogs.13140]
Abstract
At conceptual and linguistic levels of cognition, events are said to be represented in terms of abstract categories, for example, the sentence Jackie cut the bagel with a knife encodes the categories Agent (i.e., Jackie) and Patient (i.e., the bagel). In this paper, we ask whether entities such as the knife are also represented in terms of such a category (often labeled “Instrument”) and, if so, whether this category has a prototype structure. We hypothesized the Proto‐instrument is a tool: a physical object manipulated by an intentional agent to effect a change in another individual or object. To test this, we asked speakers of English, Dutch, and German to complete an event description task and a sentence acceptability judgment task in which events were viewed with more or less prototypical instruments. We found broad similarities in how English, Dutch, and German partition the semantic space of instrumental events, suggesting there is a shared concept of the Instrument category. However, there was no evidence to support the specific hypothesis that tools are the core of the Instrument category—instead, our results suggest the most prototypical Instrument is the direct extension of an intentional agent. This paper supports theoretical frameworks where thematic roles are analyzed in terms of prototypes and suggests new avenues of research on how instrumental category structure differs across linguistic and conceptual domains.
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin-Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
18
Grand G, Blank IA, Pereira F, Fedorenko E. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nat Hum Behav 2022; 6:975-987. [PMID: 35422527] [DOI: 10.1038/s41562-022-01316-8]
Abstract
How is knowledge about word meaning represented in the mental lexicon? Current computational models infer word meanings from lexical co-occurrence patterns. They learn to represent words as vectors in a multidimensional space, wherein words that are used in more similar linguistic contexts-that is, are more semantically related-are located closer together. However, whereas inter-word proximity captures only overall relatedness, human judgements are highly context dependent. For example, dolphins and alligators are similar in size but differ in dangerousness. Here, we use a domain-general method to extract context-dependent relationships from word embeddings: 'semantic projection' of word-vectors onto lines that represent features such as size (the line connecting the words 'small' and 'big') or danger ('safe' to 'dangerous'), analogous to 'mental scales'. This method recovers human judgements across various object categories and properties. Thus, the geometry of word embeddings explicitly represents a wealth of context-dependent world knowledge.
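The "semantic projection" operation the abstract describes is simple vector geometry: project each word vector onto the line between two pole words and read off a scalar position on the resulting "mental scale". A rough sketch, using made-up 4-dimensional toy vectors rather than real corpus-trained embeddings (the study itself uses trained word embeddings such as GloVe-style vectors):

```python
import numpy as np

# Toy 4-d "embeddings" (invented for illustration only; real studies
# use vectors trained on large text corpora).
vectors = {
    "small":   np.array([0.0, 0.1, 0.0, 0.2]),
    "big":     np.array([1.0, 0.9, 0.1, 0.1]),
    "mouse":   np.array([0.1, 0.2, 0.8, 0.3]),
    "dolphin": np.array([0.6, 0.5, 0.2, 0.7]),
    "whale":   np.array([0.9, 0.8, 0.3, 0.4]),
}

def semantic_projection(word, pole_a, pole_b, vecs):
    """Project `word`'s vector onto the line from `pole_a` to `pole_b`.

    Returns a scalar position on that 'mental scale': 0 at `pole_a`,
    1 at `pole_b`, intermediate words falling in between."""
    a, b = vecs[pole_a], vecs[pole_b]
    d = b - a                                  # direction of the feature line
    return float(np.dot(vecs[word] - a, d) / np.dot(d, d))

# Rank animals on the size scale defined by the 'small' -> 'big' line.
sizes = {w: semantic_projection(w, "small", "big", vectors)
         for w in ("mouse", "dolphin", "whale")}
ranked = sorted(sizes, key=sizes.get)          # smallest to largest
```

The same machinery applies to any feature line (e.g. 'safe' to 'dangerous' for danger); only the pole words change, which is what makes the method domain-general.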
19
Arcos K, Harhen N, Loiotile R, Bedny M. Superior verbal but not nonverbal memory in congenital blindness. Exp Brain Res 2022; 240:897-908. [PMID: 35076724] [PMCID: PMC9204649] [DOI: 10.1007/s00221-021-06304-4]
Abstract
Previous studies suggest that people who are congenitally blind outperform sighted people on some memory tasks. Whether blindness-associated memory advantages are specific to verbal materials or are also observed with nonverbal sounds has not been determined. Congenitally blind individuals (n = 20) and age and education matched blindfolded sighted controls (n = 22) performed a series of auditory memory tasks. These included: verbal forward and backward letter spans, a complex letter span with intervening equations, as well as two matched recognition tasks: one with verbal stimuli (i.e., letters) and one with nonverbal complex meaningless sounds. Replicating previously observed findings, blind participants outperformed sighted people on forward and backward letter span tasks. Blind participants also recalled more letters on the complex letter span task despite the interference of intervening equations. Critically, the same blind participants showed larger advantages on the verbal as compared to the nonverbal recognition task. These results suggest that blindness selectively enhances memory for verbal material. Possible explanations for blindness-related verbal memory advantages include blindness-induced memory practice and 'visual' cortex recruitment for verbal processing.
Affiliation(s)
- Karen Arcos
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Nora Harhen
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Rita Loiotile
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Marina Bedny
- Department of Psychological & Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
20
Dietz CD, Malaspina M, Albonico A, Barton JJS. The persistence of remote visual semantic memory following ocular blindness. Neuropsychologia 2021; 165:108110. [PMID: 34890692] [DOI: 10.1016/j.neuropsychologia.2021.108110]
Abstract
Subjects with complete ocular blindness in both eyes provide a unique opportunity to study the long-term durability of visual semantic memory. In this cross-sectional study we recruited eleven subjects who had acquired blindness for between 1 and 36 years. For comparison, we studied four subjects with congenital blindness and seventeen age- and sex-matched sighted control subjects. We administered ten forced-choice questionnaires that probed one auditory category and four visual categories, namely object shape and size; object hue and lightness; word and letter shape; and the shape and features of famous faces. Subjects with congenital blindness performed worse than controls on all visual categories, but nevertheless performed better than chance on object structure or colour, suggesting that the answers to some questions about visual properties can be derived from haptic or non-visual semantic information. Subjects with acquired blindness performed similarly to controls on all categories except for facial memory, particularly for facial features. We conclude that there is a substantial "permastore" of visual semantic memory but that facial memories are less durable, perhaps indicating that they are either less over-learned or more dependent on visual representations than other forms of visual object information.
Affiliation(s)
- Connor D Dietz
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Manuela Malaspina
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Andrea Albonico
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Jason J S Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology) and Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
21
Canessa E, Chaigneau SE, Moreno S. Language Processing Differences Between Blind and Sighted Individuals and the Abstract Versus Concrete Concept Difference. Cogn Sci 2021; 45:e13044. [PMID: 34606124] [DOI: 10.1111/cogs.13044]
Abstract
In the property listing task (PLT), participants are asked to list properties for a concept (e.g., for the concept dog, "barks," and "is a pet" may be produced). In conceptual property norming (CPNs) studies, participants are asked to list properties for large sets of concepts. Here, we use a mathematical model of the property listing process to explore two longstanding issues: characterizing the difference between concrete and abstract concepts, and characterizing semantic knowledge in the blind versus sighted population. When we apply our mathematical model to a large CPN reporting properties listed by sighted and blind participants, the model uncovers significant differences between concrete and abstract concepts. Though we also find that blind individuals show many of the same processing differences between abstract and concrete concepts found in sighted individuals, our model shows that those differences are noticeably less pronounced than in sighted individuals. We discuss our results vis-a-vis theories attempting to characterize abstract concepts.
Affiliation(s)
- Enrique Canessa
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez; Faculty of Engineering and Science, Universidad Adolfo Ibáñez
- Sergio E Chaigneau
- Center for Cognition Research (CINCO), School of Psychology, Universidad Adolfo Ibáñez; Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez
22
Dual coding of knowledge in the human brain. Trends Cogn Sci 2021; 25:883-895. [PMID: 34509366] [DOI: 10.1016/j.tics.2021.07.006]
Abstract
How does the human brain code knowledge about the world? While disciplines such as artificial intelligence represent world knowledge based on human language, neurocognitive models of knowledge have been dominated by sensory embodiment, in which knowledge is derived from sensory/motor experience and supported by high-level sensory/motor and association cortices. The neural correlates of an alternative disembodied symbolic system had previously been difficult to establish. A recent line of studies exploring knowledge about visual properties, such as color, in visually deprived individuals converges to provide positive, compelling evidence for non-sensory, language-derived knowledge representation in the dorsal anterior temporal lobe and the extended language network, in addition to the sensory-derived representations, leading to a sketch of a dual-coding neural framework for knowledge.
23
Abstract
Empiricist philosophers such as Locke famously argued that people born blind might learn arbitrary color facts (e.g., marigolds are yellow) but would lack color understanding. Contrary to this intuition, we find that blind and sighted adults share causal understanding of color, despite not always agreeing about arbitrary color facts. Relative to sighted people, blind individuals are less likely to generate "yellow" for banana and "red" for stop sign but make similar generative inferences about real and novel objects' colors, and provide similar causal explanations. For example, people infer that two natural kinds (e.g., bananas) and two artifacts with functional colors (e.g., stop signs) are more likely to have the same color than two artifacts with nonfunctional colors (e.g., cars). People develop intuitive and inferentially rich "theories" of color regardless of visual experience. Linguistic communication is more effective at aligning intuitive theories than knowledge of arbitrary facts.
24
Unger L, Fisher AV. The Emergence of Richly Organized Semantic Knowledge from Simple Statistics: A Synthetic Review. Dev Rev 2021; 60:100949. [PMID: 33840880] [PMCID: PMC8026144] [DOI: 10.1016/j.dr.2021.100949]
Abstract
As adults, we draw upon our ample knowledge about the world to support such vital cognitive feats as using language, reasoning, retrieving knowledge relevant to our current goals, planning for the future, adapting to unexpected events, and navigating through the environment. Our knowledge readily supports these feats because it is not merely a collection of stored facts, but rather functions as an organized, semantic network of concepts connected by meaningful relations. How do the relations that fundamentally organize semantic concepts emerge with development? Here, we cast a spotlight on a potentially powerful but often overlooked driver of semantic organization: Rich statistical regularities that are ubiquitous in both language and visual input. In this synthetic review, we show that a driving role for statistical regularities is convergently supported by evidence from diverse fields, including computational modeling, statistical learning, and semantic development. Finally, we identify a number of key avenues of future research into how statistical regularities may drive the development of semantic organization.
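The review's central claim — that simple co-occurrence regularities can seed semantic organization — can be illustrated in a few lines. The sketch below uses a toy corpus invented for illustration (not material from the review): it builds a within-sentence word co-occurrence matrix and shows that words used in similar contexts end up with more similar co-occurrence profiles, a minimal form of emergent semantic structure.

```python
import numpy as np
from itertools import combinations

# Toy corpus (hypothetical sentences, for illustration only).
sentences = [
    "dog barks at cat", "cat chases dog", "dog and cat play",
    "car drives on road", "truck drives on road", "car and truck park",
]

# Build a symmetric word-by-word co-occurrence matrix within sentences.
vocab = sorted({w for s in sentences for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for a, b in combinations(s.split(), 2):
        C[idx[a], idx[b]] += 1
        C[idx[b], idx[a]] += 1

def cosine(u, v):
    """Cosine similarity between two co-occurrence profiles."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words from similar contexts have more similar rows in C.
sim_animal = cosine(C[idx["dog"]], C[idx["cat"]])    # same semantic cluster
sim_cross  = cosine(C[idx["dog"]], C[idx["truck"]])  # different clusters
```

With this toy input, `sim_animal` exceeds `sim_cross`: the animal words cluster together purely from distributional statistics, without any perceptual input.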
Affiliation(s)
- Layla Unger
- Department of Psychology, Ohio State University, Columbus, OH
- Anna V Fisher
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA
25
Schwering SC, Ghaffari-Nikou NM, Zhao F, Niedenthal PM, MacDonald MC. Exploring the Relationship Between Fiction Reading and Emotion Recognition. Affect Sci 2021; 2:178-186. [PMID: 36043173] [PMCID: PMC9382981] [DOI: 10.1007/s42761-021-00034-0]
Abstract
Fiction reading experience affects emotion recognition abilities, yet the causal link remains underspecified. Current theory suggests fiction reading promotes the simulation of fictional minds, which supports emotion recognition skills. We examine the extent to which contextualized statistical experience with emotion category labels in language is associated with emotion recognition. Using corpus analyses, we demonstrate fiction texts reliably use emotion category labels in an emotive sense (e.g., cry of relief), whereas other genres often use alternative senses (e.g., hurricane relief fund). Furthermore, fiction texts were shown to be a particularly reliable source of information about complex emotions. The extent to which these patterns affect human emotion concepts was analyzed in two behavioral experiments. In experiment 1 (n = 134), experience with fiction text predicted recognition of emotions employed in an emotive sense in fiction texts. In experiment 2 (n = 387), fiction reading experience predicted emotion recognition abilities, overall. These results suggest that long-term language experience, and fiction reading, in particular, supports emotion concepts through exposure to these emotions in context.
Affiliation(s)
- Fangyun Zhao
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
26
Johnson BP, Dayan E, Censor N, Cohen LG. Crowdsourcing in Cognitive and Systems Neuroscience. Neuroscientist 2021; 28:425-437. [PMID: 34032146] [DOI: 10.1177/10738584211017018]
Abstract
Behavioral research in cognitive and human systems neuroscience has been largely carried out in-person in laboratory settings. Underpowering and lack of reproducibility due to small sample sizes have weakened conclusions of these investigations. In other disciplines, such as neuroeconomics and social sciences, crowdsourcing has been extensively utilized as a data collection tool, and a means to increase sample sizes. Recent methodological advances allow scientists, for the first time, to test online more complex cognitive, perceptual, and motor tasks. Here we review the nascent literature on the use of online crowdsourcing in cognitive and human systems neuroscience. These investigations take advantage of the ability to reliably track the activity of a participant's computer keyboard, mouse, and eye gaze in the context of large-scale studies online that involve diverse research participant pools. Crowdsourcing allows for testing the generalizability of behavioral hypotheses in real-life environments that are less accessible to lab-designed investigations. Crowdsourcing is further useful when in-laboratory studies are limited, for example during the current COVID-19 pandemic. We also discuss current limitations of crowdsourcing research, and suggest pathways to address them. We conclude that online crowdsourcing is likely to widen the scope and strengthen conclusions of cognitive and human systems neuroscience investigations.
Affiliation(s)
- Brian P Johnson
- Human Cortical Physiology and Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke, Bethesda, MD, USA
- Eran Dayan
- Department of Radiology and Biomedical Research Imaging Center, School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Nitzan Censor
- School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Leonardo G Cohen
- Human Cortical Physiology and Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke, Bethesda, MD, USA
27
Honari-Jahromi M, Chouinard B, Blanco-Elorrieta E, Pylkkänen L, Fyshe A. Neural representation of words within phrases: Temporal evolution of color-adjectives and object-nouns during simple composition. PLoS One 2021; 16:e0242754. [PMID: 33661954] [PMCID: PMC7932185] [DOI: 10.1371/journal.pone.0242754]
Abstract
In language, stored semantic representations of lexical items combine into an infinitude of complex expressions. While the neuroscience of composition has begun to mature, we do not yet understand how the stored representations evolve and morph during composition. New decoding techniques allow us to crack open this very hard question: we can train a model to recognize a representation in one context or time-point and assess its accuracy in another. We combined the decoding approach with magnetoencephalography recorded during a picture naming task to investigate the temporal evolution of noun and adjective representations during speech planning. We tracked semantic representations as they combined into simple two-word phrases, using single words and two-word lists as non-combinatory controls. We found that nouns were generally more decodable than adjectives, suggesting that noun representations were stronger and/or more consistent across trials than those of adjectives. When training and testing across contexts and times, the representations of isolated nouns were recoverable when those nouns were embedded in phrases, but not so if they were embedded in lists. Adjective representations did not show a similar consistency across isolated and phrasal contexts. Noun representations in phrases also sustained over time in a way that was not observed for any other pairing of word class and context. These findings offer a new window into the temporal evolution and context sensitivity of word representations during composition, revealing a clear asymmetry between adjectives and nouns. The impact of phrasal contexts on the decodability of nouns may be due to the nouns’ status as head of phrase—an intriguing hypothesis for future research.
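The train-in-one-context, test-in-another logic of this decoding approach can be sketched with simulated data. The snippet below is an illustrative sketch only: it assumes made-up 10-dimensional "sensor" patterns and a minimal nearest-centroid decoder, not the authors' actual MEG pipeline or classifiers. When a word's pattern stays consistent across contexts (as the study reports for nouns in phrases), a decoder trained in one context transfers to the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(word_means, n_trials=40, noise=0.3):
    """Simulate noisy trial-by-trial patterns for each word in one context."""
    X, y = [], []
    for label, mean in word_means.items():
        X.append(mean + noise * rng.standard_normal((n_trials, mean.size)))
        y += [label] * n_trials
    return np.vstack(X), np.array(y)

# Hypothetical per-word "neural" patterns shared across the isolated and
# phrasal contexts, mimicking a representation that survives composition.
means = {"cup": rng.standard_normal(10), "bell": rng.standard_normal(10)}
X_iso, y_iso = simulate(means)   # isolated-word context (training data)
X_phr, y_phr = simulate(means)   # phrasal context (held-out test data)

# Train a nearest-centroid decoder in one context...
centroids = {lab: X_iso[y_iso == lab].mean(axis=0) for lab in means}
# ...and test it in the other context (cross-context generalization).
pred = [min(centroids, key=lambda lab: np.linalg.norm(x - centroids[lab]))
        for x in X_phr]
accuracy = np.mean(np.array(pred) == y_phr)
```

If the phrasal patterns were generated from different means (a representation that morphs under composition, as the adjectives appear to), cross-context accuracy would fall toward chance, which is exactly the contrast the decoding analysis exploits.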
Affiliation(s)
- Brea Chouinard
- Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Esti Blanco-Elorrieta
- NYUAD Institute, New York University, Abu Dhabi, UAE
- Department of Psychology, New York University, New York, NY, USA
- Liina Pylkkänen
- NYUAD Institute, New York University, Abu Dhabi, UAE
- Department of Psychology, New York University, New York, NY, USA
- Department of Linguistics, New York University, New York, NY, USA
- Alona Fyshe
- Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Department of Psychology, University of Alberta, Edmonton, AB, Canada
- Alberta Machine Intelligence Institute, Edmonton, AB, Canada
28
Reilly J, Finley AM, Kelly A, Zuckerman B, Flurie M. Olfactory language and semantic processing in anosmia: a neuropsychological case control study. Neurocase 2021; 27:86-96. [PMID: 33400623] [PMCID: PMC8026498] [DOI: 10.1080/13554794.2020.1871491]
Abstract
A longstanding debate within philosophy and neuroscience involves the extent to which sensory information is a necessary condition for conceptual knowledge. Much of our understanding of this relationship has been informed by examining the impact of congenital blindness and deafness on language and cognitive development. Relatively little is known about the "lesser" senses of smell and taste. Here we report a neuropsychological case-control study contrasting a young adult male (P01) diagnosed with anosmia (i.e. no olfaction) during early childhood relative to an age- and sex-matched control group. A structural MRI of P01's brain revealed profoundly atrophic/aplastic olfactory bulbs, and standardized smell testing confirmed his prior pediatric diagnosis of anosmia. Participants completed three language experiments examining comprehension, production, and subjective experiential ratings of odor salient words (e.g. sewer) and scenarios (e.g. fish market). P01's ratings of odor salience of single words were lower than all control participants, whereas his ratings on five other perceptual and affective dimensions were similar to controls. P01 produced unusual associations when cued to generate words that smelled similar to odor-neutral target words (e.g. ink → plant). In narrative picture description for odor salient scenes (e.g. bakery), P01 was indistinguishable from controls. These results suggest that odor deprivation does not overtly impair functional language use. However, subtle lexical-semantic effects of anosmia may be revealed using sensitive linguistic measures.
Affiliation(s)
- Jamie Reilly
- Temple University, Eleanor M. Saffran Center for Cognitive Neuroscience, Philadelphia, PA, USA; Department of Communication Sciences and Disorders; Department of Psychology, Temple University, Philadelphia, PA, USA
- Ann Marie Finley
- Temple University, Eleanor M. Saffran Center for Cognitive Neuroscience, Philadelphia, PA, USA; Department of Communication Sciences and Disorders
- Alexandra Kelly
- Department of Psychology, Drexel University, Philadelphia, PA, USA
- Bonnie Zuckerman
- Temple University, Eleanor M. Saffran Center for Cognitive Neuroscience, Philadelphia, PA, USA; Department of Communication Sciences and Disorders
- Maurice Flurie
- Temple University, Eleanor M. Saffran Center for Cognitive Neuroscience, Philadelphia, PA, USA; Department of Communication Sciences and Disorders
29
Abstract
A central question in the cognitive sciences is what role embodiment plays in high-level cognitive functions, such as conceptual processing. Here, we propose that one reason why progress regarding this question has been slow is a lack of focus on what Platt (1964) called “strong inference”. Strong inference is possible when results from an experimental paradigm are not merely consistent with a hypothesis, but provide decisive evidence for one particular hypothesis compared to competing hypotheses. We discuss how causal paradigms, which test the functional relevance of sensory-motor processes for high-level cognitive functions, can move the field forward. In particular, we explore how congenital sensory-motor disorders, acquired sensory-motor deficits, and interference paradigms with healthy participants can be utilized as an opportunity to better understand the role of sensory experience in conceptual processing. Whereas all three approaches can bring about valuable insights, we highlight that the study of congenital and acquired sensorimotor disorders is particularly effective in the case of conceptual domains with a strong unimodal basis (e.g., colors), whereas interference paradigms with healthy participants have a broader application, avoid many of the practical and interpretational limitations of patient studies, and allow a systematic and step-wise progressive inference approach to causal mechanisms.
30
The many timescales of context in language processing. Psychology of Learning and Motivation 2021. [DOI: 10.1016/bs.plm.2021.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
31
Cheng Q, Silvano E, Bedny M. Sensitive periods in cortical specialization for language: insights from studies with Deaf and blind individuals. Curr Opin Behav Sci 2020; 36:169-176. [PMID: 33718533 PMCID: PMC7945734 DOI: 10.1016/j.cobeha.2020.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Studies with Deaf and blind individuals demonstrate that linguistic and sensory experiences during sensitive periods have potent effects on the neurocognitive basis of language. Native users of sign and spoken languages recruit similar fronto-temporal systems during language processing. By contrast, delays in sign language access impact proficiency and the neural basis of language. Analogously, early-onset but not late-onset blindness modifies the neural basis of language. People born blind recruit 'visual' areas during language processing, show reduced left-lateralization of language, and show enhanced performance on some language tasks. Sensitive period plasticity in and outside fronto-temporal language systems shapes the neural basis of language.
Affiliation(s)
- Qi Cheng
- University of California San Diego
- University of Washington
- Emily Silvano
- Federal University of Rio de Janeiro
- Johns Hopkins University
32
Words have a weight: language as a source of inner grounding and flexibility in abstract concepts. Psychological Research 2020; 86:2451-2467. [DOI: 10.1007/s00426-020-01438-6] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
33
Lupyan G, Abdel Rahman R, Boroditsky L, Clark A. Effects of Language on Visual Perception. Trends Cogn Sci 2020; 24:930-944. [PMID: 33012687 DOI: 10.1016/j.tics.2020.08.005] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 08/22/2020] [Accepted: 08/25/2020] [Indexed: 11/24/2022]
Abstract
Does language change what we perceive? Does speaking different languages cause us to perceive things differently? We review the behavioral and electrophysiological evidence for the influence of language on perception, with an emphasis on the visual modality. Effects of language on perception can be observed both in higher-level processes such as recognition and in lower-level processes such as discrimination and detection. A consistent finding is that language causes us to perceive in a more categorical way. Rather than being fringe or exotic, as they are sometimes portrayed, effects of language on perception arise naturally from the interactive and predictive nature of perception.
Affiliation(s)
- Gary Lupyan
- University of Wisconsin-Madison, Madison, WI, USA.
- Andy Clark
- University of Sussex, Brighton, UK; Macquarie University, Sydney, Australia
34
Wang X, Men W, Gao J, Caramazza A, Bi Y. Two Forms of Knowledge Representations in the Human Brain. Neuron 2020; 107:383-393.e5. [PMID: 32386524 DOI: 10.1016/j.neuron.2020.04.010] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2019] [Revised: 03/05/2020] [Accepted: 04/06/2020] [Indexed: 01/09/2023]
Abstract
Sensory experience shapes what and how knowledge is stored in the brain: our knowledge about the color of roses depends in part on the activity of color-responsive neurons based on experiences of seeing roses. We compared the brain basis of color knowledge in congenitally (or early) blind individuals, whose color knowledge can only be obtained through language descriptions and/or cognitive inference, to that of sighted individuals, whose color knowledge benefits from both sensory experience and language. We found that some regions support color knowledge only in the sighted, whereas a region in the left dorsal anterior temporal lobe supports object-color knowledge in both the blind and sighted groups, indicating the existence of a sensory-independent knowledge coding system in both groups. Thus, there are (at least) two forms of object knowledge representations in the human brain: sensory-derived and language- and cognition-derived knowledge, supported by different brain systems.
Affiliation(s)
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Weiwei Men
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China
- Jiahong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Alfonso Caramazza
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
35
Bottini R, Ferraro S, Nigri A, Cuccarini V, Bruzzone MG, Collignon O. Brain Regions Involved in Conceptual Retrieval in Sighted and Blind People. J Cogn Neurosci 2020; 32:1009-1025. [PMID: 32013684 DOI: 10.1162/jocn_a_01538] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. At first, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early-blind people instead showed a stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.
Affiliation(s)
- Anna Nigri
- Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
36
Reply to Ostarek et al.: Language, but not co-occurrence statistics, is useful for learning animal appearance. Proc Natl Acad Sci U S A 2019; 116:21974-21975. [PMID: 31615887 DOI: 10.1073/pnas.1912854116] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
37
Sighted people's language is not helpful for blind individuals' acquisition of typical animal colors. Proc Natl Acad Sci U S A 2019; 116:21972-21973. [PMID: 31615880 DOI: 10.1073/pnas.1912302116] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
38
Lewis M, Zettersten M, Lupyan G. Distributional semantics as a source of visual knowledge. Proc Natl Acad Sci U S A 2019; 116:19237-19238. [PMID: 31488726 PMCID: PMC6765286 DOI: 10.1073/pnas.1910148116] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Affiliation(s)
- Molly Lewis
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, 53706
- Martin Zettersten
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, 53706
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, 53706
39
Reply to Lewis et al.: Inference is key to learning appearance from language, for humans and distributional semantic models alike. Proc Natl Acad Sci U S A 2019; 116:19239-19240. [PMID: 31488727 DOI: 10.1073/pnas.1910410116] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open