1. Lehser C, Hillyard SA, Strauss DJ. Feeling senseless sensations: a crossmodal EEG study of mismatched tactile and visual experiences in virtual reality. J Neural Eng 2024; 21:056042. [PMID: 39374631] [DOI: 10.1088/1741-2552/ad83f5]
Abstract
Objective. To create highly immersive experiences in virtual reality (VR) it is important to not only include the visual sense but also to involve multimodal sensory input. To achieve optimal results, the temporal and spatial synchronization of these multimodal inputs is critical. It is therefore necessary to find methods to objectively evaluate the synchronization of VR experiences with a continuous tracking of the user. Approach. In this study a passive touch experience was incorporated in a visual-tactile VR setup using VR glasses and tactile sensations in mid-air. Inconsistencies of multimodal perception were intentionally integrated into a discrimination task. The participants' electroencephalogram (EEG) was recorded to obtain neural correlates of visual-tactile mismatch situations. Main results. The results showed significant differences in the event-related potentials (ERP) between match and mismatch situations. A biphasic ERP configuration consisting of a positivity at 120 ms and a later negativity at 370 ms was observed following a visual-tactile mismatch. Significance. This late negativity could be related to the N400 that is associated with semantic incongruency. These results provide a promising approach towards the objective evaluation of visual-tactile synchronization in virtual experiences.
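As a rough illustration of the kind of analysis this abstract describes (averaging EEG epochs per condition and reading out amplitudes in an early and a late latency window), the following minimal Python sketch may be helpful. The sampling rate, window boundaries, and simulated single-electrode data are illustrative assumptions, not the authors' actual pipeline or parameters.

```python
# Minimal sketch: condition-mean ERPs and window amplitudes for match vs. mismatch epochs.
# All shapes, the 250 Hz sampling rate, and the 100-140 ms / 350-390 ms windows are
# illustrative assumptions, not the parameters used in the cited study.
import numpy as np

fs = 250                              # assumed sampling rate (Hz)
t = np.arange(-0.2, 0.8, 1 / fs)      # epoch time axis: -200 to 800 ms

# Assumed epoched data: (n_trials, n_samples) for one electrode, baseline-corrected (microvolts).
rng = np.random.default_rng(0)
match_epochs = rng.normal(0.0, 2.0, (80, t.size))       # placeholder for real data
mismatch_epochs = rng.normal(0.0, 2.0, (80, t.size))

def window_mean(erp, t, t_min, t_max):
    """Mean amplitude of an ERP waveform within a latency window (seconds)."""
    mask = (t >= t_min) & (t <= t_max)
    return erp[mask].mean()

# Average across trials to obtain condition ERPs, then the mismatch-minus-match difference wave.
erp_match = match_epochs.mean(axis=0)
erp_mismatch = mismatch_epochs.mean(axis=0)
difference_wave = erp_mismatch - erp_match

early_pos = window_mean(difference_wave, t, 0.100, 0.140)   # early positivity window
late_neg = window_mean(difference_wave, t, 0.350, 0.390)    # later negativity window
print(f"early window: {early_pos:.2f} uV, late window: {late_neg:.2f} uV")
```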
Affiliation(s)
- Caroline Lehser: Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Homburg/Saar, Germany; Center for Digital Neurotechnologies Saar, Homburg/Saar & Saarbruecken, Germany
- Steven A Hillyard: Leibniz Institute of Neurobiology, Magdeburg, Germany; Department of Neurosciences, University of California, San Diego, CA, United States of America
- Daniel J Strauss: Systems Neuroscience and Neurotechnology Unit, Faculty of Medicine, Saarland University & School of Engineering, htw saar, Homburg/Saar, Germany; Center for Digital Neurotechnologies Saar, Homburg/Saar & Saarbruecken, Germany
2. Cai C, Zhang L, Guo Z, Fang X, Quan Z. Effects of color-flavor association on visual search process for reference pictures on beverage packaging: behavioral, electrophysiological, and causal mechanisms. Front Psychol 2024; 15:1433277. [PMID: 39315035] [PMCID: PMC11417035] [DOI: 10.3389/fpsyg.2024.1433277]
Abstract
The visual search for product packaging involves intricate cognitive processes that are prominently impacted by learned associations derived from extensive long-term experience. The present research employed EEG technology and manipulated the color display of reference pictures on beverage bottles to explore the underlying neurocognitive pathways. Specifically, we aimed to investigate the influence of color-flavor association strength on the visual processing of such stimuli as well as the underlying neural mechanisms. The behavioral results revealed that stimuli with strong association strength triggered the fastest responses and the highest accuracy, compared with stimuli with weak association strength and achromatic stimuli. The EEG findings further substantiated that chromatic stimuli evoked a more pronounced N2 component than achromatic ones, and that stimuli with strong association strength elicited larger P3 and smaller N400 amplitudes than those with weak association strength. Additionally, source localization using sLORETA showed significant activations in the inferior temporal gyrus. In conclusion, our research suggests that (1) color expectations guide the visual search process and trigger faster responses to congruent visual stimuli, (2) both the initial perceptual representation and the subsequent semantic representation play pivotal roles in effective visual search for the targets, and (3) color-flavor association strength potentially exerts an impact on visual processing by modulating memory accessibility.
Affiliation(s)
- Chen Cai: Department of Psychology, Normal College, Qingdao University, Qingdao, Shandong, China
- Le Zhang: Department of Psychology, Normal College, Qingdao University, Qingdao, Shandong, China; School of Psychology, Center for Studies of Psychological Application, South China Normal University, Guangzhou, Guangdong, China
- Zitao Guo: Department of Psychology, Normal College, Qingdao University, Qingdao, Shandong, China
- Xin Fang: Department of Psychology, Normal College, Qingdao University, Qingdao, Shandong, China
- Zihan Quan: Department of Psychology, Normal College, Qingdao University, Qingdao, Shandong, China
3. Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2023; 7:165. [PMID: 37274451] [PMCID: PMC10238820] [DOI: 10.12688/wellcomeopenres.17856.2]
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
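The protocol names paired samples t-tests in the N300/400 time windows as the first planned analysis. A minimal sketch of that group-level test is given below; the sample size matches the planned 34 participants, but the amplitude values, window, and electrode averaging are placeholder assumptions rather than the protocol's actual data or settings.

```python
# Sketch of a paired-samples t-test on mean amplitudes in an N300/N400 window, congruent
# vs. incongruent. The window, electrode averaging, and per-participant values are
# illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 34                      # planned sample size from the protocol

# Assumed inputs: one mean amplitude per participant and condition, already averaged over
# the chosen electrodes and the N300/N400 latency window (values in microvolts).
congruent = rng.normal(-1.0, 2.0, n_participants)
incongruent = rng.normal(-2.0, 2.0, n_participants)

t_stat, p_value = stats.ttest_rel(congruent, incongruent)
print(f"paired t({n_participants - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```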
Affiliation(s)
- Alexandra Krugliak: Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Alex Clarke: Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
4. Tachmatzidou O, Vatakis A. Attention and schema violations of real world scenes differentially modulate time perception. Sci Rep 2023; 13:10002. [PMID: 37340029] [DOI: 10.1038/s41598-023-37030-2]
Abstract
In the real world, object arrangement follows a number of rules. Some of the rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violations of semantic rules influence interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether both semantic and syntactic violations can affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments on whether time dilation would be obtained in the presence of any type of scene violation and on the role of attention in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, their effect on timing seems to be sensitive to attentional manipulations such as target contrast.
Affiliation(s)
- Ourania Tachmatzidou: Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences, 136 Syngrou Ave., 17671, Athens, Greece
- Argiro Vatakis: Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences, 136 Syngrou Ave., 17671, Athens, Greece
5. Bracci S, Mraz J, Zeman A, Leys G, Op de Beeck H. The representational hierarchy in human and artificial visual systems in the presence of object-scene regularities. PLoS Comput Biol 2023; 19:e1011086. [PMID: 37115763] [PMCID: PMC10171658] [DOI: 10.1371/journal.pcbi.1011086]
Abstract
Human vision is still largely unexplained. Computer vision has made impressive progress on this front, but it is still unclear to what extent artificial neural networks approximate human object vision at the behavioral and neural levels. Here, we investigated whether machine object vision mimics the representational hierarchy of human object vision with an experimental design that allows testing within-domain representations for animals and scenes, as well as across-domain representations reflecting their real-world contextual regularities such as animal-scene pairs that often co-occur in the visual environment. We found that DCNNs trained on object recognition acquire representations, in their late processing stages, that closely capture human conceptual judgements about the co-occurrence of animals and their typical scenes. Likewise, the DCNNs' representational hierarchy shows surprising similarities with the representational transformations emerging in domain-specific ventrotemporal areas up to domain-general frontoparietal areas. Despite these remarkable similarities, the underlying information processing differs. The ability of neural networks to learn a human-like high-level conceptual representation of object-scene co-occurrence depends upon the amount of object-scene co-occurrence present in the image set, thus highlighting the fundamental role of training history. Further, although mid/high-level DCNN layers represent the category division for animals and scenes as observed in VTC, their information content shows reduced domain-specific representational richness. To conclude, by testing within- and between-domain selectivity while manipulating contextual regularities, we reveal unknown similarities and differences in the information processing strategies employed by human and artificial visual systems.
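The comparison between DCNN representations and human conceptual judgements described above is typically carried out with representational similarity analysis. The sketch below shows only the bare mechanics (building two representational dissimilarity matrices and correlating their condensed forms); the feature matrices, stimulus count, and distance metric are illustrative assumptions, not the study's materials or analysis settings.

```python
# Minimal RSA-style sketch: correlate the representational dissimilarity matrix (RDM) of a
# DCNN layer with an RDM built from human judgements. The feature matrices below are random
# placeholders; the cited study's stimuli, network, and analysis details are not reproduced.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 40

dcnn_features = rng.normal(size=(n_stimuli, 512))    # assumed layer activations per stimulus
human_features = rng.normal(size=(n_stimuli, 10))    # assumed behavioural judgement space

# pdist returns the condensed upper triangle, i.e. one dissimilarity per stimulus pair.
dcnn_rdm = pdist(dcnn_features, metric="correlation")
human_rdm = pdist(human_features, metric="correlation")

rho, p = spearmanr(dcnn_rdm, human_rdm)
print(f"model-human RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```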
Affiliation(s)
- Stefania Bracci: Center for Mind/Brain Sciences-CIMeC, University of Trento, Rovereto, Italy; KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Jakob Mraz: KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Astrid Zeman: KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Gaëlle Leys: KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
- Hans Op de Beeck: KU Leuven, Leuven Brain Institute, Brain & Cognition Research Unit, Leuven, Belgium
6. Sargent M, LePage A, Kenett YN, Matheson HE. The Effects of Environmental Scene and Body Posture on Embodied Strategies in Creative Thinking. Creativity Research Journal 2023. [DOI: 10.1080/10400419.2022.2160563]
Affiliation(s)
- Matthew Sargent: Psychology Department, University of Northern British Columbia
- Alex LePage: Psychology Department, University of Northern British Columbia
- Yoed N. Kenett: The Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology
7. Wang Y, Yang K, Fu P, Zheng X, Yang H, Zhou Q, Ma W, Wang P. The Ability to Use Contextual Information in Object and Scene Recognition in Patients with Mild Cognitive Impairment. J Alzheimers Dis 2023; 95:945-963. [PMID: 37638431] [DOI: 10.3233/jad-221132]
Abstract
BACKGROUND The ability to understand and make use of object-scene relationships is critical for object and scene recognition. OBJECTIVE The current study assessed whether patients with mild cognitive impairment (MCI), possibly in the preclinical phase of Alzheimer's disease, exhibited impairment in processing contextual information in scene and object recognition. METHODS In Experiment 1, subjects viewed images of foreground objects in either semantically consistent or inconsistent scenes under no time pressure, and they verbally reported the names of the foreground objects and backgrounds. Experiment 2 replicated Experiment 1, except that subjects were required to name the scene first. Experiment 3 examined object and scene recognition accuracy baselines, recognition difficulty, familiarity with objects/scenes, and object-scene consistency judgements. RESULTS There were contextual consistency effects on scene recognition for MCI and healthy subjects, regardless of response sequence. Scenes were recognized more accurately under the consistent condition than the inconsistent condition. Additionally, MCI patients were more susceptible to incongruent contextual information, possibly due to inhibitory deficits or over-dependence on semantic knowledge. However, no significant differences between MCI and healthy subjects were observed in consistency judgement, recognition accuracy, recognition difficulty, and familiarity rating, suggesting no significant impairment in object and scene knowledge among MCI subjects. CONCLUSIONS The study indicates that MCI patients retain relatively intact contextual processing ability but may exhibit inhibitory deficits or over-reliance on semantic knowledge.
Affiliation(s)
- Yaqi Wang: School of Foreign Languages and Literature, Shandong University, Jinan, China; Center for Language Science, Shandong University, Jinan, China
- Kai Yang: School of Foreign Languages and Literature, Shandong University, Jinan, China
- Pengrui Fu: Department of Neurology, The Second Hospital of Shandong University, Jinan, China
- Xiaolei Zheng: Department of Neurology, The Second Hospital of Shandong University, Jinan, China
- Hui Yang: Department of Neurology, The Second Hospital of Shandong University, Jinan, China
- Qingbo Zhou: Department of Neurology, The Second Hospital of Shandong University, Jinan, China
- Wen Ma: School of Foreign Languages and Literature, Shandong University, Jinan, China; Center for Language Science, Shandong University, Jinan, China
- Ping Wang: Center for Language Science, Shandong University, Jinan, China; Department of Neurology, The Second Hospital of Shandong University, Jinan, China
8. Zhang R, Hu Y, Zhang J, Wu Y, Huang L. Event-related potential response to drivers' facial expressions in an online car-hailing scene. Psych J 2022; 12:195-201. [PMID: 36336336] [DOI: 10.1002/pchj.613]
Abstract
Recognizing facial expressions is crucial for adaptive social interaction. Prior empirical research on facial expression processing has primarily focused on isolated faces; however, in everyday life facial expressions appear embedded in surrounding scenes. In this study, we attempted to demonstrate how the online car-hailing scene affects the processing of facial expressions. We examined the processing of drivers' facial expressions in scenes by recording event-related potentials, using stimuli in which neutral or happy faces were embedded in online car-hailing orders (with type of vehicle, driver rating, driver surname, and level of reputation controlled). A total of 35 female volunteers participated in this experiment and were asked to judge which of the facial expressions appearing in the online car-hailing scenes were more trustworthy. The results revealed an interaction between facial expression scenes, brain areas, and electrode sites in the late positive potential, which indicated that happy faces elicited larger amplitudes than did neutral ones in the parietal areas and that scenes with happy facial expressions had shorter latencies than did those with neutral ones. As expected, the late positive potential evoked by happy facial expressions in a scene was larger than that evoked by neutral ones, which reflects motivated attention and motivational response processes. This study highlights the importance of scenes as context in the study of facial expression processing.
Affiliation(s)
- Ran-Ran Zhang: Department of Psychology, School of Medical Humanitarians, Guizhou Medical University, Guiyang, China
- Yu-Wei Hu: Department of Psychology, School of Medical Humanitarians, Guizhou Medical University, Guiyang, China
- Jia-Rui Zhang: Department of Psychology, School of Medical Humanitarians, Guizhou Medical University, Guiyang, China
- Yi-Xun Wu: Department of Psychology, School of Medical Humanitarians, Guizhou Medical University, Guiyang, China
- Lie-Yu Huang: Department of Psychology, School of Medical Humanitarians, Guizhou Medical University, Guiyang, China
9. Altered functional connectivity: A possible reason for reduced performance during visual cognition involving scene incongruence and negative affect. IBRO Neurosci Rep 2022; 13:533-542. [DOI: 10.1016/j.ibneur.2022.11.006]
10. Šoškić A, Jovanović V, Styles SJ, Kappenman ES, Ković V. How to do Better N400 Studies: Reproducibility, Consistency and Adherence to Research Standards in the Existing Literature. Neuropsychol Rev 2022; 32:577-600. [PMID: 34374003] [PMCID: PMC9381463] [DOI: 10.1007/s11065-021-09513-4]
Abstract
Given the complexity of the ERP recording and processing pipeline, the resulting variability of methodological options, and the potential for these decisions to influence study outcomes, it is important to understand how ERP studies are conducted in practice and to what extent researchers are transparent about their data collection and analysis procedures. The review gives an overview of methodology reporting in a sample of 132 ERP papers published between January 1980 and June 2018 in journals included in two large databases: Web of Science and PubMed. Because ERP methodology partly depends on the study design, we focused on a well-established component (the N400) in the most commonly assessed population (healthy neurotypical adults), in one of its most common modalities (visual images). The review provides insights into 73 properties of study design, data pre-processing, measurement, statistics, visualization of results, and references to supplemental information across studies within the same subfield. For each of the examined methodological decisions, the degree of consistency, clarity of reporting, and deviations from the guidelines for best practice were examined. Overall, the results show that each study had a unique approach to ERP data recording, processing and analysis, and that at least some details were missing from all papers. In the review, we highlight the most common reporting omissions and deviations from established recommendations, as well as areas in which there was the least consistency. Additionally, we provide guidance for a priori selection of the N400 measurement window and electrode locations based on the results of previous studies.
Affiliation(s)
- Anđela Šoškić: Teacher Education Faculty, University of Belgrade, Belgrade, Serbia; Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
- Vojislav Jovanović: Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
- Suzy J Styles: Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore; Centre for Research and Development On Learning (CRADLE), Nanyang Technological University, Singapore, Singapore; Singapore Institute for Clinical Sciences (SICS), A*Star Research Entities, Singapore, Singapore
- Emily S Kappenman: Department of Psychology, San Diego State University, San Diego, CA, USA
- Vanja Ković: Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
11. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922] [DOI: 10.1177/09567976221091838]
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
12. Chen L, Cichy RM, Kaiser D. Semantic Scene-Object Consistency Modulates N300/400 EEG Components, but Does Not Automatically Facilitate Object Representations. Cereb Cortex 2022; 32:3553-3567. [PMID: 34891169] [DOI: 10.1093/cercor/bhab433]
Abstract
During natural vision, objects rarely appear in isolation, but often within a semantically related scene context. Previous studies reported that semantic consistency between objects and scenes facilitates object perception and that scene-object consistency is reflected in changes in the N300 and N400 components in EEG recordings. Here, we investigate whether these N300/400 differences are indicative of changes in the cortical representation of objects. In two experiments, we recorded EEG signals, while participants viewed semantically consistent or inconsistent objects within a scene; in Experiment 1, these objects were task-irrelevant, while in Experiment 2, they were directly relevant for behavior. In both experiments, we found reliable and comparable N300/400 differences between consistent and inconsistent scene-object combinations. To probe the quality of object representations, we performed multivariate classification analyses, in which we decoded the category of the objects contained in the scene. In Experiment 1, in which the objects were not task-relevant, object category could be decoded from ~100 ms after the object presentation, but no difference in decoding performance was found between consistent and inconsistent objects. In contrast, when the objects were task-relevant in Experiment 2, we found enhanced decoding of semantically consistent, compared with semantically inconsistent, objects. These results show that differences in N300/400 components related to scene-object consistency do not index changes in cortical object representations but rather reflect a generic marker of semantic violations. Furthermore, our findings suggest that facilitatory effects between objects and scenes are task-dependent rather than automatic.
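The multivariate classification analysis described in this abstract (decoding object category from the EEG signal over time) can be sketched as a time-resolved, cross-validated classifier. Everything below (data shapes, the LDA classifier, 5-fold cross-validation) is an illustrative assumption rather than the authors' exact setup.

```python
# Sketch of time-resolved decoding: at each time point, classify object category from the
# multi-electrode EEG pattern and cross-validate. Shapes and classifier choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 64, 150   # assumed epoched data dimensions
epochs = rng.normal(size=(n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)          # two object categories

accuracy = np.empty(n_times)
for ti in range(n_times):
    X = epochs[:, :, ti]                       # channel pattern at this time point
    scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5)
    accuracy[ti] = scores.mean()

print(f"peak decoding accuracy: {accuracy.max():.2f}")
```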
Affiliation(s)
- Lixiang Chen: Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Radoslaw Martin Cichy: Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Daniel Kaiser: Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg 35032, Germany
13. Pitt KM, Mansouri A, Wang Y, Zosky J. Toward P300-brain-computer interface access to contextual scene displays for AAC: An initial exploration of context and asymmetry processing in healthy adults. Neuropsychologia 2022; 173:108289. [PMID: 35690117] [DOI: 10.1016/j.neuropsychologia.2022.108289]
Abstract
Brain-computer interfaces for augmentative and alternative communication (BCI-AAC) may help overcome physical barriers to AAC access. Traditionally, visually based P300-BCI-AAC displays utilize a symmetrical grid layout. Contextual scene displays are composed of context-rich images (e.g., photographs) and may support AAC success. However, contextual scene displays contrast starkly with the standard P300-grid approach. Understanding the neurological processes by which BCI-AAC devices function is crucial to human-centered computing for BCI-AAC. Therefore, the aim of this multidisciplinary investigation is to provide an initial exploration of contextual scene use for BCI-AAC. METHODS Participants completed three experimental conditions to evaluate the effects of item arrangement asymmetry and context on P300-based BCI-AAC signals and offline BCI-AAC accuracy: 1) the full contextual scene condition, 2) the asymmetrical item arrangement without context condition, and 3) the grid condition. Following each condition, participants completed task-evaluation ratings (e.g., engagement). Offline BCI-AAC accuracy for each condition was evaluated using cross-validation. RESULTS Display asymmetry significantly decreased P300 latency in the centro-parietal cluster. P300 amplitudes in the frontal cluster were decreased, though nonsignificantly. Display context significantly increased N170 amplitudes in the occipital cluster, and N400 amplitudes in the centro-parietal and occipital clusters. Scenes were rated as more visually appealing and engaging, and offline BCI-AAC performance for the scene condition was not statistically different from the grid standard. CONCLUSION Findings support the feasibility of incorporating scene-based displays in P300-BCI-AAC development to help provide communication for individuals with minimal or emerging language and literacy skills.
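One of the reported effects above is a condition difference in P300 latency at a centro-parietal cluster. A minimal sketch of extracting P300 peak amplitude and latency from a condition-average waveform is shown below; the search window, sampling rate, and data are placeholder assumptions, not the study's parameters.

```python
# Sketch: P300 peak amplitude and latency from an averaged centro-parietal waveform.
# The 250-600 ms search window, the 256 Hz sampling rate, and the data are assumptions.
import numpy as np

fs = 256
t = np.arange(-0.2, 0.8, 1 / fs)
rng = np.random.default_rng(4)
erp_cluster = rng.normal(0, 1, t.size)          # placeholder: average over centro-parietal channels

window = (t >= 0.250) & (t <= 0.600)            # assumed P300 search window
idx = np.flatnonzero(window)[np.argmax(erp_cluster[window])]
print(f"P300 peak: {erp_cluster[idx]:.2f} uV at {t[idx] * 1000:.0f} ms")
```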
Affiliation(s)
- Kevin M Pitt: Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
- Amirsalar Mansouri: Department of Electrical and Computer Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA
- Yingying Wang: Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
- Joshua Zosky: Department of Psychology, University of Nebraska-Lincoln, Lincoln, NE, USA
14. Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2022. [DOI: 10.12688/wellcomeopenres.17856.1]
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
15. Xie M, Liu Z, Guo C. Effect of the congruity of emotional contexts at encoding on source memory: Evidence from ERPs. Int J Psychophysiol 2022; 173:45-57. [PMID: 34999142] [DOI: 10.1016/j.ijpsycho.2022.01.001]
Abstract
Emotion's influence on source memory has proven elusive, and few studies have investigated the effect of congruent emotional contexts on source memory. Here, we investigated these issues using event-related potentials (ERPs) to assess emotion-induced neural correlates. During encoding, congruent word-picture pairs (the word 'shoes' with a picture of shoes) and incongruent word-picture pairs (the word 'pepper' with a picture of shoes) were presented together with a prompt (Common? or Natural?). At retrieval, participants indicated which prompt had been presented with the word during encoding. Behavioral results revealed that source memory accuracy was enhanced in the neutral contexts compared to the negative contexts, and enhanced in the incongruent condition relative to the congruent condition, suggesting that emotional contexts impaired source memory performance and that incongruent information enhanced source memory. ERP results showed that an early P2 old/new effect (150-250 ms) and an FN400 old/new effect (300-450 ms) were observed for words with correct source that had been encoded in the congruent emotional contexts, and that a larger parietal old/new effect, between 500 and 700 ms, was observed for words with correct source that had been encoded in the incongruent condition than in the congruent condition, irrespective of the nature of the context. The ERP results indicate that retrieval of source details for associated emotionally congruent information supports the idea that emotional events attract more attentional resources and reflects the contribution of a familiarity-based process. Meanwhile, retrieval of source details for associated incongruent information reflects a stronger contribution of a recollection-based process.
Affiliation(s)
- Miaomiao Xie: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, PR China
- Zejun Liu: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, PR China
- Chunyan Guo: Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, PR China
16. Lauer T, Schmidt F, Võ MLH. The role of contextual materials in object recognition. Sci Rep 2021; 11:21988. [PMID: 34753999] [PMCID: PMC8578445] [DOI: 10.1038/s41598-021-01406-z]
Abstract
While scene context is known to facilitate object recognition, little is known about which contextual "ingredients" are at the heart of this phenomenon. Here, we address the question of whether the materials that frequently occur in scenes (e.g., tiles in a bathroom) associated with specific objects (e.g., a perfume) are relevant for the processing of that object. To this end, we presented photographs of consistent and inconsistent objects (e.g., perfume vs. pinecone) superimposed on scenes (e.g., a bathroom) and close-ups of materials (e.g., tiles). In Experiment 1, consistent objects on scenes were named more accurately than inconsistent ones, while there was only a marginal consistency effect for objects on materials. Also, we did not find any consistency effect for scrambled materials that served as a color control condition. In Experiment 2, we recorded event-related potentials and found N300/N400 responses (markers of semantic violations) for objects on inconsistent relative to consistent scenes. Critically, objects on materials triggered N300/N400 responses of similar magnitudes. Our findings show that contextual materials indeed affect object processing, even in the absence of spatial scene structure and object content, suggesting that material is one of the contextual "ingredients" driving scene context effects.
Affiliation(s)
- Tim Lauer: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
- Filipp Schmidt: Department of Experimental Psychology, Justus Liebig University Giessen, 35394, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Theodor-W.-Adorno-Platz 6, PEG 5.G144, 60323, Frankfurt am Main, Germany
17. Manfredi M, Boggio PS. Neural correlates of sex differences in communicative gestures and speech comprehension: A preliminary study. Soc Neurosci 2021; 16:653-667. [PMID: 34697990] [DOI: 10.1080/17470919.2021.1997800]
Abstract
The goal of this study was to investigate whether the semantic processing of the audiovisual combination of communicative gestures with speech differs between men and women. We recorded event-related brain potentials in women and men during the presentation of communicative gestures that were either congruent or incongruent with the speech. Our results showed that incongruent gestures elicited an N400 effect over frontal sites compared to congruent ones in both groups. Moreover, the females showed an earlier N2 response to incongruent stimuli than congruent ones, while larger sustained negativity and late positivity in response to incongruent stimuli were observed only in males. These results suggest that women rapidly recognize and process audiovisual combinations of communicative gestures and speech (as early as 300 ms), whereas men analyze them at later stages of processing.
Affiliation(s)
- Mirella Manfredi: Department of Psychology, University of Zurich, Zurich, Switzerland
- Paulo Sergio Boggio: Social and Cognitive Neuroscience Laboratory, Center for Biological Science and Health, Mackenzie Presbyterian University, São Paulo, Brazil
18. Manfredi M, Sanchez Mello de Pinho P, Murrins Marques L, de Oliveira Ribeiro B, Boggio PS. Crossmodal processing of environmental sounds and everyday life actions: An ERP study. Heliyon 2021; 7:e07937. [PMID: 34541349] [PMCID: PMC8436072] [DOI: 10.1016/j.heliyon.2021.e07937]
Abstract
To investigate the processing of environmental sounds, previous researchers have compared the semantic processing of words and sounds, yielding mixed results. This study aimed to specifically investigate the electrophysiological mechanism underlying the semantic processing of environmental sounds presented in a naturalistic visual scene. We recorded event-related brain potentials in a group of young adults over the presentation of everyday life actions that were either congruent or incongruent with environmental sounds. Our results showed that incongruent environmental sounds evoked both a P400 and an N400 effect, reflecting sensitivity to physical and semantic violations of environmental sounds’ properties, respectively. In addition, our findings showed an enhanced late positivity in response to incongruous environmental sounds, probably reflecting additional reanalysis costs. In conclusion, these results indicate that the crossmodal processing of the environmental sounds might require the simultaneous involvement of different cognitive processes.
Affiliation(s)
- Mirella Manfredi: Department of Psychology, University of Zurich, Zurich, Switzerland (corresponding author)
- Pamella Sanchez Mello de Pinho: Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Lucas Murrins Marques: Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Beatriz de Oliveira Ribeiro: Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil
- Paulo Sergio Boggio: Social and Cognitive Neuroscience Laboratory, Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Sao Paulo, Brazil (corresponding author)
19. Gronau N. To Grasp the World at a Glance: The Role of Attention in Visual and Semantic Associative Processing. J Imaging 2021; 7:191. [PMID: 34564117] [PMCID: PMC8470651] [DOI: 10.3390/jimaging7090191]
Abstract
Associative relations among words, concepts and percepts are the core building blocks of high-level cognition. When viewing the world ‘at a glance’, the associative relations between objects in a scene, or between an object and its visual background, are extracted rapidly. The extent to which such relational processing requires attentional capacity, however, has been heavily disputed over the years. In the present manuscript, I review studies investigating scene–object and object–object associative processing. I then present a series of studies in which I assessed the necessity of spatial attention to various types of visual–semantic relations within a scene. Importantly, in all studies, the spatial and temporal aspects of visual attention were tightly controlled in an attempt to minimize unintentional attention shifts from ‘attended’ to ‘unattended’ regions. Pairs of stimuli—either objects, scenes or a scene and an object—were briefly presented on each trial, while participants were asked to detect a pre-defined target category (e.g., an animal, a nonsense shape). Response times (RTs) to the target detection task were registered when visual attention spanned both stimuli in a pair vs. when attention was focused on only one of two stimuli. Among non-prioritized stimuli that were not defined as to-be-detected targets, findings consistently demonstrated rapid associative processing when stimuli were fully attended, i.e., shorter RTs to associated than unassociated pairs. Focusing attention on a single stimulus only, however, largely impaired this relational processing. Notably, prioritized targets continued to affect performance even when positioned at an unattended location, and their associative relations with the attended items were well processed and analyzed. Our findings portray an important dissociation between unattended task-irrelevant and task-relevant items: while the former require spatial attentional resources in order to be linked to stimuli positioned inside the attentional focus, the latter may influence high-level recognition and associative processes via feature-based attentional mechanisms that are largely independent of spatial attention.
Affiliation(s)
- Nurit Gronau: Department of Psychology and Department of Cognitive Science Studies, The Open University of Israel, Raanana 4353701, Israel
20. Monticelli M, Zeppa P, Mammi M, Penner F, Melcarne A, Zenga F, Garbossa D. Where We Mentalize: Main Cortical Areas Involved in Mentalization. Front Neurol 2021; 12:712532. [PMID: 34512525] [PMCID: PMC8432612] [DOI: 10.3389/fneur.2021.712532]
Abstract
When discussing "mentalization," we refer to a very special ability that only humans and a few species of great apes possess: the ability to think about themselves and to represent in their mind their own mental state, attitudes, and beliefs and those of others. In this review, a summary of the main cortical areas involved in mentalization is presented. A thorough literature search using the PubMed MEDLINE database was performed. The search terms "cognition," "metacognition," "mentalization," "direct electrical stimulation," "theory of mind," and their synonyms were combined with "prefrontal cortex," "temporo-parietal junction," "parietal cortex," "inferior frontal gyrus," "cingulate gyrus," and the names of other cortical areas to extract relevant published papers. Non-English publications were excluded. Data were extracted and analyzed in a qualitative manner. It is the authors' belief that knowledge of the neural substrate of metacognition is essential not only for the "neuroscientist" but also for the "practical neuroscientist" (i.e., the neurosurgeon), in order to better understand the pathophysiology of mentalizing dysfunctions in brain pathologies, especially those in which the integrity of cortical areas or white matter connectivity is compromised. Furthermore, in the context of neuro-oncological surgery, understanding the anatomical structures involved in the theory of mind can help the neurosurgeon obtain a wider and safer resection. Though beyond the scope of this paper, an important but unresolved issue concerns the long-range white matter connections that unify these cortical areas and that may themselves be involved in neural information processing.
Affiliation(s)
- Matteo Monticelli: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Pietro Zeppa: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Marco Mammi: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Federica Penner: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Antonio Melcarne: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Francesco Zenga: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
- Diego Garbossa: Neurosurgery Unit, Department of Neuroscience "Rita Levi Montalcini," Turin University, Turin, Italy
21. Association between symbol digit modalities test and regional cortex thickness in young adults with relapsing-remitting multiple sclerosis. Clin Neurol Neurosurg 2021; 207:106805. [PMID: 34280674] [DOI: 10.1016/j.clineuro.2021.106805]
Abstract
BACKGROUND Multiple sclerosis (MS) is a demyelinating disease of the central nervous system, predominantly affecting young adults. Cognitive disorders are common in MS and are associated with several magnetic resonance imaging (MRI) markers, especially brain atrophy. Many studies have found the symbol digit modalities test (SDMT) to be the most sensitive individual cognitive measure relevant to MS. However, the relationship between the SDMT and regional brain cortex thickness in young adults with relapsing-remitting multiple sclerosis (YA-RRMS) has been little explored. The purpose of this study was to investigate the association between the SDMT and regional cortex thickness in YA-RRMS using FreeSurfer, an automatic brain structure segmentation method. METHOD Twenty-eight YA-RRMS patients (18-35 years old) were enrolled in the present study. Informed consent was obtained, and information including gender, age, disease duration, number of relapses, and annual relapse rate was collected from all patients. Clinical cognitive evaluations (the SDMT and the auditory verbal learning test, AVLT) and daily performance (activities of daily living, ADL) were assessed in the present study. MRI scans were performed at the Institute of Neurosurgery of Tiantan Hospital. MRI data from twenty-eight matched healthy controls (HC) were obtained from the Tiantan Hospital database. Thickness data for thirty-four bilateral cortical structures, using statistically defined brain regions-of-interest from FreeSurfer, were obtained from all participants. RESULTS Patients with RRMS exhibited extensively thinner cerebellar cortex compared with HC. SDMT scores were significantly correlated with AVLT subscores (IM, immediate memory; DRM, delayed recall memory; LTRM, long-term recognition memory) in YA-RRMS patients (P < 0.05). The SDMT was strongly correlated with regional cortex thickness differences of the right temporal pole (r = 0.68) and bilateral parahippocampal areas (right r = 0.62; left r = 0.60), and moderately correlated with regional cortex thickness differences including the left superior temporal area and right insula (r = 0.57 and 0.56, respectively) in YA-RRMS patients. CONCLUSION The present study has shown that the SDMT is strongly correlated with the thickness of selected cortical regions, including the bilateral parahippocampal areas and the right temporal pole, which are involved in the processing of geometric structures.
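The correlation analysis reported above (SDMT scores against regional cortical thickness) can be sketched with simple Pearson correlations over FreeSurfer-style thickness values. The region names and all data below are placeholders, not the study's measurements.

```python
# Sketch: Pearson correlations between SDMT scores and regional cortical thickness values
# (e.g., as exported from FreeSurfer aparc tables). All values and region names are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_patients = 28
sdmt = rng.normal(50, 10, n_patients)                        # assumed SDMT scores

regions = {                                                  # assumed thickness values (mm)
    "rh_temporalpole": rng.normal(3.6, 0.3, n_patients),
    "rh_parahippocampal": rng.normal(2.7, 0.2, n_patients),
    "lh_parahippocampal": rng.normal(2.7, 0.2, n_patients),
}

for name, thickness in regions.items():
    r, p = stats.pearsonr(sdmt, thickness)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```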
22. Zacharia AA, Ahuja N, Kaur S, Sharma R. Frontal activation as a key for deciphering context congruity and valence during visual perception: An electrical neuroimaging study. Brain Cogn 2021; 150:105711. [PMID: 33774336] [DOI: 10.1016/j.bandc.2021.105711]
Abstract
Object-context associations and valence are two important stimulus attributes that influence visual perception. The current study investigated the neural sources associated with schema-congruent and schema-incongruent object-context associations within positive, negative, and neutral valence during an intermittent binocular rivalry task with simultaneous high-density EEG recording. Cortical sources were calculated using the sLORETA algorithm in two time windows: 150 ms after stimulus onset (Stim + 150) and 400 ms before response (Resp-400). No significant difference in source activity was found between congruent and incongruent associations in any of the valence categories in the Stim + 150 window, indicating that immediately after stimulus presentation the basic visual processing remains the same for both. In the Resp-400 window, different frontal regions showed higher activity for incongruent associations depending on valence: the superior frontal gyrus showed significantly higher activation for negative valence, the middle and medial frontal gyri for neutral valence, and the inferior frontal gyrus for positive valence. Besides replicating previous findings of frontal activation in response to context congruity, the current study provides further evidence for the sensitivity of the frontal lobe to the valence associated with incongruent stimuli.
Affiliation(s)
- Angel Anna Zacharia: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Navdeep Ahuja: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Simran Kaur: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
- Ratna Sharma: Stress and Cognitive Electroimaging Lab, Department of Physiology, All India Institute of Medical Sciences, New Delhi 110029, India
24. Võ MLH. The meaning and structure of scenes. Vision Res 2021; 181:10-20. [PMID: 33429218] [DOI: 10.1016/j.visres.2020.11.003]
Abstract
We live in a rich, three dimensional world with complex arrangements of meaningful objects. For decades, however, theories of visual attention and perception have been based on findings generated from lines and color patches. While these theories have been indispensable for our field, the time has come to move on from this rather impoverished view of the world and (at least try to) get closer to the real thing. After all, our visual environment consists of objects that we not only look at, but constantly interact with. Having incorporated the meaning and structure of scenes, i.e. its "grammar", then allows us to easily understand objects and scenes we have never encountered before. Studying this grammar provides us with the fascinating opportunity to gain new insights into the complex workings of attention, perception, and cognition. In this review, I will discuss how the meaning and the complex, yet predictive structure of real-world scenes influence attention allocation, search, and object identification.
Affiliation(s)
- Melissa Le-Hoa Võ: Department of Psychology, Johann Wolfgang-Goethe-Universität, Frankfurt, Germany. https://www.scenegrammarlab.com/
25. Meghdadi AH, Giesbrecht B, Eckstein MP. EEG signatures of contextual influences on visual search with real scenes. Exp Brain Res 2021; 239:797-809. [PMID: 33398454] [DOI: 10.1007/s00221-020-05984-8]
Abstract
The use of scene context is a powerful way by which biological organisms guide and facilitate visual search. Although many studies have shown enhancements of target-related electroencephalographic (EEG) activity with synthetic cues, fewer studies have demonstrated such enhancements during search with scene context and objects in real-world scenes. Here, observers covertly searched for a target in images of real scenes while we used EEG to measure the steady-state visual evoked response to objects flickering at different frequencies. The target appeared in its typical contextual location or out of context while we controlled for low-level properties of the image, including target saliency against the background and retinal eccentricity. A pattern classifier using EEG activity at the relevant modulated frequencies showed that target detection accuracy increased when the target was in a contextually appropriate location. A control condition, in which observers searched the same images for a different target orthogonal to the contextual manipulation, resulted in no effect of scene context on classifier performance, confirming that image properties cannot explain the contextual modulations of neural activity. Pattern classifier decisions for individual images were also related to the aggregated behavioral decisions of observers for those images. Together, these findings demonstrate that target-related neural responses are modulated by scene context during visual search with real-world scenes and can be related to behavioral search decisions.
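As a rough illustration of the frequency-tagged classification approach, the sketch below extracts the amplitude spectrum at assumed flicker frequencies and feeds it to a cross-validated classifier. The tagging frequencies, epoch length, classifier, and labels are placeholders, not the authors' exact choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def ssvep_features(epochs, sfreq, tag_freqs=(12.0, 15.0)):
    """Amplitude spectrum at each tagged flicker frequency, per channel.

    epochs: (n_trials, n_channels, n_times) array of EEG voltages.
    """
    n_times = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
    amp = np.abs(np.fft.rfft(epochs, axis=-1))
    bins = [int(np.argmin(np.abs(freqs - f))) for f in tag_freqs]
    return amp[:, :, bins].reshape(len(epochs), -1)   # (trials, channels * freqs)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((200, 32, 1000))            # 200 trials, 32 channels, 2 s at 500 Hz
labels = rng.integers(0, 2, 200)                      # 1 = target at the cued location (placeholder)

X = ssvep_features(eeg, sfreq=500.0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(scores.mean())   # ~0.5 on random data; above chance would indicate context-dependent detection
```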
Collapse
Affiliation(s)
- Amir H Meghdadi
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA.
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA.
| | - Barry Giesbrecht
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
| | - Miguel P Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106-9660, USA
- Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
- Interdepartmental Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, Santa Barbara, CA, 93106-5100, USA
| |
Collapse
|
26
|
Quek GL, Peelen MV. Contextual and Spatial Associations Between Objects Interactively Modulate Visual Processing. Cereb Cortex 2020; 30:6391-6404. [PMID: 32754744 PMCID: PMC7609942 DOI: 10.1093/cercor/bhaa197] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 06/29/2020] [Accepted: 06/29/2020] [Indexed: 01/23/2023] Open
Abstract
Much of what we know about object recognition arises from the study of isolated objects. In the real world, however, we commonly encounter groups of contextually associated objects (e.g., teacup and saucer), often in stereotypical spatial configurations (e.g., teacup above saucer). Here we used electroencephalography to test whether identity-based associations between objects (e.g., teacup-saucer vs. teacup-stapler) are encoded jointly with their typical relative positioning (e.g., teacup above saucer vs. below saucer). Observers viewed a 2.5-Hz image stream of contextually associated object pairs intermixed with nonassociated pairs as every fourth image. The differential response to nonassociated pairs (measurable at 0.625 Hz in 28/37 participants) served as an index of contextual integration, reflecting the association of object identities in each pair. Over right occipitotemporal sites, this signal was larger for typically positioned object streams, indicating that spatial configuration facilitated the extraction of the objects' contextual association. This high-level influence of spatial configuration on object identity integration arose ~ 320 ms post-stimulus onset, with lower-level perceptual grouping (shared with inverted displays) present at ~ 130 ms. These results demonstrate that contextual and spatial associations between objects interactively influence object processing. We interpret these findings as reflecting the high-level perceptual grouping of objects that frequently co-occur in highly stereotyped relative positions.
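In this kind of periodic design, the contextual-integration index is the spectral response at the oddball rate (every fourth image of a 2.5 Hz stream, i.e., 0.625 Hz). A common way to quantify it is the signal-to-noise ratio of the amplitude spectrum at that bin relative to neighbouring bins; the sketch below is a generic version of that computation, with data shapes and neighbour counts chosen arbitrarily rather than taken from the study.

```python
import numpy as np

def snr_at_frequency(eeg, sfreq, target_freq, n_neighbors=10, skip=1):
    """Signal-to-noise ratio of the amplitude spectrum at one frequency.

    eeg: (n_channels, n_times) average over a long stimulation sequence.
    SNR = amplitude at the target bin / mean amplitude of surrounding bins
    (excluding `skip` bins immediately adjacent to the target).
    """
    amp = np.abs(np.fft.rfft(eeg, axis=-1))
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / sfreq)
    tbin = int(np.argmin(np.abs(freqs - target_freq)))
    neighbors = np.r_[tbin - skip - n_neighbors: tbin - skip,
                      tbin + skip + 1: tbin + skip + 1 + n_neighbors]
    return amp[:, tbin] / amp[:, neighbors].mean(axis=-1)

sfreq = 250.0
eeg = np.random.default_rng(1).standard_normal((64, int(sfreq * 60)))  # 60 s placeholder sequence
print(snr_at_frequency(eeg, sfreq, target_freq=0.625))                 # one SNR value per channel
```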
Collapse
Affiliation(s)
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
| | - Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Gelderland, The Netherlands
| |
Collapse
|
27
|
Vision at a glance: The role of attention in processing object-to-object categorical relations. Atten Percept Psychophys 2020; 82:671-688. [PMID: 31907840 DOI: 10.3758/s13414-019-01940-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When viewing a scene at a glance, the visual and categorical relations between objects in the scene are extracted rapidly. In the present study, the involvement of spatial attention in the processing of such relations was investigated. Participants performed a category detection task (e.g., "is there an animal") on briefly flashed object pairs. In one condition, visual attention spanned both stimuli, and in another, attention was focused on a single object while its counterpart object served as a task-irrelevant distractor. The results showed that when participants attended to both objects, a categorical relation effect was obtained (Exp. 1). Namely, latencies were shorter to objects from the same category than to those from different superordinate categories (e.g., clothes, vehicles), even if categories were not prioritized by the task demands. Focusing attention on only one of two stimuli, however, largely eliminated this effect (Exp. 2). Some relational processing was seen when categories were narrowed to the basic level and were highly distinct from each other (Exp. 3), implying that categorical relational processing necessitates attention, unless the unattended input is highly predictable. Critically, when a prioritized (to-be-detected) object category, positioned in a distractor's location, differed from an attended object, a robust distraction effect was consistently observed, regardless of category homogeneity and/or of response conflict factors (Exp. 4). This finding suggests that object relations that involve stimuli that are highly relevant to the task settings may survive attentional deprivation at the distractor location. The involvement of spatial attention in object-to-object categorical processing is most critical in situations that include wide categories that are irrelevant to one's current goals.
Collapse
|
28
|
Lauer T, Willenbockel V, Maffongelli L, Võ MLH. The influence of scene and object orientation on the scene consistency effect. Behav Brain Res 2020; 394:112812. [DOI: 10.1016/j.bbr.2020.112812] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Revised: 07/06/2020] [Accepted: 07/14/2020] [Indexed: 01/18/2023]
|
29
|
Coderre EL, O'Donnell E, O'Rourke E, Cohn N. Predictability modulates neurocognitive semantic processing of non-verbal narratives. Sci Rep 2020; 10:10326. [PMID: 32587312 PMCID: PMC7316725 DOI: 10.1038/s41598-020-66814-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2019] [Accepted: 05/28/2020] [Indexed: 11/26/2022] Open
Abstract
Predictability is known to modulate semantic processing in language, but it is unclear to what extent this applies for other modalities. Here we ask whether similar cognitive processes are at play in predicting upcoming events in a non-verbal visual narrative. Typically developing adults viewed comics sequences in which a target panel was highly predictable (“high cloze”), less predictable (“low cloze”), or incongruent with the preceding narrative context (“anomalous”) during EEG recording. High and low predictable sequences were determined by a pretest where participants assessed “what happened next?”, resulting in cloze probability scores for sequence outcomes comparable to those used to measure predictability in sentence processing. Through both factorial and correlational analyses, we show a significant modulation of neural responses by cloze such that N400 effects are diminished as a target panel in a comic sequence becomes more predictable. Predictability thus appears to play a similar role in non-verbal comprehension of sequential images as in language comprehension, providing further evidence for the domain generality of semantic processing in the brain.
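The correlational logic, that more predictable panels elicit smaller (less negative) N400s, can be sketched as an item-level correlation between cloze probability and mean ERP amplitude in an N400 window. The window bounds, electrode, and all data below are placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
sfreq, tmin = 250.0, -0.2                    # sampling rate (Hz), epoch start (s)
erps = rng.standard_normal((40, 300))        # 40 items x 300 samples: item-level ERPs at one electrode
cloze = rng.uniform(0, 1, 40)                # cloze probability of each target panel (placeholder)

# Mean amplitude in an assumed N400 window (300-500 ms post-stimulus)
times = tmin + np.arange(erps.shape[1]) / sfreq
window = (times >= 0.30) & (times <= 0.50)
n400 = erps[:, window].mean(axis=1)

r, p = stats.pearsonr(cloze, n400)
print(f"item-level cloze vs. N400 amplitude: r = {r:.2f}, p = {p:.3f}")
```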
Collapse
Affiliation(s)
- Emily L Coderre
- Department of Communication Sciences and Disorders, University of Vermont, Burlington, VT, United States.
| | - Elizabeth O'Donnell
- Department of Communication Sciences and Disorders, University of Vermont, Burlington, VT, United States
| | - Emme O'Rourke
- Department of Communication Sciences and Disorders, University of Vermont, Burlington, VT, United States
| | - Neil Cohn
- Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg Center for Cognition and Communication (TiCC), Tilburg University, Tilburg, The Netherlands
| |
Collapse
|
30
|
Lai LY, Frömer R, Festa EK, Heindel WC. Age-related changes in the neural dynamics of bottom-up and top-down processing during visual object recognition: an electrophysiological investigation. Neurobiol Aging 2020; 94:38-49. [PMID: 32562874 DOI: 10.1016/j.neurobiolaging.2020.05.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2019] [Revised: 03/10/2020] [Accepted: 05/15/2020] [Indexed: 10/24/2022]
Abstract
When recognizing objects in our environments, we rely on both what we see and what we know. While older adults often display increased sensitivity to top-down influences of contextual information during object recognition, the locus of this increased sensitivity remains unresolved. To examine the effects of aging on the neural dynamics of bottom-up and top-down visual processing during rapid object recognition, we probed the differential effects of object perceptual ambiguity and scene context congruity on specific EEG event-related potential components indexing dissociable processes along the visual processing stream. Older adults displayed larger behavioral scene congruity effects than young adults. Older adults' larger visual P2 amplitudes to object perceptual ambiguity (as opposed to the scene congruity P2 effects in young adults) suggest continued resolution of perceptual ambiguity that interfered with scene congruity processing, while post-perceptual semantic integration (as indexed by N400) remained largely intact. These findings suggest that compromised bottom-up perceptual processing in healthy aging leads to an increased involvement of top-down processes to resolve greater perceptual ambiguity during object recognition.
Collapse
Affiliation(s)
- Leslie Y Lai
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
| | - Romy Frömer
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
| | - Elena K Festa
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912
| | - William C Heindel
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912.
| |
Collapse
|
31
|
Abstract
Detecting a suspect's recognition of a crime scene (e.g., a burgled room or a location visited for criminal activity) can be of great value during criminal investigations. Although it is established that the Reaction-Time Concealed Information Test (RT-CIT) can determine whether a suspect recognizes crime-related objects, no research has tested whether this capability extends to the recognition of scenes. In Experiment 1, participants were given an autobiographic scene-based RT-CIT. In Experiment 2, participants watched a mock crime video before completing an RT-CIT that included both scenes and objects. In Experiment 3, participants completed an autobiographic scene-based RT-CIT, with half instructed to perform a physical countermeasure. Overall, the findings showed that an equivalent RT-CIT effect can be found with both scene and object stimuli and that RT-CITs may not be susceptible to physical countermeasure strategies, thereby increasing its real-world applicability.
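The RT-CIT effect referred to above is conventionally quantified as the slowing of responses to probe (recognized) items relative to irrelevant items. A minimal per-participant computation might look like the following; the labels and numbers are illustrative only and do not reproduce the authors' analysis.

```python
import numpy as np

def rt_cit_effect(rts, labels):
    """Probe-minus-irrelevant mean RT difference for one participant (ms).

    rts: correct-response reaction times.
    labels: matching array with entries "probe" or "irrelevant".
    """
    rts, labels = np.asarray(rts, float), np.asarray(labels)
    return rts[labels == "probe"].mean() - rts[labels == "irrelevant"].mean()

# Toy example: recognition of the probe scene slows responses
rts = [620, 655, 640, 500, 515, 490, 505]
labels = ["probe"] * 3 + ["irrelevant"] * 4
print(rt_cit_effect(rts, labels))   # a positive value indicates a CIT effect
```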
Collapse
|
32
|
Kaiser D, Häberle G, Cichy RM. Real-world structure facilitates the rapid emergence of scene category information in visual brain signals. J Neurophysiol 2020; 124:145-151. [PMID: 32519577 DOI: 10.1152/jn.00164.2020] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged into atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.NEW & NOTEWORTHY Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.
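Time-resolved multivariate decoding of this kind is commonly run with MNE-Python's sliding estimator; the sketch below shows the general recipe on placeholder data (random arrays, an arbitrary category count), not the authors' classifier or preprocessing. Intact and jumbled conditions would each be decoded separately and the resulting accuracy time courses compared.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from mne.decoding import SlidingEstimator, cross_val_multiscore

rng = np.random.default_rng(3)
X = rng.standard_normal((240, 64, 150))      # trials x channels x time points (placeholder EEG)
y = rng.integers(0, 4, 240)                  # scene-category label per trial (4 placeholder categories)

clf = make_pipeline(StandardScaler(), LinearSVC())
decoder = SlidingEstimator(clf, scoring="accuracy", n_jobs=1)
scores = cross_val_multiscore(decoder, X, y, cv=5)   # (5 folds, 150 time points)
print(scores.mean(axis=0))                           # decoding accuracy as a function of time
```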
Collapse
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, United Kingdom
| | - Greta Häberle
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.,Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany.,Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.,Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany.,Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany.,Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
| |
Collapse
|
33
|
Disentangling the Independent Contributions of Visual and Conceptual Features to the Spatiotemporal Dynamics of Scene Categorization. J Neurosci 2020; 40:5283-5299. [PMID: 32467356 DOI: 10.1523/jneurosci.2088-19.2020] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Revised: 04/18/2020] [Accepted: 04/23/2020] [Indexed: 11/21/2022] Open
Abstract
Human scene categorization is characterized by its remarkable speed. While many visual and conceptual features have been linked to this ability, significant correlations exist between feature spaces, impeding our ability to determine their relative contributions to scene categorization. Here, we used a whitening transformation to decorrelate a variety of visual and conceptual features and assess the time course of their unique contributions to scene categorization. Participants (both sexes) viewed 2250 full-color scene images drawn from 30 different scene categories while having their brain activity measured through 256-channel EEG. We examined the variance explained at each electrode and time point of visual event-related potential (vERP) data from nine different whitened encoding models. These ranged from low-level features obtained from filter outputs to high-level conceptual features requiring human annotation. The amount of category information in the vERPs was assessed through multivariate decoding methods. Behavioral similarity measures were obtained in separate crowdsourced experiments. We found that all nine models together contributed 78% of the variance of human scene similarity assessments and were within the noise ceiling of the vERP data. Low-level models explained earlier vERP variability (88 ms after image onset), whereas high-level models explained later variance (169 ms). Critically, only high-level models shared vERP variability with behavior. Together, these results suggest that scene categorization is primarily a high-level process, but reliant on previously extracted low-level features.SIGNIFICANCE STATEMENT In a single fixation, we glean enough information to describe a general scene category. Many types of features are associated with scene categories, ranging from low-level properties, such as colors and contours, to high-level properties, such as objects and attributes. Because these properties are correlated, it is difficult to understand each property's unique contributions to scene categorization. This work uses a whitening transformation to remove the correlations between features and examines the extent to which each feature contributes to visual event-related potentials over time. We found that low-level visual features contributed first but were not correlated with categorization behavior. High-level features followed 80 ms later, providing key insights into how the brain makes sense of a complex visual world.
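The decorrelation step can be illustrated with a generic ZCA (Mahalanobis) whitening of a feature matrix. Whether this matches the authors' exact transformation is an assumption, and the matrix sizes below are placeholders.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """ZCA whitening: decorrelate the columns of X while staying close to the original space.

    X: (n_samples, n_features) matrix, e.g., images x model features.
    Returns a matrix whose covariance is (approximately) the identity.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W

X = np.random.default_rng(4).standard_normal((2250, 12))   # e.g., 2250 scenes x 12 correlated features
Xw = zca_whiten(X)
print(np.abs(np.cov(Xw, rowvar=False) - np.eye(X.shape[1])).max())   # ~0: features decorrelated
```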
Collapse
|
34
|
Leroy A, Faure S, Spotorno S. Reciprocal semantic predictions drive categorization of scene contexts and objects even when they are separate. Sci Rep 2020; 10:8447. [PMID: 32439874 PMCID: PMC7242336 DOI: 10.1038/s41598-020-65158-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 03/20/2020] [Indexed: 11/10/2022] Open
Abstract
Visual categorization improves when object-context associations in scenes are semantically consistent, thus predictable from schemas stored in long-term memory. However, it is unclear whether this is due to differences in early perceptual processing, in matching of memory representations or in later stages of response selection. We tested these three concurrent explanations across five experiments. At each trial, participants had to categorize a scene context and an object briefly presented within the same image (Experiment 1), or separately in simultaneous images (Experiments 2–5). We analyzed unilateral (Experiments 1, 3) and bilateral presentations (Experiments 2, 4, 5), and presentations on the screen’s horizontal midline (Experiments 1–2) and in the upper and lower visual fields (Experiments 3, 4). In all the experiments, we found a semantic consistency advantage for both context categorization and object categorization. This shows that the memory for object-context semantic associations is activated regardless of whether these two scene components are integrated in the same percept. Our study suggests that the facilitation effect of semantic consistency on categorization occurs at the stage of matching the percept with previous knowledge, supporting the object selection account and extending this framework to an object-context reciprocal influence on matching processes (object-context selection account).
Collapse
Affiliation(s)
- Anaïs Leroy
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales (LAPCOS), Université Cote d'Azur, Nice, France.
| | - Sylvane Faure
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales (LAPCOS), Université Cote d'Azur, Nice, France
| | - Sara Spotorno
- School of Psychology, University of Keele, Keele, United Kingdom
| |
Collapse
|
35
|
Multimodal feature binding in object memory retrieval using event-related potentials: Implications for models of semantic memory. Int J Psychophysiol 2020; 153:116-126. [PMID: 32389620 DOI: 10.1016/j.ijpsycho.2020.04.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 03/30/2020] [Accepted: 04/29/2020] [Indexed: 11/23/2022]
Abstract
To test the hypothesis that semantic processes are represented in multiple subsystems, we recorded electroencephalogram (EEG) as we elicited object memories using the modified Semantic Object Retrieval Test, during which an object feature, presented as a visual word [VW], an auditory word [AW], or a picture [Pic], was followed by a second feature always presented as a visual word. We performed both hypothesis-driven and data-driven analyses using event-related potentials (ERPs) time locked to the second stimulus. We replicated a previously reported left fronto-temporal ERP effect (750-1000 ms post-stimulus) in the VW task, and also found that this ERP component was only present during object memory retrieval in verbal (VW, AW) as opposed to non-verbal (Pic) stimulus types. We also found a right temporal ERP effect (850-1000 ms post-stimulus) that was present in auditory (AW) but not in visual (VW, Pic) stimulus types. In addition, we found an earlier left temporo-parietal ERP effect between 350 and 700 ms post-stimulus and a later midline parietal ERP effect between 700 and 1100 ms post-stimulus, present in all stimulus types, suggesting common neural mechanisms for object retrieval processes and object activation, respectively. These findings support multiple semantic subsystems that respond to varying stimulus modalities, and argue against an ultimate unitary amodal semantic analysis.
Collapse
|
36
|
Smith ME, Loschky LC. The influence of sequential predictions on scene-gist recognition. J Vis 2020; 19:14. [PMID: 31622473 DOI: 10.1167/19.12.14] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Past research suggests that recognizing scene gist, a viewer's holistic semantic representation of a scene acquired within a single eye fixation, involves purely feed-forward mechanisms. We investigated whether expectations can influence scene categorization. To do this, we embedded target scenes in more ecologically valid, first-person-viewpoint image sequences, along spatiotemporally connected routes (e.g., an office to a parking lot). We manipulated the sequences' spatiotemporal coherence by presenting them either coherently or in random order. Participants identified the category of one target scene in a 10-scene-image rapid serial visual presentation. Categorization accuracy was greater for targets in coherent sequences. Accuracy was also greater for targets with more visually similar primes. In Experiment 2, we investigated whether targets in coherent sequences were more predictable and whether predictable images were identified more accurately in Experiment 1 after accounting for the effect of prime-to-target visual similarity. To do this, we removed targets and had participants predict the category of the missing scene. Images were more accurately predicted in coherent sequences, and both image predictability and prime-to-target visual similarity independently contributed to performance in Experiment 1. To test whether prediction-based facilitation effects were solely due to response bias, participants performed a two-alternative forced-choice task in which they indicated whether the target was an intact or a phase-randomized scene. Critically, predictability of the target category was irrelevant to this task. Nevertheless, results showed that sensitivity, but not response bias, was greater for targets in coherent sequences. Predictions made prior to viewing a scene facilitate scene-gist recognition.
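Separating sensitivity from response bias, as in the final experiment above, is standard signal detection theory: d' and the criterion c are computed from hit and false-alarm rates. The sketch below uses a log-linear correction and made-up trial counts; it illustrates the measure, not the authors' exact statistics.

```python
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response bias (c) from trial counts.

    A log-linear (+0.5) correction keeps the z-transform finite when a rate is 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative counts: "intact" responses to intact (signal) vs. phase-randomized (noise) targets
print(dprime_and_criterion(hits=45, misses=15, false_alarms=20, correct_rejections=40))
```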
Collapse
Affiliation(s)
- Maverick E Smith
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
| | - Lester C Loschky
- Department of Psychological Sciences, Kansas State University, Manhattan, KS, USA
| |
Collapse
|
37
|
Coco MI, Nuthmann A, Dimigen O. Fixation-related Brain Potentials during Semantic Integration of Object–Scene Information. J Cogn Neurosci 2020; 32:571-589. [DOI: 10.1162/jocn_a_01504] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
In vision science, a particularly controversial topic is whether and how quickly the semantic information about objects is available outside foveal vision. Here, we aimed at contributing to this debate by coregistering eye movements and EEG while participants viewed photographs of indoor scenes that contained a semantically consistent or inconsistent target object. Linear deconvolution modeling was used to analyze the ERPs evoked by scene onset as well as the fixation-related potentials (FRPs) elicited by the fixation on the target object (t) and by the preceding fixation (t − 1). Object–scene consistency did not influence the probability of immediate target fixation or the ERP evoked by scene onset, which suggests that object–scene semantics was not accessed immediately. However, during the subsequent scene exploration, inconsistent objects were prioritized over consistent objects in extrafoveal vision (i.e., looked at earlier) and were more effortful to process in foveal vision (i.e., looked at longer). In FRPs, we demonstrate a fixation-related N300/N400 effect, whereby inconsistent objects elicit a larger frontocentral negativity than consistent objects. In line with the behavioral findings, this effect was already seen in FRPs aligned to the pretarget fixation t − 1 and persisted throughout fixation t, indicating that the extraction of object semantics can already begin in extrafoveal vision. Taken together, the results emphasize the usefulness of combined EEG/eye movement recordings for understanding the mechanisms of object–scene integration during natural viewing.
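Linear deconvolution of overlapping ERPs and fixation-related potentials amounts to regressing the continuous EEG on a time-expanded (FIR) design matrix. The study used a dedicated deconvolution framework; the sketch below is a bare-bones, single-channel version of the same idea on simulated data, with event types and lag counts chosen arbitrarily.

```python
import numpy as np

def deconvolve_erps(eeg, event_samples, event_types, n_lags):
    """Least-squares deconvolution of overlapping event-related responses.

    eeg: (n_times,) continuous signal from one channel.
    event_samples: sample index of each event (e.g., stimulus onsets, fixation onsets).
    event_types: integer code (0..n_types-1) for each event.
    n_lags: number of post-event samples to estimate per event type.
    Returns an array (n_types, n_lags) of overlap-corrected responses.
    """
    n_types = int(max(event_types)) + 1
    X = np.zeros((len(eeg), n_types * n_lags))
    for s, t in zip(event_samples, event_types):
        for lag in range(n_lags):
            if s + lag < len(eeg):
                X[s + lag, t * n_lags + lag] = 1.0   # FIR "time expansion" of the event
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return beta.reshape(n_types, n_lags)

rng = np.random.default_rng(5)
eeg = rng.standard_normal(5000)
events = np.sort(rng.choice(4500, 60, replace=False))
types = rng.integers(0, 2, 60)                 # e.g., 0 = scene onset, 1 = target fixation
frps = deconvolve_erps(eeg, events, types, n_lags=100)
```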
Collapse
Affiliation(s)
- Moreno I. Coco
- The University of East London
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa
| | | | | |
Collapse
|
38
|
Furtak M, Doradzińska Ł, Ptashynska A, Mudrik L, Nowicka A, Bola M. Automatic Attention Capture by Threatening, But Not by Semantically Incongruent Natural Scene Images. Cereb Cortex 2020; 30:4158-4168. [DOI: 10.1093/cercor/bhaa040] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 01/22/2020] [Accepted: 02/03/2020] [Indexed: 12/15/2022] Open
Abstract
Visual objects are typically perceived as parts of an entire visual scene, and the scene’s context provides information crucial in the object recognition process. Fundamental insights into the mechanisms of context-object integration have come from research on semantically incongruent objects, which are defined as objects with a very low probability of occurring in a given context. However, the role of attention in processing the context-object mismatch remains unclear, with some studies providing evidence for, and others against, an automatic capture of attention by incongruent objects. Therefore, in the present study, 25 subjects completed a dot-probe task, in which pairs of scenes—congruent and incongruent or neutral and threatening—were presented as task-irrelevant distractors. Importantly, threatening scenes are known to robustly capture attention and thus were included in the present study to provide a context for interpretation of results regarding incongruent scenes. Using the N2 posterior-contralateral (N2pc) ERP component as a primary measure, we found that threatening images indeed capture attention automatically and rapidly, but semantically incongruent scenes do not benefit from automatic attentional selection. Thus, our results suggest that identification of the context-object mismatch is not preattentive.
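The N2pc measure used above is the contralateral-minus-ipsilateral difference at posterior electrodes relative to the side on which the critical image appeared. A minimal computation, with assumed channel names (PO7/PO8) and random placeholder data:

```python
import numpy as np

def n2pc_wave(erp_left_stim, erp_right_stim, ch_names, left_ch="PO7", right_ch="PO8"):
    """Contralateral-minus-ipsilateral difference wave (N2pc).

    erp_left_stim / erp_right_stim: (n_channels, n_times) ERPs for trials where
    the critical image appeared in the left / right visual field.
    """
    li, ri = ch_names.index(left_ch), ch_names.index(right_ch)
    contra = 0.5 * (erp_left_stim[ri] + erp_right_stim[li])   # hemisphere opposite the image
    ipsi = 0.5 * (erp_left_stim[li] + erp_right_stim[ri])     # hemisphere on the image's side
    return contra - ipsi

ch_names = ["PO7", "PO8", "Pz"]
rng = np.random.default_rng(6)
left = rng.standard_normal((3, 300))
right = rng.standard_normal((3, 300))
n2pc = n2pc_wave(left, right, ch_names)   # a negative deflection ~200-300 ms indexes attentional capture
```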
Collapse
Affiliation(s)
- Marcin Furtak
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
| | - Łucja Doradzińska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
| | - Alina Ptashynska
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
| | - Liad Mudrik
- School of Psychological Science, Tel Aviv University, 69978 Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, 69978 Tel Aviv, Israel
| | - Anna Nowicka
- Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
| | - Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, 02-093 Warsaw, Poland
| |
Collapse
|
39
|
Caplette L, Gosselin F, Mermillod M, Wicker B. Real-world expectations and their affective value modulate object processing. Neuroimage 2020; 213:116736. [PMID: 32171924 DOI: 10.1016/j.neuroimage.2020.116736] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 03/09/2020] [Accepted: 03/10/2020] [Indexed: 10/24/2022] Open
Abstract
It is well known that expectations influence how we perceive the world. Yet the neural mechanisms underlying this process remain unclear. Studies about the effects of prior expectations have so far focused on artificial contingencies between simple neutral cues and events. Real-world expectations are, however, often generated from complex associations between contexts and objects learned over a lifetime. Additionally, these expectations may contain some affective value, and recent proposals present conflicting hypotheses about the mechanisms underlying affect in predictions. In this study, we used fMRI to investigate how object processing is influenced by realistic context-based expectations, and how affect impacts these expectations. First, we show that the precuneus, the inferotemporal cortex and the frontal cortex are more active during object recognition when expectations have been elicited a priori, irrespective of their validity or their affective intensity. This result supports previous hypotheses according to which these brain areas integrate contextual expectations with object sensory information. Notably, these brain areas are different from those responsible for simultaneous context-object interactions, dissociating the two processes. Then, we show that early visual areas, on the contrary, are more active during object recognition when no prior expectation has been elicited by a context. Lastly, BOLD activity was shown to be enhanced in early visual areas when objects are less expected, but only when contexts are neutral; the reverse effect is observed when contexts are affective. This result supports the proposal that affect modulates the weighting of sensory information during predictions. Together, our results help elucidate the neural mechanisms of real-world expectations.
Collapse
Affiliation(s)
- Laurent Caplette
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada.
| | - Frédéric Gosselin
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
| | | | - Bruno Wicker
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada; LNC, CNRS & Aix-Marseille Université, 13331, Marseille, France
| |
Collapse
|
40
|
Wodniecka Z, Szewczyk J, Kałamała P, Mandera P, Durlik J. When a second language hits a native language. What ERPs (do and do not) tell us about language retrieval difficulty in bilingual language production. Neuropsychologia 2020; 141:107390. [PMID: 32057934 DOI: 10.1016/j.neuropsychologia.2020.107390] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Revised: 09/01/2019] [Accepted: 02/09/2020] [Indexed: 02/05/2023]
Abstract
The accumulating evidence suggests that prior usage of a second language (L2) leads to processing costs on the subsequent production of a native language (L1). However, it is unclear what mechanism underlies this effect. It has been proposed that the L1 cost reflects inhibition of L1 representation acting during L1 production; however, previous studies exploring this issue were inconclusive. It is also unsettled whether the mechanism operates on the whole-language level or is restricted to translation equivalents in the two languages. We report a study that allowed us to address both issues behaviorally with the use of ERPs while focusing on the consequences of using L2 on the production of L1. In our experiment, native speakers of Polish (L1) and learners of English (L2) named a set of pictures in L1 following a set of pictures in either L1 or L2. Half of the pictures were repeated from the preceding block and half were new; this enabled dissociation of the effects on the level of the whole language from those specific to individual lexical items. Our results are consistent with the notion that language after-effects operate at a whole-language level. Behaviorally, we observed a clear processing cost on the whole-language level and a small facilitation on the item-specific level. The whole-language effect was accompanied by an enhanced, fronto-centrally distributed negativity in the 250-350 ms time-window which we identified as the N300 (in contrast to previous research, which probably misidentified the effect as the N2), a component that presumably reflects retrieval difficulty of relevant language representations during picture naming. As such, unlike previous studies that reported N2 for naming pictures in L1 after L2 use, we propose that the reported ERPs (N300) indicate that prior usage of L2 hampers lexical access to names in L1. Based on the literature, the after-effects could be caused by L1 inhibition and/or L2 interference, but the ERPs so far have not been informative about the causal mechanism.
Collapse
Affiliation(s)
- Zofia Wodniecka
- Psychology of Language and Bilingualism Lab, Institute of Psychology, Jagiellonian University, Krakow, Poland.
| | - Jakub Szewczyk
- Psychology of Language and Bilingualism Lab, Institute of Psychology, Jagiellonian University, Krakow, Poland.
| | - Patrycja Kałamała
- Psychology of Language and Bilingualism Lab, Institute of Psychology, Jagiellonian University, Krakow, Poland
| | - Paweł Mandera
- Psychology of Language and Bilingualism Lab, Institute of Psychology, Jagiellonian University, Krakow, Poland
| | - Joanna Durlik
- Psychology of Language and Bilingualism Lab, Institute of Psychology, Jagiellonian University, Krakow, Poland
| |
Collapse
|
41
|
Guillaume F, Baier S, Etienne Y. An ERP investigation of item-scene incongruity at encoding on subsequent recognition. Psychophysiology 2020; 57:e13534. [PMID: 31985081 DOI: 10.1111/psyp.13534] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Revised: 01/07/2020] [Accepted: 01/09/2020] [Indexed: 11/28/2022]
Abstract
The aim of the present study was to investigate how item-scene incongruity at encoding influences subsequent item recognition and the associated event-related potential (ERP) old/new effects. Participants (N = 26) studied pictures showing an item in a scene, either in a congruent condition (e.g., a tent in a field) or an incongruent condition (e.g., a shower cabin in a field). Items were presented alone at test. Behavioral data revealed a benefit of incongruent information, with greater source memory performance but no significant effect on old/new recognition judgments. Longer response times for old than for new items showed that participants not only evaluated the old-new status of objects during recognition, but also already began working on the scene-context decision required for the source memory judgment. An ERP incongruity effect was found at study, with greater N400 amplitude in the incongruent condition than in the congruent condition. During recognition, the results provide evidence that item-scene incongruity at study increases the amplitude of ERP old/new effects. A mid-frontal N400 old/new effect was found in the early time window (300-500 ms), and a right frontal sub-component was modulated by item-scene incongruity at encoding. The modulation observed in the later time window (500-800 ms) confirmed previous studies showing that the parietal old/new effect reflects the retrieval of episodic contextual details. The present study shows that the magnitude of ERP old/new effects is sensitive to item-scene incongruity at encoding, from the early time window in the right frontal region to the later retrieval processes.
Collapse
Affiliation(s)
- Fabrice Guillaume
- Laboratoire de Psychologie Cognitive (CNRS UMR 7290), Aix-Marseille Université, Marseille, France
| | - Sophia Baier
- Laboratoire d'Anthropologie et de Psychologie Cognitive et Sociale (EA 7278), Université de Nice Sophia Antipolis, Nice, France
| | - Yann Etienne
- Laboratoire de Psychologie Cognitive (CNRS UMR 7290), Aix-Marseille Université, Marseille, France
| |
Collapse
|
42
|
Smith CM, Federmeier KD. Neural Signatures of Learning Novel Object-Scene Associations. J Cogn Neurosci 2020; 32:783-803. [PMID: 31933437 DOI: 10.1162/jocn_a_01530] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Objects are perceived within rich visual contexts, and statistical associations may be exploited to facilitate their rapid recognition. Recent work using natural scene-object associations suggests that scenes can prime the visual form of associated objects, but it remains unknown whether this relies on an extended learning process. We asked participants to learn categorically structured associations between novel objects and scenes in a paired associate memory task while ERPs were recorded. In the test phase, scenes were first presented (2500 msec), followed by objects that matched or mismatched the scene; degree of contextual mismatch was manipulated along visual and categorical dimensions. Matching objects elicited a reduced N300 response, suggesting visuostructural priming based on recently formed associations. Amplitude of an extended positivity (onset ∼200 msec) was sensitive to visual distance between the presented object and the contextually associated target object, most likely indexing visual template matching. Results suggest recent associative memories may be rapidly recruited to facilitate object recognition in a top-down fashion, with clinical implications for populations with impairments in hippocampal-dependent memory and executive function.
Collapse
|
43
|
Kaiser D, Häberle G, Cichy RM. Cortical sensitivity to natural scene structure. Hum Brain Mapp 2019; 41:1286-1295. [PMID: 31758632 PMCID: PMC7267931 DOI: 10.1002/hbm.24875] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2019] [Revised: 11/07/2019] [Accepted: 11/07/2019] [Indexed: 11/23/2022] Open
Abstract
Natural scenes are inherently structured, with meaningful objects appearing in predictable locations. Human vision is tuned to this structure: When scene structure is purposefully jumbled, perception is strongly impaired. Here, we tested how such perceptual effects are reflected in neural sensitivity to scene structure. During separate fMRI and EEG experiments, participants passively viewed scenes whose spatial structure (i.e., the position of scene parts) and categorical structure (i.e., the content of scene parts) could be intact or jumbled. Using multivariate decoding, we show that spatial (but not categorical) scene structure profoundly impacts on cortical processing: Scene‐selective responses in occipital and parahippocampal cortices (fMRI) and after 255 ms (EEG) accurately differentiated between spatially intact and jumbled scenes. Importantly, this differentiation was more pronounced for upright than for inverted scenes, indicating genuine sensitivity to spatial structure rather than sensitivity to low‐level attributes. Our findings suggest that visual scene analysis is tightly linked to the spatial structure of our natural environments. This link between cortical processing and scene structure may be crucial for rapidly parsing naturalistic visual inputs.
Collapse
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, UK.,Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Greta Häberle
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.,Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany.,Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany.,Einstein Center for Neurosciences Berlin, Humboldt-Universität Berlin, Berlin, Germany.,Berlin School of Mind and Brain, Humboldt-Universität Berlin, Berlin, Germany.,Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität Berlin, Berlin, Germany
| |
Collapse
|
44
|
Võ MLH, Boettcher SEP, Draschkow D. Reading scenes: how scene grammar guides attention and aids perception in real-world environments. Curr Opin Psychol 2019; 29:205-210. [DOI: 10.1016/j.copsyc.2019.03.009] [Citation(s) in RCA: 80] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Revised: 03/07/2019] [Accepted: 03/13/2019] [Indexed: 11/30/2022]
|
45
|
Jacob MS, Ford JM, Roach BJ, Calhoun VD, Mathalon DH. Aberrant activity in conceptual networks underlies N400 deficits and unusual thoughts in schizophrenia. Neuroimage Clin 2019; 24:101960. [PMID: 31398555 PMCID: PMC6699247 DOI: 10.1016/j.nicl.2019.101960] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2019] [Revised: 06/25/2019] [Accepted: 07/21/2019] [Indexed: 11/22/2022]
Abstract
BACKGROUND The N400 event-related potential (ERP) is triggered by meaningful stimuli that are incongruous, or unmatched, with their semantic context. Functional magnetic resonance imaging (fMRI) studies have identified brain regions activated by semantic incongruity, but their precise links to the N400 ERP are unclear. In schizophrenia (SZ), N400 amplitude reduction is thought to reflect overly broad associations in semantic networks, but the abnormalities in brain networks underlying deficient N400 remain unknown. We utilized joint independent component analysis (JICA) to link temporal patterns in ERPs to neuroanatomical patterns from fMRI and investigate relationships between N400 amplitude and neuroanatomical activation in SZ patients and healthy controls (HC). METHODS SZ patients (n = 24) and HC participants (n = 25) performed a picture-word matching task, in which words were either matched (APPLE→apple) by preceding pictures, or were unmatched by semantically related (in-category; IC, APPLE→lemon) or unrelated (out of category; OC, APPLE→cow) pictures, in separate ERP and fMRI sessions. A JICA "data fusion" analysis was conducted to identify the fMRI brain regions specifically associated with the ERP N400 component. SZ and HC loading weights were compared and correlations with clinical symptoms were assessed. RESULTS JICA identified an ERP-fMRI "fused" component that captured the N400, with loading weights that were reduced in SZ. The JICA map for the IC condition showed peaks of activation in the cingulate, precuneus, bilateral temporal poles and cerebellum, whereas the JICA map from the OC condition was linked primarily to visual cortical activation and the left temporal pole. Among SZ patients, fMRI activity from the IC condition was inversely correlated with unusual thought content. CONCLUSIONS The neural networks associated with the N400 ERP response to semantic violations depend on conceptual relatedness. These findings are consistent with a distributed network underlying neural responses to semantic incongruity including unimodal visual areas as well as integrative, transmodal areas. Unusual thoughts in SZ may reflect impaired processing in transmodal hub regions such as the precuneus, leading to overly broad semantic associations.
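Joint ICA concatenates each participant's ERP feature vector and fMRI contrast map into one row and decomposes the stacked matrix so that both modalities share a single set of subject loading weights. The sketch below uses scikit-learn's FastICA on random placeholder data; the original work used a dedicated fusion toolbox, and the component count, feature sizes, and normalization here are arbitrary assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_subjects = 49
erp = rng.standard_normal((n_subjects, 300))     # one ERP time course per subject (placeholder)
fmri = rng.standard_normal((n_subjects, 5000))   # one flattened fMRI contrast map per subject (placeholder)

def zscore(a):
    return (a - a.mean()) / a.std()

# Normalize each modality so neither dominates, then concatenate along the feature axis
joint = np.hstack([zscore(erp), zscore(fmri)])   # (subjects, erp_features + fmri_features)

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
patterns = ica.fit_transform(joint.T)            # (features, components): joint ERP/fMRI source patterns
loadings = ica.mixing_                           # (subjects, components): shared loading weights

erp_part, fmri_part = patterns[:300, :], patterns[300:, :]
# Group analyses (e.g., patients vs. controls) would then compare columns of `loadings`.
```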
Collapse
Affiliation(s)
- Michael S Jacob
- San Francisco VA Medical Center, 4150 Clement St, San Francisco, CA 94110, United States; University of California, Department of Psychiatry, 401 Parnassus Avenue, San Francisco, CA 94143, United States.
| | - Judith M Ford
- San Francisco VA Medical Center, 4150 Clement St, San Francisco, CA 94110, United States; University of California, Department of Psychiatry, 401 Parnassus Avenue, San Francisco, CA 94143, United States.
| | - Brian J Roach
- San Francisco VA Medical Center, 4150 Clement St, San Francisco, CA 94110, United States.
| | - Vince D Calhoun
- The Mind Research Network, 1101 Yale Blvd. NE, Albuquerque, NM 87106, United States; The University of New Mexico, 1 University of New Mexico, Albuquerque, NM 87108, United States.
| | - Daniel H Mathalon
- San Francisco VA Medical Center, 4150 Clement St, San Francisco, CA 94110, United States; University of California, Department of Psychiatry, 401 Parnassus Avenue, San Francisco, CA 94143, United States.
| |
Collapse
|
46
|
Art looks different - Semantic and syntactic processing of paintings and associated neurophysiological brain responses. Brain Cogn 2019; 134:58-66. [PMID: 31151085 DOI: 10.1016/j.bandc.2019.05.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 05/18/2019] [Accepted: 05/21/2019] [Indexed: 11/20/2022]
Abstract
The concepts of semantics (meaning) and syntax (structure) seem to be integral to how humans perceive and order their environment. Processing natural scenes with semantic or syntactic inconsistencies evokes distinct event-related potentials (ERPs) in the N300/400 and P600 ranges, respectively (Vo & Wolfe, 2013). Artworks, however, can by definition use violations of natural relationships as a means of style, especially in surrealist art. To test whether inconsistencies are processed differently in artworks, we presented participants with surrealist paintings containing semantic or syntactic inconsistencies, edited versions without inconsistencies, and, as a control, real photographic versions of each painting. Photographs elicited more pronounced negative ERP amplitudes than paintings in all time windows (N300, N400, and P600). However, the lack of an interaction between image type and inconsistency type indicates that all presented images were processed as artworks, probably due to context effects. The ERPs were largely opposite to those reported previously for everyday-life pictures, with syntactic inconsistencies driving the earlier components and eliciting higher amplitudes than semantic ones in the N400, and semantic inconsistencies eliciting a higher amplitude in the P600. We conclude that viewing artworks engages a specific processing mode, entailing syntactic and semantic expectations different from those for natural scenes.
Collapse
|
47
|
Debruille JB, Touzel M, Segal J, Snidal C, Renoult L. A Central Component of the N1 Event-Related Brain Potential Could Index the Early and Automatic Inhibition of the Actions Systematically Activated by Objects. Front Behav Neurosci 2019; 13:95. [PMID: 31139060 PMCID: PMC6517799 DOI: 10.3389/fnbeh.2019.00095] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Accepted: 04/17/2019] [Indexed: 11/13/2022] Open
Abstract
Stimuli of the environment, like objects, systematically activate the actions they are associated with. These activations occur extremely fast. Nevertheless, behavioral data reveal that, in most cases, these activations are then automatically inhibited, around 100 ms after the occurrence of the stimulus. We thus tested whether this early inhibition could be indexed by a central component of the N1 event-related brain potential (ERP). To achieve that goal, we looked at whether this ERP component is larger in tasks that could increase the inhibition and in trials where reaction times (RTs) happen to be long. The illumination of a real space bar of a keyboard out of the dark was used as a stimulus. To maximize the modulation of the inhibition, the task participants had to perform was manipulated across blocks. A look-only task and a count task were used to increase inhibition and an immediate press task was used to decrease it. ERPs of the two block-conditions where presses had to be prevented and where the largest central N1s were predicted were compared to those elicited in the press task, differentiating the ERPs to the third of the trials where presses were the slowest from the ERPs to the third of the trials with the fastest presses. Despite larger negativities due to lateralized readiness potentials (LRPs) and despite greater attention likely in immediate press-trials, central N1s were found to be minimal for the fastest presses, intermediate for the slowest ones and maximal for the two no-press conditions. These results thus provide strong support for the idea that the central N1 indexes an early and short-lasting automatic inhibition of the actions systematically activated by objects. They also confirm that the strength of this automatic inhibition spontaneously fluctuates across trials and tasks. On the other hand, just before N1s, parietal P1s were found to be larger for the fastest presses. They might thus index the initial activation of these actions. Finally, consistent with the idea that N300s index late inhibition processes that occur preferentially when the task requires them, these ERPs were nearly absent for fast-press trials and much larger in the three other conditions.
Collapse
Affiliation(s)
- J. Bruno Debruille
- Department of Neuroscience, McGill University, Montreal, QC, Canada
- Department of Psychiatry, McGill University, Montreal, QC, Canada
| | - Molly Touzel
- Department of Neuroscience, McGill University, Montreal, QC, Canada
| | - Julia Segal
- Department of Neuroscience, McGill University, Montreal, QC, Canada
| | - Christine Snidal
- Department of Neuroscience, McGill University, Montreal, QC, Canada
| | - Louis Renoult
- School of Psychology, University of East Anglia, Norwich, United Kingdom
| |
Collapse
|
48
|
Cohn N. Your Brain on Comics: A Cognitive Model of Visual Narrative Comprehension. Top Cogn Sci 2019; 12:352-386. [PMID: 30963724 PMCID: PMC9328425 DOI: 10.1111/tops.12421] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 01/21/2019] [Accepted: 03/18/2019] [Indexed: 11/30/2022]
Abstract
The past decade has seen a rapid growth of cognitive and brain research focused on visual narratives like comics and picture stories. This paper will summarize and integrate this emerging literature into the Parallel Interfacing Narrative‐Semantics Model (PINS Model)—a theory of sequential image processing characterized by an interaction between two representational levels: semantics and narrative structure. Ongoing semantic processes build meaning into an evolving mental model of a visual discourse. Updating of spatial, referential, and event information then incurs costs when they are discontinuous with the growing context. In parallel, a narrative structure organizes semantic information into coherent sequences by assigning images to categorical roles, which are then embedded within a hierarchic constituent structure. Narrative constructional schemas allow for specific predictions of structural sequencing, independent of semantics. Together, these interacting levels of representation engage in an iterative process of retrieval of semantic and narrative information, prediction of upcoming information based on those assessments, and subsequent updating based on discontinuity. These core mechanisms are argued to be domain‐general—spanning across expressive systems—as suggested by similar electrophysiological brain responses (N400, P600, anterior negativities) generated in response to manipulation of sequential images, music, and language. Such similarities between visual narratives and other domains thus pose fundamental questions for the linguistic and cognitive sciences. Visual narratives like comics involve a range of complex cognitive operations in order to be understood. The Parallel Interfacing Narrative‐Semantics (PINS) Model integrates an emerging literature showing that comprehension of wordless image sequences balances two representational levels of semantic and narrative structure. The neurocognitive mechanisms that guide these processes are argued to overlap with other domains, such as language and music.
Collapse
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg University
| |
Collapse
|
49
|
Faivre N, Dubois J, Schwartz N, Mudrik L. Imaging object-scene relations processing in visible and invisible natural scenes. Sci Rep 2019; 9:4567. [PMID: 30872607 PMCID: PMC6418099 DOI: 10.1038/s41598-019-38654-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2018] [Accepted: 12/13/2018] [Indexed: 11/17/2022] Open
Abstract
Integrating objects with their context is a key step in interpreting complex visual scenes. Here, we used functional Magnetic Resonance Imaging (fMRI) while participants viewed visual scenes depicting a person performing an action with an object that was either congruent or incongruent with the scene. Univariate and multivariate analyses revealed different activity for congruent vs. incongruent scenes in the lateral occipital complex, inferior temporal cortex, parahippocampal cortex, and prefrontal cortex. Importantly, and in contrast to previous studies, these activations could not be explained by task-induced conflict. A secondary goal of this study was to examine whether processing of object-context relations could occur in the absence of awareness. We found no evidence for brain activity differentiating between congruent and incongruent invisible masked scenes, which might reflect a genuine lack of activation, or stem from the limitations of our study. Overall, our results provide novel support for the roles of parahippocampal cortex and frontal areas in conscious processing of object-context relations, which cannot be explained by either low-level differences or task demands. Yet they further suggest that brain activity is decreased by visual masking to the point of becoming undetectable with our fMRI protocol.
Collapse
Affiliation(s)
- Nathan Faivre
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA. .,Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland. .,Centre d'Economie de la Sorbonne, CNRS UMR 8174, Paris, France.
| | - Julien Dubois
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA, USA.,Department of Neurosurgery, Cedars Sinai Medical Center, Los Angeles, CA, USA
| | - Naama Schwartz
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA.,School of Psychological sciences, Tel Aviv University, Tel Aviv, Israel
| | - Liad Mudrik
- Division of Biology, California Institute of Technology, Pasadena, CA, 91125, USA. .,School of Psychological sciences, Tel Aviv University, Tel Aviv, Israel. .,Sagol school of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
| |
Collapse
|
50
|
Schendan HE. Memory influences visual cognition across multiple functional states of interactive cortical dynamics. PSYCHOLOGY OF LEARNING AND MOTIVATION 2019. [DOI: 10.1016/bs.plm.2019.07.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
|