1
Damiano C, Leemans M, Wagemans J. Exploring the Semantic-Inconsistency Effect in Scenes Using a Continuous Measure of Linguistic-Semantic Similarity. Psychol Sci 2024;35:623-634. PMID: 38652604. DOI: 10.1177/09567976241238217.
Abstract
Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations, so they tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene better than others do, yet semantic inconsistencies have hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we studied the semantic-inconsistency effect in a continuous manner, using the linguistic-semantic similarity of an object to the scene category and to the other objects in the scene. We found that both highly consistent and highly inconsistent objects are viewed more than other objects (a U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
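The continuous measure described in this abstract can be illustrated with a minimal sketch: if object and scene labels are represented as word-embedding vectors, linguistic-semantic similarity is typically their cosine. The toy 4-dimensional vectors below are hypothetical placeholders, not the study's embeddings; real analyses use high-dimensional vectors from a trained word-embedding model.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings for illustration only.
scene = {"kitchen": [0.9, 0.1, 0.3, 0.2]}
objects = {
    "toaster":   [0.8, 0.2, 0.4, 0.1],  # highly consistent with "kitchen"
    "lamp":      [0.4, 0.5, 0.2, 0.6],  # moderately consistent
    "snowboard": [0.1, 0.9, 0.0, 0.7],  # highly inconsistent
}

for name, vec in objects.items():
    print(f"{name}: {cosine_similarity(scene['kitchen'], vec):.3f}")
```

A graded score like this lets consistency enter a statistical model as a continuous predictor, which is what makes the reported U-shaped relationship detectable at all.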
Affiliation(s)
- Claudia Damiano
- Department of Psychology, University of Toronto
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Maarten Leemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
- Johan Wagemans
- Laboratory of Experimental Psychology, Department of Brain and Cognition, KU Leuven
2
Peacock CE, Hall EH, Henderson JM. Objects are selected for attention based upon meaning during passive scene viewing. Psychon Bull Rev 2023;30:1874-1886. PMID: 37095319. PMCID: PMC11164276. DOI: 10.3758/s13423-023-02286-2.
Abstract
Although object meaning has been shown to guide attention during active scene viewing, and object salience to guide attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or to salience. To answer these questions, we used a mixed-modeling approach in which we computed the average meaning and physical salience of objects in scenes while statistically controlling for object size and eccentricity. Using eye-movement data from aesthetic-judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning than on low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning than to low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that, during passive scene viewing, objects are selected for attention partly on the basis of their meaning.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- Elizabeth H Hall
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
3
Isasi-Isasmendi A, Andrews C, Flecken M, Laka I, Daum MM, Meyer M, Bickel B, Sauppe S. The Agent Preference in Visual Event Apprehension. Open Mind (Camb) 2023;7:240-282. PMID: 37416075. PMCID: PMC10320828. DOI: 10.1162/opmi_a_00083.
Abstract
A central aspect of human experience and communication is understanding events in terms of agent ("doer") and patient ("undergoer" of action) roles. These event roles are rooted in general cognition and prominently encoded in language, with agents appearing as more salient and preferred over patients. An unresolved question is whether this preference for agents already operates during apprehension, that is, the earliest stage of event processing, and if so, whether the effect persists across different animacy configurations and task demands. Here we contrast event apprehension in two tasks and two languages that encode agents differently: Basque, a language that explicitly case-marks agents ('ergative'), and Spanish, which does not mark agents. In two brief-exposure experiments, native Basque and Spanish speakers saw pictures for only 300 ms and subsequently described them or answered probe questions about them. We compared eye fixations and behavioral correlates of event-role extraction with Bayesian regression. Agents received more attention and were recognized better across languages and tasks. At the same time, language and task demands affected the attention to agents. Our findings show that a general preference for agents exists in event apprehension, but it can be modulated by task and language demands.
Affiliation(s)
- Arrate Isasi-Isasmendi
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Caroline Andrews
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Monique Flecken
- Department of Linguistics, Amsterdam Centre for Language and Communication, University of Amsterdam, Amsterdam, The Netherlands
- Itziar Laka
- Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Leioa, Spain
- Moritz M. Daum
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Cognitive Psychology Unit, University of Klagenfurt, Klagenfurt, Austria
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
4
Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023;76:632-648. PMID: 35510885. PMCID: PMC11132926. DOI: 10.1177/17470218221101334.
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps, which represent the spatial distribution of semantic informativeness in scenes, and salience maps, which represent the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object-search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, in both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely than slower initial saccades to be directed to higher-salience regions, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrated that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
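The "percent of theoretical ceiling" figures reported in this abstract normalize a model's explained variance by a noise ceiling (the maximum variance any model could explain given measurement noise or inter-observer variability). A minimal sketch of that arithmetic, with hypothetical R-squared values rather than the study's estimates:

```python
def proportion_of_ceiling(r2_model: float, r2_ceiling: float) -> float:
    """Fraction of the explainable (ceiling) variance that a model accounts for."""
    if not 0.0 < r2_ceiling <= 1.0:
        raise ValueError("ceiling R^2 must be in (0, 1]")
    return r2_model / r2_ceiling

# Hypothetical values: a meaning map explains R^2 = 0.40 of fixation-density
# variance, while the noise ceiling caps explainable variance at R^2 = 0.69.
print(f"{proportion_of_ceiling(0.40, 0.69):.0%}")  # prints 58%
```

Dividing by the ceiling rather than reporting raw R-squared makes models comparable across datasets with different amounts of irreducible noise.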
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Praveena Singh
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
5
Rehrig G, Hayes TR, Henderson JM, Ferreira F. Visual attention during seeing for speaking in healthy aging. Psychol Aging 2023;38:49-66. PMID: 36395016. PMCID: PMC10021028. DOI: 10.1037/pag0000718.
Abstract
As we age, we accumulate a wealth of information about the surrounding world. Evidence from visual search suggests that older adults retain intact knowledge of where objects tend to occur in everyday environments (semantic information), which allows them to successfully locate objects in scenes, but that they may overrely on semantic guidance. We investigated age differences in the allocation of attention to semantically informative and visually salient information in a task in which the eye movements of younger (N = 30, aged 18-24) and older (N = 30, aged 66-82) adults were tracked as they described real-world scenes. We measured the semantic information in scenes based on "meaning map" ratings from a norming sample of young and older adults, and image salience as graph-based visual saliency. Logistic mixed-effects modeling was used to determine whether, controlling for center bias, fixated scene locations differed in semantic informativeness and visual salience from locations that were not fixated, and whether these effects differed for young and older adults. Semantic informativeness predicted fixated locations well overall, as did image salience, although unique variance in the model was better explained by semantic informativeness than by image salience. Older adults were less likely than young adults to fixate informative locations in scenes, though the locations older adults fixated were nevertheless well predicted by informativeness. These results suggest that young and older adults both use semantic information to guide attention in scenes and that older adults do not overrely on semantic information across the board.
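The logistic mixed-effects analysis described above can be caricatured in a few lines: the probability that a scene location is fixated is modeled as a logistic function of its semantic informativeness, image salience, and distance from the scene center (the center-bias control). All coefficients below are made-up illustrations, not the paper's estimates, and predictors are assumed to be scaled to [0, 1].

```python
import math

def fixation_probability(meaning, salience, center_dist,
                         b0=-1.0, b_meaning=2.0, b_salience=0.5, b_center=-1.5):
    """Toy logistic model of whether a scene location is fixated.
    Coefficients are illustrative placeholders, not fitted estimates."""
    logit = (b0 + b_meaning * meaning + b_salience * salience
             + b_center * center_dist)
    return 1.0 / (1.0 + math.exp(-logit))

# A semantically rich location near the center vs. a salient but
# meaning-poor location in the periphery.
p_meaningful = fixation_probability(meaning=0.9, salience=0.3, center_dist=0.2)
p_salient = fixation_probability(meaning=0.2, salience=0.9, center_dist=0.8)
print(f"meaningful: {p_meaningful:.2f}, salient: {p_salient:.2f}")
```

In the actual study the model additionally includes random effects for participants and scenes, which is what makes it a *mixed*-effects model; this sketch shows only the fixed-effects part.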
Affiliation(s)
- John M. Henderson
- Department of Psychology, University of California, Davis
- Center for Mind and Brain, University of California, Davis
6
Rehrig G, Barker M, Peacock CE, Hayes TR, Henderson JM, Ferreira F. Look at what I can do: Object affordances guide visual attention while speakers describe potential actions. Atten Percept Psychophys 2022;84:1583-1610. PMID: 35484443. PMCID: PMC9246959. DOI: 10.3758/s13414-022-02467-6.
Abstract
As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. In previous work, objects that afford grasping interactions influenced attention when static scenes depicted reachable spaces, and attention was otherwise better explained by general informativeness. Because grasping is but one of many object interactions, previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances versus broadly construed semantic information in scenes as speakers described or memorized scenes. In addition to meaning and grasp maps, which capture informativeness and grasping affordances in scenes, respectively, we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of five eye-tracking experiments, we found that meaning predicted fixated locations in a general description task and during scene memorization. Grasp maps marginally predicted fixated locations during action description for scenes that depicted reachable spaces only. Interact maps predicted fixated regions in description experiments alone. Our findings suggest that observers allocate attention to scene regions that could be readily interacted with when talking about the scene, while general informativeness preferentially guides attention when the task does not encourage careful consideration of objects in the scene. The current study suggests that the influence of object affordances on visual attention in scenes is mediated by task demands.
Affiliation(s)
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA
- Madison Barker
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA
- Candace E Peacock
- Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- John M Henderson
- Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Fernanda Ferreira
- Department of Psychology, University of California, Davis, Davis, CA, 95616, USA
7
Ivanova AA, Mineroff Z, Zimmerer V, Kanwisher N, Varley R, Fedorenko E. The Language Network Is Recruited but Not Required for Nonverbal Event Semantics. Neurobiol Lang (Camb) 2021;2:176-201. PMID: 37216147. PMCID: PMC10158592. DOI: 10.1162/nol_a_00030.
Abstract
The ability to combine individual concepts of objects, properties, and actions into complex representations of the world is often associated with language. Yet combinatorial event-level representations can also be constructed from nonverbal input, such as visual scenes. Here, we test whether the language network in the human brain is involved in and necessary for semantic processing of events presented nonverbally. In Experiment 1, we scanned participants with fMRI while they performed a semantic plausibility judgment task versus a difficult perceptual control task on sentences and line drawings that describe/depict simple agent-patient interactions. We found that the language network responded robustly during the semantic task performed on both sentences and pictures (although its response to sentences was stronger). Thus, language regions in healthy adults are engaged during a semantic task performed on pictorial depictions of events. But is this engagement necessary? In Experiment 2, we tested two individuals with global aphasia, who have sustained massive damage to perisylvian language areas and display severe language difficulties, against a group of age-matched control participants. Individuals with aphasia were severely impaired on the task of matching sentences to pictures. However, they performed close to controls in assessing the plausibility of pictorial depictions of agent-patient interactions. Overall, our results indicate that the left frontotemporal language network is recruited but not necessary for semantic processing of nonverbally presented events.
Affiliation(s)
- Anna A. Ivanova
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Zachary Mineroff
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Vitor Zimmerer
- Division of Psychology and Language Sciences, University College London, London, UK
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rosemary Varley
- Division of Psychology and Language Sciences, University College London, London, UK
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
8
Speaking for seeing: Sentence structure guides visual event apprehension. Cognition 2020;206:104516. PMID: 33228969. DOI: 10.1016/j.cognition.2020.104516.
Abstract
Human experience and communication are centred on events, and event apprehension is a rapid process that draws on the visual perception and immediate categorization of event roles ("who does what to whom"). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or the patient was activated during speaking, transiently modulating the apprehension of subsequently viewed unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on briefly presented (300 ms) pictures of unrelated events, viewed immediately afterward, were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.
9
Henderson JM. Meaning and attention in scenes. Psychology of Learning and Motivation 2020. DOI: 10.1016/bs.plm.2020.08.002.