1
Kisker J, Johnsdorf M, Sagehorn M, Hofmann T, Gruber T, Schöne B. Visual information processing of 2D, virtual 3D and real-world objects marked by theta band responses: Visuospatial processing and cognitive load as a function of modality. Eur J Neurosci 2025; 61:e16634. [PMID: 39648815 DOI: 10.1111/ejn.16634]
Abstract
While pictures share global similarities with the real-world objects they depict, the latter have unique characteristics going beyond 2D representations. Due to its three-dimensional presentation mode, Virtual Reality (VR) is increasingly used to further approach real-world visual processing, yet it remains unresolved to what extent VR yields processes comparable to real-world processes. Consequently, our study examined visuospatial processing by a triangular comparison of 2D objects, virtual 3D objects and real 3D objects. The theta band response (TBR) was analysed as an electrophysiological correlate of visual processing, allowing for the differentiation of predominantly stimulus-driven processes mirrored in the evoked response and internal, complex processing reflected in the induced response. Our results indicate that the differences between conditions go beyond a binary division into 2D and 3D materials and are based on further sensory features: The evoked posterior TBR differentiated between all conditions but revealed fewer differences between processing of real-world and VR objects. Moreover, the induced midfrontal TBR indicated higher cognitive load for 2D objects compared to VR and real-world objects, while no difference was revealed between the latter two conditions. In conclusion, our results demonstrate that the transferability of 2D- and VR-based findings to real-world processes depends to some degree on whether predominantly sensory stimulus features or higher cognitive processes are examined. Although VR and real-world processes are not to be equated based on our results, their comparison yielded fewer significant differences relative to the PC condition, supporting the use of VR to examine visuospatial processing.
Affiliation(s)
- Joanna Kisker
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Marike Johnsdorf
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Merle Sagehorn
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Thomas Hofmann
- Industrial Design, Engineering and Computer Science, University of Applied Sciences Osnabrück, Osnabrück, Germany
- Thomas Gruber
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Benjamin Schöne
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
2
Zhang Y, Wu X, Zheng C, Zhao Y, Gao J, Deng Z, Zhang X, Chen J. Effects of Vergence Eye Movement Planning on Size Perception and Early Visual Processing. J Cogn Neurosci 2024; 36:2793-2806. [PMID: 38940732 DOI: 10.1162/jocn_a_02207]
Abstract
Our perception of objects depends on non-oculomotor depth cues, such as pictorial distance cues and binocular disparity, and oculomotor depth cues, such as vergence and accommodation. Although vergence eye movements are always involved in perceiving real distance, previous studies have mainly focused on the effect of oculomotor state via "proprioception" on distance and size perception. It remains unclear whether the oculomotor command of vergence eye movement would also influence visual processing. To address this question, we placed a light at 28.5 cm and a screen for stimulus presentation at 57 cm from the participants. In the NoDivergence condition, participants were asked to maintain fixation on the light regardless of stimulus presentation throughout the trial. In the WithDivergence condition, participants were instructed to initially maintain fixation on the near light and then turn their two eyes outward to look at the stimulus on the far screen. The stimulus was presented for 100 msec, entirely within the preparation stage of the divergence eye movement. We found that participants perceived the stimulus as larger but were less sensitive to stimulus sizes in the WithDivergence condition than in the NoDivergence condition. The earliest visual evoked component C1 (peak latency 80 msec), which varied with stimulus size in the NoDivergence condition, showed similar amplitudes for larger and smaller stimuli in the WithDivergence condition. These results show that vergence eye movement planning affects the earliest visual processing and size perception, and demonstrate an example of the effect of motor command on sensory processing.
Affiliation(s)
- Jie Gao
- South China Normal University
3
Jacobs OL, Andrinopoulos K, Steeves JKE, Kingstone A. Sex differences persist in visuospatial mental rotation under 3D VR conditions. PLoS One 2024; 19:e0314270. [PMID: 39585859 PMCID: PMC11588239 DOI: 10.1371/journal.pone.0314270]
Abstract
The classic Vandenberg and Kuse Mental Rotations Test (MRT) shows a male advantage for visuospatial rotation. However, studies using MRTs adapted for real or physical objects have found that sex differences are reduced or abolished. Previous work has also suggested that virtual 3D objects will eliminate sex differences, although this has not been demonstrated in a purely visuospatial paradigm without motor input. In the present study, we sought to examine potential sex differences in mental rotation using a fully immersive 3D VR adaptation of the original MRT that is purely visuospatial in nature. With unlimited time, 23 females and 23 males completed a VR MRT designed to approximate the original Vandenberg and Kuse stimuli. Despite the immersive VR experience and lack of time pressure, we found a large male performance advantage in response accuracy, exceeding what has typically been reported for 2D MRTs. No sex differences were observed in response time. Thus, a male advantage in pure mental rotation for 2D stimuli can extend to 3D objects in VR, even when there are no time constraints.
Affiliation(s)
- Oliver L. Jacobs
- Department of Psychology, University of British Columbia, Vancouver, Canada
- Katerina Andrinopoulos
- Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Jennifer K. E. Steeves
- Department of Psychology and Centre for Vision Research, York University, Toronto, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, Canada
4
Vigliocco G, Convertino L, De Felice S, Gregorians L, Kewenig V, Mueller MAE, Veselic S, Musolesi M, Hudson-Smith A, Tyler N, Flouri E, Spiers HJ. Ecological brain: reframing the study of human behaviour and cognition. R Soc Open Sci 2024; 11:240762. [PMID: 39525361 PMCID: PMC11544371 DOI: 10.1098/rsos.240762]
Abstract
The last decade has seen substantial advances in the capacity to record behaviour and neural activity in humans in real-world settings, to simulate real-world situations in laboratory settings and to apply sophisticated analyses to large-scale data. Along with these developments, a growing number of groups have begun to advocate for real-world neuroscience and cognitive science. Here, we review the arguments and the available methods for real-world research and outline an overarching framework that embeds key ideas proposed in the literature, integrating them into a cyclic process of 'bringing the lab to the real world' (recording behavioural and neural activity in real-world settings) and 'bringing the real world to the lab' (manipulating in the laboratory the environments in which behaviours occur). This cyclic process combines exploratory and confirmatory research and is interdisciplinary, including those sciences concerned with the natural, built or virtual environment. We highlight the benefits brought by this framework, emphasizing the greater potential for novel discovery, theory development and human-centred applications to the environment.
Affiliation(s)
- Gabriella Vigliocco
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Experimental Psychology, University College London, London, UK
- Laura Convertino
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Institute for Cognitive Neuroscience, University College London, London, UK
- Sara De Felice
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Institute for Cognitive Neuroscience, University College London, London, UK
- Lara Gregorians
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Experimental Psychology, University College London, London, UK
- Viktor Kewenig
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Experimental Psychology, University College London, London, UK
- Marie A. E. Mueller
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Division of Psychiatry, University College London, London, UK
- Sebastijan Veselic
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Institute of Neurology, University College London, London, UK
- Mirco Musolesi
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Computer Science, University College London, London, UK
- Andrew Hudson-Smith
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Centre for Advanced Spatial Analysis, University College London, London, UK
- Nicholas Tyler
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Civil, Environmental and Geomatic Engineering, University College London, London, UK
- Eirini Flouri
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Institute of Education, University College London, London, UK
- Hugo J. Spiers
- Leverhulme Doctoral Training Programme for the Ecological Study of the Brain, University College London, London, UK
- Experimental Psychology, University College London, London, UK
5
Deng Z, Gao J, Li T, Chen Y, Gao B, Fang F, Culham JC, Chen J. Viewpoint adaptation revealed potential representational differences between 2D images and 3D objects. Cognition 2024; 251:105903. [PMID: 39126975 DOI: 10.1016/j.cognition.2024.105903]
Abstract
For convenience and experimental control, cognitive science has relied largely on images as stimuli rather than the real, tangible objects encountered in the real world. Recent evidence suggests that the cognitive processing of images may differ from that of real objects, especially in the processing of spatial locations and actions, thought to be mediated by the dorsal visual stream. Perceptual and semantic processing in the ventral visual stream, however, has been assumed to be largely unaffected by the realism of objects. Several studies have found that one key factor accounting for differences between real objects and images is actability; however, less research has investigated another potential difference: the three-dimensional nature of real objects as conveyed by cues like binocular disparity. To investigate the extent to which perception is affected by the realism of a stimulus, we compared viewpoint adaptation when stimuli (a face or a kettle) were 2D (flat images without binocular disparity) vs. 3D (i.e., real, tangible objects or stereoscopic images with binocular disparity). For both faces and kettles, adaptation to 3D stimuli induced stronger viewpoint aftereffects than adaptation to 2D images when the adapting orientation was rightward. A computational model suggested that the difference in aftereffects could be explained by broader viewpoint tuning for 3D compared to 2D stimuli. Overall, our findings narrow the gap between understanding the neural processing of visual images and real-world objects by suggesting that compared to 2D images, real and simulated 3D objects evoke more broadly tuned neural representations, which may result in stronger viewpoint invariance.
Affiliation(s)
- Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province 510631, China
- Jie Gao
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province 510631, China
- Toni Li
- Division of Emergency Medicine, Department of Medicine, University of Toronto, Toronto M5S 3H2, Canada
- Yan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province 510631, China
- BoYu Gao
- College of Information Science and Technology/Cyber Security, Jinan University, Guangzhou 510632, China
- Fang Fang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100871, People's Republic of China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Jody C Culham
- Department of Psychology, The University of Western Ontario, London, ON N6A 5C2, Canada
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province 510631, China; Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, China.
6
Jetly K, Ismail A, Hassan N, Mohammed Nawi A. Mechanism Linking Cigarette Pack Factors, Point-of-Sale Marketing and Individual Factors With Smoking Intention Among School-Going Adolescents. J Public Health Manag Pract 2024:00124784-990000000-00335. [PMID: 39236215 DOI: 10.1097/phh.0000000000001960]
Abstract
CONTEXT: Tobacco use is one of the most preventable causes of death. OBJECTIVE: This study aimed to explore mechanisms linking cigarette pack factors, point-of-sale marketing, and individual factors (psychological reactant trait) to predict smoking intention among school-going adolescents. DESIGN, SETTING, AND PARTICIPANTS: This was a cross-sectional study conducted in 6 urban secondary schools. A pretested and validated self-administered questionnaire was used. Data analysis for structural equation modeling was done using SMART-PLS v3.2.8. MAIN OUTCOME MEASURE: The main outcome measure was the direct and indirect effects of cigarette pack factors, point-of-sale marketing, and individual factors (psychological reactant trait) on smoking intention among school-going adolescents in a theory-based model. RESULTS: A total of 386 adolescents fulfilling the inclusion criteria participated. Pictorial warning message reactance (β = .153, P ≤ .001), pack receptivity of the conventional pack (β = .297, P = .004), and psychological reactant trait (β = .174, P ≤ .001) were positively related to smoking intention. Pictorial warning negative affect (β = -.153, P = .001) was negatively related to smoking intention. The psychological reactant trait was positively related to message reactance (β = .340, P ≤ .001). However, recall exposure to point-of-sale marketing and pack appraisal of the conventional pack were not positively related to smoking intention (β = .038, P = .215 and β = -.026, P = .39, respectively). Pictorial warning message reactance also positively mediated the relationship between psychological reactant trait and smoking intention (β = 0.05, P = .001). The model has strong predictive power. CONCLUSION: Cigarette pack factors and psychological reactant traits are essential in predicting smoking intention. Hence, policymakers should consider these factors in developing smoking policies.
Affiliation(s)
- Kavita Jetly
- Author Affiliations: Herbal Medicine Research Centre, Institute of Medical Research, National Institutes of Health, Shah Alam, Selangor, Malaysia (Dr Jetly); Department of Public Health Medicine, Faculty of Medicine, Universiti Kebangsaan Malaysia, Cheras, Kuala Lumpur, Malaysia (Dr Ismail and Nawi); Non-Communicable Disease Section, Disease Control Division (NCD), Ministry of Health Malaysia, Wilayah Persekutuan Putrajaya, Malaysia (Dr Hassan); and Faculty of Public Health, Universitas Sumatera Utara, Indonesia (Dr Ismail)
7
Kyler H, James K. The importance of multisensory-motor learning on subsequent visual recognition. Perception 2024; 53:597-618. [PMID: 38900046 DOI: 10.1177/03010066241258967]
Abstract
Speed of visual object recognition is facilitated after active manual exploration of objects relative to passive visual processing alone. Manual exploration allows viewers to select important information about object structure that may facilitate recognition. Viewpoints where the objects' axis of elongation is perpendicular or parallel to the line of sight are selected more during exploration, recognized faster than other viewpoints, and afford the most information about structure when object movement is controlled by the viewer. Prior work used virtual object exploration in active and passive viewing conditions, limiting multisensory structural object information. Adding multisensory information to encoding may change accuracy of overall recognition, viewpoint selection, and viewpoint recognition. We tested whether the known active advantage for object recognition would change when real objects were studied, affording visual and haptic information. Participants interacted with 3D novel objects during manual exploration or passive viewing of another's object interactions. Object recognition was tested using several viewpoints of rendered objects. We found that manually explored objects were recognized more accurately than objects studied through passive exploration and that recognition of viewpoints differed from previous work.
8
Wrzus C, Frenkel MO, Schöne B. Current opportunities and challenges of immersive virtual reality for psychological research and application. Acta Psychol (Amst) 2024; 249:104485. [PMID: 39244850 DOI: 10.1016/j.actpsy.2024.104485]
Abstract
Immersive virtual reality (iVR), that is, digital stereoscopic 360° scenarios usually presented in head-mounted displays, has gained much popularity in medical, educational, and consumer contexts in recent years. Recently, psychological research has started to utilize the theoretical and methodological advantages of iVR. Furthermore, understanding whether cognitive, emotional, and behavioral processes in iVR unfold similarly to real life is a genuinely psychological, currently understudied topic. This article briefly reviews the current application of iVR in psychological research and related disciplines. The review presents empirical evidence for opportunities and strengths (e.g., realism, experimental control, effectiveness of therapeutic and educational interventions) as well as challenges and weaknesses (e.g., differences in experiencing presence, interacting with VR content including avatars, i.e., graphical representations of a person). The main part discusses areas requiring additional basic research, such as cognitive processes, socio-emotional processes during social interactions in iVR, and possible societal implications (e.g., fraud, VR addiction). For both research and application, iVR offers a contemporary extension of the psychological toolkit, opening new avenues to investigate and enhance core phenomena of psychology such as cognition, affect, motivation, and behavior. Still, it is crucial to exercise caution in its application, as excessive and careless use of iVR can pose risks to individuals' mental and physical well-being.
Affiliation(s)
- Cornelia Wrzus
- Psychological Institute and Network Aging Research, Heidelberg University, Germany.
- Benjamin Schöne
- Norwegian University of Science and Technology, Norway; Department of Psychology, University Osnabrück, Germany
9
Edwards S, Jenkins R, Jacobs O, Kingstone A. The medium modulates the medusa effect: Perceived mind in analogue and digital images. Cognition 2024; 249:105827. [PMID: 38810428 DOI: 10.1016/j.cognition.2024.105827]
Abstract
We effortlessly attribute mental states to other people. We also attribute minds to people depicted in pictures, albeit at a reduced strength. Intriguingly, this reduction in intensity continues for images of people within a photograph itself, a phenomenon known as the Medusa effect. The present study replicates the Medusa effect for images shown digitally and on paper. Crucially, we demonstrate that we can reduce the magnitude of the Medusa effect by depicting people digitally within a computer screen (e.g., as if one were interacting with a person on a Zoom call). As well as modulating the quantity of the Medusa effect, changes in pictorial medium can affect the quality of the perceived mind. Specifically, the dimension of Experience (what a depicted person can feel) reflected participants' observations that they could interact with an onscreen person embedded in a digital image. This combination of a robust Medusa effect and the ability to control it both quantitatively and qualitatively opens many avenues for its future application, such as manipulating and measuring mind in immersive media.
Affiliation(s)
- Salina Edwards
- Department of Psychology, University of British Columbia; Department of Psychiatry & Behavioural Neurosciences, McMaster University.
- Rob Jenkins
- Department of Psychology, University of York
- Oliver Jacobs
- Department of Psychology, University of British Columbia
- Alan Kingstone
- Department of Psychology, University of British Columbia
10
Kyler H. The multisensory and multidimensional nature of object representation. J Neurophysiol 2024; 132:130-133. [PMID: 38863428 PMCID: PMC11383381 DOI: 10.1152/jn.00462.2023]
Abstract
Recent functional magnetic resonance imaging (fMRI) experiments revealed similar neural representations across different types of two-dimensional (2-D) visual stimuli; however, real three-dimensional (3-D) objects affording action differentially affect neural activation and behavioral results relative to 2-D objects. Recruitment of multiple sensory regions during unisensory (visual, haptic, and auditory) object shape tasks suggests that shape representation may be modality invariant. This mini-review explores the overlapping neural regions involved in object shape representation, across 2-D, 3-D, visual, and haptic experiments.
Affiliation(s)
- Hellen Kyler
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, United States
11
Purpura G, Petri S, Tancredi R, Tinelli F, Calderoni S. Haptic and visuo-haptic impairments for object recognition in children with autism spectrum disorder: focus on the sensory and multisensory processing dysfunctions. Exp Brain Res 2024; 242:1731-1744. [PMID: 38819648 PMCID: PMC11208199 DOI: 10.1007/s00221-024-06855-2]
Abstract
Dysfunctions in sensory processing are widely described in individuals with autism spectrum disorder (ASD), although little is known about the developmental course and the impact of these difficulties on the learning processes during the preschool and school ages of ASD children. Specifically, knowledge about the interplay between visual and haptic information in ASD during development is scarce and controversial. In this study, we investigated unimodal (visual and haptic) and cross-modal (visuo-haptic) processing skills aimed at object recognition through a behavioural paradigm already used in children with typical development (TD), with cerebral palsy and with peripheral visual impairments. Thirty-five children with ASD (age range: 5-11 years) and thirty-five age-matched and gender-matched typically developing peers were recruited. The procedure required participants to perform an object-recognition task relying on only the visual modality (black-and-white photographs), only the haptic modality (manipulation of real objects) and visuo-haptic transfer of these two types of information. Results are consistent with the idea that visuo-haptic transfer may be significantly worse in ASD children than in TD peers, leading to significant impairment in multisensory interactions for object recognition facilitation. Furthermore, ASD children tended to show a specific deficit in haptic information processing, while the visual modality showed a similar maturational trend in the two groups. This study adds to the current literature by suggesting that ASD differences in multisensory processes also extend to the visuo-haptic abilities necessary to identify and recognise objects of daily life.
Affiliation(s)
- G Purpura
- School of Medicine and Surgery, University of Milano Bicocca, Monza, Italy
- S Petri
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- R Tancredi
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- F Tinelli
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy
- S Calderoni
- Department of Developmental Neuroscience, IRCCS Fondazione Stella Maris, Pisa, Italy.
- Department of Clinical and Experimental Medicine, University of Pisa, Via Roma 55, Pisa, 56126, Italy.
12
Srinath R, Ni AM, Marucci C, Cohen MR, Brainard DH. Orthogonal neural representations support perceptual judgements of natural stimuli. bioRxiv [Preprint] 2024:2024.02.14.580134. [PMID: 38464018 PMCID: PMC10925131 DOI: 10.1101/2024.02.14.580134]
Abstract
In natural behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on simple backgrounds. Natural viewing, however, carries a set of challenges that are inaccessible using artificial stimuli, including neural responses to background objects that are task-irrelevant. An emerging body of evidence suggests that the visual abilities of humans and animals can be modeled through the linear decoding of task-relevant information from visual cortex. This idea suggests the hypothesis that irrelevant features of a natural scene should impair performance on a visual task only if their neural representations intrude on the linear readout of the task relevant feature, as would occur if the representations of task-relevant and irrelevant features are not orthogonal in the underlying neural population. We tested this hypothesis using human psychophysics and monkey neurophysiology, in response to parametrically variable naturalistic stimuli. We demonstrate that 1) the neural representation of one feature (the position of a central object) in visual area V4 is orthogonal to those of several background features, 2) the ability of human observers to precisely judge object position was largely unaffected by task-irrelevant variation in those background features, and 3) many features of the object and the background are orthogonally represented by V4 neural responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of objects and features despite the tremendous richness of natural visual scenes.
Affiliation(s)
- Ramanujan Srinath
- equal contribution
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Amy M. Ni
- equal contribution
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Claire Marucci
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marlene R. Cohen
- Department of Neurobiology and Neuroscience Institute, The University of Chicago, Chicago, IL 60637, USA
- equal contribution
- David H. Brainard
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104, USA
- equal contribution
13
Mudrik L, Hirschhorn R, Korisky U. Taking consciousness for real: Increasing the ecological validity of the study of conscious vs. unconscious processes. Neuron 2024; 112:1642-1656. [PMID: 38653247 PMCID: PMC11100345 DOI: 10.1016/j.neuron.2024.03.031]
Abstract
The study of consciousness has developed well-controlled, rigorous methods for manipulating and measuring consciousness. Yet, in the process, experimental paradigms grew farther away from everyday conscious and unconscious processes, which raises the concern of ecological validity. In this review, we suggest that the field can benefit from adopting a more ecological approach, akin to other fields of cognitive science. There, this approach challenged some existing hypotheses, yielded stronger effects, and enabled new research questions. We argue that such a move is critical for studying consciousness, where experimental paradigms tend to be artificial and small effect sizes are relatively prevalent. We identify three paths for doing so (changing the stimuli and experimental settings, changing the measures, and changing the research questions themselves) and review works that have already started implementing such approaches. While acknowledging the inherent challenges, we call for increasing ecological validity in consciousness studies.
Affiliation(s)
- Liad Mudrik
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
- Rony Hirschhorn
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Uri Korisky
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
14
Sagehorn M, Johnsdorf M, Kisker J, Gruber T, Schöne B. Electrophysiological correlates of face and object perception: A comparative analysis of 2D laboratory and virtual reality conditions. Psychophysiology 2024; 61:e14519. [PMID: 38219244 DOI: 10.1111/psyp.14519]
Abstract
Human face perception is a specialized visual process with inherent social significance. The neural mechanisms reflecting this intricate cognitive process have evolved in spatially complex and emotionally rich environments. Previous research using VR to transfer an established face perception paradigm to realistic conditions has shown that the functional properties of face-sensitive neural correlates typically observed in the laboratory are attenuated outside the original modality. The present study builds on these results by comparing the perception of persons and objects under conventional laboratory (PC) and realistic conditions in VR. Adhering to established paradigms, the PC and VR modalities both featured images of persons and cars alongside standard control images. To investigate the individual stages of realistic face processing, response times, the typical face-sensitive N170 component, and relevant subsequent components (L1, L2; pre-, post-response) were analyzed within and between modalities. The between-modality comparison of response times and component latencies revealed generally faster processing under realistic conditions. However, the obtained N170 latency and amplitude differences showed reduced discriminative capacity under realistic conditions during this early stage. These findings suggest that the effects commonly observed in the lab are specific to monitor-based presentations. Analyses of later and response-locked components showed that specific neural mechanisms for identification and evaluation are employed when perceiving the stimuli under realistic conditions, reflected in discernible amplitude differences in response to faces and objects beyond the basic perceptual features. Conversely, the results do not provide evidence for comparable stimulus-specific perceptual processing pathways when viewing pictures of the stimuli under conventional laboratory conditions.
Affiliation(s)
- Merle Sagehorn
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Marike Johnsdorf
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Joanna Kisker
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Thomas Gruber
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Benjamin Schöne
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
15
Baror S, Baumgarten TJ, He BJ. Neural Mechanisms Determining the Duration of Task-free, Self-paced Visual Perception. J Cogn Neurosci 2024; 36:756-775. [PMID: 38357932 DOI: 10.1162/jocn_a_02131]
Abstract
Humans spend hours each day spontaneously engaging with visual content, free from specific tasks and at their own pace. Currently, the brain mechanisms determining the duration of self-paced perceptual behavior remain largely unknown. Here, participants viewed naturalistic images under task-free settings and self-paced each image's viewing duration while undergoing EEG and pupillometry recordings. Across two independent data sets, we observed large inter- and intra-individual variability in viewing duration. However, beyond an image's presentation order and category, specific image content had no consistent effects on spontaneous viewing duration across participants. Overall, longer viewing durations were associated with sustained enhanced posterior positivity and anterior negativity in the ERPs. Individual-specific variations in the spontaneous viewing duration were consistently correlated with evoked EEG activity amplitudes and pupil size changes. By contrast, presentation order was selectively correlated with baseline alpha power and baseline pupil size. Critically, spontaneous viewing duration was strongly predicted by the temporal stability in neural activity patterns starting as early as 350 msec after image onset, suggesting that early neural stability is a key predictor for sustained perceptual engagement. Interestingly, neither bottom-up nor top-down predictions about image category influenced spontaneous viewing duration. Overall, these results suggest that individual-specific factors can influence perceptual processing at a surprisingly early time point and influence the multifaceted ebb and flow of spontaneous human perceptual behavior in naturalistic settings.
Affiliation(s)
- Shira Baror
- New York University Grossman School of Medicine
- Hebrew University of Jerusalem
- Thomas J Baumgarten
- New York University Grossman School of Medicine
- Heinrich Heine University, Düsseldorf
- Biyu J He
- New York University Grossman School of Medicine
16
Fairchild GT, Holler DE, Fabbri S, Gomez MA, Walsh-Snow JC. Naturalistic Object Representations Depend on Distance and Size Cues. bioRxiv [Preprint] 2024:2024.03.16.585308. [PMID: 38559105 PMCID: PMC10980039 DOI: 10.1101/2024.03.16.585308]
Abstract
Egocentric distance and real-world size are important cues for object perception and action. Nevertheless, most studies of human vision rely on two-dimensional pictorial stimuli that convey ambiguous distance and size information. Here, we use fMRI to test whether pictures are represented differently in the human brain from real, tangible objects that convey unambiguous distance and size cues. Participants directly viewed stimuli in two display formats (real objects and matched printed pictures of those objects) presented at different egocentric distances (near and far). We measured the effects of format and distance on fMRI response amplitudes and response patterns. We found that fMRI response amplitudes in the lateral occipital and posterior parietal cortices were stronger overall for real objects than for pictures. In these areas and many others, including regions involved in action guidance, responses to real objects were stronger for near vs. far stimuli, whereas distance had little effect on responses to pictures, suggesting that distance determines relevance to action for real objects, but not for pictures. Although stimulus distance especially influenced response patterns in dorsal areas that operate in the service of visually guided action, distance also modulated representations in ventral cortex, where object responses are thought to remain invariant across contextual changes. We observed object size representations for both stimulus formats in ventral cortex but predominantly only for real objects in dorsal cortex. Together, these results demonstrate that whether brain responses reflect physical object characteristics depends on whether the experimental stimuli convey unambiguous information about those characteristics. Significance Statement: Classic frameworks of vision attribute perception of inherent object characteristics, such as size, to the ventral visual pathway, and processing of spatial characteristics relevant to action, such as distance, to the dorsal visual pathway. However, these frameworks are based on studies that used projected images of objects whose actual size and distance from the observer were ambiguous. Here, we find that when object size and distance information in the stimulus is less ambiguous, these characteristics are widely represented in both visual pathways. Our results provide valuable new insights into the brain representations of objects and their various physical attributes in the context of naturalistic vision.
17
Shinskey JL. Developmental trajectories of picture-based object representations during the first year of life. Infancy 2024; 29:233-250. [PMID: 38183666 DOI: 10.1111/infa.12581]
Abstract
Experience with an object's photograph changes 9-month-olds' preference for the referent object, confirming they can represent objects from pictures. However, picture-based representations appear weaker than object-based representations. The current study's first objective was to investigate age differences in object recognition memory after familiarization with objects' pictures. The second objective was to test whether age differences in object permanence sensitivity with picture-based representations match those found with object-based representations, whereby 7-month-olds search more for familiar hidden objects but 11-month-olds search more for novel ones. Six- and 11-month-olds were familiarized with an object's photo and tested on their representation of the real object by comparing their reaching for it versus a novel object. Objects were visible under conditions testing recognition memory and hidden under conditions testing object permanence. Like 9-month-olds, 6- and 11-month-olds preferred novelty with visible objects, showing early object recognition after picture familiarization, as well as developmental continuity. Unlike 9-month-olds, who switched to preferring familiarity with hidden objects, 6- and 11-month-olds switched to null preference. This pattern fails to match 7- and 11-month-olds' hidden-object preferences after familiarization with real objects, revealing discontinuity in sensitivity to object permanence after picture familiarization, and suggesting that picture-based representations are weaker than object-based ones.
18
Wang Y, Gao J, Zhu F, Liu X, Wang G, Zhang Y, Deng Z, Chen J. Internal representations of the canonical real-world distance of objects. J Vis 2024; 24:14. [PMID: 38411955 PMCID: PMC10910641 DOI: 10.1167/jov.24.2.14]
Abstract
In the real world, every object has its canonical distance from observers. For example, airplanes are usually far away from us, whereas eyeglasses are close to us. Do we have an internal representation of the canonical real-world distance of objects in our cognitive system? If we do, does the canonical distance influence the perceived size of an object? Here, we conducted two experiments to address these questions. In Experiment 1, we first asked participants to rate the canonical distance of objects. Participants gave consistent ratings to each object. Then, pairs of object images were presented one by one in a trial, and participants were asked to rate the distance of the second object (i.e., a priming paradigm). We found that the rating of the perceived distance of the target object was modulated by the canonical real-world distance of the prime. In Experiment 2, participants were asked to judge the perceived size of canonically near or far objects that were presented at the converging end (i.e., far location) or the opening end (i.e., near location) of a background image with converging lines. We found that regardless of the presentation location, participants perceived the canonically near object as smaller than the canonically far object even though their retinal and real-world sizes were matched. In all, our results suggest that we have an internal representation of the canonical real-world distance of objects, which affects the perceived distance of subsequent objects and the perceived size of the objects themselves.
Affiliation(s)
- Yijin Wang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Jie Gao
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Fuying Zhu
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Xiaoli Liu
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Gexiu Wang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Yichong Zhang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, China
- http://juanchenpsy.scnu.edu.cn/
19
Lavoie E, Hebert JS, Chapman CS. Comparing eye-hand coordination between controller-mediated virtual reality, and a real-world object interaction task. J Vis 2024; 24:9. [PMID: 38393742 PMCID: PMC10905649 DOI: 10.1167/jov.24.2.9]
Abstract
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
Affiliation(s)
- Ewen Lavoie
- Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
20
Gomez MA, Snow JC. How to construct liquid-crystal spectacles to control vision of real-world objects and environments. Behav Res Methods 2024; 56:563-576. [PMID: 36737581 PMCID: PMC10424568 DOI: 10.3758/s13428-023-02059-8]
Abstract
A major challenge in studying naturalistic vision lies in controlling stimulus and scene viewing time. This is especially the case for studies using real-world objects as stimuli (rather than computerized images) because real objects cannot be "onset" and "offset" in the same way that images can be. Since the late 1980s, one solution to this problem has been to have the observer wear electro-optic spectacles with computer-controlled liquid-crystal lenses that switch between transparent ("open") and translucent ("closed") states. Unfortunately, the commercially available glasses (PLATO Visual Occlusion Spectacles) command a high price tag, the hardware is fragile, and the glasses cannot be customized. This led us to explore how to manufacture liquid-crystal occlusion glasses in our own laboratory. Here, we share the products of our work by providing step-by-step instructions for researchers to design, build, operate, and test liquid-crystal glasses for use in experimental contexts. The glasses can be assembled with minimal technical knowledge using readily available components, and they can be customized for different populations and applications. The glasses are robust, and they can be produced at a fraction of the cost of commercial alternatives. Tests of reliability and temporal accuracy show that the performance of our laboratory prototype was comparable to that of the PLATO glasses. We discuss the results of our work with respect to implications for promoting rigor and reproducibility, potential use cases, comparisons with other liquid-crystal shutter glasses, and how users can find information regarding future updates and developments.
Affiliation(s)
- Michael A Gomez
- Department of Psychology, The University of Nevada, Reno, 1664 N. Virginia Street, Reno, NV, USA.
- Psychology Department, Clovis Community College, 10309 N. Willow Ave, Fresno, CA, USA.
- Jacqueline C Snow
- Department of Psychology, The University of Nevada, Reno, 1664 N. Virginia Street, Reno, NV, USA.
21
Noviello S, Kamari Songhorabadi S, Deng Z, Zheng C, Chen J, Pisani A, Franchin E, Pierotti E, Tonolli E, Monaco S, Renoult L, Sperandio I. Temporal features of size constancy for perception and action in a real-world setting: A combined EEG-kinematics study. Neuropsychologia 2024; 193:108746. [PMID: 38081353 DOI: 10.1016/j.neuropsychologia.2023.108746]
Abstract
A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. In fact, our perceptual and visuo-motor systems exhibit size and grip constancies in order to compensate for the natural shrinkage of the retinal image with increased distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action by using a combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion capture system. We focused on the first positive-going visual evoked component peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing from a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
Collapse
Affiliation(s)
- Simona Noviello
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
| | | | - Zhiqing Deng
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
| | - Chao Zheng
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
| | - Juan Chen
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China
| | - Angelo Pisani
- Department of Psychology "Renzo Canestrari", University of Bologna, Italy
| | - Elena Franchin
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
| | - Enrica Pierotti
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
| | - Elena Tonolli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
| | - Simona Monaco
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
| | - Louis Renoult
- School of Psychology, University of East Anglia, Norwich, UK
| | - Irene Sperandio
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy.
| |
Collapse
|
22
|
Li AY, Mur M. Neural networks need real-world behavior. Behav Brain Sci 2023; 46:e398. [PMID: 38054287 DOI: 10.1017/s0140525x23001504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2023]
Abstract
Bowers et al. propose to use controlled behavioral experiments when evaluating deep neural networks as models of biological vision. We agree with the sentiment and draw parallels to the notion that "neuroscience needs behavior." As a promising path forward, we suggest complementing image recognition tasks with increasingly realistic and well-controlled task environments that engage real-world object recognition behavior.
Collapse
Affiliation(s)
- Aedan Y Li
- Department of Psychology, Western University, London, ON, Canada, www.aedanyueli.com
| | - Marieke Mur
- Department of Psychology, Western University, London, ON, Canada
- Department of Computer Science, Western University, London, ON, Canada
| |
Collapse
|
23
|
Chen J, Paciocco JU, Deng Z, Culham JC. Human Neuroimaging Reveals Differences in Activation and Connectivity between Real and Pantomimed Tool Use. J Neurosci 2023; 43:7853-7867. [PMID: 37722847 PMCID: PMC10648550 DOI: 10.1523/jneurosci.0068-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 09/09/2023] [Accepted: 09/11/2023] [Indexed: 09/20/2023] Open
Abstract
Because the sophistication of tool use is vastly enhanced in humans compared with other species, a rich understanding of its neural substrates requires neuroscientific experiments in humans. Although functional magnetic resonance imaging (fMRI) has enabled many studies of tool-related neural processing, surprisingly few studies have examined real tool use. Rather, because of the many constraints of fMRI, past research has typically used proxies such as pantomiming despite neuropsychological dissociations between pantomimed and real tool use. We compared univariate activation levels, multivariate activation patterns, and functional connectivity when participants used real tools (a plastic knife or fork) to act on a target object (scoring or poking a piece of putty) or pantomimed the same actions with similar movements and timing. During the Execute phase, we found higher activation for real versus pantomimed tool use in sensorimotor regions and the anterior supramarginal gyrus, and higher activation for pantomimed than real tool use in classic tool-selective areas. Although no regions showed significant differences in activation magnitude during the Plan phase, activation patterns differed between real versus pantomimed tool use and motor cortex showed differential functional connectivity. These results reflect important differences between real tool use, a closed-loop process constrained by real consequences, and pantomimed tool use, a symbolic gesture that requires conceptual knowledge of tools but with limited consequences. These results highlight the feasibility and added value of employing natural tool use tasks in functional imaging, inform neuropsychological dissociations, and advance our theoretical understanding of the neural substrates of natural tool use.SIGNIFICANCE STATEMENT The study of tool use offers unique insights into how the human brain synthesizes perceptual, cognitive, and sensorimotor functions to accomplish a goal. We suggest that the reliance on proxies, such as pantomiming, for real tool use has (1) overestimated the contribution of cognitive networks, because of the indirect, symbolic nature of pantomiming; and (2) underestimated the contribution of sensorimotor networks necessary for predicting and monitoring the consequences of real interactions between hand, tool, and the target object. These results enhance our theoretical understanding of the full range of human tool functions and inform our understanding of neuropsychological dissociations between real and pantomimed tool use.
Collapse
Affiliation(s)
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China
- Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Ministry of Education, Guangzhou, Guangdong 510631, China
| | - Joseph U Paciocco
- Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada
| | - Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China
| | - Jody C Culham
- Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5B7, Canada
| |
Collapse
|
24
|
Peel HJ, Chouinard PA. A review of the impairments, preserved visual functions, and neuropathology in 21 patients with visual form agnosia - A unique defect with line drawings. Neuropsychologia 2023; 190:108666. [PMID: 37634886 DOI: 10.1016/j.neuropsychologia.2023.108666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 08/14/2023] [Accepted: 08/18/2023] [Indexed: 08/29/2023]
Abstract
We present a comprehensive review of the rare syndrome visual form agnosia (VFA). We begin by documenting its history, including the origins of the term, and the first case study labelled as VFA. The defining characteristics of the syndrome, as others have previously defined it, are then described. The impairments, preserved aspects of visual perception, and areas of brain damage in 21 patients who meet these defining characteristics are described in detail, including which tests were used to verify the presence or absence of key symptoms. From this, we note important similarities along with notable areas of divergence between patients. Damage to the occipital lobe (20/21), an inability to recognise line drawings (19/21), preserved colour vision (14/21), and visual field defects (16/21) were areas of consistency across most cases. We found it useful to distinguish between shape and form as distinct constructs when examining perceptual abilities in VFA patients. Our observations suggest that these patients often exhibit difficulties in processing simplified versions of form. Deficits in processing orientation and size were uncommon. Motion perception and visual imagery were not widely tested for despite being typically cited as defining features of the syndrome - although in the sample described, motion perception was never found to be a deficit. Moreover, problems with vision (e.g., poor visual acuity and the presence of hemianopias/scotomas in the visual fields) are more common than we would have thought and may also contribute to perceptual impairments in patients with VFA. We conclude that VFA is a perceptual disorder where the visual system has a reduced ability to synthesise lines together for the purposes of making sense of what images represent holistically.
Collapse
Affiliation(s)
- Hayden J Peel
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia
| | - Philippe A Chouinard
- Department of Psychology, Counselling and Therapy, School of Psychology and Public Health, La Trobe University, Victoria, Australia.
| |
Collapse
|
25
|
Moro V, Beccherle M, Scandola M, Aglioti SM. Massive body-brain disconnection consequent to spinal cord injuries drives profound changes in higher-order cognitive and emotional functions: A PRISMA scoping review. Neurosci Biobehav Rev 2023; 154:105395. [PMID: 37734697 DOI: 10.1016/j.neubiorev.2023.105395] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 09/01/2023] [Accepted: 09/17/2023] [Indexed: 09/23/2023]
Abstract
Spinal cord injury (SCI) leads to a massive disconnection between the brain and the body parts below the lesion level, representing a unique opportunity to explore how the body influences a person's mental life. We performed a systematic scoping review of 59 studies on higher-order cognitive and emotional changes after SCI. The results suggest that fluid abilities (e.g. attention, executive functions) and emotional regulation (e.g. emotional reactivity and discrimination) are impaired in people with SCI, with progressive deterioration over time. Although not systematically explored, factors directly associated with the damage (e.g. the severity and level of the lesion) and indirectly associated with it (e.g. blood pressure, sleeping disorders, medication) may play a role in these deficits. The inconsistency found in the results may derive from the various methods used and the heterogeneity of the samples (e.g. lesion completeness, time interval since lesion onset). Future studies that specifically control for methodological, clinical and socio-cultural dimensions are needed to better understand the role of the body in cognition.
Collapse
Affiliation(s)
- Valentina Moro
- NPSY.Lab-VR, Department of Human Sciences, University of Verona, Lungadige Porta Vittoria, 17, 37129 Verona, Italy.
| | - Maddalena Beccherle
- NPSY.Lab-VR, Department of Human Sciences, University of Verona, Lungadige Porta Vittoria, 17, 37129 Verona, Italy; Department of Psychology, Sapienza University of Rome and cln2s@sapienza Istituto Italiano di Tecnologia, Italy.
| | - Michele Scandola
- NPSY.Lab-VR, Department of Human Sciences, University of Verona, Lungadige Porta Vittoria, 17, 37129 Verona, Italy
| | - Salvatore Maria Aglioti
- Department of Psychology, Sapienza University of Rome and cln2s@sapienza Istituto Italiano di Tecnologia, Italy; Fondazione Santa Lucia IRCCS, Roma, Italy
| |
Collapse
|
26
|
Snow JC, Gomez MA, Compton MT. Human memory for real-world solid objects is not predicted by responses to image displays. J Exp Psychol Gen 2023; 152:2703-2712. [PMID: 37079829 PMCID: PMC10587360 DOI: 10.1037/xge0001387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/22/2023]
Abstract
In experimental psychology and neuroscience, computerized image stimuli are typically used as artificial proxies for real-world objects to understand brain and behavior. Here, in a series of five experiments (n = 165), we studied human memory for objects presented as tangible solids versus computerized images. We found that recall for solids was superior to images, both immediately after learning, and after a 24-hr delay. A "realness advantage" was also evident relative to three-dimensional (3-D) stereoscopic images, and when solids were viewed monocularly, arguing against explanations based on the presence of binocular depth cues in the stimulus. Critically, memory for solids was modulated by physical distance, with superior recall for objects positioned within versus outside of observers' reach, whereas recall for images was unaffected by distance. We conclude that solids are processed quantitatively and qualitatively differently in episodic memory than are images, suggesting caution in assuming that artifice can always substitute for reality. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
Collapse
Affiliation(s)
| | - Michael A. Gomez
- The University of Nevada Reno, Reno, Nevada, USA
- Clovis Community College, Fresno, CA
| | | |
Collapse
|
27
|
Troje NF. Depth from motion parallax: Deictic consistency, eye contact, and a serious problem with Zoom. J Vis 2023; 23:1. [PMID: 37656465 PMCID: PMC10479236 DOI: 10.1167/jov.23.10.1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Accepted: 07/24/2023] [Indexed: 09/02/2023] Open
Abstract
The dynamics of head and eye gaze between two or more individuals displayed during verbal and nonverbal face-to-face communication contain a wealth of information and are used for both volitional and unconscious signaling. Current video communication systems convey visual signals about gaze behavior and other directional cues, but the information they carry is often spurious and potentially misleading. I discuss the consequences of this situation, identify the source of the problem as a more general lack of deictic consistency, and demonstrate that display technologies that simulate motion parallax are both necessary and sufficient to alleviate it. I then devise an avatar-based remote communication solution that achieves deictic consistency and provides natural, dynamic eye contact for computer-mediated audiovisual communication.
Collapse
Affiliation(s)
- Nikolaus F Troje
- Centre for Vision Research and Department of Biology, York University, Toronto, Ontario, Canada
| |
Collapse
|
28
|
Djebbara Z, Kalantari S. Affordances and curvature preference: The case of real objects and spaces. Ann N Y Acad Sci 2023; 1527:14-19. [PMID: 37429830 DOI: 10.1111/nyas.15038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/12/2023]
Abstract
Chuquichambi and colleagues recently questioned the prevailing belief that a universal human visual preference exists for curved shapes and lines. Their comprehensive meta-analysis demonstrated that while curvature preference is widespread, it is not universally constant or invariant. By revisiting their dataset, we made an intriguing discovery: a negative relationship between curvature preference and an object's "affordances." Taking an embodiment perspective into account, we propose an explanation for this phenomenon, suggesting that the diminished curvature preference in objects with abundant affordances can be understood through the lens of embodied cognition.
Collapse
Affiliation(s)
- Zakaria Djebbara
- Department of Architecture, Design, Media, and Technology, Aalborg University, Aalborg, Denmark
- Biological Psychology and Neuroergonomics, Technical University of Berlin, Berlin, Germany
| | - Saleh Kalantari
- Department of Human Centered Design, College of Human Ecology, Cornell University, Ithaca, New York, USA
| |
Collapse
|
29
|
Peelen MV, Downing PE. Testing cognitive theories with multivariate pattern analysis of neuroimaging data. Nat Hum Behav 2023; 7:1430-1441. [PMID: 37591984 PMCID: PMC7616245 DOI: 10.1038/s41562-023-01680-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 07/12/2023] [Indexed: 08/19/2023]
Abstract
Multivariate pattern analysis (MVPA) has emerged as a powerful method for the analysis of functional magnetic resonance imaging, electroencephalography and magnetoencephalography data. The new approaches to experimental design and hypothesis testing afforded by MVPA have made it possible to address theories that describe cognition at the functional level. Here we review a selection of studies that have used MVPA to test cognitive theories from a range of domains, including perception, attention, memory, navigation, emotion, social cognition and motor control. This broad view reveals properties of MVPA that make it suitable for understanding the 'how' of human cognition, such as the ability to test predictions expressed at the item or event level. It also reveals limitations and points to future directions.
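As a concrete illustration of the decoding logic common to the MVPA studies reviewed here, the following sketch cross-validates a linear classifier on simulated multivoxel (or multichannel) patterns. scikit-learn is assumed to be available; the data, labels, and effect size are invented for the example and do not correspond to any particular study.

```python
# Generic MVPA decoding sketch: cross-validated classification of condition
# labels from simulated activity patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 200
y = np.repeat([0, 1], n_trials // 2)          # two conditions (e.g., two stimulus classes)
X = rng.normal(size=(n_trials, n_features))   # simulated voxel/sensor patterns
X[y == 1, :20] += 0.5                         # weak condition-specific signal

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy is then taken as evidence that the measured patterns carry information about the experimental conditions, which is the inferential step the reviewed studies build their theoretical tests on.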
Collapse
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
| | - Paul E Downing
- Cognitive Neuroscience Institute, Department of Psychology, Bangor University, Bangor, UK.
| |
Collapse
|
30
|
Reschechtko S, Gnanaseelan C, Pruszynski JA. Reach Corrections Toward Moving Objects are Faster Than Reach Corrections Toward Instantaneously Switching Targets. Neuroscience 2023; 526:135-143. [PMID: 37391122 DOI: 10.1016/j.neuroscience.2023.06.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 06/20/2023] [Accepted: 06/22/2023] [Indexed: 07/02/2023]
Abstract
Visually guided reaching is a common motor behavior that engages subcortical circuits to mediate rapid corrections. Although these neural mechanisms have evolved for interacting with the physical world, they are often studied in the context of reaching toward virtual targets on a screen. These targets often change position by disappearing from one place and reappearing in another instantaneously. In this study, we instructed participants to perform rapid reaches to physical objects that changed position in different ways. In one condition, the objects moved very rapidly from one place to another. In the other condition, illuminated targets instantaneously switched position by being extinguished in one position and illuminated in another. Participants were consistently faster in correcting their reach trajectories when the object moved continuously.
Collapse
Affiliation(s)
- Sasha Reschechtko
- School of Exercise & Nutritional Sciences, San Diego State University, 351 ENS Building, 5500 Campanile Dr., San Diego, CA 92182, USA; Western BrainsCAN, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Brain and Mind Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada.
| | - Cynthiya Gnanaseelan
- Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada
| | - J Andrew Pruszynski
- Brain and Mind Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Psychology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada
| |
Collapse
|
31
|
Chawoush B, Draschkow D, van Ede F. Capacity and selection in immersive visual working memory following naturalistic object disappearance. J Vis 2023; 23:9. [PMID: 37548958 PMCID: PMC10411649 DOI: 10.1167/jov.23.8.9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Accepted: 07/06/2023] [Indexed: 08/08/2023] Open
Abstract
Visual working memory-holding past visual information in mind for upcoming behavior-is commonly studied following the abrupt removal of visual objects from static two-dimensional (2D) displays. In everyday life, visual objects do not typically vanish from the environment in front of us. Rather, visual objects tend to enter working memory following self or object motion: disappearing from view gradually and changing the spatial relation between memoranda and observer. Here, we used virtual reality (VR) to investigate whether two classic findings from visual working memory research-a capacity of around three objects and the reliance on space for object selection-generalize to more naturalistic modes of object disappearance. Our static reference condition mimicked traditional laboratory tasks whereby visual objects were held static in front of the participant and removed from view abruptly. In our critical flow condition, the same visual objects flowed by participants, disappearing from view gradually and behind the observer. We considered visual working memory performance and capacity, as well as space-based mnemonic selection, indexed by directional biases in gaze. Despite vastly distinct modes of object disappearance and altered spatial relations between memoranda and observer, we found comparable capacity and comparable gaze signatures of space-based mnemonic selection. This finding reveals how classic findings from visual working memory research generalize to immersive situations with more naturalistic modes of object disappearance and with dynamic spatial relations between memoranda and observer.
Collapse
Affiliation(s)
- Babak Chawoush
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
| | - Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| |
Collapse
|
32
|
Muller A, Garren JD, Cao K, Peterson MA, Ekstrom AD. Understanding the encoding of object locations in small-scale spaces during free exploration using eye tracking. Neuropsychologia 2023; 184:108565. [PMID: 37080425 DOI: 10.1016/j.neuropsychologia.2023.108565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 03/14/2023] [Accepted: 04/16/2023] [Indexed: 04/22/2023]
Abstract
Navigation is instrumental to daily life and is often used to encode and locate objects, such as keys in one's house. Yet, little is known about how navigation works in more ecologically valid situations such as finding objects within a room. Specifically, it is not clear how vision vs. body movements contribute differentially to spatial memory in such small-scale spaces. In the current study, participants encoded object locations by viewing them while standing (stationary condition) or by additionally being guided by the experimenter while blindfolded (walking condition) after viewing the objects. They then retrieved the objects from the same or different viewpoint, creating a 2 × 2 within subject design. We simultaneously recorded participant eye movements throughout the experiment using mobile eye tracking. The results showed no statistically significant differences among our four conditions (stationary, same viewpoint as encoding; stationary, different viewpoint; walking, same viewpoint; walking, different viewpoint), suggesting that in a small real-world space, vision may be sufficient to remember object locations. Eye tracking analyses revealed that object locations were better remembered next to landmarks and that participants encoded items on one wall together, suggesting the use of local wall coordinates rather than global room coordinates. A multivariate regression analysis revealed that the only significant predictor of object placement accuracy was average looking time. These results suggest that vision may be sufficient for encoding object locations in a small-scale environment and that such memories may be formed largely locally rather than globally.
Collapse
Affiliation(s)
- Alana Muller
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA.
| | - Joshua D Garren
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA.
| | - Kayla Cao
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA.
| | - Mary A Peterson
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA; Cognitive Science Program, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA.
| | - Arne D Ekstrom
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ, 85721, USA.
| |
Collapse
|
33
|
Foerster FR, Chidharom M, Giersch A. Enhanced temporal resolution of vision in action video game players. Neuroimage 2023; 269:119906. [PMID: 36739103 DOI: 10.1016/j.neuroimage.2023.119906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 01/16/2023] [Accepted: 01/26/2023] [Indexed: 02/05/2023] Open
Abstract
Video game play has been suggested to improve visual and attention processing. Nevertheless, while action video game play is highly dynamic, there is scarce research on how information is temporally discriminated at the millisecond level. This cross-sectional study investigates whether temporal discrimination at the millisecond level in vision varies across action video game players (VGPs; N = 23) and non-video game players (NVGPs; N = 23). Participants discriminated synchronous from asynchronous onsets of two visual targets in virtual reality, while their EEG and oculomotor movements were recorded. Results show an increased sensitivity to short asynchronies (11, 33 and 66 ms) in VGPs compared with NVGPs, which was especially marked at the start of the task, suggesting better temporal discrimination abilities. Pre-target oculomotor freezing - the inhibition of small fixational saccades - was associated with correct temporal discrimination, probably revealing attentional preparation. However, this parameter did not differ between groups. EEG and reconstruction analyses suggest that the enhancement of temporal discrimination in VGPs is related to parieto-occipital processing and a reduction of alpha-band (8-14 Hz) power and inter-trial phase coherence. Overall, the study reveals an enhanced ability in action video game players to discriminate visual events in close temporal proximity, combined with reduced alpha-band oscillatory activity. Consequently, playing action video games is associated with an improved temporal resolution of vision.
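The two spectral measures mentioned above, alpha-band (8-14 Hz) power and inter-trial phase coherence, can be approximated with a minimal Morlet-wavelet sketch. The code below uses simulated single-channel epochs, and the sampling rate and number of wavelet cycles are assumptions; it is not the authors' analysis code.

```python
# Minimal NumPy sketch of alpha-band power and inter-trial phase coherence (ITC).
import numpy as np

def morlet(freq, sfreq, n_cycles=7):
    """Complex Morlet wavelet spanning ~1 s."""
    t = np.arange(-0.5, 0.5, 1 / sfreq)
    sigma = n_cycles / (2 * np.pi * freq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-t ** 2 / (2 * sigma ** 2))

def alpha_power_and_itc(epochs, sfreq, freqs=range(8, 15)):
    """epochs: (n_trials, n_times) single-channel data; returns mean power and ITC."""
    powers, itcs = [], []
    for f in freqs:
        w = morlet(f, sfreq)
        analytic = np.array([np.convolve(ep, w, mode="same") for ep in epochs])
        powers.append(np.mean(np.abs(analytic) ** 2))
        # ITC: length of the mean unit phase vector across trials, averaged over time
        itcs.append(np.abs(np.mean(analytic / np.abs(analytic), axis=0)).mean())
    return float(np.mean(powers)), float(np.mean(itcs))

sfreq = 250
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, sfreq))   # 40 simulated 1-s trials of noise
power, itc = alpha_power_and_itc(epochs, sfreq)
print(f"alpha power: {power:.3f}, inter-trial coherence: {itc:.3f}")
```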
Collapse
Affiliation(s)
- Francois R Foerster
- Université de Strasbourg, INSERM U1114, Pôle de Psychiatrie, Centre Hospitalier Régional Universitaire de Strasbourg, France.
| | - Matthieu Chidharom
- Department of Psychology, Lehigh University, Bethlehem, PA, United States
| | - Anne Giersch
- Université de Strasbourg, INSERM U1114, Pôle de Psychiatrie, Centre Hospitalier Régional Universitaire de Strasbourg, France
| |
Collapse
|
34
|
Sagehorn M, Johnsdorf M, Kisker J, Sylvester S, Gruber T, Schöne B. Real-life relevant face perception is not captured by the N170 but reflected in later potentials: A comparison of 2D and virtual reality stimuli. Front Psychol 2023; 14:1050892. [PMID: 37057177 PMCID: PMC10086431 DOI: 10.3389/fpsyg.2023.1050892] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Accepted: 02/27/2023] [Indexed: 03/30/2023] Open
Abstract
The perception of faces is one of the most specialized visual processes in the human brain and has been investigated by means of the early event-related potential component N170. However, face perception has mostly been studied in the conventional laboratory, i.e., with monitor setups offering a rather distal presentation of faces as planar 2D images. Increasing spatial proximity through Virtual Reality (VR) makes it possible to present 3D, real-life-sized persons at a personal distance to participants, thus creating a feeling of social involvement and adding a self-relevant value to the presented faces. The present study compared the perception of persons under conventional laboratory conditions (PC) with realistic conditions in VR. Paralleling standard designs, pictures of unknown persons and standard control images were presented in a PC and a VR modality. To investigate how the mechanisms of face perception differ under realistic conditions from those under conventional laboratory conditions, the typical face-specific N170 and subsequent components were analyzed in both modalities. Consistent with previous laboratory research, the N170 lost discriminatory power when translated to realistic conditions, as it only discriminated faces and controls under laboratory conditions. Most interestingly, analysis of the later component [230–420 ms] revealed more differentiated face-specific processing in VR, as indicated by distinctive, stimulus-specific topographies. Complemented by source analysis, the results on later latencies show that face-specific neural mechanisms are applied only under realistic conditions (A video abstract is available in the Supplementary material and via YouTube: https://youtu.be/TF8wiPUrpSY).
Collapse
Affiliation(s)
- Merle Sagehorn
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
- *Correspondence: Merle Sagehorn,
| | - Marike Johnsdorf
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
| | - Joanna Kisker
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
| | - Sophia Sylvester
- Semantic Information Systems Research Group, Institute of Computer Science, Osnabrück University, Osnabrück, Germany
| | - Thomas Gruber
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
| | - Benjamin Schöne
- Experimental Psychology I, Institute of Psychology, Osnabrück University, Osnabrück, Germany
| |
Collapse
|
35
|
Tang Z, Liu X, Huo H, Tang M, Qiao X, Chen D, Dong Y, Fan L, Wang J, Du X, Guo J, Tian S, Fan Y. Eye movement characteristics in a mental rotation task presented in virtual reality. Front Neurosci 2023; 17:1143006. [PMID: 37051147 PMCID: PMC10083294 DOI: 10.3389/fnins.2023.1143006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 03/13/2023] [Indexed: 03/28/2023] Open
Abstract
Introduction: Eye-tracking technology provides a reliable and cost-effective approach to characterize mental representation according to specific patterns. Mental rotation tasks, referring to the mental representation and transformation of visual information, have been widely used to examine visuospatial ability. In these tasks, participants visually perceive three-dimensional (3D) objects and mentally rotate them until they identify whether the paired objects are identical or mirrored. In most studies, 3D objects are presented using two-dimensional (2D) images on a computer screen. Currently, visual neuroscience tends to investigate visual behavior in response to naturalistic stimuli rather than image stimuli. Virtual reality (VR) is an emerging technology used to provide naturalistic stimuli, allowing the investigation of behavioral features in an immersive environment similar to the real world. However, mental rotation tasks using 3D objects in immersive VR have rarely been reported. Methods: Here, we designed a VR mental rotation task using 3D stimuli presented in a head-mounted display (HMD). An eye tracker incorporated into the HMD was used to examine eye movement characteristics during the task synchronously. The stimuli were virtual paired objects oriented at specific angular disparities (0, 60, 120, and 180°). We recruited thirty-three participants who were required to determine whether the paired 3D objects were identical or mirrored. Results: Behavioral results demonstrated that response times when comparing mirrored objects were longer than for identical objects. Eye-movement results showed that the percent fixation time, the number of within-object fixations, and the number of saccades for the mirrored objects were significantly lower than those for the identical objects, providing further explanations for the behavioral results. Discussion: In the present work, we examined behavioral and eye movement characteristics during a VR mental rotation task using 3D stimuli. Significant differences were observed in response times and eye movement metrics between identical and mirrored objects. The eye movement data provided further explanation for the behavioral results in the VR mental rotation task.
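A simple way to obtain eye-movement metrics of the kind reported above (number of saccades, percent fixation time) is a velocity-threshold classification of gaze samples. The sketch below uses simulated gaze traces; the 30 deg/s threshold and the 120 Hz sampling rate are assumptions for the example, not parameters taken from the study.

```python
# Velocity-threshold (I-VT style) sketch for counting saccades and fixation time.
import numpy as np

def saccade_and_fixation_metrics(gaze_deg, sfreq, vel_thresh_deg_s=30.0):
    """gaze_deg: (n_samples, 2) horizontal/vertical gaze position in degrees."""
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * sfreq
    is_saccade = velocity > vel_thresh_deg_s
    # count saccade onsets (transitions from fixation to saccade)
    n_saccades = int(np.sum(np.diff(is_saccade.astype(int)) == 1))
    percent_fixation_time = 100.0 * np.mean(~is_saccade)
    return n_saccades, percent_fixation_time

sfreq = 120  # Hz, assumed rate for an HMD-integrated eye tracker
rng = np.random.default_rng(2)
t = np.arange(0, 2, 1 / sfreq)
gaze = np.cumsum(rng.standard_normal((len(t), 2)) * 0.05, axis=0)  # drifting fixation
gaze[100:105] += np.linspace(0, 8, 5)[:, None]                     # one injected saccade
n_sacc, pct_fix = saccade_and_fixation_metrics(gaze, sfreq)
print(f"saccades: {n_sacc}, fixation time: {pct_fix:.1f}%")
```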
Collapse
Affiliation(s)
- Zhili Tang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xiaoyu Liu
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- *Correspondence: Xiaoyu Liu,
| | - Hongqiang Huo
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Min Tang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xiaofeng Qiao
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Duo Chen
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Ying Dong
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Linyuan Fan
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Jinghui Wang
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Xin Du
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Jieyi Guo
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Shan Tian
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
| | - Yubo Fan
- Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Yubo Fan,
| |
Collapse
|
36
|
Tool use acquisition induces a multifunctional interference effect during object processing: evidence from the sensorimotor mu rhythm. Exp Brain Res 2023; 241:1145-1157. [PMID: 36920527 DOI: 10.1007/s00221-023-06595-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Accepted: 02/27/2023] [Indexed: 03/16/2023]
Abstract
A fundamental characteristic of human development is acquiring and accumulating tool use knowledge through observation and sensorimotor experience. Recent studies showed that, in children and adults, the different action possibilities of grasping objects to move them versus grasping them to use them generate a conflict that extinguishes neural motor resonance phenomena during visual object processing. In this study, a training protocol coupled with EEG recordings was administered in virtual reality to healthy adults to evaluate whether a similar conflict arises for newly acquired tool use knowledge. Participants perceived and manipulated two novel 3D tools trained beforehand with either a single or a double usage. A weaker reduction of mu-band (10-13 Hz) power, accompanied by a reduced inter-trial phase coherence, was recorded during the perception of the tool associated with the double usage. These effects started within the first 200 ms of visual object processing and were predominantly recorded over the left motor system. Furthermore, interacting with the double-usage tool delayed reach-to-grasp movements. The results highlight a multifunctional interference effect, such that tool use acquisition reduces the neural motor resonance phenomenon and inhibits the activation of the motor system during subsequent object recognition. These results imply that learned tool use information guides the sensorimotor processing of objects.
Collapse
|
37
|
Abstract
Research has recently shown that efficient selection relies on the implicit extraction of environmental regularities, known as statistical learning. Although this has been demonstrated for scenes, similar learning arguably also occurs for objects. To test this, we developed a paradigm that allowed us to track attentional priority at specific object locations irrespective of the object's orientation in three experiments with young adults (all Ns = 80). Experiments 1a and 1b established within-object statistical learning by demonstrating increased attentional priority at relevant object parts (e.g., hammerhead). Experiment 2 extended this finding by demonstrating that learned priority generalized to viewpoints in which learning never took place. Together, these findings demonstrate that as a function of statistical learning, the visual system not only is able to tune attention relative to specific locations in space but also can develop preferential biases for specific parts of an object independently of the viewpoint of that object.
Collapse
Affiliation(s)
- Dirk van Moorselaar
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam; Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands
| | - Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam; Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands; William James Center for Research, ISPA-Instituto Universitario
| |
Collapse
|
38
|
Gandolfo M, Nägele H, Peelen MV. Predictive Processing of Scene Layout Depends on Naturalistic Depth of Field. Psychol Sci 2023; 34:394-405. [PMID: 36608172 DOI: 10.1177/09567976221140341] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023] Open
Abstract
Boundary extension is a classic memory illusion in which observers remember more of a scene than was presented. According to predictive-processing accounts, boundary extension reflects the integration of visual input and expectations of what is beyond a scene's boundaries. According to normalization accounts, boundary extension rather reflects one end of a normalization process toward a scene's typically experienced viewing distance, such that close-up views give boundary extension but distant views give boundary contraction. Here, across four experiments (N = 125 adults), we found that boundary extension strongly depends on depth of field, as determined by the aperture settings on a camera. Photographs with naturalistic depth of field led to larger boundary extension than photographs with unnaturalistic depth of field, even when distant views were shown. We propose that boundary extension reflects a predictive mechanism with adaptive value that is strongest for naturalistic views of scenes. The current findings indicate that depth of field is an important variable to consider in the study of scene perception and memory.
Collapse
Affiliation(s)
- Marco Gandolfo
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University
| | - Hendrik Nägele
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University
| | - Marius V Peelen
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University
| |
Collapse
|
39
|
Wang X, Liang H, Li L, Zhou J, Song R. Contribution of the stereoscopic representation of motion-in-depth during visually guided feedback control. Cereb Cortex 2023:7030846. [PMID: 36750266 DOI: 10.1093/cercor/bhad010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 01/06/2023] [Accepted: 01/07/2023] [Indexed: 02/09/2023] Open
Abstract
Numerous studies have focused on the neural basis of visually guided tracking movements in the frontoparallel plane, whereas the neural processes at work in real-world circumstances, where binocular disparity and motion-in-depth (MID) perception come into play, are less understood. Although the role of stereoscopic versus monoscopic MID information has been extensively described for visual processing, its influence on top-down regulation for motor execution has not received much attention. Here, we orthogonally varied the visual representation (stereoscopic versus monoscopic) and motion direction (depth motion versus bias depth motion versus frontoparallel motion) during visually guided tracking movements, with simultaneous functional near-infrared spectroscopy recordings. Results show that the stereoscopic representation of MID could lead to more accurate movements, which was supported by a specific neural activity pattern. More importantly, we extend prior evidence about the role of the frontoparietal network in the brain-behavior relationship, showing that the occipital area, more specifically visual area V2/V3, was also robustly involved in this association. Furthermore, by using the stereoscopic representation of MID, it is plausible to detect a robust brain-behavior relationship even with a small sample size at low executive task demand. Taken together, these findings highlight the importance of the stereoscopic representation of MID for investigating the neural correlates of visually guided feedback control.
Collapse
Affiliation(s)
- Xiaolu Wang
- Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
| | - Haowen Liang
- State Key Laboratory of Optoelectronic Materials and Technology, Guangdong Marine Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
| | - Le Li
- Institute of Medical Research, Northwestern Polytechnical University, Xi'an 710072, China; Department of Rehabilitation Medicine, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou 510030, China
| | - Jianying Zhou
- State Key Laboratory of Optoelectronic Materials and Technology, Guangdong Marine Laboratory, School of Physics, Sun Yat-sen University, Guangzhou 510275, China
| | - Rong Song
- Key Laboratory of Sensing Technology and Biomedical Instrument of Guangdong Province, School of Biomedical Engineering, Sun Yat-sen University, Guangzhou 510006, China
| |
Collapse
|
40
|
Rzepka AM, Hussey KJ, Maltz MV, Babin K, Wilcox LM, Culham JC. Familiar size affects perception differently in virtual reality and the real world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210464. [PMID: 36511414 PMCID: PMC9745877 DOI: 10.1098/rstb.2021.0464] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2022] [Accepted: 08/10/2022] [Indexed: 12/15/2022] Open
Abstract
The promise of virtual reality (VR) as a tool for perceptual and cognitive research rests on the assumption that perception in virtual environments generalizes to the real world. Here, we conducted two experiments to compare size and distance perception between VR and physical reality (Maltz et al. 2021 J. Vis. 21, 1-18). In experiment 1, we used VR to present dice and Rubik's cubes at their typical sizes or reversed sizes at distances that maintained a constant visual angle. After viewing the stimuli binocularly (to provide vergence and disparity information) or monocularly, participants manually estimated perceived size and distance. Unlike physical reality, where participants relied less on familiar size and more on presented size during binocular versus monocular viewing, in VR participants relied heavily on familiar size regardless of the availability of binocular cues. In experiment 2, we demonstrated that the effects in VR generalized to other stimuli and to a higher quality VR headset. These results suggest that the use of binocular cues and familiar size differs substantially between virtual and physical reality. A deeper understanding of perceptual differences is necessary before assuming that research outcomes from VR will generalize to the real world. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Collapse
Affiliation(s)
- Anna M. Rzepka
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
| | - Kieran J. Hussey
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
| | - Margaret V. Maltz
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
| | - Karsten Babin
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
| | - Laurie M. Wilcox
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
| | - Jody C. Culham
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
| |
Collapse
|
41
|
Johnsdorf M, Kisker J, Gruber T, Schöne B. Comparing encoding mechanisms in realistic virtual reality and conventional 2D laboratory settings: Event-related potentials in a repetition suppression paradigm. Front Psychol 2023; 14:1051938. [PMID: 36777234 PMCID: PMC9912617 DOI: 10.3389/fpsyg.2023.1051938] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 01/06/2023] [Indexed: 01/28/2023] Open
Abstract
Although the human brain is adapted to function within three-dimensional environments, conventional laboratory research commonly investigates cognitive mechanisms in a reductionist approach using two-dimensional stimuli. However, findings regarding mnemonic processes indicate that realistic experiences in Virtual Reality (VR) are stored in richer and more intertwined engrams than those obtained from the conventional laboratory. Our study aimed to further investigate the generalizability of laboratory findings and to differentiate whether the processes underlying memory formation differ between VR and the conventional laboratory already in early encoding stages. Therefore, we investigated the Repetition Suppression (RS) effect as a correlate of the earliest instance of mnemonic processes under conventional laboratory conditions and in a realistic virtual environment. Analyses of event-related potentials (ERPs) indicate that the ERP deflections at several electrode clusters were lower in VR compared to the PC condition. These results indicate an optimized distribution of cognitive resources in realistic contexts. The typical RS effect was replicated under both conditions at most electrode clusters for a late time window. Additionally, a specific RS effect was found in VR at anterior electrodes for a later time window, indicating more extensive encoding processes in VR compared to the laboratory. Specifically, electrotomographic results (VARETA) indicate multimodal integration involving a broad cortical network and higher cognitive processes during the encoding of realistic objects. Our data suggest that object perception under realistic conditions, in contrast to the conventional laboratory, requires multisensory integration involving an interconnected functional system, facilitating the formation of intertwined memory traces in realistic environments.
Collapse
|
42
|
Suomala J, Kauttonen J. Computational meaningfulness as the source of beneficial cognitive biases. Front Psychol 2023; 14:1189704. [PMID: 37205079 PMCID: PMC10187636 DOI: 10.3389/fpsyg.2023.1189704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2023] [Accepted: 04/05/2023] [Indexed: 05/21/2023] Open
Abstract
The human brain has evolved to solve the problems it encounters in multiple environments. In solving these challenges, it forms mental simulations of multidimensional information about the world. These processes produce context-dependent behaviors. The brain, as an overparameterized modeling organ, is an evolutionary solution for producing behavior in a complex world. One of the most essential characteristics of living creatures is that they compute the value of the information they receive from external and internal contexts. As a result of this computation, a creature can behave in optimal ways in each environment. Whereas most other living creatures compute almost exclusively biological values (e.g., how to get food), the human as a cultural creature computes meaningfulness from the perspective of one's own activity. Computational meaningfulness refers to the process by which the human brain makes the situation at hand comprehensible to the individual so that they know how to behave optimally. This paper challenges the bias-centric approach of behavioral economics by exploring the different possibilities opened up by computational meaningfulness and by taking wider perspectives into account. We concentrate on confirmation bias and the framing effect as behavioral-economics examples of cognitive biases. We conclude that, from the computational meaningfulness perspective, the use of these biases is an indispensable property of an optimally designed computational system such as the human brain. From this perspective, cognitive biases can be rational under some conditions. Whereas the bias-centric approach relies on small-scale interpretable models that include only a few explanatory variables, the computational meaningfulness perspective emphasizes behavioral models that allow multiple variables. People are used to working in multidimensional and varying environments. The human brain is at its best in such environments, and scientific study should increasingly take place in situations simulating the real environment. By using naturalistic stimuli (e.g., videos and VR), we can create more realistic, life-like contexts for research purposes and analyze the resulting data using machine learning algorithms. In this manner, we can better explain, understand and predict human behavior and choice in different contexts.
Collapse
Affiliation(s)
- Jyrki Suomala
- Department of NeuroLab, Laurea University of Applied Sciences, Vantaa, Finland
- *Correspondence: Jyrki Suomala,
| | - Janne Kauttonen
- Competences, RDI and Digitalization, Haaga-Helia University of Applied Sciences, Helsinki, Finland
| |
Collapse
|
43
|
Schöne B. Commentary: A review on the role of affective stimuli in event-related frontal alpha asymmetry. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.994071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
|
44
|
Maguire EA. Does memory research have a realistic future? Trends Cogn Sci 2022; 26:1043-1046. [PMID: 36207261 DOI: 10.1016/j.tics.2022.07.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Accepted: 07/18/2022] [Indexed: 11/11/2022]
Abstract
How do we remember our past experiences? This question remains stubbornly resistant to resolution. The next 25 years may see significant traction on this and other outstanding issues if memory researchers capitalise on exciting technological developments that allow embodied cognition to be studied in ways that closely approximate real life.
Collapse
Affiliation(s)
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, Department of Imaging Neuroscience, UCL Queen Square Institute of Neurology, University College London, 12 Queen Square, London WC1N 3AR, UK.
| |
Collapse
|
45
|
Abstract
Photography is often understood as an objective recording of light measurements, in contrast with the subjective nature of painting. This article argues that photography entails making the same kinds of choices of color, tone, and perspective as in painting, and surveys examples from film photography and smartphone cameras. Hence, understanding picture perception requires treating photography as just one way to make pictures. More research is needed to understand the effects of these choices on pictorial perception, which in turn could lead to the design of new imaging techniques.
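As a hypothetical illustration of the kind of rendering choice the article discusses (not an example drawn from the article itself), the short sketch below applies two different tone curves to the same simulated luminance values; the curve parameters are arbitrary assumptions.

```python
# Hypothetical sketch: two tone-curve choices applied to the same "measured"
# scene produce visibly different pictures, illustrating that a photograph
# reflects rendering decisions, not raw light measurements alone.
import numpy as np

linear = np.linspace(0.0, 1.0, 256)                # simulated scene luminances in [0, 1]
contrasty = np.clip(1.2 * linear ** 0.8, 0, 1)     # one possible rendering choice
flat = np.clip(0.9 * linear ** 1.2, 0, 1)          # another possible rendering choice

# Same midtone input (~0.5) maps to different output values under each choice.
print("contrasty rendering of midtone:", contrasty[128])
print("flat rendering of midtone:     ", flat[128])
```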
Collapse
Affiliation(s)
- Aaron Hertzmann
- Adobe Research, San Francisco, CA, USA
- www.dgp.toronto.edu/~hertzman
| |
Collapse
|
46
|
Knights E, Smith FW, Rossit S. The role of the anterior temporal cortex in action: evidence from fMRI multivariate searchlight analysis during real object grasping. Sci Rep 2022; 12:9042. [PMID: 35662252 PMCID: PMC9167815 DOI: 10.1038/s41598-022-12174-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 04/29/2022] [Indexed: 12/20/2022] Open
Abstract
Intelligent manipulation of handheld tools marks a major discontinuity between humans and our closest ancestors. Here we identified neural representations about how tools are typically manipulated within left anterior temporal cortex, by shifting a searchlight classifier through whole-brain real action fMRI data when participants grasped 3D-printed tools in ways considered typical for use (i.e., by their handle). These neural representations were automatically evoked, as task performance did not require semantic processing. In fact, findings from a behavioural motion-capture experiment confirmed that actions with tools (relative to non-tool objects) incurred additional processing costs, as would be expected if semantic areas are being automatically engaged. These results substantiate theories of semantic cognition that claim the anterior temporal cortex combines sensorimotor and semantic content for advanced behaviours like tool manipulation.
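The searchlight technique referred to here can be sketched in a few lines. The following is a self-contained toy example of the general approach (not the authors' pipeline): a small sphere is slid across a simulated volume and a classifier is cross-validated on the voxels inside each sphere; the volume size, trial labels, and parameters are all simulated assumptions.

```python
# Hypothetical searchlight sketch on simulated data: cross-validated decoding
# accuracy is computed per voxel from the voxels inside a small sphere.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
shape = (10, 10, 10)                         # toy brain volume
n_trials = 60
y = np.repeat([0, 1], n_trials // 2)         # e.g. typical vs. atypical grasp (assumed labels)
data = rng.normal(size=(n_trials,) + shape)
data[y == 1, 4:7, 4:7, 4:7] += 0.8           # informative voxels placed in the centre

radius = 1                                   # sphere radius in voxels
scores = np.zeros(shape)
coords = np.array(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
for x in range(shape[0]):
    for yy in range(shape[1]):
        for z in range(shape[2]):
            dist = np.sqrt(((coords - np.array([x, yy, z]).reshape(3, 1, 1, 1)) ** 2).sum(0))
            sphere = dist <= radius          # voxels within the current searchlight
            X = data[:, sphere]              # trials x voxels-in-sphere
            scores[x, yy, z] = cross_val_score(LinearSVC(), X, y, cv=5).mean()

print("peak decoding accuracy:", scores.max())   # should exceed ~0.5 chance near the centre
```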
Collapse
Affiliation(s)
- Ethan Knights
- School of Psychology, University of East Anglia, Norwich, UK
| | - Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK
| | | |
Collapse
|
47
|
Tang Z, Liu X, Huo H, Tang M, Liu T, Wu Z, Qiao X, Chen D, An R, Dong Y, Fan L, Wang J, Du X, Fan Y. The role of low-frequency oscillations in three-dimensional perception with depth cues in virtual reality. Neuroimage 2022; 257:119328. [PMID: 35605766 DOI: 10.1016/j.neuroimage.2022.119328] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 05/15/2022] [Accepted: 05/19/2022] [Indexed: 10/18/2022] Open
Abstract
Currently, vision-related neuroscience studies are undergoing a trend from simplified image stimuli toward more naturalistic stimuli. Virtual reality (VR), as an emerging technology for visual immersion, provides more depth cues for three-dimensional (3D) presentation than a two-dimensional (2D) image. It is still unclear whether the depth cues used to create 3D visual perception modulate specific cortical activation. Here, we constructed two visual stimuli, presented by stereoscopic vision in VR and by graphical projection as a 2D image, respectively, and used electroencephalography to examine neural oscillations and their functional connectivity during 3D perception. We find that neural oscillations in stereoscopic vision are specific to the delta and theta bands and that functional connectivity in both bands increases in cortical areas related to visual pathways. These findings indicate that low-frequency oscillations play an important role in 3D perception with depth cues.
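For readers unfamiliar with band-limited EEG measures, the following is a minimal, hypothetical sketch of estimating delta- and theta-band power from a simulated channel with Welch's method (not the authors' analysis pipeline); the sampling rate and signal are assumptions.

```python
# Hypothetical sketch: delta- and theta-band power of one simulated EEG channel.
import numpy as np
from scipy.signal import welch

fs = 500                                        # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
# Simulated channel: 2 Hz (delta) and 6 Hz (theta) rhythms plus noise.
eeg = (2.0 * np.sin(2 * np.pi * 2 * t)
       + 1.0 * np.sin(2 * np.pi * 6 * t)
       + np.random.default_rng(0).normal(scale=0.5, size=t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # power spectral density

def band_power(fmin, fmax):
    sel = (freqs >= fmin) & (freqs < fmax)
    return np.trapz(psd[sel], freqs[sel])       # integrate PSD over the band

print("delta (1-4 Hz) power:", band_power(1, 4))
print("theta (4-8 Hz) power:", band_power(4, 8))
```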
Collapse
Affiliation(s)
- Zhili Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Xiaoyu Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China.
| | - Hongqiang Huo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Min Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Tao Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Zhixin Wu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Xiaofeng Qiao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Duo Chen
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Ran An
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Ying Dong
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Linyuan Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Jinghui Wang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Xin Du
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
| | - Yubo Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education; Beijing Advanced Innovation Center for Biomedical Engineering; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China; School of Medical Science and Engineering Medicine, Beihang University, Beijing 100083, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China.
| |
Collapse
|
48
|
Kiefer CM, Ito J, Weidner R, Boers F, Shah NJ, Grün S, Dammers J. Revealing Whole-Brain Causality Networks During Guided Visual Searching. Front Neurosci 2022; 16:826083. [PMID: 35250461 PMCID: PMC8894880 DOI: 10.3389/fnins.2022.826083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 01/17/2022] [Indexed: 11/24/2022] Open
Abstract
In our daily lives, we use eye movements to actively sample visual information from our environment ("active vision"). However, little is known about how the underlying mechanisms are affected by goal-directed behavior. In a study of 31 participants, magnetoencephalography was combined with eye-tracking technology to investigate how interregional interactions in the brain change when participants engage in two distinct forms of active vision: freely viewing natural images or performing a guided visual search. Regions of interest with significant fixation-related evoked activity (FRA) were identified with spatiotemporal cluster permutation testing. Using generalized partial directed coherence, we show that, in response to fixation onset, a bilateral cluster consisting of four regions (posterior insula, transverse temporal gyri, superior temporal gyrus, and supramarginal gyrus) formed a highly connected network during free viewing. A comparable network also emerged in the right hemisphere during the search task, with the right supramarginal gyrus acting as a central node for information exchange. The results suggest that all four regions are vital to visual processing and guiding attention. Furthermore, the right supramarginal gyrus was the only region where activity during fixations on the search target was significantly negatively correlated with search response times. Based on our findings, we hypothesize that, following a fixation, the right supramarginal gyrus supplies the right supplementary eye field (SEF) with new information to update the priority map guiding the eye movements during the search task.
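The cluster-based permutation logic mentioned here can be illustrated with a simplified, time-only toy example (the study used a spatiotemporal variant over MEG sensors); the data below are simulated, and the use of MNE-Python's permutation_cluster_1samp_test is an illustrative assumption, not the authors' exact code.

```python
# Hypothetical sketch: cluster-based permutation test on simulated
# fixation-locked data (time dimension only).
import numpy as np
from mne.stats import permutation_cluster_1samp_test

rng = np.random.default_rng(0)
n_subjects, n_times = 31, 200
data = rng.normal(size=(n_subjects, n_times))
data[:, 80:120] += 0.6                          # simulated fixation-related effect

t_obs, clusters, cluster_pvals, _ = permutation_cluster_1samp_test(
    data, n_permutations=1000, out_type="mask", seed=0
)
print("smallest cluster p-value:", cluster_pvals.min())
print("significant clusters (p < .05):", int((cluster_pvals < 0.05).sum()))
```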
Collapse
Affiliation(s)
- Christian M. Kiefer
- Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany
- Faculty of Mathematics, Computer Science and Natural Sciences, RWTH Aachen University, Aachen, Germany
- Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Junji Ito
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Ralph Weidner
- Institute of Neuroscience and Medicine (INM-3), Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Frank Boers
- Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
| | - N. Jon Shah
- Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute of Neuroscience and Medicine (INM-11), Jülich Aachen Research Alliance (JARA), Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Aachen Research Alliance (JARA)-Brain – Translational Medicine, Aachen, Germany
- Department of Neurology, University Hospital RWTH Aachen, Aachen, Germany
| | - Sonja Grün
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Aachen Research Alliance (JARA)-Brain – Institute Brain Structure and Function, Institute of Neuroscience and Medicine (INM-10), Forschungszentrum Jülich GmbH, Jülich, Germany
- Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
| | - Jürgen Dammers
- Institute of Neuroscience and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Germany
| |
Collapse
|
49
|
Pazhoohi F, Jacobs OLE, Kingstone A. Contrapposto Pose Influences Perceptions of Attractiveness, Masculinity, and Dynamicity of Male Statues from Antiquity. Evol Psychol Sci 2022. [DOI: 10.1007/s40806-021-00310-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
50
|
Daprati E, Balestrucci P, Nico D. Do graspable objects always leave a motor signature? A study on memory traces. Exp Brain Res 2022; 240:3193-3206. [PMID: 36271939 PMCID: PMC9678995 DOI: 10.1007/s00221-022-06487-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 10/14/2022] [Indexed: 12/30/2022]
Abstract
Several studies have reported the existence of reciprocal interactions between the type of motor activity physically performed on objects and the conceptual knowledge that is retained of them. Whether covert motor activity has a similar effect is less clear. Certainly, objects are strong triggers for actions, and motor components can make the associated concepts more memorable. However, the addition of an action-related memory trace may not always be automatic and could rather depend on 'how' objects are encountered. To test this hypothesis, we compared memory for objects that passive observers experienced as verbal labels (the word describing them), visual images (color photographs) and actions (pantomimes of object use). We predicted that the more direct the involvement of action-related representations, the more effective would be the addition of a motor code to the experience and the more accurate would be the recall. Results showed that memory for objects presented as words, i.e., a format that might only indirectly prime the sensorimotor system, was generally less accurate compared to memory for objects presented as photographs or pantomimes, which are more likely to directly elicit motor simulation processes. In addition, free recall of objects experienced as pantomimes was more accurate when these items afforded actions directed towards one's body than when they afforded actions directed away from the body. We propose that covert motor activity can contribute to objects' memory, but the beneficial addition of a motor code to the experience is not necessarily automatic. An advantage is more likely to emerge when the observer is induced to take a first-person stance during the encoding phase, as may happen for objects affording actions directed towards the body, which obviously carry more relevance for the actor.
Collapse
Affiliation(s)
- Elena Daprati
- Dipartimento di Medicina dei Sistemi, Università di Roma Tor Vergata, Via Montpellier 1, 00133 Rome, Italy
| | - Priscilla Balestrucci
- Faculty for Computer Science, Engineering, and Psychology, Applied Cognitive Psychology, Ulm University, 89081 Ulm, Germany
| | - Daniele Nico
- Dipartimento di Psicologia, Università di Roma la Sapienza, 00185 Rome, Italy
| |
Collapse
|