1. Fairchild GT, Holler DE, Fabbri S, Gomez MA, Walsh-Snow JC. Naturalistic Object Representations Depend on Distance and Size Cues. bioRxiv 2024:2024.03.16.585308. [PMID: 38559105; PMCID: PMC10980039; DOI: 10.1101/2024.03.16.585308]
Abstract
Egocentric distance and real-world size are important cues for object perception and action. Nevertheless, most studies of human vision rely on two-dimensional pictorial stimuli that convey ambiguous distance and size information. Here, we use fMRI to test whether pictures are represented differently in the human brain from real, tangible objects that convey unambiguous distance and size cues. Participants directly viewed stimuli in two display formats (real objects and matched printed pictures of those objects) presented at different egocentric distances (near and far). We measured the effects of format and distance on fMRI response amplitudes and response patterns. We found that fMRI response amplitudes in the lateral occipital and posterior parietal cortices were stronger overall for real objects than for pictures. In these areas and many others, including regions involved in action guidance, responses to real objects were stronger for near vs. far stimuli, whereas distance had little effect on responses to pictures, suggesting that distance determines relevance to action for real objects, but not for pictures. Although stimulus distance especially influenced response patterns in dorsal areas that operate in the service of visually guided action, distance also modulated representations in ventral cortex, where object responses are thought to remain invariant across contextual changes. We observed object size representations for both stimulus formats in ventral cortex but predominantly for real objects in dorsal cortex. Together, these results demonstrate that whether brain responses reflect physical object characteristics depends on whether the experimental stimuli convey unambiguous information about those characteristics.
Significance Statement: Classic frameworks of vision attribute perception of inherent object characteristics, such as size, to the ventral visual pathway, and processing of spatial characteristics relevant to action, such as distance, to the dorsal visual pathway. However, these frameworks are based on studies that used projected images of objects whose actual size and distance from the observer were ambiguous. Here, we find that when object size and distance information in the stimulus is less ambiguous, these characteristics are widely represented in both visual pathways. Our results provide valuable new insights into the brain representations of objects and their various physical attributes in the context of naturalistic vision.
2. Chow JK, Palmeri TJ, Gauthier I. Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev 2024. [PMID: 38381302; DOI: 10.3758/s13423-024-02471-x]
Abstract
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, and these have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities; while there are mechanisms that may generalize across categories, tasks, and modalities, there are still other mechanisms that are distinct between modalities.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Thomas J Palmeri
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA.
3. Karimpur H, Wolf C, Fiehler K. The (Un)ideal Physicist: How Humans Rely on Object Interaction for Friction Estimates. Psychol Sci 2024; 35:191-201. [PMID: 38252798; DOI: 10.1177/09567976231221789]
Abstract
To estimate object properties such as mass or friction, our brain relies on visual information to efficiently compute approximations. The role of sensorimotor feedback, however, is not well understood. Here we tested healthy adults (N = 79) on an inclined-plane problem (how far a plane can be tilted before an object starts to slide) and contrasted an interaction group with observation groups, who gauged the forces involved by watching objects being manipulated. We created objects of different masses and levels of friction and asked participants to estimate the critical tilt angle after pushing an object, lifting it, or both. Estimates correlated with applied forces and were biased toward object mass, with higher estimates for heavier objects. Our findings highlight that inferences about physical object properties are tightly linked to the human sensorimotor system and that humans integrate sensorimotor information even at the risk of nonveridical perceptual estimates.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen
- Center for Mind, Brain, and Behavior, University of Marburg and Justus Liebig University Giessen
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen
- Center for Mind, Brain, and Behavior, University of Marburg and Justus Liebig University Giessen
4. Wang Y, Gao J, Zhu F, Liu X, Wang G, Zhang Y, Deng Z, Chen J. Internal representations of the canonical real-world distance of objects. J Vis 2024; 24:14. [PMID: 38411955; PMCID: PMC10910641; DOI: 10.1167/jov.24.2.14]
Abstract
In the real world, every object has its canonical distance from observers. For example, airplanes are usually far away from us, whereas eyeglasses are close to us. Do we have an internal representation of the canonical real-world distance of objects in our cognitive system? If we do, does the canonical distance influence the perceived size of an object? Here, we conducted two experiments to address these questions. In Experiment 1, we first asked participants to rate the canonical distance of objects. Participants gave consistent ratings to each object. Then, pairs of object images were presented one by one in a trial, and participants were asked to rate the distance of the second object (i.e., a priming paradigm). We found that the rating of the perceived distance of the target object was modulated by the canonical real-world distance of the prime. In Experiment 2, participants were asked to judge the perceived size of canonically near or far objects that were presented at the converging end (i.e., far location) or the opening end (i.e., near location) of a background image with converging lines. We found that regardless of the presentation location, participants perceived the canonically near object as smaller than the canonically far object even though their retinal and real-world sizes were matched. In all, our results suggest that we have an internal representation of the canonical real-world distance of objects, which affects the perceived distance of subsequent objects and the perceived size of the objects themselves.
Affiliation(s)
- Yijin Wang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Jie Gao
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Fuying Zhu
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Xiaoli Liu
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Gexiu Wang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Yichong Zhang
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, China
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou, China
- http://juanchenpsy.scnu.edu.cn/
5. Gomez MA, Snow JC. How to construct liquid-crystal spectacles to control vision of real-world objects and environments. Behav Res Methods 2024; 56:563-576. [PMID: 36737581; PMCID: PMC10424568; DOI: 10.3758/s13428-023-02059-8]
Abstract
A major challenge in studying naturalistic vision lies in controlling stimulus and scene viewing time. This is especially the case for studies using real-world objects as stimuli (rather than computerized images) because real objects cannot be "onset" and "offset" in the same way that images can be. Since the late 1980s, one solution to this problem has been to have the observer wear electro-optic spectacles with computer-controlled liquid-crystal lenses that switch between transparent ("open") and translucent ("closed") states. Unfortunately, the commercially available glasses (PLATO Visual Occlusion Spectacles) carry a high price tag, the hardware is fragile, and the glasses cannot be customized. This led us to explore how to manufacture liquid-crystal occlusion glasses in our own laboratory. Here, we share the products of our work by providing step-by-step instructions for researchers to design, build, operate, and test liquid-crystal glasses for use in experimental contexts. The glasses can be assembled with minimal technical knowledge using readily available components, and they can be customized for different populations and applications. The glasses are robust, and they can be produced at a fraction of the cost of commercial alternatives. Tests of reliability and temporal accuracy show that the performance of our laboratory prototype was comparable to that of the PLATO glasses. We discuss the results of our work with respect to implications for promoting rigor and reproducibility, potential use cases, comparisons with other liquid-crystal shutter glasses, and how users can find information regarding future updates and developments.
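The temporal-accuracy tests mentioned in this abstract amount to comparing commanded shutter transitions against externally measured ones (e.g., from a photodiode). A minimal sketch of that comparison in Python; the function name and the timestamp data are illustrative, not taken from the paper:

```python
from statistics import mean, stdev

def shutter_timing_stats(commanded_ms, measured_ms):
    """Summarize shutter latency: for each commanded open/close
    transition, compare its timestamp with the measured (e.g.,
    photodiode-detected) transition time."""
    if len(commanded_ms) != len(measured_ms):
        raise ValueError("each commanded transition needs one measurement")
    latencies = [m - c for c, m in zip(commanded_ms, measured_ms)]
    return {
        "mean_latency_ms": mean(latencies),
        "sd_latency_ms": stdev(latencies) if len(latencies) > 1 else 0.0,
        "max_latency_ms": max(latencies),
    }

# Example: four commanded transitions, each measured a few ms later.
stats = shutter_timing_stats([0, 1000, 2000, 3000], [4, 1003, 2005, 3004])
```

Low mean latency with a small standard deviation is what "comparable to the PLATO glasses" would look like in such a test; a large spread would indicate unreliable switching even if the mean looked acceptable.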
Affiliation(s)
- Michael A Gomez
- Department of Psychology, The University of Nevada, Reno, 1664 N. Virginia Street, Reno, NV, USA.
- Psychology Department, Clovis Community College, 10309 N. Willow Ave, Fresno, CA, USA.
- Jacqueline C Snow
- Department of Psychology, The University of Nevada, Reno, 1664 N. Virginia Street, Reno, NV, USA.
6. Noviello S, Kamari Songhorabadi S, Deng Z, Zheng C, Chen J, Pisani A, Franchin E, Pierotti E, Tonolli E, Monaco S, Renoult L, Sperandio I. Temporal features of size constancy for perception and action in a real-world setting: A combined EEG-kinematics study. Neuropsychologia 2024; 193:108746. [PMID: 38081353; DOI: 10.1016/j.neuropsychologia.2023.108746]
Abstract
A stable representation of object size, in spite of continuous variations in retinal input due to changes in viewing distance, is critical for perceiving and acting in a real 3D world. In fact, our perceptual and visuo-motor systems exhibit size and grip constancies in order to compensate for the natural shrinkage of the retinal image with increased distance. The neural basis of this size-distance scaling remains largely unknown, although multiple lines of evidence suggest that size-constancy operations might take place remarkably early, already at the level of the primary visual cortex. In this study, we examined for the first time the temporal dynamics of size constancy during perception and action by using a combined measurement of event-related potentials (ERPs) and kinematics. Participants were asked to maintain their gaze steadily on a fixation point and perform either a manual estimation or a grasping task towards disks of different sizes placed at different distances. Importantly, the physical size of the target was scaled with distance to yield a constant retinal angle. Meanwhile, we recorded EEG data from 64 scalp electrodes and hand movements with a motion capture system. We focused on the first positive-going visual evoked component peaking at approximately 90 ms after stimulus onset. We found earlier latencies and greater amplitudes in response to bigger than smaller disks of matched retinal size, regardless of the task. In line with the ERP results, manual estimates and peak grip apertures were larger for the bigger targets. We also found task-related differences at later stages of processing from a cluster of central electrodes, whereby the mean amplitude of the P2 component was greater for manual estimation than grasping. Taken together, these findings provide novel evidence that size constancy for real objects at real distances occurs at the earliest cortical stages and that early visual processing does not change as a function of task demands.
Affiliation(s)
- Simona Noviello
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Zhiqing Deng
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
- Chao Zheng
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China
- Juan Chen
- School of Psychology, South China Normal University, Guangzhou, Guangdong Province, China; Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China
- Angelo Pisani
- Department of Psychology "Renzo Canestrari", University of Bologna, Italy
- Elena Franchin
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy
- Enrica Pierotti
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Elena Tonolli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Simona Monaco
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, TN, Italy
- Louis Renoult
- School of Psychology, University of East Anglia, Norwich, UK
- Irene Sperandio
- Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, Italy.
7. Chen J, Paciocco JU, Deng Z, Culham JC. Human Neuroimaging Reveals Differences in Activation and Connectivity between Real and Pantomimed Tool Use. J Neurosci 2023; 43:7853-7867. [PMID: 37722847; PMCID: PMC10648550; DOI: 10.1523/jneurosci.0068-23.2023]
Abstract
Because the sophistication of tool use is vastly enhanced in humans compared with other species, a rich understanding of its neural substrates requires neuroscientific experiments in humans. Although functional magnetic resonance imaging (fMRI) has enabled many studies of tool-related neural processing, surprisingly few studies have examined real tool use. Rather, because of the many constraints of fMRI, past research has typically used proxies such as pantomiming despite neuropsychological dissociations between pantomimed and real tool use. We compared univariate activation levels, multivariate activation patterns, and functional connectivity when participants used real tools (a plastic knife or fork) to act on a target object (scoring or poking a piece of putty) or pantomimed the same actions with similar movements and timing. During the Execute phase, we found higher activation for real versus pantomimed tool use in sensorimotor regions and the anterior supramarginal gyrus, and higher activation for pantomimed than real tool use in classic tool-selective areas. Although no regions showed significant differences in activation magnitude during the Plan phase, activation patterns differed between real versus pantomimed tool use and motor cortex showed differential functional connectivity. These results reflect important differences between real tool use, a closed-loop process constrained by real consequences, and pantomimed tool use, a symbolic gesture that requires conceptual knowledge of tools but with limited consequences. These results highlight the feasibility and added value of employing natural tool use tasks in functional imaging, inform neuropsychological dissociations, and advance our theoretical understanding of the neural substrates of natural tool use.
Significance Statement: The study of tool use offers unique insights into how the human brain synthesizes perceptual, cognitive, and sensorimotor functions to accomplish a goal.
We suggest that the reliance on proxies, such as pantomiming, for real tool use has (1) overestimated the contribution of cognitive networks, because of the indirect, symbolic nature of pantomiming; and (2) underestimated the contribution of sensorimotor networks necessary for predicting and monitoring the consequences of real interactions between hand, tool, and the target object. These results enhance our theoretical understanding of the full range of human tool functions and inform our understanding of neuropsychological dissociations between real and pantomimed tool use.
Affiliation(s)
- Juan Chen
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China
- Key Laboratory of Brain, Cognition and Education Sciences, South China Normal University, Ministry of Education, Guangzhou, Guangdong 510631, China
- Joseph U Paciocco
- Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada
- Zhiqing Deng
- Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong 510631, China
- Jody C Culham
- Neuroscience Program, University of Western Ontario, London, Ontario N6A 5B7, Canada
- Department of Psychology, University of Western Ontario, London, Ontario N6A 5B7, Canada
8. Reschechtko S, Gnanaseelan C, Pruszynski JA. Reach Corrections Toward Moving Objects are Faster Than Reach Corrections Toward Instantaneously Switching Targets. Neuroscience 2023; 526:135-143. [PMID: 37391122; DOI: 10.1016/j.neuroscience.2023.06.021]
Abstract
Visually guided reaching is a common motor behavior that engages subcortical circuits to mediate rapid corrections. Although these neural mechanisms have evolved for interacting with the physical world, they are often studied in the context of reaching toward virtual targets on a screen. These targets often change position instantaneously, disappearing from one place and reappearing in another. In this study, we instructed participants to perform rapid reaches to physical objects that changed position in different ways. In one condition, the objects moved very rapidly from one place to another. In the other condition, illuminated targets instantaneously switched position by being extinguished in one position and illuminated in another. Participants were consistently faster in correcting their reach trajectories when the object moved continuously.
Affiliation(s)
- Sasha Reschechtko
- School of Exercise & Nutritional Sciences, San Diego State University, 351 ENS Building, 5500 Campanile Dr., San Diego, CA 92182, USA; Western BrainsCAN, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Brain and Mind Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada.
- Cynthiya Gnanaseelan
- Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada
- J Andrew Pruszynski
- Brain and Mind Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Robarts Research Institute, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Physiology & Pharmacology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada; Department of Psychology, Western University, 1151 Richmond St., London, ON N6A 3K7, Canada
9. Tool use acquisition induces a multifunctional interference effect during object processing: evidence from the sensorimotor mu rhythm. Exp Brain Res 2023; 241:1145-1157. [PMID: 36920527; DOI: 10.1007/s00221-023-06595-9]
Abstract
A fundamental characteristic of human development is acquiring and accumulating tool use knowledge through observation and sensorimotor experience. Recent studies showed that, in children and adults, different action possibilities for grasping objects to move them versus to use them generate a conflict that extinguishes neural motor resonance phenomena during visual object processing. In this study, a training protocol coupled with EEG recordings was administered in virtual reality to healthy adults to evaluate whether a similar conflict occurs between novel tool use knowledge. Participants perceived and manipulated two novel 3D tools trained beforehand with either a single or a double usage. A weaker reduction of mu-band (10-13 Hz) power, accompanied by a reduced inter-trial phase coherence, was recorded during the perception of the tool associated with the double usage. These effects started within the first 200 ms of visual object processing and were predominantly recorded over the left motor system. Furthermore, interacting with the double-usage tool delayed reach-to-grasp movements. The results highlight a multifunctional interference effect, whereby tool use acquisition reduces the neural motor resonance phenomenon and inhibits the activation of the motor system during subsequent object recognition. These results imply that learned tool use information guides sensorimotor processing of objects.
10. Rzepka AM, Hussey KJ, Maltz MV, Babin K, Wilcox LM, Culham JC. Familiar size affects perception differently in virtual reality and the real world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210464. [PMID: 36511414; PMCID: PMC9745877; DOI: 10.1098/rstb.2021.0464]
Abstract
The promise of virtual reality (VR) as a tool for perceptual and cognitive research rests on the assumption that perception in virtual environments generalizes to the real world. Here, we conducted two experiments to compare size and distance perception between VR and physical reality (Maltz et al. 2021 J. Vis. 21, 1-18). In experiment 1, we used VR to present dice and Rubik's cubes at their typical sizes or reversed sizes at distances that maintained a constant visual angle. After viewing the stimuli binocularly (to provide vergence and disparity information) or monocularly, participants manually estimated perceived size and distance. Unlike physical reality, where participants relied less on familiar size and more on presented size during binocular versus monocular viewing, in VR participants relied heavily on familiar size regardless of the availability of binocular cues. In experiment 2, we demonstrated that the effects in VR generalized to other stimuli and to a higher quality VR headset. These results suggest that the use of binocular cues and familiar size differs substantially between virtual and physical reality. A deeper understanding of perceptual differences is necessary before assuming that research outcomes from VR will generalize to the real world. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Anna M. Rzepka
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Kieran J. Hussey
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Margaret V. Maltz
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Karsten Babin
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Laurie M. Wilcox
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Jody C. Culham
- Neuroscience Program, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada N6A 3K7
11. The Spatiotemporal Neural Dynamics of Object Recognition for Natural Images and Line Drawings. J Neurosci 2023; 43:484-500. [PMID: 36535769; PMCID: PMC9864561; DOI: 10.1523/jneurosci.1546-22.2022]
Abstract
Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings.
Significance Statement: When we see a line drawing, we effortlessly recognize it as an object in the world despite its simple and abstract style.
Here we asked to what extent this correspondence in perception is reflected in the brain. To answer this question, we measured how neural processing of objects depicted as photographs and line drawings with varying levels of detail (from natural images to abstract line drawings) evolves over space and time. We find broad commonalities in the spatiotemporal dynamics and the neural representations underlying the perception of photographs and even abstract drawings. These results indicate a shared basic mechanism supporting recognition of drawings and natural images.
12.
Abstract
This chapter explores the current state of the art in eye tracking within 3D virtual environments. It begins with the motivation for eye tracking in Virtual Reality (VR) in psychological research, followed by descriptions of the hardware and software used for presenting virtual environments as well as for tracking eye and head movements in VR. This is followed by a detailed description of an example project on eye and head tracking while observers look at 360° panoramic scenes. The example is illustrated with descriptions of the user interface and program excerpts to show the measurement of eye and head movements in VR. The chapter continues with fundamentals of data analysis, in particular methods for the determination of fixations and saccades when viewing spherical displays. We then extend these methodological considerations to determining the spatial and temporal coordination of the eyes and head in VR perception. The chapter concludes with a discussion of outstanding problems and future directions for conducting eye- and head-tracking research in VR. We hope that this chapter will serve as a primer for those intending to implement VR eye tracking in their own research.
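As one example of the analysis steps this chapter covers, fixation and saccade determination on spherical displays can be based on angular (great-circle) velocity between successive gaze directions rather than Euclidean screen distance. The sketch below is a minimal, hypothetical illustration; the sampling rate and velocity threshold are illustrative assumptions, not values from the chapter.

```python
# Velocity-based saccade detection for gaze directions on a sphere:
# the angular step between successive unit gaze vectors replaces the
# Euclidean pixel distance used for flat displays.
import numpy as np

def angular_velocity(gaze_dirs, fs):
    """Angular speed (deg/s) between successive unit gaze vectors.

    gaze_dirs: (n, 3) array of unit vectors (gaze in world space).
    fs: sampling rate in Hz.
    """
    dots = np.sum(gaze_dirs[:-1] * gaze_dirs[1:], axis=1)
    angles = np.degrees(np.arccos(np.clip(dots, -1.0, 1.0)))  # great-circle step
    return angles * fs

def classify_saccades(gaze_dirs, fs=90.0, threshold=100.0):
    """Label samples as saccade (True) when angular speed exceeds threshold deg/s."""
    vel = angular_velocity(gaze_dirs, fs)
    return np.concatenate([[False], vel > threshold])  # first sample has no velocity

# Toy trace: fixation, a rapid 30-degree gaze shift, then fixation again
theta = np.radians(np.concatenate([np.zeros(10), np.linspace(0, 30, 4), np.full(10, 30)]))
gaze = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
print(int(classify_saccades(gaze).sum()), "saccade samples detected")  # -> 3
```

Real pipelines would additionally smooth the velocity trace and merge or filter short events, but the spherical-geometry step shown here is the part that distinguishes VR analysis from conventional screen-based eye tracking.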
13.
Langridge RW, Marotta JJ. Use of remote data collection methodology to test for an illusory effect on visually guided cursor movements. Front Psychol 2022; 13:922381. [PMID: 36118434 PMCID: PMC9478591 DOI: 10.3389/fpsyg.2022.922381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Accepted: 08/01/2022] [Indexed: 11/13/2022] Open
Abstract
Investigating the influence of perception on the control of visually guided action typically involves controlled experimentation within the laboratory setting. When appropriate, however, behavioral research of this nature may benefit from the use of methods that allow for remote data collection outside of the lab. This study tested the feasibility of using remote data collection methods to explore the influence of perceived target size on visually guided cursor movements using the Ebbinghaus illusion. Participants completed the experiment remotely, using the trackpad of their personal laptop computers. The task required participants to click on a single circular target presented at either the left or right side of their screen as quickly and accurately as possible (Experiment 1), or to emphasize speed (Experiment 2) or accuracy (Experiment 3). On each trial the target was surrounded by small context circles, large context circles, or no context circles. Participants' judgments of the targets' perceived size were influenced by the illusion; however, the illusion failed to produce differences in click-point accuracy or movement time. Interestingly, the illusion appeared to affect participants' movement of the cursor toward the target: more directional changes were made when clicking the Perceived Large version of the illusion than the Perceived Small version. These results suggest that the planning of the cursor movement may have been influenced by the illusion while later stages of the movement were not, and that cursor movements directed toward targets perceived as smaller required less correction than those directed toward targets perceived as larger.
14.
Knights E, Smith FW, Rossit S. The role of the anterior temporal cortex in action: evidence from fMRI multivariate searchlight analysis during real object grasping. Sci Rep 2022; 12:9042. [PMID: 35662252 PMCID: PMC9167815 DOI: 10.1038/s41598-022-12174-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 04/29/2022] [Indexed: 12/20/2022] Open
Abstract
Intelligent manipulation of handheld tools marks a major discontinuity between humans and our closest ancestors. Here we identified neural representations of how tools are typically manipulated within left anterior temporal cortex, by shifting a searchlight classifier through whole-brain real-action fMRI data acquired while participants grasped 3D-printed tools in ways considered typical for use (i.e., by their handle). These neural representations were evoked automatically, since task performance did not require semantic processing. Indeed, findings from a behavioural motion-capture experiment confirmed that actions with tools (relative to non-tool objects) incurred additional processing costs, as would be expected if semantic areas were automatically engaged. These results substantiate theories of semantic cognition that claim the anterior temporal cortex combines sensorimotor and semantic content for advanced behaviours like tool manipulation.
Affiliation(s)
- Ethan Knights
- School of Psychology, University of East Anglia, Norwich, UK
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK
15.
Ras M, Wyrwa M, Stachowiak J, Buchwald M, Nowik AM, Kroliczak G. Complex tools and motor-to-mechanical transformations. Sci Rep 2022; 12:8041. [PMID: 35577883 PMCID: PMC9110343 DOI: 10.1038/s41598-022-12142-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Accepted: 04/27/2022] [Indexed: 12/24/2022] Open
Abstract
The ability to use complex tools is thought to depend on multifaceted motor-to-mechanical transformations within the left inferior parietal lobule (IPL), linked to cognitive control over compound actions. Here we show, using neuroimaging, that demanding transformations of finger movements into the proper mechanical movements of the functional parts of complex tools significantly engage the right, rather than the left, rostral IPL, along with the bilateral posterior-to-mid and left anterior intraparietal sulci. These findings emerged during the functional grasp and tool-use programming phase. The expected engagement of the left IPL was partly revealed by traditional region-of-interest analyses and by further modeling/estimation at the hand-independent level. Thus, our results point to a special role of the right IPL in supporting sensory-motor spatial mechanisms that enable effective control of the fingers in the skillful handling of complex tools. The resulting motor-to-mechanical transformations involve dynamic hand-centered to target-centered reference frame conversions indispensable for efficient interactions with the environment.
Collapse
16.
Sanz Diez P, Bosco A, Fattori P, Wahl S. Horizontal target size perturbations during grasping movements are described by subsequent size perception and saccade amplitude. PLoS One 2022; 17:e0264560. [PMID: 35290373 PMCID: PMC8923441 DOI: 10.1371/journal.pone.0264560] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 02/14/2022] [Indexed: 11/18/2022] Open
Abstract
Perception and action are essential in our day-to-day interactions with the environment. Despite the dual-stream theory of action and perception, it is now accepted that action and perception processes interact with each other. However, little is known about how unpredicted changes of target size during grasping actions affect perception. We assessed whether size perception and saccade amplitude were affected before and after grasping a target that changed its horizontal size during action execution, in the presence or absence of tactile feedback. We tested twenty-one participants in 4 blocks of 30 trials. Blocks were divided into two tactile feedback paradigms: tactile and non-tactile. Trials consisted of 3 sequential phases: pre-grasping size perception, grasping, and post-grasping size perception. During the pre- and post-phases, participants executed a saccade towards a horizontal bar and manually estimated the bar's size. During the grasping phase, participants were asked to execute a saccade towards the bar and to make a grasping action towards the screen. While grasping, 3 horizontal size perturbation conditions were applied: non-perturbation, shortening, and lengthening. Perturbations occurred on 30% of the trials, in which the bar was symmetrically shortened or lengthened by 33% of its original size. Participants' hand and eye positions were recorded by a motion capture system and a mobile eye-tracker, respectively. After grasping, in both tactile and non-tactile feedback paradigms, size estimation was significantly reduced in the lengthening (p = 0.002) and non-perturbation (p < 0.001) conditions, whereas shortening did not induce significant adjustments (p = 0.86). After grasping, saccade amplitude became significantly longer in shortening (p < 0.001) and significantly shorter in lengthening (p < 0.001). The non-perturbation condition did not display adjustments (p = 0.95).
Tactile feedback did not generate changes in the collected perceptual responses, but horizontal size perturbations did so, suggesting that all relevant target information used in the movement can be extracted from the post-action target perception.
Affiliation(s)
- Pablo Sanz Diez
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Annalisa Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute For Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- Siegfried Wahl
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Institute for Ophthalmic Research, Eberhard Karls University Tuebingen, Tuebingen, Germany
17.
Unilateral resection of both cortical visual pathways in a pediatric patient alters action but not perception. Neuropsychologia 2022; 168:108182. [PMID: 35182580 DOI: 10.1016/j.neuropsychologia.2022.108182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 12/21/2021] [Accepted: 02/08/2022] [Indexed: 11/23/2022]
Abstract
The human cortical visual system consists of two major pathways: a ventral pathway that subserves perception and a dorsal pathway that primarily subserves visuomotor control. Previous studies have found that children with cortical resections of the ventral visual pathway retain largely normal visuoperceptual abilities. Whether visually guided actions, supported by computations carried out by the dorsal pathway, follow a similar pattern of preservation remains unknown. To address this question, we examined visuoperceptual and visuomotor behaviors in a pediatric patient, TC, who underwent a cortical resection that included portions of the left ventral and dorsal pathways. We collected kinematic data when TC used her right and left hands to perceptually estimate the width of blocks that varied in width and length and, separately, to grasp the same blocks. TC's perceptual estimation performance was comparable to that of controls, independent of the hand used. In contrast, relative to controls, she showed reduced visuomotor sensitivity to object shape, and this was more evident when she grasped the objects with her contralesional right hand. These results provide novel evidence for a striking difference in the resilience of the two visual pathways to cortical injuries acquired in childhood.
18.
Campagnoli C, Hung B, Domini F. Explicit and implicit depth-cue integration: Evidence of systematic biases with real objects. Vision Res 2021; 190:107961. [PMID: 34757304 DOI: 10.1016/j.visres.2021.107961] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 09/28/2021] [Accepted: 10/03/2021] [Indexed: 11/27/2022]
Abstract
In previous studies using VR, we found evidence that 3D shape estimation follows a superadditivity rule of depth-cue combination, by which adding depth cues leads to greater perceived depth and, in principle, to depth overestimation. Superadditivity can be quantitatively accounted for by a normative theory of cue integration, by adapting a model termed Intrinsic Constraint (IC). As for its qualitative nature, it remains unclear whether superadditivity represents the genuine readout of depth-cue integration, as predicted by IC, or, alternatively, a byproduct of artificial virtual displays, which carry flatness cues that can bias depth estimates in a Bayesian fashion, or even just a way for observers to express that a scene "looks deeper" with more depth cues by explicitly inflating their depth judgments. In the present study, we addressed this question by testing whether the IC model's prediction of superadditivity generalizes to real-world settings. We asked participants to judge the perceived 3D shape of cardboard prisms in a matching task. To control for the potential interference of explicit reasoning, we also asked participants to reach-to-grasp the same objects, and we analyzed the in-flight grip size throughout the reach. We designed a novel technique to control binocular and monocular 3D cues independently, allowing depth information to be added or removed seamlessly. Even with real objects, participants exhibited a clear superadditivity effect in both tasks. Furthermore, the magnitude of this effect was accurately predicted by the IC model. These results confirm that superadditivity is an inherent feature of depth estimation.
Affiliation(s)
- Carlo Campagnoli
- School of Psychology, University of Leeds, Leeds, UK; Department of Psychology, Princeton University, Princeton, NJ, USA
- Bethany Hung
- The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Fulvio Domini
- Department of Cognitive, Linguistic and Psychological Science, Brown University, Providence, RI, USA
19.
Fairchild GT, Marini F, Snow JC. Graspability Modulates the Stronger Neural Signature of Motor Preparation for Real Objects vs. Pictures. J Cogn Neurosci 2021; 33:2477-2493. [PMID: 34407193 PMCID: PMC9946154 DOI: 10.1162/jocn_a_01771] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The cognitive and neural bases of visual perception are typically studied using pictures rather than real-world stimuli. Unlike pictures, real objects are actionable solids that can be manipulated with the hands. Recent evidence from human brain imaging suggests that neural responses to real objects differ from responses to pictures; however, little is known about the neural mechanisms that drive these differences. Here, we tested whether brain responses to real objects versus pictures are differentially modulated by the "in-the-moment" graspability of the stimulus. In human dorsal cortex, electroencephalographic responses show a "real object advantage" in the strength and duration of mu (μ) and low beta (β) rhythm desynchronization-well-known neural signatures of visuomotor action planning. We compared desynchronization for real tools versus closely matched pictures of the same objects, when the stimuli were positioned unoccluded versus behind a large transparent barrier that prevented immediate access to the stimuli. We found that, without the barrier in place, real objects elicited stronger μ and β desynchronization compared to pictures, both during stimulus presentation and after stimulus offset, replicating previous findings. Critically, however, with the barrier in place, this real object advantage was attenuated during the period of stimulus presentation, whereas the amplification in later periods remained. These results suggest that the "real object advantage" is driven initially by immediate actionability, whereas later differences perhaps reflect other, more inherent properties of real objects. The findings showcase how the use of richer multidimensional stimuli can provide a more complete and ecologically valid understanding of object vision.
20.
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583 PMCID: PMC10149139 DOI: 10.1016/j.tics.2021.02.008] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/22/2021] [Indexed: 10/21/2022]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada
21.
Langridge RW, Marotta JJ. Manipulation of physical 3-D and virtual 2-D stimuli: comparing digit placement and fixation position. Exp Brain Res 2021; 239:1863-1875. [PMID: 33860822 DOI: 10.1007/s00221-021-06101-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 03/30/2021] [Indexed: 11/28/2022]
Abstract
The visuomotor processes involved in grasping a 2-D target are known to be fundamentally different than those involved in grasping a 3-D object, and this has led to concerns regarding the generalizability of 2-D grasping research. This study directly compared participants' fixation positions and digit placement during interaction with either physical square objects or 2-D virtual versions of these objects. Participants were instructed to either simply grasp the stimulus or grasp and slide it to another location. Participants' digit placement and fixation positions did not significantly differ as a function of stimulus type when grasping in the center of the display. However, gaze and grasp positions shifted toward the near side of non-central virtual stimuli, while consistently remaining close to the horizontal midline of the physical stimulus. Participants placed their digits at less stable locations when grasping the virtual stimulus in comparison to the physical stimulus on the right side of the display, but this difference disappeared when grasping in the center and on the left. Similar outward shifts in digit placement and lowered fixations were observed when sliding both stimulus types, suggesting participants incorporated similar adjustments in grasp selection in anticipation of manipulation in both Physical and Virtual stimulus conditions. These results suggest that while fixation position and grasp point selection differed between stimulus type as a function of stimulus position, certain eye-hand coordinated behaviours were maintained when grasping both physical and virtual stimuli.
Affiliation(s)
- Ryan W Langridge
- Perception and Action Lab, Department of Psychology, University of Manitoba, 190 Dysart Rd, Winnipeg, MB, R3T-2N2, Canada.
- Jonathan J Marotta
- Perception and Action Lab, Department of Psychology, University of Manitoba, 190 Dysart Rd, Winnipeg, MB, R3T-2N2, Canada
22.
Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. 2D or not 2D? An fMRI study of how dogs visually process objects. Anim Cogn 2021; 24:1143-1151. [PMID: 33772693 DOI: 10.1007/s10071-021-01506-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 03/09/2021] [Accepted: 03/12/2021] [Indexed: 10/21/2022]
Abstract
Given their habitual use of screens, humans rarely consider potential differences between viewing two-dimensional (2D) stimuli and their real-world three-dimensional (3D) counterparts. Dogs also have access to many forms of screens and touchpads, with owners even subscribing to dog-directed content. Humans understand that 2D stimuli are representations of real-world objects, but do dogs? In canine cognition studies, 2D stimuli are almost always used to study what is normally 3D, like faces, and researchers may assume that 2D and 3D stimuli are represented in the brain the same way. Here, we used awake fMRI in 15 dogs to examine the neural mechanisms underlying dogs' perception of two- and three-dimensional objects after the dogs were trained on either two- or three-dimensional versions of the objects. Activation within reward-processing regions and parietal cortex of the dog brain to 2D and 3D versions of objects was determined by the dogs' training experience: dogs showed greater differential activation within the dimensionality on which they were trained. These results show that dogs do not automatically generalize between two- and three-dimensional versions of object stimuli and suggest that future research should consider the implicit assumptions made when using pictures or videos.
Affiliation(s)
- Ashley Prichard
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
- Raveena Chhibber
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
- Veronica Chiu
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
- Mark Spivak
- Comprehensive Pet Therapy, Inc, Sandy Springs, GA, 30328, USA
- Gregory S Berns
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
23.
Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. The mouth matters most: A functional magnetic resonance imaging study of how dogs perceive inanimate objects. J Comp Neurol 2021; 529:2987-2994. [PMID: 33745141 DOI: 10.1002/cne.25142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 02/24/2021] [Accepted: 03/15/2021] [Indexed: 11/12/2022]
Abstract
The perception and representation of objects in the world are foundational to all animals. The relative importance of objects' physical properties versus how the objects are interacted with continues to be debated. Neural evidence in humans and nonhuman primates suggests animate-inanimate and face-body dimensions of objects are represented in the temporal cortex. However, because primates have opposable thumbs and interact with objects in similar ways, the question remains as to whether this similarity represents the evolution of a common cognitive process or whether it reflects a similarity of physical interaction. Here, we used functional magnetic resonance imaging (fMRI) in dogs to test whether the type of interaction affects object processing in an animal that interacts primarily with its mouth. In Study 1, we identified object-processing regions of cortex by having dogs passively view movies of faces and objects. In Study 2, dogs were trained to interact with two new objects with either the mouth or the paw. Then, we measured responsivity in the object regions to the presentation of these objects. Mouth-objects elicited significantly greater activity in object regions than paw-objects. Mouth-objects were also associated with activity in somatosensory cortex, suggesting dogs were anticipating mouthing interactions. These findings suggest that object perception in dogs is affected by how dogs expect to interact with familiar objects.
Affiliation(s)
- Ashley Prichard
- Psychology Department, Emory University, Atlanta, Georgia, USA
- Veronica Chiu
- Psychology Department, Emory University, Atlanta, Georgia, USA
- Mark Spivak
- Comprehensive Pet Therapy, Inc., Sandy Springs, Georgia, USA
- Gregory S Berns
- Psychology Department, Emory University, Atlanta, Georgia, USA
24.
Sivakumar P, Quinlan DJ, Stubbs KM, Culham JC. Grasping performance depends upon the richness of hand feedback. Exp Brain Res 2021; 239:835-846. [PMID: 33403432 DOI: 10.1007/s00221-020-06025-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 12/19/2020] [Indexed: 11/28/2022]
Abstract
Although visual feedback of the hand allows fast and accurate grasping actions, little is known about whether the nature of feedback of the hand affects performance. We investigated kinematics during precision grasping (with the index finger and thumb) when participants received different levels of hand feedback, with or without visual feedback of the target. Specifically, we compared performance when participants saw (1) no hand feedback; (2) only the two critical points on the index finger and thumb tips; (3) 21 points on all digit tips and hand joints; (4) 21 points connected by a "skeleton", or (5) full feedback of the hand wearing a glove. When less hand feedback was available, participants took longer to execute the movement because they allowed more time to slow the reach and close the hand. When target feedback was unavailable, participants took longer to plan the movement and reached with higher velocity. We were particularly interested in investigating maximum grip aperture (MGA), which can reflect the margin of error that participants allow to compensate for uncertainty. A trend suggested that MGA was smallest when ample feedback was available (skeleton and full hand feedback, regardless of target feedback) and when only essential information about hand and target was provided (2-point hand feedback + target feedback) but increased when non-essential points were included (21-point feedback). These results suggest that visual feedback of the hand affects grasping performance and that, while more feedback is usually beneficial, this is not necessarily always the case.
Affiliation(s)
- Prajith Sivakumar
- Department of Biology, University of Western Ontario, London, Canada; Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada
- Derek J Quinlan
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; BrainsCAN, University of Western Ontario, London, ON, Canada; Department of Psychology, Huron University College, London, ON, Canada
- Kevin M Stubbs
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; BrainsCAN, University of Western Ontario, London, ON, Canada; Department of Psychology, University of Western Ontario, London, ON, Canada
- Jody C Culham
- Brain and Mind Institute, University of Western Ontario, Western Interdisciplinary Research Building, London, ON, Canada; Department of Psychology, University of Western Ontario, London, ON, Canada
25.
Hamidi M, Giuffre L, Heath M. A summary statistical representation influences perceptions but not visually or memory-guided grasping. Hum Mov Sci 2020; 75:102739. [PMID: 33310378 DOI: 10.1016/j.humov.2020.102739] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 10/21/2020] [Accepted: 11/26/2020] [Indexed: 11/29/2022]
Abstract
A statistical summary representation (SSR) is a phenomenon wherein a target property (e.g., size) is encoded based on the average of the stimulus-set to which it belongs. An SSR has been demonstrated in obligatory judgment tasks; however, to our knowledge no work has examined whether it influences grasps to 3D targets. Here, participants completed a method of adjustment task, and visually and memory-guided grasps, in conditions wherein differently sized 3D targets (widths: 20, 30 and 40 mm; height and depth = 10 mm) were presented with equal frequency (i.e., control) and when the smallest (i.e., 20-mm: small-target) and largest (i.e., 40-mm: large-target) targets were presented five times as often as the other targets in the stimulus-set. In the method of adjustment task, responses in the small- and large-target weighting conditions were smaller and larger than in the control condition, respectively. In other words, an SSR biased perceptions in the direction of the most frequently presented target in the stimulus-set, a result consistent with the view that perceptions are supported by relative visual information laid down by the ventral visual pathway. In contrast, grip apertures were refractory to target-weighting, a finding independent of the presence (i.e., visually guided) or absence (i.e., memory-guided) of visual feedback. Furthermore, two one-sided tests showed that peak grip apertures for the different target-weighting conditions were within an equivalence boundary. Accordingly, an SSR does not influence 3D grasps, a finding that adds to a growing literature reporting that actions are supported by the absolute visuomotor networks of the dorsal visual pathway.
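The equivalence analysis mentioned above, two one-sided tests (TOST), can be sketched as follows. The data, equivalence bounds, and alpha level below are simulated assumptions, not the study's values.

```python
# Illustrative one-sample TOST: declare equivalence when the mean lies
# inside a pre-specified bound (low, high), i.e., both one-sided tests
# against the bounds reject. Simulated grip-aperture differences are used.
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """TOST: True if the mean of x is statistically within (low, high)."""
    n = len(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    t_low = (np.mean(x) - low) / se     # test H0: mean <= low
    t_high = (np.mean(x) - high) / se   # test H0: mean >= high
    p_low = 1.0 - stats.t.cdf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return bool(max(p_low, p_high) < alpha)  # equivalent only if both reject

rng = np.random.default_rng(1)
# Simulated per-participant differences in peak grip aperture (mm) between
# weighting conditions, centred near zero as the abstract reports
diffs = rng.normal(0.2, 1.0, 30)
print("equivalent within +/- 2 mm:", tost_one_sample(diffs, -2.0, 2.0))
```

Unlike a non-significant t-test, TOST provides positive evidence for the absence of a meaningful difference, which is why it supports the abstract's claim that grip apertures were equivalent across weighting conditions.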
Affiliation(s)
- Maryam Hamidi
- Graduate Program in Neuroscience, University of Western Ontario, London, ON N6A 3K7, Canada
- Lauren Giuffre
- School of Kinesiology, University of Western Ontario, London, ON N6A 3K7, Canada
- Matthew Heath
- Graduate Program in Neuroscience, University of Western Ontario, London, ON N6A 3K7, Canada; School of Kinesiology, University of Western Ontario, London, ON N6A 3K7, Canada; Canadian Centre for Activity and Aging, University of Western Ontario, London, ON N6A 3K7, Canada
26.
Ozana A, Berman S, Ganel T. Grasping Weber's Law in a Virtual Environment: The Effect of Haptic Feedback. Front Psychol 2020; 11:573352. [PMID: 33329216 PMCID: PMC7710620 DOI: 10.3389/fpsyg.2020.573352] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Accepted: 10/05/2020] [Indexed: 11/13/2022] Open
Abstract
Recent findings suggest that the functional separation between vision-for-action and vision-for-perception does not generalize to situations in which virtual objects are used as targets. For instance, unlike actions toward real objects, which violate Weber's law, a basic law of visual perception, actions toward virtual objects presented on flat screens, or in remote virtual environments, obey Weber's law. These results suggest that actions in virtual environments are performed in an inefficient manner and are subject to perceptual effects. It is unclear, however, whether this inefficiency reflects extensive variation in the way in which visual information is processed in virtual environments or more local aspects related to the settings of the virtual environment. In the current study, we focused on grasping performance in a state-of-the-art virtual reality system that provides an accurate representation of 3D space. Within this environment, we tested the effect of haptic feedback on grasping trajectories. Participants were asked to perform bimanual grasping movements toward the edges of virtual targets. In the haptic feedback condition, physical stimuli of matching dimensions were embedded in the virtual environment. Haptic feedback was not provided in the no-feedback condition. The results showed that grasping movements in the feedback condition, but not in the no-feedback condition, were performed more efficiently and evaded the influence of Weber's law. These findings are discussed in relation to previous literature on 2D and 3D grasping.
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Sigal Berman
- Zlotowski Center, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center, Ben-Gurion University of the Negev, Beer-Sheva, Israel

27
Freud E, Behrmann M. Altered large-scale organization of shape processing in visual agnosia. Cortex 2020; 129:423-435. [PMID: 32574843] [PMCID: PMC9972005] [DOI: 10.1016/j.cortex.2020.05.009]
Abstract
Recent findings suggest that both dorsal and ventral visual pathways process shape information. Nevertheless, a lesion to the ventral pathway alone can result in visual agnosia, an impairment in shape perception. Here, we explored the neural basis of shape processing in a patient with visual agnosia following a circumscribed right hemisphere ventral lesion and evaluated longitudinal changes in the neural profile of shape representations. The results revealed a reduction of shape sensitivity slopes along the patient's right ventral pathway and a similar reduction in the contralesional left ventral pathway. Remarkably, posterior parts of the dorsal pathway bilaterally also evinced a reduction in shape sensitivity. These findings were similar over a two-year interval, revealing that a focal cortical lesion can lead to persistent large-scale alterations of the two visual pathways. These alterations are consistent with the view that a distributed network of regions contributes to shape perception.
Affiliation(s)
- Erez Freud
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Marlene Behrmann
- Department of Psychology and the Carnegie Mellon Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA

28
Burt AL, Crewther DP. The 4D Space-Time Dimensions of Facial Perception. Front Psychol 2020; 11:1842. [PMID: 32849084] [PMCID: PMC7399249] [DOI: 10.3389/fpsyg.2020.01842]
Abstract
Facial information is a powerful channel for human-to-human communication. Faces can be characterized as biological objects that are four-dimensional (4D) patterns, in that they concurrently possess a spatial structure and surface as well as temporal dynamics. The spatial characteristics of facial objects comprise a volume and surface in three dimensions (3D): breadth, height and, importantly, depth. The temporal properties of facial objects are defined by how a 3D facial structure and surface evolve dynamically over time, where time is referred to as the fourth dimension (4D). Our entire perception of another's face, whether social, affective or cognitive, is therefore built on a combination of 3D and 4D visual cues. Counterintuitively, over the past few decades of experimental research in psychology, facial stimuli have largely been captured, reproduced and presented to participants in two dimensions (2D), while remaining largely static. The following review aims to update facial researchers on the recent revolution in computer-generated, realistic 4D facial models produced from real-life human subjects. We summarize in depth recent studies that have utilized facial stimuli possessing 3D structural and surface cues (geometry, surface and depth) and 4D temporal cues (3D structure plus dynamic viewpoint and movement). In sum, we find that higher-order perceptions such as identity, gender, ethnicity, emotion and personality are critically influenced by 4D characteristics. In the future, it is recommended that facial stimuli incorporate the 4D space-time perspective with the proposed time-resolved methods.
Affiliation(s)
- Adelaide L Burt
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- David P Crewther
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia

29
Wardle SG, Baker C. Recent advances in understanding object recognition in the human brain: deep neural networks, temporal dynamics, and context. F1000Res 2020; 9. [PMID: 32566136] [PMCID: PMC7291077] [DOI: 10.12688/f1000research.22296.1]
Abstract
Object recognition is the ability to identify an object or category based on the combination of visual features observed. It is a remarkable feat of the human brain, given that the patterns of light received by the eye associated with the properties of a given object vary widely with simple changes in viewing angle, ambient lighting, and distance. Furthermore, different exemplars of a specific object category can vary widely in visual appearance, such that successful categorization requires generalization across disparate visual features. In this review, we discuss recent advances in understanding the neural representations underlying object recognition in the human brain. We highlight three current trends in the approach towards this goal within the field of cognitive neuroscience. Firstly, we consider the influence of deep neural networks both as potential models of object vision and in how their representations relate to those in the human brain. Secondly, we review the contribution that time-series neuroimaging methods have made towards understanding the temporal dynamics of object representations beyond their spatial organization within different brain regions. Finally, we argue that an increasing emphasis on the context (both visual and task) within which object recognition occurs has led to a broader conceptualization of what constitutes an object representation for the brain. We conclude by identifying some current challenges facing the experimental pursuit of understanding object recognition and outline some emerging directions that are likely to yield new insight into this complex cognitive process.
Affiliation(s)
- Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA
- Chris Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA

30
Karimpur H, Eftekharifar S, Troje NF, Fiehler K. Spatial coding for memory-guided reaching in visual and pictorial spaces. J Vis 2020; 20:1. [PMID: 32271893] [PMCID: PMC7405696] [DOI: 10.1167/jov.20.4.1]
Abstract
An essential difference between pictorial space, displayed as paintings, photographs, or computer screens, and the visual space experienced in the real world is that the observer has a defined location, and thus valid information about the distance and direction of objects, in the latter but not in the former. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of previous studies have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (on a table in virtual reality) and pictorial space (on a monitor that is on the table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
- Nikolaus F. Troje
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Centre for Vision Research and Department of Biology, York University, Toronto, ON, Canada
- Katja Fiehler
- Experimental Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany

31
Kolasinski J, Dima DC, Mehler DMA, Stephenson A, Valadan S, Kusmia S, Rossiter HE. Spatially and Temporally Distinct Encoding of Muscle and Kinematic Information in Rostral and Caudal Primary Motor Cortex. Cereb Cortex Commun 2020; 1:tgaa009. [PMID: 32864612] [PMCID: PMC7446240] [DOI: 10.1093/texcom/tgaa009]
Abstract
The organizing principle of human motor cortex does not follow an anatomical body map, but rather reflects a distributed representational structure in which motor primitives are combined to produce motor outputs. Electrophysiological recordings in primates and human imaging data suggest that M1 encodes kinematic features of movements, such as joint position and velocity. However, M1 exhibits well-documented sensory responses to cutaneous and proprioceptive stimuli, raising questions regarding the origins of kinematic motor representations: are they relevant in top-down motor control, or are they an epiphenomenon of bottom-up sensory feedback during movement? Here we provide evidence for spatially and temporally distinct encoding of kinematic and muscle information in human M1 during the production of a wide variety of naturalistic hand movements. Using a powerful combination of high-field functional magnetic resonance imaging and magnetoencephalography, a spatial and temporal multivariate representational similarity analysis revealed encoding of kinematic information in more caudal regions of M1, over 200 ms before movement onset. In contrast, patterns of muscle activity were encoded in more rostral motor regions much later after movements began. We provide compelling evidence that top-down control of dexterous movement engages kinematic representations in caudal regions of M1 prior to movement production.
Affiliation(s)
- James Kolasinski, Diana C Dima, David M A Mehler, Alice Stephenson, Sara Valadan, Slawomir Kusmia, Holly E Rossiter
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, Cardiff, CF24 4HQ, UK

32
Holler DE, Fabbri S, Snow JC. Object responses are highly malleable, rather than invariant, with changes in object appearance. Sci Rep 2020; 10:4654. [PMID: 32170123] [PMCID: PMC7070005] [DOI: 10.1038/s41598-020-61447-8]
Abstract
Theoretical frameworks of human vision argue that object responses remain stable, or 'invariant', despite changes in viewing conditions that can alter object appearance but not identity. Here, in a major departure from previous approaches that have relied on two-dimensional (2-D) images to study object processing, we demonstrate that changes in an object's appearance, but not its identity, can lead to striking shifts in behavioral responses to objects. We used inverse multidimensional scaling (MDS) to measure the extent to which arrangements of objects in a sorting task were similar or different when the stimuli were displayed as scaled 2-D images, three-dimensional (3-D) augmented reality (AR) projections, or real-world solids. We were especially interested in whether sorting behavior in each display format was based on conceptual (e.g., typical location) versus physical object characteristics. We found that 2-D images of objects were arranged according to conceptual (typical location), but not physical, properties. AR projections, conversely, were arranged primarily according to physical properties such as real-world size, elongation and weight, but not conceptual properties. Real-world solid objects, unlike both 2-D and 3-D images, were arranged using multidimensional criteria that incorporated both conceptual and physical object characteristics. Our results suggest that object responses can be strikingly malleable, rather than invariant, with changes in the visual characteristics of the stimulus. The findings raise important questions about limits of invariance in object processing, and underscore the importance of studying responses to richer stimuli that more closely resemble those we encounter in real-world environments.
Affiliation(s)
- Sara Fabbri
- Department of Psychology, University of Nevada, Reno, USA
- Department of Experimental Psychology, University of Groningen, Groningen, the Netherlands

33
On the Neurocircuitry of Grasping: The influence of action intent on kinematic asymmetries in reach-to-grasp actions. Atten Percept Psychophys 2020; 81:2217-2236. [PMID: 31290131] [DOI: 10.3758/s13414-019-01805-5]
Abstract
Evidence from electrophysiology suggests that nonhuman primates produce reach-to-grasp movements based on their functional end goal rather than on the biomechanical requirements of the movement. However, the invasiveness of direct-electrical stimulation and single-neuron recording largely precludes analogous investigations in humans. In this review, we present behavioural evidence in the form of kinematic analyses suggesting that the cortical circuits responsible for reach-to-grasp actions in humans are organized in a similar fashion. Grasp-to-eat movements are produced with significantly smaller and more precise maximum grip apertures (MGAs) than are grasp-to-place movements directed toward the same objects, despite near identical mechanical requirements of the two subsequent (i.e., grasp-to-eat and grasp-to-place) movements. Furthermore, the fact that this distinction is limited to right-handed movements suggests that the system governing reach-to-grasp movements is asymmetric. We contend that this asymmetry may be responsible, at least in part, for the preponderance of right-hand dominance among the global population.
34
Singh S, Mandziak A, Barr K, Blackwell AA, Mohajerani MH, Wallace DG, Whishaw IQ. Human string-pulling with and without a string: movement, sensory control, and memory. Exp Brain Res 2019; 237:3431-3447. [DOI: 10.1007/s00221-019-05684-y]
35
Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation. Brain Struct Funct 2019; 224:3291-3308. [PMID: 31673774] [DOI: 10.1007/s00429-019-01970-1]
Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive system. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in the absence of visual information based on neural activity evoked when visual information was available, and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
36
Affiliation(s)
- Nikolaus F. Troje
- Department of Biology, Centre for Vision Research, York University, Toronto, Canada

37
Affiliation(s)
- Katja Fiehler
- Department of Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada

38
Uji M, Lingnau A, Cavin I, Vishwanath D. Identifying Cortical Substrates Underlying the Phenomenology of Stereopsis and Realness: A Pilot fMRI Study. Front Neurosci 2019; 13:646. [PMID: 31354404] [PMCID: PMC6637755] [DOI: 10.3389/fnins.2019.00646]
Abstract
Viewing a real scene or a stereoscopic image (e.g., 3D movies) with both eyes yields a vivid subjective impression of object solidity, tangibility, immersive negative space and sense of realness; something that is not experienced when viewing single pictures of 3D scenes normally with both eyes. This phenomenology, sometimes referred to as stereopsis, is conventionally ascribed to the derivation of depth from the differences in the two eyes' images (binocular disparity). Here we report on a pilot study designed to explore whether dissociable neural activity associated with the phenomenology of realness can be localized in the cortex. In order to dissociate subjective impression from disparity processing, we capitalized on the finding that the impression of realness associated with stereoscopic viewing can also be generated when viewing a single picture of a 3D scene with one eye through an aperture. Under a blocked fMRI design, subjects viewed intact and scrambled images of natural 3D objects and scenes under three viewing conditions: (1) single pictures viewed normally with both eyes (binocular); (2) single pictures viewed with one eye through an aperture (monocular-aperture); and (3) stereoscopic anaglyph images of the same scenes viewed with both eyes (binocular stereopsis). Fixed-effects GLM contrasts aimed at isolating the phenomenology of stereopsis demonstrated a selective recruitment of similar posterior parietal regions for both the monocular and binocular stereopsis conditions. Our findings provide preliminary evidence that the cortical processing underlying the subjective impression of realness may be dissociable and distinct from the derivation of depth from disparity.
Affiliation(s)
- Makoto Uji
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Angelika Lingnau
- Institute of Psychology, University of Regensburg, Regensburg, Germany
- Ian Cavin
- TAyside Medical Science Centre (TASC), NHS Tayside, Dundee, United Kingdom
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom

39
Garcea FE, Almeida J, Sims MH, Nunno A, Meyers SP, Li YM, Walter K, Pilcher WH, Mahon BZ. Domain-Specific Diaschisis: Lesions to Parietal Action Areas Modulate Neural Responses to Tools in the Ventral Stream. Cereb Cortex 2019; 29:3168-3181. [PMID: 30169596] [PMCID: PMC6933536] [DOI: 10.1093/cercor/bhy183]
Abstract
Neural responses to small manipulable objects ("tools") in high-level visual areas in ventral temporal cortex (VTC) provide an opportunity to test how anatomically remote regions modulate ventral stream processing in a domain-specific manner. Prior patient studies indicate that grasp-relevant information can be computed about objects by dorsal stream structures independently of processing in VTC. Prior functional neuroimaging studies indicate privileged functional connectivity between regions of VTC exhibiting tool preferences and regions of parietal cortex supporting object-directed action. Here we test whether lesions to parietal cortex modulate tool preferences within ventral and lateral temporal cortex. We found that lesions to the left anterior intraparietal sulcus, a region that supports hand-shaping during object grasping and manipulation, modulate tool preferences in left VTC and in the left posterior middle temporal gyrus. Control analyses demonstrated that neural responses to "place" stimuli in left VTC were unaffected by lesions to parietal cortex, indicating domain-specific consequences for ventral stream neural responses in the setting of parietal lesions. These findings provide causal evidence that neural specificity for "tools" in ventral and lateral temporal lobe areas may arise, in part, from online inputs to VTC from parietal areas that receive inputs via the dorsal visual pathway.
Affiliation(s)
- Frank E Garcea
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY, USA
- Moss Rehabilitation Research Institute, 50 Township Line Road, Elkins Park, PA, USA
- Jorge Almeida
- University of Coimbra, Faculty of Psychology and Educational Sciences, Rua do Colégio Novo, Coimbra, Portugal
- University of Coimbra, Proaction Laboratory, Faculty of Psychology and Educational Sciences, Rua do Colégio Novo, Coimbra, Portugal
- Maxwell H Sims
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- Andrew Nunno
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- Steven P Meyers
- University of Rochester Medical Center, Department of Imaging Sciences, 601 Elmwood Avenue, Rochester, NY, USA
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Yan Michael Li
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Kevin Walter
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Webster H Pilcher
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Bradford Z Mahon
- University of Rochester, Department of Brain & Cognitive Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Language Sciences, 358 Meliora Hall, Rochester, NY, USA
- University of Rochester, Center for Visual Science, 274 Meliora Hall, Rochester, NY, USA
- University of Rochester Medical Center, Department of Neurosurgery, 601 Elmwood Avenue, Rochester, NY, USA
- Department of Neurology, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, USA
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, USA

40
Romero CA, Snow JC. Methods for Presenting Real-world Objects Under Controlled Laboratory Conditions. J Vis Exp 2019. [PMID: 31282889] [DOI: 10.3791/59762]
Abstract
Our knowledge of human object vision is based almost exclusively on studies in which the stimuli are presented in the form of computerized two-dimensional (2-D) images. In everyday life, however, humans interact predominantly with real-world solid objects, not images. Currently, we know very little about whether images of objects trigger similar behavioral or neural processes as do real-world exemplars. Here, we present methods for bringing the real world into the laboratory. We detail methods for presenting rich, ecologically valid real-world stimuli under tightly controlled viewing conditions. We describe how to match closely the visual appearance of real objects and their images, as well as novel apparatus and protocols that can be used to present real objects and computerized images on successively interleaved trials. We use a decision-making paradigm as a case example in which we compare willingness-to-pay (WTP) for real snack foods versus 2-D images of the same items. We show that WTP increases by 6.6% for food items displayed as real objects versus high-resolution 2-D colored images of the same foods, suggesting that real foods are perceived as being more valuable than their images. Although presenting real object stimuli under controlled conditions presents several practical challenges for the experimenter, this approach will fundamentally expand our understanding of the cognitive and neural processes that underlie naturalistic vision.
41
Ganel T, Ozana A, Goodale MA. When perception intrudes on 2D grasping: evidence from Garner interference. Psychol Res 2019; 84:2138-2143. [PMID: 31201534] [DOI: 10.1007/s00426-019-01216-z]
Abstract
When participants reach out to pick up a real 3D object, their grip aperture reflects the size of the object well before contact is made. At the same time, the classical psychophysical laws and principles of relative size and shape that govern visual perception do not appear to intrude into the control of such movements, which are instead tuned only to the dimension relevant for grasping. In contrast, accumulating evidence suggests that grasps directed at flat 2D objects are not immune to perceptual effects. Thus, in 2D but not 3D grasping, the aperture of the fingers has been shown to be affected by relative and contextual information about the size and shape of the target object. A notable example of this dissociation comes from studies of Garner interference, which signals holistic processing of shape. Previous research has shown that 3D grasping shows no evidence for Garner interference but 2D grasping does (Freud & Ganel, 2015). In a recent study published in this journal (Löhr-Limpens et al., 2019), participants were presented with 2D objects in a Garner paradigm. The pattern of results closely replicated the previously published results with 2D grasping. Unfortunately, the authors, who appear to be unaware of the potential differences between 2D and 3D grasping, used their findings to draw an overgeneralized and unwarranted conclusion about the relation between 3D grasping and perception. In this short methodological commentary, we discuss the current literature on aperture shaping during 2D grasping and suggest that researchers should pay close attention to the nature of the target stimuli they use before drawing conclusions about visual processing for perception and action.
Affiliation(s)
- Tzvi Ganel
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel
- Aviad Ozana
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel
- Melvyn A Goodale
- The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 5B7, Canada

42
Active visuomotor interactions with virtual objects on touchscreens adhere to Weber's law. Psychol Res 2019; 84:2144-2156. [PMID: 31203455] [DOI: 10.1007/s00426-019-01210-5]
Abstract
Recent findings suggest that the functional separation between vision-for-action and vision-for-perception does not generalize to situations in which two-dimensional (2D) virtual objects are used as targets. For example, unlike grasping movements directed at real, three-dimensional (3D) objects, the trajectories of grasping movements directed at 2D objects adhere to the psychophysical principle of Weber's law, indicating relative and less efficient processing of their size. Such inefficiency could be attributed to the fact that everyday interactions with touchscreens do not usually entail grasping movements. It is possible, therefore, that more typical interactions with virtual objects, which involve active manipulation of their size or location on a touchscreen, could be performed efficiently and in an absolute manner, and would violate Weber's law. We examined this hypothesis in three experiments in which participants performed active interactions with virtual objects. In Experiment 1, participants made swiping gestures to move virtual objects across the touchscreen. In Experiment 2, participants touched the edges of virtual objects to enlarge their size. In Experiment 3, participants freely enlarged the size of virtual objects, without being required to touch their edges upon contact. In all experiments, the resolution of grip aperture decreased with the size of the target object, adhering to Weber's law. These results suggest that active interactions with 2D objects on touchscreens are not performed in the natural, absolute manner that characterizes visuomotor control of real objects.
|
43
|
Ozana A, Ganel T. Obeying the law: speed-precision tradeoffs and the adherence to Weber's law in 2D grasping. Exp Brain Res 2019; 237:2011-2021. [PMID: 31161415 DOI: 10.1007/s00221-019-05572-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Accepted: 05/29/2019] [Indexed: 11/30/2022]
Abstract
Visually guided actions toward two-dimensional (2D) and three-dimensional (3D) objects show different patterns of adherence to Weber's law. In 3D grasping, just noticeable differences (JNDs) do not scale with object size, violating Weber's law. Conversely, JNDs in 2D grasping increase with size, showing a pattern of scalar variability between aperture and JND, as predicted by Weber's law. In the current study, we tested whether such scalar variability in 2D grasping reflects genuine adherence to Weber's law, or whether it could instead be accounted for by a speed-precision tradeoff arising from an increase in aperture velocity with size. In two experiments, we modified the relation between aperture velocity and size in 2D grasping and tested whether movement trajectories still adhered to Weber's law. In Experiment 1, we equated aperture velocities between different-sized objects by pre-adjusting the initial finger aperture to match the target's size. In Experiment 2, we reversed the relation between size and velocity by asking participants to hold their fingers wide open prior to the grasp, resulting in faster velocities for smaller rather than larger objects. In both experiments, although aperture velocities did not increase with size, adherence to Weber's law was maintained. These results indicate that adherence to Weber's law during 2D grasping cannot be accounted for by a speed-precision tradeoff, but rather reflects genuine reliance on relative, perceptually based computations in visuomotor interactions with 2D objects.
Affiliation(s)
- Aviad Ozana
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel
- Tzvi Ganel
- Department of Psychology, Ben-Gurion University of the Negev, 8410500, Beer-Sheva, Israel.
|
44
|
Bara F, Kaminski G. Holding a real object during encoding helps the learning of foreign vocabulary. Acta Psychol (Amst) 2019; 196:26-32. [PMID: 30974399 DOI: 10.1016/j.actpsy.2019.03.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 03/28/2019] [Accepted: 03/30/2019] [Indexed: 10/27/2022] Open
Abstract
This study assesses and compares two methods for learning new vocabulary words in a foreign language: learning with images as non-verbal aids versus learning with real objects. The Rwandan children who participated in this study learnt French as a third language. They took part in training sessions in which they learned French words while either seeing the corresponding image or holding the corresponding object. The training program was implemented in a Rwandan primary school with children of different ages (from five to ten years old). The results showed that words associated with objects held by the children during learning were better memorized than words associated with images. Overall memory performance was lower for the youngest children; however, learning with objects proved superior to learning with images at all ages. Taken together, the findings underscore that learning vocabulary with real objects is particularly efficient and support the idea that embodied theories of language are key to effectively mastering a foreign language.
Affiliation(s)
- Florence Bara
- Cognition, Langues, Langage, Ergonomie, Université de Toulouse, CNRS-UMR 5263, Toulouse 31000, France.
- Gwenael Kaminski
- Cognition, Langues, Langage, Ergonomie, Université de Toulouse, CNRS-UMR 5263, Toulouse 31000, France; Institut Universitaire de France, France
|
45
|
Uji M, Jentzsch I, Redburn J, Vishwanath D. Dissociating neural activity associated with the subjective phenomenology of monocular stereopsis: An EEG study. Neuropsychologia 2019; 129:357-371. [PMID: 31034841 DOI: 10.1016/j.neuropsychologia.2019.04.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Revised: 03/26/2019] [Accepted: 04/23/2019] [Indexed: 12/15/2022]
Abstract
The subjective phenomenology associated with stereopsis, of solid tangible objects separated by a palpable negative space, is conventionally thought to be a by-product of the derivation of depth from binocular disparity. However, the same qualitative impression has been reported in the absence of disparity, e.g., when viewing pictorial images monocularly through an aperture. Here, we explored whether dissociable neural activity associated with the qualitative impression of stereopsis can be identified in the absence of binocular disparity processing. We measured EEG activity while subjects viewed pictorial (non-stereoscopic) images of 2D and 3D geometric forms under four viewing conditions (binocular, monocular, binocular aperture, monocular aperture). EEG activity was analysed with oscillatory source localization (the beamformer technique) to examine power changes in occipital and parietal regions across viewing and stimulus conditions in targeted frequency bands (alpha: 8-13 Hz; gamma: 60-90 Hz). We observed the expected event-related gamma synchronization and alpha desynchronization in occipital cortex and predominant gamma synchronization in parietal cortex across viewing and stimulus conditions. However, only the viewing condition predicted to generate the strongest impression of stereopsis (monocular aperture) revealed significantly elevated gamma synchronization within the parietal cortex for the critical contrast (3D vs. 2D form). These findings suggest dissociable neural processes specific to the qualitative impression of stereopsis, as distinguished from disparity processing.
Affiliation(s)
- Makoto Uji
- School of Psychology and Neuroscience, University of St Andrews, UK.
- Ines Jentzsch
- School of Psychology and Neuroscience, University of St Andrews, UK
- James Redburn
- School of Psychology and Neuroscience, University of St Andrews, UK
|
46
|
Marini F, Breeding KA, Snow JC. Distinct visuo-motor brain dynamics for real-world objects versus planar images. Neuroimage 2019; 195:232-242. [PMID: 30776529 DOI: 10.1016/j.neuroimage.2019.02.026] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Revised: 01/26/2019] [Accepted: 02/09/2019] [Indexed: 10/27/2022] Open
Abstract
Ultimately, we aim to generalize and translate scientific knowledge to the real world, yet current understanding of human visual perception is based predominantly on studies of two-dimensional (2-D) images. Recent cognitive-behavioral evidence shows that real objects are processed differently from images, although the neural processes that underlie these differences are unknown. Because real objects (unlike images) afford actions, they may trigger stronger or more prolonged activation in neural populations for visuo-motor action planning. Here, we recorded electroencephalography (EEG) while human observers viewed real-world three-dimensional (3-D) objects or closely matched 2-D images of the same items. Although responses to real objects and images were similar overall, there were critical differences. Compared to images, viewing real objects triggered stronger and more sustained event-related desynchronization (ERD) in the μ frequency band (8-13 Hz), a neural signature of automatic motor preparation. Event-related potentials (ERPs) revealed a transient, early occipital negativity for real objects (versus images), likely reflecting 3-D stereoscopic differences, and a late sustained parietal amplitude modulation consistent with an 'old-new' memory advantage for real objects over images. Together, these findings demonstrate that real-world objects trigger stronger and more sustained action-related brain responses than images do. The results highlight important similarities and differences between brain responses to images and richer, more ecologically relevant, real-world objects.
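Event-related desynchronization of the kind described above is typically quantified as the percentage change in band power relative to a pre-stimulus baseline, ERD% = 100·(P_event − P_baseline)/P_baseline. A self-contained sketch on a synthetic μ-band trace; the sampling rate, timings, and amplitudes are illustrative, not the study's parameters:

```python
import numpy as np

fs = 250  # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)

# Toy EEG trace: a 10 Hz mu rhythm whose amplitude halves after t = 1 s,
# mimicking event-related desynchronization (ERD) after stimulus onset.
amp = np.where(t < 1.0, 1.0, 0.5)
eeg = amp * np.sin(2 * np.pi * 10 * t)

def band_power(x, fs, lo=8, hi=13):
    # Mean power in the [lo, hi] Hz band, from the raw periodogram.
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

baseline = band_power(eeg[t < 1.0], fs)   # pre-stimulus mu power
event = band_power(eeg[t >= 1.0], fs)     # post-stimulus mu power
erd_pct = 100 * (event - baseline) / baseline  # negative = desynchronization
```

Halving the amplitude quarters the power, so this toy trace yields an ERD of about −75%.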
Affiliation(s)
- Francesco Marini
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA; Swartz Center for Computational Neuroscience, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, 92093-0559, USA.
- Katherine A Breeding
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA
- Jacqueline C Snow
- Department of Psychology, University of Nevada, 1664 N Virginia St, Reno, NV, 89557-0296, USA.
|
47
|
Chen CF, Kreutz-Delgado K, Sereno MI, Huang RS. Unraveling the spatiotemporal brain dynamics during a simulated reach-to-eat task. Neuroimage 2019; 185:58-71. [PMID: 30315910 PMCID: PMC6325169 DOI: 10.1016/j.neuroimage.2018.10.028] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 09/11/2018] [Accepted: 10/09/2018] [Indexed: 01/17/2023] Open
Abstract
The reach-to-eat task involves a sequence of action components including looking, reaching, grasping, and feeding. While cortical representations of individual action components have been mapped in human functional magnetic resonance imaging (fMRI) studies, little is known about the continuous spatiotemporal dynamics among these representations during the reach-to-eat task. In a periodic event-related fMRI experiment, subjects were scanned while they reached toward a food image, grasped the virtual food, and brought it to their mouth within each 16-s cycle. Fourier-based analysis of fMRI time series revealed periodic signals and noise distributed across the brain. Independent component analysis was used to remove periodic or aperiodic motion artifacts. Time-frequency analysis was used to analyze the temporal characteristics of periodic signals in each voxel. Circular statistics was then used to estimate mean phase angles of periodic signals and select voxels based on the distribution of phase angles. By sorting mean phase angles across regions, we were able to show the real-time spatiotemporal brain dynamics as continuous traveling waves over the cortical surface. The activation sequence consisted of approximately the following stages: (1) stimulus related activations in occipital and temporal cortices; (2) movement planning related activations in dorsal premotor and superior parietal cortices; (3) reaching related activations in primary sensorimotor cortex and supplementary motor area; (4) grasping related activations in postcentral gyrus and sulcus; (5) feeding related activations in orofacial areas. These results suggest that phase-encoded design and analysis can be used to unravel sequential activations among brain regions during a simulated reach-to-eat task.
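Circular statistics are needed for phase-encoded designs because phase angles wrap around 2π: the arithmetic mean of angles near 0 and near 2π is misleading, whereas the angle of the mean resultant vector is not. A minimal sketch of estimating a voxel's mean phase and mapping it onto a time within the 16-s cycle; the phase values are illustrative, not data from the study:

```python
import cmath
import math

def circular_mean(phases_rad):
    # Circular mean: the angle of the mean resultant vector.
    resultant = sum(cmath.exp(1j * p) for p in phases_rad)
    return cmath.phase(resultant)

# Hypothetical per-cycle phase estimates for one voxel (radians).
phases = [1.3, 1.4, 1.5, 1.6, 1.7]
mean_phase = circular_mean(phases)

# Map the mean phase onto a response time within the 16-s cycle.
cycle_s = 16.0
response_time_s = (mean_phase % (2 * math.pi)) / (2 * math.pi) * cycle_s
```

Sorting voxels by `mean_phase` is what produces the traveling-wave activation sequence described above.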
Affiliation(s)
- Ching-Fu Chen
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, 92093, USA
- Kenneth Kreutz-Delgado
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, 92093, USA; Institute for Neural Computation, University of California, San Diego, La Jolla, CA, 92093, USA
- Martin I Sereno
- Department of Psychology and Neuroimaging Center, San Diego State University, San Diego, CA, 92182, USA; Experimental Psychology, University College London, London, WC1H 0AP, UK
- Ruey-Song Huang
- Institute for Neural Computation, University of California, San Diego, La Jolla, CA, 92093, USA.
|
48
|
Darcy N, Sterzer P, Hesselmann G. Category-selective processing in the two visual pathways as a function of stimulus degradation by noise. Neuroimage 2018; 188:785-793. [PMID: 30592972 DOI: 10.1016/j.neuroimage.2018.12.036] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2018] [Revised: 11/23/2018] [Accepted: 12/16/2018] [Indexed: 11/29/2022] Open
Abstract
Understanding the organising principles and functional properties of the primate brain's numerous visually responsive cortical regions is one of the major goals in cognitive neuroscience. Functional magnetic resonance imaging (fMRI) studies have revealed that neural responses in higher-order visual cortex are shaped by object categories, task context, and spatiotemporal regularities. Beyond these properties, visual processing in the ventral pathway has been shown to be tightly linked to perceptual awareness, while the evidence regarding dorsal visual processing and awareness is mixed. Most previous studies targeting the dorsal pathway have used dichotomous "visible versus invisible" experimental designs and interocular suppression paradigms to modulate stimulus visibility. In this fMRI study, we sought to investigate category-selective processing of faces and tools in the ventral and dorsal visual streams as a function of parametric stimulus degradation by noise. Both frequentist and Bayesian statistics provide strong evidence for a linear relationship between category-selective processing and stimulus information in both visual pathways. Overall, multivariate category decoding accuracies turned out to be lower in the dorsal pathway. We discuss our results within the context of the emerging notion of highly interconnected visual streams, and provide an outlook on how future studies may help to further refine our understanding of the functional role of the dorsal pathway in visual object processing.
Affiliation(s)
- N Darcy
- Visual Perception Laboratory, Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin Institute of Health, 10117, Berlin, Germany
- P Sterzer
- Visual Perception Laboratory, Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin Institute of Health, 10117, Berlin, Germany
- G Hesselmann
- Visual Perception Laboratory, Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin Institute of Health, 10117, Berlin, Germany.
|
49
|
"Real-life" continuous flash suppression (CFS)-CFS with real-world objects using augmented reality goggles. Behav Res Methods 2018; 51:2827-2839. [PMID: 30430349 PMCID: PMC6877487 DOI: 10.3758/s13428-018-1162-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Continuous flash suppression (CFS) is a popular method for suppressing visual stimuli from awareness for relatively long periods. Thus far, this method has only been used for suppressing two-dimensional images presented on screen. We present a novel variant of CFS, termed “real-life” CFS, in which a portion of the actual immediate surroundings of an observer—including three-dimensional, real-life objects—can be rendered unconscious. Our method uses augmented reality goggles to present subjects with CFS masks to the dominant eye, leaving the nondominant eye exposed to the real world. In three experiments we demonstrated that real objects can indeed be suppressed from awareness for several seconds, on average, and that the suppression duration is comparable to that obtained using classic, on-screen CFS. As supplementary information, we further provide an example of experimental code that can be modified for future studies. This technique opens the way to new questions in the study of consciousness and its functions.
|
50
|
Decoding Brain States for Planning Functional Grasps of Tools: A Functional Magnetic Resonance Imaging Multivoxel Pattern Analysis Study. J Int Neuropsychol Soc 2018; 24:1013-1025. [PMID: 30196800 DOI: 10.1017/s1355617718000590] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
OBJECTIVES We used multivoxel pattern analysis (MVPA) to investigate neural selectivity for grasp planning within the left-lateralized temporo-parieto-frontal network of areas (the praxis representation network, PRN) typically associated with tool-related actions, as studied with traditional neuroimaging contrasts. METHODS We used data from 20 participants whose task was to plan functional grasps of tools with either the right or left hand. Region-of-interest and whole-brain searchlight analyses were performed to reveal task-related neural patterns. RESULTS MVPA revealed significant contributions to functional grasp planning from the anterior intraparietal sulcus (aIPS) and its immediate vicinity, supplemented by inputs from posterior subdivisions of the IPS and the ventral lateral occipital complex (vLOC). Moreover, greater local selectivity was demonstrated in areas near the superior parieto-occipital cortex and dorsal premotor cortex, putatively forming the dorso-dorsal stream. CONCLUSIONS A contribution from aIPS, consistent with its role in prospective grasp formation and/or encoding of relevant tool properties (e.g., potentially graspable parts), is likely to accompany the retrieval of manipulation and/or mechanical knowledge subserved by the supramarginal gyrus for achieving action goals. The involvement of vLOC indicates that MVPA is particularly sensitive to the coding of object properties, identities, and even functions in support of grip formation. Finally, the engagement of superior parieto-frontal regions as revealed by MVPA is consistent with their selectivity for transient features of tools (i.e., variable affordances) for anticipatory hand postures. These outcomes support the notion that, compared with traditional approaches, MVPA can reveal more fine-grained patterns of neural activity. (JINS, 2018, 24, 1013-1025).
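At its core, MVPA asks whether condition labels can be decoded from multi-voxel response patterns above chance. A toy nearest-centroid decoder with leave-one-trial-out cross-validation on simulated ROI patterns; the trial counts, voxel counts, and effect size are illustrative, and the study's actual classifier may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ROI patterns: 20 trials x 50 voxels per condition, with the
# two grasp-planning conditions differing slightly in mean activation.
n_trials, n_voxels = 20, 50
cond_a = rng.normal(+0.5, 1.0, (n_trials, n_voxels))
cond_b = rng.normal(-0.5, 1.0, (n_trials, n_voxels))

def nearest_centroid(pattern, centroid_a, centroid_b):
    # Simplest MVPA decoder: assign the trial to the closer mean pattern.
    if np.linalg.norm(pattern - centroid_a) < np.linalg.norm(pattern - centroid_b):
        return "a"
    return "b"

# Leave-one-trial-out cross-validation: the tested trial is always
# excluded from the centroid it could belong to.
correct = 0
for i in range(n_trials):
    ca = np.delete(cond_a, i, axis=0).mean(axis=0)
    correct += nearest_centroid(cond_a[i], ca, cond_b.mean(axis=0)) == "a"
    cb = np.delete(cond_b, i, axis=0).mean(axis=0)
    correct += nearest_centroid(cond_b[i], cond_a.mean(axis=0), cb) == "b"

accuracy = correct / (2 * n_trials)  # chance level is 0.5
```

A searchlight analysis repeats this decoding inside a small sphere centered on every voxel in turn.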
|