1. Kyler H, James K. The importance of multisensory-motor learning on subsequent visual recognition. Perception 2024;53:597-618. PMID: 38900046. DOI: 10.1177/03010066241258967.
Abstract
Speed of visual object recognition is facilitated after active manual exploration of objects relative to passive visual processing alone. Manual exploration allows viewers to select important information about object structure that may facilitate recognition. Viewpoints where the objects' axis of elongation is perpendicular or parallel to the line of sight are selected more during exploration, recognized faster than other viewpoints, and afford the most information about structure when object movement is controlled by the viewer. Prior work used virtual object exploration in active and passive viewing conditions, limiting multisensory structural object information. Adding multisensory information to encoding may change accuracy of overall recognition, viewpoint selection, and viewpoint recognition. We tested whether the known active advantage for object recognition would change when real objects were studied, affording visual and haptic information. Participants interacted with 3D novel objects during manual exploration or passive viewing of another's object interactions. Object recognition was tested using several viewpoints of rendered objects. We found that manually explored objects were recognized more accurately than objects studied through passive exploration and that recognition of viewpoints differed from previous work.
2. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023;8:61. PMID: 38102127. PMCID: PMC10724186. DOI: 10.1038/s41539-023-00208-4.
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested whether this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics thus proved an effective means both to translate 2D images into 3D reconstructions of layouts and to guide navigation within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, and can likely be further applied in the rehabilitation of spatial functions and the mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland.
- Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland.
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Benedetta Franceschiello
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland.
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
3. Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023;378:20220342. PMID: 37545304. PMCID: PMC10404931. DOI: 10.1098/rstb.2022.0342.
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception: multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information, and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, allowing us to achieve efficiently the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- F. N. Newell, E. McKenna, M. A. Seveso, I. Devine, F. Alahmad, R. J. Hirst, A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland (all authors)
4. Sathian K, Lacey S. Cross-Modal Interactions of the Tactile System. Curr Dir Psychol Sci 2022;31:411-418. PMID: 36408466. PMCID: PMC9674209. DOI: 10.1177/09637214221101877.
Abstract
The sensory systems responsible for perceptions of touch, vision, hearing, etc. have traditionally been regarded as mostly separate, only converging at late stages of processing. Contrary to this dogma, recent work has shown that interactions between the senses are robust and abundant. Touch and vision are both commonly used to obtain information about a number of object properties, and share perceptual and neural representations in many domains. Additionally, visuotactile interactions are implicated in the sense of body ownership, as revealed by powerful illusions that can be evoked by manipulating these interactions. Touch and hearing both rely in part on temporal frequency information, leading to a number of audiotactile interactions reflecting a good deal of perceptual and neural overlap. The focus in sensory neuroscience and psychophysics is now on characterizing the multisensory interactions that lead to our panoply of perceptual experiences.
Affiliation(s)
- K. Sathian
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center
- Department of Neural & Behavioral Sciences, Penn State College of Medicine
- Department of Psychology, Penn State College of Liberal Arts
- Simon Lacey
- Department of Neurology, Penn State Health Milton S. Hershey Medical Center
- Department of Neural & Behavioral Sciences, Penn State College of Medicine
5. Leo F, Gori M, Sciutti A. Early blindness modulates haptic object recognition. Front Hum Neurosci 2022;16:941593. PMID: 36158621. PMCID: PMC9498977. DOI: 10.3389/fnhum.2022.941593.
Abstract
Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. We might therefore expect congenitally blind people to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies of haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube that records its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants explored the cube faces, on which small pins were positioned in varying numbers. Participants explored the cube twice and reported whether the cube was the same or differed in its pin arrangement. Recognition accuracy was not modulated by the level of visual ability. However, congenitally blind participants touched more cells simultaneously while exploring the faces, and changed the pattern of touched cells from one recording sample to the next more, than late blind and sighted participants. Furthermore, the number of simultaneously touched cells negatively correlated with exploration duration. These findings indicate that early blindness shapes the haptic exploration of objects that can be held in the hands.
Affiliation(s)
- Fabrizio Leo
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
6. Evidence for Independent Processing of Shape by Vision and Touch. eNeuro 2022;9:ENEURO.0502-21.2022. PMID: 35998295. PMCID: PMC9215689. DOI: 10.1523/ENEURO.0502-21.2022.
Abstract
Although visual object recognition is well studied and relatively well understood, much less is known about how shapes are recognized by touch and how such haptic stimuli might be compared with visual shapes. One might expect the processes of visual and haptic object recognition to engage similar brain structures, given the advantages of avoiding redundant brain circuitry, and indeed there is some evidence that this is the case. A potentially fruitful approach to understanding how shapes might be neurally represented is to find an algorithmic method of comparing shapes that agrees with human behavior, and then to determine whether the best-fitting method differs between modality conditions. If it does not, this would provide further evidence for a shared representation of shape. We recruited human participants to perform a one-back same-different visual and haptic shape comparison task, both within modalities (i.e., comparing two visual shapes or two haptic shapes) and across modalities (i.e., comparing visual with haptic shapes). We then used various shape metrics to predict performance based on the shape, orientation, and modality of the two stimuli being compared on each trial. We found that the metrics that best predict shape comparison behavior depended heavily on the modality of the two shapes, suggesting that the features used for comparing shapes differ by modality and that object recognition is not necessarily performed in a single, modality-agnostic region.
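The paper's central analysis idea, scoring candidate shape metrics by how well each predicts trial-by-trial same/different judgments, can be sketched roughly as follows (synthetic data and hypothetical metric names, not the authors' actual metrics or pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-trial distances between the two compared shapes under
# three candidate metrics (names are placeholders).
n_trials = 400
X = rng.random((n_trials, 3))

# Synthetic "different" responses driven mostly by the first metric.
p_diff = 1 / (1 + np.exp(-(6 * X[:, 0] - 3)))
y = rng.random(n_trials) < p_diff

# Fit one metric at a time and compare cross-validated accuracy; running
# this separately on visual, haptic, and cross-modal trials would show
# whether the best-fitting metric depends on modality.
for i, name in enumerate(["curvature", "parts", "contour"]):
    acc = cross_val_score(LogisticRegression(), X[:, [i]], y, cv=5).mean()
    print(f"{name:10s} cross-validated accuracy: {acc:.2f}")
```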
7. Tivadar RI, Chappaz C, Anaflous F, Roche J, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects by the Visually-Impaired. Front Neurosci 2020;14:197. PMID: 32265628. PMCID: PMC7099598. DOI: 10.3389/fnins.2020.00197.
Abstract
In the event of visual impairment or blindness, information from other intact senses can be used as a substitute to retrain (and, in extremis, replace) visual functions. Abilities including reading, mental representation of objects, and spatial navigation can be performed using tactile information. Current technologies can convey only a restricted library of stimuli, either because they depend on real objects or because their renderings have low-resolution layouts. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants can create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and can mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, they haptically explored on any trial one of the four letters presented at 0°, 90°, 180°, or 270° rotation from upright and indicated whether the letter was in normal or mirror-reversed form. Rotation angle impacted performance; greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli and, for these, was not affected by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology for the blind and visually-impaired. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology. This technology may thus offer an innovative solution to the mitigation of impairments in the visually-impaired, and to the training of skills dependent on mental representations and their spatial manipulation.
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Fatima Anaflous
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
8. Norman JF. The Recognition of Solid Object Shape: The Importance of Inhomogeneity. i-Perception 2019;10:2041669519870553. PMID: 31448073. PMCID: PMC6693026. DOI: 10.1177/2041669519870553.
Abstract
A single experiment evaluated the haptic-visual cross-modal matching of solid object shape. One set of randomly shaped artificial objects (sinusoidally modulated spheres, SMS) was used, as well as two sets of naturally shaped objects (bell peppers, Capsicum annuum, and sweet potatoes, Ipomoea batatas). A total of 66 adults participated in the study. The participants' task was to haptically explore a single object on any particular trial and subsequently indicate which of 12 simultaneously visible objects possessed the same shape. The participants' performance for the natural objects was 60.9 and 78.7 percent correct for the bell peppers and sweet potatoes, respectively. The analogous performance for the SMS objects, while better than chance, was far worse (18.6 percent correct). All of these types of stimulus objects possess a rich geometrical structure (e.g., they all possess multiple elliptic, hyperbolic, and parabolic surface regions). Nevertheless, the three types of stimulus objects are perceived differently: individual sweet potatoes and bell peppers are largely identifiable to human participants, while individual SMS objects are not. Analyses of differential geometry indicate that these natural objects possess heterogeneous spatial configurations of distinctly curved surface regions, and that this heterogeneity is lacking in SMS objects. The current results therefore suggest that increases in surface structure heterogeneity facilitate human object recognition.
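The differential-geometry analysis referred to here, labeling surface regions elliptic, hyperbolic, or parabolic by the sign of Gaussian curvature, can be illustrated numerically on a sinusoidally modulated sphere (a simplified sketch; the modulation parameters and the parabolic tolerance are arbitrary choices, not those of the paper):

```python
import numpy as np

# Parametric modulated sphere: r(u, v) = 1 + a*sin(k*u)*sin(k*v).
u = np.linspace(0.1, np.pi - 0.1, 200)          # avoid the poles
v = np.linspace(0.0, 2 * np.pi, 400)
U, V = np.meshgrid(u, v, indexing="ij")
R = 1 + 0.15 * np.sin(6 * U) * np.sin(6 * V)
S = np.stack([R * np.sin(U) * np.cos(V),
              R * np.sin(U) * np.sin(V),
              R * np.cos(U)], axis=-1)

du, dv = u[1] - u[0], v[1] - v[0]
Su = np.gradient(S, du, axis=0)
Sv = np.gradient(S, dv, axis=1)
Suu = np.gradient(Su, du, axis=0)
Svv = np.gradient(Sv, dv, axis=1)
Suv = np.gradient(Su, dv, axis=1)

n = np.cross(Su, Sv)
n /= np.linalg.norm(n, axis=-1, keepdims=True)

# First/second fundamental forms, then Gaussian curvature K.
E, F, G = (Su * Su).sum(-1), (Su * Sv).sum(-1), (Sv * Sv).sum(-1)
L, M, N = (Suu * n).sum(-1), (Suv * n).sum(-1), (Svv * n).sum(-1)
K = (L * N - M * M) / (E * G - F * F)

# Classify sampled points by curvature sign; the mix of labels is one
# crude index of the surface-structure heterogeneity discussed above.
labels = np.where(np.abs(K) < 0.05, "parabolic",
                  np.where(K > 0, "elliptic", "hyperbolic"))
for lab, c in zip(*np.unique(labels, return_counts=True)):
    print(f"{lab:10s}: {100 * c / labels.size:.1f}% of sampled surface")
```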
Affiliation(s)
- J. Farley Norman
- Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, KY, USA
9. Tivadar RI, Rouillard T, Chappaz C, Knebel JF, Turoman N, Anaflous F, Roche J, Matusz PJ, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects. Front Integr Neurosci 2019;13:7. PMID: 30930756. PMCID: PMC6427928. DOI: 10.3389/fnint.2019.00007.
Abstract
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
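The rotation effect reported here is the classic mental-rotation signature: performance degrades roughly linearly with angular deviation from upright. A conventional way to quantify it is to regress response time on angular disparity (a minimal sketch with made-up numbers, not the study's data):

```python
import numpy as np
from scipy import stats

# Hypothetical mean response times (ms) per rotation; 270 deg is folded to
# 90 deg, since 270 deg clockwise is 90 deg counterclockwise from upright.
disparity = np.array([0, 90, 180, 90])      # for 0, 90, 180, 270 deg
rt_ms     = np.array([950, 1180, 1420, 1210])

slope, intercept, r, p, se = stats.linregress(disparity, rt_ms)
print(f"mental-rotation slope: {slope:.2f} ms/deg (r = {r:.2f})")
# The reciprocal of the slope is often reported as a rotation rate.
print(f"implied rotation rate: {1000 / slope:.0f} deg/s")
```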
Affiliation(s)
- Ruxandra I. Tivadar
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean-François Knebel
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Nora Turoman
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Fatima Anaflous
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Pawel J. Matusz
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Micah M. Murray
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
10. Carducci P, Schwing R, Huber L, Truppa V. Tactile information improves visual object discrimination in kea, Nestor notabilis, and capuchin monkeys, Sapajus spp. Anim Behav 2018. DOI: 10.1016/j.anbehav.2017.11.018.
11. Sathian K. Analysis of haptic information in the cerebral cortex. J Neurophysiol 2016;116:1795-1806. PMID: 27440247. DOI: 10.1152/jn.00546.2015.
Abstract
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.
Affiliation(s)
- K Sathian
- Departments of Neurology, Rehabilitation Medicine and Psychology, Emory University, Atlanta, Georgia; and Center for Visual and Neurocognitive Rehabilitation, Atlanta Department of Veterans Affairs Medical Center, Decatur, Georgia
12. Erdogan G, Chen Q, Garcea FE, Mahon BZ, Jacobs RA. Multisensory Part-based Representations of Objects in Human Lateral Occipital Cortex. J Cogn Neurosci 2016;28:869-81. PMID: 26918587. DOI: 10.1162/jocn_a_00937.
Abstract
The format of high-level object representations in temporal-occipital cortex is a fundamental and as yet unresolved issue. Here we use fMRI to show that human lateral occipital cortex (LOC) encodes novel 3-D objects in a multisensory and part-based format. We show that visual and haptic exploration of objects leads to similar patterns of neural activity in human LOC and that the shared variance between visually and haptically induced patterns of BOLD contrast in LOC reflects the part structure of the objects. We also show that linear classifiers trained on neural data from LOC on a subset of the objects successfully predict a novel object based on its component part structure. These data demonstrate a multisensory code for object representations in LOC that specifies the part structure of objects.
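The classifier logic described, testing whether a code learned from one modality transfers to the other, is standard cross-modal MVPA. A rough sketch on simulated voxel patterns (dimensions and data are illustrative; this is not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_voxels, n_objects, n_reps = 120, 4, 10

# Simulated LOC patterns: an object-specific signal shared by both
# modalities (the multisensory hypothesis) plus independent noise.
signal = rng.normal(size=(n_objects, n_voxels))

def simulate_runs():
    X = np.repeat(signal, n_reps, axis=0)
    X = X + rng.normal(0, 0.8, size=X.shape)
    y = np.repeat(np.arange(n_objects), n_reps)
    return X, y

X_vis, y_vis = simulate_runs()   # "visual" runs
X_hap, y_hap = simulate_runs()   # "haptic" runs

# Train on visually evoked patterns, test on haptically evoked ones:
# above-chance transfer implies a shared (multisensory) code.
clf = LinearSVC().fit(X_vis, y_vis)
print(f"cross-modal decoding accuracy: {clf.score(X_hap, y_hap):.2f} "
      f"(chance = {1 / n_objects:.2f})")
```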
13. Occelli V, Lacey S, Stephens C, John T, Sathian K. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People. Perception 2015;45:337-45. PMID: 26562881. DOI: 10.1177/0301006615614489.
Abstract
Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition; that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Careese Stephens
- Department of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
- Thomas John
- Department of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
14. Erdogan G, Yildirim I, Jacobs RA. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach. PLoS Comput Biol 2015;11:e1004610. PMID: 26554704. PMCID: PMC4640543. DOI: 10.1371/journal.pcbi.1004610.
Abstract
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models, that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research contributes significantly to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly the emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined to understand aspects of human perception.
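The model's key property, that inverting sensory-specific forward models yields the same modality-independent percept from visual, haptic, or combined input, can be illustrated with a toy grid-based Bayesian inference (a deliberately simplified stand-in for the paper's grammar-based model; all functions and numbers are invented):

```python
import numpy as np

# Toy latent shape parameter and two stand-in forward models mapping it
# to a "visual" and a "haptic" feature, each observed with noise.
theta = np.linspace(0.5, 2.0, 301)
f_vis = lambda t: 2.0 * t
f_hap = lambda t: t ** 2

def posterior(obs, fwd, sigma, prior):
    like = np.exp(-0.5 * ((obs - fwd(theta)) / sigma) ** 2)
    post = prior * like
    return post / post.sum()

flat = np.full_like(theta, 1.0 / theta.size)
true_theta = 1.3
obs_v = f_vis(true_theta) + 0.05      # noisy visual observation
obs_h = f_hap(true_theta) - 0.05      # noisy haptic observation

p_v = posterior(obs_v, f_vis, 0.2, flat)
p_h = posterior(obs_h, f_hap, 0.2, flat)
p_vh = posterior(obs_h, f_hap, 0.2, p_v)   # combine evidence sequentially

# All three posteriors peak near the same theta: modality invariance.
for name, p in [("vision", p_v), ("haptics", p_h), ("both", p_vh)]:
    print(f"{name:8s} posterior mean: {(theta * p).sum():.3f}")
```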
Affiliation(s)
- Goker Erdogan
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Ilker Yildirim
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laboratory of Neural Systems, The Rockefeller University, New York, New York, United States of America
- Robert A. Jacobs
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
15. Lacey S, Sathian K. Crossmodal and multisensory interactions between vision and touch. Scholarpedia 2015;10:7957. PMID: 26783412. DOI: 10.4249/scholarpedia.7957.
Affiliation(s)
- Simon Lacey
- Departments of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Departments of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
16. Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts. Atten Percept Psychophys 2014;76:541-58. PMID: 24197503. DOI: 10.3758/s13414-013-0559-1.
Abstract
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and using untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47% vs. 9% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
17.
Abstract
We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.
18. Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014;5:730. PMID: 25101014. PMCID: PMC4102085. DOI: 10.3389/fpsyg.2014.00730.
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
19.
Abstract
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
20. Jao RJ, James TW, Harman James K. Multisensory convergence of visual and haptic object preference across development. Neuropsychologia 2014;56:381-92. PMID: 24560914. PMCID: PMC4020146. DOI: 10.1016/j.neuropsychologia.2014.02.009.
Abstract
Visuohaptic inputs offer redundant and complementary information regarding an object's geometrical structure. The integration of these inputs facilitates object recognition in adults. While the ability to recognize objects in the environment both visually and haptically develops early on, the development of the neural mechanisms for integrating visual and haptic object shape information remains unknown. In the present study, we used functional Magnetic Resonance Imaging (fMRI) in three groups of participants: 4 to 5.5 year olds, 7 to 8.5 year olds, and adults. Participants were tested in a block design involving visual exploration of two-dimensional images of common objects and real textures, and haptic exploration of their three-dimensional counterparts. As in previous studies, object preference was defined as a greater BOLD response for objects than textures. The analyses specifically target two sites of known visuohaptic convergence in adults: the lateral occipital tactile-visual region (LOtv) and intraparietal sulcus (IPS). Results indicated that the LOtv is involved in visuohaptic object recognition early on. More importantly, object preference in the LOtv became increasingly visually dominant with development. Despite previous reports that the lateral occipital complex (LOC) is adult-like by 8 years, these findings indicate that at least part of the LOC is not. Whole-brain maps showed overlap between adults and both groups of children in the LOC. However, the overlap did not build incrementally from the younger to the older group, suggesting that visuohaptic object preference does not develop in an additive manner. Taken together, the results show that the development of neural substrates for visuohaptic recognition is protracted compared to substrates that are primarily visual or haptic.
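Object preference as defined here (a greater BOLD response for objects than textures) reduces to a contrast between condition betas in a general linear model; a minimal sketch on a simulated voxel time course (block lengths and effect sizes are invented, and the hemodynamic response convolution is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans = 120

# Boxcar regressors for alternating object and texture blocks of 10 scans.
objects = ((np.arange(n_scans) // 10) % 2 == 0).astype(float)
textures = 1.0 - objects
X = np.column_stack([objects, textures])

# Simulated voxel responding more strongly to objects than textures.
y = 1.5 * objects + 0.5 * textures + rng.normal(0, 1, n_scans)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"object preference (beta_obj - beta_tex): {beta[0] - beta[1]:.2f}")
```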
Affiliation(s)
- R Joanne Jao
- Cognitive Science Program, Indiana University, Bloomington, IN, United States; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States.
- Thomas W James
- Cognitive Science Program, Indiana University, Bloomington, IN, United States; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Karin Harman James
- Cognitive Science Program, Indiana University, Bloomington, IN, United States; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States
21. Toderita I, Bourgeon S, Voisin JIA, Chapman CE. Haptic two-dimensional angle categorization and discrimination. Exp Brain Res 2013;232:369-83. PMID: 24170289. DOI: 10.1007/s00221-013-3745-4.
Abstract
This study examined the extent to which haptic perception of two-dimensional (2-D) shape is modified by the design of the perceptual task (single-interval categorization vs. two-interval discrimination), the orientation of the angles in space (oblique vs. horizontal), and the exploration strategy (one or two passes over the angle). Subjects (n = 12) explored 2-D angles using the index finger of the outstretched arm. In the categorization task, subjects scanned individual angles, categorizing each as "large" or "small" (2 angles presented in each block of trials; range 80° vs. 100° to 89° vs. 91°; implicit standard 90°). In the discrimination task, a pair of angles was scanned (standard 90°; comparison 91-103°) and subjects identified the larger angle. The threshold for 2-D angle categorization was significantly lower than for 2-D angle discrimination, 4° versus 7.2°. Performance in the categorization task did not vary with either the orientation of the angles (horizontal vs. oblique, 3.9° vs. 4°) or the number of passes over the angle (1 vs. 2 passes, 3.9° vs. 4°). We suggest that the lower threshold with angle categorization likely reflects the reduced cognitive demands of this task. We found no evidence for a haptic oblique effect (higher threshold with oblique angles), likely reflecting the presence of an explicit external frame of reference formed by the intersection of the two bars forming the 2-D angles. Although one-interval haptic categorization is a more sensitive method for assessing 2-D haptic angle perception, perceptual invariances for exploratory strategy and angle orientation were, nevertheless, task-independent.
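Thresholds in such tasks are conventionally estimated by fitting a psychometric function to the proportion of "larger" responses as a function of comparison angle; a minimal cumulative-Gaussian fit on invented data (not the study's):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical comparison angles (deg) and proportion judged larger
# than the 90-deg standard.
angle = np.array([91, 93, 95, 97, 99, 101, 103])
p_larger = np.array([0.54, 0.62, 0.71, 0.80, 0.88, 0.94, 0.97])

def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, angle, p_larger, p0=[95, 3])

# One common threshold definition: the angular increment above the
# standard needed to reach 75% "larger" responses.
threshold = norm.ppf(0.75, loc=mu, scale=sigma) - 90
print(f"discrimination threshold: {threshold:.1f} deg")
```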
Affiliation(s)
- Iuliana Toderita
- Groupe de recherche sur le système nerveux central (GRSNC), Département de neurosciences, Faculté de médecine, Université de Montréal, Succursale centre ville, PO Box 6128, Montreal, QC, H3C 3J7, Canada
22. Ueda Y, Saiki J. Characteristics of eye movements in 3-D object learning: comparison between within-modal and cross-modal object recognition. Perception 2013;41:1289-98. PMID: 23513616. DOI: 10.1068/p7257.
Abstract
Recent studies have indicated that the object representation acquired during visual learning depends on the encoding modality during the test phase. However, the nature of the differences between within-modal learning (eg visual learning-visual recognition) and cross-modal learning (eg visual learning-haptic recognition) remains unknown. To address this issue, we utilised eye movement data and investigated object learning strategies during the learning phase of a cross-modal object recognition experiment. Observers informed of the test modality studied an unfamiliar visually presented 3-D object. Quantitative analyses showed that recognition performance was consistent regardless of rotation in the cross-modal condition, but was reduced when objects were rotated in the within-modal condition. In addition, eye movements during learning differed significantly between within-modal and cross-modal learning. Fixations were more diffuse in cross-modal learning than in within-modal learning. Moreover, over the course of a trial, fixation durations became longer in cross-modal than in within-modal learning. These results suggest that the object learning strategies employed during the learning phase differ according to the modality of the test phase, and that this difference leads to different recognition performance.
Affiliation(s)
- Yoshiyuki Ueda
- Kokoro Research Center, Kyoto University, Yoshida Shimoadachi-cho 46, Sakyo, Kyoto 606-8501, Japan.
23. Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies. Cognition 2012;126:135-48. PMID: 23102553. DOI: 10.1016/j.cognition.2012.08.005.
Abstract
We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
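The transfer logic, acquiring categories through one modality and categorizing novel exemplars through the other via a shared latent representation, can be sketched with linear forward models (a toy stand-in for the paper's Bayesian model; all matrices and data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)
d_latent, d_vis, d_hap = 4, 12, 9

# Sensory-specific forward models: linear maps from a modality-independent
# shape representation z to visual and haptic feature vectors.
A_vis = rng.normal(size=(d_vis, d_latent))
A_hap = rng.normal(size=(d_hap, d_latent))

# Two categories, each defined by a prototype in latent space.
protos = {"A": rng.normal(size=d_latent), "B": rng.normal(size=d_latent)}

# Category knowledge is "learned" visually: prototypes recovered by
# inverting the visual forward model on noisy visual exemplars.
learned = {c: np.linalg.pinv(A_vis) @ (A_vis @ z + rng.normal(0, 0.1, d_vis))
           for c, z in protos.items()}

# A novel category-A exemplar is then felt rather than seen...
z_new = protos["A"] + rng.normal(0, 0.2, d_latent)
x_hap = A_hap @ z_new + rng.normal(0, 0.1, d_hap)

# ...mapped back to latent space and categorized by prototype distance.
z_inf = np.linalg.pinv(A_hap) @ x_hap
choice = min(learned, key=lambda c: np.linalg.norm(z_inf - learned[c]))
print(f"haptic exemplar categorized as: {choice}")   # expected: A
```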
24. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies". Exp Brain Res 2012;222:321-32. PMID: 22918607. DOI: 10.1007/s00221-012-3220-7.
Abstract
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects, bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
25. Klatzky RL, Lederman SJ. Haptic object perception: spatial dimensionality and relation to vision. Philos Trans R Soc Lond B Biol Sci 2012;366:3097-105. PMID: 21969691. DOI: 10.1098/rstb.2011.0153.
Abstract
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a spatial world that is three-dimensional; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to include faces, drawing from research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting a three-dimensional structure.
Affiliation(s)
- Roberta L Klatzky
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA.
26. Martinovic J, Lawson R, Craddock M. Time course of information processing in visual and haptic object classification. Front Hum Neurosci 2012;6:49. PMID: 22470327. PMCID: PMC3311268. DOI: 10.3389/fnhum.2012.00049.
Abstract
Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually serially accumulates information from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable from unfamiliar, unnamable objects both visually and haptically. Analyses focused on the evoked and total fronto-central theta-band (5-7 Hz; a marker of working memory) and the occipital upper alpha-band (10-12 Hz; a marker of perceptual processing) locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on common representations with vision but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.
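Band-limited activity of the kind analyzed here (fronto-central theta, occipital upper alpha) is commonly extracted by band-pass filtering and taking the squared Hilbert envelope; a minimal sketch on a synthetic one-channel signal (parameters are illustrative, not the study's preprocessing):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                  # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic EEG: noise plus a 6-Hz (theta) burst in the second half.
eeg = rng.normal(0, 1, t.size)
eeg[t > 2] += 2 * np.sin(2 * np.pi * 6 * t[t > 2])

def band_power(x, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x))) ** 2    # instantaneous power

theta = band_power(eeg, 5, 7, fs)         # theta band, 5-7 Hz
alpha = band_power(eeg, 10, 12, fs)       # upper alpha band, 10-12 Hz
print(f"theta power, first vs second half: "
      f"{theta[t <= 2].mean():.2f} vs {theta[t > 2].mean():.2f}")
```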
Affiliation(s)
- Rebecca Lawson
- School of Psychology, University of Liverpool, Liverpool, UK
- Matt Craddock
- School of Psychology, University of Liverpool, Liverpool, UK
- Institut für Psychologie, Universität Leipzig, Leipzig, Germany
27
28
29
Pilz KS, Konar Y, Vuong QC, Bennett PJ, Sekuler AB. Age-related changes in matching novel objects across viewpoints. Vision Res 2011; 51:1958-65. [PMID: 21784094 DOI: 10.1016/j.visres.2011.07.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2011] [Revised: 05/27/2011] [Accepted: 07/06/2011] [Indexed: 12/22/2022]
Abstract
Object recognition is an important visual process. We must recognize objects not only across a variety of lighting conditions and variations in size, but also across changes in viewpoint. It has been shown that reaction times in object matching increase as a function of the angular disparity between two views of the same object, and this is thought to reflect the time it takes to mentally rotate an object. Recent studies have shown that object rotations for familiar objects affect older subjects differently than younger subjects. To investigate general normalization effects for recognizing objects across different viewpoints, regardless of visual experience with an object, the current study used novel 3D stimuli. Older and younger subjects matched objects across a variety of viewpoints in both in-depth and picture-plane rotations. Response times (RTs) for in-depth rotations were generally slower than for picture-plane rotations, and older subjects, overall, responded more slowly than younger subjects. However, a male RT advantage was found only for objects that differed by large, in-depth rotations. Compared to younger subjects, older subjects were not only slower but also less accurate at matching objects across both rotation axes. The age effect was primarily due to older male subjects performing worse than younger male subjects, whereas there was no significant age difference for female subjects. In addition, older males performed even worse than older females, which argues against a general male advantage in mental rotation tasks.
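For readers unfamiliar with the mental-rotation logic mentioned above, the standard measure is the slope of RT against angular disparity, whose reciprocal estimates a rotation rate. A minimal sketch with made-up numbers:

```python
# Illustrative only: fit RT as a linear function of angular disparity and
# convert the slope to an implied mental-rotation rate. Data are synthetic.
import numpy as np

disparity = np.array([0, 30, 60, 90, 120, 150, 180])      # degrees
rt_ms = np.array([620, 700, 790, 885, 960, 1050, 1130])   # mean RTs (made up)

slope, intercept = np.polyfit(disparity, rt_ms, 1)  # ms per degree
rate = 1000.0 / slope                               # degrees rotated per second
print(f"slope = {slope:.2f} ms/deg, implied rate = {rate:.0f} deg/s")
```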
Affiliation(s)
- Karin S Pilz
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada L8S 4K1.
30
Lacey S, Lin JB, Sathian K. Object and spatial imagery dimensions in visuo-haptic representations. Exp Brain Res 2011; 213:267-73. [PMID: 21424255 DOI: 10.1007/s00221-011-2623-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2010] [Accepted: 03/04/2011] [Indexed: 10/18/2022]
Abstract
Visual imagery comprises object and spatial dimensions. Both types of imagery encode shape, but a key difference is that object imagers are more likely than spatial imagers to encode surface properties. Since visual and haptic object representations share many characteristics, we investigated whether haptic and multisensory representations also share an object-spatial continuum. Experiment 1 involved two tasks in both visual and haptic within-modal conditions: one required discrimination of shape across changes in texture, the other discrimination of texture across changes in shape. In both modalities, spatial imagers could ignore changes in texture but not shape, whereas object imagers could ignore changes in shape but not texture. Experiment 2 re-analyzed a cross-modal version of the shape discrimination task from an earlier study. We found that spatial imagers could discriminate shape across changes in texture but object imagers could not, and that the more one preferred object imagery, the more texture changes impaired discrimination. These findings are the first evidence that object and spatial dimensions of imagery can be observed in haptic and multisensory representations.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, WMB-6000, 101 Woodruff Circle, Atlanta, GA 30322, USA.
31
32
Lacey S, Hall J, Sathian K. Are surface properties integrated into visuohaptic object representations? Eur J Neurosci 2010; 31:1882-8. [PMID: 20584193 DOI: 10.1111/j.1460-9568.2010.07204.x] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Object recognition studies have almost exclusively involved vision, focusing on shape rather than surface properties such as color. Visual object representations are thought to integrate shape and color information because changing the color of studied objects impairs their subsequent recognition. However, little is known about integration of surface properties into visuohaptic multisensory representations. Here, participants studied objects with distinct patterns of surface properties (color in Experiment 1, texture in Experiments 2 and 3) and had to discriminate between object shapes when color or texture schemes were altered in within-modal (visual and haptic) and cross-modal (visual study followed by haptic test and vice versa) conditions. In Experiment 1, color changes impaired within-modal visual recognition but had no effect on cross-modal recognition, suggesting that the multisensory representation was not influenced by modality-specific surface properties. In Experiment 2, texture changes impaired recognition in all conditions, suggesting that both unisensory and multisensory representations integrated modality-independent surface properties. However, the cross-modal impairment might have reflected either the texture change or a failure to form the multisensory representation. Experiment 3 attempted to distinguish between these possibilities by combining changes in texture with changes in orientation, taking advantage of the known view-independence of the multisensory representation, but the results were not conclusive owing to the overwhelming effect of texture change. The simplest account is that the multisensory representation integrates shape and modality-independent surface properties. However, more work is required to investigate this and the conditions under which multisensory integration of structural and surface properties occurs.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, WMB-6000, 101 Woodruff Circle, Atlanta, GA 30322, USA.
33
Volcic R, Wijntjes MWA, Kool EC, Kappers AML. Cross-modal visuo-haptic mental rotation: comparing objects between senses. Exp Brain Res 2010; 203:621-7. [PMID: 20437169 PMCID: PMC2875473 DOI: 10.1007/s00221-010-2262-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2010] [Accepted: 04/09/2010] [Indexed: 11/07/2022]
Abstract
The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.
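The phase shift reported above is a displacement of a periodic response-time function along the orientation axis. One hedged way to estimate such a shift (an illustration, not the authors' analysis) is a cosine fit via linear least squares on sine/cosine regressors:

```python
# Fit RT(theta) ~ a + b*cos(theta - phi) and recover the phase phi. Since
# a + b*cos(theta - phi) = a + (b*cos(phi))*cos(theta) + (b*sin(phi))*sin(theta),
# phi falls out of an ordinary linear regression. Data below are synthetic.
import numpy as np

def fit_phase_shift(theta_deg, rt):
    th = np.deg2rad(theta_deg)
    X = np.column_stack([np.ones_like(th), np.cos(th), np.sin(th)])
    a, c, s = np.linalg.lstsq(X, rt, rcond=None)[0]
    return np.degrees(np.arctan2(s, c))  # phase shift in degrees

theta = np.arange(0, 360, 30)                       # relative orientations
rt = 800 + 150 * np.cos(np.deg2rad(theta - 30))     # RT function shifted by 30 deg
print(fit_phase_shift(theta, rt))                   # recovers ~30.0
```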
Affiliation(s)
- Robert Volcic
- Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149, Münster, Germany.
34
Abstract
This review focuses on cross-modal plasticity resulting from visual deprivation, viewed against the background of task-specific visual cortical recruitment that is routine during tactile tasks in the sighted and that may depend in part on visual imagery. Superior tactile perceptual performance in the blind may be practice-related, although there are unresolved questions regarding the effects of Braille-reading experience and the age of onset of blindness. While visual cortical areas are clearly more involved in tactile microspatial processing in the blind than in the sighted, it remains unclear how to reconcile these tactile processes with the growing literature implicating visual cortical activity in a wide range of cognitive tasks in the blind, including those involving language, or with studies of short-term, reversible visual deprivation in the normally sighted, which reveal plastic changes even over periods of hours or days.
Affiliation(s)
- K Sathian
- Department of Neurology, Emory University Rehabilitation R&D Center of Excellence, Atlanta, GA, USA.
35
Lacey S, Flueckiger P, Stilla R, Lava M, Sathian K. Object familiarity modulates the relationship between visual object imagery and haptic shape perception. Neuroimage 2010; 49:1977-90. [PMID: 19896540 DOI: 10.1016/j.neuroimage.2009.10.081] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2009] [Revised: 10/23/2009] [Accepted: 10/29/2009] [Indexed: 11/20/2022] Open
Abstract
Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively with, and their magnitudes were more strongly correlated with, those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., this issue), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar than when they are unfamiliar.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA 30322, USA
36
Perceptual learning of view-independence in visuo-haptic object representations. Exp Brain Res 2009; 198:329-37. [PMID: 19484467 DOI: 10.1007/s00221-009-1856-8] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2008] [Accepted: 05/12/2009] [Indexed: 10/20/2022]
Abstract
We previously showed that cross-modal recognition of unfamiliar objects is view-independent, in contrast to the view-dependence seen within-modally in both vision and haptics. Does the view-independent, bisensory representation underlying cross-modal recognition arise from integration of unisensory, view-dependent representations or from intermediate, unisensory but view-independent representations? Two psychophysical experiments sought to distinguish between these alternative models. In both experiments, participants began from baseline within-modal view-dependence for object recognition in both vision and haptics. The first experiment induced within-modal view-independence by perceptual learning, which transferred completely and symmetrically across modalities: visual view-independence acquired through visual learning also resulted in haptic view-independence, and vice versa. In the second experiment, both visual and haptic view-dependence were transformed to view-independence by either haptic-visual or visual-haptic cross-modal learning. We conclude that cross-modal view-independence fits with a model in which unisensory view-dependent representations are directly integrated into a bisensory, view-independent representation, rather than via intermediate, unisensory, view-independent representations.
37
38
Lacey S, Tal N, Amedi A, Sathian K. A putative model of multisensory object representation. Brain Topogr 2009; 21:269-74. [PMID: 19330441 PMCID: PMC3156680 DOI: 10.1007/s10548-009-0087-4] [Citation(s) in RCA: 95] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2009] [Accepted: 03/11/2009] [Indexed: 10/21/2022]
Abstract
This review surveys the recent literature on visuo-haptic convergence in the perception of object form, with particular reference to the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), and discusses how visual imagery or multisensory representations might underlie this convergence. Drawing on a recent distinction between object-based and spatially-based visual imagery, we propose a putative model in which LOtv, a subregion of LOC, contains a modality-independent representation of geometric shape that can be accessed either bottom-up from direct sensory inputs or top-down from frontoparietal regions. We suggest that such access is modulated by object familiarity: spatial imagery may be more important for unfamiliar objects and involve IPS foci in facilitating somatosensory inputs to the LOC; by contrast, object imagery may be more critical for familiar objects, being reflected in prefrontal drive to the LOC.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Noa Tal
- Physiology Department, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi
- Physiology Department, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- K. Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA
- Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA
- Department of Psychology, Emory University, Atlanta, GA, USA
- Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
39
Craddock M, Lawson R. Do left and right matter for haptic recognition of familiar objects? Perception 2009; 38:1355-76. [DOI: 10.1068/p6312] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Two experiments examined the effects of exploring with the dominant right versus the non-dominant left hand, and of left versus right object orientation, on haptic recognition of familiar objects. In experiment 1, participants named 48 familiar objects in two blocks. There was no dominant-hand advantage for naming objects haptically, and there was no interaction between exploration hand and object orientation. Furthermore, priming of naming was not reduced by changes of either object orientation or exploration hand. To test whether these results were attributable to a failure to encode object orientation and exploration hand, experiment 2 replicated experiment 1 except that the unexpected task in the second block was to decide whether either exploration hand or object orientation had changed relative to the initial naming block. Performance on both tasks was above chance, demonstrating that this information had been encoded into long-term haptic representations during the initial block of naming. Thus, when identifying familiar objects, the haptic processing system can achieve object constancy efficiently across changes of hand and object orientation, although this information is often stored even when it is task-irrelevant.
Affiliation(s)
- Matt Craddock
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
- Rebecca Lawson
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
40
Repetition priming and the haptic recognition of familiar and unfamiliar objects. Percept Psychophys 2008; 70:1350-65. [DOI: 10.3758/pp.70.7.1350] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
41
Deshpande G, Hu X, Stilla R, Sathian K. Effective connectivity during haptic perception: a study using Granger causality analysis of functional magnetic resonance imaging data. Neuroimage 2008; 40:1807-14. [PMID: 18329290 DOI: 10.1016/j.neuroimage.2008.01.044] [Citation(s) in RCA: 108] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2007] [Revised: 01/11/2008] [Accepted: 01/18/2008] [Indexed: 10/22/2022] Open
Abstract
Although it is accepted that visual cortical areas are recruited during touch, it remains uncertain whether this depends on top-down inputs mediating visual imagery or engagement of modality-independent representations by bottom-up somatosensory inputs. Here we addressed this by examining effective connectivity in humans during haptic perception of shape and texture with the right hand. Multivariate Granger causality analysis of functional magnetic resonance imaging (fMRI) data was conducted on a network of regions that were shape- or texture-selective. A novel network reduction procedure was employed to eliminate connections that did not contribute significantly to overall connectivity. Effective connectivity during haptic perception was found to involve a variety of interactions between areas generally regarded as somatosensory, multisensory, visual and motor, emphasizing flexible cooperation between different brain regions rather than rigid functional separation. The left postcentral sulcus (PCS), left precentral gyrus and right posterior insula were important sources of connections in the network. Bottom-up somatosensory inputs from the left PCS and right posterior insula fed into visual cortical areas, both the shape-selective right lateral occipital complex (LOC) and the texture-selective right medial occipital cortex (probable V2). In addition, top-down inputs from left postero-supero-medial parietal cortex influenced the right LOC. Thus, there is strong evidence for the bottom-up somatosensory inputs predicted by models of visual cortical areas as multisensory processors and suggestive evidence for top-down parietal (but not prefrontal) inputs that could mediate visual imagery. This is consistent with modality-independent representations accessible through both bottom-up sensory inputs and top-down processes such as visual imagery.
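Granger causality, the method named above, asks whether one region's past improves prediction of another region's present beyond that region's own past. As a hedged sketch of the pairwise version (the paper used a multivariate variant with a network-reduction step, which this does not reproduce), compare restricted and full autoregressive models:

```python
# Pairwise Granger causality via an F-test on restricted vs. full AR models.
# Illustrative sketch; the AR order p and the synthetic data are assumptions.
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic for the hypothesis that y Granger-causes x (AR order p)."""
    n = len(x)
    x_lags = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
    y_lags = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
    target = x[p:]
    ones = np.ones((n - p, 1))
    restricted = np.column_stack([ones, x_lags])    # x's own past only
    full = np.column_stack([ones, x_lags, y_lags])  # plus y's past
    rss_r = np.sum((target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - full @ np.linalg.lstsq(full, target, rcond=None)[0]) ** 2)
    df_num, df_den = p, n - 3 * p - 1               # full model has 1 + 2p params
    return ((rss_r - rss_f) / df_num) / (rss_f / df_den)

# Synthetic check: x is driven by y's past, so the first F should be large.
rng = np.random.default_rng(1)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(x, y))  # y -> x: large
print(granger_f(y, x))  # x -> y: small
```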
Affiliation(s)
- Gopikrishna Deshpande
- Coulter Department of Biomedical Engineering, Emory University School of Medicine, Atlanta, GA 30322, USA