1
Cappagli G, Cuturi LF, Signorini S, Morelli F, Cocchi E, Gori M. Early visual deprivation disrupts the mental representation of numbers in visually impaired children. Sci Rep 2022; 12:22538. [PMID: 36581659] [PMCID: PMC9800586] [DOI: 10.1038/s41598-022-25044-1]
Abstract
Several lines of evidence indicate that visual deprivation does not alter numerical competence in either adults or children. However, studies reporting intact numerical abilities in the visually impaired population have some limitations: (a) they mainly assessed the ability to process numbers (e.g. mathematical competence) rather than to represent numbers (e.g. the mental number line); (b) they principally focused on positive rather than negative number estimates; (c) with one exception focusing on children (Crollen et al. in Cognition 210:104586, 2021), they investigated numerical abilities only in adults. Overall, this limits a comprehensive account of the role exerted by vision on numerical processing when vision is compromised. Here we investigated how congenital visual deprivation affects the ability to represent positive and negative numbers in the horizontal and sagittal planes in visually impaired children (thirteen children with low vision, eight children with complete blindness, age range 6-15 years). We adapted the number-to-position paradigm of Crollen et al. (Cognition 210:104586, 2021), asking children to indicate the spatial position of positive and negative numbers on a graduated rule positioned horizontally or sagittally in the frontal plane. Results suggest that long-term visual deprivation alters the ability to identify the spatial position of numbers independently of the spatial plane and the number polarity. Moreover, results indicate that relying on poor visual acuity is detrimental for low-vision children asked to localize both positive and negative numbers in space, suggesting that visual experience might play a differential role in numerical processing depending on number polarity. Such findings add to our knowledge of the impact of visual experience on numerical processing.
Since both positive and negative numbers are fundamental to learning mathematical principles, the outcomes of the present study point to the need for early rehabilitation strategies to prevent numerical difficulties in visually impaired children.
Affiliation(s)
- G. Cappagli
- Unit for Visually Impaired People (UVIP), Istituto Italiano di Tecnologia, Via Melen 83, 16100 Genova, Italy; Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
- L. F. Cuturi
- Unit for Visually Impaired People (UVIP), Istituto Italiano di Tecnologia, Via Melen 83, 16100 Genova, Italy; Department of Cognitive, Psychological, Pedagogical Sciences and of Cultural Studies, University of Messina, Messina, Italy
- S. Signorini
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
- F. Morelli
- Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy; Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy
- M. Gori
- Unit for Visually Impaired People (UVIP), Istituto Italiano di Tecnologia, Via Melen 83, 16100 Genova, Italy
2
Vision- and touch-dependent brain correlates of body-related mental processing. Cortex 2022; 157:30-52. [PMID: 36272330] [DOI: 10.1016/j.cortex.2022.09.005]
Abstract
In humans, the nature of sensory input influences body-related mental processing. For instance, behavioral differences (e.g., response time) can be found between mental spatial transformations (e.g., mental rotation) of viewed and touched body parts. It can thus be hypothesized that distinct brain activation patterns are associated with such sensory-dependent body-related mental processing. However, direct evidence that the neural correlates of body-related mental processing can be modulated by the nature of the sensory stimuli is still missing. We thus analyzed event-related functional magnetic resonance imaging (fMRI) data from thirty-one healthy participants performing mental rotation of visually- (images) and haptically-presented (plastic) hands. We also dissociated the neural activity related to rotation or task-related performance using models that either regressed out or included the variance associated with response time. Haptically-mediated mental rotation recruited mostly the sensorimotor brain network. Visually-mediated mental rotation led to parieto-occipital activations. In addition, faster mental rotation was associated with sensorimotor activity, while slower mental rotation was associated with parieto-occipital activations. The fMRI results indicated that changing the type of sensory inputs modulates the neural correlates of body-related mental processing. These findings suggest that distinct sensorimotor brain dynamics can be exploited to execute similar tasks depending on the available sensory input. The present study can contribute to a better evaluation of body-related mental processing in experimental and clinical settings.
3
Arbel R, Heimler B, Amedi A. Congenitally blind adults can learn to identify face-shapes via auditory sensory substitution and successfully generalize some of the learned features. Sci Rep 2022; 12:4330. [PMID: 35288597] [PMCID: PMC8921184] [DOI: 10.1038/s41598-022-08187-z]
Abstract
Unlike sighted individuals, congenitally blind individuals have little to no experience with face shapes. Instead, they rely on non-shape cues, such as voices, to perform character identification. The extent to which face-shape perception can be learned in adulthood via a different sensory modality (i.e., not vision) remains poorly explored. We used a visual-to-auditory Sensory Substitution Device (SSD) that enables conversion of visual images to the auditory modality while preserving their visual characteristics. Expert SSD users were systematically taught to identify cartoon faces via audition. Following a tailored training program lasting ~12 h, congenitally blind participants successfully identified six trained faces with high accuracy. Furthermore, they effectively generalized their identification to the untrained, inverted orientation of the learned faces. Finally, after completing the extensive 12-h training program, participants learned six new faces within 2 additional hours of training, suggesting internalization of face-identification processes. Our results document for the first time that facial features can be processed through audition, even in the absence of visual experience across the lifespan. Overall, these findings have important implications for both non-visual object recognition and visual rehabilitation practices and prompt the study of the neural processes underlying auditory face perception in the absence of vision.
4
Applying a novel visual-to-touch sensory substitution for studying tactile reference frames. Sci Rep 2021; 11:10636. [PMID: 34017027] [PMCID: PMC8137949] [DOI: 10.1038/s41598-021-90132-7]
Abstract
Perceiving the spatial location and physical dimensions of touched objects is crucial for goal-directed actions. To achieve this, our brain transforms skin-based coordinates into an external reference frame by integrating visual and postural information. In the current study, we examined the role of posture in mapping tactile sensations to a visual image. We developed a new visual-to-touch sensory substitution device that transforms images into a sequence of vibrations on the arm. Fifty-two blindfolded participants performed spatial recognition tasks in three different arm postures and had to switch postures between trial blocks. As participants were not told which side of the device was up and which was down, they could choose how to map its vertical axis in their responses. Contrary to previous findings, we show that new proprioceptive inputs can be overridden in mapping tactile sensations. We discuss the results within the context of the spatial task and the various sensory contributions to the process.
5
Tivadar RI, Chappaz C, Anaflous F, Roche J, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects by the Visually-Impaired. Front Neurosci 2020; 14:197. [PMID: 32265628] [PMCID: PMC7099598] [DOI: 10.3389/fnins.2020.00197]
Abstract
In the event of visual impairment or blindness, information from other intact senses can be used as a substitute to retrain (and in extremis replace) visual functions. Abilities including reading, mental representation of objects, and spatial navigation can be performed using tactile information. Current technologies can convey only a restricted library of stimuli, either because they depend on real objects or on renderings with low-resolution layouts. Digital haptic technologies can overcome such limitations. The applicability of this technology was previously demonstrated in sighted participants. Here, we reasoned that visually-impaired and blind participants can create mental representations of letters presented haptically in normal and mirror-reversed form without the use of any visual information, and can mentally manipulate such representations. Visually-impaired and blind volunteers were blindfolded and trained on the haptic tablet with two letters (either L and P or F and G). During testing, on each trial they haptically explored one of the four letters presented at 0°, 90°, 180°, or 270° rotation from upright and indicated whether the letter was in normal or mirror-reversed form. Rotation angle impacted performance: greater deviation from 0° resulted in greater impairment for trained and untrained normal letters, consistent with mental rotation of these haptically-rendered objects. Performance was also generally less accurate with mirror-reversed stimuli, and this effect was not modulated by rotation angle. Our findings demonstrate, for the first time, the suitability of a digital haptic technology for the blind and visually-impaired. Classic devices remain limited in their accessibility and in the flexibility of their applications. We show that mental representations can be generated and manipulated using digital haptic technology.
This technology may thus offer an innovative solution for mitigating impairments in the visually-impaired and for training skills that depend on mental representations and their spatial manipulation.
Affiliation(s)
- Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Fatima Anaflous
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche
- Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Sensory, Perceptual and Cognitive Neuroscience Section, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
6
Kamermans KL, Pouw W, Mast FW, Paas F. Reinterpretation in visual imagery is possible without visual cues: a validation of previous research. Psychol Res 2019; 83:1237-1250. [PMID: 29242975] [PMCID: PMC6647238] [DOI: 10.1007/s00426-017-0956-5]
Abstract
Is visual reinterpretation of bistable figures (e.g., the duck/rabbit figure) possible in visual imagery? Current consensus suggests that it is in principle possible, given converging evidence for quasi-pictorial functioning of visual imagery. Yet studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues; reinterpretation was thus not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented with three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180°), two of which were visually inspected and one explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not increase reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
Affiliation(s)
- Kevin L Kamermans
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Wim Pouw
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Fred W Mast
- Department of Psychology, University of Bern, Bern, Switzerland
- Fred Paas
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Early Start Research Institute, University of Wollongong, Wollongong, Australia
7
Tivadar RI, Rouillard T, Chappaz C, Knebel JF, Turoman N, Anaflous F, Roche J, Matusz PJ, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects. Front Integr Neurosci 2019; 13:7. [PMID: 30930756] [PMCID: PMC6427928] [DOI: 10.3389/fnint.2019.00007]
Abstract
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. 
Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
Collapse
Affiliation(s)
- Ruxandra I. Tivadar
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean-François Knebel
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Nora Turoman
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Fatima Anaflous
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Pawel J. Matusz
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Micah M. Murray
- The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
8
Testing the perceptual equivalence hypothesis in mental rotation of 3D stimuli with visual and tactile input. Exp Brain Res 2018; 236:881-896. [DOI: 10.1007/s00221-018-5172-z]
9
Heller MA, Kennedy JM, Clark A, McCarthy M, Borgert A, Wemple L, Fulkerson E, Kaffel N, Duncan A, Riddle T. Viewpoint and Orientation Influence Picture Recognition in the Blind. Perception 2006; 35:1397-1420. [PMID: 17214384] [DOI: 10.1068/p5460]
Abstract
In the first three experiments, subjects felt solid geometrical forms and matched raised-line pictures to the objects. Performance was best in experiment 1 for top views, with shorter response latencies than for side views, front views, or 3-D views with foreshortening. In a second experiment with blind participants, matching accuracy was not significantly affected by prior visual experience, but speed advantages were found for top views, with 3-D views also yielding better matching accuracy than side views. There were no performance advantages for pictures of objects with a constant cross section in the vertical axis. The early-blind participants had lower performance for side and frontal views. The objects were rotated to oblique orientations in experiment 3. Early-blind subjects performed worse than the other subjects given object rotation. Visual experience with pictures of objects at many angles could facilitate identification at oblique orientations. In experiment 5 with blindfolded sighted subjects, tangible pictures were used as targets and as choices. The results yielded superior overall performance for 3-D views (M = 74% correct) and much lower matching accuracy for top views as targets (M = 58% correct). Performance was highest when the target and matching viewpoint were identical, but 3-D views (M = 96% correct) were still far better than top views. The accuracy advantage of the top views also disappeared when more complex objects were tested in experiment 6. Alternative theoretical implications of the results are discussed.
Collapse
Affiliation(s)
- Morton A Heller
- Department of Psychology, Eastern Illinois University, Charleston 61920, USA.
10
Hartcher-O'Brien J, Auvray M. Cognition overrides orientation dependence in tactile viewpoint selection. Exp Brain Res 2016; 234:1885-1892. [PMID: 26894892] [DOI: 10.1007/s00221-016-4596-6]
Abstract
Humans are capable of extracting spatial information through their sense of touch: when someone strokes their hand, they can easily determine stroke direction without visual information. However, when it comes to the coordinate system used to assign the spatial relations to the stimulation, it remains poorly understood how the brain selects the appropriate system for passive touch. In the study reported here, we investigated whether hand orientation can determine coordinate assignment to ambiguous tactile patterns, whether observers can cognitively override any orientation-driven perspectives on touch, and whether the adaptation transfers across body surfaces. Our results demonstrated that the orientation of the hand in the vertical plane determines the perspective taken: an external perspective is adopted when the hand faces the observer and a gaze-centred perspective is selected when the hand faces away. Participants were then adapted to a mirror-reversed perspective through training, and the results revealed that this adapted perspective holds for the adapted surface and generalises to non-adapted surfaces, including across the body midline. These results reveal plasticity in perspective taking which relies on low-level postural cues (hand orientation) but also on higher-order somatosensory processing that can override the low-level cues.
Collapse
Affiliation(s)
- Jessica Hartcher-O'Brien
- UPMC Univ. Paris 06, UMR 7222, ISIR, Sorbonne Universités, 75005 Paris, France; Institut Jean Nicod, CNRS, EHESS, Ecole Normale Supérieure, 29 Rue d'Ulm, 75005 Paris, France
- Malika Auvray
- UPMC Univ. Paris 06, UMR 7222, ISIR, Sorbonne Universités, 75005 Paris, France
11
Occelli V, Lacey S, Stephens C, John T, Sathian K. Haptic Object Recognition is View-Independent in Early Blind but not Sighted People. Perception 2015; 45:337-345. [PMID: 26562881] [DOI: 10.1177/0301006615614489]
Abstract
Object recognition, whether visual or haptic, is impaired in sighted people when objects are rotated between learning and test, relative to an unrotated condition; that is, recognition is view-dependent. Loss of vision early in life results in greater reliance on haptic perception for object identification compared with the sighted. Therefore, we hypothesized that early blind people may be more adept at recognizing objects despite spatial transformations. To test this hypothesis, we compared early blind and sighted control participants on a haptic object recognition task. Participants studied pairs of unfamiliar three-dimensional objects and performed a two-alternative forced-choice identification task, with the learned objects presented both unrotated and rotated 180° about the y-axis. Rotation impaired the recognition accuracy of sighted, but not blind, participants. We propose that, consistent with our hypothesis, haptic view-independence in the early blind reflects their greater experience with haptic object perception.
Collapse
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Careese Stephens
- Department of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
- Thomas John
- Department of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
12
Güçlü B, Celik S, Ilci C. Representation of haptic objects during mental rotation in congenital blindness. Percept Mot Skills 2014; 118:587-607. [PMID: 24897889] [DOI: 10.2466/15.22.pms.118k20w0]
Abstract
The representation of haptic objects by three groups of participants (sighted, blindfolded, and congenitally blind) was studied in a mental-rotation task. Three models were tested. The participants explored a standard object continuously with the left hand and tried to find the mirror object among two alternatives explored sequentially with the right hand. Sighted participants were tested in the visual version of the task. The accuracy of judgments was very high (> 95%) for all groups, and the blind group had the highest identification times. Correlation analyses were performed between (both single-trial and average) identification times and angular differences. The identification times of the sighted and blindfolded groups increased as linear functions of the angular difference between the mirror and the standard stimuli, supporting the classical model. The identification times of the blind group changed non-monotonically and were consistent with an antiparallel image (180 degrees rotation superimposed) in the mental representation. The dual code model did not fit the data well for any participant group. The performance differences between the blindfolded and blind groups may be attributed to a modified mapping function from the object-properties-processing sub-system to the visual buffer, which was conjectured to be available also to the blind group while processing haptic objects.
13
Rotation-independent representations for haptic movements. Sci Rep 2013; 3:2595. [PMID: 24005481] [PMCID: PMC3763250] [DOI: 10.1038/srep02595]
Abstract
The existence of a common mechanism for visual and haptic representations has been reported in object perception. In contrast, representations of movements might be more modality-specific. Referring to the vertical axis is natural for visual representations, whereas a fixed reference axis might be inappropriate for haptic movements and thus also for their representations in the brain. The present study found that visual and haptic movement representations are processed independently. A psychophysical experiment examining mental rotation revealed the well-known effect of rotation angle for visual representations, whereas no such effect was found for haptic representations. We also found no interference between processes for visual and haptic movements in an experiment where different stimuli were presented simultaneously through the visual and haptic modalities. These results strongly suggest that (1) there are separate representations of visual and haptic movements, and (2) the haptic process has a rotation-independent representation.
14
Gleeson BT, Provancher WR. Mental rotation of tactile stimuli: using directional haptic cues in mobile devices. IEEE Trans Haptics 2013; 6:330-339. [PMID: 24808329] [DOI: 10.1109/toh.2013.5]
Abstract
Haptic interfaces have the potential to enrich users' interactions with mobile devices and to convey information without burdening the user's visual or auditory attention. Haptic stimuli with directional content, for example navigational cues, may be difficult to use in handheld applications: the user's hand, where the cues are delivered, may not be aligned with the world, where the cues are to be interpreted. In such a case, the user would be required to mentally transform the stimuli between different reference frames. We examine the mental rotation of directional haptic stimuli in three experiments, investigating: 1) users' intuitive interpretation of rotated stimuli, 2) mental rotation of haptic stimuli about a single axis, and 3) rotation about multiple axes and the effects of specific hand poses and joint rotations. We conclude that directional haptic stimuli are suitable for use in mobile applications, although users do not naturally interpret rotated stimuli in any one universal way. We find evidence of cognitive processes involving the rotation of analog, spatial representations and discuss how our results fit into the larger body of mental rotation research. For small angles (e.g., less than 40 degrees), these mental rotations come at little cost, but rotations with larger misalignment angles impact user performance. When considering the design of a handheld haptic device, our results indicate that hand pose must be carefully considered, as certain poses increase the difficulty of stimulus interpretation. Generally, all tested joint rotations impact task difficulty, but finger flexion and wrist rotation interact to greatly increase the cost of stimulus interpretation; such hand poses should be avoided when designing a haptic interface.
15
Crossmodal recruitment of the ventral visual stream in congenital blindness. Neural Plast 2012; 2012:304045. [PMID: 22779006] [PMCID: PMC3384885] [DOI: 10.1155/2012/304045]
Abstract
We used functional MRI (fMRI) to test the hypothesis that blind subjects recruit the ventral visual stream during nonhaptic tactile-form recognition. Congenitally blind and blindfolded sighted control subjects were scanned after they had been trained during four consecutive days to perform a tactile-form recognition task with the tongue display unit (TDU). Both groups learned the task at the same rate. In line with our hypothesis, the fMRI data showed that during nonhaptic shape recognition, blind subjects activated large portions of the ventral visual stream, including the cuneus, precuneus, inferotemporal (IT) cortex, lateral occipital tactile vision area (LOtv), and fusiform gyrus. Control subjects activated area LOtv and precuneus but not cuneus, IT, and fusiform gyrus. These results indicate that congenitally blind subjects recruit key regions in the ventral visual pathway during nonhaptic tactile shape discrimination. The activation of LOtv by nonhaptic tactile shape processing in blind and sighted subjects adds further support to the notion that this area subserves an abstract or supramodal representation of shape. Together with our previous findings, our data suggest that the segregation of the efferent projections of the primary visual cortex into a dorsal and ventral visual stream is preserved in individuals blind from birth.
16
Toussaint L, Caissie AF, Blandin Y. Does mental rotation ability depend on sensory-specific experience? J Cogn Psychol 2012. [DOI: 10.1080/20445911.2011.641529]
17
Struiksma ME, Noordzij ML, Postma A. Reference Frame Preferences in Haptics Differ for the Blind and Sighted in the Horizontal but Not in the Vertical Plane. Perception 2011; 40:725-38. [DOI: 10.1068/p6805]
Abstract
We investigated which reference frames are preferred when matching spatial language to the haptic domain. Sighted, low-vision, and blind participants were tested on a haptic-sentence-verification task where participants had to haptically explore different configurations of a ball and a shoe and judge the relation between them. Results from the spatial relation “above”, in the vertical plane, showed that various reference frames are available after haptic inspection of a configuration. Moreover, the pattern of results was similar for all three groups and resembled patterns found for the sighted on visual sentence-verification tasks. In contrast, when judging the spatial relation “in front”, in the horizontal plane, the blind showed a markedly different response pattern. The sighted and low-vision participants did not show a clear preference for either the absolute/relative or the intrinsic reference frame when these frames were dissociated. The blind, on the other hand, showed a clear preference for the intrinsic reference frame. In the absence of a dominant cue, such as gravity in the vertical plane, the blind might emphasise the functional relationship between the objects owing to enhanced experience with haptic exploration of objects.
Affiliation(s)
- Marijn E Struiksma: Utrecht Institute of Linguistics OTS, Utrecht University, Heidelberglaan 8, 3508 TC Utrecht, The Netherlands
- Matthijs L Noordzij: Department of Cognitive Psychology and Ergonomics, University of Twente, The Netherlands
- Albert Postma: Department of Neurology, University Medical Centre Utrecht, The Netherlands
18
Kitada R, Dijkerman HC, Soo G, Lederman SJ. Representing human hands haptically or visually from first-person versus third-person perspectives. Perception 2010; 39:236-54. [PMID: 20402245] [DOI: 10.1068/p6535]
Abstract
Humans can recognise human body parts haptically as well as visually. We employed a mental-rotation task to determine whether participants could adopt a third-person perspective when judging the laterality of life-like human hands. Female participants adopted either a first-person or a third-person perspective using vision (experiment 1) or haptics (experiment 2), with hands presented at various orientations within a horizontal plane. In the first-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the participant's upright orientation, regardless of modality. In the visual third-person perspective task, most participants responded more slowly as hand orientation increasingly deviated from the experimenter's upright orientation; in contrast, less than half of the participants produced this same inverted U-shaped response-time function haptically. In experiment 3, participants were explicitly instructed to adopt a third-person perspective haptically by mentally rotating the rubber hand to the experimenter's upright orientation. Most participants produced an inverted U-shaped function. Collectively, these results suggest that humans can accurately assume a third-person perspective when hands are explored haptically or visually. With less explicit instructions, however, the canonical orientation for hand representation may be more strongly influenced haptically than visually by body-based heuristics, and less easily modified by perspective instructions.
Affiliation(s)
- Ryo Kitada: Division of Cerebral Integration, National Institute for Physiological Sciences, Okazaki, 444-8585, Japan
19
Volcic R, Wijntjes MWA, Kool EC, Kappers AML. Cross-modal visuo-haptic mental rotation: comparing objects between senses. Exp Brain Res 2010; 203:621-7. [PMID: 20437169] [PMCID: PMC2875473] [DOI: 10.1007/s00221-010-2262-y]
Abstract
The simple experience of a coherent percept while looking and touching an object conceals an intriguing issue: different senses encode and compare information in different modality-specific reference frames. We addressed this problem in a cross-modal visuo-haptic mental rotation task. Two objects in various orientations were presented at the same spatial location, one visually and one haptically. Participants had to identify the objects as same or different. The relative angle between viewing direction and hand orientation was manipulated (Aligned versus Orthogonal). In an additional condition (Delay), a temporal delay was introduced between haptic and visual explorations while the viewing direction and the hand orientation were orthogonal to each other. Whereas the phase shift of the response time function was close to 0° in the Aligned condition, we observed a consistent phase shift in the hand’s direction in the Orthogonal condition. A phase shift, although reduced, was also found in the Delay condition. Counterintuitively, these results mean that seen and touched objects do not need to be physically aligned for optimal performance to occur. The present results suggest that the information about an object is acquired in separate visual and hand-centered reference frames, which directly influence each other and which combine in a time-dependent manner.
Affiliation(s)
- Robert Volcic: Psychologisches Institut II, Westfälische Wilhelms-Universität Münster, Fliednerstr. 21, 48149 Münster, Germany
20
Volcic R, Wijntjes MWA, Kappers AML. Haptic mental rotation revisited: multiple reference frame dependence. Acta Psychol (Amst) 2009; 130:251-9. [PMID: 19243731] [DOI: 10.1016/j.actpsy.2009.01.004]
Abstract
The nature of reference frames involved in haptic spatial processing was addressed by means of a haptic mental rotation task. Participants assessed the parity of two objects located in various spatial locations by exploring them with different hand orientations. The resulting response times were fitted with a triangle wave function. Phase shifts were found to depend on the relation between the hands and the objects, and between the objects and the body. We rejected the possibility that a single reference frame drives spatial processing. Instead, we found evidence of multiple interacting reference frames with the hand-centered reference frame playing the dominant role. We propose that a weighted average of the allocentric, the hand-centered and the body-centered reference frames influences the haptic encoding of spatial information. In addition, we showed that previous results can be reinterpreted within the framework of multiple reference frames. This mechanism has proved to be ubiquitously present in haptic spatial processing.
Affiliation(s)
- Robert Volcic: Helmholtz Institute, Utrecht University, Padualaan 8, 3584 CH Utrecht, The Netherlands
21
Stock O, Röder B, Burke M, Bien S, Rösler F. Cortical Activation Patterns during Long-term Memory Retrieval of Visually or Haptically Encoded Objects and Locations. J Cogn Neurosci 2009; 21:58-82. [DOI: 10.1162/jocn.2009.21006]
Abstract
The present study used functional magnetic resonance imaging to delineate cortical networks that are activated when objects or spatial locations encoded either visually (visual encoding group, n = 10) or haptically (haptic encoding group, n = 10) had to be retrieved from long-term memory. Participants learned associations between auditorily presented words and either meaningless objects or locations in a 3-D space. During the retrieval phase one day later, participants had to decide whether two auditorily presented words shared an association with a common object or location. Thus, perceptual stimulation during retrieval was always equivalent, whereas either visually or haptically encoded object or location associations had to be reactivated. Moreover, the number of associations fanning out from each word varied systematically, enabling a parametric increase of the number of reactivated representations. Recall of visual objects predominantly activated the left superior frontal gyrus and the intraparietal cortex, whereas visually learned locations activated the superior parietal cortex of both hemispheres. Retrieval of haptically encoded material activated the left medial frontal gyrus and the intraparietal cortex in the object condition, and the bilateral superior parietal cortex in the location condition. A direct test for modality-specific effects showed that visually encoded material activated more vision-related areas (BA 18/19) and haptically encoded material more motor and somatosensory-related areas. A conjunction analysis identified supramodal and material-unspecific activations within the medial and superior frontal gyrus and the superior parietal lobe including the intraparietal sulcus. These activation patterns strongly support the idea that code-specific representations are consolidated and reactivated within anatomically distributed cell assemblies that comprise sensory and motor processing systems.
22
Craddock M, Lawson R. Do Left and Right Matter for Haptic Recognition of Familiar Objects? Perception 2009; 38:1355-76. [DOI: 10.1068/p6312]
Abstract
Two experiments were carried out to examine the effects of dominant right versus non-dominant left exploration hand and left versus right object orientation on haptic recognition of familiar objects. In experiment 1, participants named 48 familiar objects in two blocks. There was no dominant-hand advantage to naming objects haptically and there was no interaction between exploration hand and object orientation. Furthermore, priming of naming was not reduced by changes of either object orientation or exploration hand. To test whether these results were attributable to a failure to encode object orientation and exploration hand, experiment 2 replicated experiment 1 except that the unexpected task in the second block was to decide whether either exploration hand or object orientation had changed relative to the initial naming block. Performance on both tasks was above chance, demonstrating that this information had been encoded into long-term haptic representations following the initial block of naming. Thus when identifying familiar objects, the haptic processing system can achieve object constancy efficiently across hand changes and object-orientation changes, although this information is often stored even when it is task-irrelevant.
Affiliation(s)
- Matt Craddock: School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
- Rebecca Lawson: School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
23
Medina J, Rapp B. Phantom tactile sensations modulated by body position. Curr Biol 2008; 18:1937-42. [PMID: 19062276] [DOI: 10.1016/j.cub.2008.10.068]
Abstract
Bilateral activation of somatosensory areas after unilateral stimulation is assumed to be mediated by crosshemispheric connections. Despite evidence of bilateral activity in response to unilateral stimulation, neurologically intact humans do not experience bilateral percepts when stimulated on one side of the body. This may be due to active suppression of ipsilateral neural activity by inhibitory mechanisms whose functioning is poorly understood. We describe an individual with left fronto-parietal damage who experiences bilateral sensations in response to unilateral tactile stimulation, a rarely reported condition known as synchiria (previously described in visual, auditory, and somatosensory modalities). Presumably, the phantom sensations result from normal bilateral crosshemispheric activation, combined with a failure of inhibitory mechanisms to prevent bilateral perceptual experiences. Disruption of these mechanisms provides a valuable opportunity to examine their internal functioning. We find that the synchiria rate is affected by hand position relative to multiple reference frames. Specifically, synchiria decreases as the hands move from right (contralesional) to left (ipsilesional) space in trunk- and head-centered reference frames and disappears when the hands are crossed. These findings provide novel evidence that mechanisms that inhibit bilateral percepts operate in multiple reference frames.
Affiliation(s)
- Jared Medina: Department of Neurology, University of Pennsylvania, Philadelphia, PA 19104, USA
24
Merabet LB, Hamilton R, Schlaug G, Swisher JD, Kiriakopoulos ET, Pitskel NB, Kauffman T, Pascual-Leone A. Rapid and reversible recruitment of early visual cortex for touch. PLoS One 2008; 3:e3046. [PMID: 18728773] [PMCID: PMC2516172] [DOI: 10.1371/journal.pone.0003046]
Abstract
Background: The loss of vision has been associated with enhanced performance in non-visual tasks such as tactile discrimination and sound localization. Current evidence suggests that these functional gains are linked to the recruitment of the occipital visual cortex for non-visual processing, but the neurophysiological mechanisms underlying these crossmodal changes remain uncertain. One possible explanation is that visual deprivation is associated with an unmasking of non-visual input into visual cortex.
Methodology/Principal Findings: We investigated the effect of sudden, complete and prolonged visual deprivation (five days) in normally sighted adult individuals while they were immersed in an intensive tactile training program. Following the five-day period, blindfolded subjects performed better on a Braille character discrimination task. In the blindfold group, serial fMRI scans revealed an increase in BOLD signal within the occipital cortex in response to tactile stimulation after five days of complete visual deprivation. This increase in signal was no longer present 24 hours after blindfold removal. Finally, reversible disruption of occipital cortex function on the fifth day (by repetitive transcranial magnetic stimulation; rTMS) impaired Braille character recognition ability in the blindfold group but not in non-blindfolded controls. This disruptive effect was no longer evident once the blindfold had been removed for 24 hours.
Conclusions/Significance: Overall, our findings suggest that sudden and complete visual deprivation in normally sighted individuals can lead to profound, but rapidly reversible, neuroplastic changes by which the occipital cortex becomes engaged in processing of non-visual information. The speed and dynamic nature of the observed changes suggests that normally inhibited or masked functions in the sighted are revealed by visual loss. The unmasking of pre-existing connections and shifts in connectivity represent rapid, early plastic changes, which presumably can lead, if sustained and reinforced, to slower developing, but more permanent structural changes, such as the establishment of new neural connections in the blind.
Affiliation(s)
- Lotfi B. Merabet, Roy Hamilton, Gottfried Schlaug, Jascha D. Swisher, Elaine T. Kiriakopoulos, Naomi B. Pitskel, Thomas Kauffman, Alvaro Pascual-Leone: Department of Neurology, Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts, United States of America
25
Miller JC, Skillman GD. Relationship of stimulus and examinee variables to performance on analogous visual and tactile block construction tasks. Appl Neuropsychol 2008; 15:140-9. [PMID: 18568607] [DOI: 10.1080/09084280802160901]
Abstract
Nonverbal/spatial tests are unavailable for persons with visual impairments, despite decades of documented need and developmental effort. Because past tactile analogs of block design (BD) tests have not been widely accepted, known BD test parameters were compared across visual and tactile designs to assess the applicability of the test across modalities. Contrary to expectations, edge-cueing of designs with no perceptual cohesiveness (PC) improved tactile and visual performance. The expected PC by cueing and field independence (FI) by PC interactions were found for visual, but not tactile, BD. Uncued tactile designs elicited more errors, tending to occur closer to the center of the designs. These data suggest that visual and tactile BD performance cannot be interpreted similarly. Differences may be due to modality-specific demand for various encoding and recoding abilities. The standing model is expanded to account for cross-modality differences in BD performance by including both rotation and block segregation.
Affiliation(s)
- Joseph C Miller: Department of Psychology, The University of North Dakota, Grand Forks, North Dakota 58203, USA
26
Lacey S, Campbell C, Sathian K. Vision and touch: multiple or multisensory representations of objects? Perception 2008; 36:1513-21. [PMID: 18265834] [DOI: 10.1068/p5850]
Abstract
The relationship between visually and haptically derived representations of objects is an important question in multisensory processing and, increasingly, in mental representation. We review evidence for the format and properties of these representations, and address possible theoretical models. We explore the relevance of visual imagery processes and highlight areas for further research, including the neglected question of asymmetric performance in the visuo-haptic cross-modal memory paradigm. We conclude that the weight of evidence suggests the existence of a multisensory representation, spatial in format, and flexibly accessible by both bottom-up and top-down inputs, although efficient comparison between modality-specific representations cannot entirely be ruled out.
Affiliation(s)
- Simon Lacey: Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
27

28
van der Horst BJ, Kappers AML. Haptic Curvature Comparison of Convex and Concave Shapes. Perception 2008; 37:1137-51. [DOI: 10.1068/p5780]
Abstract
A sculpture and the mould in which it was formed are typical examples of objects with an identical, but opponent, surface shape: each convex (ie outward pointing) surface part of a sculpture has a concave counterpart in the mould. The question arises whether the object features of opponent shapes can be compared by touch. Therefore, we investigated whether human observers were able to discriminate the curvatures of convex and concave shapes, irrespective of whether the shape was convex or concave. Using a 2AFC procedure, subjects had to compare the curvature of a convex shape to the curvature of a concave shape. In addition, results were also obtained for congruent shapes, when the curvature of either only convex shapes or only concave shapes had to be compared. Psychometric curves were fitted to the data to obtain threshold and bias results. When subjects explored the stimuli with a single index finger, significantly higher thresholds were obtained for the opponent shapes than for the congruent shapes. However, when the stimuli were touched by two index fingers, one finger per surface, we found similar thresholds. Systematic biases were found when the curvature of opponent shapes was compared: the curvature of a more curved convex surface was judged equal to the curvature of a less curved concave surface. We conclude that human observers had the ability to compare the curvature of shapes with an opposite direction, but that their performance decreased when they sensed the opponent surfaces with the same finger. Moreover, they systematically underestimated the curvature of convex shapes compared to the curvature of concave shapes.
Affiliation(s)
- Bernard J van der Horst: Department of Physics of Man, Helmholtz Instituut, Universiteit Utrecht, Princetonplein 5, NL 3584 CC Utrecht, The Netherlands
- Astrid M L Kappers: Department of Physics of Man, Helmholtz Instituut, Universiteit Utrecht, Princetonplein 5, NL 3584 CC Utrecht, The Netherlands
29
Merabet LB, Swisher JD, McMains SA, Halko MA, Amedi A, Pascual-Leone A, Somers DC. Combined Activation and Deactivation of Visual Cortex During Tactile Sensory Processing. J Neurophysiol 2007; 97:1633-41. [PMID: 17135476] [DOI: 10.1152/jn.00806.2006]
Abstract
The involvement of occipital cortex in sensory processing is not restricted solely to the visual modality. Tactile processing has been shown to modulate higher-order visual and multisensory integration areas in sighted as well as visually deprived subjects; however, the extent of involvement of early visual cortical areas remains unclear. To investigate this issue, we employed functional magnetic resonance imaging in normally sighted, briefly blindfolded subjects with well-defined visuotopic borders as they tactually explored and rated raised-dot patterns. Tactile task performance resulted in significant activation in primary visual cortex (V1) and deactivation of extrastriate cortical regions V2, V3, V3A, and hV4 with greater deactivation in dorsal subregions and higher visual areas. These results suggest that tactile processing affects occipital cortex via two distinct pathways: a suppressive top-down pathway descending through the visual cortical hierarchy and an excitatory pathway arising from outside the visual cortical hierarchy that drives area V1 directly.
Affiliation(s)
- Lotfi B Merabet: Department of Neurology, Beth Israel Deaconess Medical Center, 330 Brookline Avenue, KS 430, Boston, MA 02215, USA
30
Newman SD, Klatzky RL, Lederman SJ, Just MA. Imagining material versus geometric properties of objects: an fMRI study. Cogn Brain Res 2005; 23:235-46. [PMID: 15820631] [DOI: 10.1016/j.cogbrainres.2004.10.020]
Abstract
Two experiments are reported that used fMRI to compare the brain activation during the imagery of material and geometric object features. In the first experiment, participants were to mentally evaluate objects along either a material dimension (roughness, hardness and temperature; e.g., Which is harder, a potato or a mushroom?) or a geometric dimension (size and shape; e.g., Which is larger, a pumpkin or a cucumber?). In the second experiment, when given the name of an object and either a material (roughness and hardness) or geometric (size and shape) property participants rated the object on a scale from 1 to 4. Both experiments were designed to examine the underlying neural substrate that supports the processing of material object properties with respect to geometric properties. Considering the relative amount of activation across the two types of object properties, we found that (1) the interrogation of geometric features differentially evokes visual imagery which involves the region in and around the intraparietal sulcus, (2) the interrogation of material features differentially evokes the processing of semantic object representations which involves the inferior extrastriate region, and (3) the lateral occipital cortex (LOC) responds to shape processing regardless of whether the feature being queried is a material or geometric feature.
Affiliation(s)
- Sharlene D Newman: Department of Psychology, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
31
James TW, Shima DW, Tarr MJ, Gauthier I. Generating complex three-dimensional stimuli (Greebles) for haptic expertise training. Behav Res Methods 2005; 37:353-8. [PMID: 16171207] [DOI: 10.3758/bf03192703]
Abstract
An apparatus is described that accurately measures response times and video records hand movements during haptic object recognition using complex three-dimensional (3-D) forms. The apparatus was used for training participants to become expert at perceptual judgments of 3-D objects (Greebles) using only their sense of touch. Inspiration came from previous visual experiments, and therefore training and testing protocols that were similar to the earlier visual procedures were used. Two sets of Greebles were created. One set (clay Greebles) was hand crafted from clay, and the other (plastic Greebles) was machine created using rapid prototyping technology. Differences between these object creation techniques and their impact on perceptual expertise training are discussed. The full set of these stimuli may be downloaded from www.psychonomic.org/archive/.
32
Prather SC, Votaw JR, Sathian K. Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 2004; 42:1079-87. [PMID: 15093147] [DOI: 10.1016/j.neuropsychologia.2003.12.013]
Abstract
Many studies have found that visual cortical areas are active during tactile perception. Here we used positron emission tomographic (PET) scanning in normally sighted humans to show that extrastriate cortical regions are recruited in a task-specific manner during perceptual processing of tactile stimuli varying in two dimensions. Mental rotation of tactile Forms activated a focus around the anterior part of the left intraparietal sulcus. Since prior studies have reported activity nearby during mental rotation of visual stimuli, this focus appears to be associated with the dorsal visual (visuospatial) pathway. Discrimination between tactile Forms activated the right lateral occipital complex, an object-selective region in the ventral visual (visual Form) pathway. Thus, tactile tasks appear to recruit cortical regions that are active during corresponding visual tasks. Activation of these areas in both visual and tactile tasks could reflect visual imagery during tactile perception, activity in multisensory representations, or both.
Affiliation(s)
- S C Prather
- Department of Neurology, Emory University School of Medicine, WMRB 6000, 1639 Pierce Drive, Atlanta, GA 30322, USA
|
33
|
Robert M, Chevrier E. Does men’s advantage in mental rotation persist when real three-dimensional objects are either felt or seen? Mem Cognit 2003; 31:1136-45. [PMID: 14704028 DOI: 10.3758/bf03196134] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.2] [Indexed: 01/15/2023]
Abstract
In several spatial tasks in which men outperform women in the processing of visual input, the sex difference has been eliminated in matching contexts limited to haptic input. The present experiment tested whether such contrasting results would be reproduced in a mental rotation task. A standard visual condition involved two-dimensional illustrations of three-dimensional stimuli; in a haptic condition, three-dimensional replicas of these stimuli were only felt; in an additional visual condition, these replicas were seen. The results indicated that, irrespective of condition, men's response times were shorter than women's, although accuracy did not significantly differ according to sex. For both men and women, response times were shorter and accuracy was higher in the standard condition than in the haptic one, the best performances being recorded when full replicas were shown. Self-reported solving strategies also varied as a function of sex and condition. The discussion emphasizes the robustness of men's faster speed in mental rotation. With respect to both speed and accuracy, the demanding sequential processing called for in the haptic setting, relative to the standard condition, is underscored, as is the benefit resulting from easier access to depth cues in the visual context with real three-dimensional objects.
Affiliation(s)
- Michèle Robert
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada.
|