1
Leo F, Sandini G, Sciutti A. Mental Rotation Skill Shapes Haptic Exploration Strategies. IEEE Transactions on Haptics 2022; 15:339-350. [PMID: 35344495 DOI: 10.1109/toh.2022.3162321]
Abstract
Haptic exploration strategies have traditionally been studied with a focus on hand movements, neglecting how objects are moved in space. In daily-life situations, however, touch and movement cannot be disentangled. Furthermore, the relation between object manipulation, performance in haptic tasks, and spatial skill is still poorly understood. In this study, we used iCube, a sensorized cube that records its orientation in space as well as the location of the points of contact on its faces. Participants had to explore the cube faces, on which small pins were positioned in varying numbers, and count the pins on the faces with either an even or an odd number of pins. At the end of this task, they also completed a standard visual mental rotation test (MRT). Higher MRT scores were associated with better performance in the iCube task, in terms of both accuracy and exploration speed, and exploration strategies associated with better performance were identified. High performers tended to rotate the cube so that the explored face kept the same spatial orientation (i.e., they preferentially explored the upward face and rotated iCube to bring the next face into the same orientation). They also explored the same face twice less often and were faster and more systematic in moving from one face to the next. These findings indicate that iCube could be used to infer spatial skill in a more natural and unobtrusive fashion than standard MRTs.
2
Stoycheva P, Kauramäki J, Newell FN, Tiippana K. Haptic recognition memory and lateralisation for verbal and nonverbal shapes. Memory 2021; 29:1043-1057. [PMID: 34309478 DOI: 10.1080/09658211.2021.1957938]
Abstract
Laterality effects generally refer to an advantage for verbal processing in the left hemisphere and for non-verbal processing in the right hemisphere, and are often demonstrated in memory tasks in vision and audition. In contrast, their role in haptic memory is less understood. In this study, we examined haptic recognition memory and laterality for letters and nonsense shapes. We used both upper and lower case letters, with the latter designed as more complex in shape. Participants performed a recognition memory task with the left and right hand separately. Recognition memory performance (capacity and bias-free d') was higher and response times were faster for upper case letters than for lower case letters and nonsense shapes. The right hand performed best for upper case letters when it performed the task after the left hand. This right hand/left hemisphere advantage appeared for upper case letters, but not lower case letters, which also had a lower memory capacity, probably due to their more complex spatial shape. These findings suggest that verbal laterality effects in haptic memory are not very prominent, which may be due to the haptic verbal stimuli being processed mainly as spatial objects without reaching robust verbal coding into memory.
Affiliation(s)
- Polina Stoycheva
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Jaakko Kauramäki
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Kaisa Tiippana
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
3
Ramirez Zegarra R, di Pasquo E, Dall'Asta A, Minopoli M, Armano G, Fieni S, Frusca T, Ghi T. Impact of ultrasound guided training in the diagnosis of the fetal head position during labor: A prospective observational study. Eur J Obstet Gynecol Reprod Biol 2020; 256:308-313. [PMID: 33260000 DOI: 10.1016/j.ejogrb.2020.11.053]
Abstract
OBJECTIVES To assess whether additional training with transabdominal ultrasound improves the accuracy of transvaginal digital examination in assessing the fetal head position during the active stage of labor. METHODS Prospective observational study involving two residents in their first year of training in Obstetrics, with no prior experience in either transvaginal digital examination or ultrasound. Women at term with a fetus in cephalic presentation, in active labor with cervical dilation ≥ 8 cm and ruptured membranes, were included. In the preliminary phase of the study, resident A ("blinded") was assigned to assess the fetal head position by transvaginal digital examination, while resident B ("unmasked") performed transvaginal digital examination following transabdominal ultrasound, which was considered the gold standard for determining the fetal head position. After 50 examinations independently performed by each resident in the training phase, a post-training phase was carried out to compare the accuracy of each resident's diagnosis of the fetal head position by digital assessment. The occiput position was eventually confirmed by ultrasound performed by the main investigator. RESULTS Over a 6-month period, 90 post-training vaginal examinations were performed by each resident. The number of incorrect diagnoses of head position was higher for the "blinded" resident than for the "unmasked" resident who had received the ultrasound training (28/90 or 31.1% vs 15/90 or 16.7%, p = 0.02). For both residents a wrong diagnosis was more likely with non-OA than OA fetuses, but this difference was statistically significant for the "blinded" resident (10/20 or 50% vs 18/70 or 25.7%, p = 0.039) and not for the "unmasked" resident (5/18 or 27.8% vs 10/72 or 13.9%, p = 0.16). CONCLUSION The addition of transabdominal ultrasound as a training tool for determining the fetal head position during labor seems to improve the accuracy of transvaginal digital examination in inexperienced residents.
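The headline comparison in these results (28/90 vs 15/90 incorrect diagnoses) can be reproduced with a standard test of two proportions. The sketch below uses SciPy and the counts quoted in the abstract; it is illustrative only, since the abstract does not state which test the authors applied.

```python
# Chi-square test on the error counts quoted in the abstract:
# 28/90 incorrect diagnoses for the "blinded" resident vs 15/90 for the
# "unmasked" resident. Illustrative only; the authors' exact test is not stated.
from scipy.stats import chi2_contingency

blinded = {"wrong": 28, "right": 90 - 28}
unmasked = {"wrong": 15, "right": 90 - 15}

table = [[blinded["wrong"], blinded["right"]],
         [unmasked["wrong"], unmasked["right"]]]

# Without continuity correction this gives p of about 0.023,
# in line with the reported p = 0.02.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```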
Affiliation(s)
- Ruben Ramirez Zegarra
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy; Department of Obstetrics and Gynecology, St. Joseph Krankenhaus, Berlin, Germany
- Elvira di Pasquo
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Andrea Dall'Asta
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Monica Minopoli
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Giulia Armano
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Stefania Fieni
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Tiziana Frusca
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- Tullio Ghi
- Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
4
Monegato M, Cattaneo Z, Pece A, Vecchi T. Comparing the Effects of Congenital and Late Visual Impairments on Visuospatial Mental Abilities. Journal of Visual Impairment & Blindness 2019. [DOI: 10.1177/0145482x0710100503]
Abstract
This study compared participants who were congenitally visually impaired and those who became visually impaired later in life in a spatial memory task. The latter showed less efficient visuospatial processes than did the former. However, these differences were of a quantitative nature only, indicating common cognitive mechanisms that can be clearly differentiated from those of people who are congenitally blind.
Affiliation(s)
- Maura Monegato
- Department of Psychology, University of Pavia, Piazza Botta 6, 27100 Pavia, Italy, and optometrist, Ophthalmology Unit, Melegnano Hospital, Via Pandina 1, Vizzolo Predabissi (MI), Italy
- Alfredo Pece
- Department chair, Ophthalmology Unit, Melegnano Hospital, Italy
- Tomaso Vecchi
- Professor of experimental psychology, Department of Psychology, University of Pavia, Italy
5
Yasaka K, Mori T, Yamaguchi M, Kaba H. Representations of microgeometric tactile information during object recognition. Cogn Process 2018; 20:19-30. [PMID: 30446884 DOI: 10.1007/s10339-018-0892-3]
Abstract
Object recognition through tactile perception involves two elements: the shape of the object (macrogeometric properties) and the material of the object (microgeometric properties). Here we sought to determine the characteristics of microgeometric tactile representations in object recognition through tactile perception. Participants were directed to recognize objects with different surface materials using either tactile information or visual information. With a quantitative analysis of the cognitive process underlying object recognition, Experiment 1 confirmed that the same eight concepts (composed of rules defining distinct cognitive processes) were generated in both tactile and visual perception to accomplish the task, although an additional concept was generated during the visual task. Experiment 2 focused only on tactile perception. Three tactile objects with different surface materials (plastic, cloth and sandpaper) were used for the object recognition task. The participants answered a questionnaire regarding the process leading to their answers (designed on the basis of the results obtained in Experiment 1) and provided ratings of vividness, familiarity and affective valence. We used these experimental data to investigate whether changes in material attributes (tactile information) change the characteristics of tactile representation. The results showed that differences in tactile information resulted in differences in cognitive processes, vividness, familiarity and emotionality. These two experiments collectively indicate that microgeometric tactile information contributes to object recognition by recruiting various cognitive processes, including episodic memory and emotion, similar to the case of object recognition by visual information.
Affiliation(s)
- Kazuhiko Yasaka
- Department of Physical Therapy, Kochi School of Allied Health and Medical Professions, 6012-10, Nagahama, Kochi, 781-0270, Japan
- Department of Physiology, Kochi Medical School, Kochi University, Nankoku, Kochi, 783-8505, Japan
- Masahiro Yamaguchi
- Department of Physiology, Kochi Medical School, Kochi University, Nankoku, Kochi, 783-8505, Japan
- Hideto Kaba
- Department of Physiology, Kochi Medical School, Kochi University, Nankoku, Kochi, 783-8505, Japan
6
Picard D. Tactual, Visual, and Cross-Modal Transfer of Texture in 5- and 8-Year-Old Children. Perception 2016; 36:722-36. [PMID: 17624118 DOI: 10.1068/p5575]
Abstract
Children's tactual, visual, and cross-modal transfer abilities for texture were investigated in a delayed matching-to-sample paradigm. Transfer performance from vision to touch was found to increase between 5 and 8 years of age, whereas transfer performance from touch to vision did not vary with age and matched touch-to-touch performance. Asymmetrical cross-modal abilities were observed at the age of 8 years, vision-to-touch transfer performance being higher than touch-to-vision transfer performance (experiment 2). This developmental pattern could not be attributed to limitations in the tactual or visual discriminability of the textures or to differences in tactual or visual memory between the two age groups (experiment 1). It is suggested that the increase with age in vision-to-touch performance may be related to the intervention of more efficient top-down perceptual processes in the older children.
Affiliation(s)
- Delphine Picard
- Department of Psychology, University of Montpellier III, Route de Mende, F 34199 Montpellier, France
7
Norman JF, Crabtree CE, Norman HF, Moncrief BK, Herrmann M, Kapley N. Aging and the Visual, Haptic, and Cross-Modal Perception of Natural Object Shape. Perception 2016; 35:1383-95. [PMID: 17214383 DOI: 10.1068/p5504]
Abstract
One hundred observers participated in two experiments designed to investigate aging and the perception of natural object shape. In the experiments, younger and older observers performed either a same/different shape discrimination task (experiment 1) or a cross-modal matching task (experiment 2). Quantitative effects of age were found in both experiments. The effect of age in experiment 1 was limited to cross-modal shape discrimination: there was no effect of age upon unimodal (i.e., within a single perceptual modality) shape discrimination. The effect of age in experiment 2 was eliminated either when the older observers were given an unlimited amount of time to perform the task or when the number of response alternatives was decreased. Overall, the results of the experiments reveal that older observers can effectively perceive 3-D shape from both vision and haptics.
Affiliation(s)
- J Farley Norman
- Department of Psychology, Western Kentucky University, Bowling Green 42101-1030, USA
8
Erdogan G, Yildirim I, Jacobs RA. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach. PLoS Comput Biol 2015; 11:e1004610. [PMID: 26554704 PMCID: PMC4640543 DOI: 10.1371/journal.pcbi.1004610]
Abstract
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models, that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
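The three components named above (a representational language, sensory-specific forward models, and an inference algorithm that inverts them) amount to an analysis-by-synthesis loop. The sketch below is a deliberately toy version of that loop with stand-in shape descriptions, forward models, and scoring; it is not the authors' model or code.

```python
import random

def sample_shape():
    # Prior over modality-independent shape descriptions: a few sized primitives.
    return [(random.choice(["cube", "cylinder", "sphere"]),
             round(random.uniform(0.5, 2.0), 2))
            for _ in range(random.randint(1, 4))]

def render_visual(shape):
    # Stand-in forward model: shape description -> "visual" features.
    return [size for _, size in shape]

def render_haptic(shape):
    # Stand-in forward model: shape description -> "haptic" features.
    return [size * 0.9 for _, size in shape]

def score(predicted, observed):
    # Crude log-likelihood surrogate comparing predicted and observed features.
    if len(predicted) != len(observed):
        return float("-inf")
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

def infer(observed, forward, n_samples=5000):
    # Invert the forward model by brute-force sampling from the prior.
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        candidate = sample_shape()
        s = score(forward(candidate), observed)
        if s > best_score:
            best, best_score = candidate, s
    return best

# The same inference routine is applied to visual or haptic observations,
# which is the sense in which the recovered representation is modality-independent.
true_shape = sample_shape()
print(infer(render_visual(true_shape), render_visual))
print(infer(render_haptic(true_shape), render_haptic))
```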
Affiliation(s)
- Goker Erdogan
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Ilker Yildirim
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laboratory of Neural Systems, The Rockefeller University, New York, New York, United States of America
- Robert A. Jacobs
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
9
Do PT, Moreland JR. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall. Psychol Rep 2014; 114:541-56. [PMID: 24897906 DOI: 10.2466/04.28.pr0.114k17w9]
Abstract
The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.
10
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014 PMCID: PMC4102085 DOI: 10.3389/fpsyg.2014.00730]
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
11
Kalagher H. The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer. Perception 2014; 42:1063-74. [PMID: 24494437 DOI: 10.1068/p7525]
Abstract
Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.
Affiliation(s)
- Hilary Kalagher
- Department of Psychology, Drew University, Madison, NJ 07940, USA
12
Rotation-independent representations for haptic movements. Sci Rep 2013; 3:2595. [PMID: 24005481 PMCID: PMC3763250 DOI: 10.1038/srep02595]
Abstract
The existence of a common mechanism for visual and haptic representations has been reported in object perception. In contrast, representations of movements might be more specific to modalities. Referring to the vertical axis is natural for visual representations, whereas a fixed reference axis might be inappropriate for haptic movements and thus also inappropriate for their representations in the brain. The present study found that visual and haptic movement representations are processed independently. A psychophysical experiment examining mental rotation revealed the well-known effect of rotation angle for visual representations, whereas no such effect was found for haptic representations. We also found no interference between processes for visual and haptic movements in an experiment where different stimuli were presented simultaneously through visual and haptic modalities. These results strongly suggest that (1) there are separate representations of visual and haptic movements, and (2) the haptic process has a rotation-independent representation.
13
Vallet GT, Simard M, Versace R, Mazza S. The perceptual nature of audiovisual interactions for semantic knowledge in young and elderly adults. Acta Psychol (Amst) 2013; 143:253-60. [PMID: 23684850 DOI: 10.1016/j.actpsy.2013.04.009]
Abstract
Audiovisual interactions for familiar objects are at the core of perception. The nature of these interactions depends on whether knowledge is taken to be amodal (abstracted from the senses) or modal (sensory-dependent). According to these approaches, the interactions should be, respectively, semantic and indirect or perceptual and direct. This issue is therefore a central question for memory and perception, yet the nature of these interactions remains unexplored in young and elderly adults. We used a cross-modal priming paradigm combined with a visual masking procedure applied to half of the auditory primes. The data showed similar results in the young and elderly adult groups. The mask interfered with the priming effect in the semantically congruent condition, whereas it facilitated the processing of the visual target in the semantically incongruent condition. These findings indicate that audiovisual interactions are perceptual, and they support the grounded cognition theory.
Affiliation(s)
- Guillaume T Vallet
- Laboratoire d'Étude des Mécanismes Cognitifs, University Lyon 2, 5 Avenue Pierre-Mendès France, 69676 Bron Cedex, France
14
Isolating shape from semantics in haptic-visual priming. Exp Brain Res 2013; 227:311-22. [DOI: 10.1007/s00221-013-3489-1]
15
Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies. Cognition 2012; 126:135-48. [PMID: 23102553 DOI: 10.1016/j.cognition.2012.08.005]
Abstract
We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
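The representational scheme described here (shape primitives for parts, spatial relations among primitives for multi-part objects, and sensory-specific forward models) can be sketched as a small data structure. All names and the two feature projections below are illustrative assumptions, not code or formats from the See and Grasp data set.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Part:
    primitive: str   # e.g. "cylinder", "ellipsoid"
    size: float

@dataclass
class MultiPartObject:
    parts: List[Part]
    # (index of part A, index of part B, spatial relation between them)
    relations: List[Tuple[int, int, str]]

    def visual_features(self) -> List[Tuple[str, float]]:
        # Stand-in sensory-specific forward model for vision.
        return [(p.primitive, p.size) for p in self.parts]

    def haptic_features(self) -> List[float]:
        # Stand-in sensory-specific forward model for touch (graspable extents).
        return [p.size for p in self.parts]

fribble_like = MultiPartObject(
    parts=[Part("ellipsoid", 1.2), Part("cylinder", 0.4)],
    relations=[(0, 1, "attached-above")],
)
print(fribble_like.visual_features())
print(fribble_like.haptic_features())
```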
16
Do PT, Homa D. Exploring the Psychological Structure of Transformational Knowledge in Visual and Haptic Intramodal Conditions Using Multidimensional Scaling. Percept Mot Skills 2012; 115:443-64. [DOI: 10.2466/24.27.23.pms.115.5.443-464]
Abstract
We explored the intramodal relation between perceptual similarity and categorization performance in a psychological space, as indicated by multidimensional scaling (MDS) analysis of similarity judgments. Participants learned to classify transformed object shapes into three categories either visually or haptically via different training procedures (random or systematic), followed by a transfer test. Learning modulated the psychological spaces, but this effect was more prevalent in haptic than in visual tasks. A prototype model for similarity ratings was illustrated in MDS space. The prototypes were scaled at the center of a category rather than mirroring the bidirectional paths of their origins: although they converged at the apex of two transformational trajectories, the category prototypes anchored at the centroid of their respective categories and became more structured as a function of learning. The reduced tendency to make errors (i.e., higher accuracy) in recognizing and classifying the category prototypes suggested that the prototypical representation of a category, abstracted by exemplar averaging, functioned more as novel than as familiar information. Findings are discussed in terms of transformational knowledge, categorical representation in three-dimensional (3D) space, and intramodal visual and haptic similarity.
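A minimal sketch of the MDS step described above, using scikit-learn on a randomly generated stand-in dissimilarity matrix rather than the study's similarity judgments:

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Stand-in pairwise dissimilarities between 12 object shapes
# (symmetric matrix with a zero diagonal), in place of real similarity ratings.
n_objects = 12
raw = rng.uniform(1.0, 8.0, size=(n_objects, n_objects))
dissimilarity = (raw + raw.T) / 2.0
np.fill_diagonal(dissimilarity, 0.0)

# Two-dimensional configuration of the kind used to visualise a psychological space.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords.shape)   # (12, 2): one point per object in the recovered space
print(mds.stress_)    # badness of fit of the configuration
```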
17
Fernandes AM, Albuquerque PB. Tactual perception: a review of experimental variables and procedures. Cogn Process 2012; 13:285-301. [PMID: 22669262 DOI: 10.1007/s10339-012-0443-2]
Abstract
This paper reviews the literature on tactual perception. Throughout this review, we highlight some of the most relevant aspects of the touch literature: type of stimuli, type of participants, type of tactile exploration and, finally, the interaction between touch and other senses. Regarding type of stimuli, we analyse studies with abstract stimuli such as vibrations, with two- and three-dimensional stimuli, and with concrete stimuli, considering the relation between familiar and unfamiliar stimuli and the haptic perception of faces. Under the "type of participants" topic, we distinguish studies with blind participants from studies with children and adults, and also provide an overview of sex differences in performance. The type of tactile exploration is examined in terms of active and passive touch, the relevance of movement in touch, and the relation between haptic exploration and time. Finally, interactions between touch and vision, touch and smell, and touch and taste are explored in the last topic. The review ends with an overall conclusion on the state of the art of the tactual perception literature. With this work, we intend to present an organised overview of the main variables in touch experiments, compiling aspects reported in the tactual literature and providing both a summary of previous findings and, through their implications, a guide to the design of future work on tactual perception and memory.
18
Lin CL, Shaw FZ, Young KY, Lin CT, Jung TP. EEG correlates of haptic feedback in a visuomotor tracking task. Neuroimage 2012; 60:2258-73. [PMID: 22348883 DOI: 10.1016/j.neuroimage.2012.02.008]
Abstract
This study investigates the temporal brain dynamics associated with haptic feedback in a visuomotor tracking task. Haptic feedback with deviation-related forces was used throughout tracking experiments in which subjects' behavioral responses and electroencephalogram (EEG) data were simultaneously measured. Independent component analysis was employed to decompose the acquired EEG signals into temporally independent time courses arising from distinct brain sources. Clustering analysis was used to extract independent components that were comparable across participants. The resultant independent brain processes were further analyzed via time-frequency analysis (event-related spectral perturbation) and event-related coherence (ERCOH) to contrast brain activity during tracking experiments with or without haptic feedback. Across subjects, in epochs with haptic feedback, components with equivalent dipoles in or near the right motor region exhibited greater alpha band power suppression. Components with equivalent dipoles in or near the left frontal, central, left motor, right motor, and parietal regions exhibited greater beta-band power suppression, while components with equivalent dipoles in or near the left frontal, left motor, and right motor regions showed greater gamma-band power suppression relative to non-haptic conditions. In contrast, the right occipital component cluster exhibited less beta-band power suppression in epochs with haptic feedback compared to non-haptic conditions. The results of ERCOH analysis of the six component clusters showed that there were significant increases in coherence between different brain networks in response to haptic feedback relative to the coherence observed when haptic feedback was not present. The results of this study provide novel insight into the effects of haptic feedback on the brain and may aid the development of new tools to facilitate the learning of motor skills.
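The processing chain described above (decompose the EEG into independent components, then examine band-limited spectral power per component) can be approximated with generic tools. The sketch below runs FastICA and a spectrogram on simulated data; it stands in for, and is much simpler than, the authors' ERSP and ERCOH analyses.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 250                                   # assumed sampling rate in Hz
n_channels, n_samples = 32, fs * 60        # one minute of simulated 32-channel EEG
eeg = rng.standard_normal((n_samples, n_channels))

# Step 1: decompose the recording into temporally independent components.
ica = FastICA(n_components=15, random_state=0, max_iter=500)
sources = ica.fit_transform(eeg)           # shape: (n_samples, n_components)

# Step 2: time-frequency power of one component; averaged over epochs and
# baseline-normalised, this is the raw ingredient of an ERSP.
freqs, times, power = spectrogram(sources[:, 0], fs=fs, nperseg=fs)
alpha = (freqs >= 8) & (freqs <= 13)
beta = (freqs >= 13) & (freqs <= 30)
print("mean alpha power:", power[alpha].mean())
print("mean beta power:", power[beta].mean())
```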
Affiliation(s)
- Chun-Ling Lin
- Brain Research Center, University System of Taiwan, Hsinchu, Taiwan
21
Keehner M. Spatial Cognition Through the Keyhole: How Studying a Real-World Domain Can Inform Basic Science-and Vice Versa. Top Cogn Sci 2011; 3:632-47. [DOI: 10.1111/j.1756-8765.2011.01154.x]
22
Lawson R, Bracken S. Haptic Object Recognition: How Important are Depth Cues and Plane Orientation? Perception 2011; 40:576-97. [DOI: 10.1068/p6786]
Abstract
Raised-line drawings of familiar objects are very difficult to identify with active touch only. In contrast, haptically explored real 3-D objects are usually recognised efficiently, albeit slower and less accurately than with vision. Real 3-D objects have more depth information than outline drawings, but also extra information about identity (eg texture, hardness, temperature). Previous studies have not manipulated the availability of depth information in haptic object recognition whilst controlling for other information sources, so the importance of depth cues has not been assessed. In the present experiments, people named plastic small-scale models of familiar objects. Five versions of bilaterally symmetrical objects were produced. Versions varied only in the amount of depth information: minimal for cookie-cutter and filled-in outlines, partial for squashed and half objects, and full for 3-D models. Recognition was faster and much more accurate when more depth information was available, whether exploration was with both hands or just one finger. Novices found it almost impossible to recognise objects explored with two hand-held probes whereas experts succeeded using probes regardless of the amount of depth information. Surprisingly, plane misorientation did not impair recognition. Unlike with vision, depth information, but not object orientation, is extremely important for haptic object recognition.
Affiliation(s)
- Rebecca Lawson
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
- Sarah Bracken
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
23
Abstract
The aim of this study was to demonstrate that the cross-modal priming effect is perceptual and therefore consistent with the idea that knowledge is modality dependent. We used a two-way cross-modal priming paradigm in two experiments. These experiments were constructed on the basis of a two-phase priming paradigm. In the study phase of Experiment 1, participants had to categorize auditory primes as “animal” or “artifact”. In the test phase, they had to perform the same categorization task with visual targets which corresponded either to the auditory primes presented in the study phase (old items) or to new stimuli (new items). To demonstrate the perceptual nature of the cross-modal priming effect, half of the auditory primes were presented with a visual mask (old-masked items). In the second experiment, the visual stimuli were used as primes and the auditory stimuli as targets, and half of the visual primes were presented with an auditory mask (a white noise). We hypothesized that if the cross-modal priming effect results from an activation of modality-specific representations, then the mask should interfere with the priming effect. In both experiments, the results corroborated our predictions. In addition, we observed a cross-modal priming effect from pictures to sounds in a long-term paradigm for the first time.
24
Kalagher H, Jones SS. Developmental change in young children's use of haptic information in a visual task: the role of hand movements. J Exp Child Psychol 2010; 108:293-307. [PMID: 20974476 DOI: 10.1016/j.jecp.2010.09.004]
Abstract
Preschoolers who explore objects haptically often fail to recognize those objects in subsequent visual tests. This suggests that children may represent qualitatively different information in vision and haptics and/or that children's haptic perception may be poor. In this study, 72 children (2½-5 years of age) and 20 adults explored unfamiliar objects either haptically or visually and then chose a visual match from among three test objects, each matching the exemplar on one perceptual dimension. All age groups chose shape-based matches after visual exploration. Both 5-year-olds and adults also chose shape-based matches after haptic exploration, but younger children did not match consistently in this condition. Certain hand movements performed by children during haptic exploration reliably predicted shape-based matches but occurred at very low frequencies. Thus, younger children's difficulties with haptic-to-visual information transfer appeared to stem from their failure to use their hands to obtain reliable haptic information about objects.
Affiliation(s)
- Hilary Kalagher
- Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
25
Haag S. Effects of vision and haptics on categorizing common objects. Cogn Process 2010; 12:33-9. [PMID: 20721600 DOI: 10.1007/s10339-010-0369-5]
Abstract
Most research on object recognition and categorization centers on vision. However, these phenomena are likely influenced by the commonly used modality of touch. The present study tested this notion by having participants explore three-dimensional objects using vision and haptics in naming and sorting tasks. Results showed greater difficulty naming (recognizing) and sorting (categorizing) objects haptically. For both conditions, error increased from the concrete attribute of size to the more abstract quality of predation, providing behavioral evidence for shared object representation in vision and haptics.
Affiliation(s)
- Susan Haag
- Department of Psychology, Arizona State University, Tempe, AZ, USA
26
Kalenine S, Pinet L, Gentaz E. The visual and visuo-haptic exploration of geometrical shapes increases their recognition in preschoolers. International Journal of Behavioral Development 2010. [DOI: 10.1177/0165025410367443]
Abstract
This study assessed the benefit of a multisensory intervention on the recognition of geometrical shapes in kindergarten children. Two interventions were proposed, both conducted by the teachers and involving exercises focused on the properties of the shapes, but differing in the sensory modalities used to explore them. In the "VH" intervention, the visual and haptic modalities were used to explore raised shapes, while only the visual modality was involved in the "V" (visual) intervention. We compared the effect of the two interventions on the acquisition of conceptual knowledge about squares, rectangles and triangles in 72 preschoolers. Results showed that children progressed more following the VH than the V intervention for rectangles and triangles. Adding the haptic modality to the intervention is beneficial because it allows children to better understand what is included in a shape category. Results are discussed in relation to multimodal coding (in line with embodied theories) and the analytic perception generated by the haptic modality.
27
Chan JS, Simões-Franklin C, Garavan H, Newell FN. Static images of novel, moveable objects learned through touch activate visual area hMT+. Neuroimage 2010; 49:1708-16. [DOI: 10.1016/j.neuroimage.2009.09.068]
28
Abstract
Previous investigations of visual object recognition have found that some views of both familiar and unfamiliar objects promote more efficient recognition performance than other views. These views are considered canonical and are often the views that present the most information about an object's 3-D structure and features in the image. Although objects can also be efficiently recognised with touch alone, little is known about whether some views promote more efficient recognition than others. This may seem unlikely, given that the object's structure and features are readily available to the hand during object exploration. We conducted two experiments to investigate whether canonical views exist in haptic object recognition. In the first, participants were required to position each object in a way that would present the best view for learning the object with touch alone. We found a large degree of consistency of viewpoint position across participants for both familiar and unfamiliar objects. In a second experiment, we found that these consistent, or canonical, views promoted better haptic recognition performance than other, random views of the objects. Interestingly, these haptic canonical views were not necessarily the same as the canonical views normally found in visual perception. Nevertheless, our findings provide support for the idea that the visual and tactile systems are functionally equivalent in terms of how objects are represented in memory and subsequently recognised.
Affiliation(s)
- Andrew T Woods
- School of Psychology and Institute of Neuroscience, Lloyd Building, Trinity College Dublin, Dublin 2, Ireland
29
Graven T. Aspects of object recognition: When touch replaces vision as the dominant sense modality. ACTA ACUST UNITED AC 2009. [DOI: 10.1076/vimr.5.2.101.26263]
30
Lacey S, Tal N, Amedi A, Sathian K. A putative model of multisensory object representation. Brain Topogr 2009; 21:269-74. [PMID: 19330441 PMCID: PMC3156680 DOI: 10.1007/s10548-009-0087-4]
Abstract
This review surveys the recent literature on visuo-haptic convergence in the perception of object form, with particular reference to the lateral occipital complex (LOC) and the intraparietal sulcus (IPS) and discusses how visual imagery or multisensory representations might underlie this convergence. Drawing on a recent distinction between object- and spatially-based visual imagery, we propose a putative model in which LOtv, a subregion of LOC, contains a modality-independent representation of geometric shape that can be accessed either bottom-up from direct sensory inputs or top-down from frontoparietal regions. We suggest that such access is modulated by object familiarity: spatial imagery may be more important for unfamiliar objects and involve IPS foci in facilitating somatosensory inputs to the LOC; by contrast, object imagery may be more critical for familiar objects, being reflected in prefrontal drive to the LOC.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Noa Tal
- Physiology Department, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi
- Physiology Department, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- K. Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA
- Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA
- Department of Psychology, Emory University, Atlanta, GA, USA
- Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
31
Ballesteros S, González M, Mayas J, García-Rodríguez B, Reales JM. Cross-modal repetition priming in young and old adults. ACTA ACUST UNITED AC 2009. [DOI: 10.1080/09541440802311956]
32
Craddock M, Lawson R. Do Left and Right Matter for Haptic Recognition of Familiar Objects? Perception 2009; 38:1355-76. [DOI: 10.1068/p6312]
Abstract
Two experiments were carried out to examine the effects of dominant right versus non-dominant left exploration hand and left versus right object orientation on haptic recognition of familiar objects. In experiment 1, participants named 48 familiar objects in two blocks. There was no dominant-hand advantage to naming objects haptically and there was no interaction between exploration hand and object orientation. Furthermore, priming of naming was not reduced by changes of either object orientation or exploration hand. To test whether these results were attributable to a failure to encode object orientation and exploration hand, experiment 2 replicated experiment 1 except that the unexpected task in the second block was to decide whether either exploration hand or object orientation had changed relative to the initial naming block. Performance on both tasks was above chance, demonstrating that this information had been encoded into long-term haptic representations following the initial block of naming. Thus when identifying familiar objects, the haptic processing system can achieve object constancy efficiently across hand changes and object-orientation changes, although this information is often stored even when it is task-irrelevant.
Affiliation(s)
- Matt Craddock
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
- Rebecca Lawson
- School of Psychology, University of Liverpool, Eleanor Rathbone Building, Bedford Street South, Liverpool L69 7ZA, UK
33
Repetition priming for multisensory stimuli: Task-irrelevant and task-relevant stimuli are associated if semantically related but with no advantage over uni-sensory stimuli. Brain Res 2009; 1251:236-44. [DOI: 10.1016/j.brainres.2008.10.062]
34
Effects of complete monocular deprivation in visuo-spatial memory. Brain Res Bull 2008; 77:112-6. [DOI: 10.1016/j.brainresbull.2008.05.009]
35
Lacey S, Campbell C, Sathian K. Vision and touch: multiple or multisensory representations of objects? Perception 2008; 36:1513-21. [PMID: 18265834 DOI: 10.1068/p5850]
Abstract
The relationship between visually and haptically derived representations of objects is an important question in multisensory processing and, increasingly, in mental representation. We review evidence for the format and properties of these representations, and address possible theoretical models. We explore the relevance of visual imagery processes and highlight areas for further research, including the neglected question of asymmetric performance in the visuo-haptic cross-modal memory paradigm. We conclude that the weight of evidence suggests the existence of a multisensory representation, spatial in format, and flexibly accessible by both bottom-up and top-down inputs, although efficient comparison between modality-specific representations cannot entirely be ruled out.
Affiliation(s)
- Simon Lacey
- Department of Neurology, School of Medicine, Emory University, Atlanta, GA 30322, USA
36
Neuronal substrates of haptic shape encoding and matching: A functional magnetic resonance imaging study. Neuroscience 2008; 152:29-39. [PMID: 18255234 DOI: 10.1016/j.neuroscience.2007.12.021]
37
Molholm S, Martinez A, Shpaner M, Foxe JJ. Object-based attention is multisensory: co-activation of an object's representations in ignored sensory modalities. Eur J Neurosci 2007; 26:499-509. [PMID: 17650120 DOI: 10.1111/j.1460-9568.2007.05668.x]
Abstract
Within the visual modality, it has been shown that attention to a single visual feature of an object, such as speed of motion, results in an automatic transfer of attention to other task-irrelevant features (e.g., colour). An extension of this logic might lead one to predict that such mechanisms also operate across sensory systems. However, connectivity between feature modules across sensory systems is thought to be sparser than that within a given sensory system, where interareal connectivity is extensive, so it is not clear that transfer of attention between sensory systems operates as it does within a sensory system. Using high-density electrical mapping of the event-related potential (ERP) in humans, we tested whether attending to objects in one sensory modality resulted in preferential processing of that object's features within another, task-irrelevant sensory modality. Clear evidence for cross-sensory attention effects was seen: for multisensory stimuli, responses to ignored task-irrelevant information in the auditory and visual domains were selectively enhanced when they were features of the explicitly attended object presented in the attended sensory modality. We conclude that attending to an object within one sensory modality results in coactivation of that object's representations in ignored sensory modalities. The data further suggest that transfer of attention from visual to auditory features operates in a fundamentally different manner than transfer from auditory to visual features, and indicate that visual object representations have a greater influence on their auditory counterparts than vice versa. These data are discussed in terms of 'priming' vs. 'spreading' accounts of attentional transfer.
Affiliation(s)
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, Program in Cognitive Neuroscience and Schizophrenia, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA
38
Fiehler K, Burke M, Engel A, Bien S, Rösler F. Kinesthetic Working Memory and Action Control within the Dorsal Stream. Cereb Cortex 2007; 18:243-53. [PMID: 17548801 DOI: 10.1093/cercor/bhm071]
Abstract
There is wide agreement that the "dorsal (action) stream" processes visual information for movement control. However, movements depend not only on vision but also on tactile and kinesthetic information (i.e., haptics). Using functional magnetic resonance imaging, the present study investigates to what extent networks within the dorsal stream are also utilized for kinesthetic action control and whether they are also involved in kinesthetic working memory. Fourteen blindfolded participants performed a delayed-recognition task in which right-handed movements had to be encoded, maintained, and later recognized without any visual feedback. Encoding of hand movements activated somatosensory areas, the superior parietal lobe (dorsodorsal stream), the anterior intraparietal sulcus (aIPS) and adjoining areas (ventrodorsal stream), premotor cortex, and occipitotemporal cortex (ventral stream). Short-term maintenance of kinesthetic information elicited load-dependent activity in the aIPS and the adjacent anterior portion of the superior parietal lobe (ventrodorsal stream) of the left hemisphere. We propose that the action representation system of the dorsodorsal and ventrodorsal streams is utilized not only for visual but also for kinesthetic action control. Moreover, the present findings demonstrate that networks within the ventrodorsal stream, in particular the left aIPS and closely adjacent areas, are also engaged in working memory maintenance of kinesthetic information.
Collapse
Affiliation(s)
- Katja Fiehler
- Department of Experimental and Biological Psychology, Philipps-Universität Marburg, Gutenbergstr. 18, D-35032 Marburg, Germany.
Collapse
|
39
|
Ingle D. Central visual persistences: II. Effects of hand and head rotations. Perception 2007; 35:1315-29. [PMID: 17214379 DOI: 10.1068/p5488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
In an earlier paper, kinesthetic effects on central visual persistences (CPs) were reported, including the ability to move these images by hand following eye closure. While all CPs could be translated anywhere within the frontal field, the present report documents a more selective influence of manual rotations on CPs in the same subjects. When common objects or figures drawn on cards were rotated (while holding one end of the object or one corner of a card between thumb and forefinger), it was found that CPs of larger objects rotated with the hand. By contrast, CPs of smaller objects, parts of objects, and textures remained stable in space as the hand rotated. It is proposed that CPs of smaller stimuli and textures are represented mainly by the ventral stream (temporal cortex) while larger CPs, which rotate, are represented mainly by the dorsal stream (parietal cortex). A second discovery was that CPs of small objects (but not of line segments or textures) could be rotated when the thumb and fingers surrounded the edges of the object. It is proposed that neuronal convergence of visual and tactile information about shape increases parietal responses to small objects, so that their CPs will rotate. Experiments with CPs offer new tools to infer visual coding differences between ventral and dorsal streams in man.
Collapse
|
40
|
Lacey S, Campbell C. Mental representation in visual/haptic crossmodal memory: evidence from interference effects. Q J Exp Psychol (Hove) 2006; 59:361-76. [PMID: 16618639 DOI: 10.1080/17470210500173232] [Citation(s) in RCA: 42] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Two experiments used visual-, verbal-, and haptic-interference tasks during encoding (Experiment 1) and retrieval (Experiment 2) to examine mental representation of familiar and unfamiliar objects in visual/haptic crossmodal memory. Three competing theories are discussed, which variously suggest that these representations are: (a) visual; (b) dual-code-visual for unfamiliar objects but visual and verbal for familiar objects; or (c) amodal. The results suggest that representations of unfamiliar objects are primarily visual but that crossmodal memory for familiar objects may rely on a network of different representations. The pattern of verbal-interference effects suggests that verbal strategies facilitate encoding of unfamiliar objects regardless of modality, but assist only haptic recognition, regardless of familiarity. The results raise further research questions about all three theoretical approaches.
Collapse
Affiliation(s)
- Simon Lacey
- School of Human Sciences, Southampton Solent University, East Park Terrace, Southampton, UK.
Collapse
|
41
|
James TW, Servos P, Kilgour AR, Huh E, Lederman S. The influence of familiarity on brain activation during haptic exploration of 3-D facemasks. Neurosci Lett 2006; 397:269-73. [PMID: 16420973 DOI: 10.1016/j.neulet.2005.12.052] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2005] [Revised: 10/06/2005] [Accepted: 12/12/2005] [Indexed: 10/25/2022]
Abstract
Little is known about the neural substrates that underlie difficult haptic discrimination of 3-D within-class object stimuli. Recent work [A.R. Kilgour, R. Kitada, P. Servos, T.W. James, S.J. Lederman, Haptic face identification activates ventral occipital and temporal areas: an fMRI study, Brain Cogn. (in press)] suggests that the left fusiform gyrus may contribute to the identification of facemasks that are haptically explored in the absence of vision. Here, we extend this line of research to investigate the influence of familiarity. Subjects were trained extensively to individuate a set of facemasks in the absence of vision using only haptic exploration. Brain activation was then measured using fMRI while subjects performed a haptic face recognition task on familiar and unfamiliar facemasks. A group analysis contrasting familiar and unfamiliar facemasks found that the left fusiform gyrus produced greater activation with familiar facemasks.
Collapse
Affiliation(s)
- Thomas W James
- Department of Psychological and Brain Sciences, 1101 E 10th Street, Indiana University, Bloomington, IN 47405, USA.
Collapse
|
42
|
Cooke T, Jäkel F, Wallraven C, Bülthoff HH. Multimodal similarity and categorization of novel, three-dimensional objects. Neuropsychologia 2006; 45:484-95. [PMID: 16580027 DOI: 10.1016/j.neuropsychologia.2006.02.009] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2005] [Revised: 01/26/2006] [Accepted: 02/20/2006] [Indexed: 11/16/2022]
Abstract
Similarity has been proposed as a fundamental principle underlying mental object representations and capable of supporting cognitive-level tasks such as categorization. However, much of the research has considered connections between similarity and categorization for tasks performed using a single perceptual modality. Considering similarity and categorization within a multimodal context opens up a number of important questions: Are the similarities between objects the same when they are perceived using different modalities or using more than one modality at a time? Is similarity still able to explain categorization performance when objects are experienced multimodally? In this study, we addressed these questions by having subjects explore novel, 3D objects which varied parametrically in shape and texture using vision alone, touch alone, or touch and vision together. Subjects then performed a pair-wise similarity rating task and a free sorting categorization task. Multidimensional scaling (MDS) analysis of similarity data revealed that a single underlying perceptual map whose dimensions corresponded to shape and texture could explain visual, haptic, and bimodal similarity ratings. However, the relative dimension weights varied according to modality: shape dominated texture when objects were seen, whereas shape and texture were roughly equally important in the haptic and bimodal conditions. Some evidence was found for a multimodal connection between similarity and categorization: the probability of category membership increased with similarity while the probability of a category boundary being placed between two stimuli decreased with similarity. In addition, dimension weights varied according to modality in the same way for both tasks. The study also demonstrates the usefulness of 3D printing technology and MDS techniques in the study of visuohaptic object processing.
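A rough sketch of the kind of MDS analysis described above is given below. It runs scikit-learn's metric MDS on simulated per-modality dissimilarity ratings and is purely illustrative: the stimulus parameters, weights, and the correlation-based read-out of dimension salience are assumptions, not the authors' procedure.

```python
# Illustrative sketch, not the authors' code: recover a 2-D perceptual map from
# pairwise dissimilarities with metric MDS, separately per modality, then check
# how strongly each recovered dimension tracks shape vs. texture.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
shape_levels = np.linspace(0, 1, 5)
texture_levels = np.linspace(0, 1, 5)
stims = np.array([(s, t) for s in shape_levels for t in texture_levels])  # 25 objects

def simulated_dissimilarity(weights):
    """Weighted distance in (shape, texture) space plus nonnegative rating noise."""
    d = pdist(stims * weights)                     # weight dimensions before computing distances
    return squareform(d + rng.normal(0, 0.05, d.size).clip(min=0))

# Assumed weights: vision emphasizes shape over texture; haptics weighs them equally
ratings = {"visual": simulated_dissimilarity([1.0, 0.4]),
           "haptic": simulated_dissimilarity([1.0, 1.0])}

for modality, dissim in ratings.items():
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)
    # Correlate recovered dimensions with the generating parameters as a rough
    # proxy for how strongly each dimension is expressed in that modality
    r_shape = max(abs(np.corrcoef(coords[:, k], stims[:, 0])[0, 1]) for k in range(2))
    r_texture = max(abs(np.corrcoef(coords[:, k], stims[:, 1])[0, 1]) for k in range(2))
    print(modality, f"shape r={r_shape:.2f}", f"texture r={r_texture:.2f}")
```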
Collapse
Affiliation(s)
- Theresa Cooke
- Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany.
Collapse
|
43
|
Abstract
Phenomena associated with 'central visual persistences' (CPs) are new to both medical and psychological literature. Five subjects have reported similar CPs: positive afterimages following brief fixation of high-contrast objects or drawings and eye closure. CPs duplicate shapes and colors of single objects, lasting for about 15 s. Unlike retinal afterimages, CPs do not move with the eyes but are stable in extrapersonal space during head or body rotations. CPs may reflect sustained neural activity in neurons of association cortex, which mediate object perception. A remarkable finding is that CPs can be moved in any direction by the (unseen) hand holding the original seen object. Moreover, a CP once formed will 'jump' into an extended hand and 'stick' in that hand as it moves about. The apparent size of a CP of a single object is determined by the size of the gap between finger and thumb, even when no object is touched. These CPs can be either magnified or minified via the grip of the extended hand. The felt orientation of the hand-held object will also determine the orientation of the CP seen in that hand. Thus, kinesthetic signals from hand and arm movements can determine perceived location, size, and orientation of CPs. A neural model based on physiological studies of premotor, temporal, parietal, and prefrontal cortices is proposed to account for these novel phenomena.
Collapse
|
44
|
Amedi A, von Kriegstein K, van Atteveldt NM, Beauchamp MS, Naumer MJ. Functional imaging of human crossmodal identification and object recognition. Exp Brain Res 2005; 166:559-71. [PMID: 16028028 DOI: 10.1007/s00221-005-2396-5] [Citation(s) in RCA: 253] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2004] [Accepted: 11/12/2004] [Indexed: 11/30/2022]
Abstract
The perception of objects is a cognitive function of prime importance. In everyday life, object perception benefits from the coordinated interplay of vision, audition, and touch. The different sensory modalities provide both complementary and redundant information about objects, which may improve recognition speed and accuracy in many circumstances. We review crossmodal studies of object recognition in humans that mainly employed functional magnetic resonance imaging (fMRI). These studies show that visual, tactile, and auditory information about objects can activate cortical association areas that were once believed to be modality-specific. Processing converges either in multisensory zones or via direct crossmodal interaction of modality-specific cortices without relay through multisensory regions. We integrate these findings with existing theories about semantic processing and propose a general mechanism for crossmodal object recognition: The recruitment and location of multisensory convergence zones varies depending on the information content and the dominant modality.
Collapse
Affiliation(s)
- A Amedi
- Laboratory for Magnetic Brain Stimulation, Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
Collapse
|
45
|
Reed CL, Shoham S, Halgren E. Neural substrates of tactile object recognition: an fMRI study. Hum Brain Mapp 2004; 21:236-46. [PMID: 15038005 PMCID: PMC6871926 DOI: 10.1002/hbm.10162] [Citation(s) in RCA: 139] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Abstract
A functional magnetic resonance imaging (fMRI) study was conducted during which seven subjects carried out naturalistic tactile object recognition (TOR) of real objects. Activation maps (conjunctions across subjects) were compared between tasks involving TOR of common real objects, palpation of "nonsense" objects, and rest. The tactile tasks involved similar motor and sensory stimulation, allowing higher tactile recognition processes to be isolated. Compared to nonsense object palpation, the most prominent activation evoked by TOR was in secondary somatosensory areas in the parietal operculum (SII) and insula, confirming a modality-specific path for TOR. Prominent activation was also present in medial and lateral secondary motor cortices, but not in primary motor areas, supporting the high level of sensory and motor integration characteristic of object recognition in the tactile modality. Activation in a lateral occipitotemporal area previously associated with visual object recognition may reflect cross-modal collateral activation. Finally, activation in medial temporal and prefrontal areas may reflect a common final pathway of modality-independent object recognition. This study suggests that TOR involves a complex network including parietal and insular somatosensory association cortices, occipitotemporal visual areas, prefrontal and medial temporal supramodal areas, and medial and lateral secondary motor cortices. It confirms the involvement of somatosensory association areas in the recognition component of TOR, and the existence of a ventrolateral somatosensory pathway for TOR in intact subjects. It challenges the results of previous studies that emphasize the role of visual cortex rather than somatosensory association cortices in higher-level somatosensory cognition.
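The conjunction logic mentioned above can be illustrated with a minimum-statistic sketch: a voxel is retained only if the contrast of interest exceeds threshold in every subject's map. The array shapes, threshold, and data below are hypothetical.

```python
# Hypothetical sketch of a minimum-statistic conjunction across subjects:
# a voxel survives only if the TOR > nonsense-palpation contrast is reliable
# in all subjects. Volume size and threshold are for illustration only.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, vol_shape = 7, (4, 4, 4)              # toy volume for illustration
# Per-subject z-maps for the contrast of interest (TOR vs. nonsense palpation)
z_maps = rng.normal(0, 1, (n_subjects,) + vol_shape)
z_maps[:, 1, 2, 3] += 4.0                         # plant a consistently active voxel

z_threshold = 1.96
# Conjunction: keep voxels exceeding threshold in every subject
conjunction_mask = np.all(z_maps > z_threshold, axis=0)
print(np.argwhere(conjunction_mask))              # -> [[1 2 3]]
```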
Collapse
Affiliation(s)
- Catherine L Reed
- Department of Psychology, University of Denver, Denver, Colorado 80208, USA.
Collapse
|
46
|
Holbrook JB, Bost PR, Cave CB. The effects of study-task relevance on perceptual repetition priming. Mem Cognit 2003; 31:380-92. [PMID: 12795480 DOI: 10.3758/bf03194396] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Repetition priming is easily elicited in many traditional paradigms, and the possibility that perceptual priming is anything other than an automatic consequence of perception has received little consideration. This issue is explored in two experiments. In Experiment 1, participants named the target from a four-item category search study task more quickly than the nontarget study items at a later naming test. Experiment 2 extended this finding to conditions in which stimuli were individually presented at study. In three different study tasks, stimuli relevant to study-task completion elicited priming on a later test, but stimuli presented outside the context of a task did not. In both experiments, recognition was above chance for nonrelevant stimuli, suggesting that participants explicitly remembered stimuli that did not elicit priming. Results suggest that priming is sensitive to study-task demands and may reflect a more adaptive and flexible mechanism for modification of perceptual processing than previously appreciated.
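The priming measure at issue is, in essence, a naming-latency advantage for study-relevant items over items that were present at study but irrelevant to the study task. A toy sketch with simulated latencies (not the study's data) might look like this:

```python
# Toy sketch (simulated latencies, not the study's data): repetition priming
# scored as the naming-latency advantage for study-task-relevant items.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n = 24                                             # hypothetical number of participants
rt_task_relevant = rng.normal(620, 40, n)          # ms, named targets of the study task
rt_task_irrelevant = rng.normal(660, 40, n)        # ms, nontarget study items

priming_effect = rt_task_irrelevant - rt_task_relevant   # positive -> faster for relevant items
t, p = ttest_rel(rt_task_irrelevant, rt_task_relevant)
print(f"mean priming = {priming_effect.mean():.1f} ms, t={t:.2f}, p={p:.3g}")
```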
Collapse
|
47
|
Aleman A, van Lee L, Mantione MH, Verkoijen IG, de Haan EH. Visual imagery without visual experience: evidence from congenitally totally blind people. Neuroreport 2001; 12:2601-4. [PMID: 11496156 DOI: 10.1097/00001756-200108080-00061] [Citation(s) in RCA: 47] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
We explored the ability of congenitally totally blind people (contrasted with age-, sex-, and education-matched blindfolded sighted subjects) to perform tasks which are mediated by visual mental imagery in sighted people. In the first (pictorial) task, subjects had to mentally compare the shape of the outline of three named objects and to indicate the odd one out. In the second (spatial) task, the participants were asked to memorise the position of a target cube in two- and three-dimensional matrices, based on a sequence of spatially based imagery operations. In addition, during half of the trials of both imagery tasks subjects were required to perform a concurrent finger-tapping task, to investigate whether the blind subjects would be more dependent on spatial processing. Although blind participants made significantly more errors than sighted participants, they were well able to perform both the pictorial and the spatial imagery tasks. Interference from the concurrent tapping task affected both groups to the same extent. Our results shed new light on the question of whether early visual experience is necessary for performance on visual imagery tasks, and strongly suggest that vision and haptics may share common representations.
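The interference comparison can be illustrated with a simple sketch: score interference as the increase in errors under concurrent tapping and compare the two groups. The data, group sizes, and error rates below are simulated, not the study's.

```python
# Hypothetical sketch: dual-task interference scored as the increase in imagery
# errors under concurrent finger tapping, compared between blind and sighted groups.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(4)
# Simulated error counts per participant (15 per group), with and without tapping
errors_blind = {"single": rng.poisson(6, 15), "dual": rng.poisson(8, 15)}
errors_sighted = {"single": rng.poisson(4, 15), "dual": rng.poisson(6, 15)}

interference_blind = errors_blind["dual"] - errors_blind["single"]
interference_sighted = errors_sighted["dual"] - errors_sighted["single"]

# Comparable interference in both groups (as reported) shows up as a null difference
t, p = ttest_ind(interference_blind, interference_sighted)
print(f"blind={interference_blind.mean():.1f}, sighted={interference_sighted.mean():.1f}, p={p:.3g}")
```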
Collapse
Affiliation(s)
- A Aleman
- Psychological Laboratory, Department of Psychonomics, Utrecht University, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands
Collapse
|
48
|
Abstract
The horizontal-vertical illusion consists of two lines of the same length (one horizontal and the other vertical) at a 90-degree angle to one another, forming either an inverted-T or an L shape. The illusion occurs when the vertical line is perceived as longer than the horizontal line even though the two have the same physical length. The illusion has been demonstrated both visually and haptically. The present purpose was to assess differences between visual and haptic perception of the illusion, and also whether differences occur between the inverted-T and L-shape configurations. The current study showed a greater effect in haptic perception of the horizontal-vertical illusion than in visual perception. Susceptibility to the illusion was also greater for the inverted-T than for the L shape.
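One common way to express the magnitude of such an illusion is percent overestimation of the vertical line relative to its physical length, compared across modality and configuration. The numbers below are made up purely for illustration and do not come from the study.

```python
# Illustrative computation (made-up numbers): illusion magnitude as percent
# overestimation of the vertical line, split by modality and configuration.
physical_length_mm = 100.0

# Hypothetical mean horizontal lengths judged equal to the 100 mm vertical line
matched = {
    ("visual", "inverted-T"): 108.0,
    ("visual", "L"): 104.0,
    ("haptic", "inverted-T"): 115.0,
    ("haptic", "L"): 107.0,
}

for (modality, config), length in matched.items():
    overestimation = 100.0 * (length - physical_length_mm) / physical_length_mm
    print(f"{modality:6s} {config:10s} vertical overestimated by {overestimation:.1f}%")
```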
Collapse
|
49
|
Visual and tactile memory for 2-D patterns: Effects of changes in size and left-right orientation. Psychon Bull Rev 1997. [DOI: 10.3758/bf03214345] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
50
|
|