1
Streri A, de Hevia MD. How do human newborns come to understand the multimodal environment? Psychon Bull Rev 2023; 30:1171-1186. [PMID: 36862372] [DOI: 10.3758/s13423-023-02260-y]
Abstract
For a long time, newborns were considered human beings devoid of perceptual abilities, who had to learn everything about their physical and social environment with effort. Extensive empirical evidence gathered over the last decades has systematically invalidated this notion. Despite the relatively immature state of their sensory modalities, newborns have perceptions that are acquired through, and triggered by, their contact with the environment. More recently, the study of the fetal origins of the senses has revealed that all of them begin to operate in utero, except for vision, which becomes functional only in the first minutes after birth. This discrepancy between the maturation of the different senses raises the question of how human newborns come to understand our multimodal and complex environment; more precisely, of how the visual mode interacts with the tactile and auditory modes from birth. After defining the tools that newborns use to relate the sensory modalities to one another, we review studies across different fields of research, such as intermodal transfer between touch and vision, auditory-visual speech perception, and the links between the dimensions of space, time, and number. Overall, evidence from these studies supports the idea that human newborns are spontaneously driven, and cognitively equipped, to link the information collected by the different sensory modes in order to create a representation of a stable world.
Affiliation(s)
- Arlette Streri
- Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France
- Maria Dolores de Hevia
- Université Paris Cité, CNRS, Integrative Neuroscience and Cognition Center, F-75006, Paris, France
2
Odagiri R, Yoshida H, Takami A. The difference in attentional focus during exercise affects attention resources. J Phys Ther Sci 2021; 33:887-890. [PMID: 34873368] [PMCID: PMC8636913] [DOI: 10.1589/jpts.33.887]
Abstract
[Purpose] To investigate the effect of attentional focus, external (EF) or internal (IF), during exercise on attention resources from the viewpoint of brain activity. [Participants and Methods] Twenty healthy adults were randomly assigned to an EF group or an IF group. The participants in each group received different verbal instructions before performing a tennis-ball task, in which they threw a tennis ball at a target on the floor with their non-dominant hand, as accurately as possible, while sitting on a chair. During the task, oxygenated hemoglobin (oxy-Hb) in the right dorsolateral prefrontal cortex was continuously measured with a near-infrared spectroscopy device. Task accuracy and the change in oxy-Hb were analyzed statistically. [Results] Although the differences between groups were not statistically significant, both task accuracy and oxy-Hb were higher in the EF group than in the IF group. [Conclusion] Although the accuracy of motor control was superior under EF than under IF, attention resources may have increased under EF compared with IF.
Affiliation(s)
- Rei Odagiri
- Department of Rehabilitation, Hirosaki Stroke and Rehabilitation Center, 1-2-1 Ougi-machi, Hirosaki-shi, Aomori 036-8104, Japan; Department of Comprehensive Rehabilitation Science, Hirosaki University Graduate School of Health Sciences, Japan
- Hideki Yoshida
- Department of Comprehensive Rehabilitation Science, Hirosaki University Graduate School of Health Sciences, Japan
- Akiyoshi Takami
- Department of Comprehensive Rehabilitation Science, Hirosaki University Graduate School of Health Sciences, Japan
3
Abstract
The neural substrates of tactile roughness perception have been investigated in many neuroimaging studies, while relatively little effort has been devoted to the neural representations of visually perceived roughness. In this human fMRI study, we looked for neural activity patterns that could be attributed to five different roughness intensity levels when the stimuli were perceived visually, i.e., in the absence of any tactile sensation. During functional image acquisition, participants viewed video clips displaying a right index fingertip actively exploring the sandpapers that had been used in a preceding behavioural experiment. A whole-brain multivariate pattern analysis found four brain regions in which visual roughness intensities could be decoded: the bilateral posterior parietal cortex (PPC), the primary somatosensory cortex (S1) extending to the primary motor cortex (M1) in the right hemisphere, and the inferior occipital gyrus (IOG). In a follow-up analysis, we tested for correlations between the decoding accuracies and the tactile roughness discriminability obtained from the preceding behavioural experiment. We found no such correlation, although participants were asked during scanning to recall the tactually perceived roughness of the sandpapers. We presume that a better paradigm is needed to reveal any potential visuo-tactile convergence. Nevertheless, the present study identified brain regions that may subserve the discrimination of different intensities of visual roughness, a finding that may help elucidate the neural mechanisms of visual roughness perception in the human brain.
Affiliation(s)
- Junsuk Kim
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
- Isabelle Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
4
Ueno T, Hada Y, Shimizu Y, Yamada T. Relationship between somatosensory event-related potential N140 aberrations and hemispatial agnosia in patients with stroke: a preliminary study. Int J Neurosci 2017; 128:487-494. [PMID: 29076767] [DOI: 10.1080/00207454.2017.1398155]
Abstract
PURPOSE: The somatosensory event-related potential N140 is thought to be related to selective attention. This study aimed to compare the somatosensory event-related potential N140 of healthy subjects with that of patients with stroke, to determine whether N140 and attentiveness are associated in stroke patients with or without hemispatial agnosia. MATERIALS AND METHODS: Normal somatosensory event-related potential N140 values were determined using data from ten healthy subjects. Fifteen patients with stroke were divided into two groups based on the presence of hemispatial neglect. Somatosensory event-related potential N140 components were compared between the two groups. RESULTS: Stimulation of the affected limb in the hemispatial agnosia group resulted in significantly longer N140 latency at the contralateral than at the ipsilateral electrode. This was the inverse of the relationship observed in normal subjects, with stimulation of the intact side in patients with hemispatial agnosia, and with stimulation of both the intact and affected sides in patients without agnosia. In the hemispatial agnosia group, the peak latency of N140 following stimulation of the affected side was significantly longer than that following stimulation of the intact side, and longer than that in patients without agnosia. In addition, abnormal N140 peak latencies were observed at the Cz and ipsilateral electrodes in patients with hemispatial agnosia following stimulation of the intact side. CONCLUSIONS: These findings suggest that the somatosensory event-related potential N140 is independently generated in each hemisphere and may reflect cognitive attention.
Affiliation(s)
- Tomoyuki Ueno
- Faculty of Medicine, University of Tsukuba, Tsukuba City, Japan
- Yasushi Hada
- Faculty of Medicine, University of Tsukuba, Tsukuba City, Japan
- Yukiyo Shimizu
- Faculty of Medicine, University of Tsukuba, Tsukuba City, Japan
- Thoru Yamada
- Division of Clinical Electrophysiology, Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, USA
5
Jao RJ, James TW, James KH. Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia 2015; 77:76-89. [PMID: 26272239] [DOI: 10.1016/j.neuropsychologia.2015.08.008]
Abstract
Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional Magnetic Resonance Imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv, and was consistent with a medial-to-lateral organization that transitioned from a visual to a haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Affiliation(s)
- R Joanne Jao
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
- Thomas W James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
- Karin Harman James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
6
Fujii R, Takahashi T, Toyomura A, Miyamoto T, Ueno T, Yokoyama A. Comparison of cerebral activation involved in oral and manual stereognosis. J Clin Neurosci 2011; 18:1520-3. [PMID: 21868227] [DOI: 10.1016/j.jocn.2011.03.005]
Abstract
Brain activity associated with manual stereognosis has been the focus of increasing recent research. However, although oral stereognosis, defined as the ability to recognize and discriminate the food bolus in the mouth, is important for mastication and swallowing, little information is available about the neural network underlying this function. In the present study, cerebral activation associated with oral stereognosis was evaluated in comparison with manual stereognosis. Brain imaging data were acquired by functional MRI (fMRI) in 16 healthy right-handed young adults without any history of neurological or psychiatric disorders. All subjects had a full dentition without malocclusion. Ten test-shape pieces, each approximately 20 mm × 20 mm × 10 mm, were fabricated for the experiment; all had complicated forms that were difficult to recognize. Subjects were instructed to assess the shape of the test piece in the mouth or in the hand, with the ten pieces assigned randomly for each subject and each run. Stereognosis-specific activation was found in the primary somatosensory area, primary motor area, supramarginal gyrus, premotor area, supplementary motor area, fusiform gyrus, frontopolar area, and dorsolateral prefrontal cortex. Differences in cerebral activation between oral and manual stereognosis were found in the insular cortex and visual association cortex.
Affiliation(s)
- Ryutaro Fujii
- Department of Oral Functional Prosthodontics, Division of Oral Functional Science, Graduate School of Dental Medicine, Hokkaido University, Sapporo, Japan
7
Norman JF, Clayton AM, Norman HF, Crabtree CE. Learning to perceive differences in solid shape through vision and touch. Perception 2008; 37:185-96. [PMID: 18456923] [DOI: 10.1068/p5679]
Abstract
A single experiment was designed to investigate perceptual learning and the discrimination of 3-D object shape. Ninety-six observers were presented with naturally shaped solid objects either visually, haptically, or across the modalities of vision and touch. The observers' task was to judge whether the two objects presented sequentially on any given trial possessed the same or different 3-D shapes. The results revealed that significant perceptual learning occurred in all modality conditions, both unimodal and cross-modal. The amount of perceptual learning, as indexed by increases in hit rate and d', was similar across modality conditions. Hit rates were highest in the unimodal conditions and lowest in the cross-modal conditions. Lengthening the inter-stimulus interval from 3 to 15 s led to increases in hit rates and decreases in response bias. The results also revealed an asymmetry between two otherwise equivalent cross-modal conditions: in particular, perceptual sensitivity was higher in the vision-haptic condition than in the haptic-vision condition. In general, the results indicate that effective cross-modal shape comparisons can be made between vision and active touch, but that complete information transfer does not occur.
Affiliation(s)
- J Farley Norman
- Department of Psychology, 1906 College Heights Blvd. #21030, Western Kentucky University, Bowling Green, KY 42101-1030, USA.
8
Buelte D, Meister IG, Staedtgen M, Dambeck N, Sparing R, Grefkes C, Boroojerdi B. The role of the anterior intraparietal sulcus in crossmodal processing of object features in humans: an rTMS study. Brain Res 2008; 1217:110-8. [DOI: 10.1016/j.brainres.2008.03.075]
9
Takahashi T, Miyamoto T, Terao A, Yokoyama A. Cerebral activation related to the control of mastication during changes in food hardness. Neuroscience 2007; 145:791-4. [DOI: 10.1016/j.neuroscience.2006.12.044]
10
Ohara S, Lenz F, Zhou YD. Sequential neural processes of tactile-visual crossmodal working memory. Neuroscience 2005; 139:299-309. [PMID: 16324794] [DOI: 10.1016/j.neuroscience.2005.05.058]
Abstract
Working memory is essential to learning and performing sensory-motor behaviors, which in many situations require integrating stimuli of one modality with stimuli of another. In the present study, we focused on the neural mechanisms underlying crossmodal working memory. We hypothesized that, in performance of the tactile crossmodal working memory task, there would be sequentially discrete task-correlated neural activities representing the processes of crossmodal working memory. Scalp-recorded event-related potentials were collected from 15 electrodes in humans performing each of four tasks: a tactile-tactile unimodal delayed matching-to-sample task, a tactile-visual crossmodal delayed matching-to-sample task, a tactile unimodal control spatial task, and a tactile crossmodal control spatial task. Two positive event-related potential peaks were observed during the delay of the task: one (late positive component-1) at about 330 ms after the onset of the tactile stimulus, and the other (late positive component-2) at about 600 ms. Late positive component-1 was observed in all four tasks. There was no significant difference in late positive component-1 either between the unimodal tasks or between the crossmodal tasks, but late positive component-1 was significantly larger in the crossmodal tasks than in the unimodal tasks, and showed a specific pattern of larger activity over parietal areas than over frontal areas. Late positive component-2 was not observed in the unimodal matching task but was observed in the other three tasks over parietal areas. During the late delay (1000-1500 ms), significant differences in negative potentials (late negative component) were found between the tasks. The present study shows sequential changes in event-related potentials during the retention period of working memory tasks, indicating that in performance of a crossmodal working memory task there are sequentially discrete neural processes that may represent neural activities related to different cognitive functions, such as the crossmodal transfer of information and the working memory of the stimulus.
Affiliation(s)
- S Ohara
- Department of Neurosurgery, School of Medicine, Johns Hopkins University, 600 North Wolfe Street, Baltimore, MD 21287, USA
11
Tanabe HC, Kato M, Miyauchi S, Hayashi S, Yanagida T. The sensorimotor transformation of cross-modal spatial information in the anterior intraparietal sulcus as revealed by functional MRI. Brain Res Cogn Brain Res 2005; 22:385-96. [PMID: 15722209] [DOI: 10.1016/j.cogbrainres.2004.09.010]
Abstract
The parietal cortex in monkeys and humans has been shown to play an important role in the transformation of sensory information into motor commands. However, it is still unclear whether, in humans, these areas are divided functionally into subregions based on different combinations of sensory and motor modalities. To identify subregions of the parietal cortex involved in sensorimotor information transformation between different modalities, functional MRI was used to examine brain areas activated during tasks requiring different sensorimotor transformations, i.e., various combinations of eye (saccade) or finger movements triggered by visual or somatosensory cues. We then compared the activations between cross-modal conditions (eye movements triggered by somatosensory cues and finger movements triggered by visual cues) and intramodal conditions (eye movements triggered by visual cues and finger movements triggered by somatosensory cues). Although the parietal cortex was involved in all tasks regardless of the sensorimotor combination, the only region activated to a greater degree in the cross-modal conditions than in the intramodal conditions was the anterior portion of the intraparietal sulcus (a-IPS). The results suggest that the a-IPS plays an important role in the sensorimotor transformation of cross-modal spatial information.
Affiliation(s)
- Hiroki C Tanabe
- Yanagida Brain Dynamism Project, Kansai Advanced Research Center, Communications Research Laboratory, Kobe, Japan.
12
Naito E, Roland PE, Grefkes C, Choi HJ, Eickhoff S, Geyer S, Zilles K, Ehrsson HH. Dominance of the right hemisphere and role of area 2 in human kinesthesia. J Neurophysiol 2004; 93:1020-34. [PMID: 15385595] [DOI: 10.1152/jn.00637.2004]
Abstract
We have previously shown that motor areas are engaged when subjects experience illusory limb movements elicited by tendon vibration. Traditionally, however, cytoarchitectonic area 2 is held responsible for kinesthesia. Here we used functional magnetic resonance imaging and cytoarchitectural mapping to examine whether area 2 is engaged in kinesthesia; whether it is engaged bilaterally, given that area 2 in non-human primates has strong callosal connections; which other areas are active members of the network for kinesthesia; and whether there is a right-hemisphere dominance in kinesthesia, as has been suggested. Ten right-handed, blindfolded, healthy subjects participated. The tendon of the extensor carpi ulnaris muscle of the right or left hand was vibrated at 80 Hz, which elicited illusory palmar flexion of the immobile hand (illusion). As a control, we applied identical stimuli to the skin over the processus styloideus ulnae, which did not elicit any illusion (vibration). We found robust activations in cortical motor areas [areas 4a, 4p, and 6; dorsal premotor cortex (PMD) and bilateral supplementary motor area (SMA)] and the ipsilateral cerebellum during kinesthetic illusions (illusion-vibration). The illusions also activated contralateral area 2, and right area 2 was active regardless of whether the illusion involved the right or left hand. Right areas 44 and 45, the anterior part of the intraparietal region (IP1), the caudo-lateral part of the parietal opercular region (OP1), the cortex rostral to PMD, the anterior insula, and the superior temporal gyrus were also activated during illusions of either hand. These right-sided areas were significantly more activated than the corresponding areas in the left hemisphere. The present data, together with our previous results, suggest that human kinesthesia is associated with a network of active brain areas consisting of motor areas, the cerebellum, and right fronto-parietal areas including high-order somatosensory areas. Furthermore, our results provide evidence for a right-hemisphere dominance in the perception of limb movement.
Affiliation(s)
- Eiichi Naito
- Division of Human Brain Research, Department of Neuroscience, Karolinska Institute, Stockholm, Sweden.
13
Wheaton KJ, Thompson JC, Syngeniotis A, Abbott DF, Puce A. Viewing the motion of human body parts activates different regions of premotor, temporal, and parietal cortex. Neuroimage 2004; 22:277-88. [PMID: 15110018] [DOI: 10.1016/j.neuroimage.2003.12.043]
Abstract
Activation of premotor and temporoparietal cortex occurs when we observe others' movements, particularly movements relating to objects. Viewing the motion of different body parts without the context of an object has not been systematically evaluated. In a 3T fMRI study, 12 healthy subjects viewed human face, hand, and leg motion that was not directed at, and did not involve, an object. Activation was identified relative to static images of the same human face, hand, and leg in both individual-subject and group-average data. Four clear activation foci emerged: (1) right MT/V5 activated to all forms of viewed motion; (2) right STS activated to face and leg motion; (3) ventral premotor cortex activated to face, hand, and leg motion in the right hemisphere and to leg motion in the left hemisphere; and (4) anterior intraparietal cortex (aIP) was active bilaterally to viewing hand motion and, in the right hemisphere, to leg motion. In addition, in the group data, a somatotopic activation pattern for viewing face, hand, and leg motion occurred in right ventral premotor cortex. Activation patterns in STS and aIP were more complex: typically, activation foci to viewing two types of human motion showed some overlap. Activation in individual subjects was similar; however, activation to hand motion also occurred in the STS, with a variable location across subjects, explaining the lack of a clear activation focus in the group data. The data indicate that there are selective responses in the human brain to viewing motion of different body parts that are independent of object or tool use.
Affiliation(s)
- Kylie J Wheaton
- Brain Sciences Institute, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
14
Hale KS, Stanney KM. Deriving haptic design guidelines from human physiological, psychophysical, and neurological foundations. IEEE Comput Graph Appl 2004; 24:33-39. [PMID: 15387226] [DOI: 10.1109/mcg.2004.1274059]
Affiliation(s)
- Kelly S Hale
- Industrial Eng. and Management Systems Dept., College of Eng. and Computer Science, University of Central Florida, Orlando 32816, USA.
15
Saito DN, Okada T, Morita Y, Yonekura Y, Sadato N. Tactile-visual cross-modal shape matching: a functional MRI study. Brain Res Cogn Brain Res 2003; 17:14-25. [PMID: 12763188] [DOI: 10.1016/s0926-6410(03)00076-4]
Abstract
The process and location of the integration of information from different sensory modalities remain controversial. We used functional MRI to investigate the neural representation of cross-modal matching between tactile and visual shape information in eleven normal volunteers. During the scan, patterns of 2D shapes were presented tactually and visually, simultaneously. Four different matching tasks were performed: tactile-tactile with eyes closed (TT), tactile-tactile with visual input (TTv), visual-visual with tactile input (VVt), and tactile-visual (TV). The TT task activated the contralateral primary sensorimotor area, as well as the postcentral gyrus, superior parietal lobules, anterior portion of the intraparietal sulcus, secondary somatosensory cortex, thalamus, dorsal premotor area, cerebellum, and supplementary motor area bilaterally, without occipital involvement. Visual matching activated the primary visual cortex and the lingual and fusiform gyri bilaterally. A cross-modal area was identified by subtracting TTv images from TV images, subtracting VVt images from TV images, and then determining the common active areas. One discrete area was active bilaterally: the posterior intraparietal sulcus, close to the parieto-occipital sulcus. These data suggest that shape information from different sensory modalities may be integrated in the posterior intraparietal sulcus during tactile-visual matching tasks.
Affiliation(s)
- Daisuke N Saito
- Department of Physiology, School of Medicine, The University of Tokushima, Tokushima, Japan
16
Cartford MC, Beaver AJ, Wagner KA, Delay ER. Postoperative haptic training facilitates the retrieval of visual-based memories after visual cortex lesions in rats. Physiol Behav 2003; 78:601-9. [PMID: 12782214] [DOI: 10.1016/s0031-9384(03)00045-3]
Abstract
Two experiments examined the effects of postoperative haptic discrimination training on the relearning of a visual maze discrimination in rats with visual cortex lesions. In the first experiment, rats learned a visual intensity discrimination prior to ablation of the lateral Oc2L cortex. Lesioned rats were exposed to a rough/smooth haptic discrimination training condition, a random training condition, or a no-training condition prior to relearning the visual task. Lesioned rats relearned the visual task faster after haptic training than after the other postoperative experiences. The second experiment replicated these procedures in rats in which most of the visual cortex was removed. The lesion-induced relearning deficits in the second experiment were similar to those seen with the smaller Oc2L lesions in the first experiment, supporting the hypothesis that the lateral visual cortex is critical for intensity discrimination. Haptic training also reduced these deficits, but the magnitude of this effect was related to the characteristics of the haptic cue. Postoperative training with haptic cues can thus produce specific and nonspecific information transfer from the intact somatosensory system to the damaged visual system that facilitates visual relearning. Possible implications for neuropsychological rehabilitation are also discussed.
Affiliation(s)
- M Claire Cartford
- Department of Psychology, Regis University, 3333 Regis Boulevard, Denver, CO 80221, USA