1. Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. PMID: 37704829; PMCID: PMC10499799; DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, and then tested against several candidate spatial models. Overall, frontal/supplementary eye field response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior (CMBB), Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior (CMBB), Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada

2. Tani K, Iio S, Kamiya M, Yoshizawa K, Shigematsu T, Fujishima I, Tanaka S. Neuroanatomy of reduced distortion of body-centred spatial coding during body tilt in stroke patients. Sci Rep 2023; 13:11853. PMID: 37481585; PMCID: PMC10363170; DOI: 10.1038/s41598-023-38751-0.
Abstract
Awareness of the direction of the body's (longitudinal) axis is fundamental for action and perception. The perceived body axis orientation is strongly biased during body tilt; however, the neural substrates underlying this phenomenon remain largely unknown. Here, we tackled this issue using a neuropsychological approach in patients with hemispheric stroke. Thirty-seven stroke patients and 20 age-matched healthy controls adjusted a visual line with the perceived body longitudinal axis when the body was upright or laterally tilted by 10 degrees. The bias of the perceived body axis caused by body tilt, termed tilt-dependent error (TDE), was compared between the groups. The TDE was significantly smaller (i.e., performance was less affected by body tilt) in the stroke group (15.9 ± 15.9°) than in the control group (25.7 ± 17.1°). Lesion subtraction analysis and Bayesian lesion-symptom inference revealed that the abnormally reduced TDEs were associated with lesions in the right occipitotemporal cortex, such as the superior and middle temporal gyri. Our findings contribute to a better understanding of the neuroanatomy of body-centred spatial coding during whole-body tilt.
Affiliation(s)
- Keisuke Tani
- Laboratory of Psychology, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, 431-3192, Japan
- Faculty of Psychology, Otemon Gakuin University, 2-1-15 Nishi-Ai, Ibaraki, Osaka, 567-8502, Japan
- Shintaro Iio
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Masato Kamiya
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Kohei Yoshizawa
- Department of Rehabilitation, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Takashi Shigematsu
- Department of Rehabilitation Medicine, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Ichiro Fujishima
- Department of Rehabilitation Medicine, Hamamatsu City Rehabilitation Hospital, Hamamatsu, Shizuoka, 433-8511, Japan
- Satoshi Tanaka
- Laboratory of Psychology, Hamamatsu University School of Medicine, Hamamatsu, Shizuoka, 431-3192, Japan

3. Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. PMID: 35909704; PMCID: PMC9334293; DOI: 10.1093/texcom/tgac026.
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
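The pipeline this abstract describes (visual input to a CNN; CNN features plus initial gaze position to an MLP; MLP output decoded into a saccade vector) can be sketched as a forward pass. This is only an illustrative toy, not the authors' trained network: the image size, kernel bank, layer widths, nonlinearities, and random weights below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(img, kernels):
    """Valid-mode 2-D convolution of a single-channel image with a kernel bank, then ReLU."""
    kh, kw = kernels.shape[1:]
    h, w = img.shape
    out = np.empty((kernels.shape[0], h - kh + 1, w - kw + 1))
    for k, kern in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return np.maximum(out, 0.0)

def mlp_decode(x, w1, b1, w2, b2):
    """One tanh hidden layer, then a linear decoder to a 2-D saccade vector (dx, dy)."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Hypothetical 16x16 "retinal" image containing the saccade target and the landmark.
img = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3)) * 0.1

features = conv2d_relu(img, kernels).reshape(-1)  # CNN stage: visual feature vector
gaze = np.array([0.5, -0.2])                      # initial gaze position (egocentric signal)
x = np.concatenate([features, gaze])              # CNN output + gaze -> MLP input

w1 = rng.standard_normal((x.size, 32)) * 0.05     # untrained placeholder weights
b1 = np.zeros(32)
w2 = rng.standard_normal((32, 2)) * 0.05
b2 = np.zeros(2)

saccade = mlp_decode(x, w1, b1, w2, b2)
print(saccade.shape)  # (2,)
```

In the study itself the weights would be fit on the landmark-shift task so that decoded saccade vectors reproduce the allocentric weighting; only the untrained forward pass is shown here.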
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada

4. Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. PMID: 33318073; PMCID: PMC7877461; DOI: 10.1523/eneuro.0446-20.2020.
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.

5. Bharmauria V, Sajad A, Li J, Yan X, Wang H, Crawford JD. Integration of Eye-Centered and Landmark-Centered Codes in Frontal Eye Field Gaze Responses. Cereb Cortex 2020; 30:4995-5013. PMID: 32390052; DOI: 10.1093/cercor/bhaa090.
Abstract
The visual system is thought to separate egocentric and allocentric representations, but behavioral experiments show that these codes are optimally integrated to influence goal-directed movements. To test if frontal cortex participates in this integration, we recorded primate frontal eye field activity during a cue-conflict memory delay saccade task. To dissociate egocentric and allocentric coordinates, we surreptitiously shifted a visual landmark during the delay period, causing saccades to deviate by 37% in the same direction. To assess the cellular mechanisms, we fit neural response fields against an egocentric (eye-centered target-to-gaze) continuum, and an allocentric shift (eye-to-landmark-centered) continuum. Initial visual responses best-fit target position. Motor responses (after the landmark shift) predicted future gaze position but embedded within the motor code was a 29% shift toward allocentric coordinates. This shift appeared transiently in memory-related visuomotor activity, and then reappeared in motor activity before saccades. Notably, fits along the egocentric and allocentric shift continua were initially independent, but became correlated across neurons just before the motor burst. Overall, these results implicate frontal cortex in the integration of egocentric and allocentric visual information for goal-directed action, and demonstrate the cell-specific, temporal progression of signal multiplexing for this process in the gaze system.
Affiliation(s)
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Amirsaman Sajad
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN 37240, USA
- Jirui Li
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Hongying Wang
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- John Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario, Canada M3J 1P3
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario, Canada M3J 1P3

6. Lv M, Hu S. Asymmetrical Switch Costs in Spatial Reference Frames Switching. Perception 2020; 49:268-280. DOI: 10.1177/0301006620906087.
Abstract
Previous studies found that the egocentric and allocentric reference frames are distinct in their functions, developmental trajectory, and neural basis. However, these two spatial reference frames exist in parallel, and people switch between them frequently in their daily lives. Using an allocentric and egocentric switching task, this study explored the cognitive processes involved in the switch between egocentric and allocentric reference frames and the possible asymmetry of switch costs. Sixty-two participants were tested in congruent (i.e., the target was on the same side in two reference frames) and incongruent conditions (i.e., the target was on a different side in two reference frames). The results indicated that the interaction between allocentric and egocentric reference frames was bidirectional and that the congruency effect was higher in the egocentric task than in the allocentric task. More importantly, switch costs between allocentric and egocentric reference frames were found in both conditions, and the switch cost was higher for the allocentric task. To our knowledge, this was the first study to focus on how switch costs and asymmetrical switch costs occur in allocentric and egocentric task switching.
Affiliation(s)
- Ming Lv
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, China
- Siyuan Hu
- Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, China

7. Ruotolo F, Ruggiero G, Raemaekers M, Iachini T, van der Ham I, Fracasso A, Postma A. Neural correlates of egocentric and allocentric frames of reference combined with metric and non-metric spatial relations. Neuroscience 2019; 409:235-252. DOI: 10.1016/j.neuroscience.2019.04.021.

8. Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. PMID: 31133688; PMCID: PMC6536515; DOI: 10.1038/s41598-019-44267-3.
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear-plug. This unexpected finding highlights the importance of including a no-training group when studying sound localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound localisation re-learning.

9. Drakul A, Bockisch CJ, Tarnutzer AA. Does gravity influence the visual line bisection task? J Neurophysiol 2016; 116:629-36. PMID: 27226452; DOI: 10.1152/jn.00312.2016.
Abstract
The visual line bisection task (LBT) is sensitive to perceptual biases of visuospatial attention, showing slight leftward (for horizontal lines) and upward (for vertical lines) errors in healthy subjects. It may be solved in an egocentric or allocentric reference frame, and there is no obvious need for graviceptive input. However, for other visual line adjustments, such as the subjective visual vertical, otolith input is integrated. We hypothesized that graviceptive input is incorporated when performing the LBT and predicted reduced accuracy and precision when roll-tilted. Twenty healthy right-handed subjects repetitively bisected Earth-horizontal and body-horizontal lines in darkness. Recordings were obtained before, during, and after roll-tilt (±45°, ±90°) for 5 min each. Additionally, bisections of Earth-vertical and oblique lines were obtained in 17 subjects. When roll-tilted ±90° ear-down, bisections of Earth-horizontal (i.e., body-vertical) lines were shifted toward the direction of the head (P < 0.001). However, after correction for vertical line-bisection errors when upright, shifts disappeared. Bisecting body-horizontal lines while roll-tilted did not cause any shifts. The precision of Earth-horizontal line bisections decreased (P ≤ 0.006) when roll-tilted, while no such changes were observed for body-horizontal lines. Regardless of the trial condition and paradigm, the scanning direction of the bisecting cursor (leftward vs. rightward) significantly (P ≤ 0.021) affected line bisections. Our findings reject our hypothesis and suggest that gravity does not modulate the LBT. Roll-tilt-dependent shifts are instead explained by the headward bias when bisecting lines oriented along a body-vertical axis. Increased variability when roll-tilted likely reflects larger variability when bisecting body-vertical than body-horizontal lines.
Affiliation(s)
- A Drakul
- Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- C J Bockisch
- Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland; Department of Otorhinolaryngology, University Hospital Zurich and University of Zurich, Zurich, Switzerland; and Department of Ophthalmology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- A A Tarnutzer
- Department of Neurology, University Hospital Zurich and University of Zurich, Zurich, Switzerland

10. Szumska I, van der Lubbe RHJ, Grzeczkowski L, Herzog MH. Does sensitivity in binary choice tasks depend on response modality? Conscious Cogn 2016; 43:57-65. PMID: 27236357; DOI: 10.1016/j.concog.2016.05.005.
Abstract
In most models of vision, a stimulus is processed in a series of dedicated visual areas, leading to categorization of the stimulus and possibly a decision, which may subsequently be mapped onto a motor response. In these models, stimulus processing is thought to be independent of the response modality. However, in theories of event coding, common coding, and sensorimotor contingency, stimuli may be mapped very specifically onto certain motor responses. Here, we compared performance in a shape localization task using three different response modalities: manual, saccadic, and verbal. Meta-contrast masking was employed at various inter-stimulus intervals (ISIs) to manipulate target visibility. Although we found major differences in reaction times across the three response modalities, accuracy remained at the same level for each response modality (and all ISIs). Our results support the view that stimulus-response (S-R) associations exist only for specific instances, such as reflexes or skills, but not for arbitrary S-R pairings.
Affiliation(s)
- Izabela Szumska
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Department of Cognitive Psychology, University of Finance and Management, Warsaw, Poland.
- Rob H J van der Lubbe
- Department of Cognitive Psychology, University of Finance and Management, Warsaw, Poland; Cognitive Psychology and Ergonomics, University of Twente, The Netherlands
- Lukasz Grzeczkowski
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland

11. Visual attention modulates the asymmetric influence of each cerebral hemisphere on spatial perception. Sci Rep 2016; 6:19190. PMID: 26758349; PMCID: PMC4725350; DOI: 10.1038/srep19190.
Abstract
Although the allocation of brain functions across the two cerebral hemispheres has aroused public interest over the past century, asymmetric interhemispheric cooperation under attentional modulation has been scarcely investigated. An example of interhemispheric cooperation is visual spatial perception. During this process, visual information from each hemisphere is integrated because each half of the visual field predominantly projects to the contralateral visual cortex. Both egocentric and allocentric coordinates can be employed for visual spatial representation, but they activate different areas in primate cerebral hemispheres. Recent studies have determined that egocentric representation affects the reaction time of allocentric perception; furthermore, this influence is asymmetric between the two visual hemifields. The egocentric-allocentric incompatibility effect and its asymmetry between the two hemispheres can produce this phenomenon. Using an allocentric position judgment task, we found that this incompatibility effect was reduced, and its asymmetry was eliminated on an attentional task rather than a neutral task. Visual attention might activate cortical areas that process conflicting information, such as the anterior cingulate cortex, and balance the asymmetry between the two hemispheres. Attention may enhance and balance this interhemispheric cooperation because this imbalance may also be caused by the asymmetric cooperation of each hemisphere in spatial perception.

12. Ruotolo F, van der Ham I, Postma A, Ruggiero G, Iachini T. How coordinate and categorical spatial relations combine with egocentric and allocentric reference frames in a motor task: Effects of delay and stimuli characteristics. Behav Brain Res 2015; 284:167-78. DOI: 10.1016/j.bbr.2015.02.021.

13. Chieffi S, Iachini T, Iavarone A, Messina G, Viggiano A, Monda M. Flanker interference effects in a line bisection task. Exp Brain Res 2014; 232:1327-34. PMID: 24496492; DOI: 10.1007/s00221-014-3851-y.
Abstract
Previous studies have shown that flanking distractors influence line bisection. In the present study, we examined if reaching the flanker after bisecting the line resulted in a variation of flanker interference on line bisection. Right- and left-handed participants were asked to bisect a horizontal line flanked by a dot (bisection task, B-task) or to bisect the line and then to reach the dot (bisection plus reaching task, BR-task). The dot was placed laterally to, and above or below, the line edge. The results showed that in both tasks the subjective midpoint was shifted away from the position of the dot. However, this effect was greater in the BR-task than in the B-task. We suggest that the requirement to perform an action to the flanker in the BR-task induced participants to pay more attention to the dot, enhancing its salience and distorting effects on line bisection.
Affiliation(s)
- Sergio Chieffi
- Department of Experimental Medicine, Second University of Naples, Naples, Italy

14. Zhang M, Tan X, Shen L, Wang A, Geng S, Chen Q. Interaction between allocentric and egocentric reference frames in deaf and hearing populations. Neuropsychologia 2014; 54:68-76. DOI: 10.1016/j.neuropsychologia.2013.12.015.

15. Schütz I, Henriques DYP, Fiehler K. Gaze-centered spatial updating in delayed reaching even in the presence of landmarks. Vision Res 2013; 87:46-52. PMID: 23770521; DOI: 10.1016/j.visres.2013.06.001.
Abstract
Previous results suggest that the brain predominantly relies on a constantly updated gaze-centered target representation to guide reach movements when no other visual information is available. In the present study, we investigated whether the addition of reliable visual landmarks influences the use of spatial reference frames for immediate and delayed reaching. Subjects reached immediately or after a delay of 8 or 12 s to remembered target locations, either with or without landmarks. After target presentation and before reaching they shifted gaze to one of five different fixation points and held their gaze at this location until the end of the reach. With landmarks present, gaze-dependent reaching errors were smaller and more precise than when reaching without landmarks. Delay influenced neither reaching errors nor variability. These findings suggest that when landmarks are available, the brain seems to still use gaze-dependent representations but combine them with gaze-independent allocentric information to guide immediate or delayed reach movements to visual targets.
Affiliation(s)
- I Schütz
- Department of Psychology, Justus-Liebig-University Giessen, Giessen, Germany.

16.
Abstract
Objects in the visual world can be represented in both egocentric and allocentric coordinates. Previous studies have found that allocentric representation can affect the accuracy of spatial judgment relative to an egocentric frame, but not vice versa. Here we asked whether egocentric representation influenced the processing speed of allocentric perception. We measured the manual reaction time of human subjects in a position discrimination task in which the behavioral response purely relied on the target's allocentric location, independent of its egocentric position. We used two conditions of stimulus location: a compatible condition (allocentric left and egocentric left, or allocentric right and egocentric right) and an incompatible condition (allocentric left and egocentric right, or allocentric right and egocentric left). We found that egocentric representation markedly influenced allocentric perception in three ways. First, in a given egocentric location, allocentric perception was significantly faster in the compatible condition than in the incompatible condition. Second, as the target became more eccentric in the visual field, the speed of allocentric perception gradually slowed down in the incompatible condition but remained unchanged in the compatible condition. Third, egocentric-allocentric incompatibility slowed allocentric perception more in the left egocentric side than the right egocentric side. These results cannot be explained by interhemispheric visuomotor transformation and stimulus-response compatibility theory. Our findings indicate that each hemisphere preferentially processes and integrates the contralateral egocentric and allocentric spatial information, and that the right hemisphere receives more ipsilateral egocentric inputs than the left hemisphere does.
17
Tarnutzer AA, Bockisch CJ, Olasagasti I, Straumann D. Egocentric and allocentric alignment tasks are affected by otolith input. J Neurophysiol 2012; 107:3095-106. [DOI: 10.1152/jn.00724.2010]
Abstract
Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line's orientation relative to the head. Whether otolith input contributes to egocentric tasks, and whether the modulation of variability is restricted to vision-dependent paradigms, was unknown. In nine subjects we compared the precision and accuracy of gravicentric and egocentric alignments in various roll positions (upright, 45°, and 75° right-ear down) using a luminous line in darkness (visual paradigm). Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain this roll-angle-dependent modulation in egocentric tasks: 1) variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that the modulation is restricted to vision-dependent alignments. 2) The estimated body-longitudinal axis reflects the roll-dependent variability of perceived earth-vertical; gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses, the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision decreased significantly with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. By analogy with gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.
Affiliation(s)
- Christopher J. Bockisch
- Departments of Neurology, Ophthalmology, and Otorhinolaryngology, University Hospital Zurich, Zurich, Switzerland
18
Janzen G, Haun DBM, Levinson SC. Tracking down abstract linguistic meaning: neural correlates of spatial frame of reference ambiguities in language. PLoS One 2012; 7:e30657. [PMID: 22363462] [PMCID: PMC3281860] [DOI: 10.1371/journal.pone.0030657]
Abstract
This functional magnetic resonance imaging (fMRI) study investigates a crucial parameter in spatial description, namely variants in the frame of reference chosen. Two frames of reference are available in European languages for the description of small-scale assemblages, namely the intrinsic (or object-oriented) frame and the relative (or egocentric) frame. We showed participants a sentence such as “the ball is in front of the man”, ambiguous between the two frames, and then a picture of a scene with a ball and a man – participants had to respond by indicating whether the picture did or did not match the sentence. There were two blocks, in which we induced each frame of reference by feedback. Thus for the crucial test items, participants saw exactly the same sentence and the same picture but now from one perspective, now the other. Using this method, we were able to precisely pinpoint the pattern of neural activation associated with each linguistic interpretation of the ambiguity, while holding the perceptual stimuli constant. Increased brain activity in bilateral parahippocampal gyrus was associated with the intrinsic frame of reference whereas increased activity in the right superior frontal gyrus and in the parietal lobe was observed for the relative frame of reference. The study is among the few to show a distinctive pattern of neural activation for an abstract yet specific semantic parameter in language. It shows with special clarity the nature of the neural substrate supporting each frame of spatial reference.
Affiliation(s)
- Gabriele Janzen
- Behavioural Science Institute, Radboud University Nijmegen, Nijmegen, The Netherlands.
19
Frames of reference and categorical and coordinate spatial relations: a hierarchical organisation. Exp Brain Res 2011; 214:587-95. [PMID: 21912930] [DOI: 10.1007/s00221-011-2857-y]
Abstract
This research concerns the role of categorical and coordinate spatial relations, and of allocentric and egocentric frames of reference, in processing spatial information. To this end, we asked whether spatial information is first encoded with respect to a frame of reference or with respect to categorical/coordinate spatial relations. Participants had to judge whether two vertical bars appeared on the same side (categorical) or at the same distance (coordinate) with respect to the centre of a horizontal bar (allocentric) or to their body midline (egocentric). The key manipulation was the timing of the instructions: one instruction (reference frame or spatial relation) was given before stimulus presentation, the other after. If spatial processing requires egocentric/allocentric encoding before coordinate/categorical encoding, then spatial judgements should be facilitated when the frame of reference is specified in advance. In contrast, if the categorical and coordinate dimensions are primary, facilitation should appear when the spatial relation is specified in advance. Results showed that participants were more accurate and faster when the reference frame, rather than the type of spatial relation, was provided before stimulus presentation. Furthermore, a selective facilitation was found for coordinate and categorical judgements after egocentric and allocentric cues, respectively. These results suggest a hierarchical structure of spatial information processing in which reference frames play a primary role and selectively interact with subsequent processing of spatial relations.
20
Ruotolo F, van der Ham IJM, Iachini T, Postma A. The relationship between allocentric and egocentric frames of reference and categorical and coordinate spatial information processing. Q J Exp Psychol (Hove) 2011; 64:1138-56. [PMID: 21271464] [DOI: 10.1080/17470218.2010.539700]
Abstract
We report two experiments on the relationship between allocentric/egocentric frames of reference and categorical/coordinate spatial relations. Jager and Postma (2003) suggest two theoretical possibilities about their relationship: categorical judgements are better when combined with an allocentric reference frame and coordinate judgements with an egocentric reference frame (interaction hypothesis); allocentric/egocentric and categorical/coordinate form independent dimensions (independence hypothesis). Participants saw stimuli comprising two vertical bars (targets), one above and the other below a horizontal bar. They had to judge whether the targets appeared on the same side (categorical) or at the same distance (coordinate) with respect either to their body-midline (egocentric) or to the centre of the horizontal bar (allocentric). The results from Experiment 1 showed a facilitation in the allocentric and categorical conditions. In line with the independence hypothesis, no interaction effect emerged. To see whether the results were affected by the visual salience of the stimuli, in Experiment 2 the luminance of the horizontal bar was reduced. As a consequence, a significant interaction effect emerged indicating that categorical judgements were more accurate than coordinate ones, and especially so in the allocentric condition. Furthermore, egocentric judgements were as accurate as allocentric ones with a specific improvement when combined with coordinate spatial relations. The data from Experiment 2 showed that the visual salience of stimuli affected the relationship between allocentric/egocentric and categorical/coordinate dimensions. This suggests that the emergence of a selective interaction between the two dimensions may be modulated by the characteristics of the task.
Affiliation(s)
- Francesco Ruotolo
- Department of Psychology, Second University of Naples, Naples, Italy.
21
Byrne PA, Crawford JD. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach. J Neurophysiol 2010; 103:3054-69. [DOI: 10.1152/jn.01008.2009]
Abstract
It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark “shift” during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric–allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration—despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment—had a strong influence on egocentric–allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
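The reliability-weighted combination described in this abstract can be sketched as standard inverse-variance (maximum-likelihood) cue integration with an additional stability factor applied to the allocentric cue. The function name, the 0-to-1 `stability` parameter, and its multiplicative placement are illustrative assumptions for exposition, not the authors' actual parameterization.

```python
def mle_combine(x_ego, var_ego, x_allo, var_allo, stability=1.0):
    """Combine egocentric and allocentric position estimates.

    Reliability is the inverse of each cue's variance; `stability` is a
    hypothetical 0..1 factor standing in for the landmark-stability
    heuristic, down-weighting the allocentric cue when landmarks appear
    unstable (e.g. vibrating).
    """
    r_ego = 1.0 / var_ego                   # egocentric reliability
    r_allo = stability * (1.0 / var_allo)   # heuristic scales allocentric reliability
    w_allo = r_allo / (r_ego + r_allo)      # relative allocentric weight
    estimate = (1.0 - w_allo) * x_ego + w_allo * x_allo
    var_combined = 1.0 / (r_ego + r_allo)   # MLE-combined variance
    return estimate, w_allo, var_combined
```

With equal variances the two cues are weighted equally, while setting the stability factor to zero recovers a purely egocentric plan; this is the qualitative behavior the cue-conflict experiment probes.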
Affiliation(s)
- Patrick A. Byrne
- Centre for Vision Research and Canadian Action and Perception Network
- J. Douglas Crawford
- Centre for Vision Research, Canadian Action and Perception Network, and Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Sciences, York University, Toronto, Canada
22
Influence of gaze elevation on estimating the possibility of passing under high obstacles during body tilt. Exp Brain Res 2008; 193:19-28. [DOI: 10.1007/s00221-008-1589-0]
23
Bringoux L, Robic G, Gauthier GM, Vercher JL. Judging beforehand the possibility of passing under obstacles without motion: the influence of egocentric and geocentric frames of reference. Exp Brain Res 2007; 185:673-80. [PMID: 17989965] [DOI: 10.1007/s00221-007-1194-7]
Affiliation(s)
- L Bringoux
- UMR CNRS 6152 Mouvement & Perception, Faculté des Sciences du Sport, Université de la Méditerranée, 163, avenue de Luminy CP 910, 13288 Marseille Cedex 9, France.
24
Coello Y, Delevoye-Turrell Y. Embodiment, spatial categorisation and action. Conscious Cogn 2007; 16:667-83. [PMID: 17728152] [DOI: 10.1016/j.concog.2007.07.003]
Abstract
Despite the subjective experience of a continuous and coherent external world, we argue that the perception and categorisation of visual space is constrained not only by the spatial resolution of the sensory systems but also, and above all, by pre-reflective representations of the body in action. Recent empirical data from the cognitive neurosciences are presented which suggest that multidimensional categorisation of perceptual space depends on body representations at both an experiential and a functional level. Results are also summarised showing that representations of the body in action are pre-reflective in nature, as only some aspects of the pre-reflective states can be consciously experienced. Finally, a neuro-cognitive model based on the integration of afferent and efferent information is described, which suggests that action simulation and its predicted sensory consequences may be the underlying principle that enables pre-reflective representations of the body for space categorisation and selection for action.
Affiliation(s)
- Yann Coello
- Laboratory URECA (EA 1059), University Charles de Gaulle-Lille3, BP 60149, F.59653 Villeneuve d'Ascq cedex, France.
25
Franz EA, Ford S, Werner S. Brain and cognitive processes of imitation in bimanual situations: making inferences about mirror neuron systems. Brain Res 2007; 1145:138-49. [PMID: 17349983] [DOI: 10.1016/j.brainres.2007.01.136]
Abstract
The relationship between mirror neuron systems and imitation is being widely studied. However, most, if not all, studies on imitation have investigated only the mirror mode. The present study examined whether imitation in a mirror (specular) mode reflects neural processes and psychological principles similar to or distinct from those of imitation in a non-mirror (anatomical) mode. Experiment 1 examined whether altering sensory information can reverse the typical mirror-mode advantage, resulting in superior performance in the non-mirror mode. Experiment 2 examined whether the two modes of imitation rely differentially on target selection (goals) and effector selection (means). Experiment 3 examined whether spatial translations are likely to occur in a typical non-mirror imitation mode. Experiment 4 examined whether non-mirror imitation would be the naturally selected mode of imitation in some situations. All four experiments demonstrated marked differences between mirror and non-mirror modes of imitation. These findings may raise challenges for theories and models of mirror neurons.
Affiliation(s)
- Elizabeth A Franz
- Action, Brain, and Cognition Laboratory, Department of Psychology, Otago University, Box 56, Dunedin, New Zealand.
26
Neggers SFW, Van der Lubbe RHJ, Ramsey NF, Postma A. Interactions between ego- and allocentric neuronal representations of space. Neuroimage 2006; 31:320-31. [PMID: 16473025] [DOI: 10.1016/j.neuroimage.2005.12.028]
Abstract
In the primate brain, visual spatial representations express the distances of objects with regard to different references. In the parietal cortex, distances are thought to be represented with respect to the body (egocentric representation), and in superior temporal cortices with respect to other objects, independent of the observer (allocentric representation). However, these representations of space are interdependent, complicating such distinctions. Specifically, an object's position within a background frame strongly biases egocentric position judgments. This bias, however, is absent for pointing movements towards that same object. More recent theories state that dorsal parietal spatial representations subserve visuomotor processing, whereas temporal-lobe representations subserve memory and cognition. It may therefore be hypothesized that parietal egocentric representations, responsible for movement control, are not influenced by irrelevant allocentric cues, whereas ventral representations are. In an event-related functional magnetic resonance imaging study, subjects judged target-bar locations relative to their body (egocentric task) or to a background bar (allocentric task). Activity in the superior parietal lobule (SPL) increased during egocentric judgments but not during allocentric judgments. The superior temporal gyrus (STG) showed a negative BOLD response during allocentric judgments and no activation during egocentric judgments. During egocentric judgments, the irrelevant background influenced activity in the posterior commissure and the medial temporal gyrus, whereas SPL activity was unaffected. Sensitivity to spatial perceptual biases thus appears limited to occipito-temporal areas, subserving the observed biased cognitive reports of location, and is not found in parietal areas, subserving unbiased goal-directed actions.
Affiliation(s)
- S F W Neggers
- Department of Psychonomics, Helmholtz Institute, University of Utrecht, Heidelberglaan 2, 3584 CS Utrecht, The Netherlands.
28
Parslow DM, Morris RG, Fleminger S, Rahman Q, Abrahams S, Recce M. Allocentric spatial memory in humans with hippocampal lesions. Acta Psychol (Amst) 2005; 118:123-47. [PMID: 15627413] [DOI: 10.1016/j.actpsy.2004.10.006]
Abstract
An immersive virtual reality (IVR) system was used to investigate allocentric spatial memory in a patient (PR) with selective hippocampal damage, and in patients who had undergone unilateral temporal lobectomies (17 right TL and 19 left TL), with their performance compared against normal control groups. A human analogue of the Olton [Olton (1979). Hippocampus, space, and memory. Behavioural Brain Science, 2, 315] spatial maze was developed, consisting of a virtual room, a central virtual circular table, and an array of radially arranged up-turned 'shells'. The participant had to search these shells in turn to find a blue 'cube', which would then 'move' to another location, and so on until all the shells had been target locations. Within-search errors occurred when participants returned to a previously visited location during a search, and between-search errors when they revisited previously successful, but now incorrect, locations. PR made significantly more between-search errors than his control group but showed no increase in within-search errors. The right TL group showed a similar pattern of impairment, whereas the left TL group showed no impairment. This finding implicates the right hippocampal formation in spatial memory functioning in a scenario in which the visual environment was controlled so as to eliminate extraneous visual cues.
Affiliation(s)
- David M Parslow
- Department of Psychology, Institute of Psychiatry, University of London, De Crespigny Park, SE5 8AF London, UK
29
Abstract
Two experiments investigated how angular estimates reflect bias as a function of response mode, geometric plane of variation, number of implicit categories, memory load and intervening task conditions. In Experiment 1, participants made motor and verbal estimates of incline and azimuth from memory. Estimates in both response modes showed signs of bias predicted by a single-category adaptation of Huttenlocher et al. [Huttenlocher, J., Hedges, L. V., & Duncan, S. (1991). Categories and particulars: Prototype effects in estimating spatial location. Psychological Review, 98, 352-376] category-adjustment model. In Experiment 2, participants made motor estimates of azimuth from memory under a variety of conditions. Stimuli in this experiment were distributed along two contiguous spatial categories. Although increasing levels of cognitive load did not produce a graded effect, participants' estimates were biased and were well described by a multiple-category adaptation of the category-adjustment model. Results from both studies supported an implicit region-based model of bias in spatial memory. These findings were discussed with respect to accounts of spatial memory that propose multiple systems or formats for coding.
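The single-category model this abstract adapts combines a noisy fine-grain memory of a value with the category prototype, with the prototype's pull growing as memory noise grows. The sketch below is a minimal Bayesian-style illustration under that reading; the function and variable names are ours, and the precise weighting scheme in Huttenlocher et al. (1991) differs in detail.

```python
def category_adjust(memory_value, prototype, mem_var, proto_var):
    """Estimate a remembered angle (or location) by blending the noisy
    fine-grain memory with the category prototype.

    The weight on the fine-grain memory falls as memory noise (mem_var)
    grows, so noisier memories are pulled harder toward the prototype.
    """
    w_mem = proto_var / (proto_var + mem_var)  # weight on fine-grain memory
    return w_mem * memory_value + (1.0 - w_mem) * prototype
```

Doubling the memory noise strengthens the prototype bias, which is the signature pattern these angular-estimation experiments test for.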
Affiliation(s)
- Daniel B M Haun
- Department of Psychology, University of South Carolina, Barnwell College, Columbia, SC 29208, USA.