1
West TR, Mazurek MH, Perez NA, Razak SS, Gal ZT, McHugh JM, Choi BD, Nahed BV. Navigated Intraoperative Ultrasound Offers Effective and Efficient Real-Time Analysis of Intracranial Tumor Resection and Brain Shift. Oper Neurosurg (Hagerstown) 2024. PMID: 38995025; DOI: 10.1227/ons.0000000000001250.
Abstract
BACKGROUND AND OBJECTIVES Neuronavigation is a fundamental tool in the resection of intracranial tumors. However, it is limited by its calibration to preoperative neuroimaging, which loses accuracy intraoperatively after brain shift. Therefore, surgeons rely on anatomic landmarks or tools like intraoperative MRI to assess the extent of tumor resection (EOR) and update neuronavigation. Recent studies demonstrate that intraoperative ultrasound (iUS) provides point-of-care imaging without the cost or resource utilization of intraoperative MRI, and advances in neuronavigation-guided iUS provide an opportunity for real-time imaging overlaid with neuronavigation to account for brain shift. We assessed the feasibility, efficacy, and benefits of navigated iUS for assessing the EOR and restoring stereotactic accuracy in neuronavigation after brain shift. METHODS This prospective single-center study included patients presenting with intracranial tumors (gliomas, metastases) to an academic medical center. Navigated iUS images were acquired preresection, midresection, and postresection. The EOR was determined by the surgeon intraoperatively and compared with the postoperative MRI report by an independent neuroradiologist. Outcome measures included time to perform the iUS sweep, time to process ultrasound images, and the EOR predicted by the surgeon intraoperatively compared with the postoperative MRI. RESULTS This study included 40 patients with gliomas (n = 18 high-grade, n = 4 low-grade, n = 4 recurrent) and metastases (n = 18). Navigated ultrasound sweeps were performed in all patients (n = 83 sweeps), with a median acquisition time of 5.5 seconds and a median image-processing time of 29.9 seconds. There was 95% concordance between the surgeon's determination of EOR using navigated iUS and the neuroradiologist's determination using postoperative MRI. The sensitivity was 100%, and the specificity was 94%.
CONCLUSION Navigated iUS was successfully used for EOR determination in glioma and metastasis resection. Incorporating navigated iUS into the surgical workflow is safe and efficient and provides a real-time assessment of EOR while accounting for brain shift in intracranial tumor surgeries.
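The reported accuracy figures follow from a standard 2×2 comparison of the surgeon's intraoperative iUS call against the postoperative-MRI reference. A minimal sketch in Python; the cell counts below are hypothetical (the abstract does not report raw counts), chosen only to illustrate how 100% sensitivity, 94% specificity, and 95% concordance can coexist in a 40-patient cohort:

```python
def diagnostic_accuracy(tp, fp, tn, fn):
    """Standard diagnostic-accuracy measures for a binary comparison,
    e.g. surgeon's iUS call of residual tumor vs. postoperative MRI."""
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    concordance = (tp + tn) / (tp + fp + tn + fn)  # overall agreement
    return sensitivity, specificity, concordance

# Hypothetical counts for a 40-patient cohort:
sens, spec, conc = diagnostic_accuracy(tp=5, fp=2, tn=33, fn=0)
```

With these assumed counts, sensitivity is 5/5 = 100%, specificity 33/35 ≈ 94%, and concordance 38/40 = 95%.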
Affiliation(s)
- Timothy R West
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, USA
- Jeffrey M McHugh
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, USA
- Bryan D Choi
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Brian V Nahed
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
2
Zhang SH, Zhao XN, Jiang DQ, Tang SM, Yu C. Ocular dominance-dependent binocular combination of monocular neuronal responses in macaque V1. eLife 2024; 13:RP92839. PMID: 38568729; PMCID: PMC10990486; DOI: 10.7554/elife.92839.
Abstract
Primates rely on two eyes to perceive depth, while maintaining stable vision when either one eye or both eyes are open. Although psychophysical and modeling studies have investigated how monocular signals are combined to form binocular vision, the underlying neuronal mechanisms, particularly in V1 where most neurons exhibit binocularity with varying eye preferences, remain poorly understood. Here, we used two-photon calcium imaging to compare the monocular and binocular responses of thousands of simultaneously recorded V1 superficial-layer neurons in three awake macaques. During monocular stimulation, neurons preferring the stimulated eye exhibited significantly stronger responses compared to those preferring both eyes. However, during binocular stimulation, the responses of neurons preferring either eye were suppressed on average, while those preferring both eyes were enhanced, resulting in similar neuronal responses irrespective of their eye preferences, and an overall response level similar to that with monocular viewing. A neuronally realistic model of binocular combination, which incorporates ocular dominance-dependent divisive interocular inhibition and binocular summation, is proposed to account for these findings.
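The proposed combination rule can be caricatured in a few lines. This is a toy sketch, not the authors' model: the function name, the inhibition weight `w`, and the `od_index` formulation are our assumptions. Monocular drives sum linearly, and a divisive interocular inhibition term scales with the neuron's ocular imbalance:

```python
def binocular_response(drive_pref, drive_nonpref, od_index, w=2.0):
    """Toy ocular dominance-dependent binocular combination.
    od_index: 0 = balanced/binocular neuron, 1 = purely monocular.
    Linear summation of the two monocular drives, divided by an
    interocular inhibition term that grows with ocular imbalance."""
    summation = drive_pref + drive_nonpref
    inhibition = 1.0 + w * od_index * min(drive_pref, drive_nonpref)
    return summation / inhibition
```

Qualitatively as reported: under binocular stimulation an eye-preferring unit (od_index = 1) is suppressed below its monocular response, while a balanced unit (od_index = 0) is enhanced.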
Affiliation(s)
- Sheng-Hui Zhang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Xing-Nan Zhao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Dan-Qing Jiang
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Shi-Ming Tang
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- School of Life Sciences, Peking University, Beijing, China
- IDG-McGovern Institute for Brain Research, Peking University, Beijing, China
- Cong Yu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- IDG-McGovern Institute for Brain Research, Peking University, Beijing, China
3
Uejima T, Mancinelli E, Niebur E, Etienne-Cummings R. The influence of stereopsis on visual saliency in a proto-object based model of selective attention. Vision Res 2023; 212:108304. PMID: 37542763; PMCID: PMC10592191; DOI: 10.1016/j.visres.2023.108304.
Abstract
Some animals, including humans, use stereoscopic vision, which reconstructs spatial information about the environment from the disparity between images captured by eyes in two separate, adjacent locations. Like other sensory information, such stereoscopic information is expected to influence attentional selection. We develop a biologically plausible model of binocular vision to study its effect on bottom-up visual attention, i.e., visual saliency. In our model, the scene is organized in terms of proto-objects on which attention acts, rather than unbound sets of elementary features. We show that taking stereoscopic information into account significantly improves the model's prediction of human eye movements.
Affiliation(s)
- Takeshi Uejima
- The Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA.
- Elena Mancinelli
- The Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Ernst Niebur
- The Solomon Snyder Department of Neuroscience and the Zanvyl Krieger Mind/Brain Institute, The Johns Hopkins University, Baltimore, MD, USA
- Ralph Etienne-Cummings
- The Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
4
Jia K, Goebel R, Kourtzi Z. Ultra-High Field Imaging of Human Visual Cognition. Annu Rev Vis Sci 2023; 9:479-500. PMID: 37137282; DOI: 10.1146/annurev-vision-111022-123830.
Abstract
Functional magnetic resonance imaging (fMRI), the key methodology for noninvasively mapping the functions of the human brain, is limited by low temporal and spatial resolution. Recent advances in ultra-high field (UHF) fMRI provide a mesoscopic (i.e., submillimeter-resolution) tool that allows us to probe laminar and columnar circuits, distinguish bottom-up from top-down pathways, and map small subcortical areas. We review recent work demonstrating that UHF fMRI offers a robust methodology for imaging the brain across cortical depths and columns, providing insights into the brain's organization and functions at unprecedented spatial resolution and advancing our understanding of the fine-scale computations and interareal communication that support visual cognition.
Affiliation(s)
- Ke Jia
- Department of Neurobiology, Affiliated Mental Health Center & Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-machine Integration, State Key Laboratory of Brain-machine Intelligence, Zhejiang University, Hangzhou, China
- NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, China
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Zoe Kourtzi
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
5
Ayzenberg V, Behrmann M. The where, what, and how of object recognition. Trends Cogn Sci 2023; 27:335-336. PMID: 36801163; DOI: 10.1016/j.tics.2023.01.006.
Affiliation(s)
- Vladislav Ayzenberg
- Neuroscience Institute and Psychology Department, Carnegie Mellon University, Pittsburgh, PA, USA.
- Marlene Behrmann
- Neuroscience Institute and Psychology Department, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA.
6
Read JCA. Stereopsis without correspondence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210449. PMID: 36511401; PMCID: PMC9745876; DOI: 10.1098/rstb.2021.0449.
Abstract
Stereopsis has traditionally been considered a complex visual ability, restricted to large-brained animals. The discovery in the 1980s that insects, too, have stereopsis therefore challenged theories of stereopsis. How can such simple brains see in three dimensions? A likely answer is that insect stereopsis has evolved to produce simple behaviour, such as orienting towards the closer of two objects or triggering a strike when prey comes within range. Scientific thinking about stereopsis has been unduly anthropomorphic, for example in assuming that stereopsis must require binocular fusion or a solution of the stereo correspondence problem. In fact, useful behaviour can be produced with very basic stereoscopic algorithms which make no attempt to achieve fusion or correspondence, or to produce even a coarse map of depth across the visual field. This may explain why some aspects of insect stereopsis seem poorly designed from an engineering point of view: for example, paying no attention to whether interocular contrasts or velocities match. Such algorithms demonstrably work well enough in practice for their species, and may prove useful in particular autonomous applications. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Jenny C. A. Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, Tyne and Wear NE2 4HH, UK
7
Linton P. Minimal theory of 3D vision: new approach to visual scale and visual shape. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210455. PMID: 36511406; PMCID: PMC9745885; DOI: 10.1098/rstb.2021.0455.
Abstract
Since Kepler and Descartes in the early 1600s, vision science has been committed to a triangulation model of stereo vision. But in the early 1800s, we realized that disparities are responsible for stereo vision. And we have spent the past 200 years trying to shoehorn disparities back into the triangulation account. The first part of this article argues that this is a mistake, and that stereo vision is a solution to a different problem: the eradication of rivalry between the two retinal images, rather than the triangulation of objects in space. This leads to a 'minimal theory of 3D vision', where 3D vision is no longer tied to estimating the scale, shape, and direction of objects in the world. The second part of this article then asks whether the other aspects of 3D vision, which go beyond stereo vision, really operate at the same level of visual experience as stereo vision. I argue they do not. Whilst we want a theory of real-world 3D vision, the literature risks giving us a theory of picture perception instead. I argue for a two-stage theory, where our purely internal 'minimal' 3D percept (from stereo vision) is linked to the world through cognition. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Paul Linton
- Presidential Scholars in Society and Neuroscience, Center for Science and Society, Columbia University, New York, NY 10027, USA; Italian Academy for Advanced Studies in America, Columbia University, New York, NY 10027, USA; Visual Inference Lab, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
8
Boeken OJ, Markett S. Systems-level decoding reveals the cognitive and behavioral profile of the human intraparietal sulcus. Front Neuroimaging 2023; 1:1074674. PMID: 37555176; PMCID: PMC10406318; DOI: 10.3389/fnimg.2022.1074674.
Abstract
INTRODUCTION The human intraparietal sulcus (IPS) covers large portions of the posterior cortical surface and has been implicated in a variety of cognitive functions. It is, however, unclear how cognitive functions dissociate between the IPS's heterogeneous subdivisions, particularly in relation to their connectivity profiles. METHODS We applied neuroinformatics-driven system-level decoding to three cytoarchitectonically distinct subdivisions (hIP1, hIP2, hIP3) per hemisphere, with the aim of disentangling the cognitive profile of the IPS in conjunction with functionally connected cortical regions. RESULTS The system-level decoding revealed nine functional systems based on meta-analytical associations of IPS subdivisions and their cortical coactivations: two systems, working memory and numeric cognition, centered on all IPS subdivisions, and seven systems, attention, language, grasping, recognition memory, rotation, detection of motion/shapes, and navigation, with varying degrees of dissociation across subdivisions and hemispheres. By probing the spatial overlap between system-level co-activations of the IPS and seven canonical intrinsic resting-state networks, we observed a trend toward more co-activation between hIP1 and the frontoparietal network, between hIP2 and hIP3 and the dorsal attention network, and between hIP3 and the visual and somatomotor networks. DISCUSSION Our results confirm previous findings on the IPS's role in cognition but also point to previously unknown differentiation along the IPS, which presents viable starting points for future work. We also present system-level decoding as a promising approach to functional decoding of the human connectome.
Affiliation(s)
- Ole Jonas Boeken
- Department of Molecular Psychology, Institute for Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
9
Xi S, Zhou Y, Yao J, Ye X, Zhang P, Wen W, Zhao C. Cortical Deficits are Correlated with Impaired Stereopsis in Patients with Strabismus. Neurosci Bull 2022. DOI: 10.1007/s12264-022-00987-7.
Abstract
In this study, we explored the neural mechanism underlying impaired stereopsis and possible functional plasticity after strabismus surgery. We enrolled 18 stereo-deficient patients with intermittent exotropia before and after surgery, along with 18 healthy controls. Functional magnetic resonance imaging data were collected when participants viewed three-dimensional stimuli. Compared with controls, preoperative patients showed hypoactivation in higher-level dorsal (visual and parietal) areas and ventral visual areas. Pre- and postoperative activation did not significantly differ in patients overall; patients with improved stereopsis showed stronger postoperative activation than preoperative activation in the right V3A and left intraparietal sulcus. Worse stereopsis and fusional control were correlated with preoperative hypoactivation, suggesting that cortical deficits along the two streams might reflect impaired stereopsis in intermittent exotropia. The correlation between improved stereopsis and activation in the right V3A after surgery indicates that functional plasticity may underlie the improvement of stereopsis. Thus, additional postoperative strategies are needed to promote functional plasticity and enhance the recovery of stereopsis.
10
French RL, DeAngelis GC. Scene-relative object motion biases depth percepts. Sci Rep 2022; 12:18480. PMID: 36323845; PMCID: PMC9630409; DOI: 10.1038/s41598-022-23219-4.
Abstract
An important function of the visual system is to represent 3D scene structure from a sequence of 2D images projected onto the retinae. During observer translation, the relative image motion of stationary objects at different distances (motion parallax) provides potent depth information. However, if an object moves relative to the scene, this complicates the computation of depth from motion parallax since there will be an additional component of image motion related to scene-relative object motion. To correctly compute depth from motion parallax, only the component of image motion caused by self-motion should be used by the brain. Previous experimental and theoretical work on perception of depth from motion parallax has assumed that objects are stationary in the world. Thus, it is unknown whether perceived depth based on motion parallax is biased by object motion relative to the scene. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object could be either stationary or moving laterally at different velocities, and subjects were asked to judge the depth of the object relative to the plane of fixation. Subjects showed a far bias when object and observer moved in the same direction, and a near bias when object and observer moved in opposite directions. This pattern of biases is expected if subjects confound image motion due to self-motion with that due to scene-relative object motion. These biases were large when the object was viewed monocularly, and were greatly reduced, but not eliminated, when binocular disparity cues were provided. Our findings establish that scene-relative object motion can confound perceptual judgements of depth during self-motion.
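The confound described above can be made concrete with a toy observer that attributes all image motion to self-motion-induced parallax. Everything below is a hypothetical, linearized sketch (the sign convention, image motion in the direction of self-motion signals "far", and the proportionality constant `k` are our assumptions), not the authors' model:

```python
def perceived_depth(true_depth, object_velocity, observer_velocity, k=1.0):
    """Toy observer that attributes ALL image motion to self-motion-induced
    parallax. Depths are relative to fixation; the linearized geometry
    (parallax image motion ~ depth * observer velocity) is an assumption."""
    parallax_motion = k * true_depth * observer_velocity  # from self-motion
    object_motion = k * object_velocity                   # scene-relative motion
    image_motion = parallax_motion + object_motion        # what the retina sees
    return image_motion / (k * observer_velocity)         # inferred depth
```

An object at the fixation depth that moves in the observer's direction acquires image motion that this computation reads as "far"; motion in the opposite direction reads as "near", reproducing the pattern of biases reported above.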
Affiliation(s)
- Ranran L. French
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, USA
Collapse
|
11
Abstract
Stereopsis provides us with a vivid impression of the depth and distance of objects in our 3-dimensional world. Stereopsis is important for a number of everyday visual tasks, including (but not limited to) reaching and grasping, fine visuo-motor control, and navigating in our world. This review briefly discusses the neural substrate for normal binocular vision and stereopsis and its development in primates; outlines some of the issues and limitations of stereopsis tests; and examines some of the factors that limit the typical development of stereopsis, along with the causes and consequences of stereo-deficiency and stereo-blindness. Finally, we review several approaches to improving or recovering stereopsis in both neurotypical individuals and those with stereo-deficiency or stereo-blindness, and outline some emerging strategies for improving stereopsis.
12
A Novel Computer Vision Model for Medicinal Plant Identification Using Log-Gabor Filters and Deep Learning Algorithms. Comput Intell Neurosci 2022; 2022:1189509. PMID: 36203732; PMCID: PMC9532088; DOI: 10.1155/2022/1189509.
Abstract
Computer vision is the science that enables computers and machines to see and perceive image content on a semantic level. It combines concepts, techniques, and ideas from various fields such as digital image processing, pattern matching, artificial intelligence, and computer graphics. A computer vision system is designed to model the human visual system on a functional basis as closely as possible. Deep learning, and in particular biologically inspired Convolutional Neural Networks (CNNs), has significantly contributed to computer vision studies. This research develops a computer vision system that uses CNNs and handcrafted Log-Gabor filters in an ensemble manner to identify medicinal plants based on their leaf textural features. The system was tested on a dataset developed from the Centre of Plant Medicine Research, Ghana (MyDataset), consisting of forty-nine (49) plant species. Using transfer learning, ten pretrained networks, including AlexNet, GoogLeNet, DenseNet201, Inceptionv3, MobileNetv2, ResNet18, ResNet50, ResNet101, VGG16, and VGG19, were used as feature extractors. Averaged across six supervised learning algorithms, the DenseNet201 architecture achieved the best result, at 87% accuracy, and GoogLeNet performed the worst, at 79%. The proposed model (OTAMNet), created by fusing a Log-Gabor layer into the transition layers of the DenseNet201 architecture, achieved 98% accuracy when tested on MyDataset. OTAMNet was also tested on other benchmark datasets, achieving 99% on Flavia, 100% on Swedish Leaf, 99% on MD2020, and 97% on the Folio dataset. A false-positive rate of less than 0.1% was achieved in all cases.
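For readers unfamiliar with the handcrafted half of such an ensemble, the generic frequency-domain 2D Log-Gabor filter can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard Log-Gabor transfer function, not the paper's OTAMNet code; all parameter values here are assumptions:

```python
import numpy as np

def log_gabor_2d(size, f0=0.1, sigma_ratio=0.65, theta=0.0, theta_bw=np.pi / 6):
    """Frequency-domain 2D Log-Gabor filter: a Gaussian on the log
    frequency axis (so it has no DC component) times an angular
    Gaussian that selects orientation."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                      # avoid log(0); DC bin is zeroed below
    radial = np.exp(-(np.log(f / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                 # Log-Gabor has no DC response
    angle = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(angle - theta), np.cos(angle - theta))  # wrapped
    angular = np.exp(-(dtheta ** 2) / (2 * theta_bw ** 2))
    return radial * angular

def filter_image(img, filt):
    """Apply the frequency-domain filter; return the response magnitude."""
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * filt))
```

Responses from a bank of such filters (several `f0` and `theta` values) could then be stacked alongside CNN features, in the spirit of the ensemble described above.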
13
Neural Research on Depth Perception and Stereoscopic Visual Fatigue in Virtual Reality. Brain Sci 2022; 12:1231. PMID: 36138967; PMCID: PMC9497221; DOI: 10.3390/brainsci12091231.
Abstract
Virtual reality (VR) technology provides highly immersive depth-perception experiences; nevertheless, stereoscopic visual fatigue (SVF) has become an important factor hindering the development of VR applications. There is scant research on the underlying neural mechanism of SVF, especially as induced by VR displays. In this paper, a Go/NoGo paradigm based on disparity variations is proposed to induce SVF associated with depth perception, and the underlying neural mechanism of SVF in a VR environment was investigated. The effects of disparity variations and of SVF on the temporal characteristics of visual evoked potentials (VEPs) were explored. Point-by-point permutation statistics with repeated-measures ANOVA revealed that the amplitudes and latencies of the posterior VEP component P2 were modulated by disparity, and that posterior P2 amplitudes were modulated differently by SVF in different depth-perception situations. Cortical source localization analysis was performed to explore the cortical areas related to particular fatigue levels and disparities; the results showed that the posterior P2 generated from the precuneus could represent depth perception in binocular vision and could therefore be used to distinguish SVF induced by disparity variations. Our findings help extend the understanding of the neural mechanisms underlying depth perception and SVF and provide useful information for improving the visual experience in VR applications.
14
Intraoperative MRI versus intraoperative ultrasound in pediatric brain tumor surgery: is expensive better than cheap? A review of the literature. Childs Nerv Syst 2022; 38:1445-1454. PMID: 35511271; DOI: 10.1007/s00381-022-05545-0.
Abstract
PURPOSE The extent of brain tumor resection (EOR) is, together with histology, a fundamental prognostic factor in pediatric neuro-oncology. In general, resection aims at gross total resection (GTR). Intraoperative imaging modalities such as intraoperative ultrasound (iOUS) and intraoperative MRI (iMRI) have been developed to detect any tumoral remnant, but at very different costs. The aim of our work is to review the current literature to better understand the differences in cost and efficacy between iMRI and iOUS for evaluating tumor remnants intraoperatively. METHODS We reviewed the existing literature on PubMed until 31st December 2021, including the sequential keywords "intraoperative ultrasound AND pediatric brain tumors", "iUS AND pediatric brain tumors", "intraoperative magnetic resonance AND pediatric brain tumors", and "intraoperative MRI AND pediatric brain tumors". RESULTS A total of 300 papers were screened by title and abstract; 254 were excluded. After selection, 23 articles were used for this systematic review. Among the 929 patients described, 349 (38%) of the cases required an additional resection after an iMRI scan. GTR was measured in 794 patients (data for 69 patients were unavailable) and was achieved in 552 (70%). For iOUS, GTR was estimated in 291 of 379 (77%) cases; this finding was confirmed on postoperative MRI in 256 (68%) cases. CONCLUSIONS The analysis of the available literature demonstrates that expensive equipment does not always mean better: for the majority of pediatric brain tumors, iOUS is comparable to iMRI in estimating the EOR.
15
Chen X, Liao M, Jiang P, Sun H, Liu L, Gong Q. Abnormal effective connectivity in visual cortices underlies stereopsis defects in amblyopia. Neuroimage Clin 2022; 34:103005. PMID: 35421811; PMCID: PMC9011166; DOI: 10.1016/j.nicl.2022.103005.
Abstract
- Abnormal effective connectivity underlying stereopsis defects in amblyopia was studied.
- A weakened connection from V2v to LO2 relates to stereopsis defects in amblyopia.
- Higher-order visual cortices may serve as key nodes for the stereopsis defects.
- An independent longitudinal dataset was used to validate the obtained results.
The neural basis underlying stereopsis defects in patients with amblyopia remains unclear, which hinders the development of clinical therapy. This study aimed to investigate visual network abnormalities in patients with amblyopia and their associations with stereopsis function. Spectral dynamic causal modeling methods were employed for resting-state functional magnetic resonance imaging data to investigate the effective connectivity (EC) among 14 predefined regions of interest in the dorsal and ventral visual pathways. We adopted two independent datasets, including a cross-sectional and a longitudinal dataset. In the cross-sectional dataset, we compared group differences in EC between 31 patients with amblyopia (mean age: 26.39 years old) and 31 healthy controls (mean age: 25.71 years old) and investigated the association between EC and stereoacuity. In addition, we explored EC changes after perceptual learning in a novel longitudinal dataset including 9 patients with amblyopia (mean age: 15.78 years old). We found consistent evidence from the two datasets indicating that the aberrant EC from V2v to LO2 is crucial for the stereoscopic deficits in the patients with amblyopia: it was weaker in the patients than in the controls, showed a positive linear relationship with the stereoscopic function, and increased after perceptual learning in the patients. In addition, higher-level dorsal (V3d, V3A, and V3B) and ventral areas (LO1 and LO2) were important nodes in the network of abnormal ECs associated with stereoscopic deficits in the patients with amblyopia. Our research provides insights into the neural mechanism underlying stereopsis deficits in patients with amblyopia and provides candidate targets for focused stimulus interventions to enhance the efficacy of clinical treatment for the improvement of stereopsis deficiency.
Affiliation(s)
- Xia Chen
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China
- Meng Liao
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Ping Jiang
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China.
| | - Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Imaging Research Core Facilities, West China Hospital of Sichuan University, Chengdu, Sichuan, China
| | - Longqian Liu
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China.
| | - Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China
| |
Collapse
|
16
|
O’Keeffe J, Yap SH, Llamas-Cornejo I, Nityananda V, Read JCA. A computational model of stereoscopic prey capture in praying mantises. PLoS Comput Biol 2022; 18:e1009666. [PMID: 35587948 PMCID: PMC9159633 DOI: 10.1371/journal.pcbi.1009666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/19/2021] [Revised: 06/01/2022] [Accepted: 04/10/2022] [Indexed: 11/25/2022] Open
Abstract
We present a simple model which can account for the stereoscopic sensitivity of praying mantis predatory strikes. The model consists of a single “disparity sensor”: a binocular neuron sensitive to stereoscopic disparity and thus to distance from the animal. The model is based closely on the known behavioural and neurophysiological properties of mantis stereopsis. The monocular inputs to the neuron reflect temporal change and are insensitive to contrast sign, making the sensor insensitive to interocular correlation. The monocular receptive fields have an excitatory centre and an inhibitory surround, making them tuned to size. The disparity sensor combines inputs from the two eyes linearly, then applies a threshold and an exponent output nonlinearity. The activity of the sensor represents the model mantis’s instantaneous probability of striking. We integrate this over the stimulus duration to obtain the expected number of strikes in response to moving targets with different stereoscopic disparity, size and vertical disparity. We optimised the parameters of the model so as to bring its predictions into agreement with our empirical data on mean strike rate as a function of stimulus size and disparity. The model proves capable of reproducing the relatively broad tuning to size and narrow tuning to stereoscopic disparity seen in mantis striking behaviour. Although the model has only a single centre-surround receptive field in each eye, it displays qualitatively the same interaction between size and disparity as we observed in real mantids: the preferred size increases as simulated prey distance increases beyond the preferred distance. We show that this occurs because of a stereoscopic “false match” between the leading edge of the stimulus in one eye and its trailing edge in the other; further work will be required to find whether such false matches occur in real mantises.
Importantly, the model also displays realistic responses to stimuli with vertical disparity and to pairs of identical stimuli offering a “ghost match”, despite not being fitted to these data. This is the first image-computable model of insect stereopsis, and it reproduces key features of both neurophysiology and striking behaviour. The praying mantis is the only insect so far known to compute depth using stereoscopic (3D) vision. Mantis stereopsis appears to be simpler than human stereopsis and most machine stereovision algorithms. A computational model of mantis stereopsis may therefore be beneficial to the field of robotics, particularly where computational power is limited. Using a combination of behavioural observations and neurophysiological data, we propose a very simple model structure to describe the prey capture response in the praying mantis. We used the limited available data on the mantis’ size and distance preferences for its prey to train our model parameters. Our simple model is able to qualitatively reproduce previously unexplained characteristics of our training data, and predicts key observations in additional empirical data that were not included in the model training. Whilst we believe our model to be only a partial and heavily simplified account of mantis stereopsis, our results are supportive of our model structure as an approximation of the size and disparity sensors used by the mantis when catching its prey.
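The model structure described above (centre-surround monocular inputs that are blind to contrast sign, linear binocular combination, threshold, exponent output nonlinearity) can be sketched in a few lines. This is a toy re-implementation under invented parameters, not the authors' fitted model; the rectified-image stand-in for the temporal-change input is an assumption:

```python
import numpy as np

def dog_rf(size=64, sigma_c=3.0, sigma_s=9.0):
    # Difference-of-Gaussians receptive field: excitatory centre,
    # inhibitory surround -> size tuning, as the abstract describes.
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    r2 = x ** 2 + y ** 2
    centre = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return centre - surround

def monocular_response(image, rf):
    # Stand-in for the rectified temporal-change input: insensitive to
    # contrast sign, so we pool absolute image values through the RF.
    return float(np.sum(rf * np.abs(image)))

def strike_drive(left_img, right_img, rf, threshold=0.01, exponent=2.0):
    # Linear binocular combination, threshold, exponent nonlinearity:
    # the output stands for the instantaneous probability of striking.
    v = monocular_response(left_img, rf) + monocular_response(right_img, rf)
    return max(v - threshold, 0.0) ** exponent

def target_image(size=64, x_offset=0, radius=4):
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    return ((x - x_offset) ** 2 + y ** 2 <= radius ** 2).astype(float)

rf = dog_rf()
# A target at zero disparity (aligned in both eyes) drives the sensor;
# a large disparity pushes each eye's image into the inhibitory surround.
near = strike_drive(target_image(x_offset=0), target_image(x_offset=0), rf)
far = strike_drive(target_image(x_offset=-10), target_image(x_offset=10), rf)
```

The single fixed sensor responds strongly only when the two monocular images line up at its preferred disparity, which is the core of the narrow disparity tuning the paper reports.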
Affiliation(s)
- James O’Keeffe
- Dyson School of Design Engineering, Imperial College, London, United Kingdom
- Sin Hui Yap
- Biosciences Institute, Newcastle University, Newcastle, United Kingdom
- School of Medical Education, Newcastle University, Johor, Malaysia
- Vivek Nityananda
- Biosciences Institute, Newcastle University, Newcastle, United Kingdom
- Jenny C. A. Read
- Biosciences Institute, Newcastle University, Newcastle, United Kingdom

17
Sarkar S, Devi P, Vaddavalli PK, Reddy JC, Bharadwaj SR. Differences in Image Quality after Three Laser Keratorefractive Procedures for Myopia. Optom Vis Sci 2022; 99:137-149. [PMID: 34974458 DOI: 10.1097/opx.0000000000001850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/26/2022] Open
Abstract
SIGNIFICANCE Psychophysical estimates of spatial and depth vision have been shown to be better after bilateral ReLEx small-incision lenticule extraction (SMILE) refractive surgery for myopia, relative to photorefractive keratectomy (PRK) and femtosecond laser-assisted in situ keratomileusis (FS-LASIK). The present study provides the optical basis for these findings using computational image quality analysis. PURPOSE This study aimed to compare longitudinal changes in higher-order wavefront aberrations and image quality before and after bilateral PRK, FS-LASIK, and SMILE refractive procedures for correcting myopia. METHODS Wavefront aberrations and image quality of both the eyes of 106 subjects (n = 40 for FS-LASIK and SMILE and n = 26 for PRK) were determined pre-operatively and at 1-week, 1-month, 3-month, and 6-month post-operative intervals using computational through-focus analysis for a 6-mm pupil diameter. Image quality was quantified in terms of its peak value and its interocular difference, residual defocus that was needed to achieve peak image quality (best focus), and the depth of focus. RESULTS The increase in root mean squared deviations of higher-order aberrations post-operatively was smaller after SMILE (1-month visit median [25th to 75th interquartile range], 0.34 μm [0.28 to 0.39 μm]) than after PRK (0.80 μm [0.74 to 0.87 μm]) and FS-LASIK (0.74 μm [0.59 to 0.83 μm]; P ≤ .001), all relative to pre-operative values (0.20 μm [0.15 to 0.30 μm]). The peak image quality dropped and its interocular difference increased, best focus shifted myopically by 0.5 to 0.75 D, and depth of focus widened significantly after PRK and FS-LASIK surgeries, all relative to pre-operative values (P < .001). After SMILE surgery, these changes were negligible, reaching statistical significance only in a minority of instances (P ≥ .01).
CONCLUSIONS Although all three refractive surgeries correct myopia, the image quality and its similarity between eyes are better and closer to pre-operative values after SMILE, compared with FS-LASIK and PRK. These results can be explained by the underlying increase in higher-order wavefront aberrations experienced by the eye post-operatively.
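The summary metric compared above, root-mean-squared higher-order aberration, is simply the root-sum-of-squares of the orthonormal Zernike coefficients. A minimal sketch (the coefficient values are invented):

```python
import numpy as np

def hoa_rms(coeffs_um):
    # RMS wavefront error (microns) from OSA-normalized Zernike
    # coefficients of the higher-order terms (3rd radial order and up).
    # Orthonormality makes total RMS the root-sum-of-squares.
    return float(np.sqrt(np.sum(np.square(coeffs_um))))

# e.g. invented coma and spherical-aberration coefficients, in microns
rms = hoa_rms([0.3, 0.4])  # -> 0.5
```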
Affiliation(s)
- Jagadesh C Reddy
- The Cornea Institute, L V Prasad Eye Institute, Hyderabad, Telangana, India

18
Ayzenberg V, Kamps FS, Dilks DD, Lourenco SF. Skeletal representations of shape in the human visual cortex. Neuropsychologia 2022; 164:108092. [PMID: 34801519 PMCID: PMC9840386 DOI: 10.1016/j.neuropsychologia.2021.108092] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Received: 05/22/2021] [Revised: 11/07/2021] [Accepted: 11/17/2021] [Indexed: 01/17/2023]
Abstract
Shape perception is crucial for object recognition. However, it remains unknown exactly how shape information is represented and used by the visual system. Here, we tested the hypothesis that the visual system represents object shape via a skeletal structure. Using functional magnetic resonance imaging (fMRI) and representational similarity analysis (RSA), we found that a model of skeletal similarity explained significant unique variance in the response profiles of V3 and LO. Moreover, the skeletal model remained predictive in these regions even when controlling for other models of visual similarity that approximate low- to high-level visual features (i.e., Gabor-jet, GIST, HMAX, and AlexNet), and across different surface forms, a manipulation that altered object contours while preserving the underlying skeleton. Together, these findings shed light on shape processing in human vision, as well as the computational properties of V3 and LO. We discuss how these regions may support two putative roles of shape skeletons: namely, perceptual organization and object recognition.
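Representational similarity analysis, as used above, boils down to correlating dissimilarity matrices. A minimal sketch with simulated patterns; the "model features" here are random stand-ins, not actual skeletal descriptors:

```python
import numpy as np
from scipy.stats import spearmanr

def rdm(patterns):
    # representational dissimilarity matrix:
    # 1 - Pearson correlation between each pair of condition patterns
    return 1.0 - np.corrcoef(patterns)

def rsa_score(neural_rdm, model_rdm):
    # compare two RDMs via Spearman correlation of their upper triangles
    iu = np.triu_indices_from(neural_rdm, k=1)
    return float(spearmanr(neural_rdm[iu], model_rdm[iu])[0])

rng = np.random.default_rng(0)
voxels = rng.normal(size=(10, 200))  # 10 objects x 200 voxels (simulated)
# a noisy stand-in for model-derived (e.g. skeletal) feature patterns
model_features = voxels + 0.3 * rng.normal(size=(10, 200))
score = rsa_score(rdm(voxels), rdm(model_features))
```

A model RDM identical to the neural RDM scores exactly 1.0; noisier model features give intermediate scores. The paper's unique-variance analysis additionally partials out competing models (Gabor-jet, GIST, HMAX, AlexNet), which this sketch omits.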
Affiliation(s)
- Vladislav Ayzenberg
- Department of Psychology, Carnegie Mellon University, USA. Corresponding author: (V. Ayzenberg)
- Frederik S. Kamps
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
- Stella F. Lourenco
- Department of Psychology, Emory University, USA. Corresponding author: (S.F. Lourenco)

19
Ultra-High-Field Neuroimaging Reveals Fine-Scale Processing for 3D Perception. J Neurosci 2021; 41:8362-8374. [PMID: 34413206 PMCID: PMC8496197 DOI: 10.1523/jneurosci.0065-21.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 12/22/2020] [Revised: 06/08/2021] [Accepted: 07/07/2021] [Indexed: 11/21/2022] Open
Abstract
Binocular disparity provides critical information about three-dimensional (3D) structures to support perception and action. In the past decade, significant progress has been made in uncovering human brain areas engaged in the processing of binocular disparity signals. Yet, the fine-scale brain processing underlying 3D perception remains unknown. Here, we use ultra-high-field (7T) functional imaging at submillimeter resolution to examine fine-scale BOLD fMRI signals involved in 3D perception. In particular, we sought to interrogate the local circuitry involved in disparity processing by sampling fMRI responses at different positions relative to the cortical surface (i.e., across cortical depths corresponding to layers). We tested for representations related to 3D perception by presenting participants (male and female, N = 8) with stimuli that enable stable stereoscopic perception [i.e., correlated random dot stereograms (RDS)] versus those that do not (i.e., anticorrelated RDS). Using multivoxel pattern analysis (MVPA), we demonstrate cortical depth-specific representations in areas V3A and V7 as indicated by stronger pattern responses for correlated than for anticorrelated stimuli in upper rather than deeper layers. Examining informational connectivity, we find higher feedforward layer-to-layer connectivity for correlated than anticorrelated stimuli between V3A and V7. Further, we observe disparity-specific feedback from V3A to V1 and from V7 to V3A. Our findings provide evidence for the role of V3A as a key nexus for disparity processing, which is implicated in feedforward and feedback signals related to the perceptual estimation of 3D structures. SIGNIFICANCE STATEMENT Binocular vision plays a significant role in supporting our interactions with the surrounding environment. The fine-scale neural mechanisms that underlie the brain's skill in extracting 3D structures from binocular signals are poorly understood.
Here, we capitalize on recent advances in ultra-high-field functional imaging to interrogate human brain circuits involved in 3D perception at submillimeter resolution. We provide evidence for the role of area V3A as a key nexus for disparity processing, which is implicated in feedforward and feedback signals related to the perceptual estimation of 3D structures from binocular signals. These fine-scale measurements help bridge the gap between animal neurophysiology and human fMRI studies investigating cross-scale circuits, from micro circuits to global brain networks for 3D perception.
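Multivoxel pattern analysis of the kind described above can be sketched with a simple cross-validated decoder. The nearest-centroid classifier and all simulated numbers below are illustrative stand-ins, not the authors' pipeline:

```python
import numpy as np

def decode_cv(X, y, n_folds=5):
    # Nearest-centroid decoding with interleaved cross-validation:
    # above-chance accuracy indicates multivoxel pattern information.
    accs = []
    idx = np.arange(len(y))
    for f in range(n_folds):
        test = idx % n_folds == f
        c0 = X[~test & (y == 0)].mean(axis=0)
        c1 = X[~test & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1) <
                np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 60
# Simulated patterns: correlated RDS add a weak voxel-wise signal
# that anticorrelated RDS lack (all numbers are invented).
signal = 0.5 * rng.normal(size=n_voxels)
X = np.vstack([rng.normal(size=(n_trials, n_voxels)) + signal,
               rng.normal(size=(n_trials, n_voxels))])
y = np.array([1] * n_trials + [0] * n_trials)
acc = decode_cv(X, y)
```

In the study, the same logic is applied separately at each cortical depth; the layer with stronger correlated-vs-anticorrelated pattern differences is where decoding accuracy rises.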
20
Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. [PMID: 34624515 PMCID: PMC8983798 DOI: 10.1016/j.preteyeres.2021.101014] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 04/30/2021] [Revised: 09/26/2021] [Accepted: 09/29/2021] [Indexed: 10/20/2022]
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy
- School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN, 47405, USA.
- Lawrence K Cormack
- Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, 78712, USA.

21
Linton P. V1 as an egocentric cognitive map. Neurosci Conscious 2021; 2021:niab017. [PMID: 34532068 PMCID: PMC8439394 DOI: 10.1093/nc/niab017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 01/13/2021] [Revised: 05/21/2021] [Accepted: 06/08/2021] [Indexed: 01/20/2023] Open
Abstract
We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlates of the visual perception/cognition distinction and suggest how this can be accommodated by V1's laminar structure. Second, we use this insight to propose a low-level account of visual consciousness in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have been traditionally used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
Affiliation(s)
- Paul Linton
- Centre for Applied Vision Research, City, University of London, Northampton Square, London EC1V 0HB, UK

22
Duan Y, Thatte J, Yaklovleva A, Norcia AM. Disparity in Context: Understanding how monocular image content interacts with disparity processing in human visual cortex. Neuroimage 2021; 237:118139. [PMID: 33964460 PMCID: PMC10786599 DOI: 10.1016/j.neuroimage.2021.118139] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/18/2020] [Revised: 04/16/2021] [Accepted: 04/19/2021] [Indexed: 11/24/2022] Open
Abstract
Horizontal disparities between the two eyes' retinal images are the primary cue for depth. Commonly used random dot stereograms (RDS) intentionally camouflage the disparity cue, breaking the correlations between monocular image structure and the depth map that are present in natural images. Because of the nonlinear nature of visual processing, it is unlikely that simple computational rules derived from RDS will be sufficient to explain binocular vision in natural environments. In order to understand the interplay between natural scene structure and disparity encoding, we used a depth-image-based rendering technique and a library of natural 3D stereo pairs to synthesize two novel stereogram types in which monocular scene content was manipulated independent of scene depth information. The half-images of the novel stereograms comprised either random dots or scrambled natural scenes, each with the same depth maps as the corresponding natural scene stereograms. Using these stereograms in a simultaneous Event-Related Potential and behavioral discrimination task, we identified multiple disparity-contingent encoding stages between ~100 and 500 msec. The first disparity-sensitive evoked potential was observed at ~100 msec, after an earlier evoked potential (between ~50-100 msec) that was sensitive to the structure of the monocular half-images but blind to disparity. Starting at ~150 msec, disparity responses were stereogram-specific and predictive of perceptual depth. Complex features associated with natural scene content are thus at least partially coded prior to disparity information, but these features and possibly others associated with natural scene content interact with disparity information only after an intermediate, 2D scene-independent disparity processing stage.
Affiliation(s)
- Yiran Duan
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305
- Jayant Thatte
- Department of Electrical Engineering, David Packard Building, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305
- Anthony M Norcia
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305.

23
Abstract
Most animals have at least some binocular overlap, i.e., a region of space that is viewed by both eyes. This reduces the overall visual field and raises the problem of combining two views of the world, seen from different vantage points, into a coherent whole. However, binocular vision also offers many potential advantages, including increased ability to see around obstacles and increased contrast sensitivity. One particularly interesting use for binocular vision is comparing information from both eyes to derive information about depth. There are many different ways in which this might be done, but in this review, I refer to them all under the general heading of stereopsis. This review examines the different possible uses of binocular vision and stereopsis and compares what is currently known about the neural basis of stereopsis in different taxa. Studying different animals helps us break free of preconceptions stemming from the way that stereopsis operates in human vision and provides new insights into the different possible forms of stereopsis. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s)
- Jenny C A Read
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom

24
Goutcher R, Barrington C, Hibbard PB, Graham B. Binocular vision supports the development of scene segmentation capabilities: Evidence from a deep learning model. J Vis 2021; 21:13. [PMID: 34289490 PMCID: PMC8300045 DOI: 10.1167/jov.21.7.13] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 03/17/2021] [Accepted: 06/16/2021] [Indexed: 11/24/2022] Open
Abstract
The application of deep learning techniques has led to substantial progress in solving a number of critical problems in machine vision, including fundamental problems of scene segmentation and depth estimation. Here, we report a novel deep neural network model, capable of simultaneous scene segmentation and depth estimation from a pair of binocular images. By manipulating the arrangement of binocular image pairs, presenting the model with standard left-right image pairs, identical image pairs or swapped left-right images, we show that performance levels depend on the presence of appropriate binocular image arrangements. Segmentation and depth estimation performance are both impaired when images are swapped. Segmentation performance levels are maintained, however, for identical image pairs, despite the absence of binocular disparity information. Critically, these performance levels exceed those found for an equivalent, monocularly trained, segmentation model. These results provide evidence that binocular image differences support both the direct recovery of depth and segmentation information, and the enhanced learning of monocular segmentation signals. This finding suggests that binocular vision may play an important role in visual development. Better understanding of this role may hold implications for the study and treatment of developmentally acquired perceptual impairments.
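The paper's premise, that left-right image differences carry segmentation-relevant depth information, can be illustrated without a deep network by classical block matching on a synthetic stereo pair. Everything below (images, disparity, thresholds) is invented for illustration and is not the authors' model:

```python
import numpy as np

def block_match_disparity(left, right, max_d=8, patch=5):
    # Brute-force block matching: for each pixel, find the horizontal
    # shift of the right image that best matches the left-image patch.
    h, w = left.shape
    pad = patch // 2
    L = np.pad(left, pad)
    R = np.pad(right, pad)
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = np.inf, 0
            for d in range(min(max_d, x) + 1):
                cost = np.abs(L[y:y + patch, x:x + patch] -
                              R[y:y + patch, x - d:x - d + patch]).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

rng = np.random.default_rng(0)
background = rng.random((32, 32))
square = rng.random((10, 10))
left = background.copy();  left[10:20, 12:22] = square   # near object
right = background.copy(); right[10:20, 8:18] = square   # shifted: disparity 4
disp = block_match_disparity(left, right)
segmentation = disp > 2  # "near" pixels segment out from the background
```

The textured square pops out of the depth map at disparity 4 while the background stays at 0; a network given swapped left-right inputs loses exactly this correspondence signal, consistent with the impairment the paper reports.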
Affiliation(s)
- Ross Goutcher
- Psychology Division, Faculty of Natural Sciences, University of Stirling, Stirling, UK
- Christian Barrington
- Psychology Division, Faculty of Natural Sciences, University of Stirling, Stirling, UK
- Computing Science and Mathematics Division, Faculty of Natural Sciences, University of Stirling, Stirling, UK
- Paul B Hibbard
- Department of Psychology, University of Essex, Colchester, UK
- Bruce Graham
- Computing Science and Mathematics Division, Faculty of Natural Sciences, University of Stirling, Stirling, UK

25
Snow JC, Culham JC. The Treachery of Images: How Realism Influences Brain and Behavior. Trends Cogn Sci 2021; 25:506-519. [PMID: 33775583 PMCID: PMC10149139 DOI: 10.1016/j.tics.2021.02.008] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/22/2021] [Indexed: 10/21/2022]
Abstract
Although the cognitive sciences aim to ultimately understand behavior and brain function in the real world, for historical and practical reasons, the field has relied heavily on artificial stimuli, typically pictures. We review a growing body of evidence that both behavior and brain function differ between image proxies and real, tangible objects. We also propose a new framework for immersive neuroscience to combine two approaches: (i) the traditional build-up approach of gradually combining simplified stimuli, tasks, and processes; and (ii) a newer tear-down approach that begins with reality and compelling simulations such as virtual reality to determine which elements critically affect behavior and brain processing.
Affiliation(s)
- Jacqueline C Snow
- Department of Psychology, University of Nevada Reno, Reno, NV 89557, USA
- Jody C Culham
- Department of Psychology, University of Western Ontario, London, Ontario, N6A 5C2, Canada; Brain and Mind Institute, Western Interdisciplinary Research Building, University of Western Ontario, London, Ontario, N6A 3K7, Canada.

26
Kantor P, Matonti F, Varenne F, Sentis V, Pagot-Mathis V, Fournié P, Soler V. Use of the heads-up NGENUITY 3D Visualization System for vitreoretinal surgery: a retrospective evaluation of outcomes in a French tertiary center. Sci Rep 2021; 11:10031. [PMID: 33976247 PMCID: PMC8113355 DOI: 10.1038/s41598-021-88993-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 01/19/2021] [Accepted: 04/16/2021] [Indexed: 12/02/2022] Open
Abstract
Heads-up three-dimensional (3D) surgical visualization systems allow ophthalmic surgeons to replace surgical microscope eyepieces with high-resolution stereoscopic cameras transmitting an image to a screen. We investigated the effectiveness and safety of the heads-up NGENUITY 3D Visualization System in a retrospective evaluation of 241 consecutive vitreoretinal surgeries performed by the same surgeon using conventional microscopy (CM group) over a 1-year period versus the NGENUITY System (3D group) over a consecutive 1-year period. We included for study vitreoretinal surgeries for treatment of retinal detachment (RD) (98 surgeries), macular hole (MH) (48 surgeries), or epiretinal membrane (ERM) (95 surgeries). A total of 138 and 103 eyes were divided into 3D and CM groups, respectively. We found no differences in 3-month postoperative rates of recurrence of RD (10% versus 18%, p = 0.42), MH closure (82% versus 88%, p = 0.69), or decrease in central macular thickness of ERMs (134 ± 188 µm versus 115 ± 105 µm, p = 0.57) between the 3D and CM groups, respectively. Surgery durations and visual prognosis were also similar between both groups. Our findings consolidate evidence that the NGENUITY System is comparable to conventional microscopy in terms of visual and anatomical outcomes, opening perspectives for its integration into future robotized interventions.
Affiliation(s)
- Pierre Kantor
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France
- Frédéric Matonti
- Centre Monticelli Paradis, 433 bis rue Paradis, 13008, Marseille, France; CNRS, Timone Neuroscience Institute, Aix-Marseille University, Marseille, France
- Fanny Varenne
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France
- Vanessa Sentis
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France
- Véronique Pagot-Mathis
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France
- Pierre Fournié
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France; University of Toulouse III, Toulouse, France
- Vincent Soler
- Retina Unit, Ophthalmology Department, Pierre-Paul Riquet Hospital, Toulouse University Hospital (CHU Toulouse), Place Baylac, 31059, Toulouse Cedex, France; University of Toulouse III, Toulouse, France.

27
Greene CM, Broughan J, Hanlon A, Keane S, Hanrahan S, Kerr S, Rooney B. Visual Search in 3D: Effects of Monoscopic and Stereoscopic Cues to Depth on the Validity of Feature Integration Theory and Perceptual Load Theory. Front Psychol 2021; 12:596511. [PMID: 33815197 PMCID: PMC8009999 DOI: 10.3389/fpsyg.2021.596511] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 08/19/2020] [Accepted: 02/22/2021] [Indexed: 11/21/2022] Open
Abstract
Previous research has successfully used feature integration theory to operationalise the predictions of Perceptual Load Theory, while simultaneously testing the predictions of both models. Building on this work, we test the extent to which these models hold up in a 3D world. In two experiments, participants responded to a target stimulus within an array of shapes whose apparent depth was manipulated using a combination of monoscopic and stereoscopic cues. The search task was designed to test the predictions of (a) feature integration theory, as the target was identified by a single feature or a conjunction of features and embedded in search arrays of varying size, and (b) perceptual load theory, as the task included congruent and incongruent distractors presented alongside search tasks imposing high or low perceptual load. Findings from both experiments upheld the predictions of feature integration theory, regardless of 2D/3D condition. Longer search times in conditions with a combination of monoscopic and stereoscopic depth cues suggests that binding features into three-dimensional objects requires greater attentional effort. This additional effort should have implications for perceptual load theory, yet our findings did not uphold its predictions; the effect of incongruent distractors did not differ between conjunction search trials (conceptualised as high perceptual load) and feature search trials (low perceptual load). Individual differences in susceptibility to the effects of perceptual load were evident and likely explain the absence of load effects. Overall, our findings suggest that feature integration theory may be useful for predicting attentional performance in a 3D world.
Affiliation(s)
- Ciara M Greene
- School of Psychology, University College Dublin, Dublin, Ireland
- John Broughan
- School of Psychology, University College Dublin, Dublin, Ireland
- Anthony Hanlon
- School of Psychology, University College Dublin, Dublin, Ireland
- Seán Keane
- School of Psychology, University College Dublin, Dublin, Ireland
- Sophia Hanrahan
- School of Psychology, University College Dublin, Dublin, Ireland
- Stephen Kerr
- School of Psychology, University College Dublin, Dublin, Ireland
- Brendan Rooney
- School of Psychology, University College Dublin, Dublin, Ireland

28
Yoshioka TW, Doi T, Abdolrahmani M, Fujita I. Specialized contributions of mid-tier stages of dorsal and ventral pathways to stereoscopic processing in macaque. eLife 2021; 10:58749. [PMID: 33625356 PMCID: PMC7959693 DOI: 10.7554/elife.58749] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 05/09/2020] [Accepted: 02/18/2021] [Indexed: 11/22/2022] Open
Abstract
The division of labor between the dorsal and ventral visual pathways has been well studied, but not often with direct comparison at the single-neuron resolution with matched stimuli. Here we directly compared how single neurons in MT and V4, mid-tier areas of the two pathways, process binocular disparity, a powerful cue for 3D perception and actions. We found that MT neurons transmitted disparity signals more quickly and robustly, whereas V4 or its upstream neurons transformed the signals into sophisticated representations more prominently. Therefore, signaling speed and robustness were traded for transformation between the dorsal and ventral pathways. The key factor in this tradeoff was disparity-tuning shape: V4 neurons had more even-symmetric tuning than MT neurons. Moreover, the tuning symmetry predicted the degree of signal transformation across neurons similarly within each area, implying a general role of tuning symmetry in the stereoscopic processing by the two pathways.
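The even- versus odd-symmetric tuning distinction central to this abstract can be made concrete with a descriptive Gabor tuning curve and a mirror-correlation symmetry index. The parameters below are illustrative, not fitted data from the study:

```python
import numpy as np

def gabor_tuning(d, amp, d0, sigma, freq, phase, base):
    # standard descriptive (Gabor) model of a disparity tuning curve
    env = np.exp(-(d - d0) ** 2 / (2 * sigma ** 2))
    return base + amp * env * np.cos(2 * np.pi * freq * (d - d0) + phase)

def symmetry_index(y):
    # correlate the mean-centred curve with its mirror image:
    # ~+1 for even-symmetric (tuned) cells, ~-1 for odd-symmetric cells
    yc = y - y.mean()
    return float(np.corrcoef(yc, yc[::-1])[0, 1])

d = np.linspace(-1.0, 1.0, 81)  # disparity axis (deg), symmetric about 0
even_cell = gabor_tuning(d, 1.0, 0.0, 0.3, 1.0, 0.0, 0.2)       # "V4-like"
odd_cell = gabor_tuning(d, 1.0, 0.0, 0.3, 1.0, np.pi / 2, 0.2)  # "MT-like"
```

A phase of 0 gives an even-symmetric curve (index near +1, the shape the paper associates more with V4), while a phase of pi/2 gives an odd-symmetric curve (index near -1).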
Affiliation(s)
- Toshihide W Yoshioka
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Osaka University and National Institute of Information and Communications Technology, Suita, Osaka, Japan
| | - Takahiro Doi
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
| | - Mohammad Abdolrahmani
- Laboratory for Neural Circuits and Behavior, RIKEN Center for Brain Science (CBS), Wako, Japan
| | - Ichiro Fujita
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Osaka University and National Institute of Information and Communications Technology, Suita, Osaka, Japan
| |
|
29
|
Disparity Sensitivity and Binocular Integration in Mouse Visual Cortex Areas. J Neurosci 2020; 40:8883-8899. [PMID: 33051348 DOI: 10.1523/jneurosci.1060-20.2020] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Revised: 09/18/2020] [Accepted: 09/22/2020] [Indexed: 01/02/2023] Open
Abstract
Binocular disparity, the difference between the two eyes' images, is a powerful cue to generate the 3D depth percept known as stereopsis. In primates, binocular disparity is processed in multiple areas of the visual cortex, with distinct contributions of higher areas to specific aspects of depth perception. Mice, too, can perceive stereoscopic depth, and neurons in primary visual cortex (V1) and the higher-order lateromedial (LM) and rostrolateral (RL) areas were found to be sensitive to binocular disparity. A detailed characterization of disparity tuning across mouse visual areas is lacking, however, and acquiring such data might help clarify the role of higher areas in disparity processing and establish putative functional correspondences to primate areas. We used two-photon calcium imaging in female mice to characterize the disparity tuning properties of neurons in visual areas V1, LM, and RL in response to dichoptically presented binocular gratings, as well as random dot correlograms (RDC). In all three areas, many neurons were tuned to disparity, showing strong response facilitation at the optimal disparity and suppression at the null disparity, even in neurons classified as monocular by conventional ocular dominance (OD) measurements. Neurons in higher areas exhibited broader and more asymmetric disparity tuning curves than those in V1, as observed in primate visual cortex. Finally, we probed neurons' sensitivity to true stereo correspondence by comparing responses to correlated RDC (cRDC) and anticorrelated RDC (aRDC). Area LM, akin to primate ventral visual stream areas, showed higher selectivity for correlated stimuli and reduced anticorrelated responses, indicating higher-level disparity processing in LM compared with V1 and RL. SIGNIFICANCE STATEMENT A major cue for inferring 3D depth is disparity between the two eyes' images.
Investigating how binocular disparity is processed in the mouse visual system will not only help delineate the role of mouse higher visual areas but also shed light on how the mammalian brain computes stereopsis. We found that binocular integration is a prominent feature of mouse visual cortex, as many neurons are selectively and strongly modulated by binocular disparity. Comparison of responses to correlated and anticorrelated random dot correlograms (RDC) revealed that the lateromedial area (LM) is more selective for correlated stimuli and less sensitive to anticorrelated stimuli than primary visual cortex (V1) and the rostrolateral area (RL), suggesting higher-level disparity processing in LM that resembles primate ventral visual stream areas.
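Correlated and anticorrelated random-dot pairs of the kind used in such experiments are straightforward to construct. A minimal sketch, with arbitrary image size and disparity (not the study's stimulus parameters):

```python
import numpy as np

rng = np.random.default_rng(3)

def random_dot_pair(n=32, disparity=2, anticorrelated=False):
    """Left/right random-dot images carrying a horizontal disparity.
    In the anticorrelated version every dot reverses contrast between
    the eyes, preserving the geometric shift while breaking the
    contrast match that true stereo correspondence relies on."""
    left = rng.choice([-1.0, 1.0], size=(n, n))   # bright/dark dots
    right = np.roll(left, disparity, axis=1)      # column shift = disparity
    if anticorrelated:
        right = -right
    return left, right

cL, cR = random_dot_pair()                        # cRDC
aL, aR = random_dot_pair(anticorrelated=True)     # aRDC
```

A cross-correlator that shifts the right image back by the true disparity finds a perfect match for the cRDC and a perfectly inverted match for the aRDC, which is why comparing responses to the two stimulus classes diagnoses correspondence-level disparity processing.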
|
30
|
Mamassian P, Zannoli M. Sensory loss due to object formation. Vision Res 2020; 174:22-40. [DOI: 10.1016/j.visres.2020.05.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Revised: 05/14/2020] [Accepted: 05/20/2020] [Indexed: 11/29/2022]
|
31
|
Abstract
An ideal observer is a theoretical model observer that performs a specific sensory-perceptual task optimally, making the best possible use of the available information given physical and biological constraints. An image-computable ideal observer (pixels in, estimates out) is a particularly powerful type of ideal observer that explicitly models the flow of visual information from the stimulus-encoding process to the eventual decoding of a sensory-perceptual estimate. Image-computable ideal observer analyses underlie some of the most important results in vision science. However, most of what we know from ideal observers about visual processing and performance derives from relatively simple tasks and relatively simple stimuli. This review describes recent efforts to develop image-computable ideal observers for a range of tasks with natural stimuli and shows how these observers can be used to predict and understand perceptual and neurophysiological performance. The reviewed results establish principled links among models of neural coding, computational methods for dimensionality reduction, and sensory-perceptual performance in tasks with natural stimuli.
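The "pixels in, estimates out" idea can be illustrated with a minimal sketch, assuming a two-alternative identification task with known templates and i.i.d. Gaussian pixel noise. The templates, noise level, and trial count below are invented for illustration, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two known 8x8 "stimuli"; the task: which one generated the noisy image?
tA = np.zeros((8, 8)); tA[:, :4] = 1.0      # left half bright
tB = np.zeros((8, 8)); tB[:4, :] = 1.0      # top half bright
sigma = 2.0                                 # pixel noise std (made up)

def ideal_observer(image, templates):
    """With equal priors and i.i.d. Gaussian noise, the posterior is
    maximized by the template nearest the image in squared error."""
    return int(np.argmin([np.sum((image - t) ** 2) for t in templates]))

trials, correct = 2000, 0
for _ in range(trials):
    truth = int(rng.integers(2))
    img = (tA, tB)[truth] + sigma * rng.standard_normal((8, 8))
    correct += ideal_observer(img, (tA, tB)) == truth
pc = correct / trials                       # proportion correct
```

Here the templates differ in 32 pixels, so d' = sqrt(32)/sigma and the ideal proportion correct is Phi(d'/2), roughly 0.92 for these made-up numbers. That closed-form benchmark, computed from the stimuli themselves, is exactly what ideal-observer analysis contributes: an upper bound against which human or neural performance can be compared.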
Affiliation(s)
- Johannes Burge
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA; Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA; Bioengineering Graduate Group, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
| |
|
32
|
Effects of Mild Traumatic Brain Injury on Stereopsis Detected by a Virtual Reality System: Attempt to Develop a Screening Test. J Med Biol Eng 2020. [DOI: 10.1007/s40846-020-00542-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
Purpose
The study aimed to evaluate stereopsis as a surrogate marker for post-concussion oculomotor function to develop an objective test that can reliably and quickly detect mild traumatic brain injuries (TBI).
Methods
The cohort of this prospective clinical study included 30 healthy subjects (mean age 25 ± 2 years) and 30 TBI patients (43 ± 22 years) comprising 11 patients with moderate TBI and 19 patients with mild TBI. The healthy subjects were examined once, whereas the TBI patients were examined immediately after hospitalization, at 1 week, and at 2 months. A virtual reality (VR) program displayed a three-dimensional rendering of four rotating soccer balls over VR glasses in different gaze directions. The subjects were instructed to select the ball that appeared to be raised from the screen as quickly as possible via remote control. The response times and fusion abilities in different gaze directions were recorded.
Results
The correlation between stereopsis and TBI severity was significant. The response times of the moderate and mild TBI groups were significantly longer than those of the healthy reference group. The response times of the moderate TBI group were significantly longer than those of the mild TBI group. The response times at follow-up examinations were significantly shorter than those immediately after hospitalization. Fusion ability was primarily defective in the gaze direction to the right (90°) and left (270° and 315°).
Conclusions
TBI patients showed impaired stereopsis. Measuring stereopsis in different positions of the visual field using VR can be effective for rapid concussion assessment.
|
33
|
Atchison DA, Lee J, Lu J, Webber AL, Hess RF, Baldwin AS, Schmid KL. Effects of simulated anisometropia and aniseikonia on stereopsis. Ophthalmic Physiol Opt 2020; 40:323-332. [PMID: 32128857 DOI: 10.1111/opo.12680] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 02/06/2020] [Indexed: 11/30/2022]
Abstract
PURPOSE Stereopsis depends on horizontally disparate but otherwise concordant retinal images in the two eyes. Here we investigate the effect of spherical and meridional simulated anisometropia and aniseikonia on stereopsis thresholds. The aims were to determine the effects of meridian and magnitude, and the relative effects of the two conditions. METHODS Ten participants with normal binocular vision viewed McGill modified random dot stereograms through synchronised shutter glasses. Stereoacuities were determined using a four-alternative forced-choice procedure. To induce anisometropia, trial lenses of varying power and axis were placed in front of the right eye. Seventeen combinations were used: zero (no lens) and both positive and negative 1 and 2 D powers, at axes 45, 90 and 180; spherical lenses were also tested. To induce aniseikonia, 17 magnification power and axis combinations were used: zero (no lens), and 3%, 6%, 9% and 12% at axes 45, 90 and 180; overall magnifications were also tested. RESULTS For induced anisometropia, stereopsis loss increased as the cylindrical axis rotated from 180° to 90°, at which point the loss was similar to that for spherical blur. For example, for 2 D meridional anisometropia the threshold increased from 1.53 log sec arc (i.e. 34 sec arc) at axis 180 to 1.89 log sec arc (78 sec arc) at axis 90. Anisometropia induced with either positive or negative lenses had similar detrimental effects on stereopsis. Unlike anisometropia, the stereopsis loss with induced meridional aniseikonia was not affected by axis and was about 64% of that for overall aniseikonia of the same amount. Approximately, each 1 D of induced anisometropia had the same effect on threshold as each 6% of induced aniseikonia. CONCLUSION The axis of meridional anisometropia, but not of aniseikonia, affected stereopsis. This suggests differences in the way that monocular blur (anisometropia) and interocular shape differences (aniseikonia) are processed during the production of stereopsis.
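The units and equivalences quoted in the abstract can be checked with a few lines, using only the numbers stated there (log sec arc thresholds and the approximate 1 D ≈ 6% rule of thumb):

```python
def log_to_arcsec(log_threshold):
    """Convert a stereothreshold in log sec arc back to sec arc."""
    return 10 ** log_threshold

def equivalent_aniseikonia_pct(anisometropia_d):
    """Rule of thumb from the study: each 1 D of induced anisometropia
    degraded stereopsis about as much as 6% of induced aniseikonia."""
    return 6.0 * anisometropia_d

# Reproduce the quoted thresholds for 2 D meridional anisometropia:
thresh_x180 = log_to_arcsec(1.53)   # ~34 sec arc at axis 180
thresh_x90 = log_to_arcsec(1.89)    # ~78 sec arc at axis 90
```

So the axis-180 to axis-90 rotation roughly doubles the threshold, and a 2 D anisometropia is, by the study's equivalence, comparable in effect to about 12% aniseikonia.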
Affiliation(s)
- David A Atchison
- School of Optometry & Vision Sciences and Institute of Health & Biomedical Innovation, Queensland University of Technology, Queensland, Australia
| | - Jeongmin Lee
- School of Optometry & Vision Sciences and Institute of Health & Biomedical Innovation, Queensland University of Technology, Queensland, Australia
| | - Jianing Lu
- School of Optometry & Vision Sciences and Institute of Health & Biomedical Innovation, Queensland University of Technology, Queensland, Australia
| | - Ann L Webber
- School of Optometry & Vision Sciences and Institute of Health & Biomedical Innovation, Queensland University of Technology, Queensland, Australia
| | - Robert F Hess
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Canada
| | - Alex S Baldwin
- McGill Vision Research Unit, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Canada
| | - Katrina L Schmid
- School of Optometry & Vision Sciences and Institute of Health & Biomedical Innovation, Queensland University of Technology, Queensland, Australia
| |
|
34
|
Thomaschewski M, Jürgens T, Benecke C, Griesmann AC, Esnaashari H, Lux R, Scheppan D, Simon R, Keck T, Laubert T. Testing Distinct Three-Dimensional Effects in Laparoscopy: A Prospective Randomized Trial Using the Lübecker Toolbox Curriculum. Visc Med 2020; 36:113-123. [PMID: 32355668 DOI: 10.1159/000506059] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 01/16/2020] [Indexed: 12/14/2022] Open
Abstract
Background The use of stereoscopic laparoscopic systems in minimally invasive surgery (MIS) allows a three-dimensional (3D) view of the surgical field, which improves hand-eye coordination. Depending on the stereo base used in the construction of the endoscopes, 3D systems may differ in their 3D effect. Our aim was to investigate the influence of different stereo bases on the 3D effect. Methods This was a prospective randomized study involving 42 MIS-inexperienced participants. We evaluated two laparoscopic 3D systems with stereo bases of 2.5 mm (system A) and 3.8 mm (system B) for differences in learning MIS skills using the Lübeck Toolbox (LTB) video box trainer. We assessed participants' performance in terms of the times and repetitions required to reach each exercise's goal. After completing the final exercise ("suturing"), participants performed the exercise again using a two-dimensional (2D) representation. Additionally, we retrospectively compared our results with a preliminary study in which participants completed the LTB curriculum with a 2D system. Results The median numbers of repetitions needed to reach the goals of LTB exercises 1, 2, 3, and 6 were 18 (range 7-53), 24 (range 8-46), 24 (range 13-51), and 21 (range 10-46) for system A, and 12 (range 2-30), 16 (range 6-43), 17 (range 4-47), and 15 (range 6-29) for system B (p = not significant). Changing from a 3D to a 2D representation after completing the learning curve increased the average time required for the last exercise (exercise 6; "suturing") from 95.22 to 119.3 s (p < 0.0001). When the results were compared retrospectively with the learning curves acquired with the 2D system, the 3D system significantly reduced the number of repetitions required to reach the goals of LTB exercises 1, 3, and 6. Conclusion Stereo bases of 2.5 and 3.8 mm both provide an acceptable basis for designing 3D systems.
Additionally, our results indicated that basic MIS skills can be learned more quickly with a 3D system than with a 2D system, and that when the 3D effect is eliminated, the corresponding compensatory mechanisms must be relearned.
Affiliation(s)
- Michael Thomaschewski
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | | | - Claudia Benecke
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Anna-Catherina Griesmann
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Hamed Esnaashari
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany; LTB Germany Ltd., Lübeck, Germany
| | - Romy Lux
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Diana Scheppan
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Ronja Simon
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Tobias Keck
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
| | - Tilman Laubert
- Department of Surgery, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany; LTB Germany Ltd., Lübeck, Germany
| |
|
35
|
White matter dissection and structural connectivity of the human vertical occipital fasciculus to link vision-associated brain cortex. Sci Rep 2020; 10:820. [PMID: 31965011 PMCID: PMC6972933 DOI: 10.1038/s41598-020-57837-7] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Accepted: 01/08/2020] [Indexed: 01/10/2023] Open
Abstract
The vertical occipital fasciculus (VOF) is an association fiber tract coursing vertically at the posterolateral corner of the brain. It has been re-evaluated as a major fiber tract linking the dorsal and ventral visual streams. Although previous tractography studies showed that the VOF's cortical projections fall in the dorsal and ventral visual areas, post-mortem dissection studies validating these findings remain limited. First, to validate the previous tractography data, we performed white matter dissection in post-mortem brains and demonstrated the VOF's fiber bundles coursing between the V3A/B areas and the posterior fusiform gyrus. Second, we analyzed the VOF's structural connectivity with diffusion tractography to link vision-associated cortical areas of the HCP MMP1.0 atlas, an updated map of the human cerebral cortex. Based on the criteria that the VOF courses laterally to the inferior longitudinal fasciculus (ILF) and craniocaudally at the posterolateral corner of the brain, we reconstructed the VOF's fiber tracts and found widespread projections to the visual cortex. These findings could suggest a crucial role for the VOF in integrating visual information across the broad visual cortex as well as in connecting the dual visual streams.
|
36
|
Feord RC, Sumner ME, Pusdekar S, Kalra L, Gonzalez-Bellido PT, Wardill TJ. Cuttlefish use stereopsis to strike at prey. SCIENCE ADVANCES 2020; 6:eaay6036. [PMID: 31934631 PMCID: PMC6949036 DOI: 10.1126/sciadv.aay6036] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Accepted: 11/08/2019] [Indexed: 06/10/2023]
Abstract
The camera-type eyes of vertebrates and cephalopods exhibit remarkable convergence, but it is currently unknown whether the mechanisms for visual information processing in these brains, which exhibit wildly disparate architecture, are also shared. To investigate stereopsis in a cephalopod species, we affixed "anaglyph" glasses to cuttlefish and used a three-dimensional perception paradigm. We show that (i) cuttlefish have also evolved stereopsis (i.e., the ability to extract depth information from the disparity between left and right visual fields); (ii) when stereopsis information is intact, the time and distance covered before striking at a target are shorter; (iii) stereopsis in cuttlefish works differently from that in vertebrates, as cuttlefish can extract stereopsis cues from anticorrelated stimuli. These findings demonstrate that although there is convergent evolution in depth computation, cuttlefish stereopsis is likely afforded by a different algorithm than in humans, and not just a different implementation.
Affiliation(s)
- R. C. Feord
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge CB2 3EG, UK
| | - M. E. Sumner
- Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN 55108, USA
| | - S. Pusdekar
- Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN 55108, USA
| | - L. Kalra
- Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN 55108, USA
| | - P. T. Gonzalez-Bellido
- Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN 55108, USA
| | - Trevor J. Wardill
- Department of Ecology, Evolution and Behavior, University of Minnesota, St. Paul, MN 55108, USA
| |
|
37
|
Second-order cues to figure motion enable object detection during prey capture by praying mantises. Proc Natl Acad Sci U S A 2019; 116:27018-27027. [PMID: 31818943 DOI: 10.1073/pnas.1912310116] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023] Open
Abstract
Detecting motion is essential for animals to perform a wide variety of functions. To do so, animals can exploit motion cues, including both first-order cues, such as luminance correlation over time, and second-order cues, obtained by correlating higher-order visual statistics. Since first-order motion cues are typically sufficient for motion detection, it is unclear why sensitivity to second-order motion has evolved in animals, including insects. Here, we investigate the role of second-order motion in prey capture by praying mantises. We show that prey detection uses second-order motion cues to detect figure motion. We further present a model of prey detection based on second-order motion sensitivity, resulting from a layer of position detectors feeding into a second layer of elementary motion detectors. Mantis stereopsis, in contrast, does not require figure motion and is explained by a simpler model that uses only the first layer in both eyes. Second-order motion cues thus enable prey motion to be detected even when it perfectly matches the average background luminance, independent of the elementary motion of any parts of the prey. Subsequent to prey detection, processes such as stereopsis could work to determine the distance to the prey. We thus demonstrate how second-order motion mechanisms enable ecologically relevant behavior, such as detecting camouflaged targets, for downstream visual functions including stereopsis and target tracking.
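The two-layer idea (a position-detector stage feeding standard elementary motion detectors) can be sketched in a few lines. This is a toy instantiation under assumed stimulus parameters: a counterphasing random carrier with no net first-order motion energy, whose contrast envelope drifts rightward; full-wave rectification stands in for the position-detector layer.

```python
import numpy as np

rng = np.random.default_rng(1)
X, T = 64, 40
carrier = rng.choice([-1.0, 1.0], size=X)   # static random dot carrier
x = np.arange(X)

# Second-order stimulus: the carrier flips sign every frame (counterphase)
# while a Gaussian contrast envelope drifts rightward 1 px per frame.
stim = np.empty((T, X))
for t in range(T):
    envelope = np.exp(-0.5 * ((x - (10 + t)) / 4.0) ** 2)
    stim[t] = ((-1) ** t) * carrier * envelope

def reichardt(signal):
    """Summed elementary-motion-detector output; rightward motion > 0."""
    a = signal[:-1, :-1] * signal[1:, 1:]   # delay-and-compare arm
    b = signal[:-1, 1:] * signal[1:, :-1]   # mirror-symmetric arm
    return float(np.sum(a - b))

first_order = reichardt(stim)           # raw luminance: no consistent signal
second_order = reichardt(np.abs(stim))  # after the rectifying "position" stage
```

Rectification recovers the drifting envelope, so the same correlator that fails on the raw stimulus now signals rightward motion, mirroring the claim that a position-detector layer in front of ordinary motion detectors suffices for second-order figure motion.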
|
38
|
Yu F, Shang J, Hu Y, Milford M. NeuroSLAM: a brain-inspired SLAM system for 3D environments. BIOLOGICAL CYBERNETICS 2019; 113:515-545. [PMID: 31571007 DOI: 10.1007/s00422-019-00806-9] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2019] [Accepted: 09/14/2019] [Indexed: 06/10/2023]
Abstract
Roboticists have long drawn inspiration from nature to develop navigation and simultaneous localization and mapping (SLAM) systems such as RatSLAM. Animals such as birds and bats possess superlative navigation capabilities, robustly navigating large, three-dimensional environments by leveraging an internal neural representation of space combined with external sensory cues and self-motion cues. This paper presents a novel neuro-inspired 4DoF (degrees of freedom) SLAM system named NeuroSLAM, based upon computational models of 3D grid cells and multilayered head direction cells, integrated with a vision system that provides external visual cues and self-motion cues. NeuroSLAM's neural network activity drives the creation of a multilayered graphical experience map in real time, enabling relocalization and loop closure through sequences of familiar local visual cues. A multilayered experience map relaxation algorithm is used to correct cumulative errors in path integration after loop closure. Using both synthetic and real-world datasets comprising complex, multilayered indoor and outdoor environments, we demonstrate that NeuroSLAM consistently produces topologically correct three-dimensional maps.
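The loop-closure correction step can be illustrated with a toy analogue (not NeuroSLAM's actual multilayered graph relaxation): a planar robot integrates noisy odometry around a square loop, and on recognizing its start location the accumulated drift is spread linearly back along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth odometry: a closed 4x4 square loop, plus per-step noise.
steps_true = np.array([[1, 0]] * 4 + [[0, 1]] * 4 +
                      [[-1, 0]] * 4 + [[0, -1]] * 4, float)
steps_noisy = steps_true + 0.05 * rng.standard_normal(steps_true.shape)

# Path integration: cumulative sum of noisy steps from the origin.
poses = np.vstack([[0.0, 0.0], np.cumsum(steps_noisy, axis=0)])

# Loop closure: the final pose should coincide with the start; distribute
# the residual drift along the path (a crude stand-in for experience-map
# relaxation, which solves the same problem on a pose graph).
drift = poses[-1] - poses[0]
weights = np.linspace(0.0, 1.0, len(poses))[:, None]
corrected = poses - weights * drift
```

After correction the trajectory closes exactly; a graph-based relaxation generalizes this by weighting each link's correction by its estimated uncertainty rather than uniformly.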
Affiliation(s)
- Fangwen Yu
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China
- Science and Engineering Faculty, Queensland University of Technology and Australian Centre for Robotic Vision, Brisbane, QLD, 4000, Australia
| | - Jianga Shang
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China.
| | - Youjian Hu
- Faculty of Information Engineering, China University of Geosciences and National Engineering Research Center for Geographic Information System, Wuhan, 430074, China
| | - Michael Milford
- Science and Engineering Faculty, Queensland University of Technology and Australian Centre for Robotic Vision, Brisbane, QLD, 4000, Australia
| |
|
39
|
|
40
|
On the Aperture Problem of Binocular 3D Motion Perception. Vision (Basel) 2019; 3:vision3040064. [PMID: 31752372 PMCID: PMC6969946 DOI: 10.3390/vision3040064] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Revised: 11/08/2019] [Accepted: 11/15/2019] [Indexed: 12/27/2022] Open
Abstract
Like many predators, humans have forward-facing eyes that are set a short distance apart so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eye. How the visual system integrates local motion and binocular depth in order to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior to solve the aperture problem in 3D. In two psychophysical experiments, it is shown that instantiations of this model can explain how human observers disambiguate 3D line motion direction behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
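A two-dimensional analogue of such a geometric-statistical model is easy to write down. The sketch below, with invented parameters, finds the MAP velocity given noisy normal-speed constraints from one or more apertures and a zero-mean Gaussian slow-motion prior; the paper's model works in 3D with a spherical motion prior, so treat this only as the shape of the computation.

```python
import numpy as np

def map_velocity(normals, speeds, sigma=0.1, prior_sigma=10.0):
    """MAP 2D velocity from aperture measurements n_i . v ~ N(c_i, sigma^2)
    under a zero-mean isotropic Gaussian prior on v (std prior_sigma).
    Minimizing the negative log posterior reduces to ridge regression."""
    N = np.asarray(normals, float)      # (k, 2) unit normals to the contours
    c = np.asarray(speeds, float)       # (k,) measured normal speeds
    A = N.T @ N / sigma ** 2 + np.eye(2) / prior_sigma ** 2
    return np.linalg.solve(A, N.T @ c / sigma ** 2)

v_true = np.array([2.0, 1.0])
# Two differently oriented contours disambiguate the motion...
both = map_velocity([[1.0, 0.0], [0.0, 1.0]], [2.0, 1.0])
# ...while a single aperture yields only the slow normal-velocity solution.
single = map_velocity([[1.0, 0.0]], [2.0])
```

With one constraint the prior pulls the estimate to the point on the constraint line nearest the origin, which is the classic normal-velocity percept; adding a second orientation recovers the true velocity.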
|
41
|
Papathomas TV, Hughes P. Hughes's Reverspectives: Radical Uses of Linear Perspective on Non-Coplanar Surfaces. Vision (Basel) 2019; 3:vision3040063. [PMID: 31735864 PMCID: PMC6969905 DOI: 10.3390/vision3040063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Revised: 09/23/2019] [Accepted: 09/23/2019] [Indexed: 11/17/2022] Open
Abstract
Two major uses of linear perspective are in planar paintings—the flat canvas is incongruent with the painted 3-D scene—and in forced perspectives, such as theater stages that are concave truncated pyramids, where the physical geometry and the depicted scene are congruent. Patrick Hughes pioneered a third major art form, the reverse perspective, where the depicted scene opposes the physical geometry. Reverse perspectives comprise solid forms composed of multiple planar surfaces (truncated pyramids and prisms) jutting toward the viewer, thus forming concave spaces between the solids. The solids are painted in reverse perspective: as an example, the left and right trapezoids of a truncated pyramid are painted as rows of houses; the bottom trapezoid is painted as the road between them and the top forms the sky. This elicits the percept of a street receding away, even though it physically juts toward the viewer. Under this illusion, the concave void spaces between the solids are transformed into convex volumes. This depth inversion creates a concomitant motion illusion: when a viewer moves in front of the art piece, the scene appears to move vividly. Two additional contributions by the artist are discussed, in which he combines reverse-perspective parts with forced and planar-perspective parts on the same art piece. The effect is spectacular, creating objects on the same planar surface that move in different directions, thus “breaking” the surface apart, demonstrating the superiority of objects over surfaces. We conclude with a discussion on the value of these art pieces in vision science.
Affiliation(s)
- Thomas V. Papathomas
- Laboratory of Vision Research and Center for Cognitive Science, Rutgers University, 152 Frelinghuysen Road, Piscataway, NJ 08854-8020, USA
- Department of Biomedical Engineering, Rutgers University, New Brunswick, NJ 08854, USA
- Correspondence:
| | - Patrick Hughes
- Reverspective, 72 Great Eastern Street, London EC2A 3JL, UK;
| |
|
42
|
Wong NHL, Ban H, Chang DHF. Human Depth Sensitivity Is Affected by Object Plausibility. J Cogn Neurosci 2019; 32:338-352. [PMID: 31633464 DOI: 10.1162/jocn_a_01483] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Using behavioral and fMRI paradigms, we asked how the physical plausibility of complex 3-D objects, as defined by the object's congruence with 3-D Euclidean geometry, affects behavioral thresholds and neural responses to depth information. Stimuli were disparity-defined geometric objects rendered as random dot stereograms, presented in plausible and implausible variations. In the behavior experiment, observers were asked to complete (1) a noise-based depth task that involved judging the depth position of a target embedded in noise and (2) a fine depth judgment task that involved discriminating the nearer of two consecutively presented targets. Interestingly, results indicated greater behavioral sensitivity of depth judgments for implausible than for plausible objects across both tasks. In the fMRI experiment, we measured fMRI responses concurrently with behavioral depth responses. Although univariate responses for depth judgments were largely similar across cortex regardless of object plausibility, multivariate representations for plausible and implausible objects were notably distinguishable in depth-relevant intermediate regions V3 and V3A, in addition to object-relevant LOC. Our data indicate significant modulations of both behavioral judgments of and neural responses to depth by object context. We conjecture that disparity mechanisms interact dynamically with the object recognition problem in the visual system such that disparity computations are adjusted based on object familiarity.
|
43
|
Uji M, Lingnau A, Cavin I, Vishwanath D. Identifying Cortical Substrates Underlying the Phenomenology of Stereopsis and Realness: A Pilot fMRI Study. Front Neurosci 2019; 13:646. [PMID: 31354404 PMCID: PMC6637755 DOI: 10.3389/fnins.2019.00646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Accepted: 06/05/2019] [Indexed: 12/05/2022] Open
Abstract
Viewing a real scene or a stereoscopic image (e.g., 3D movies) with both eyes yields a vivid subjective impression of object solidity, tangibility, immersive negative space and sense of realness; something that is not experienced when viewing single pictures of 3D scenes normally with both eyes. This phenomenology, sometimes referred to as stereopsis, is conventionally ascribed to the derivation of depth from the differences between the two eyes' images (binocular disparity). Here we report on a pilot study designed to explore whether dissociable neural activity associated with the phenomenology of realness can be localized in the cortex. In order to dissociate subjective impression from disparity processing, we capitalized on the finding that the impression of realness associated with stereoscopic viewing can also be generated when viewing a single picture of a 3D scene with one eye through an aperture. Under a blocked fMRI design, subjects viewed intact and scrambled images of natural 3-D objects and scenes under three viewing conditions: (1) single pictures viewed normally with both eyes (binocular); (2) single pictures viewed with one eye through an aperture (monocular-aperture); and (3) stereoscopic anaglyph images of the same scenes viewed with both eyes (binocular stereopsis). Fixed-effects GLM contrasts aimed at isolating the phenomenology of stereopsis demonstrated a selective recruitment of similar posterior parietal regions for both monocular and binocular stereopsis conditions. Our findings provide preliminary evidence that the cortical processing underlying the subjective impression of realness may be dissociable and distinct from the derivation of depth from disparity.
Affiliation(s)
- Makoto Uji
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Angelika Lingnau
- Institute of Psychology, University of Regensburg, Regensburg, Germany
- Ian Cavin
- Tayside Medical Science Centre (TASC), NHS Tayside, Dundee, United Kingdom
- Dhanraj Vishwanath
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom

44
Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex. eNeuro 2019; 6:ENEURO.0362-18.2019. [PMID: 31285275 PMCID: PMC6709213 DOI: 10.1523/eneuro.0362-18.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 06/24/2019] [Accepted: 06/26/2019] [Indexed: 11/21/2022] Open
Abstract
Navigating through natural environments requires localizing objects along three distinct spatial axes. Information about position along the horizontal and vertical axes is available from an object’s position on the retina, while position along the depth axis must be inferred based on second-order cues such as the disparity between the images cast on the two retinae. Past work has revealed that object position in two-dimensional (2D) retinotopic space is robustly represented in visual cortex and can be robustly predicted using a multivariate encoding model, in which an explicit axis is modeled for each spatial dimension. However, no study to date has used an encoding model to estimate a representation of stimulus position in depth. Here, we recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth (z) and the horizontal (x) axes, and the stimuli were presented across a wider range of disparities (out to ∼40 arcmin) compared to previous neuroimaging studies. In addition to performing decoding analyses for comparison to previous work, we built encoding models for depth position and for horizontal position, allowing us to directly compare encoding between these dimensions. Our results validate this method of recovering depth representations from retinotopic cortex. Furthermore, we find convergent evidence that depth is encoded most strongly in dorsal area V3A.
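The channel-encoding approach described above, in which an explicit axis is modeled by position-tuned basis functions, can be sketched in a simplified form. The channel count, tuning width, and simulated voxel data below are assumptions chosen for illustration, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

positions = np.linspace(-1, 1, 9)     # stimulus positions along one axis (e.g., depth)
centers = np.linspace(-1, 1, 6)       # hypothetical channel centers
width = 0.4

def channel_responses(pos):
    """Gaussian tuning of each channel to each stimulus position."""
    return np.exp(-(pos[:, None] - centers[None, :])**2 / (2 * width**2))

# Simulate training data: voxel responses are weighted sums of channel outputs
C_train = channel_responses(positions)                     # trials x channels
W_true = rng.normal(size=(len(centers), 30))               # channels x voxels
B_train = C_train @ W_true + 0.1 * rng.normal(size=(len(positions), 30))

# Step 1: estimate channel-to-voxel weights by least squares
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on held-out responses to recover channel activity
B_test = channel_responses(positions) @ W_true
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
C_hat = C_hat.T                                            # trials x channels

# The recovered channel profile should peak near the true stimulus position
decoded = centers[np.argmax(C_hat, axis=1)]
```

The same two-step fit can be run separately for the depth (z) and horizontal (x) axes, which is what allows the encoding between the two dimensions to be compared directly.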
45
Abstract
A puzzle for neuroscience—and robotics—is how insects achieve surprisingly complex behaviours with such tiny brains. One example is depth perception via binocular stereopsis in the praying mantis, a predatory insect. Praying mantids use stereopsis, the computation of distances from disparities between the two retinal images, to trigger a raptorial strike of their forelegs when prey is within reach. The neuronal basis of this ability is entirely unknown. Here we show the first evidence that individual neurons in the praying mantis brain are tuned to specific disparities and eccentricities, and thus locations in 3D-space. Like disparity-tuned cortical cells in vertebrates, the responses of these mantis neurons are consistent with linear summation of binocular inputs followed by an output nonlinearity. Our study not only proves the existence of disparity sensitive neurons in an insect brain, it also reveals feedback connections hitherto undiscovered in any animal species. The praying mantis, a predatory insect, estimates depth via binocular vision. In this way, the animal decides whether prey is within reach. Here, the authors explore the neural correlates of binocular distance estimation and report that individual neurons are tuned to specific locations in 3D space.
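The mechanism proposed above, linear summation of binocular inputs followed by an output nonlinearity, can be illustrated with a toy one-dimensional model. The Gabor receptive fields, stimulus, and parameter values are invented for illustration; this is the generic energy-model-style computation, not the mantis data analysis.

```python
import numpy as np

x = np.linspace(-2, 2, 401)   # 1D retinal coordinate

def gabor(x, center, sigma=0.4, freq=2.0):
    """Even-symmetric Gabor receptive-field profile."""
    return np.exp(-(x - center)**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * (x - center))

pref_disp = 0.25   # offset between the eyes' receptive fields = preferred disparity
wL = gabor(x, -pref_disp / 2)
wR = gabor(x, +pref_disp / 2)

def response(stim_disp):
    # A narrow bright bar seen by each eye, displaced by the stimulus disparity
    barL = np.exp(-(x + stim_disp / 2)**2 / (2 * 0.05**2))
    barR = np.exp(-(x - stim_disp / 2)**2 / (2 * 0.05**2))
    # Linear binocular summation followed by a squaring output nonlinearity
    return (wL @ barL + wR @ barR)**2

disps = np.linspace(-1, 1, 81)
tuning = np.array([response(d) for d in disps])
best = disps[np.argmax(tuning)]   # tuning peaks near the preferred disparity
```

The squaring stage is what makes the unit's output depend on the match between stimulus disparity and receptive-field offset rather than on either eye's input alone.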
46
Uji M, Jentzsch I, Redburn J, Vishwanath D. Dissociating neural activity associated with the subjective phenomenology of monocular stereopsis: An EEG study. Neuropsychologia 2019; 129:357-371. [PMID: 31034841 DOI: 10.1016/j.neuropsychologia.2019.04.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Revised: 03/26/2019] [Accepted: 04/23/2019] [Indexed: 12/15/2022]
Abstract
The subjective phenomenology associated with stereopsis, of solid tangible objects separated by a palpable negative space, is conventionally thought to be a by-product of the derivation of depth from binocular disparity. However, the same qualitative impression has been reported in the absence of disparity, e.g., when viewing pictorial images monocularly through an aperture. Here we aimed to explore if we could identify dissociable neural activity associated with the qualitative impression of stereopsis in the absence of the processing of binocular disparities. We measured EEG activity while subjects viewed pictorial (non-stereoscopic) images of 2D and 3D geometric forms under four different viewing conditions (binocular, monocular, binocular aperture, monocular aperture). EEG activity was analysed by oscillatory source localization (beamformer technique) to examine power change in occipital and parietal regions across viewing and stimulus conditions in targeted frequency bands (alpha: 8-13 Hz & gamma: 60-90 Hz). We observed expected event-related gamma synchronization and alpha desynchronization in occipital cortex and predominant gamma synchronization in parietal cortex across viewing and stimulus conditions. However, only the viewing condition predicted to generate the strongest impression of stereopsis (monocular aperture) revealed significantly elevated gamma synchronization within the parietal cortex for the critical contrasts (3D vs. 2D form). These findings suggest dissociable neural processes specific to the qualitative impression of stereopsis as distinguished from disparity processing.
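The event-related synchronization/desynchronization measures referred to above boil down to a band-limited power change relative to baseline. The sketch below computes that contrast with a plain FFT on synthetic signals; the study itself used beamformer source localization, and the signal amplitudes here are invented.

```python
import numpy as np

fs = 500                      # sampling rate (Hz), assumed for illustration
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(2)

def band_power(sig, lo, hi):
    """Mean FFT power within a frequency band."""
    f = np.fft.rfftfreq(sig.size, 1 / fs)
    p = np.abs(np.fft.rfft(sig))**2
    return p[(f >= lo) & (f <= hi)].mean()

# Baseline: alpha-dominated; "stimulus": alpha suppressed, gamma enhanced
baseline = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)
stimulus = (0.3 * np.sin(2 * np.pi * 10 * t)
            + 0.8 * np.sin(2 * np.pi * 70 * t)
            + 0.2 * rng.normal(size=t.size))

# Relative power change: negative = desynchronization, positive = synchronization
erd_alpha = (band_power(stimulus, 8, 13) - band_power(baseline, 8, 13)) / band_power(baseline, 8, 13)
ers_gamma = (band_power(stimulus, 60, 90) - band_power(baseline, 60, 90)) / band_power(baseline, 60, 90)
```

In source space the same ratio is computed per voxel after spatial filtering, which is how elevated parietal gamma synchronization for the 3D vs. 2D contrast would be localized.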
Affiliation(s)
- Makoto Uji
- School of Psychology and Neuroscience, University of St Andrews, UK
- Ines Jentzsch
- School of Psychology and Neuroscience, University of St Andrews, UK
- James Redburn
- School of Psychology and Neuroscience, University of St Andrews, UK

47
Armendariz M, Ban H, Welchman AE, Vanduffel W. Areal differences in depth cue integration between monkey and human. PLoS Biol 2019; 17:e2006405. [PMID: 30925163 PMCID: PMC6457573 DOI: 10.1371/journal.pbio.2006405] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Revised: 04/10/2019] [Accepted: 03/12/2019] [Indexed: 11/22/2022] Open
Abstract
Electrophysiological evidence in macaques has primarily implicated the middle temporal (MT) area in depth cue integration, whereas human imaging data pinpoint area V3B/kinetic occipital (V3B/KO). To clarify this conundrum, we decoded monkey functional MRI (fMRI) responses evoked by stimuli signaling near or far depths defined by binocular disparity, relative motion, and their combination, and we compared results with those from an identical experiment previously performed in humans. Responses in macaque area MT are more discriminable when two cues concurrently signal depth, and information provided by one cue is diagnostic of depth indicated by the other. This suggests that monkey area MT computes fusion of disparity and motion depth signals, exactly as shown for human area V3B/KO. Hence, these data reconcile previously reported discrepancies between depth processing in human and monkey by showing the involvement of the dorsal stream in depth cue integration using the same technique, despite the engagement of different regions. In everyday life, we interact with a three-dimensional world that we perceive via our two-dimensional retinas. Our brain can reconstruct the third dimension from these flat retinal images using multiple sources of visual information, or cues. The horizontal displacement of the two retinal images, known as binocular disparity, and the relative motion between different objects are two important depth cues. However, to make the most of the information provided by each cue, our brains must efficiently integrate across them. To examine this process, we used neuroimaging in monkeys to record brain responses evoked by stimuli signaling depths defined by either binocular disparity or relative motion in isolation, and also when the two cues are combined congruently or incongruently.
We found that cortical area MT in monkeys is involved in the fusion of these two particular depth cues, in contrast to previous human imaging data that pinpoint a more posterior cortical area, V3B/KO. Our findings support the existence of depth cue integration mechanisms in primates; however, this fusion appears to be computed in slightly different areas in humans and monkeys.
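The diagnostic criterion used here, that a decoder trained on one depth cue transfers to depth defined by the other cue, can be illustrated with a toy nearest-mean classifier on simulated voxel patterns. All numbers below are invented; real analyses typically use cross-validated linear classifiers on measured fMRI patterns.

```python
import numpy as np

rng = np.random.default_rng(4)
n_vox, n_trials = 50, 40

# In a hypothetical "fusion" region, near vs. far depth drives a shared
# voxel pattern regardless of which cue (disparity or motion) defines it.
depth_axis = rng.normal(size=n_vox)

def trials(sign):
    return sign * depth_axis + rng.normal(0, 2.0, size=(n_trials, n_vox))

disp_near, disp_far = trials(+1), trials(-1)     # disparity-defined depth
mot_near,  mot_far  = trials(+1), trials(-1)     # motion-defined depth

# Nearest-mean classifier trained only on the disparity cue...
m_near, m_far = disp_near.mean(0), disp_far.mean(0)

def predict(X):
    d_near = ((X - m_near)**2).sum(1)
    d_far  = ((X - m_far)**2).sum(1)
    return np.where(d_near < d_far, +1, -1)

# ...transfers to the motion cue: above-chance accuracy indicates that
# one cue is diagnostic of depth defined by the other.
acc = np.mean(np.concatenate([predict(mot_near) == 1, predict(mot_far) == -1]))
```

In a region that keeps the two cues separate, the same transfer test would hover around chance (0.5), which is the contrast that distinguishes fusion from independent cue representations.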
Affiliation(s)
- Marcelo Armendariz
- Laboratory of Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, Belgium
- Hiroshi Ban
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Osaka, Japan
- Andrew E. Welchman
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
- * E-mail: (WV); (AW)
- Wim Vanduffel
- Laboratory of Neuro- and Psychophysiology, Department of Neurosciences, KU Leuven Medical School, Leuven, Belgium
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, United States of America
- Department of Radiology, Harvard Medical School, Boston, Massachusetts, United States of America
- Leuven Brain Institute, Leuven, Belgium
- * E-mail: (WV); (AW)

48
Richter M, Amunts K, Mohlberg H, Bludau S, Eickhoff SB, Zilles K, Caspers S. Cytoarchitectonic segregation of human posterior intraparietal and adjacent parieto-occipital sulcus and its relation to visuomotor and cognitive functions. Cereb Cortex 2019; 29:1305-1327. [PMID: 30561508 PMCID: PMC6373694 DOI: 10.1093/cercor/bhy245] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2018] [Revised: 07/27/2018] [Indexed: 01/05/2023] Open
Abstract
Human posterior intraparietal sulcus (pIPS) and adjacent posterior wall of parieto-occipital sulcus (POS) are functionally diverse, serving higher motor, visual and cognitive functions. Their microstructural basis, though, is still largely unknown. A similar or even more pronounced architectonical complexity, as described in monkeys, could be assumed. We cytoarchitectonically mapped the pIPS/POS in 10 human postmortem brains using an observer-independent, quantitative parcellation. 3D-probability maps were generated within MNI reference space and used for functional decoding and meta-analytic coactivation modeling based on the BrainMap database to decode the general structural-functional organization of the areas. Seven cytoarchitectonically distinct areas were identified: five within human pIPS, three on its lateral (hIP4-6) and two on its medial wall (hIP7-8); and two (hPO1, hOc6) in POS. Mediocaudal areas (hIP7, hPO1) were predominantly involved in visual processing, whereas laterorostral areas (hIP4-6, 8) were associated with higher cognitive functions, e.g. counting. This shift was mirrored by systematic changes in connectivity, from temporo-occipital to premotor and prefrontal cortex, and in cytoarchitecture, from prominent Layer IIIc pyramidal cells to homogeneous neuronal distribution. This architectonical mosaic within human pIPS/POS represents a structural basis of its functional and connectional heterogeneity. The new 3D-maps of the areas enable dedicated assessments of structure-function relationships.
Affiliation(s)
- Monika Richter
- C. and O. Vogt Institute for Brain Research, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
- Katrin Amunts
- C. and O. Vogt Institute for Brain Research, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- JARA-BRAIN, Jülich-Aachen Research Alliance, 52425 Jülich, Germany
- Hartmut Mohlberg
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Sebastian Bludau
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Simon B Eickhoff
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Karl Zilles
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- JARA-BRAIN, Jülich-Aachen Research Alliance, 52425 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany
- Svenja Caspers
- C. and O. Vogt Institute for Brain Research, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- JARA-BRAIN, Jülich-Aachen Research Alliance, 52425 Jülich, Germany

49
Welchman AE. Shape Perception: Boundary Conditions on a Grey Area. Curr Biol 2019; 29:R97-R99. [DOI: 10.1016/j.cub.2018.12.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
50
Microstructural properties of the vertical occipital fasciculus explain the variability in human stereoacuity. Proc Natl Acad Sci U S A 2018; 115:12289-12294. [PMID: 30429321 PMCID: PMC6275509 DOI: 10.1073/pnas.1804741115] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
Seeing in the three-dimensional world—stereopsis—is an innate human ability, but it varies substantially among individuals. The neurobiological basis of this variability is not understood. We combined diffusion and quantitative MRI with psychophysical measurements, and found that variability in stereoacuity is associated with microstructural differences in the right vertical occipital fasciculus, a white matter tract connecting dorsal and ventral visual cortex. This result suggests that the microstructure of the pathways that support information transmission across dorsal and ventral visual areas plays an important role in human stereopsis. Stereopsis is a fundamental visual function that has been studied extensively. However, it is not clear why depth discrimination (stereoacuity) varies more significantly among people than other modalities. Previous studies have reported the involvement of both dorsal and ventral visual areas in stereopsis, implying that not only neural computations in cortical areas but also the anatomical properties of white matter tracts connecting those areas can impact stereopsis. Here, we studied how human stereoacuity relates to white matter properties by combining psychophysics, diffusion MRI (dMRI), and quantitative MRI (qMRI). We performed a psychophysical experiment to measure stereoacuity and, in the same participants, we analyzed the microstructural properties of visual white matter tracts on the basis of two independent measurements, dMRI (fractional anisotropy, FA) and qMRI (macromolecular tissue volume; MTV). Microstructural properties along the right vertical occipital fasciculus (VOF), a major tract connecting dorsal and ventral visual areas, were highly correlated with measures of stereoacuity. This result was consistent for both FA and MTV, suggesting that the behavioral–structural relationship reflects differences in neural tissue density, rather than differences in the morphological configuration of fibers.
fMRI confirmed that binocular disparity stimuli activated the dorsal and ventral visual regions near VOF endpoints. No other occipital tracts explained the variance in stereoacuity. In addition, the VOF properties were not associated with differences in performance on a different psychophysical task (contrast detection). This series of experiments suggests that stereoscopic depth discrimination performance is, at least in part, constrained by dorso-ventral communication through the VOF.
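The convergent-correlation logic here, that both FA and MTV track stereoacuity because both reflect a common tissue property, can be illustrated with simulated numbers. Every quantity below (sample size, effect sizes, noise) is invented; the point is only the shape of the analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20  # hypothetical participants

# Simulated tract measures: FA and MTV share a common tissue-density factor
tissue = rng.normal(size=n)
fa  = 0.5 + 0.05 * tissue + 0.01 * rng.normal(size=n)
mtv = 0.3 + 0.03 * tissue + 0.01 * rng.normal(size=n)

# Stereoacuity threshold: lower = better; tie behavior to the same factor
log_threshold = 1.5 - 0.4 * tissue + 0.2 * rng.normal(size=n)

# Denser tissue -> lower thresholds, so both correlations come out negative,
# and their agreement is what implicates tissue density rather than fiber geometry.
r_fa  = np.corrcoef(fa,  log_threshold)[0, 1]
r_mtv = np.corrcoef(mtv, log_threshold)[0, 1]
```

If FA and MTV had instead disagreed, the FA effect could have been attributed to fiber configuration (crossing or dispersion) rather than tissue density, which is why the two independent measurements matter.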