1. Wright RS, Allan AC, Gamaldo AA, Morgan AA, Lee AK, Erus G, Davatzikos C, Bygrave DC. Neighborhood disadvantage is associated with working memory and hippocampal volumes among older adults. Neuropsychology, Development, and Cognition. Section B, Aging, Neuropsychology and Cognition 2024:1-14. [PMID: 38656243; PMCID: PMC11499292; DOI: 10.1080/13825585.2024.2345926]
Abstract
It is not well understood how neighborhood disadvantage is associated with specific domains of cognitive function and underlying brain health within older adults. Thus, the objective was to examine associations between neighborhood disadvantage, brain health, and cognitive performance, and examine whether associations were more pronounced among women. The study included 136 older adults who underwent cognitive testing and MRI. Neighborhood disadvantage was characterized using the Area Deprivation Index (ADI). Descriptive statistics, bivariate correlations, and multiple regressions were run. Multiple regressions, adjusted for age, sex, education, and depression, showed that higher ADI state rankings (greater disadvantage) were associated with poorer working memory performance (p < .01) and lower hippocampal volumes (p < .01), but not total, frontal, and white matter lesion volumes, nor visual and verbal memory performance. There were no significant sex interactions. Findings suggest that greater neighborhood disadvantage may play a role in working memory and underlying brain structure.
Affiliation(s)
- Alexa C Allan
- Department of Human Development and Family Studies, The Pennsylvania State University, State College, PA, USA
- Anna K Lee
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, USA
- Guray Erus
- Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Desirée C Bygrave
- Department of Psychology, North Carolina Agricultural and Technical State University, Greensboro, NC, USA
2. Jin R, Cai Y, Zhang S, Yang T, Feng H, Jiang H, Zhang X, Hu Y, Liu J. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review. Front Neurosci 2023; 17:1191999. [PMID: 37304011; PMCID: PMC10250625; DOI: 10.3389/fnins.2023.1191999]
Abstract
Optic nerve fibers in the visual pathway play a significant role in vision formation. Damage to optic nerve fibers is a biomarker for the diagnosis of various ophthalmological and neurological diseases; there is also a need to protect optic nerve fibers from damage during neurosurgery and radiation therapy. Reconstruction of optic nerve fibers from medical images can facilitate all of these clinical applications. Although many computational methods have been developed for the reconstruction of optic nerve fibers, a comprehensive review of these methods is still lacking. This paper describes the two strategies for optic nerve fiber reconstruction applied in existing studies: image segmentation and fiber tracking. Compared with image segmentation, fiber tracking can delineate more detailed structures of optic nerve fibers. For each strategy, both conventional and AI-based approaches are introduced; the latter usually demonstrate better performance than the former. From the review, we conclude that AI-based methods are the trend for optic nerve fiber reconstruction and that new techniques such as generative AI can help address the current challenges in the field.
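As a rough illustration of the fiber-tracking strategy the review contrasts with segmentation, the sketch below steps a streamline through a synthetic 2D direction field using simple Euler integration. The field, seed, step size, and domain bounds are all illustrative assumptions, not details of any reviewed method:

```python
import math

def track_streamline(direction_field, seed, step=0.5, n_steps=20):
    """Follow local fiber directions from a seed point via Euler integration.

    direction_field(x, y) -> (dx, dy) returns the local unit direction.
    Tracking stops when the point leaves the [0, 10] x [0, 10] domain.
    """
    x, y = seed
    path = [(x, y)]
    for _ in range(n_steps):
        dx, dy = direction_field(x, y)
        x, y = x + step * dx, y + step * dy
        if not (0 <= x <= 10 and 0 <= y <= 10):
            break  # left the image domain
        path.append((x, y))
    return path

# Toy field where "fibers" run diagonally at 45 degrees everywhere.
diag = lambda x, y: (math.cos(math.pi / 4), math.sin(math.pi / 4))
path = track_streamline(diag, seed=(1.0, 1.0))
```

Real tractography derives the direction field from diffusion MRI and adds curvature and anisotropy stopping criteria; this sketch only shows the core step-and-follow loop.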
Affiliation(s)
- Richu Jin
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yongning Cai
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Shiyang Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Ting Yang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haibo Feng
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongyang Jiang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Xiaoqing Zhang
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yan Hu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu
- Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
3. Chaganti S, Nelson K, Mundy K, Harrigan R, Galloway R, Mawn LA, Landman B. Imaging biomarkers in thyroid eye disease and their clinical associations. J Med Imaging (Bellingham) 2018; 5:044001. [PMID: 30345325; PMCID: PMC6191037; DOI: 10.1117/1.jmi.5.4.044001]
Abstract
The purpose of this study is to understand the phenotypes of thyroid eye disease (TED) through data derived from a multiatlas segmentation of computed tomography (CT) imaging. Images of 170 orbits of 85 retrospectively selected TED patients were analyzed with the developed automated segmentation tool. Twenty-five bilateral orbital structural metrics were used to perform principal component analysis (PCA). PCA of the 25 structural metrics identified the two most dominant structural phenotypes, the "big volume phenotype" and the "stretched optic nerve phenotype," which together accounted for 60% of the variance. Most subjects in the study exhibited one of these characteristics or a combination of both. A Kendall rank correlation between the principal components (phenotypes) and clinical data showed that the big volume phenotype was very strongly correlated (p < 0.05) with motility defects and loss of visual acuity, whereas the stretched optic nerve phenotype was strongly correlated (p < 0.05) with an increased Hertel measurement, relatively better visual acuity, and smoking. Two clinical subtypes of TED, type 1 with enlarged muscles and type 2 with proptosis, are recognizable in CT imaging. Our automated algorithm identifies the phenotypes and finds associations with clinical markers.
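The phenotype-extraction step above amounts to finding dominant principal components of a metric matrix. The sketch below recovers the first principal component of a tiny synthetic table via the covariance matrix and power iteration; the sample values and the three hypothetical metrics are stand-ins, not the study's 25 orbital metrics:

```python
# First principal component ("dominant phenotype") of a small metric matrix,
# computed from scratch: center columns, build the covariance matrix, then
# extract its dominant eigenvector by power iteration.

def mean(xs):
    return sum(xs) / len(xs)

def covariance_matrix(rows):
    """rows: samples as lists of already-centered metric values."""
    n, d = len(rows), len(rows[0])
    return [[sum(r[i] * r[j] for r in rows) / (n - 1) for j in range(d)]
            for i in range(d)]

def power_iteration(mat, iters=200):
    """Dominant eigenvector of a symmetric matrix by repeated multiplication."""
    v = [1.0] * len(mat)
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(len(v)))
             for i in range(len(mat))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data: 6 orbits x 3 metrics (e.g., muscle volume, fat volume, nerve
# length); metrics 1 and 2 covary while metric 3 varies against them.
raw = [[2.0, 1.9, 0.1], [1.5, 1.4, 0.2], [0.5, 0.6, 1.8],
       [0.4, 0.5, 2.0], [1.0, 1.0, 1.0], [0.6, 0.6, 0.9]]
means = [mean(col) for col in zip(*raw)]
centered = [[x - m for x, m in zip(row, means)] for row in raw]
pc1 = power_iteration(covariance_matrix(centered))
```

The resulting loadings show which metrics move together in the dominant mode, which is exactly how a "big volume" versus "stretched nerve" axis would surface from real orbital measurements.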
Affiliation(s)
- Shikha Chaganti
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Katrina Nelson
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Kevin Mundy
- Vanderbilt University, School of Medicine, Vanderbilt Eye Institute, Nashville, Tennessee, United States
- Robert Harrigan
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Robert Galloway
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Louise A. Mawn
- Vanderbilt University, School of Medicine, Vanderbilt Eye Institute, Nashville, Tennessee, United States
- Bennett Landman
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
4. Yang G, Gu J, Chen Y, Liu W, Tang L, Shu H, Toumoulin C. Automatic kidney segmentation in CT images based on multi-atlas image registration. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2014:5538-41. [PMID: 25571249; DOI: 10.1109/embc.2014.6944881]
Abstract
Kidney segmentation is an important step for computer-aided diagnosis and treatment in urology. In this paper, we present an automatic method for kidney segmentation based on multi-atlas image registration. The method relies on a two-step framework to obtain coarse-to-fine segmentation results. In the first step, the down-sampled patient image is registered with a set of low-resolution atlas images, and a coarse segmentation is generated to locate the left and right kidneys. In the second step, the left and right kidneys are cropped from the original image and aligned with another set of high-resolution atlas images to obtain the final segmentations. Results from 14 CT angiographic (CTA) images show that the proposed method segments the kidneys with high accuracy: the average Dice similarity coefficient and surface-to-surface distance between the segmentation results and the reference standard are 0.952 and 0.913 mm, respectively. Furthermore, kidney segmentation was performed on CT urography (CTU) and CTA images of 12 patients to demonstrate the feasibility of the method on CTU images.
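The Dice similarity coefficient used to report accuracy above can be sketched directly; the voxel coordinate sets below are tiny synthetic examples, not the paper's data:

```python
# Dice similarity coefficient between an automatic segmentation and a
# reference standard, each represented as a set of voxel coordinates.

def dice(seg, ref):
    """Dice = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not seg and not ref:
        return 1.0  # two empty segmentations agree trivially
    return 2 * len(seg & ref) / (len(seg) + len(ref))

auto = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)}
manual = {(0, 0, 0), (0, 1, 0), (1, 0, 0), (2, 0, 0)}
score = dice(auto, manual)  # 2 * 3 / (4 + 4) = 0.75
```

In practice the sets come from binary label volumes, but the overlap formula is the same; a score of 0.952, as reported, means the automatic and reference kidney masks agree almost voxel for voxel.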
5. DeLisi MP, Mawn LA, Galloway RL. Image-guided transorbital procedures with endoscopic video augmentation. Med Phys 2014; 41:091901. [PMID: 25186388; PMCID: PMC4137863; DOI: 10.1118/1.4892181]
Abstract
PURPOSE Surgical interventions in the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region and the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of the optic nerve position. This research investigates the impact of endoscopic video augmentation on targeted image-guided navigation in a series of anthropomorphic phantom experiments. METHODS A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation and the other with the standard image guidance technique, in random order. RESULTS The authors measured target identification accuracies of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvements in procedure time (Z = -2.044, p = 0.041) and intraoperator mean procedure time (Z = 2.456, p = 0.014) when augmentation was used. CONCLUSIONS Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize complication risk and enhance surgeon comfort and confidence in the procedure.
Affiliation(s)
- Michael P DeLisi
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235
- Louise A Mawn
- Department of Neurological Surgery, Vanderbilt University, Nashville, Tennessee 37235 and Department of Ophthalmology and Visual Sciences, Vanderbilt University, Nashville, Tennessee 37235
- Robert L Galloway
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 and Department of Neurological Surgery, Vanderbilt University, Nashville, Tennessee 37235
6. Asman AJ, Landman BA. Hierarchical performance estimation in the statistical label fusion framework. Med Image Anal 2014; 18:1070-81. [PMID: 25033470; DOI: 10.1016/j.media.2014.06.005]
Abstract
Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally, fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. The proposed approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, the primary contributions of this manuscript are: (1) we provide a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) confusion matrices for each rater, (2) we highlight the amenability of the proposed hierarchical formulation to many of the state-of-the-art advancements to the statistical fusion framework, and (3) we demonstrate statistically significant improvement on both simulated and empirical data. Specifically, both theoretically and empirically, we show that the proposed hierarchical performance model provides substantial and significant accuracy benefits when applied to two disparate multi-atlas segmentation tasks: (1) 133-label whole-brain anatomy on structural MR, and (2) orbital anatomy on CT.
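To make the rater-performance idea concrete, the sketch below fuses two raters' votes at a single voxel using per-rater confusion matrices. This is a simplified, flat (non-hierarchical) version of the performance models discussed, with a uniform prior; all labels and probabilities are illustrative assumptions:

```python
# Per-voxel statistical label fusion with known rater confusion matrices:
# pick the label maximizing P(true label | observed votes), assuming
# conditionally independent raters and a uniform prior over labels.

LABELS = [0, 1, 2]  # e.g., background, muscle, nerve (illustrative)

def fuse_voxel(votes, confusions, prior=None):
    """votes: observed label per rater.
    confusions[r][true][obs] = P(rater r reports obs | true label)."""
    prior = prior or {lab: 1 / len(LABELS) for lab in LABELS}
    posterior = {}
    for true in LABELS:
        p = prior[true]
        for r, obs in enumerate(votes):
            p *= confusions[r][true][obs]  # likelihood of each rater's vote
        posterior[true] = p
    return max(posterior, key=posterior.get)

# Rater 0 is reliable; rater 1 tends to confuse labels 1 and 2.
reliable = [[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]]
confused = [[0.9, 0.05, 0.05], [0.05, 0.5, 0.45], [0.05, 0.45, 0.5]]
fused = fuse_voxel([1, 2], [reliable, confused])
```

Because the model knows rater 1 often swaps labels 1 and 2, the reliable rater's vote dominates the disagreement. The paper's contribution replaces these flat confusion matrices with hierarchical ones estimated simultaneously, so errors between anatomically related labels are penalized differently from gross errors.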
Affiliation(s)
- Andrew J Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN 37235, USA