1. Mo PKH, Xin M, Wang Z, Lau JTF, Ye X, Hui KH, Yu FY, Lee HH. Patterns of sex behaviors and factors associated with condomless anal intercourse during the COVID-19 pandemic among men who have sex with men in Hong Kong: A cross-sectional study. PLoS One 2024; 19:e0300988. [PMID: 38573984] [PMCID: PMC10994335] [DOI: 10.1371/journal.pone.0300988]
Abstract
OBJECTIVES The present study examined patterns of sex behaviors before and during COVID-19 and identified factors associated with condomless anal intercourse during COVID-19 at the individual, interpersonal, and contextual levels among men who have sex with men (MSM) in Hong Kong. METHODS A cross-sectional study was conducted among MSM in Hong Kong; a total of 463 MSM completed a telephone survey between March 2021 and January 2022. RESULTS Among all participants, the mean numbers of regular, non-regular, and casual sex partners during the COVID-19 period were 1.24, 2.09, and 0.08, respectively. Among those who had sex with regular, non-regular, and casual sex partners during the COVID-19 period, 52.4%, 31.8%, and 46.7%, respectively, reported condomless anal intercourse. Compared to the pre-COVID-19 period, participants reported significantly fewer regular and non-regular sex partners during the COVID-19 period. However, a higher level of condomless anal intercourse with all types of sex partners was also observed during this period. Adjusted for significant socio-demographic variables, logistic regression analyses revealed that perceived severity of COVID-19 (aOR = 0.72, 95% CI = 0.58, 0.88), COVID-19 risk reduction behaviors in general (aOR = 0.68, 95% CI = 0.48, 0.96), COVID-19 risk reduction behaviors during sex encounters (aOR = 0.45, 95% CI = 0.30, 0.66), condom negotiation (aOR = 0.61, 95% CI = 0.44, 0.86), and collective efficacy (aOR = 0.79, 95% CI = 0.64, 0.98) were protective against condomless anal intercourse with any type of sex partner during the COVID-19 period. CONCLUSION The COVID-19 control measures had a dramatic impact on the sexual behavior of MSM in Hong Kong. Interventions that promote condom use during the COVID-19 pandemic are still needed, and such interventions could emphasize prevention of both COVID-19 and HIV.
2. Kanakaraj P, Yao T, Cai LY, Lee HH, Newlin NR, Kim ME, Gao C, Pechman KR, Archer D, Hohman T, Jefferson A, Beason-Held LL, Resnick SM, Garyfallidis E, Anderson A, Schilling KG, Landman BA, Moyer D. DeepN4: Learning N4ITK Bias Field Correction for T1-weighted Images. Neuroinformatics 2024; 22:193-205. [PMID: 38526701] [DOI: 10.1007/s12021-024-09655-9]
Abstract
T1-weighted (T1w) MRI exhibits low-frequency intensity artifacts due to magnetic field inhomogeneities. Removing these biases from T1w images is a critical preprocessing step for spatially consistent image interpretation. N4ITK bias field correction, the current state of the art, is implemented in a way that makes it difficult to port between pipelines and workflows, and thus hard to reimplement and reproduce across local, cloud, and edge platforms. Moreover, N4ITK is opaque to optimization before and after its application, meaning that methodological development must work around the inhomogeneity correction step. Given the importance of bias field correction in structural preprocessing and the need for a flexible implementation, we pursue a deep learning approximation/reinterpretation of N4ITK bias field correction to create a method that is portable, flexible, and fully differentiable. In this paper, we trained a deep learning network, "DeepN4", on eight independent cohorts spanning 72 scanners and a range of ages, using N4ITK-corrected T1w MRI and the bias field (in log space) for supervision. We found that naïve networks can closely approximate N4ITK bias field correction. We evaluated the peak signal-to-noise ratio (PSNR) on the test dataset against the N4ITK-corrected images; the median PSNR between N4ITK- and DeepN4-corrected images was 47.96 dB. In addition, we assessed the DeepN4 model on eight additional external datasets, demonstrating the generalizability of the approach. This study establishes that incompatible N4ITK preprocessing steps can be closely approximated by naïve deep neural networks, facilitating greater flexibility. All code and models are released at https://github.com/MASILab/DeepN4.
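The PSNR comparison reported above follows the standard definition; the sketch below is a generic illustration of the metric, not the authors' evaluation code, and the flat-list image inputs and intensity range are hypothetical.

```python
import math

def psnr(reference, test, data_max):
    """Peak signal-to-noise ratio (dB) between two same-sized images,
    represented here as flat lists of intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(data_max ** 2 / mse)
```

A median PSNR near 48 dB, as reported, corresponds to a very small mean squared error relative to the intensity range.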
3. Yu X, Yang Q, Tang Y, Gao R, Bao S, Cai LY, Lee HH, Huo Y, Moore AZ, Ferrucci L, Landman BA. Deep conditional generative model for longitudinal single-slice abdominal computed tomography harmonization. J Med Imaging (Bellingham) 2024; 11:024008. [PMID: 38571764] [PMCID: PMC10987005] [DOI: 10.1117/1.jmi.11.2.024008]
Abstract
Purpose Two-dimensional single-slice abdominal computed tomography (CT) provides a detailed, high-resolution tissue map that allows quantitative characterization of relationships between health conditions and aging. However, longitudinal analysis of body composition changes using these scans is difficult due to positional variation between slices acquired in different years, which leads to different organs/tissues being captured. Approach To address this issue, we propose C-SliceGen, which takes an arbitrary axial slice in the abdominal region as a condition and generates a pre-defined vertebral-level slice by estimating structural changes in the latent space. Results Our experiments on 2608 volumetric CT scans from two in-house datasets and 50 subjects from the 2015 Multi-Atlas Abdomen Labeling Challenge Beyond the Cranial Vault (BTCV) dataset demonstrate that our model can generate high-quality images that are both realistic and similar to the target slice. We further evaluated our method's ability to harmonize longitudinal positional variation on 1033 subjects from the Baltimore Longitudinal Study of Aging dataset, which contains longitudinal single abdominal slices, and confirmed that our method can harmonize slice positional variance in terms of visceral fat area. Conclusion This approach provides a promising direction for mapping slices from different vertebral levels to a target slice and reducing positional variance in single-slice longitudinal analysis. The source code is available at: https://github.com/MASILab/C-SliceGen.
4. Gao C, Kim ME, Lee HH, Yang Q, Khairi NM, Kanakaraj P, Newlin NR, Archer DB, Jefferson AL, Taylor WD, Boyd BD, Beason-Held LL, Resnick SM, Huo Y, Van Schaik KD, Schilling KG, Moyer D, Išgum I, Landman BA. Predicting Age from White Matter Diffusivity with Residual Learning. arXiv 2024; arXiv:2311.03500v2 (preprint). [PMID: 37986731] [PMCID: PMC10659451]
Abstract
Imaging findings inconsistent with those expected at specific chronological age ranges may serve as early indicators of neurological disorders and increased mortality risk. Estimation of chronological age, and deviations from expected results, from structural magnetic resonance imaging (MRI) data has become an important proxy task for developing biomarkers that are sensitive to such deviations. Complementary to structural analysis, diffusion tensor imaging (DTI) has proven effective in identifying age-related microstructural changes within the brain white matter, thereby presenting itself as a promising additional modality for brain age prediction. Although early studies have sought to harness DTI's advantages for age estimation, there is no evidence that the success of this prediction is owed to the unique microstructural and diffusivity features that DTI provides, rather than the macrostructural features that are also available in DTI data. Therefore, we seek to develop white-matter-specific age estimation to capture deviations from normal white matter aging. Specifically, we deliberately disregard the macrostructural information when predicting age from DTI scalar images, using two distinct methods. The first method relies on extracting only microstructural features from regions of interest (ROIs). The second applies 3D residual neural networks (ResNets) to learn features directly from the images, which are non-linearly registered and warped to a template to minimize macrostructural variations. When tested on unseen data, the first method yields mean absolute error (MAE) of 6.11 ± 0.19 years for cognitively normal participants and MAE of 6.62 ± 0.30 years for cognitively impaired participants, while the second method achieves MAE of 4.69 ± 0.23 years for cognitively normal participants and MAE of 4.96 ± 0.28 years for cognitively impaired participants. We find that the ResNet model captures subtler, non-macrostructural features for brain age prediction.
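The MAE figures above follow the standard definition of mean absolute error between predicted and chronological age; a minimal sketch with made-up ages, not the study's data:

```python
def mean_absolute_error(predicted_ages, true_ages):
    """Mean absolute deviation between predicted and chronological ages (years)."""
    assert len(predicted_ages) == len(true_ages)
    return sum(abs(p - t) for p, t in zip(predicted_ages, true_ages)) / len(true_ages)
```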
5. Seriramulu VP, Suppiah S, Lee HH, Jang JH, Omar NF, Mohan SN, Ibrahim NSN, Azmi NHM, Buhari I, Ahmad U. Review of MR spectroscopy analysis and artificial intelligence applications for the detection of cerebral inflammation and neurotoxicity in Alzheimer's disease. Med J Malaysia 2024; 79:102-110. [PMID: 38287765]
Abstract
INTRODUCTION Magnetic resonance spectroscopy (MRS) has an emerging role as a neuroimaging tool for detecting biomarkers of Alzheimer's disease (AD). To date, MRS has been established as a diagnostic tool for various diseases such as breast cancer and fatty liver disease, as well as brain tumours. However, its utility in neurodegenerative diseases is still at the experimental stage, and the potential role of the modality has not been fully explored, as there is diverse information regarding the aberrations in brain metabolites caused by normal ageing versus neurodegenerative disorders. MATERIALS AND METHODS A literature search was carried out to gather eligible studies from widely used electronic databases (Scopus, PubMed, and Google Scholar) using combinations of the following keywords: AD, MRS, brain metabolites, deep learning (DL), machine learning (ML), and artificial intelligence (AI), with the aim of taking readers through the advancements in MRS analysis and related AI applications for the detection of AD. RESULTS We elaborate on MRS data acquisition, processing, analysis, and interpretation techniques. Recommendations are made for MRS parameters that can obtain the best-quality spectra for fingerprinting the brain metabolomic composition in AD. Furthermore, we summarise ML and DL techniques that have been utilised to estimate the uncertainty in machine-predicted metabolite content, as well as to streamline the process of displaying metabolite derangements that occur as part of ageing. CONCLUSION MRS has a role as a non-invasive tool for the detection of brain metabolite biomarkers that indicate brain metabolic health, which can be integral to the management of AD.
6. Yu X, Yang Q, Zhou Y, Cai LY, Gao R, Lee HH, Li T, Bao S, Xu Z, Lasko TA, Abramson RG, Zhang Z, Huo Y, Landman BA, Tang Y. UNesT: Local spatial representation learning with hierarchical transformer for efficient medical segmentation. Med Image Anal 2023; 90:102939. [PMID: 37725868] [DOI: 10.1016/j.media.2023.102939]
Abstract
Transformer-based models, capable of learning global dependencies, have recently demonstrated exceptional representation learning capabilities in computer vision and medical image analysis. Transformers reformat the image into separate patches and realize global communication via the self-attention mechanism. However, positional information between patches is hard to preserve in such 1D sequences, and its loss can lead to sub-optimal performance when dealing with large amounts of heterogeneous tissues of various sizes in 3D medical image segmentation. Additionally, current methods are not robust and efficient for heavy-duty medical segmentation tasks such as predicting a large number of tissue classes or modeling globally inter-connected tissue structures. To address these challenges, and inspired by the nested hierarchical structures in vision transformers, we propose a novel 3D medical image segmentation method (UNesT) employing a simplified and faster-converging transformer encoder design that achieves local communication among spatially adjacent patch sequences by aggregating them hierarchically. We extensively validate our method on multiple challenging datasets spanning multiple modalities, anatomies, and a wide range of tissue classes, including 133 structures in the brain, 14 organs in the abdomen, 4 hierarchical components in the kidneys, and inter-connected kidney tumors and brain tumors. We show that UNesT consistently achieves state-of-the-art performance and evaluate its generalizability and data efficiency. In particular, the model performs whole-brain segmentation with 133 tissue classes in a single network, outperforming the prior state-of-the-art method SLANT27, an ensemble of 27 networks. Our model increases the mean DSC on the publicly available Colin and CANDI datasets from 0.7264 to 0.7444 and from 0.6968 to 0.7025, respectively.
Code, pre-trained models, and a use-case pipeline are available at: https://github.com/MASILab/UNesT.
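The DSC figures quoted above are Dice similarity coefficients computed per tissue label; a minimal per-label sketch (the voxel labels below are illustrative, not from the datasets):

```python
def dice_coefficient(pred_labels, true_labels, label):
    """Dice similarity coefficient for one tissue label,
    over flattened voxel-label lists of equal length."""
    pred = [v == label for v in pred_labels]
    true = [v == label for v in true_labels]
    intersection = sum(p and t for p, t in zip(pred, true))
    denom = sum(pred) + sum(true)
    # convention: a label absent from both volumes scores perfectly
    return 2 * intersection / denom if denom else 1.0
```

A mean DSC such as the 0.7444 reported for Colin would be this value averaged over all 133 labels.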
7. Kanakaraj P, Yao T, Cai LY, Lee HH, Newlin NR, Kim ME, Gao C, Pechman KR, Archer D, Hohman T, Jefferson A, Beason-Held LL, Resnick SM, Garyfallidis E, Anderson A, Schilling KG, Landman BA, Moyer D. DeepN4: Learning N4ITK Bias Field Correction for T1-weighted Images. Research Square 2023; rs.3.rs-3585882 (preprint). [PMID: 38014176] [PMCID: PMC10680935] [DOI: 10.21203/rs.3.rs-3585882/v1]
Abstract
T1-weighted (T1w) MRI exhibits low-frequency intensity artifacts due to magnetic field inhomogeneities. Removing these biases from T1w images is a critical preprocessing step for spatially consistent image interpretation. N4ITK bias field correction, the current state of the art, is implemented in a way that makes it difficult to port between pipelines and workflows, and thus hard to reimplement and reproduce across local, cloud, and edge platforms. Moreover, N4ITK is opaque to optimization before and after its application, meaning that methodological development must work around the inhomogeneity correction step. Given the importance of bias field correction in structural preprocessing and the need for a flexible implementation, we pursue a deep learning approximation/reinterpretation of N4ITK bias field correction to create a method that is portable, flexible, and fully differentiable. In this paper, we trained a deep learning network, "DeepN4", on eight independent cohorts spanning 72 scanners and a range of ages, using N4ITK-corrected T1w MRI and the bias field (in log space) for supervision. We found that naïve networks can closely approximate N4ITK bias field correction. We evaluated the peak signal-to-noise ratio (PSNR) on the test dataset against the N4ITK-corrected images; the median PSNR between N4ITK- and DeepN4-corrected images was 47.96 dB. In addition, we assessed the DeepN4 model on eight additional external datasets, demonstrating the generalizability of the approach. This study establishes that incompatible N4ITK preprocessing steps can be closely approximated by naïve deep neural networks, facilitating greater flexibility. All code and models are released at https://github.com/MASILab/DeepN4.
8. Li TZ, Still JM, Xu K, Lee HH, Cai LY, Krishnan AR, Gao R, Khan MS, Antic S, Kammer M, Sandler KL, Maldonado F, Landman BA, Lasko TA. Longitudinal Multimodal Transformer Integrating Imaging and Latent Clinical Signatures From Routine EHRs for Pulmonary Nodule Classification. Med Image Comput Comput Assist Interv (MICCAI) 2023; 14221:649-659. [PMID: 38779102] [PMCID: PMC11110542] [DOI: 10.1007/978-3-031-43895-0_61]
Abstract
The accuracy of predictive models for solitary pulmonary nodule (SPN) diagnosis can be greatly increased by incorporating repeat imaging and medical context, such as electronic health records (EHRs). However, clinically routine modalities such as imaging and diagnostic codes can be asynchronous and irregularly sampled over different time scales, which is an obstacle to longitudinal multimodal learning. In this work, we propose a transformer-based multimodal strategy to integrate repeat imaging with longitudinal clinical signatures from routinely collected EHRs for SPN classification. We perform unsupervised disentanglement of latent clinical signatures and leverage time-distance scaled self-attention to jointly learn from clinical signature expressions and chest computed tomography (CT) scans. Our classifier is pretrained on 2,668 scans from a public dataset and 1,149 subjects with longitudinal chest CTs, billing codes, medications, and laboratory tests from the EHRs of our home institution. Evaluation on 227 subjects with challenging SPNs revealed a significant AUC improvement over a longitudinal multimodal baseline (0.824 vs. 0.752 AUC), as well as improvements over a single-cross-section multimodal scenario (0.809 AUC) and a longitudinal imaging-only scenario (0.741 AUC). This work demonstrates the significant advantages of a novel approach for co-learning longitudinal imaging and non-imaging phenotypes with transformers. Code available at https://github.com/MASILab/lmsignatures.
9. Lee HH, Tang Y, Yang Q, Yu X, Cai LY, Remedios LW, Bao S, Landman BA, Huo Y. Semantic-Aware Contrastive Learning for Multi-Object Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:4444-4453. [PMID: 37310834] [PMCID: PMC10524443] [DOI: 10.1109/jbhi.2023.3285230]
Abstract
Medical image segmentation, or computing voxel-wise semantic masks, is a fundamental yet challenging task in the medical imaging domain. To increase the ability of encoder-decoder neural networks to perform this task across large clinical cohorts, contrastive learning provides an opportunity to stabilize model initialization and enhance downstream task performance without ground-truth voxel-wise labels. However, multiple target objects with different semantic meanings and contrast levels may exist in a single image, which poses a problem for adapting traditional contrastive learning methods from prevalent "image-level classification" to "pixel-level segmentation". In this article, we propose a simple semantic-aware contrastive learning approach leveraging attention masks and image-wise labels to advance multi-object semantic segmentation. Briefly, we embed different semantic objects into different clusters rather than using traditional image-level embeddings. We evaluate the proposed method on a multi-organ medical image segmentation task with both in-house data and the MICCAI 2015 BTCV Challenge dataset. Compared with current state-of-the-art training strategies, the proposed pipeline yields substantial Dice score improvements of 5.53% and 6.09% on the two medical image segmentation cohorts, respectively (p < 0.01). Performance is further assessed on an external medical image cohort, the MICCAI FLARE 2021 Challenge dataset, achieving a substantial improvement from Dice 0.922 to 0.933 (p < 0.01).
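The idea of embedding different semantic objects into different clusters can be illustrated with a toy centroid-pull objective; this is a simplified stand-in, not the paper's attention-mask-based loss, and the embeddings and labels below are invented:

```python
from collections import defaultdict

def semantic_cluster_loss(embeddings, labels):
    """Mean squared distance of each object embedding to its semantic-class
    centroid; lower values mean same-class objects cluster together."""
    groups = defaultdict(list)
    for emb, lab in zip(embeddings, labels):
        groups[lab].append(emb)
    centroids = {lab: [sum(dim) / len(dim) for dim in zip(*embs)]
                 for lab, embs in groups.items()}
    total = sum(sum((x - c) ** 2 for x, c in zip(emb, centroids[lab]))
                for emb, lab in zip(embeddings, labels))
    return total / len(embeddings)
```

Minimizing such an objective pulls each object's embedding toward its class centroid, the clustering behavior the abstract describes.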
10. Li TZ, Hin Lee H, Xu K, Gao R, Dawant BM, Maldonado F, Sandler KL, Landman BA. Quantifying emphysema in lung screening computed tomography with robust automated lobe segmentation. J Med Imaging (Bellingham) 2023; 10:044002. [PMID: 37469854] [PMCID: PMC10353481] [DOI: 10.1117/1.jmi.10.4.044002]
Abstract
Purpose Anatomy-based quantification of emphysema in a lung screening cohort has the potential to improve lung cancer risk stratification and risk communication. Segmenting lung lobes is an essential step in this analysis, but leading lobe segmentation algorithms have not been validated for lung screening computed tomography (CT). Approach In this work, we develop an automated approach to lobar emphysema quantification and study its association with lung cancer incidence. We combine self-supervised training with level set regularization and fine-tuning with radiologist annotations on three datasets to develop a lobe segmentation algorithm that is robust for lung screening CT. Using this algorithm, we extract quantitative CT measures for a cohort (n = 1189) from the National Lung Screening Trial and analyze the multivariate association with lung cancer incidence. Results Our lobe segmentation approach achieved an external validation Dice of 0.93, significantly outperforming a leading algorithm at 0.90 (p < 0.01). The percentage of low attenuation volume in the right upper lobe was associated with increased lung cancer incidence (odds ratio: 1.97; 95% CI: [1.06, 3.66]) independent of PLCOm2012 risk factors and diagnosis of whole-lung emphysema. Quantitative lobar emphysema improved the goodness-of-fit to lung cancer incidence (χ² = 7.48, p = 0.02). Conclusions We are the first to develop and validate an automated lobe segmentation algorithm that is robust to smoking-related pathology. We discover a quantitative risk factor, lending further evidence that regional emphysema is independently associated with increased lung cancer incidence. The algorithm is provided at https://github.com/MASILab/EmphysemaSeg.
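"Percentage of low attenuation volume" is conventionally the fraction of lung voxels below an emphysema attenuation threshold; -950 HU is a common choice, but the abstract does not state the exact threshold used, so treat the default below as an assumption:

```python
def percent_low_attenuation(lung_hu_values, threshold_hu=-950):
    """Percentage of lung voxels whose CT attenuation (HU) falls below
    the emphysema threshold."""
    low = sum(1 for hu in lung_hu_values if hu < threshold_hu)
    return 100.0 * low / len(lung_hu_values)
```

Restricting `lung_hu_values` to the voxels of one segmented lobe yields the lobar measure the study associates with lung cancer incidence.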
11. Yang Q, Yu X, Lee HH, Cai LY, Xu K, Bao S, Huo Y, Moore AZ, Makrogiannis S, Ferrucci L, Landman BA. Single slice thigh CT muscle group segmentation with domain adaptation and self-training. J Med Imaging (Bellingham) 2023; 10:044001. [PMID: 37448597] [PMCID: PMC10336322] [DOI: 10.1117/1.jmi.10.4.044001]
Abstract
Purpose Thigh muscle group segmentation is important for assessing muscle anatomy, metabolic disease, and aging. Many efforts have been devoted to quantifying muscle tissue with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single-slice computed tomography (CT) thigh images is challenging. Approach We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from three-dimensional MR to single CT slices. First, we transform the image appearance from MR to CT with CycleGAN and feed the synthesized CT images to a segmenter simultaneously. Single CT slices are divided into hard and easy cohorts based on the entropy of the pseudo-labels predicted by the segmenter. After refining the easy-cohort pseudo-labels based on an anatomical assumption, self-training with the easy and hard splits is applied to fine-tune the segmenter. Results On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the gracilis, hamstrings, quadriceps femoris, and sartorius. Conclusions To the best of our knowledge, this is the first pipeline to achieve domain adaptation from MR to CT for thigh images. The proposed pipeline effectively and robustly extracts muscle groups on two-dimensional single-slice CT thigh images. The container is publicly available at: https://github.com/MASILab/DA_CT_muscle_seg.
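The easy/hard split described above is driven by pseudo-label entropy; a minimal per-image sketch (the softmax probability vectors below are invented, not the pipeline's code):

```python
import math

def mean_pixel_entropy(prob_maps):
    """Mean Shannon entropy over per-pixel class-probability vectors.
    High entropy -> uncertain pseudo-labels -> 'hard' cohort."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return sum(entropy(p) for p in prob_maps) / len(prob_maps)
```

Slices whose mean entropy falls below some cutoff would go to the easy cohort; the cutoff itself is a design choice not specified in the abstract.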
12. Ye B, Lau JTF, Lee HH, Yeung JCH, Mo PKH. The mediating role of resilience on the association between family satisfaction and lower levels of depression and anxiety among Chinese adolescents. PLoS One 2023; 18:e0283662. [PMID: 37228075] [DOI: 10.1371/journal.pone.0283662]
Abstract
PURPOSE This study aimed to explore the associations between family satisfaction, resilience, and anxiety and depression among adolescents, and the mediating role of resilience in these relationships. METHODS A cross-sectional study was conducted among grade 8 to 9 students from 4 secondary schools in Hong Kong; a total of 1,146 participants completed the survey. RESULTS 45.8% and 58.0% of students scored above the cut-offs for mild anxiety and mild depression, respectively. Linear regression analyses showed that family satisfaction was positively associated with resilience, and both family satisfaction and resilience were negatively associated with anxiety and depression. The mediating effects of resilience on the relationships between family satisfaction and anxiety/depression (26.3% and 31.1% of effects accounted for, respectively) were significant. CONCLUSIONS Both family satisfaction and resilience have an important influence on adolescent mental health. Interventions that promote positive family relationships and adolescent resilience may be effective in preventing and reducing anxiety and depression symptoms among adolescents.
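The "% of effects accounted for" figures correspond to the standard proportion-mediated quantity (indirect effect over total effect); the sketch below uses invented path coefficients, not the study's estimates:

```python
def proportion_mediated(a, b, c_prime):
    """a: predictor->mediator path; b: mediator->outcome path;
    c_prime: direct predictor->outcome path.
    Returns indirect effect (a*b) as a fraction of the total effect."""
    indirect = a * b
    return indirect / (indirect + c_prime)
```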
13. Cai LY, Lee HH, Newlin NR, Kim ME, Moyer D, Rheault F, Schilling KG, Landman BA. Implementation considerations for deep learning with diffusion MRI streamline tractography. bioRxiv 2023; 2023.04.03.535465 (preprint). [PMID: 37066284] [PMCID: PMC10104046] [DOI: 10.1101/2023.04.03.535465]
Abstract
One area of medical imaging that has recently experienced innovative deep learning advances is diffusion MRI (dMRI) streamline tractography with recurrent neural networks (RNNs). Unlike traditional imaging studies which utilize voxel-based learning, these studies model dMRI features at points in continuous space off the voxel grid in order to propagate streamlines, or virtual estimates of axons. However, implementing such models is non-trivial, and an open-source implementation is not yet widely available. Here, we describe a series of considerations for implementing tractography with RNNs and demonstrate they allow one to approximate a deterministic streamline propagator with comparable performance to existing algorithms. We release this trained model and the associated implementations leveraging popular deep learning libraries. We hope the availability of these resources will lower the barrier of entry into this field, spurring further innovation.
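The core loop of a streamline propagator, repeatedly querying a learned direction model at off-grid points, can be sketched as below; the constant-direction lambda is a placeholder for the paper's RNN, and the step size and step count are arbitrary illustrative choices:

```python
def propagate(seed, predict_direction, step_size=0.5, n_steps=4):
    """Grow one streamline from a seed point by stepping along
    directions predicted at each current position."""
    streamline = [seed]
    point = seed
    for _ in range(n_steps):
        # in the paper's setting this would be an RNN conditioned on the
        # streamline's history; here it is any callable
        dx, dy, dz = predict_direction(point, streamline)
        point = (point[0] + step_size * dx,
                 point[1] + step_size * dy,
                 point[2] + step_size * dz)
        streamline.append(point)
    return streamline

# placeholder "model": always step along +x
track = propagate((0.0, 0.0, 0.0), lambda p, s: (1.0, 0.0, 0.0))
```

Real propagators also need stopping criteria (e.g. leaving a white matter mask or exceeding a turning angle), which are omitted here.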
14. Cai LY, Lee HH, Newlin NR, Kerley CI, Kanakaraj P, Yang Q, Johnson GW, Moyer D, Schilling KG, Rheault FC, Landman BA. Convolutional-recurrent neural networks approximate diffusion tractography from T1-weighted MRI and associated anatomical context. bioRxiv 2023; 2023.02.25.530046 (preprint). [PMID: 36909466] [PMCID: PMC10002661] [DOI: 10.1101/2023.02.25.530046]
Abstract
Diffusion MRI (dMRI) streamline tractography is the gold-standard for in vivo estimation of white matter (WM) pathways in the brain. However, the high angular resolution dMRI acquisitions capable of fitting the microstructural models needed for tractography are often time-consuming and not routinely collected clinically, restricting the scope of tractography analyses. To address this limitation, we build on recent advances in deep learning which have demonstrated that streamline propagation can be learned from dMRI directly without traditional model fitting. Specifically, we propose learning the streamline propagator from T1w MRI to facilitate arbitrary tractography analyses when dMRI is unavailable. To do so, we present a novel convolutional-recurrent neural network (CoRNN) trained in a teacher-student framework that leverages T1w MRI, associated anatomical context, and streamline memory from data acquired for the Human Connectome Project. We characterize our approach under two common tractography paradigms, WM bundle analysis and structural connectomics, and find approximately a 5-15% difference between measures computed from streamlines generated with our approach and those generated using traditional dMRI tractography. When placed in the literature, these results suggest that the accuracy of WM measures computed from T1w MRI with our method is on the level of scan-rescan dMRI variability and raise an important question: is tractography truly a microstructural phenomenon, or has dMRI merely facilitated its discovery and implementation?
15. Bao S, Cui C, Li J, Tang Y, Lee HH, Deng R, Remedios LW, Yu X, Yang Q, Chiron S, Patterson NH, Lau KS, Liu Q, Roland JT, Coburn LA, Wilson KT, Landman BA, Huo Y. Topological-Preserving Membrane Skeleton Segmentation in Multiplex Immunofluorescence Imaging. Proc SPIE Int Soc Opt Eng 2023; 12471:124710B. [PMID: 37786583] [PMCID: PMC10545297] [DOI: 10.1117/12.2654087]
Abstract
Multiplex immunofluorescence (MxIF) is an emerging imaging technology whose downstream molecular analytics rely heavily on effective cell segmentation. In practice, multiple membrane markers (e.g., NaKATPase, PanCK, and β-catenin) are employed to stain the membranes of different cell types so as to achieve a more comprehensive cell segmentation, since no single marker fits all cell types. However, prevalent watershed-based image processing can have inferior capability for modeling complicated relationships between markers; for example, some markers can be misleading due to questionable stain quality. In this paper, we propose a deep learning based membrane segmentation method that aggregates the complementary information uniquely provided by large-scale MxIF markers. We aim to segment tubular membrane structure in MxIF data using global (membrane-marker z-stack projection image) and local (separate individual markers) information to maximize topology preservation with deep learning. Specifically, we investigate the feasibility of four state-of-the-art 2D deep networks and four volumetric-based loss functions. We conducted a comprehensive ablation study to assess the sensitivity of the proposed method to various combinations of input channels. Beyond using the adjusted Rand index (ARI) as an evaluation metric, and inspired by clDice, we propose a novel volumetric metric specific to skeletal structure, denoted clDiceSKEL. In total, 80 membrane MxIF images were manually traced for 5-fold cross-validation. Our model outperforms the baseline with 20.2% and 41.3% increases in clDiceSKEL and ARI performance, which is significant (p < 0.05, Wilcoxon signed-rank test). Our work explores a promising direction for advancing cell segmentation in MxIF imaging with deep learning membrane segmentation. Tools are available at https://github.com/MASILab/MxIF_Membrane_Segmentation.
|
16
|
Yu X, Tang Y, Yang Q, Lee HH, Gao R, Bao S, Moore AZ, Ferrucci L, Landman BA. Longitudinal Variability Analysis on Low-dose Abdominal CT with Deep Learning-based Segmentation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2023; 12464:1246423. [PMID: 37465093 PMCID: PMC10353779 DOI: 10.1117/12.2653762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
Metabolic health is increasingly implicated as a risk factor across conditions from cardiology to neurology, and efficient assessment of body composition is critical to quantitatively characterizing these relationships. 2D low-dose single-slice computed tomography (CT) provides a high-resolution, quantitative tissue map, albeit with a limited field of view. Although numerous analyses have been proposed for quantifying image context, there has been no comprehensive study of longitudinal variability in low-dose single-slice CT with automated segmentation. We studied a total of 1816 slices from 1469 subjects of the Baltimore Longitudinal Study of Aging (BLSA) abdominal dataset using supervised deep learning-based segmentation and an unsupervised clustering method. Of the 1469 subjects, 300 whose first two scans were two years apart were selected to evaluate longitudinal variability, using measurements including the intraclass correlation coefficient (ICC) and coefficient of variation (CV) of tissue/organ size and mean intensity. We showed that our segmentation methods are stable in longitudinal settings, with Dice scores ranging from 0.821 to 0.962 for thirteen target abdominal tissue structures. We observed high variability in most organs (ICC < 0.5) and low variability in the areas of muscle, abdominal wall, fat, and body mask (average ICC ≥ 0.8). We found that organ variability is highly related to the cross-sectional position of the 2D slice. Our efforts pave the way for quantitative exploration and quality control to reduce uncertainties in longitudinal analysis.
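The two repeatability statistics used above can be computed directly; a minimal numpy sketch of ICC(2,1) (two-way random effects, absolute agreement) and a mean within-subject CV, assuming one row per subject and one column per scan (function names and the specific ICC form are illustrative assumptions, as the abstract does not state which variant was used):

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement.
    X has shape (n_subjects, k_repeated_measurements)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    gm = X.mean()
    ssr = k * ((X.mean(axis=1) - gm) ** 2).sum()   # between-subject
    ssc = n * ((X.mean(axis=0) - gm) ** 2).sum()   # between-measurement
    sse = ((X - gm) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cv_percent(X):
    """Mean within-subject coefficient of variation, in percent."""
    X = np.asarray(X, dtype=float)
    return float(np.mean(X.std(axis=1, ddof=1) / X.mean(axis=1)) * 100)
```

Perfectly repeatable measurements give ICC = 1 and CV = 0; ICC below 0.5 (as observed for most organs) indicates that within-subject variability rivals between-subject variability.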
|
17
|
Lee HH, Tang Y, Bao S, Yang Q, Xu X, Schey KL, Spraggins JM, Huo Y, Landman BA. Unsupervised Registration Refinement for Generating Unbiased Eye Atlas. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2023; 12464:1246422. [PMID: 37465097 PMCID: PMC10353780 DOI: 10.1117/12.2653753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
With the confounding effects of demographics across large-scale imaging surveys, substantial variation is observed in the volumetric structure of the orbit and in eye anthropometry. Such variability makes it more difficult to localize the anatomical features of the eye organs for population analysis. To accommodate the variability of eye organs with stable registration transfer, we propose an unbiased eye atlas template, followed by a hierarchical coarse-to-fine approach, to provide generalized eye organ context across populations. We retrieved volumetric scans from 1842 healthy patients for generating an eye atlas template with minimal bias. Briefly, we select 20 subject scans and use an iterative approach to generate an initial unbiased template. We then perform metric-based registration of the remaining samples to the unbiased template and generate coarse registered outputs. The coarse registered outputs are further leveraged to train a deep probabilistic network that refines the organ deformation in an unsupervised setting. Computed tomography (CT) scans of 100 de-identified subjects are used to generate and evaluate the unbiased atlas template with the hierarchical pipeline. The refined registration shows stable transfer of the eye organs, which were well localized in the high-resolution (0.5 mm³) atlas space and demonstrated a significant improvement of 2.37% Dice in inverse label transfer performance. The subject-wise qualitative representations with surface rendering successfully demonstrate the transferred details of the organ context and show the applicability of generalizing morphological variation across patients.
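The iterative unbiased-template construction sketched above (register all subjects to the current template, average, repeat) can be illustrated with a 1D toy analogue, replacing deformable registration with integer circular shifts found by cross-correlation. Everything here is a deliberate simplification for illustration, not the paper's registration method:

```python
import numpy as np

def best_shift(sig, ref):
    """Integer circular shift aligning sig to ref by maximal correlation."""
    scores = [np.dot(np.roll(sig, s), ref) for s in range(len(sig))]
    return int(np.argmax(scores))

def unbiased_template(signals, n_iter=5):
    """Iteratively average translation-aligned 1D signals: a toy
    analogue of unbiased atlas template construction."""
    template = np.mean(signals, axis=0)
    for _ in range(n_iter):
        aligned = [np.roll(s, best_shift(s, template)) for s in signals]
        template = np.mean(aligned, axis=0)
    return template
```

Starting from shifted copies of the same bump, the loop converges to a sharp template rather than the blurred naive average, which is the motivation for iterating registration and averaging.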
|
18
|
Yang Q, Yu X, Lee HH, Tang Y, Bao S, Gravenstein KS, Moore AZ, Makrogiannis S, Ferrucci L, Landman BA. Label efficient segmentation of single slice thigh CT with two-stage pseudo labels. J Med Imaging (Bellingham) 2022; 9:052405. [PMID: 35607409 PMCID: PMC9118142 DOI: 10.1117/1.jmi.9.5.052405] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 05/02/2022] [Indexed: 07/20/2023] Open
Abstract
Purpose: Muscle, bone, and fat segmentation from thigh images is essential for quantifying body composition. Voxelwise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require a significant amount of data. Due to the high cost of manual annotation, training deep learning models with limited human-labeled data is desirable, but it is a challenging problem. Approach: Inspired by transfer learning, we proposed a two-stage deep learning pipeline to address thigh and lower leg segmentation. We studied three datasets: 3022 thigh slices and 8939 lower leg slices from the BLSA dataset, and 121 thigh slices from the GESTALT study. First, we generated pseudo-labels for the thigh based on approximate handcrafted approaches using CT intensity and anatomical morphology. Then, those pseudo-labels were fed into deep neural networks to train models from scratch. Finally, the first-stage model was loaded as the initialization and fine-tuned with a more limited set of expert human labels of the thigh. Results: We evaluated the performance of this framework on 73 thigh CT images and obtained an average Dice similarity coefficient (DSC) of 0.927 across muscle, internal bone, cortical bone, subcutaneous fat, and intermuscular fat. To test the generalizability of the proposed framework, we applied the model to lower leg images and obtained an average DSC of 0.823. Conclusions: Approximate handcrafted pseudo-labels can provide a good initialization for deep neural networks, helping to reduce the need for, and make full use of, human expert labeled data.
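The Dice similarity coefficient used as the evaluation metric here is straightforward to compute from binary masks; a minimal numpy sketch (per-class DSC is then averaged across tissue labels):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * inter / denom if denom else 1.0
```

DSC ranges from 0 (no overlap) to 1 (identical masks), and weights overlap twice relative to the mask sizes, so it is less forgiving of small-structure errors than voxel accuracy.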
|
19
|
Kanakaraj P, Ramadass K, Bao S, Basford M, Jones LM, Lee HH, Xu K, Schilling KG, Carr JJ, Terry JG, Huo Y, Sandler KL, Newton AT, Landman BA. Workflow Integration of Research AI Tools into a Hospital Radiology Rapid Prototyping Environment. J Digit Imaging 2022; 35:1023-1033. [PMID: 35266088 PMCID: PMC9485498 DOI: 10.1007/s10278-022-00601-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 01/14/2022] [Accepted: 01/23/2022] [Indexed: 11/25/2022] Open
Abstract
The field of artificial intelligence (AI) in medical imaging is undergoing explosive growth, and Radiology is a prime target for innovation. The American College of Radiology Data Science Institute has identified more than 240 specific use cases where AI could be used to improve clinical practice. In this context, thousands of potential methods are developed by research labs and industry innovators. Deploying AI tools within a clinical enterprise, even on limited retrospective evaluation, is complicated by security and privacy concerns. Thus, innovation must be weighed against the substantive resources required for local clinical evaluation. To reduce barriers to AI validation while maintaining rigorous security and privacy standards, we developed the AI Imaging Incubator. The AI Imaging Incubator serves as a DICOM storage destination within a clinical enterprise where images can be directed for novel research evaluation under Institutional Review Board approval. AI Imaging Incubator is controlled by a secure HIPAA-compliant front end and provides access to a menu of AI procedures captured within network-isolated containers. Results are served via a secure website that supports research and clinical data formats. Deployment of new AI approaches within this system is streamlined through a standardized application programming interface. This manuscript presents case studies of the AI Imaging Incubator applied to randomizing lung biopsies on chest CT, liver fat assessment on abdomen CT, and brain volumetry on head MRI.
|
20
|
Lee HH, Tang Y, Xu K, Bao S, Fogo AB, Harris R, de Caestecker MP, Heinrich M, Spraggins JM, Huo Y, Landman BA. Multi-contrast computed tomography healthy kidney atlas. Comput Biol Med 2022; 146:105555. [PMID: 35533459 PMCID: PMC10243466 DOI: 10.1016/j.compbiomed.2022.105555] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 03/28/2022] [Accepted: 04/21/2022] [Indexed: 11/03/2022]
Abstract
The construction of three-dimensional multi-modal tissue maps provides an opportunity to spur interdisciplinary innovation across temporal and spatial scales through information integration. While the preponderance of effort is allocated to the cellular level, exploring changes in cell interactions and organization, contextualizing findings within organs and systems is essential to visualize and interpret higher-resolution linkage across scales. There is substantial normal variation in kidney morphometry and appearance across body size, sex, and imaging protocol in abdominal computed tomography (CT). A volumetric atlas framework is needed to integrate and visualize this variability across scales, yet no atlas framework exists for the abdominal and retroperitoneal organs in multi-contrast CT. Hence, we propose a high-resolution CT retroperitoneal atlas specifically optimized for the kidney across non-contrast CT and early arterial, late arterial, venous, and delayed contrast-enhanced CT. We introduce a deep learning-based volume-of-interest extraction method that localizes 2D slices with a representative score and crops within the range of abdominal interest. An automated two-stage hierarchical registration pipeline then registers abdominal volumes to a high-resolution CT atlas template with DEEDS affine and non-rigid registration. To generate and evaluate the atlas framework, multi-contrast CT scans of 500 subjects (without reported history of renal disease; age 15-50 years; 250 males and 250 females) were processed. PDD-Net with affine registration achieved the best overall mean Dice for portal venous phase multi-organ label transfer with the registration pipeline (0.540 ± 0.275, p < 0.0001, Wilcoxon signed-rank test) compared to the other registration tools. It also demonstrated the best performance, with median Dice over 0.8, in transferring kidney information to the atlas space.
DEEDS performed consistently, with stable transfer performance in the average maps of all phases, including clearly delineated kidney boundaries with contrastive characteristics, while PDD-Net demonstrated stable kidney registration only in the average maps of the early arterial, late arterial, and portal venous phases. The variance maps show low intensity variance in the kidney regions with DEEDS across all contrast phases, and with PDD-Net across the late arterial and portal venous phases. We demonstrate stable generalizability of the atlas template, integrating normal kidney variation from small to large, across contrast modalities and across populations with highly variable demographics. Linking the atlas with demographics provides a better understanding of the variation of kidney anatomy across populations.
|
21
|
Wang CT, Xu JC, Chan KC, Lee HH, Tso CY, Lin CSK, Chao CYH, Fu SC. Infection control measures for public transportation derived from the flow dynamics of obstructed cough jet. JOURNAL OF AEROSOL SCIENCE 2022; 163:105995. [PMID: 35382445 PMCID: PMC8971108 DOI: 10.1016/j.jaerosci.2022.105995] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 03/21/2022] [Accepted: 03/21/2022] [Indexed: 06/14/2023]
Abstract
During the COVID-19 pandemic, the WHO and CDC suggested that people stay 1 m and 1.8 m away from others, respectively. Keeping social distance avoids close contact and mitigates infection spread. Many researchers suspect the suggested distances are insufficient because aerosols can spread up to 7-8 m away. Beyond this debate, these distancing recommendations assume unobstructed respiratory activities such as coughing and sneezing. In this work, we instead focused on the common but less-studied aerosol spread from an obstructed cough. The flow dynamics of a cough jet blocked by a seat backrest and a gasper jet in a cabin environment were characterized using the particle image velocimetry (PIV) technique. We showed that the backrest and the gasper jet can protect the front passenger from droplet spray in public transportation, where maintaining social distance is difficult. A model was developed to describe the deflection of the cough jet trajectory by the gasper jet, which matched the PIV results well. We found that buoyancy and the droplets carried within the jet have almost no effect on the short-range cough jet trajectory. Infection control measures were suggested for public transportation, including use of backrests and gasper jets, installation of localized exhaust, and surface cleaning of the backrest.
|
22
|
Yang Q, Hansen CB, Cai LY, Rheault F, Lee HH, Bao S, Chandio BQ, Williams O, Resnick SM, Garyfallidis E, Anderson AW, Descoteaux M, Schilling KG, Landman BA. Learning white matter subject-specific segmentation from structural MRI. Med Phys 2022; 49:2502-2513. [PMID: 35090192 PMCID: PMC9053869 DOI: 10.1002/mp.15495] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 12/20/2021] [Accepted: 01/10/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Mapping brain white matter (WM) is essential for building an understanding of brain anatomy and function. Tractography-based methods derived from diffusion-weighted MRI (dMRI) are the principal tools for investigating WM. These procedures rely on time-consuming dMRI acquisitions that may not always be available, especially for legacy or time-constrained studies. To address this problem, we aim to generate WM tracts from structural MRI images using deep learning. METHODS Following recently proposed innovations in structural anatomical segmentation, we evaluate the feasibility of training multiple spatially localized convolutional neural networks to learn context from fixed spatial patches of structural MRI on a standard template. We focus on six widely used dMRI tractography algorithms (TractSeg, RecoBundles, XTRACT, Tracula, automated fiber quantification (AFQ), and AFQclipped) and train 125 U-Net models to learn these techniques from 3870 T1-weighted images from the Baltimore Longitudinal Study of Aging, the Human Connectome Project S1200 release, and scans acquired at Vanderbilt University. RESULTS The proposed framework identifies fiber bundles with high agreement against tractography-based pathways, with median Dice coefficients from 0.62 to 0.87 on a test cohort, achieving improved subject-specific accuracy compared to population atlas-based methods. We demonstrate the generalizability of the proposed framework on three externally available datasets. CONCLUSIONS We show that patch-wise convolutional neural networks can achieve robust bundle segmentation from T1-weighted images alone. We envision the use of this framework for visualizing the expected course of WM pathways when dMRI is not available.
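The fixed-spatial-patch input scheme for the spatially localized networks can be sketched as a simple tiling routine; the patch shape, stride, and function name below are illustrative, and the actual patch layout on the template is defined by the paper:

```python
import numpy as np

def extract_patches(volume, patch_shape, stride):
    """Tile a 3D volume into fixed spatial patches; return the stacked
    patches and their corner indices for later reassembly."""
    pz, py, px = patch_shape
    Z, Y, X = volume.shape
    patches, corners = [], []
    for z in range(0, Z - pz + 1, stride):
        for y in range(0, Y - py + 1, stride):
            for x in range(0, X - px + 1, stride):
                patches.append(volume[z:z+pz, y:y+py, x:x+px])
                corners.append((z, y, x))
    return np.stack(patches), corners
```

Because every subject is registered to the same template, a given patch index always covers the same anatomical neighborhood, which is what lets one small network per patch specialize in local context.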
|
23
|
Yu X, Tang Y, Yang Q, Lee HH, Bao S, Moore AZ, Ferrucci L, Landman BA. Accelerating 2D Abdominal Organ Segmentation with Active Learning. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:120323F. [PMID: 36303576 PMCID: PMC9604047 DOI: 10.1117/12.2611595] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Abdominal computed tomography (CT) imaging enables assessment of body habitus and organ health. Quantification of these health factors necessitates semantic segmentation of key structures. Deep learning efforts have shown remarkable success in automating segmentation of abdominal CT, but these methods largely rely on 3D volumes and are not applicable when single-slice imaging is used to minimize radiation dose. For 2D abdominal organ segmentation, the lack of 3D context and the variety of acquired image levels are major challenges. Deep learning approaches for 2D abdominal organ segmentation benefit from adding more manually annotated images, but annotation is resource-intensive given the large quantity required and the need for expertise. Herein, we designed a gradient-based active learning annotation framework that meta-parameterizes and optimizes the exemplars to dynamically select 'hard cases', achieving better results with fewer annotated slices and reduced annotation effort. With the Baltimore Longitudinal Study of Aging (BLSA) cohort, we evaluated performance starting from 286 subjects and iteratively adding 50 subjects at a time, up to 586 subjects in total. We compared the amount of additional data required to achieve the same Dice score between our proposed method and random selection. To reach 0.97 of the maximum Dice, random selection needed 4.4 times more data than our active learning framework. The proposed framework maximizes the efficacy of manual effort and accelerates learning.
|
24
|
Lee HH, Tang Y, Bao S, Yang Q, Xu X, Fogo AB, Harris R, de Caestecker MP, Spraggins JM, Heinrich M, Huo Y, Landman BA. Supervised Deep Generation of High-Resolution Arterial Phase Computed Tomography Kidney Substructure Atlas. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:120322S. [PMID: 36303577 PMCID: PMC9605120 DOI: 10.1117/12.2608290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The Human BioMolecular Atlas Program (HuBMAP) provides an opportunity to contextualize findings from the cellular to the organ-system level. Constructing an atlas is the primary endpoint for generalizing anatomical information across scales and populations. An initial target of HuBMAP is the kidney, and arterial-phase contrast-enhanced computed tomography (CT) provides distinctive appearance and anatomical context for the internal substructures of the kidney, such as the renal cortex, medulla, and pelvicalyceal system. With the confounding effects of demographics and morphological characteristics of the kidney across large-scale imaging surveys, substantial variation is observed in internal substructure morphometry, and in intensity contrast due to variance in imaging protocols. Such variability makes it more difficult to localize the anatomical features of the kidney substructures in a well-defined spatial reference for clinical analysis. To stabilize the localization of kidney substructures in the presence of this variability, we propose a high-resolution CT kidney substructure atlas template. Briefly, we introduce a deep learning preprocessing technique to extract the volume of interest of the abdominal region, and further perform a deeply supervised registration pipeline to stably adapt the anatomical context of the kidney's internal substructures. To generate and evaluate the atlas template, arterial-phase CT scans of 500 control subjects were de-identified and registered to the atlas template with a complete end-to-end pipeline. With stable registration to the abdominal wall and kidney organs, the internal substructures of both left and right kidneys are substantially localized in the high-resolution atlas space. The atlas average template successfully demonstrates the contextual details of the internal structures and is applicable to generalizing the morphological variation of internal substructures across patients.
|
25
|
Yang Q, Yu X, Lee HH, Tang Y, Bao S, Gravenstein KS, Moore AZ, Makrogiannis S, Ferrucci L, Landman BA. Quantification of muscle, bones, and fat on single slice thigh CT. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:120321K. [PMID: 36303572 PMCID: PMC9603775 DOI: 10.1117/12.2611664] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Muscle, bone, and fat segmentation of thigh CT slices is essential for body composition research. Voxel-wise image segmentation enables quantification of tissue properties including area, intensity, and texture. Deep learning approaches have had substantial success in medical image segmentation, but they typically require substantial data. Due to the high cost of manual annotation, training deep learning models with limited human-labeled data is desirable but challenging. Inspired by transfer learning, we proposed a two-stage deep learning pipeline to address this issue in thigh segmentation. We study 2836 slices from the Baltimore Longitudinal Study of Aging (BLSA) and 121 slices from the Genetic and Epigenetic Signatures of Translational Aging Laboratory Testing (GESTALT) study. First, we generated pseudo-labels based on approximate hand-crafted approaches using CT intensity and anatomical morphology. Then, those pseudo-labels were fed into deep neural networks to train models from scratch. Finally, the first-stage model was loaded as the initialization and fine-tuned with a more limited set of expert human labels. We evaluated the performance of this framework on 56 thigh CT scans and obtained average Dice of 0.979, 0.969, 0.953, 0.980, and 0.800 for five tissues: muscle, cortical bone, internal bone, subcutaneous fat, and intermuscular fat, respectively. We evaluated generalizability by manually reviewing 3504 external BLSA single-thigh images from 1752 thigh slices. The results were consistent and passed human review with 150 thigh images failing, demonstrating that the proposed method has strong generalizability.
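The intensity-based pseudo-labeling step can be illustrated with approximate Hounsfield-unit thresholds. The ranges below are generic textbook values for fat, soft tissue, and bone, not the paper's exact rules, and the real pipeline also uses anatomical morphology:

```python
import numpy as np

def pseudo_label_thigh(ct_hu):
    """Approximate intensity-based pseudo-labels for a thigh CT slice
    in Hounsfield units. Thresholds are illustrative only:
    0 = background, 1 = fat, 2 = muscle/soft tissue, 3 = bone."""
    labels = np.zeros(ct_hu.shape, dtype=np.uint8)
    labels[(ct_hu >= -190) & (ct_hu <= -30)] = 1   # fat
    labels[(ct_hu > -30) & (ct_hu <= 150)] = 2     # muscle / soft tissue
    labels[ct_hu > 200] = 3                        # bone
    return labels
```

Such coarse labels are noisy at tissue boundaries, which is exactly why the pipeline uses them only to pre-train the network before fine-tuning on a small set of expert labels.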
|