1.
Akhtar Y, Udupa JK, Tong Y, Wu C, Liu T, Tong L, Hosseini M, Al-Noury M, Chodvadiya M, McDonough JM, Mayer OH, Biko DM, Anari JB, Cahill P, Torigian DA. Auto-segmentation of hemi-diaphragms in free-breathing dynamic MRI of pediatric subjects with thoracic insufficiency syndrome. medRxiv 2024:2024.09.17.24313704. [PMID: 39371175 PMCID: PMC11451659 DOI: 10.1101/2024.09.17.24313704]
Abstract
Purpose In respiratory disorders such as thoracic insufficiency syndrome (TIS), quantitative study of the regional motion of the left hemi-diaphragm (LHD) and right hemi-diaphragm (RHD) can give detailed insights into the distribution and severity of abnormalities in individual patients. Dynamic magnetic resonance imaging (dMRI) is a preferred imaging modality for capturing dynamic images of respiration, since it involves no ionizing radiation and can be performed under free-breathing conditions. Given 4D images constructed from dMRI acquired at sagittal locations, diaphragm segmentation is an essential step in the quantitative analysis of the LHD and RHD in these 4D images. Methods In this paper, we segment the LHD and RHD in three steps: recognition of the diaphragm, delineation of the diaphragm, and separation of the diaphragm along the mid-sagittal plane into LHD and RHD. The challenges posed by dMRI images include low resolution, motion blur, suboptimal contrast resolution, inconsistent meaning of gray-level intensities for the same object across multiple scans, and low signal-to-noise ratio. We utilized deep learning (DL) architectures such as Path Aggregation Network and Dual Attention Network for the recognition step, Dense-Net and Residual-Net in an enhanced encoder-decoder architecture for the delineation step, and a combination of GoogleNet and a Recurrent Neural Network to identify the mid-sagittal plane in the separation step. Because images of TIS patients are especially challenging owing to the highly distorted and variable anatomy of the thorax, in such images we localize the diaphragm using auto-segmentations of the lungs and the thoraco-abdominal skin.
Results We achieved an average ± SD mean-Hausdorff distance of ∼3 ± 3 mm for the delineation step and a positional error of ∼3 ± 3 mm in recognizing the mid-sagittal plane on 100 3D test images of TIS patients, with a separate set of ∼430 3D images of TIS patients utilized for building the delineation and separation models. We showed that, in images of near-normal subjects, auto-segmentations of the diaphragm are indistinguishable from segmentations by experts, and the algorithmic identification of the mid-sagittal plane is likewise indistinguishable from identification by experts. Conclusions Motivated by applications in surgical planning for disorders such as TIS, we have demonstrated an auto-segmentation set-up for the diaphragm in dMRI images of pediatric subjects with TIS. The results are promising and show that our system can handle the aforesaid challenges. We intend to use the auto-segmentations of the diaphragm to create initial ground truth (GT) for newly acquired data and then refine it, to expedite the creation of GT for diaphragm motion analysis, and to test the efficacy of the proposed method for optimizing pre-treatment planning and post-operative assessment of patients with TIS and other disorders.
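The mean-Hausdorff distance quoted above measures boundary agreement between an automatic and a reference segmentation. As a rough illustration only (not the authors' implementation), a symmetric mean variant over two boundary point sets can be sketched as:

```python
import numpy as np

def mean_hausdorff(a_pts, b_pts):
    """Symmetric mean Hausdorff distance between two boundary point sets.
    For each point in A, take the distance to its nearest point in B and
    average; do the same from B to A; average the two directed means."""
    a = np.asarray(a_pts, dtype=float)
    b = np.asarray(b_pts, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two toy 2D boundaries offset by 1 mm vertically -> distance of 1.0 mm.
contour_a = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
contour_b = [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
print(mean_hausdorff(contour_a, contour_b))  # 1.0
```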
2.
Akhtar Y, Udupa JK, Tong Y, Liu T, Wu C, Kogan R, Al-Noury M, Hosseini M, Tong L, Mannikeri S, Odhner D, McDonough JM, Lott C, Clark A, Cahill PJ, Anari JB, Torigian DA. Auto-segmentation of thoraco-abdominal organs in pediatric dynamic MRI. medRxiv 2024:2024.05.04.24306582. [PMID: 38766023 PMCID: PMC11100850 DOI: 10.1101/2024.05.04.24306582]
Abstract
Purpose Analysis of the abnormal motion of thoraco-abdominal organs in respiratory disorders such as thoracic insufficiency syndrome (TIS) and in scoliosis, including adolescent idiopathic scoliosis (AIS) and early-onset scoliosis (EOS), can lead to better surgical plans. Healthy subjects can be used to characterize the normal architecture and motion of the rib cage and associated organs, against which a patient's deformed anatomy can then be modified to match. Dynamic magnetic resonance imaging (dMRI) is a practical and preferred imaging modality for capturing dynamic images of healthy pediatric subjects. In this paper, we propose an auto-segmentation set-up for the lungs, kidneys, liver, spleen, and thoraco-abdominal skin in these dMRI images, which pose their own challenges, such as poor contrast, image non-standardness, and similarity in texture among gas, bone, and connective tissue at several inter-object interfaces. Methods The segmentation set-up has been implemented in two steps, recognition and delineation, using two deep learning (DL) architectures (denoted DL-R and DL-D) for the recognition and delineation steps, respectively. The encoder-decoder framework in DL-D utilizes features at four different resolution levels to counter the challenges involved in the segmentation. We evaluated the set-up on dMRI sagittal acquisitions of 189 (near-)normal subjects. The spatial resolution in all dMRI acquisitions is 1.46 mm within a sagittal slice and 6.00 mm between sagittal slices. We utilized images of 89 (10) subjects at end inspiration for training (validation). For testing, we experimented with three scenarios: utilizing (1) the images of the 90 (= 189 − 89 − 10) remaining subjects at end inspiration, (2) the images of the aforementioned 90 subjects at end expiration, and (3) the images of the aforesaid 99 (= 89 + 10) subjects, but at end expiration.
In some situations, we can take advantage of already available ground truth (GT) of a subject at a particular respiratory phase to automatically segment the object in the image of the same subject at a different respiratory phase, and then refine the segmentation to create the final GT. We anticipate that this process of creating GT would require minimal post hoc correction. In this spirit, we conducted separate experiments where we assume the ground truth of the test subjects is available at end expiration for scenario (1), and at end inspiration for scenarios (2) and (3). Results Among these three testing scenarios, DL-R achieves a best average location error (LE) of about 1 voxel for the lungs, kidneys, and spleen and 1.5 voxels for the liver and the thoraco-abdominal skin. The standard deviation (SD) of LE is about 1 to 2 voxels. For the delineation step, we achieve an average Dice coefficient (DC) of about 0.92 to 0.94 for the lungs, 0.82 for the kidneys, 0.90 for the liver, 0.81 for the spleen, and 0.93 for the thoraco-abdominal skin. The SD of DC is lower for the lungs, liver, and thoraco-abdominal skin, and slightly higher for the spleen and kidneys. Conclusions Motivated by applications in surgical planning for disorders such as TIS, AIS, and EOS, we have demonstrated an auto-segmentation system for thoraco-abdominal organs in dMRI acquisitions. The proposed set-up copes well with the challenges posed by low resolution, motion blur, inadequate contrast, and image intensity non-standardness. We are in the process of testing its effectiveness on dMRI data from TIS patients.
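The Dice coefficient (DC) figures above quantify volumetric overlap between an automatic and a reference segmentation (1.0 = perfect agreement). A minimal sketch of the metric itself, not tied to the paper's code:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks:
    DC = 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 4x4 masks: 3 overlapping voxels, 4 voxels each -> DC = 6/8 = 0.75.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[1, 1, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0],
               [1, 0, 0, 0]])
print(dice_coefficient(pred, gt))  # 0.75
```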
3.
Yi X, He Y, Gao S, Li M. A review of the application of deep learning in obesity: From early prediction aid to advanced management assistance. Diabetes Metab Syndr 2024; 18:103000. [PMID: 38604060 DOI: 10.1016/j.dsx.2024.103000]
Abstract
BACKGROUND AND AIMS Obesity is a chronic disease that can cause severe metabolic disorders. Machine learning (ML) techniques, especially deep learning (DL), have proven useful in obesity research. However, there is a dearth of systematic reviews of DL applications in obesity. This article aims to summarize the current trends in DL usage in obesity research. METHODS An extensive literature review was carried out across multiple databases, including PubMed, Embase, Web of Science, Scopus, and Medline, to collate relevant studies published from January 2018 to September 2023. The focus was on research detailing the application of DL in the context of obesity. We distilled critical insights pertaining to the learning models utilized, encompassing their development, principal results, and foundational methodologies. RESULTS Our analysis culminated in the synthesis of new knowledge regarding the application of DL in the context of obesity. Finally, 40 research articles were included. The final collection of studies can be divided into three categories: obesity prediction (n = 16); obesity management (n = 13); and body fat estimation (n = 11). CONCLUSIONS This is the first review to examine DL applications in obesity. It reveals DL's superiority over traditional ML methods in obesity prediction, showing promise for multi-omics research. DL also supports innovation in obesity management through diet, fitness, and environmental analyses. Additionally, DL improves body fat estimation, offering affordable and precise monitoring tools. The study is registered with PROSPERO (ID: CRD42023475159).
Affiliation(s)
- Xinghao Yi
- Department of Endocrinology, NHC Key Laboratory of Endocrinology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China
- Yangzhige He
- Department of Medical Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing 100730, China
- Shan Gao
- Department of Endocrinology, Xuan Wu Hospital, Capital Medical University, Beijing 100053, China
- Ming Li
- Department of Endocrinology, NHC Key Laboratory of Endocrinology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China.
4.
Talwar AA, Desai AA, McAuliffe PB, Broach RB, Hsu JY, Liu T, Udupa JK, Tong Y, Torigian DA, Fischer JP. Optimal computed tomography-based biomarkers for prediction of incisional hernia formation. Hernia 2024; 28:17-24. [PMID: 37676569 PMCID: PMC11235401 DOI: 10.1007/s10029-023-02835-7]
Abstract
PURPOSE Unstructured data are an untapped source for surgical prediction. Modern image analysis and machine learning (ML) can harness unstructured data in medical imaging. Incisional hernia (IH) is a pervasive surgical disease, well-suited for prediction using image analysis. Our objective was to identify optimal biomarkers (OBMs) from preoperative abdominopelvic computed tomography (CT) imaging which are most predictive of IH development. METHODS Two hundred and twelve rigorously matched colorectal surgery patients at our institution were included. Preoperative abdominopelvic CT scans were segmented to derive linear, volumetric, intensity-based, and textural features. These features were analyzed to find a small subset of OBMs, which are maximally predictive of IH. Three ML classifiers (Ensemble Boosting, Random Forest, SVM) trained on these OBMs were used for prediction of IH. RESULTS Altogether, 279 features were extracted from each CT scan. The most predictive OBMs found were: (1) abdominopelvic visceral adipose tissue (VAT) volume, normalized for height; (2) abdominopelvic skeletal muscle tissue volume, normalized for height; and (3) pelvic VAT volume to pelvic outer aspect of body wall skeletal musculature (OAM) volume ratio. Among ML prediction models, Ensemble Boosting produced the best performance with an AUC of 0.85, accuracy of 0.83, sensitivity of 0.86, and specificity of 0.81. CONCLUSION These OBMs suggest increased intra-abdominopelvic volume/pressure as the salient pathophysiologic driver and likely mechanism for IH formation. ML models using these OBMs are highly predictive for IH development. The next generation of surgical prediction will maximize the utility of unstructured data using advanced image analysis and ML.
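As a rough, hypothetical illustration of the final prediction step, the snippet below fits a plain logistic regression by gradient descent on synthetic stand-ins for the three OBMs. The study itself used Ensemble Boosting, Random Forest, and SVM classifiers on real CT-derived features; every value and name here is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three OBMs (illustrative, not study data):
# height-normalized VAT volume, height-normalized skeletal muscle volume,
# and pelvic VAT/OAM volume ratio.
n = 200
X = rng.normal(size=(n, 3))
# Hypothetical label: hernia risk driven mostly by the VAT-related features.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression fit by gradient descent -- a simple stand-in for
# the boosting / random-forest / SVM classifiers used in the study.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / n           # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = (pred == (y > 0.5)).mean()
print(round(float(accuracy), 2))
```

On this separable synthetic data, training accuracy lands well above chance, mirroring (in spirit only) the discriminative value the study reports for its OBMs.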
Affiliation(s)
- A A Talwar
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania Health System, 3400 Civic Center Boulevard, 14th floor South Tower, Philadelphia, PA, 19104, USA
- A A Desai
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania Health System, 3400 Civic Center Boulevard, 14th floor South Tower, Philadelphia, PA, 19104, USA
- P B McAuliffe
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania Health System, 3400 Civic Center Boulevard, 14th floor South Tower, Philadelphia, PA, 19104, USA
- R B Broach
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania Health System, 3400 Civic Center Boulevard, 14th floor South Tower, Philadelphia, PA, 19104, USA
- J Y Hsu
- Division of Biostatistics, Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA
- T Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- J K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Y Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- D A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- J P Fischer
- Division of Plastic Surgery, Department of Surgery, University of Pennsylvania Health System, 3400 Civic Center Boulevard, 14th floor South Tower, Philadelphia, PA, 19104, USA.
5.
Dai J, Liu T, Torigian DA, Tong Y, Han S, Nie P, Zhang J, Li R, Xie F, Udupa JK. GA-Net: A geographical attention neural network for the segmentation of body torso tissue composition. Med Image Anal 2024; 91:102987. [PMID: 37837691 PMCID: PMC10841506 DOI: 10.1016/j.media.2023.102987]
Abstract
PURPOSE Body composition analysis (BCA) of the body torso plays a vital role in the study of physical health and pathology and provides biomarkers that facilitate the diagnosis and treatment of many diseases, such as type 2 diabetes mellitus, cardiovascular disease, obstructive sleep apnea, and osteoarthritis. In this work, we propose a body composition tissue segmentation method that can automatically delineate the key tissues, including subcutaneous adipose tissue, skeleton, skeletal muscle tissue, and visceral adipose tissue, on positron emission tomography/computed tomography scans of the body torso. METHODS To provide appropriate and precise semantic and spatial information that is strongly related to body composition tissues for the deep neural network, we first introduce a new concept of the body area and integrate it into our proposed segmentation network called Geographical Attention Network (GA-Net). The body areas are defined following anatomical principles such that the whole body torso region is partitioned into three non-overlapping body areas. Each body composition tissue of interest is fully contained in exactly one specific minimal body area. Secondly, the proposed GA-Net has a novel dual-decoder schema that is composed of a tissue decoder and an area decoder. The tissue decoder segments the body composition tissues, while the area decoder segments the body areas as an auxiliary task. The features of body areas and body composition tissues are fused through a soft attention mechanism to gain geographical attention relevant to the body tissues. Thirdly, we propose a body composition tissue annotation approach that takes the body area labels as the region of interest, which significantly improves the reproducibility, precision, and efficiency of delineating body composition tissues. RESULTS Our evaluations on 50 low-dose unenhanced CT images indicate that GA-Net statistically significantly outperforms other architectures on the Dice metric.
GA-Net also shows improvements for the 95% Hausdorff Distance metric in most comparisons. Notably, GA-Net exhibits more sensitivity to subtle boundary information and produces more reliable and robust predictions for such structures, which are the most challenging parts to manually mend in practice, with potentially significant time-savings in the post hoc correction of these subtle boundary placement errors. Due to the prior knowledge provided from body areas, GA-Net achieves competitive performance with less training data. Our extension of the dual-decoder schema to TransUNet and 3D U-Net demonstrates that the new schema significantly improves the performance of these classical neural networks as well. Heatmaps obtained from attention gate layers further illustrate the geographical guidance function of body areas for identifying body tissues. CONCLUSIONS (i) Prior anatomic knowledge supplied in the form of appropriately designed anatomic container objects significantly improves the segmentation of bodily tissues. (ii) Of particular note are the improvements achieved in the delineation of subtle boundary features which otherwise would take much effort for manual correction. (iii) The method can be easily extended to existing networks to improve their accuracy for this application.
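The soft-attention fusion at the heart of the dual-decoder schema can be caricatured in a few lines: area-decoder features are squashed to a (0, 1) gate that suppresses tissue-decoder responses outside the designated body area. This is a conceptual sketch only, not the GA-Net implementation:

```python
import numpy as np

def soft_attention_fuse(tissue_feat, area_feat):
    """Fuse tissue-decoder features with area-decoder features via a soft
    attention gate: sigmoid(area) in (0, 1) scales the tissue response, so
    tissue predictions are steered toward their designated body area."""
    gate = 1.0 / (1.0 + np.exp(-area_feat))  # sigmoid attention map
    return tissue_feat * gate                 # element-wise gating

# Toy feature maps (4x4): the "area" responds strongly on the left half only.
tissue = np.ones((4, 4))
area = np.full((4, 4), -10.0)
area[:, :2] = 10.0
fused = soft_attention_fuse(tissue, area)
# Left-half responses survive (~1); right-half responses are suppressed (~0).
```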
Affiliation(s)
- Jian Dai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States of America.
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States of America.
- Shiwei Han
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Pengju Nie
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Jing Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Ran Li
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Fei Xie
- School of AOAIR, Xidian University, Xi'an 710071, Shaanxi, China.
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States of America.
6.
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has brought about a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world's population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Collapse
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
7.
Mai DVC, Drami I, Pring ET, Gould LE, Lung P, Popuri K, Chow V, Beg MF, Athanasiou T, Jenkins JT. A systematic review of automated segmentation of 3D computed-tomography scans for volumetric body composition analysis. J Cachexia Sarcopenia Muscle 2023; 14:1973-1986. [PMID: 37562946 PMCID: PMC10570079 DOI: 10.1002/jcsm.13310]
Abstract
Automated computed tomography (CT) scan segmentation (labelling of pixels according to tissue type) is now possible. This technique is being adapted to achieve three-dimensional (3D) segmentation of CT scans, as opposed to the single L3 slice alone. This systematic review evaluates the feasibility and accuracy of automated segmentation of 3D CT scans for volumetric body composition (BC) analysis, as well as current limitations and pitfalls that clinicians and researchers should be aware of. OVID Medline, Embase, and grey literature databases were searched up to October 2021. Original studies investigating automated segmentation of skeletal muscle and visceral and subcutaneous adipose tissue (AT) from CT were included. Seven of the 92 studies met inclusion criteria. Variation existed in the expertise and number of humans performing the ground-truth segmentations used to train algorithms. There was heterogeneity in the patient characteristics, pathology, and CT phases upon which segmentation algorithms were developed. Reporting of anatomical CT coverage varied, with confusing terminology. Six studies covered volumetric regional slabs rather than the whole body. One study stated the use of whole-body CT, but it was not clear whether this truly meant head-to-fingertip-to-toe. Two studies used conventional computer algorithms. The other five used deep learning (DL), an artificial intelligence technique in which algorithms are organized similarly to brain neuronal pathways. Six of the seven reported excellent segmentation performance (Dice similarity coefficients > 0.9 per tissue). Internal testing on unseen scans was performed for only four of the seven algorithms, whilst only three were tested externally. Trained DL algorithms achieved full CT segmentation in 12 to 75 s, versus 25 min for non-DL techniques. DL enables opportunistic, rapid, and automated volumetric BC analysis of CT performed for clinical indications.
However, most CT scans do not cover head-to-fingertip-to-toe; further research must validate the use of common CT regions to estimate true whole-body BC, with direct comparison to the single lumbar slice. Given the successes of DL, we expect progressive numbers of algorithms to materialize in addition to the seven discussed in this paper. Researchers and clinicians in the field of BC must therefore be aware of the pitfalls. High Dice similarity coefficients do not indicate the degree to which BC tissues may be under- or overestimated, nor do they inform on algorithm precision. Consensus is needed to define accuracy and precision standards for ground-truth labelling. Creation of a large international, multicentre common CT dataset with BC ground-truth labels from multiple experts could be a robust solution.
Affiliation(s)
- Dinh Van Chi Mai
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- Department of Surgery and Cancer, Imperial College London, UK
- Ioanna Drami
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- Department of Metabolism, Digestion and Reproduction, Imperial College London, UK
- Edward T. Pring
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- Department of Surgery and Cancer, Imperial College London, UK
- Laura E. Gould
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- School of Cancer Sciences, College of Medical, Veterinary & Life Sciences, University of Glasgow, Glasgow, UK
- Phillip Lung
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- Department of Surgery and Cancer, Imperial College London, UK
- Karteek Popuri
- Department of Computer Science, Memorial University of Newfoundland, St John's, Canada
- Vincent Chow
- School of Engineering Science, Simon Fraser University, Burnaby, Canada
- Mirza F. Beg
- School of Engineering Science, Simon Fraser University, Burnaby, Canada
- John T. Jenkins
- Department of Surgery, St Mark's Academic Institute, St Mark's Hospital, London, UK
- Department of Surgery and Cancer, Imperial College London, UK
8.
Udupa JK, Liu T, Jin C, Zhao L, Odhner D, Tong Y, Agrawal V, Pednekar G, Nag S, Kotia T, Goodman M, Wileyto EP, Mihailidis D, Lukens JN, Berman AT, Stambaugh J, Lim T, Chowdary R, Jalluri D, Jabbour SK, Kim S, Reyhan M, Robinson CG, Thorstad WL, Choi JI, Press R, Simone CB, Camaratta J, Owens S, Torigian DA. Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring. Med Phys 2022; 49:7118-7149. [PMID: 35833287 PMCID: PMC10087050 DOI: 10.1002/mp.15854]
Abstract
BACKGROUND Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS The HI system was tested on 26 organs in neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the 4 RT centers were utilized for testing on neck and thorax, respectively. 
In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers, utilizing an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via the Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency via the human time saved in contouring by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved overall 70% of the human time required for clinical contouring, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS The HI system behaves like an expert human in the robustness of its contouring but is vastly more efficient. It appears to draw on NI where image information alone does not suffice, first for correct localization of the object and then for precise delineation of its boundary.
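The five-module flow described above can be caricatured as plain function composition, with each stage consuming the previous stage's output. Every function name below is a hypothetical stub standing in for the corresponding module, not the authors' API:

```python
def trim_to_body_region(image):
    """(i) Body region recognition: trim the image to the target body region."""
    return {"trimmed": image}

def aar_recognize(stage):
    """(ii) NI-based AAR-R: a localized fuzzy model per object, without DL."""
    return {**stage, "fuzzy_models": ["fuzzy model per object"]}

def dl_recognize(stage):
    """(iii) DL-R: refine coarse recognition into per-object 2D bounding boxes."""
    return {**stage, "bounding_boxes": ["BB stack per object"]}

def model_morph(stage):
    """(iv) MM: deform each fuzzy model guided by the DL-R bounding boxes."""
    return {**stage, "containers": ["morphed model per object"]}

def dl_delineate(stage):
    """(v) DL-D: delineate each object within its container region."""
    return {**stage, "contours": ["contour per object"]}

def hi_segment(image):
    stage = trim_to_body_region(image)
    for module in (aar_recognize, dl_recognize, model_morph, dl_delineate):
        stage = module(stage)
    return stage

result = hi_segment("CT volume")
```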
Affiliation(s)
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tiange Liu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Chao Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gargi Pednekar
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Sanghita Nag
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Tarun Kotia
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- E. Paul Wileyto
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitris Mihailidis
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John Nicholas Lukens
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Abigail T. Berman
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joann Stambaugh
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tristan Lim
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rupa Chowdary
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dheeraj Jalluri
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Salma K. Jabbour
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Sung Kim
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Meral Reyhan
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Wade L. Thorstad
- Department of Radiation Oncology, Washington University, St. Louis, Missouri, USA
- Joe Camaratta
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Steve Owens
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
9
McAuliffe PB, Desai AA, Talwar AA, Broach RB, Hsu JY, Serletti JM, Liu T, Tong Y, Udupa JK, Torigian DA, Fischer JP. Preoperative Computed Tomography Morphological Features Indicative of Incisional Hernia Formation After Abdominal Surgery. Ann Surg 2022; 276:616-625. [PMID: 35837959 PMCID: PMC9484790 DOI: 10.1097/sla.0000000000005583] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE To investigate key morphometric features identifiable on routine preoperative computed tomography (CT) imaging that are indicative of incisional hernia (IH) formation following abdominal surgery. BACKGROUND IH is a pervasive surgical disease that impacts all surgical disciplines operating in the abdominopelvic region and affects 13% of patients undergoing abdominal surgery. Despite the significant costs and disability associated with IH, the pathophysiology of hernia remains incompletely understood. METHODS A cohort of patients (n=21,501) who underwent colorectal surgery was identified, and clinical data and demographics were extracted, with a primary outcome of IH. Two datasets of case-control matched pairs were created for feature measurement, classification, and testing. Morphometric linear and volumetric measurements were extracted as features from anonymized preoperative abdominopelvic CT scans. Multivariate Pearson testing was performed to assess correlations among features. Each feature's ability to discriminate between classes was evaluated using 2-sided paired t testing. A support vector machine was implemented to determine the predictive accuracy of the features individually and in combination. RESULTS Two hundred twelve patients were analyzed (106 matched pairs). Of 117 features measured, 21 were capable of discriminating between IH and non-IH patients. These features fall into three key pathophysiologic domains: 1) structural widening of the rectus complex, 2) increased visceral volume, and 3) atrophy of abdominopelvic skeletal muscle. Individual prediction accuracy ranged from 0.69 to 0.78 for the top 3 of the 117 features. CONCLUSIONS Three morphometric domains identifiable on routine preoperative CT imaging were associated with hernia: widening of the rectus complex, increased visceral volume, and body wall skeletal muscle atrophy. 
This work highlights an innovative pathophysiologic mechanism for IH formation hallmarked by increased intra-abdominal pressure and compromise of the rectus complex and abdominopelvic skeletal musculature.
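The feature-screening step described above, 2-sided paired t testing across case-control matched pairs, can be sketched as follows. The feature name (rectus-complex width) and the simulated measurements are hypothetical stand-ins for the study's 117 morphometric features:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_pairs = 106  # matched case-control pairs, as in the study

# Hypothetical feature: rectus-complex width (mm) for each matched pair.
control_width = rng.normal(85, 8, n_pairs)                # non-IH patients
hernia_width = control_width + rng.normal(8, 6, n_pairs)  # matched IH patients

# Two-sided paired t-test: does the feature discriminate IH from non-IH?
t_stat, p_value = stats.ttest_rel(hernia_width, control_width)
discriminative = p_value < 0.05  # feature would be kept for the SVM step
```

In the study's workflow each of the 117 features would be screened this way, with the discriminating ones then fed to the support vector machine individually and in combination.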
Affiliation(s)
- Phoebe B McAuliffe
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
- Abhishek A Desai
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
- Ankoor A Talwar
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
- Robyn B Broach
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
- Jesse Y Hsu
- Division of Biostatistics, Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA
- Joseph M Serletti
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA
- John P Fischer
- Division of Plastic Surgery, University of Pennsylvania, Philadelphia, PA
10
Bedrikovetski S, Seow W, Kroon HM, Traeger L, Moore JW, Sammour T. Artificial intelligence for body composition and sarcopenia evaluation on computed tomography: A systematic review and meta-analysis. Eur J Radiol 2022; 149:110218. [DOI: 10.1016/j.ejrad.2022.110218] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/30/2021] [Accepted: 02/10/2022] [Indexed: 12/13/2022]
11
Liu Z, Sun C, Wang H, Li Z, Gao Y, Lei W, Zhang S, Wang G, Zhang S. Automatic segmentation of organs-at-risks of nasopharynx cancer and lung cancer by cross-layer attention fusion network with TELD-Loss. Med Phys 2021; 48:6987-7002. [PMID: 34608652 DOI: 10.1002/mp.15260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 07/26/2021] [Accepted: 09/01/2021] [Indexed: 11/08/2022] Open
Abstract
PURPOSE Radiotherapy is one of the main treatments for nasopharyngeal cancer (NPC) and lung cancer. Accurate segmentation of organs at risk (OARs) in CT images is a key step in radiotherapy planning for NPC and lung cancer. However, OAR segmentation is hampered by highly imbalanced organ sizes, which often results in very poor segmentations for small and difficult-to-segment organs. In addition, the complex morphological changes and fuzzy boundaries of OARs also pose great challenges to the segmentation task. In this paper, we propose a cross-layer attention fusion network (CLAF-CNN) to solve the problem of accurately segmenting OARs. METHODS In CLAF-CNN, we integrate the spatial attention maps of adjacent spatial attention modules so that the network focuses more accurately on segmentation targets and captures more target-related features. In this way, the spatial attention modules in the network can be learned and optimized together. In addition, we introduce a new Top-K exponential logarithmic Dice loss (TELD-Loss) to address the class imbalance in OAR segmentation. TELD-Loss adds a Top-K optimization mechanism on top of Dice loss and exponential logarithmic loss, which makes the network pay more attention to small and difficult-to-segment organs and thereby enhances the overall performance of the segmentation model. RESULTS We validated our framework on the OAR segmentation datasets of head-and-neck and lung CT images in the StructSeg 2019 challenge. Experiments show that CLAF-CNN outperforms state-of-the-art attention-based segmentation methods on the OAR segmentation task, with average Dice coefficients of 79.65% for head-and-neck OARs and 88.39% for lung OARs. CONCLUSIONS This work provides a new network, CLAF-CNN, which combines a cross-layer spatial attention map fusion architecture with TELD-Loss for OAR segmentation. 
Results demonstrated that the proposed method can obtain accurate segmentation results for OARs and has the potential to improve the efficiency of radiotherapy planning for nasopharynx cancer and lung cancer.
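The loss described above can be illustrated with a minimal numpy sketch: per-organ exponential-logarithmic Dice terms combined with a Top-K selection so that the hardest (worst-segmented) organs dominate. The constants and exact weighting here are illustrative, not the paper's definition of TELD-Loss:

```python
import numpy as np

def soft_dice(pred, gt, eps=1e-6):
    """Soft Dice per organ; pred is a probability map, gt a binary mask."""
    inter = (pred * gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def teld_like_loss(preds, gts, k=2, gamma=0.3):
    """Mean of the k largest (-log Dice)^gamma terms over the organs."""
    terms = np.array([(-np.log(soft_dice(p, g))) ** gamma
                      for p, g in zip(preds, gts)])
    return np.sort(terms)[-k:].mean()  # keep only the k hardest organs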
Collapse
Affiliation(s)
- Zuhao Liu
- Glasgow College, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Chao Sun
- Department of Radiology, Peking University People's Hospital, Beijing, 100044, China
| | - Huan Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Zhiqi Li
- School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Yibo Gao
- Glasgow College, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Wenhui Lei
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Shichuan Zhang
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, University of Electronic Science and Technology of China, Chengdu, 610041, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.,SenseTime Research, Shanghai, 200233, China
| |
Collapse
|
12
|
Best TD, Roeland EJ, Horick NK, Van Seventer EE, El-Jawahri A, Troschel AS, Johnson PC, Kanter KN, Fish MG, Marquardt JP, Bridge CP, Temel JS, Corcoran RB, Nipp RD, Fintelmann FJ. Muscle Loss Is Associated with Overall Survival in Patients with Metastatic Colorectal Cancer Independent of Tumor Mutational Status and Weight Loss. Oncologist 2021; 26:e963-e970. [PMID: 33818860 PMCID: PMC8176987 DOI: 10.1002/onco.13774] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 01/28/2021] [Indexed: 12/12/2022] Open
Abstract
Background Survival in patients with metastatic colorectal cancer (mCRC) has been associated with tumor mutational status, muscle loss, and weight loss. We sought to explore the combined effects of these variables on overall survival. Materials and Methods We performed an observational cohort study, prospectively enrolling patients receiving chemotherapy for mCRC. We retrospectively assessed changes in muscle (using computed tomography) and weight, each dichotomized as >5% or ≤5% loss, at 3, 6, and 12 months after diagnosis of mCRC. We used regression models to assess relationships between tumor mutational status, muscle loss, weight loss, and overall survival. Additionally, we evaluated associations between muscle loss, weight loss, and tumor mutational status. Results We included 226 patients (mean age 59 ± 13 years, 53% male). Tumor mutational status included 44% wild type, 42% RAS‐mutant, and 14% BRAF‐mutant. Patients with >5% muscle loss at 3 and 12 months experienced worse survival controlling for mutational status and weight (3 months hazard ratio, 2.66; p < .001; 12 months hazard ratio, 2.10; p = .031). We found an association of >5% muscle loss with BRAF‐mutational status at 6 and 12 months. Weight loss was not associated with survival nor mutational status. Conclusion Increased muscle loss at 3 and 12 months may identify patients with mCRC at risk for decreased overall survival, independent of tumor mutational status. Specifically, >5% muscle loss identifies patients within each category of tumor mutational status with decreased overall survival in our sample. Our findings suggest that quantifying muscle loss on serial computed tomography scans may refine survival estimates in patients with mCRC. 
Implications for Practice In this study of 226 patients with metastatic colorectal cancer, it was found that losing >5% skeletal muscle at 3 and 12 months after the diagnosis of metastatic disease was associated with worse overall survival, independent of tumor mutational status and weight loss. Interestingly, results did not show a significant association between weight loss and overall survival. These findings suggest that muscle quantification on serial computed tomography may refine survival estimates in patients with metastatic colorectal cancer beyond mutational status. Cancer cachexia has traditionally been defined using weight loss; however, loss of skeletal muscle may be a more objective measure. This article reports the results of a retrospective study that assessed whether skeletal muscle loss is associated with overall survival in patients with metastatic colorectal cancer, independent of tumor mutational status and weight loss.
Collapse
Affiliation(s)
- Till Dominik Best
- Department of Radiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany.,Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts
| | - Eric J Roeland
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Nora K Horick
- Department of Statistics, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts, USA
| | - Emily E Van Seventer
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Areej El-Jawahri
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Amelie S Troschel
- Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts
| | - Patrick C Johnson
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Katie N Kanter
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Madeleine G Fish
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - J Peter Marquardt
- Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts.,School of Medicine, RWTH Aachen University, Aachen, Germany
| | | | - Jennifer S Temel
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Ryan B Corcoran
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Ryan D Nipp
- Department of Medicine, Division of Hematology & Oncology, Massachusetts General Hospital Cancer Center & Harvard Medical School, Boston, Massachusetts, USA
| | - Florian J Fintelmann
- Department of Radiology, Division of Thoracic Imaging and Intervention, Massachusetts General Hospital & Harvard Medical School, Boston, Massachusetts
| |
Collapse
|