1. Tong Y, Udupa JK, McDonough JM, Wu C, Sun C, Xie L, Lott C, Clark A, Mayer OH, Anari JB, Torigian DA, Cahill PJ. Assessment of Regional Functional Effects of Surgical Treatment in Thoracic Insufficiency Syndrome via Dynamic Magnetic Resonance Imaging. J Bone Joint Surg Am 2023; 105:53-62. [PMID: 36598475] [DOI: 10.2106/jbjs.22.00324]
Abstract
BACKGROUND Quantitative regional assessment of thoracic function would enable clinicians to better understand the regional effects of therapy and the degree of deviation from normality in patients with thoracic insufficiency syndrome (TIS). The purpose of this study was to determine the regional functional effects of surgical treatment in TIS via quantitative dynamic magnetic resonance imaging (MRI) in comparison with healthy children.

METHODS Volumetric parameters were derived via 129 dynamic MRI scans from 51 normal children (November 2017 to March 2019) and 39 patients with TIS (preoperatively and postoperatively, July 2009 to May 2018) for the left and right lungs, the left and right hemi-diaphragms, and the left and right hemi-chest walls during tidal breathing. Paired t testing was performed to compare the parameters from patients with TIS preoperatively and postoperatively. Mahalanobis distances between parameters of patients with TIS and age-matched normal children were assessed to evaluate the closeness of patient lung function to normality. Linear regression functions were utilized to estimate volume deviations of patients with TIS from normality, taking into account the growth of the subjects.

RESULTS The mean Mahalanobis distances for the right hemi-diaphragm tidal volume (RDtv) were -1.32 ± 1.04 preoperatively and -0.05 ± 1.11 postoperatively (p = 0.001). Similarly, the mean Mahalanobis distances for the right lung tidal volume (RLtv) were -1.12 ± 1.04 preoperatively and -0.10 ± 1.26 postoperatively (p = 0.01). The mean Mahalanobis distances for the ratio of bilateral hemi-diaphragm tidal volume to bilateral lung tidal volume (BDtv/BLtv) were -1.68 ± 1.21 preoperatively and -0.04 ± 1.10 postoperatively (p = 0.003). Mahalanobis distances decreased after treatment, suggesting reduced deviations from normality. Regression results showed that all volumes and tidal volumes significantly increased after treatment (p < 0.001), and the tidal volume increases were significantly greater than those expected from normal growth for RDtv, RLtv, BDtv, and BLtv (p < 0.05).

CONCLUSIONS Postoperative tidal volumes of the bilateral lungs and bilateral hemi-diaphragms of patients with TIS came closer to those of normal children, indicating positive treatment effects from the surgical procedure. Quantitative dynamic MRI facilitates the assessment of regional effects of a surgical procedure to treat TIS.

LEVEL OF EVIDENCE Diagnostic Level II. See Instructions for Authors for a complete description of levels of evidence.
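The closeness-to-normality analysis rests on Mahalanobis distances between a patient's volumetric parameters and the distribution of the same parameters in age-matched normal children. The sketch below illustrates that computation on synthetic numbers; the parameter names and cohort values are hypothetical, and the signed per-parameter variant mirrors the negative values the abstract reports.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Hypothetical tidal-volume parameters (e.g., RDtv, RLtv, BDtv/BLtv) for
# six age-matched normal children: rows = subjects, columns = parameters.
normals = np.array([
    [310.0, 430.0, 0.72],
    [295.0, 400.0, 0.66],
    [330.0, 445.0, 0.74],
    [300.0, 410.0, 0.69],
    [320.0, 455.0, 0.71],
    [305.0, 425.0, 0.68],
])
patient = np.array([250.0, 360.0, 0.55])  # one TIS patient, same parameters

mu = normals.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normals, rowvar=False))

# Multivariate Mahalanobis distance of the patient from the normal cohort.
d = mahalanobis(patient, mu, cov_inv)

# Signed per-parameter standardized distances (negative = below the normal
# mean), matching the sign convention reported per parameter in the abstract.
signed = (patient - mu) / normals.std(axis=0, ddof=1)
print(d, signed)
```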
Affiliation(s)
- Yubing Tong
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Jayaram K Udupa
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Joseph M McDonough
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Caiyun Wu
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Changjian Sun
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Lipeng Xie
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Carina Lott
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Abigail Clark
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Oscar H Mayer
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Jason B Anari
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Drew A Torigian
- Department of Radiology, Medical Image Processing Group, University of Pennsylvania, Philadelphia, Pennsylvania
- Patrick J Cahill
- The Wyss/Campbell Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
2. Jin C, Udupa JK, Zhao L, Tong Y, Odhner D, Pednekar G, Nag S, Lewis S, Poole N, Mannikeri S, Govindasamy S, Singh A, Camaratta J, Owens S, Torigian DA. Object recognition in medical images via anatomy-guided deep learning. Med Image Anal 2022; 81:102527. [DOI: 10.1016/j.media.2022.102527]
3. Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. [PMID: 34668207] [PMCID: PMC8678400] [DOI: 10.1002/mp.15308]
Abstract
PURPOSE In multi-atlas segmentation (MAS), an atlas set large enough to cover the full spectrum of population-wide variation in the target object benefits segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden of the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology. The underlying idea is that different regions of the target object in a novel image can be recognized by different atlases with the best regional similarity, so that an effective atlas need be neither globally similar to the target subject nor similar to the target object as a whole.

METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. To limit computational complexity, we utilize an all-to-template strategy that aligns all images to a common space defined by a root image, chosen via a minimum spanning tree (MST) strategy over a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and subimage matching is conducted with a nonlocal search to further increase the accuracy of boundary matching. Delineation is based on a U-net-style deep learning network, where the original grayscale image and the fuzzy map from refined recognition compose a two-channel input, and the output is a segmentation map of the target object.

RESULTS Experiments were conducted on computed tomography (CT) images of varying quality in two body regions - head and neck (H&N) and thorax - from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax.

CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as the immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA: segmentation accuracy increases with precision atlases and gradually refined object matching.
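At its core, the rough-recognition stage reduces to patchwise voting: each region of the test image is matched against every atlas, and the atlases that win most often become the precision atlases. The sketch below illustrates that idea with a simple same-location SSD patch match; the paper's actual subimage matching and nonlocal search are considerably richer than this.

```python
import numpy as np
from collections import Counter

def best_atlas_per_patch(test_img, atlases, patch=16, stride=16):
    """Each test-image patch votes for the atlas whose co-located patch
    matches it best (lowest SSD); returns vote counts per atlas index."""
    votes = Counter()
    H, W = test_img.shape
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = test_img[y:y + patch, x:x + patch]
            scores = [np.sum((a[y:y + patch, x:x + patch] - p) ** 2)
                      for a in atlases]
            votes[int(np.argmin(scores))] += 1
    return votes

rng = np.random.default_rng(0)
atlases = [rng.normal(size=(64, 64)) for _ in range(10)]
test = atlases[3] + 0.1 * rng.normal(size=(64, 64))  # resembles atlas 3

votes = best_atlas_per_patch(test, atlases)
precision_atlases = [idx for idx, _ in votes.most_common(3)]
print(precision_atlases)  # atlas 3 should dominate the vote
```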
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
4. Abdel-Basset M, Chang V, Hawash H, Chakrabortty RK, Ryan M. FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection. Knowl Based Syst 2021; 212:106647. [PMID: 33519100] [PMCID: PMC7836902] [DOI: 10.1016/j.knosys.2020.106647]
Abstract
The newly discovered coronavirus (COVID-19) pneumonia poses major challenges to research in terms of diagnosis and disease quantification. Deep-learning (DL) techniques allow extremely precise image segmentation, yet they require huge volumes of manually labeled data for supervised training. Few-Shot Learning (FSL) paradigms tackle this issue by learning a novel category from a small number of annotated instances. We present an innovative semi-supervised few-shot segmentation (FSS) approach for efficient segmentation of 2019-nCov infection (FSS-2019-nCov) from only a small number of annotated lung CT scans. The key challenge of this study is to provide accurate segmentation of COVID-19 infection from a limited number of annotated instances. For that purpose, we propose a novel dual-path deep-learning architecture for FSS. Each path contains an encoder-decoder (E-D) architecture to extract high-level information while maintaining the channel information of COVID-19 CT slices. The E-D architecture primarily consists of three main modules: a feature encoder module, a context enrichment (CE) module, and a feature decoder module. We utilize the pre-trained ResNet34 as an encoder backbone for feature extraction. The CE module comprises a newly proposed Smoothed Atrous Convolution (SAC) block and a Multi-scale Pyramid Pooling (MPP) block. The conditioner path takes pairs of CT images and their labels as input and produces a relevant knowledge representation that is transferred to the segmentation path to segment new images. To enable effective collaboration between both paths, we propose an adaptive recombination and recalibration (RR) module that permits intensive knowledge exchange between paths with only a trivial increase in computational complexity. The model is extended to multi-class labeling for various types of lung infections. This contribution overcomes the limitation posed by the lack of large numbers of COVID-19 CT scans. It also provides a general framework for lung disease diagnosis in limited-data situations.
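As a reading aid for the dual-path design, here is a deliberately tiny PyTorch skeleton: a conditioner path digests a support image and its label into a task code that modulates a segmentation path applied to the query image. It omits the paper's ResNet34 backbone, SAC and MPP blocks, and RR module; the channel-wise modulation is a stand-in assumption, not the paper's mechanism.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self, in_ch, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DualPathFSS(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.conditioner = TinyEncoder(in_ch=2, ch=ch)  # support image + mask
        self.segmentor = TinyEncoder(in_ch=1, ch=ch)    # query image
        self.decoder = nn.Conv2d(ch, 1, 1)              # per-pixel logit
    def forward(self, support_img, support_mask, query_img):
        # Task code: pooled features of the (image, mask) support pair.
        cond = self.conditioner(torch.cat([support_img, support_mask], dim=1))
        task = cond.mean(dim=(2, 3), keepdim=True)      # (B, ch, 1, 1)
        # Channel-wise modulation stands in for the paper's RR module.
        feat = self.segmentor(query_img) * torch.sigmoid(task)
        return self.decoder(feat)                       # (B, 1, H, W) logits

model = DualPathFSS()
s_img, s_mask = torch.randn(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
q_img = torch.randn(2, 1, 64, 64)
print(model(s_img, s_mask, q_img).shape)  # torch.Size([2, 1, 64, 64])
```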
Affiliation(s)
- Mohamed Abdel-Basset
- Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
- Victor Chang
- School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough, UK
- Hossam Hawash
- Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
- Ripon K Chakrabortty
- Capability Systems Centre, School of Engineering and IT, UNSW Canberra, Australia
- Michael Ryan
- Capability Systems Centre, School of Engineering and IT, UNSW Canberra, Australia
5. Kiser KJ, Ahmed S, Stieb S, Mohamed ASR, Elhalawani H, Park PYS, Doyle NS, Wang BJ, Barman A, Li Z, Zheng WJ, Fuller CD, Giancardo L. PleThora: Pleural effusion and thoracic cavity segmentations in diseased lungs for benchmarking chest CT processing pipelines. Med Phys 2020; 47:5941-5952. [PMID: 32749075] [PMCID: PMC7722027] [DOI: 10.1002/mp.14424]
Abstract
This manuscript describes a dataset of thoracic cavity segmentations and discrete pleural effusion segmentations we have annotated on 402 computed tomography (CT) scans acquired from patients with non-small cell lung cancer. The segmentation of these anatomic regions precedes fundamental tasks in image analysis pipelines such as lung structure segmentation, lesion detection, and radiomics feature extraction. Bilateral thoracic cavity volumes and pleural effusion volumes were manually segmented on CT scans acquired from The Cancer Imaging Archive "NSCLC Radiomics" data collection. Four hundred and two thoracic segmentations were first generated automatically by a U-Net based algorithm trained on chest CTs without cancer, manually corrected by a medical student to include the complete thoracic cavity (normal, pathologic, and atelectatic lung parenchyma, lung hilum, pleural effusion, fibrosis, nodules, tumor, and other anatomic anomalies), and revised by a radiation oncologist or a radiologist. Seventy-eight pleural effusions were manually segmented by a medical student and revised by a radiologist or radiation oncologist. Interobserver agreement between the radiation oncologist and radiologist corrections was acceptable. All expert-vetted segmentations are publicly available in NIfTI format through The Cancer Imaging Archive at https://doi.org/10.7937/tcia.2020.6c7y-gq39. Tabular data detailing clinical and technical metadata linked to segmentation cases are also available. Thoracic cavity segmentations will be valuable for developing image analysis pipelines on pathologic lungs - where current automated algorithms struggle most. In conjunction with gross tumor volume segmentations already available from "NSCLC Radiomics," pleural effusion segmentations may be valuable for investigating radiomics profile differences between effusion and primary tumor or training algorithms to discriminate between them.
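Since the segmentations are distributed as NIfTI volumes, a downstream pipeline can consume them with standard tooling. A minimal sketch, assuming nibabel and a hypothetical file name (actual names follow the TCIA collection's conventions):

```python
import nibabel as nib
import numpy as np

# Hypothetical path; real files come from the TCIA download above.
seg = nib.load("LUNG1-001_thoracic_cavity.nii.gz")
mask = np.asarray(seg.dataobj) > 0

# Physical voxel volume from the NIfTI header (mm^3 -> mL).
voxel_mm3 = float(np.prod(seg.header.get_zooms()[:3]))
volume_ml = mask.sum() * voxel_mm3 / 1000.0
print(f"thoracic cavity volume: {volume_ml:.1f} mL")
```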
Affiliation(s)
- Kendall J. Kiser
- John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sara Ahmed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sonja Stieb
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Abdallah S. R. Mohamed
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Hesham Elhalawani
- Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA
- Peter Y. S. Park
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Nathan S. Doyle
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Brandon J. Wang
- Department of Diagnostic and Interventional Imaging, John P. and Kathrine G. McGovern Medical School, Houston, TX, USA
- Arko Barman
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Zhao Li
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- W. Jim Zheng
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Clifton D. Fuller
- Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- MD Anderson Cancer Center-UTHealth Graduate School of Biomedical Sciences, Houston, TX, USA
- Luca Giancardo
- Center for Precision Health, UTHealth School of Biomedical Informatics, Houston, TX, USA
- Department of Radiation Oncology, Cleveland Clinic Taussig Cancer Center, Cleveland, OH, USA
6. Bernstein P, Metzler J, Weinzierl M, Seifert C, Kisel W, Wacker M. Radiographic scoliosis angle estimation: spline-based measurement reveals superior reliability compared to traditional COBB method. Eur Spine J 2020; 30:676-685. [PMID: 32856177] [DOI: 10.1007/s00586-020-06577-3]
Abstract
INTRODUCTION AND OBJECTIVE Although it is the standard for scoliosis curve size estimation, Cobb angle measurement is well known to be inaccurate owing to high interobserver variance in end vertebra selection and endplate contour delineation. We propose a stepwise improvement by using a spline constructed from vertebra centroids to resemble spinal curve characteristics more closely. To enhance precision even further, a neural net was trained to detect the centroids automatically.

MATERIALS & METHODS Vertebra centroids in AP spinal X-ray images of varying quality from 551 scoliosis patients were manually labeled by four investigators. With these inputs, splines were generated and the computed curve sizes were compared to the manually measured Cobb angles and to the curve estimation obtained from the neural net.

RESULTS Splines achieved a higher interobserver correlation of 0.92-0.95 compared to manual Cobb measurements (0.83-0.92) and showed 1.5-2 times less variance, depending on the anatomic region. This translates into an average of 1° of interobserver measurement deviation for spline-based curve estimation, compared to 3°-8° for Cobb measurements. The neural net was even more precise and achieved mean deviations below 0.5°.

CONCLUSION Our data suggest an advantage of spline-based automated measuring systems, so further investigations are warranted before manual Cobb measurements can be abandoned.
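To make the spline construction concrete, the sketch below fits a parametric cubic spline through vertebra centroids and reads a curve angle off the tangent directions. The angle definition used here (the maximum spread of tangent angles, analogous to Cobb's most-tilted endplates) is an assumption; the paper's exact construction may differ in detail, and the centroid coordinates are synthetic.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic vertebra centroids, cranial to caudal, in image coordinates (mm).
centroids = np.array([
    [100, 0], [104, 30], [110, 60], [114, 90], [112, 120],
    [106, 150], [100, 180], [96, 210], [98, 240], [104, 270],
])

t = np.arange(len(centroids))          # parameter: vertebra index
sx = CubicSpline(t, centroids[:, 0])   # lateral coordinate along the spine
sy = CubicSpline(t, centroids[:, 1])   # cranio-caudal coordinate

ts = np.linspace(t[0], t[-1], 500)
# Tangent direction of the spinal curve, measured from the cranio-caudal axis.
theta = np.degrees(np.arctan2(sx(ts, 1), sy(ts, 1)))

angle = theta.max() - theta.min()      # spline-based curve angle
print(f"estimated curve angle: {angle:.1f} degrees")
```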
Affiliation(s)
- Peter Bernstein
- Department for Orthopaedics and Traumatology, University Comprehensive Spine Center, University Hospital Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
- Johannes Metzler
- Faculty of Informatics/Mathematics, HTW Dresden, Friedrich-List-Platz 1, 01069, Dresden, Germany
- Marlene Weinzierl
- Department for Orthopaedics and Traumatology, University Comprehensive Spine Center, University Hospital Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
- Carl Seifert
- Department for Orthopaedics and Traumatology, University Comprehensive Spine Center, University Hospital Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
- Wadim Kisel
- Department for Orthopaedics and Traumatology, University Comprehensive Spine Center, University Hospital Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
- Markus Wacker
- Faculty of Informatics/Mathematics, HTW Dresden, Friedrich-List-Platz 1, 01069, Dresden, Germany
7. Agrawal V, Udupa J, Tong Y, Torigian D. BRR-Net: A tandem architectural CNN-RNN for automatic body region localization in CT images. Med Phys 2020; 47:5020-5031. [PMID: 32761899] [DOI: 10.1002/mp.14439]
Abstract
PURPOSE Automatic identification of consistently defined body regions in medical images is vital in many applications. In this paper, we describe a method to automatically demarcate the superior and inferior boundaries of the neck, thorax, abdomen, and pelvis body regions in computed tomography (CT) images.

METHODS For any three-dimensional (3D) CT image I, following precise anatomic definitions, we denote the superior and inferior axial boundary slices of the neck, thorax, abdomen, and pelvis body regions by NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), PS(I), and PI(I), respectively. Of these, by definition, AI(I) = PS(I), and so the problem reduces to demarcating seven body region boundaries. Our method consists of a two-step approach. In the first step, a convolutional neural network (CNN) is trained to classify each axial slice in I into one of nine categories: the seven body region boundaries, plus legs (defined as all axial slices inferior to PI(I)), and the none-of-the-above category. This CNN uses a multichannel approach to exploit interslice contrast, providing the network with additional visual context at the body region boundaries. In the second step, to improve the predictions for body region boundaries that are very subtle and exhibit low contrast, a recurrent neural network (RNN) is trained on features extracted by the CNN, limited to a flexible window around the CNN's predictions.

RESULTS The method is evaluated on low-dose CT images from 442 patient scans, divided into training and testing sets with a ratio of 70:30. Using only the CNN, the overall absolute localization error for NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), and PI(I), expressed in number of slices (mean ± SD), is 0.61 ± 0.58, 1.05 ± 1.13, 0.31 ± 0.46, 1.85 ± 1.96, 0.57 ± 2.44, 3.42 ± 3.16, and 0.50 ± 0.50, respectively. Using the RNN to refine the CNN's predictions for select classes improved the accuracy of TI(I) and AI(I) to 1.35 ± 1.71 and 2.83 ± 2.75, respectively. This model outperforms the results achieved in our previous work by 2.4, 1.7, 3.1, 1.1, and 2 slices, respectively, for the TS(I), TI(I), AS(I), AI(I) = PS(I), and PI(I) classes, with statistical significance. The model trained on low-dose CT images was also tested on diagnostic CT images for the NS(I), NI(I), and TS(I) classes; the resulting errors were 1.48 ± 1.33, 2.56 ± 2.05, and 0.58 ± 0.71, respectively.

CONCLUSIONS Standardized body region definitions are a prerequisite for effective implementation of quantitative radiology, but the literature is severely lacking in the precise identification of body regions. The method presented in this paper significantly outperforms earlier works by a large margin, and the deviations of our results from ground truth are comparable to variations observed in manual labeling by experts. The solution presented in this work is critical to the adoption of standardized body regions and clears the path for development of applications requiring accurate demarcations of body regions. The work is indispensable for automatic anatomy recognition, delineation, and contouring for radiation therapy planning, as it not only automates an essential part of the process but also removes the dependency on experts for accurately demarcating body regions in a study.
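The multichannel input mentioned in the methods can be pictured as stacking each axial slice with its neighbors so the classifier sees interslice contrast at the boundaries. A minimal sketch (the neighbor count and preprocessing are assumptions, not the paper's exact recipe):

```python
import numpy as np

def slices_to_multichannel(volume, n_neighbors=1):
    """volume: (S, H, W) CT; returns (S, 2*n_neighbors+1, H, W), one
    multichannel classification input per axial slice, with edge slices
    clamped rather than zero-padded."""
    S = volume.shape[0]
    stacked = []
    for i in range(S):
        chans = [volume[np.clip(i + k, 0, S - 1)]
                 for k in range(-n_neighbors, n_neighbors + 1)]
        stacked.append(np.stack(chans, axis=0))  # neighbors as channels
    return np.stack(stacked, axis=0)

ct = np.random.randn(40, 128, 128).astype(np.float32)
x = slices_to_multichannel(ct)
print(x.shape)  # (40, 3, 128, 128): one 9-way classification input per slice
```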
Affiliation(s)
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Jayaram Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Drew Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
8. Xu G, Udupa JK, Tong Y, Odhner D, Cao H, Torigian DA. AAR-LN-DQ: Automatic anatomy recognition based disease quantification in thoracic lymph node zones via FDG PET/CT images without nodal delineation. Med Phys 2020; 47:3467-3484. [PMID: 32418221] [DOI: 10.1002/mp.14240]
Abstract
PURPOSE The derivation of quantitative information from medical images in a practical manner is essential for quantitative radiology (QR) to become a clinical reality, but it still faces a major hurdle because of image segmentation challenges. With the goal of performing disease quantification in lymph node (LN) stations without explicit nodal delineation, this paper presents a novel approach for disease quantification (DQ) by automatic recognition of LN zones and detection of malignant lymph nodes within thoracic LN zones via positron emission tomography/computed tomography (PET/CT) images. Named AAR-LN-DQ, this approach decouples DQ methods from explicit nodal segmentation via an LN recognition strategy involving a novel globular filter and a deep neural network called SegNet.

METHODS The methodology consists of four main steps: (a) Building lymph node zone models by the automatic anatomy recognition (AAR) method, incorporating novel aspects of model building that relate to finding an optimal hierarchy for organs and lymph node zones in the thorax. (b) Recognizing lymph node zones with the built lymph node models. (c) Detecting pathologic LNs in the recognized zones by using a novel globular filter (g-filter) and a multi-level support vector machine (SVM) classifier. Here, we make use of the generally globular shape of LNs to first localize them and then use a multi-level SVM classifier to identify pathologic LNs from among those localized by the g-filter. Alternatively, we designed a deep neural network called SegNet that is trained to directly recognize pathologic nodes within AAR-localized LN zones. (d) Quantifying disease based on the identified pathologic LNs within the localized zones. A fuzzy disease map is devised to express the degree of disease burden at each voxel within the identified LNs, simultaneously handling several uncertain phenomena such as PET partial volume effects, uncertainty in LN localization, and gradation of disease content at the voxel level. We focused on the task of disease quantification in patients with lymphoma based on PET/CT acquisitions and devised a method of evaluation. Model building was carried out using 42 near-normal patient datasets via contrast-enhanced CT examinations of the thorax. PET/CT datasets from an additional 63 lymphoma patients were utilized to evaluate the AAR-LN-DQ methodology. We assess the accuracy of the three main processes involved in AAR-LN-DQ via fivefold cross validation: lymph node zone recognition, abnormal lymph node localization, and disease quantification.

RESULTS The recognition and scale errors for LN zones were 12.28 mm ± 1.99 and 0.94 ± 0.02, respectively, on normal CT datasets. On abnormal PET/CT datasets, the sensitivity and specificity of pathologic LN recognition were 84.1% ± 0.115 and 98.5% ± 0.003, respectively, for the g-filter-SVM strategy, and 91.3% ± 0.110 and 96.1% ± 0.016, respectively, for the SegNet method. Finally, the mean absolute percent errors for disease quantification of the recognized abnormal LNs were 8% ± 0.09 and 14% ± 0.10 for the g-filter-SVM method and the best SegNet strategy, respectively.

CONCLUSIONS Accurate disease quantification on PET/CT images without explicit delineation of lymph nodes is feasible following lymph node zone and pathologic LN localization. LN zone recognition by AAR is very useful, as this step covers most (95.8%) of the abnormal LNs and drastically reduces the regions to search for abnormal LNs. It also significantly improves the specificity of deep networks such as SegNet. It is possible to utilize general shape information about LNs, such as their globular nature, via the g-filter and to arrive at high recognition rates for abnormal LNs in conjunction with a traditional classifier such as SVM. Finally, the disease map concept is effective for estimating disease burden, irrespective of how the LNs are identified, and handles various uncertainties without having to address them explicitly one by one.
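The g-filter itself is the paper's own construct, but its premise, localizing roughly globular structures before classifying them, is the job of classic blob detectors. As a stand-in sketch (not the paper's filter), a Laplacian-of-Gaussian detector applied to a synthetic image with two blob-like "nodes":

```python
import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(1)
img = rng.normal(0, 0.05, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
# Two synthetic globular structures of different sizes.
for cy, cx, r in [(40, 40, 6), (90, 70, 9)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * r ** 2))

# Each row: (y, x, sigma); blob radius is roughly sigma * sqrt(2) in 2D.
blobs = blob_log(img, min_sigma=3, max_sigma=12, threshold=0.1)
print(blobs)  # candidate locations a classifier would then vet
```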
Affiliation(s)
- Guoping Xu
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Hanqiang Cao
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 602 Goddard Building, 3710 Hamilton Walk, Philadelphia, PA, 19104, USA
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, 19104, USA
9. Li S, Xu P, Li B, Chen L, Zhou Z, Hao H, Duan Y, Folkert M, Ma J, Huang S, Jiang S, Wang J. Predicting lung nodule malignancies by combining deep convolutional neural network and handcrafted features. Phys Med Biol 2019; 64:175012. [PMID: 31307017] [PMCID: PMC7106773] [DOI: 10.1088/1361-6560/ab326a]
Abstract
To predict lung nodule malignancy with high sensitivity and specificity for low-dose CT (LDCT) lung cancer screening, we propose a fusion algorithm that combines handcrafted features (HF) with the features learned at the output layer of a 3D deep convolutional neural network (CNN). First, we extracted twenty-nine HF, including nine intensity features, eight geometric features, and twelve texture features based on the grey-level co-occurrence matrix (GLCM). We then trained 3D CNNs modified from three 2D CNN architectures (AlexNet, VGG-16 Net, and Multi-crop Net) to extract the CNN features learned at the output layer. For each 3D CNN, the CNN features combined with the 29 HF were used as the input for a support vector machine (SVM) coupled with the sequential forward feature selection (SFS) method to select the optimal feature subset and construct the classifiers. The fusion algorithm takes full advantage of the HF and the highest-level CNN features learned at the output layer. By incorporating the intrinsic CNN features, it overcomes the limitation that HF may not fully reflect the unique characteristics of a particular lesion. It also alleviates the CNNs' requirement for a large-scale annotated dataset, owing to the complementary information carried by the HF. The patient cohort includes 431 malignant nodules and 795 benign nodules extracted from the LIDC/IDRI database. For each investigated CNN architecture, the proposed fusion algorithm achieved the highest AUC, accuracy, sensitivity, and specificity scores among all competitive classification models.
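Operationally, the fusion step reduces to concatenating the two feature families, selecting a subset by sequential forward selection, and training an SVM. A sketch on synthetic stand-in data (the feature dimensions, toy labels, and selection size are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
cnn_feats = rng.normal(size=(n, 10))    # CNN output-layer features (stand-in)
handcrafted = rng.normal(size=(n, 29))  # 9 intensity + 8 geometric + 12 GLCM
y = (cnn_feats[:, 0] + handcrafted[:, 3] > 0).astype(int)  # toy labels

X = np.hstack([cnn_feats, handcrafted])  # the fused feature vector

svm = SVC(kernel="rbf")
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=5)
model = make_pipeline(StandardScaler(), sfs, svm)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```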
Affiliation(s)
- Shulong Li
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
- Panpan Xu
- Longgang District People's Hospital, Shenzhen, 518172, China
- Bin Li
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
- Liyuan Chen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
- Zhiguo Zhou
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
- Hongxia Hao
- School of Computer Science and Technology, Xidian University, Xi'an, 710071, China
- Yingying Duan
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
- Michael Folkert
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
- Jianhua Ma
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, 510515, China
- Shiying Huang
- School of Traditional Chinese Medicine, Southern Medical University, Guangzhou, 510515, China
- Steve Jiang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
- Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, 75235, USA
10. Hatt M, Le Rest CC, Tixier F, Badic B, Schick U, Visvikis D. Radiomics: Data Are Also Images. J Nucl Med 2019; 60:38S-44S. [DOI: 10.2967/jnumed.118.220582]
11. Liu T, Udupa JK, Miao Q, Tong Y, Torigian DA. Quantification of body-torso-wide tissue composition on low-dose CT images via automatic anatomy recognition. Med Phys 2019; 46:1272-1285. [PMID: 30614020] [DOI: 10.1002/mp.13373]
Abstract
PURPOSE Quantification of body composition plays an important role in many clinical and research applications. Radiologic imaging techniques such as dual-energy X-ray absorptiometry (DXA), magnetic resonance imaging (MRI), and computed tomography (CT) make accurate quantification of body composition possible. However, most current imaging-based methods need human interaction to quantify multiple tissues, and when dealing with whole-body images of many subjects, interactive methods become impractical. This paper presents an automated, efficient, accurate, and practical body composition quantification method for low-dose CT images.

METHODS Our method, named automatic anatomy recognition body composition analysis (AAR-BCA), aims to quantify four tissue components in the body torso (BT) - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), bone tissue, and muscle tissue - from the CT images of given whole-body positron emission tomography/computed tomography (PET/CT) acquisitions. AAR-BCA consists of three key steps: modeling BT with its ensemble of key objects from a population of patient images; recognition, or localization, of these objects in a given patient image I; and delineation and quantification of the four tissue components in I guided by the recognized objects. In the first step, from a given set of patient images and the associated delineated objects, a fuzzy anatomy model of the key object ensemble, including anatomic organs, tissue regions, and tissue interfaces, is built, with the objects organized in a hierarchical order. The second step involves recognizing, or roughly finding the location of, each object in any given whole-body image I of a patient, following the object hierarchy and guided by the built model. The third step makes use of this fuzzy localization information and the intensity distributions of the four tissue components, already learned and encoded in the model, to optimally delineate in a fuzzy manner and quantify these components. All parameters in our method are determined from training datasets.

RESULTS Thirty-eight low-dose CT images from different subjects were tested in a fivefold cross-validation strategy with a 23-15 train-test dataset division. For BT, over all objects, AAR-BCA achieves a false-positive volume fraction (FPVF) of 3.7% and a false-negative volume fraction (FNVF) of 3.8%. Notably, SAT achieves both an FPVF and an FNVF under 3%, and bone tissue achieves an FPVF and an FNVF both under 3.5%. The FNVF is higher for VAT (4.8%) and muscle (4.7%) than for the other objects. The level of accuracy for the four tissue components in individual body subregions mostly remains at the same level as for BT. The processing time required per patient image is under a minute.

CONCLUSIONS Motivated by applications in cancer and systemic diseases, our goal was a practical method for body composition quantification that is automated, accurate, and efficient, and that works on the BT in low-dose CT. The proposed AAR-BCA method can quantify four tissue components - SAT, VAT, bone tissue, and muscle tissue - in the body torso with under 5% overall error. All needed parameters can be automatically estimated from the training datasets.
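The FPVF and FNVF figures quoted above can be computed from binary masks once a delineation is in hand. A sketch under one common convention, normalizing both fractions by the true-object volume (the paper may normalize FPVF differently, e.g., by a reference region):

```python
import numpy as np

def fpvf_fnvf(pred, truth):
    """False-positive and false-negative volume fractions of a binary
    delineation, both normalized by the true-object volume."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    t = truth.sum()
    fpvf = np.logical_and(pred, ~truth).sum() / t  # extra volume claimed
    fnvf = np.logical_and(~pred, truth).sum() / t  # true volume missed
    return fpvf, fnvf

truth = np.zeros((50, 50), bool); truth[10:40, 10:40] = True
pred = np.zeros((50, 50), bool); pred[12:42, 12:42] = True  # shifted by 2
print(fpvf_fnvf(pred, truth))  # (0.1288..., 0.1288...)
```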
Affiliation(s)
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei, 066004, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Xidian University, Xi'an, Shaanxi, 710126, China
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Qiguang Miao
- Xidian University, Xi'an, Shaanxi, 710126, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA