1.
Udupa JK, Liu T, Jin C, Zhao L, Odhner D, Tong Y, Agrawal V, Pednekar G, Nag S, Kotia T, Goodman M, Wileyto EP, Mihailidis D, Lukens JN, Berman AT, Stambaugh J, Lim T, Chowdary R, Jalluri D, Jabbour SK, Kim S, Reyhan M, Robinson CG, Thorstad WL, Choi JI, Press R, Simone CB, Camaratta J, Owens S, Torigian DA. Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring. Med Phys 2022;49:7118-7149. [PMID: 35833287; PMCID: PMC10087050; DOI: 10.1002/mp.15854]
Abstract
BACKGROUND Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS The HI system was tested on 26 organs in neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from a separate, independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the four RT centers were utilized for testing on neck and thorax, respectively.
In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers by utilizing an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency by measuring the human contouring time saved by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved 70% of the human time required for clinical contouring, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS The HI system behaves with the robustness of an expert human in the contouring task, but vastly more efficiently. It appears to draw on NI where image information alone does not suffice, first for correct localization of the object and then for precise delineation of its boundary.
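The two accuracy measures used in this evaluation, Dice coefficient (DC) and Hausdorff distance (HD), can be sketched for binary masks as follows (an illustrative toy example in plain NumPy, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """DC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def hausdorff_distance(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, d) point sets."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D masks: two 4x4 squares offset diagonally by one voxel (9 voxels overlap).
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 2*9/(16+16) = 0.5625
hd = hausdorff_distance(np.argwhere(a).astype(float), np.argwhere(b).astype(float))
print(hd)  # sqrt(2): the corner voxel at (2,2) is farthest from the other mask
```

In a real evaluation the distances would be computed between object boundary surfaces and scaled by the voxel spacing to yield HD in millimeters, as reported above.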
Affiliation(s)
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tiange Liu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Chao Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gargi Pednekar
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Sanghita Nag
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Tarun Kotia
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- E. Paul Wileyto
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitris Mihailidis
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John Nicholas Lukens
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Abigail T. Berman
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joann Stambaugh
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tristan Lim
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rupa Chowdary
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dheeraj Jalluri
- Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Salma K. Jabbour
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Sung Kim
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Meral Reyhan
- Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Wade L. Thorstad
- Department of Radiation Oncology, Washington University, St. Louis, Missouri, USA
- Joe Camaratta
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Steve Owens
- Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
2.
Research progress of 18F labeled small molecule positron emission tomography (PET) imaging agents. Eur J Med Chem 2020;205:112629. [PMID: 32956956; DOI: 10.1016/j.ejmech.2020.112629]
Abstract
With the development of positron emission tomography (PET) technology, a variety of PET imaging agents labeled with the radionuclide 18F have been developed and widely used in the diagnosis and treatment of various clinical diseases in recent years. For example, they have shown great value in tumor detection, tumor treatment, and the evaluation of tumor therapy in a non-invasive, qualitative, and quantitative way. In this review, we highlight developments over the past five years in the chemical synthesis, structure and characterization, imaging characteristics, and potential applications of these 18F-labeled small-molecule PET imaging agents. The development and application of 18F-labeled small molecules will expand our knowledge of the function and distribution of disease-related molecular targets and shed light on the diagnosis and treatment of various diseases, including tumors.
3.
Agrawal V, Udupa J, Tong Y, Torigian D. BRR-Net: A tandem architectural CNN-RNN for automatic body region localization in CT images. Med Phys 2020;47:5020-5031. [PMID: 32761899; DOI: 10.1002/mp.14439]
Abstract
PURPOSE Automatic identification of consistently defined body regions in medical images is vital in many applications. In this paper, we describe a method to automatically demarcate the superior and inferior boundaries for neck, thorax, abdomen, and pelvis body regions in computed tomography (CT) images. METHODS For any three-dimensional (3D) CT image I, following precise anatomic definitions, we denote the superior and inferior axial boundary slices of the neck, thorax, abdomen, and pelvis body regions by NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), PS(I), and PI(I), respectively. Of these, by definition, AI(I) = PS(I), and so the problem reduces to demarcating seven body region boundaries. Our method consists of a two-step approach. In the first step, a convolutional neural network (CNN) is trained to classify each axial slice in I into one of nine categories: the seven body region boundaries, plus legs (defined as all axial slices inferior to PI(I)), and the none-of-the-above category. This CNN uses a multichannel approach to exploit the interslice contrast, providing the neural network with additional visual context at the body region boundaries. In the second step, to improve the predictions for body region boundaries that are very subtle and that exhibit low contrast, a recurrent neural network (RNN) is trained on features extracted by the CNN, limited to a flexible window about the predictions from the CNN. RESULTS The method is evaluated on low-dose CT images from 442 patient scans, divided into training and testing sets with a ratio of 70:30. Using only the CNN, overall absolute localization error for NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), and PI(I) expressed in terms of number of slices (nS) is (mean ± SD): 0.61 ± 0.58, 1.05 ± 1.13, 0.31 ± 0.46, 1.85 ± 1.96, 0.57 ± 2.44, 3.42 ± 3.16, and 0.50 ± 0.50, respectively. Using the RNN to refine the CNN's predictions for select classes improved the accuracy of TI(I) and AI(I) to: 1.35 ± 1.71 and 2.83 ± 2.75, respectively. This model outperforms the results achieved in our previous work by 2.4, 1.7, 3.1, 1.1, and 2.0 slices, respectively, for the TS(I), TI(I), AS(I), AI(I) = PS(I), and PI(I) classes, with statistical significance. The model trained on low-dose CT images was also tested on diagnostic CT images for NS(I), NI(I), and TS(I) classes; the resulting errors were: 1.48 ± 1.33, 2.56 ± 2.05, and 0.58 ± 0.71, respectively. CONCLUSIONS Standardized body region definitions are a prerequisite for effective implementation of quantitative radiology, but the literature is severely lacking in the precise identification of body regions. The method presented in this paper significantly outperforms earlier works by a large margin, and the deviations of our results from ground truth are comparable to variations observed in manual labeling by experts. The solution presented in this work is critical to the adoption and employment of the idea of standardized body regions, and clears the path for development of applications requiring accurate demarcations of body regions. The work is indispensable for automatic anatomy recognition, delineation, and contouring for radiation therapy planning, as it not only automates an essential part of the process, but also removes the dependency on experts for accurately demarcating body regions in a study.
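The localization-error metric above (absolute error in number of slices, reported as mean ± SD) is straightforward to reproduce; a minimal sketch with invented predicted and ground-truth boundary slice indices:

```python
import numpy as np

# Hypothetical predicted vs. ground-truth axial slice indices for one
# boundary class (say, TS(I)) over five test scans; the values are made up.
pred = np.array([112, 87, 140, 95, 103])
truth = np.array([111, 89, 140, 93, 104])
err = np.abs(pred - truth)      # absolute localization error in slices (nS)
print(err.mean(), err.std())    # mean and SD, the form reported above
```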
Affiliation(s)
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jayaram Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Drew Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
4.
Liu T, Udupa JK, Miao Q, Tong Y, Torigian DA. Quantification of body-torso-wide tissue composition on low-dose CT images via automatic anatomy recognition. Med Phys 2019;46:1272-1285. [PMID: 30614020; DOI: 10.1002/mp.13373]
Abstract
PURPOSE Quantification of body composition plays an important role in many clinical and research applications. Radiologic imaging techniques such as dual-energy X-ray absorptiometry (DXA), magnetic resonance imaging (MRI), and computed tomography (CT) make accurate quantification of body composition possible. However, most current imaging-based methods need human interaction to quantify multiple tissues. When dealing with whole-body images of many subjects, interactive methods become impractical. This paper presents an automated, efficient, accurate, and practical body composition quantification method for low-dose CT images. METHODS Our method, named automatic anatomy recognition body composition analysis (AAR-BCA), aims to quantify four tissue components in body torso (BT) - subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), bone tissue, and muscle tissue - from CT images of given whole-body positron emission tomography/computed tomography (PET/CT) acquisitions. AAR-BCA consists of three key steps - modeling BT with its ensemble of key objects from a population of patient images, recognition or localization of these objects in a given patient image I, and delineation and quantification of the four tissue components in I guided by the recognized objects. In the first step, from a given set of patient images and the associated delineated objects, a fuzzy anatomy model of the key object ensemble, including anatomic organs, tissue regions, and tissue interfaces, is built where the objects are organized in a hierarchical order. The second step involves recognizing, or finding roughly the location of, each object in any given whole-body image I of a patient following the object hierarchy and guided by the built model.
The third step makes use of this fuzzy localization information of the objects and the intensity distributions of the four tissue components, already learned and encoded in the model, to optimally delineate in a fuzzy manner and quantify these components. All parameters in our method are determined from training datasets. RESULTS Thirty-eight low-dose CT images from different subjects are tested in a fivefold cross-validation strategy for evaluating AAR-BCA with a 23-15 train-test dataset division. For BT, over all objects, AAR-BCA achieves a false-positive volume fraction (FPVF) of 3.7% and false-negative volume fraction (FNVF) of 3.8%. Notably, SAT achieves both an FPVF and an FNVF under 3%. For bone tissue, it achieves an FPVF and an FNVF both under 3.5%. For VAT, the FNVF of 4.8% is higher than for the other objects, as it also is for muscle (4.7%). The level of accuracy for the four tissue components in individual body subregions mostly remains at the same level as for BT. The processing time required per patient image is under a minute. CONCLUSIONS Motivated by applications in cancer and systemic diseases, our goal in this paper was to seek a practical method for body composition quantification which is automated, accurate, and efficient, and works on BT in low-dose CT. The proposed AAR-BCA method toward this goal can quantify four tissue components including SAT, VAT, bone tissue, and muscle tissue in the body torso with under 5% overall error. All needed parameters can be automatically estimated from the training datasets.
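The FPVF and FNVF metrics above can be computed from a segmentation mask and a reference (ground-truth) mask; a minimal sketch, assuming the common convention of normalizing both fractions by the true object volume (the paper's exact normalization may differ):

```python
import numpy as np

def fpvf_fnvf(seg, gt):
    """False-positive and false-negative volume fractions of seg w.r.t. gt,
    both normalized by the true object volume (one common convention)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    gt_vol = gt.sum()
    fpvf = np.logical_and(seg, ~gt).sum() / gt_vol  # extra voxels claimed
    fnvf = np.logical_and(~seg, gt).sum() / gt_vol  # true voxels missed
    return float(fpvf), float(fnvf)

# Toy masks: two 6x6 squares offset diagonally by one voxel (25 voxels overlap).
gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True
seg = np.zeros((10, 10), bool); seg[3:9, 3:9] = True
print(fpvf_fnvf(seg, gt))  # (11/36, 11/36): 11 voxels over- and under-segmented
```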
Affiliation(s)
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Xidian University, Xi'an, Shaanxi 710126, China
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Qiguang Miao
- Xidian University, Xi'an, Shaanxi 710126, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
5.
Bai P, Udupa JK, Tong Y, Xie S, Torigian DA. Body region localization in whole-body low-dose CT images of PET/CT scans using virtual landmarks. Med Phys 2019;46:1286-1299. [PMID: 30609058; DOI: 10.1002/mp.13376]
Abstract
PURPOSE Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region such as head and neck, thorax, abdomen, pelvis, and extremities. In this study, we present a new solution to automatically trim the given axial image stack into image volumes satisfying the given body region definition. METHODS The proposed approach consists of the following steps. First, a set of reference objects is selected and roughly segmented. Virtual landmarks (VLs) for the objects are then identified by using principal component analysis and recursive subdivision of the object via the principal axes system. The VLs can be defined from the binary objects alone or with the objects' gray values also considered. The VLs may lie anywhere with respect to the object, inside or outside, and rarely on the object surface, and are tethered to the object. Second, a classic neural network regressor is configured to learn the geometric mapping relationship between the VLs and the boundary locations of each body region. The trained network is then used to predict the locations of the body region boundaries. In this study, we focus on three body regions - thorax, abdomen, and pelvis - and predict their superior and inferior axial locations denoted by TS(I), TI(I), AS(I), AI(I), PS(I), and PI(I), respectively, for any given volume image I. Two kinds of reference objects - the skeleton, and the lungs and airways - are employed to test the localization performance of the proposed approach. RESULTS Our method is tested by using low-dose unenhanced computed tomography (CT) images of 180 near whole-body 18 F-fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) scans (including 34 whole-body scans) which are randomly divided into training and testing sets with a ratio of 85%:15%.
The procedure is repeated six times and three times for the case of lungs and skeleton, respectively, with different divisions of the entire data set at this proportion. For the case of using the skeleton as a reference object, the overall mean localization error for the six locations, expressed as number of slices (nS) and distance (dS) in mm, is found to be nS: 3.4, 4.7, 4.1, 5.2, 5.2, and 3.9; dS: 13.4, 18.9, 16.5, 20.8, 20.8, and 15.5 mm for binary objects; nS: 4.1, 5.7, 4.3, 5.9, 5.9, and 4.0; dS: 16.2, 22.7, 17.2, 23.7, 23.7, and 16.1 mm for gray objects, respectively. For the case of using lungs and airways as a reference object, the corresponding results are, nS: 4.0, 5.3, 4.1, 6.9, 6.9, and 7.4; dS: 15.0, 19.7, 15.3, 26.2, 26.2, and 27.9 mm for binary objects; nS: 3.9, 5.4, 3.6, 7.2, 7.2, and 7.6; dS: 14.6, 20.1, 13.7, 27.3, 27.3, and 28.6 mm for gray objects, respectively. CONCLUSIONS Automatic, precise body region identification in whole-body or body-region tomographic images is vital for numerous medical image analysis and analytics applications. Despite its importance, this issue has received very little attention in the literature. We present a solution to this problem in this study using the concept of virtual landmarks. The method achieves localization accuracy within 2-3 slices, which is roughly comparable to the variation found in localization by experts. As long as the reference objects can be roughly segmented, the method with its learned VLs-to-boundary location relationship and predictive ability is transferable from one image modality to another.
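The principal-axes construction behind the virtual landmarks can be illustrated on a toy binary object; a minimal sketch of the first step only (geometric center plus principal axes via PCA; recursive subdivision of the object along these axes would then generate further VLs):

```python
import numpy as np

# Toy elongated binary "organ" in a 20^3 volume, longest along axis 0.
obj = np.zeros((20, 20, 20), bool)
obj[5:15, 8:12, 9:11] = True

pts = np.argwhere(obj).astype(float)    # voxel coordinates of the object
center = pts.mean(axis=0)               # geometric center: the first landmark
cov = np.cov((pts - center).T)          # 3x3 covariance of the coordinates
eigvals, eigvecs = np.linalg.eigh(cov)  # principal axes, ascending variance
major_axis = eigvecs[:, -1]             # direction of the largest object extent
print(center)                           # [9.5 9.5 9.5] for this toy object
print(np.abs(major_axis))               # ~[1 0 0]: the object's long axis
```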
Affiliation(s)
- Peirui Bai
- College of Electronics, Communication and Physics, Shandong University of Science and Technology, Qingdao, Shandong 266590, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- ShiPeng Xie
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210023, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
6.
Tong Y, Udupa JK, Odhner D, Wu C, Schuster SJ, Torigian DA. Disease quantification on PET/CT images without explicit object delineation. Med Image Anal 2018;51:169-183. [PMID: 30453165; DOI: 10.1016/j.media.2018.11.002]
Abstract
PURPOSE The derivation of quantitative information from images in a clinically practical way continues to face a major hurdle because of image segmentation challenges. This paper presents a novel approach, called automatic anatomy recognition-disease quantification (AAR-DQ), for disease quantification (DQ) on positron emission tomography/computed tomography (PET/CT) images. This approach explores how to decouple DQ methods from explicit dependence on object (e.g., organ) delineation through the use of only object recognition results from our recently developed automatic anatomy recognition (AAR) method to quantify disease burden. METHOD The AAR-DQ process starts off with the AAR approach for modeling anatomy and automatically recognizing objects on low-dose CT images of PET/CT acquisitions. It incorporates novel aspects of model building that relate to finding an optimal disease map for each organ. The parameters of the disease map are estimated from a set of training image data sets including normal subjects and patients with metastatic cancer. The result of recognition for an object on a patient image is the location of a fuzzy model for the object which is optimally adjusted for the image. The model is used as a fuzzy mask on the PET image for estimating a fuzzy disease map for the specific patient and subsequently for quantifying disease based on this map. This process handles blur arising in PET images from partial volume effect entirely through accurate fuzzy mapping to account for heterogeneity and gradation of disease content at the voxel level without explicitly performing correction for the partial volume effect. Disease quantification is performed from the fuzzy disease map in terms of total lesion glycolysis (TLG) and standardized uptake value (SUV) statistics. 
We also demonstrate that the method of disease quantification is applicable even when the "object" of interest is recognized manually with a simple and quick action such as interactively specifying a 3D box ROI. Depending on the degree of automaticity for object and lesion recognition on PET/CT, DQ can be performed at the object level either semi-automatically (DQ-MO) or automatically (DQ-AO), or at the lesion level either semi-automatically (DQ-ML) or automatically. RESULTS We utilized 67 data sets in total: 16 normal data sets used for model building, and 20 phantom data sets plus 31 patient data sets (with various types of metastatic cancer) used for testing the three methods DQ-AO, DQ-MO, and DQ-ML. The parameters of the disease map were estimated using the leave-one-out strategy. The organs of focus were left and right lungs and liver, and the disease quantities measured were TLG, SUVMean, and SUVMax. On phantom data sets, overall errors for the three parameters were approximately 6%, 3%, and 0%, respectively, with TLG error varying from 2% for large "lesions" (37 mm diameter) to 37% for small "lesions" (10 mm diameter). On patient data sets, for non-conspicuous lesions, those overall errors were approximately 19%, 14%, and 0%; for conspicuous lesions, these overall errors were approximately 9%, 7%, and 0%, respectively, with errors in estimation being generally smaller for liver than for lungs, although without statistical significance. CONCLUSIONS Accurate disease quantification on PET/CT images without performing explicit delineation of lesions is feasible following object recognition. Method DQ-MO generally yields more accurate results than DQ-AO, although the difference is not statistically significant.
Compared to current methods from the literature, almost all of which focus only on lesion-level DQ and not organ-level DQ, our results were comparable for large lesions and were superior for smaller lesions, with less demand on training data and computational resources. DQ-AO and even DQ-MO seem to have the potential for quantifying disease burden body-wide routinely via the AAR-DQ approach.
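The fuzzy-map idea behind the quantification step can be sketched in a few lines: instead of a hard lesion mask, each voxel carries a disease membership in [0, 1], and TLG and SUV statistics are computed as membership-weighted quantities (an illustrative sketch with invented numbers, not the AAR-DQ implementation):

```python
import numpy as np

suv = np.array([[0.5, 2.0, 6.0],
                [7.5, 8.0, 1.0]])          # SUVs of six voxels (invented)
membership = np.array([[0.0, 0.1, 0.9],
                       [1.0, 1.0, 0.0]])   # fuzzy disease map in [0, 1]
voxel_vol_ml = 0.05                        # voxel volume in ml (assumed)

lesion_vol = membership.sum() * voxel_vol_ml            # fuzzy lesion volume
suv_mean = (suv * membership).sum() / membership.sum()  # weighted SUVMean
tlg = suv_mean * lesion_vol                             # total lesion glycolysis
print(lesion_vol, suv_mean, tlg)
```

Weighting by membership lets voxels that are only partially diseased (e.g., blurred lesion margins from the partial volume effect) contribute proportionally, which is the effect the abstract describes.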
Affiliation(s)
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Caiyun Wu
- Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Stephen J Schuster
- Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, 3710 Hamilton Walk, Goddard Building, 6th Floor, Philadelphia, PA 19104, United States
- Abramson Cancer Center, Perelman Center for Advanced Medicine, University of Pennsylvania, Philadelphia, PA 19104, United States
7.
Wang H, Zhang N, Huo L, Zhang B. Dual-modality multi-atlas segmentation of torso organs from [18F]FDG-PET/CT images. Int J Comput Assist Radiol Surg 2018;14:473-482. [PMID: 30390179; DOI: 10.1007/s11548-018-1879-3]
Abstract
PURPOSE Automated segmentation of torso organs from positron emission tomography/computed tomography (PET/CT) images is a prerequisite step for nuclear medicine image analysis. However, accurate organ segmentation from clinical PET/CT is challenging due to the poor soft tissue contrast in the low-dose CT image and the low spatial resolution of the PET image. To overcome these challenges, we developed a multi-atlas segmentation (MAS) framework for torso organ segmentation from 2-deoxy-2-[18F]fluoro-D-glucose PET/CT images. METHODS Our key idea is to use PET information to compensate for the imperfect CT contrast and use surface-based atlas fusion to overcome the low PET resolution. First, all the organs are segmented from CT using a conventional MAS method, and then the abdomen region of the PET image is automatically cropped. Focusing on the cropped PET image, a refined MAS segmentation of the abdominal organs is performed, using a surface-based atlas fusion approach to reach subvoxel accuracy. RESULTS This method was validated based on 69 PET/CT images. The Dice coefficients of the target organs were between 0.80 and 0.96, and the average surface distances (ASD) were between 1.58 and 2.44 mm. Compared to the CT-based segmentation, the PET-based segmentation gained a Dice increase of 0.06 and an ASD decrease of 0.38 mm. The surface-based atlas fusion led to significant accuracy improvement for the liver and kidneys and saved ~10 min of computation time compared to volumetric atlas fusion. CONCLUSIONS The presented method achieves better segmentation accuracy than the conventional MAS method within acceptable computation time for clinical applications.
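The average surface distance (ASD) metric used above can be sketched on point-sampled surfaces (a toy example; real implementations extract boundary voxels from the segmentation masks and scale distances by the voxel spacing):

```python
import numpy as np

def average_surface_distance(a_pts, b_pts):
    """Symmetric mean of nearest-neighbor distances between two (N, d) point sets."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy 2D "surfaces": four points, the second set shifted by half a unit in x.
a = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
b = a + np.array([0.5, 0.0])
print(average_surface_distance(a, b))  # 0.5: every point is 0.5 from its nearest neighbor
```

Unlike the Hausdorff distance, which reports the single worst boundary deviation, ASD averages the deviations, so it is less sensitive to isolated outlier points.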
Affiliation(s)
- Hongkai Wang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Nan Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Li Huo
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Bin Zhang
- Department of Biomedical Engineering, Dalian University of Technology, Dalian, Liaoning, China
8.
Yan F, Udupa JK, Tong Y, Xu G, Odhner D, Torigian DA. Automatic Anatomy Recognition using Neural Network Learning of Object Relationships via Virtual Landmarks. Proc SPIE Int Soc Opt Eng 2018;10574. [PMID: 30190628; DOI: 10.1117/12.2293700]
Abstract
The recently developed body-wide Automatic Anatomy Recognition (AAR) methodology depends on fuzzy modeling of individual objects, hierarchical arrangement of the objects, construction of an anatomy ensemble of these models, and a dichotomous object recognition-delineation process. The parent-to-offspring spatial relationship in the object hierarchy is crucial in the AAR method. We have found this relationship to be quite complex, and as such any improvement in capturing this relationship information in the anatomy model will improve the process of recognition itself. Currently, the method encodes this relationship based on the layout of the geometric centers of the objects. Motivated by the concept of virtual landmarks (VLs), this paper presents a new one-shot AAR recognition method that utilizes the VLs to learn object relationships by training a neural network to predict the pose and the VLs of an offspring object given the VLs of the parent object in the hierarchy. We set up two neural networks for each parent-offspring object pair in a body region, one for predicting the VLs and another for predicting the pose parameters. The VL-based learning/prediction method is evaluated on two object hierarchies involving 14 objects. We utilize 54 computed tomography (CT) image data sets of head and neck cancer patients and the associated object contours drawn by dosimetrists for routine radiation therapy treatment planning. The VL neural network method is found to yield more accurate object localization than the currently used simple AAR method.
Affiliation(s)
- Fengxia Yan: College of Science, National University of Defense Technology, Changsha 410073, P. R. China; Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Jayaram K Udupa: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Yubing Tong: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Guoping Xu: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Dewey Odhner: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A Torigian: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
9
Wang H, Sun X, Wu T, Li C, Chen Z, Liao M, Li M, Yan W, Huang H, Yang J, Tan Z, Hui L, Liu Y, Pan H, Qu Y, Chen Z, Tan L, Yu L, Shi H, Huo L, Zhang Y, Tang X, Zhang S, Liu C. Deformable torso phantoms of Chinese adults for personalized anatomy modelling. J Anat 2018; 233:121-134. [PMID: 29663370 PMCID: PMC5987821 DOI: 10.1111/joa.12815]
Abstract
In recent years, there has been increasing demand for personalized anatomy modelling for medical and industrial applications, such as ergonomics device development, clinical radiological exposure simulation, biomechanics analysis, and 3D animation character design. In this study, we constructed deformable torso phantoms that can be deformed to match the personal anatomy of Chinese male and female adults. The phantoms were created based on a training set of 79 trunk computed tomography (CT) images (41 males and 38 females) from normal Chinese subjects. Major torso organs were segmented from the CT images, and the statistical shape model (SSM) approach was used to learn the inter-subject anatomical variations. To match the personal anatomy, the phantoms were registered to individual body surface scans or medical images using the active shape model method. The constructed SSM demonstrated anatomical variations in body height, fat quantity, respiratory status, organ geometry, male muscle size, and female breast size. The masses of the deformed phantom organs were consistent with Chinese population organ mass ranges. To validate the performance of personal anatomy modelling, the phantoms were registered to the body surface scan and CT images. The registration accuracy measured from 22 test CT images showed a median Dice coefficient over 0.85, a median volume recovery coefficient (RCvlm) between 0.85 and 1.1, and a median averaged surface distance (ASD) < 1.5 mm. We hope these phantoms can serve as computational tools for personalized anatomy modelling for the research community.
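The statistical-shape-model step can be illustrated in miniature with a PCA sketch over aligned landmark vectors. The data here are synthetic stand-ins; the paper's SSM is built from segmented organ surfaces of 79 CT subjects.

```python
import numpy as np

def build_ssm(shapes, n_modes=1):
    """Build a statistical shape model: mean shape plus principal
    variation modes from aligned landmark vectors (n_subjects, n_coords)."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    modes = Vt[:n_modes]
    weights = X @ modes.T                              # per-subject mode weights
    return mean, modes, weights

# synthetic shapes that vary along a single hidden direction
rng = np.random.default_rng(1)
base = rng.uniform(size=30)
direction = rng.normal(size=30)
coeffs = rng.normal(scale=0.1, size=(10, 1))
shapes = base + coeffs * direction

mean, modes, w = build_ssm(shapes, n_modes=1)
recon = mean + w @ modes          # reconstruct every training shape
err = float(np.abs(recon - shapes).max())
```

Since the synthetic cohort varies along one direction only, a single mode reconstructs the training shapes essentially exactly; real anatomy needs several modes (body height, fat, respiration, organ geometry, as listed above).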
Affiliation(s)
- Hongkai Wang: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Xiaobang Sun: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China; Department of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Tongning Wu: China Academy of Industry and Communications Technology, Beijing, China
- Congsheng Li: China Academy of Industry and Communications Technology, Beijing, China
- Zhonghua Chen: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Meiying Liao: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Mengci Li: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Wen Yan: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Hui Huang: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Jia Yang: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Ziyu Tan: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Libo Hui: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Yue Liu: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Hang Pan: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Yue Qu: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Zhaofeng Chen: Department of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning, China
- Liwen Tan: Institute of Digital Medicine, Third Military Medical University, Chongqing, China
- Lijuan Yu: The Affiliated Cancer Hospital of Hainan Medical College, Haikou, Hainan, China
- Hongcheng Shi: Department of Nuclear Medicine, Zhongshan Hospital, Fudan University, Shanghai, China
- Li Huo: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Yanjun Zhang: Department of Nuclear Medicine, the First Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- Xin Tang: Trauma Department of Orthopaedics, the First Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- Shaoxiang Zhang: Institute of Digital Medicine, Third Military Medical University, Chongqing, China
- Changjian Liu: Trauma Department of Orthopaedics, the First Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
10
Xu G, Udupa JK, Tong Y, Cao H, Odhner D, Torigian DA, Wu X. Thoracic lymph node station recognition on CT images based on automatic anatomy recognition with an optimal parent strategy. Proc SPIE Int Soc Opt Eng 2018; 10574. [PMID: 30190627 DOI: 10.1117/12.2293258]
Abstract
Many papers have been published on the detection and segmentation of lymph nodes from medical images. However, the task remains challenging owing to low contrast with surrounding soft tissues and to the variations of lymph node size and shape on computed tomography (CT) images. It is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometric nearness. The other is to find the optimal parent (organ or union of organs) as an anchor for each lymph node station based on the recognition error, and thereby find an overall optimal hierarchy arranging anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized within 2-3 voxels of the ground truth.
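The "optimal parent" idea above reduces, per station, to an argmin over recognition errors: each lymph node station is anchored to the object that localizes it best. The sketch below uses hypothetical station names and error values, and simplifies the paper's approach, which additionally optimizes the hierarchy as a whole rather than each station independently.

```python
def optimal_parents(recognition_error):
    """recognition_error[station][anchor] -> mean localization error (voxels).
    Choose, for each station, the anchor object with the smallest error."""
    return {station: min(anchors, key=anchors.get)
            for station, anchors in recognition_error.items()}

# hypothetical per-anchor localization errors for two composite stations
errors = {
    "station_4R": {"trachea": 2.1, "esophagus": 3.4, "aortic_arch": 2.8},
    "station_7":  {"trachea": 2.9, "esophagus": 2.2, "aortic_arch": 3.1},
}
parents = optimal_parents(errors)
```

Each station picks its cheapest anchor, so different stations may hang from different organs in the final hierarchy.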
Affiliation(s)
- Guoping Xu: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States; School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Jayaram K Udupa: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Yubing Tong: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Hanqiang Cao: School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Dewey Odhner: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Drew A Torigian: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Xingyu Wu: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
11
Chauvie S, Bertone E, Bergesio F, Terulla A, Botto D, Cerello P. Automatic liver detection and standardised uptake value evaluation in whole-body Positron Emission Tomography/Computed Tomography scans. Comput Methods Programs Biomed 2018; 156:47-52. [PMID: 29428075 DOI: 10.1016/j.cmpb.2017.12.026]
Abstract
BACKGROUND AND OBJECTIVE Standardised Uptake Value (SUV), in clinical research and practice, is a marker of tumour avidity in Positron Emission Tomography/Computed Tomography (PET/CT). Since many technical, physical and physiological factors affect the absolute SUV measurement, the liver uptake is often used as a reference value in both quantitative and semi-quantitative evaluation. The purpose of this investigation was to automatically detect the liver position in whole-body PET/CT scans and extract its average SUV value. METHODS We developed an algorithm, called LIver DEtection Algorithm (LIDEA), that analyses PET/CT scans and, under the assumption that the liver is a large homogeneous volume near the centre of mass of the patient, finds its position and automatically places a region of interest (ROI) in the liver, which is used to calculate the average SUV. The algorithm was validated on a population of 630 PET/CT scans coming from more than 60 different scanners. The SUV was also calculated by manually placing a large ROI in the liver. RESULTS LIDEA identified the liver with 97.3% sensitivity using PET images alone and reached a 98.9% correct detection rate when using the co-registered CT scan to avoid liver misidentification in the right lung. The average liver SUV obtained with LIDEA was successfully validated against its manual assessment, with no systematic difference (0.11 ± 0.36 SUV units) and a correlation coefficient of R² = 0.89. CONCLUSIONS LIDEA proved to be a reliable tool to automatically identify and extract the average SUV of the liver in oncological whole-body PET/CT scans.
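LIDEA's core assumption, that the liver is a large homogeneous volume near the patient's centre of mass, suggests a sketch like the following. This is a toy stand-in on a uniform phantom, not the validated algorithm, which also checks homogeneity and uses the co-registered CT to avoid the right lung.

```python
import numpy as np

def roi_mean_suv(suv, body_mask, half=2):
    """Place a cubic ROI at the body mask's centre of mass and return
    the mean SUV inside it (toy version of LIDEA's liver ROI)."""
    z, y, x = np.array(np.nonzero(body_mask)).mean(axis=1).round().astype(int)
    roi = suv[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
    return float(roi.mean())

# uniform phantom: every voxel has SUV 2.0, the whole volume is "body"
suv = np.full((11, 11, 11), 2.0)
mask = np.ones_like(suv, dtype=bool)
liver_suv = roi_mean_suv(suv, mask)
```

On real data the ROI placement would be preceded by liver detection, since the centre of mass of a whole-body scan does not generally fall inside the liver.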
Affiliation(s)
- Stephane Chauvie: Medical Physics Unit Division, Santa Croce e Carle Hospital, Cuneo, Italy
- Elisa Bertone: Medical Physics Unit Division, Santa Croce e Carle Hospital, Cuneo, Italy
- Fabrizio Bergesio: Medical Physics Unit Division, Santa Croce e Carle Hospital, Cuneo, Italy
- Alessandra Terulla: Medical Physics Unit Division, Santa Croce e Carle Hospital, Cuneo, Italy
- Davide Botto: Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Torino, Italy
12
Tong Y, Udupa JK, Wu X, Odhner D, Pednekar G, Simone CB, McLaughlin D, Apinorasethkul C, Shammo G, James P, Camaratta J, Torigian DA. Hierarchical model-based object localization for auto-contouring in head and neck radiation therapy planning. Proc SPIE Int Soc Opt Eng 2018; 10578:1057822. [PMID: 30190630 PMCID: PMC6122859 DOI: 10.1117/12.2294042]
Abstract
Segmentation of organs at risk (OARs) is a key step in the radiation therapy (RT) treatment planning process. Automatic anatomy recognition (AAR) is a recently developed body-wide multiple object segmentation approach, where segmentation is designed as two dichotomous steps: object recognition (or localization) and object delineation. Recognition is the high-level process of determining the whereabouts of an object, and delineation is the meticulous low-level process of precisely indicating the space occupied by an object. This study focuses on recognition. The purpose of this paper is to introduce new features of the AAR recognition approach (abbreviated AAR-R): combining texture and intensity information in the recognition procedure, using an optimal spanning tree to achieve the optimal hierarchy for recognition so as to minimize recognition errors, and illustrating recognition performance on large-scale testing computed tomography (CT) data sets. The data sets pertain to 216 non-serial (planning) and 82 serial (re-planning) studies of head and neck (H&N) cancer patients undergoing radiation therapy, involving a total of ~2600 object samples. The texture property "maximum probability of occurrence", derived from the co-occurrence matrix, was determined to be the best property and is utilized in conjunction with intensity properties in AAR-R. An optimal spanning tree is found in the complete graph whose nodes are individual objects, and this tree is then used as the hierarchy in recognition. Texture information combined with intensity can significantly reduce location error for gland-related objects (parotid and submandibular glands). We also report recognition results by considering image quality, which is a novel concept. AAR-R with the new features achieves a location error of less than 4 mm (~1.5 voxels in our studies) on good quality images for both serial and non-serial studies.
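The texture property named above, "maximum probability of occurrence", comes from the grey-level co-occurrence matrix (GLCM). A minimal 2-D version for a single pixel offset and a toy quantized image:

```python
import numpy as np

def glcm_max_probability(img, offset=(0, 1), levels=4):
    """Build a grey-level co-occurrence matrix for one pixel offset and
    return the 'maximum probability of occurrence' texture property."""
    dy, dx = offset
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1   # count co-occurring pairs
    glcm /= glcm.sum()                                   # normalize to probabilities
    return float(glcm.max())

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 3]])
p = glcm_max_probability(img)   # the (0,0) horizontal pair is the most frequent
```

In practice one would quantize CT intensities to a modest number of levels, accumulate the GLCM over several offsets inside a candidate object region, and use the resulting scalar alongside intensity statistics.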
Affiliation(s)
- Yubing Tong: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Jayaram K Udupa: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Xingyu Wu: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Dewey Odhner: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
- Gargi Pednekar: Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Charles B Simone: University of Maryland School of Medicine, Department of Radiation Oncology, Maryland Proton Treatment Center, 850 W. Baltimore, MD 21201, United States
- David McLaughlin: Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Chavanon Apinorasethkul: Radiation Oncology Department, University of Pennsylvania, Philadelphia, PA 19104, United States
- Geraldine Shammo: Radiation Oncology Department, University of Pennsylvania, Philadelphia, PA 19104, United States
- Paul James: Radiation Oncology Department, University of Pennsylvania, Philadelphia, PA 19104, United States
- Joseph Camaratta: Quantitative Radiology Solutions, 3624 Market Street, Suite 5E, Philadelphia, PA 19104, United States
- Drew A Torigian: Medical Image Processing Group, 602 Goddard Building, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, United States
13
Bai P, Udupa JK, Tong Y, Xie S, Torigian DA. Automatic thoracic body region localization. Proc SPIE Int Soc Opt Eng 2017; 10134. [PMID: 30158738 DOI: 10.1117/12.2254862]
Abstract
Radiological imaging and image interpretation for clinical decision making are mostly specific to each body region, such as head and neck, thorax, abdomen, pelvis, and extremities. For automating image analysis and ensuring consistency of results, standardized definitions of body regions and of the various anatomic objects, tissue regions, and zones within them become essential. Assuming that a standardized definition of body regions is available, a fundamental early step needed in automated image and object analytics is to automatically trim the given image stack into image volumes exactly satisfying the body region definition. This paper presents a solution to this problem based on the concept of virtual landmarks and evaluates it on whole-body positron emission tomography/computed tomography (PET/CT) scans. The method first selects a (set of) reference object(s), segments it (them) roughly, and identifies virtual landmarks for the object(s). The geometric relationship between these landmarks and the boundary locations of body regions in the cranio-caudal direction is then learned through a neural network regressor, and the locations are predicted. Based on low-dose unenhanced CT images of 180 near whole-body PET/CT scans (including 34 whole-body scans), the mean localization error for the boundaries of the superior thorax (TS) and inferior thorax (TI), expressed as a number of slices (slice spacing ≈ 4 mm), is found to be 3 and 2 slices using the skeleton as the reference object and 3 and 5 slices using the pleural spaces, respectively, corresponding to approximately 13 and 10 mm, and 10.5 and 20 mm. Improvements of this performance via optimal selection of objects and virtual landmarks, as well as other object analytics applications, are currently being pursued.
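The boundary-prediction step, regressing the cranio-caudal boundary slice of a body region from virtual landmarks of a reference object, can be sketched with a linear regressor on synthetic data (the paper trains a neural network; the two landmark features and the linear relation below are illustrative assumptions):

```python
import numpy as np

def fit_boundary_regressor(landmark_feats, boundary_slices):
    """Linear least-squares fit from reference-object landmark features
    to a body-region boundary's slice index."""
    X = np.hstack([landmark_feats, np.ones((len(landmark_feats), 1))])
    w, *_ = np.linalg.lstsq(X, boundary_slices, rcond=None)
    return w

def predict_slice(w, feats):
    return float(np.append(feats, 1.0) @ w)

def slice_error_mm(pred, true, spacing_mm=4.0):
    """Express a boundary localization error in mm (slice spacing ~4 mm)."""
    return abs(pred - true) * spacing_mm

# synthetic scans: boundary slice depends linearly on two landmark z-features
rng = np.random.default_rng(2)
feats = rng.uniform(0, 300, size=(180, 2))
slices = feats @ np.array([0.4, 0.1]) + 12.0

w = fit_boundary_regressor(feats, slices)
err_mm = slice_error_mm(predict_slice(w, feats[0]), slices[0])
```

The `slice_error_mm` helper mirrors how the abstract reports errors both as slice counts and in millimetres.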
Affiliation(s)
- PeiRui Bai: College of Electronics, Communication and Physics, Shandong University of Science and Technology, Qingdao 266590, China; Medical Image Processing Group, Goddard Building, 6th Floor, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jayaram K Udupa: Medical Image Processing Group, Goddard Building, 6th Floor, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- YuBing Tong: Medical Image Processing Group, Goddard Building, 6th Floor, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- ShiPeng Xie: Medical Image Processing Group, Goddard Building, 6th Floor, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA; College of Telecommunications & Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China
- Drew A Torigian: Medical Image Processing Group, Goddard Building, 6th Floor, 3710 Hamilton Walk, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
14
Hussein S, Green A, Watane A, Reiter D, Chen X, Papadakis GZ, Wood B, Cypess A, Osman M, Bagci U. Automatic Segmentation and Quantification of White and Brown Adipose Tissues from PET/CT Scans. IEEE Trans Med Imaging 2017; 36:734-744. [PMID: 28114010 PMCID: PMC6421081 DOI: 10.1109/tmi.2016.2636188]
Abstract
In this paper, we investigate the automatic detection of white and brown adipose tissues using Positron Emission Tomography/Computed Tomography (PET/CT) scans, and develop methods for the quantification of these tissues at the whole-body and body-region levels. We propose a patient-specific automatic adiposity analysis system with two modules. In the first module, we detect white adipose tissue (WAT) and its two sub-types from CT scans: Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT). This process relies conventionally on manual or semi-automated segmentation, leading to inefficient solutions. Our novel framework addresses this challenge by proposing an unsupervised learning method to separate VAT from SAT in the abdominal region for the clinical quantification of central obesity. This step is followed by a context driven label fusion algorithm through sparse 3D Conditional Random Fields (CRF) for volumetric adiposity analysis. In the second module, we automatically detect, segment, and quantify brown adipose tissue (BAT) using PET scans because unlike WAT, BAT is metabolically active. After identifying BAT regions using PET, we perform a co-segmentation procedure utilizing asymmetric complementary information from PET and CT. Finally, we present a new probabilistic distance metric for differentiating BAT from non-BAT regions. Both modules are integrated via an automatic body-region detection unit based on one-shot learning. Experimental evaluations conducted on 151 PET/CT scans achieve state-of-the-art performances in both central obesity as well as brown adiposity quantification.
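The first module's WAT detection can be sketched with a Hounsfield-window mask plus a toy VAT/SAT split by a hypothetical visceral bounding box. This is a deliberate simplification: the paper itself separates VAT from SAT by unsupervised learning and a sparse 3-D conditional random field, not by a fixed box. The HU window below is a commonly used adipose range, stated as an assumption.

```python
import numpy as np

ADIPOSE_HU = (-190, -30)      # commonly used CT window for adipose tissue

def adipose_mask(ct_hu):
    """Voxels whose Hounsfield value falls in the adipose window."""
    lo, hi = ADIPOSE_HU
    return (ct_hu >= lo) & (ct_hu <= hi)

def split_vat_sat(mask, visceral_box):
    """Toy split: adipose voxels inside the box count as visceral (VAT),
    the remainder as subcutaneous (SAT)."""
    z0, z1, y0, y1, x0, x1 = visceral_box
    vat = np.zeros_like(mask)
    vat[z0:z1, y0:y1, x0:x1] = mask[z0:z1, y0:y1, x0:x1]
    sat = mask & ~vat
    return vat, sat

ct = np.full((4, 8, 8), 50)                  # soft-tissue background
ct[:, 2:6, 2:6] = -100                       # adipose core ("visceral" region)
ct[:, 0, :] = -100                           # adipose rim ("subcutaneous")
mask = adipose_mask(ct)
vat, sat = split_vat_sat(mask, (0, 4, 2, 6, 2, 6))
```

Replacing the fixed box with a learned boundary between the abdominal wall and the visceral cavity is exactly where the paper's unsupervised method and CRF come in.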
15
Matsumoto MMS, Udupa JK, Tong Y, Saboury B, Torigian DA. Quantitative normal thoracic anatomy at CT. Comput Med Imaging Graph 2016; 51:1-10. [PMID: 27065241 DOI: 10.1016/j.compmedimag.2016.03.005]
Abstract
Automatic anatomy recognition (AAR) methodologies for a body region require detailed understanding of the morphology, architecture, and geographical layout of the organs within the body region. The aim of this paper was to quantitatively characterize the normal anatomy of the thoracic region for AAR. Contrast-enhanced chest CT images from 41 normal male subjects, each with 11 segmented objects, were considered in this study. The individual objects were quantitatively characterized in terms of their linear size, surface area, volume, shape, CT attenuation properties, inter-object distances, size and shape correlations, size-to-distance correlations, and distance-to-distance correlations. A heat map visualization approach was used for intuitively portraying the associations between parameters. Numerous new observations about object geography and relationships were made. Some objects, such as the pericardial region, vary far less than others in size across subjects. Distance relationships are more consistent when involving an object such as trachea and bronchi than other objects. Considering the inter-object distance, some objects have a more prominent correlation, such as trachea and bronchi, right and left lungs, arterial system, and esophagus. The proposed method provides new, objective, and usable knowledge about anatomy whose utility in building body-wide models toward AAR has been demonstrated in other studies.
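The distance-to-distance correlation analysis can be reproduced in miniature: compute all pairwise inter-object centroid distances per subject, then correlate those distances across subjects. The centroids below are synthetic; the resulting matrix is what a heat map visualization, as used in the paper, would portray.

```python
import numpy as np

def distance_correlations(centroids):
    """centroids: (n_subjects, n_objects, 3). Return the correlation matrix
    of all pairwise inter-object distances across subjects."""
    n_obj = centroids.shape[1]
    pairs = [(i, j) for i in range(n_obj) for j in range(i + 1, n_obj)]
    d = np.array([[np.linalg.norm(c[i] - c[j]) for i, j in pairs]
                  for c in centroids])
    return np.corrcoef(d, rowvar=False)

# toy cohort whose anatomy differs only by an isotropic scale factor,
# so every inter-object distance co-varies perfectly across subjects
base = np.array([[0.0, 0, 0], [10, 0, 0], [0, 20, 0]])
scales = np.array([0.9, 1.0, 1.1, 1.2])
centroids = scales[:, None, None] * base
corr = distance_correlations(centroids)
```

In real cohorts the off-diagonal entries differ, and it is exactly those differences (e.g., the strong trachea-bronchi and lung-lung associations noted above) that the heat map makes visible.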
Affiliation(s)
- Monica M S Matsumoto: Medical Image Processing Group, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, United States
- Jayaram K Udupa: Medical Image Processing Group, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, United States
- Yubing Tong: Medical Image Processing Group, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, United States
- Babak Saboury: Medical Image Processing Group, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, United States
- Drew A Torigian: Medical Image Processing Group, Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, United States