1
Chiu MC, Tsai SCS, Bai ZR, Lin A, Chang CC, Wang GZ, Lin FCF. Radiographic chest wall abnormalities in primary spontaneous pneumothorax identified by artificial intelligence. Heliyon 2024; 10:e30023. PMID: 38726131; PMCID: PMC11078867; DOI: 10.1016/j.heliyon.2024.e30023.
Abstract
Primary spontaneous pneumothorax (PSP) primarily affects slim, tall young males. Exploring the etiological link between chest wall structural characteristics and PSP is crucial for advancing treatment methods. In this case-control study, chest computed tomography (CT) images from patients undergoing thoracic surgery, with or without PSP, were analyzed using artificial intelligence. Convolutional neural network (CNN) models, EfficientNetB3 and InceptionV3, were trained with transfer learning from ImageNet weights to compare the images of the two groups. A heatmap was overlaid on the chest CT scans to enhance interpretability, and the scale-invariant feature transform (SIFT) was adopted to further compare the images at the feature level. A total of 2,312 CT images of 26 non-PSP patients and 1,122 CT images of 26 PSP patients were selected. A chest-wall apex pit (CAP) was found in 25 PSP and three non-PSP patients (p < 0.001). The CNN achieved a testing accuracy of 93.47% in distinguishing PSP from non-PSP based on chest wall features by identifying the presence of a CAP. Heatmap analysis demonstrated the CNN's precision in targeting the upper chest wall, accurately identifying the CAP without undue influence from similar structures and without inappropriately expanding or shrinking the test area. SIFT results indicated a 10.55% higher mean similarity within groups than between the PSP and non-PSP groups (p < 0.001). In conclusion, distinctive radiographic chest wall configurations were observed in PSP patients, with the CAP potentially serving as an etiological factor linked to PSP. This study accentuates the potential of AI-assisted analysis in refining diagnostic approaches and treatment strategies for PSP.
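The within-group versus between-group SIFT similarity comparison reported above reduces to averaging pairwise similarities over image descriptors. A minimal NumPy sketch of that comparison, using cosine similarity on synthetic 128-D descriptor vectors (the study's actual SIFT descriptors and matching scheme are not reproduced here; all data below are illustrative):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_similarity(group_a, group_b):
    """Mean pairwise cosine similarity between two sets of descriptors."""
    sims = [cosine_sim(a, b) for a in group_a for b in group_b]
    return sum(sims) / len(sims)

rng = np.random.default_rng(0)
base_psp = rng.normal(0.0, 1.0, 128)   # hypothetical 128-D SIFT-like descriptor
base_ctl = rng.normal(0.0, 1.0, 128)
# Members of a group share a base pattern plus individual variation.
psp = [base_psp + rng.normal(0, 0.3, 128) for _ in range(5)]
ctl = [base_ctl + rng.normal(0, 0.3, 128) for _ in range(5)]

within = mean_similarity(psp, psp)
between = mean_similarity(psp, ctl)
# Within-group similarity is expected to exceed between-group similarity,
# mirroring the ~10% gap reported in the abstract.
```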
Affiliation(s)
- Ming-Chuan Chiu
- Department of Industrial Engineering and Industrial Management, National Tsing Hua University, Hsinchu, 300044, Taiwan
- Stella Chin-Shaw Tsai
- Superintendent Office, Tungs' Taichung MetroHarbor Hospital, Taichung, Taiwan
- Department of Post-Baccalaureate Medicine, National Chung Hsing University, Taichung, Taiwan
- Zhe-Rui Bai
- Department of Industrial Engineering and Industrial Management, National Tsing Hua University, Hsinchu, 300044, Taiwan
- Abraham Lin
- Engineering Management, Cornell University, Ithaca, NY, USA
- Chi-Chang Chang
- Department of Medical Informatics, Chung Shan Medical University Hospital, Taichung, Taiwan
- Guo-Zhi Wang
- Department of Thoracic Surgery, Chung Shan Medical University Hospital, Taichung, Taiwan
- Frank Cheau-Feng Lin
- Department of Thoracic Surgery, Chung Shan Medical University Hospital, Taichung, Taiwan
- School of Medicine, Chung Shan Medical University, Taichung, Taiwan
2
Li Y, Qiu Z, Fan X, Liu X, Chang EIC, Xu Y. Integrated 3D flow-based multi-atlas brain structure segmentation. PLoS One 2022; 17:e0270339. PMID: 35969596; PMCID: PMC9377636; DOI: 10.1371/journal.pone.0270339.
Abstract
MRI brain structure segmentation plays an important role in neuroimaging studies. Existing methods either demand substantial CPU time, require considerable annotated data, or fail to segment volumes with large deformation. In this paper, we develop a novel multi-atlas-based algorithm for 3D MRI brain structure segmentation. It consists of three modules: registration, atlas selection, and label fusion. Both registration and label fusion leverage an integrated flow based on grayscale and SIFT features. We introduce an effective and efficient strategy for atlas selection that employs the energy generated as a by-product of the registration step. A 3D sequential belief propagation method and a 3D coarse-to-fine flow matching approach are developed for both the registration and label fusion modules. The proposed method is evaluated on five public datasets. The results show that it performs best in almost all settings compared with competitive methods such as ANTs, Elastix, Learning to Rank, and Joint Label Fusion. Moreover, our registration method is more than 7 times as efficient as ANTs SyN, while our label transfer method is 18 times faster than Joint Label Fusion in CPU time. The results on the ADNI dataset demonstrate that our method is applicable to image pairs that require a significant transformation in registration. The performance on a composite dataset suggests that our method succeeds in a cross-modality setting. These results show that the integrated 3D flow-based method is effective and efficient for brain structure segmentation, and they demonstrate the power of SIFT features, multi-atlas segmentation, and classical machine learning algorithms for medical image analysis. The experimental results on public datasets show the proposed method's potential for general applicability across various brain structures and settings.
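The atlas-selection strategy described above ranks candidate atlases by the energy that the registration step has already produced, at no extra cost. A minimal sketch of that idea; the function name and energy values are illustrative, not taken from the paper:

```python
def select_atlases(energies, k=3):
    """Pick the k atlases whose registration to the target converged with the
    lowest energy, reusing the energy computed during the registration step."""
    ranked = sorted(range(len(energies)), key=lambda i: energies[i])
    return ranked[:k]

# Hypothetical final registration energies for five atlases:
best = select_atlases([0.8, 2.1, 0.3, 1.7, 0.9], k=3)
# Atlases 2, 0, and 4 align best with the target -> [2, 0, 4]
```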
Affiliation(s)
- Yeshu Li
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Ziming Qiu
- Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, United States of America
- Xingyu Fan
- Bioengineering College, Chongqing University, Chongqing, China
- Xianglong Liu
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Microsoft Research, Beijing, China
3
Im JH, Lee IJ, Choi Y, Sung J, Ha JS, Lee H. Impact of Denoising on Deep-Learning-Based Automatic Segmentation Framework for Breast Cancer Radiotherapy Planning. Cancers (Basel) 2022; 14:3581. PMID: 35892839; PMCID: PMC9332287; DOI: 10.3390/cancers14153581.
Abstract
Objective: This study aimed to investigate the segmentation accuracy for organs at risk (OARs) when denoised computed tomography (CT) images are used as input to a deep-learning-based auto-segmentation framework. Methods: We used non-contrast-enhanced planning CT scans from 40 patients with breast cancer. The heart, lungs, esophagus, spinal cord, and liver were manually delineated by two experienced radiation oncologists in a double-blind manner. The denoised CT images were used as input to the AccuContour™ segmentation software to increase the signal difference between structures of interest and unwanted noise in non-contrast CT. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), and the results were compared with those of conventional deep-learning-based auto-segmentation without denoising. Results: The average DSC was higher than 0.80 for all OARs except the esophagus. AccuContour-based and denoising-based auto-segmentation demonstrated comparable performance for the lungs and spinal cord but limited performance for the esophagus. For the liver, the improvement from denoising-based auto-segmentation was small but statistically significant, with a better DSC than AccuContour-based auto-segmentation (p < 0.05). Conclusions: Denoising-based auto-segmentation demonstrated satisfactory performance in automatic liver segmentation from non-contrast-enhanced CT scans. Further external validation studies with larger cohorts are needed to verify the usefulness of denoising-based auto-segmentation.
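The Dice similarity coefficient used to assess accuracy above is straightforward to compute from two binary masks. A minimal NumPy sketch with synthetic masks:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True   # 4 voxels
ref = np.zeros((4, 4), bool);  ref[1:3, 1:4] = True    # 6 voxels, 4 shared
# dice(pred, ref) == 2*4 / (4+6) == 0.8
```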
Affiliation(s)
- Jung Ho Im
- CHA Bundang Medical Center, Department of Radiation Oncology, CHA University School of Medicine, Seongnam 13496, Korea
- Ik Jae Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Yeonho Choi
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Jiwon Sung
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Jin Sook Ha
- Department of Radiation Oncology, Gangnam Severance Hospital, Seoul 06273, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Correspondence: Tel.: +82-2-2228-8109; Fax: +82-2-2227-7823
4
Feature Point Extraction and Motion Tracking of Cardiac Color Ultrasound under Improved Lucas-Kanade Algorithm. J Healthc Eng 2021; 2021:4959727. PMID: 34394892; PMCID: PMC8357506; DOI: 10.1155/2021/4959727.
Abstract
The purpose of this research was to study the application of the Lucas–Kanade algorithm, combined with the scale-invariant feature transform (SIFT), to feature point extraction and motion tracking in right ventricular color Doppler ultrasound. Taking the right ventricle as an example, this study analyzed the extraction performance and calculation speed of the SIFT algorithm and the improved Lucas–Kanade algorithm. The calculation time before and after noise removal by the SIFT algorithm was 0.49 s and 0.46 s, respectively, and the number of extracted feature points was 703 and 698, respectively. Both the number of feature points extracted by the SIFT algorithm and its calculation time were significantly better than those of the other algorithms (P < 0.01). The mean number of matched point pairs for in-order and reverse-order matching with the SIFT algorithm was 20.54 and 20.46, respectively. The calculation time and number of feature points for the SIFT speckle tracking method were 1198.85 s and 81, respectively, and those of the optical flow method were 3274.19 s and 80, respectively. The calculation time of the SIFT speckle tracking method was significantly lower than that of the optical flow method (P < 0.05), and there was no statistical difference in the number of feature points between the two methods (P > 0.05). In conclusion, the improved Lucas–Kanade algorithm based on SIFT significantly improves the accuracy of feature extraction and motion tracking in color Doppler ultrasound, demonstrating the algorithm's value in the clinical application of color Doppler ultrasound.
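The classical Lucas–Kanade step at the core of such tracking solves a small linear system built from image gradients over a window. A minimal single-window NumPy sketch on a synthetic sub-pixel shift (the paper's SIFT-based improvements and pyramidal tracking are not reproduced here):

```python
import numpy as np

def lucas_kanade_window(img1, img2):
    """Estimate one (dx, dy) translation between two frames by solving the
    Lucas-Kanade normal equations over the whole window."""
    Iy, Ix = np.gradient(img1)          # spatial gradients (rows = y, cols = x)
    It = img2 - img1                    # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)        # (dx, dy)

# Synthetic test: a Gaussian blob shifted by 0.5 pixel along x.
y, x = np.mgrid[0:40, 0:40]
blob  = np.exp(-((x - 20.0) ** 2 + (y - 20.0) ** 2) / 18.0)
moved = np.exp(-((x - 20.5) ** 2 + (y - 20.0) ** 2) / 18.0)
dx, dy = lucas_kanade_window(blob, moved)
# dx should come out close to 0.5 and dy close to 0.
```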
5
Shi C, Xian M, Zhou X, Wang H, Cheng HD. Multi-slice low-rank tensor decomposition based multi-atlas segmentation: Application to automatic pathological liver CT segmentation. Med Image Anal 2021; 73:102152. PMID: 34280669; DOI: 10.1016/j.media.2021.102152.
Abstract
Liver segmentation from abdominal CT images is an essential step for liver cancer computer-aided diagnosis and surgical planning. However, both the accuracy and robustness of existing liver segmentation methods cannot meet the requirements of clinical applications. In particular, for the common clinical cases where the liver tissue contains major pathology, current segmentation methods show poor performance. In this paper, we propose a novel low-rank tensor decomposition (LRTD) based multi-atlas segmentation (MAS) framework that achieves accurate and robust pathological liver segmentation of CT images. Firstly, we propose a multi-slice LRTD scheme to recover the underlying low-rank structure embedded in 3D medical images. It performs the LRTD on small image segments consisting of multiple consecutive image slices. Then, we present an LRTD-based atlas construction method to generate tumor-free liver atlases that mitigates the performance degradation of liver segmentation due to the presence of tumors. Finally, we introduce an LRTD-based MAS algorithm to derive patient-specific liver atlases for each test image, and to achieve accurate pairwise image registration and label propagation. Extensive experiments on three public databases of pathological liver cases validate the effectiveness of the proposed method. Both qualitative and quantitative results demonstrate that, in the presence of major pathology, the proposed method is more accurate and robust than state-of-the-art methods.
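The low-rank recovery at the heart of the LRTD scheme can be illustrated with a truncated SVD of an unfolded slab of consecutive slices. This matrix-based sketch is only a stand-in for the paper's multi-slice tensor decomposition, run on synthetic data:

```python
import numpy as np

def lowrank_recover(slab, rank):
    """Recover the low-rank structure of a slab of consecutive slices by
    truncated SVD of its unfolding (slices x voxels)."""
    mat = slab.reshape(slab.shape[0], -1)
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return approx.reshape(slab.shape)

rng = np.random.default_rng(1)
# A rank-1 "slab" of 6 slices of 8x8, plus noise.
clean = np.outer(rng.normal(size=6), rng.normal(size=64)).reshape(6, 8, 8)
noisy = clean + 0.05 * rng.normal(size=clean.shape)
rec = lowrank_recover(noisy, rank=1)
# Reconstruction error vs. the clean slab is smaller than the injected noise.
```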
Affiliation(s)
- Changfa Shi
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China; Department of Computer Science, Utah State University, Logan, UT 84322, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
- Xiancheng Zhou
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
- Haotian Wang
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
- Heng-Da Cheng
- Department of Computer Science, Utah State University, Logan, UT 84322, USA
6
Jimenez-Pastor A, Alberich-Bayarri A, Lopez-Gonzalez R, Marti-Aguado D, França M, Bachmann RSM, Mazzucco J, Marti-Bonmati L. Precise whole liver automatic segmentation and quantification of PDFF and R2* on MR images. Eur Radiol 2021; 31:7876-7887. PMID: 33768292; DOI: 10.1007/s00330-021-07838-5.
Abstract
OBJECTIVE To automate the segmentation of whole liver parenchyma on multi-echo chemical shift encoded (MECSE) MR examinations using convolutional neural networks (CNNs) to seamlessly quantify precise organ-related imaging biomarkers such as fat fraction and iron load. METHODS A retrospective multicenter collection of 183 MECSE liver MR examinations was conducted. An encoder-decoder CNN was trained (107 studies) following a 5-fold cross-validation strategy to improve model performance and guard against overfitting. Proton density fat fraction (PDFF) and R2* were quantified on both manual and CNN segmentation masks. Different metrics were used to evaluate CNN performance on unseen internal (46 studies) and external (29 studies) validation datasets to analyze reproducibility. RESULTS The internal test showed excellent results for the automatic segmentation, with a Dice coefficient (DC) of 0.93 ± 0.03 and high correlation between quantification on the predicted mask and on the manual segmentation (rPDFF = 1 and rR2* = 1; p values < 0.001). External validation was also excellent for a different vendor at the same magnetic field strength, proving the generalization of the model to other manufacturers, with a DC of 0.94 ± 0.02. Results were lower for the 1.5-T scanner from the same vendor, with a DC of 0.87 ± 0.06. Both external validations showed high correlation in the quantification (rPDFF = 1 and rR2* = 1; p values < 0.001). In both internal and external validation datasets, the relative error of the PDFF and R2* quantification was below 4% and 1%, respectively. CONCLUSION Liver parenchyma can be accurately segmented with a CNN in a vendor-neutral approach, allowing reproducible automatic whole-organ virtual biopsies. KEY POINTS • Whole liver parenchyma can be automatically segmented using convolutional neural networks. • Deep learning allows the creation of automatic pipelines for precise quantification of liver-related imaging biomarkers such as PDFF and R2*. • MR "virtual biopsy" can become a fast and automatic procedure for the assessment of chronic diffuse liver diseases in clinical practice.
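Once a liver mask is available, biomarker quantification is a masked average over the parametric map, and the reported relative error compares quantification on two masks. A minimal NumPy sketch with a synthetic PDFF map and masks (nothing here reproduces the study's actual pipeline):

```python
import numpy as np

def mean_biomarker(value_map, mask):
    """Mean parametric value (e.g. PDFF in %) over a segmentation mask."""
    return float(value_map[mask].mean())

rng = np.random.default_rng(2)
pdff = rng.uniform(2.0, 8.0, size=(32, 32))              # synthetic PDFF map (%)
manual = np.zeros((32, 32), bool); manual[8:24, 8:24] = True
pred = np.zeros((32, 32), bool);   pred[8:24, 9:24] = True  # slightly smaller mask

m_manual = mean_biomarker(pdff, manual)
m_pred = mean_biomarker(pdff, pred)
rel_err = abs(m_pred - m_manual) / m_manual * 100.0      # relative error in %
# Small mask differences translate into a small quantification error.
```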
Affiliation(s)
- Ana Jimenez-Pastor
- Quantitative Imaging Biomarkers in Medicine, QUIBIM S.L, Aragon Avenue, 30, 13th floor, Office J, 46021, Valencia, Spain.
- Angel Alberich-Bayarri
- Quantitative Imaging Biomarkers in Medicine, QUIBIM S.L, Aragon Avenue, 30, 13th floor, Office J, 46021, Valencia, Spain
- Rafael Lopez-Gonzalez
- Quantitative Imaging Biomarkers in Medicine, QUIBIM S.L, Aragon Avenue, 30, 13th floor, Office J, 46021, Valencia, Spain
- David Marti-Aguado
- Digestive Disease Department, Clinic University Hospital, INCLIVA Health Research Institute, Valencia, Spain
- Manuela França
- Radiology Department, Centro Hospitalar Universitário do Porto (CHUP), Porto, Portugal
- Luis Marti-Bonmati
- Biomedical Imaging Research Group (GIBI230-PREBI) at La Fe Health Research Institute, and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB) Unique Scientific and Technical Infrastructures (ICTS), Valencia, Spain
- Radiology Department, La Fe University and Polytechnic Hospital, Valencia, Spain
7
Zhou X. Automatic Segmentation of Multiple Organs on 3D CT Images by Using Deep Learning Approaches. Adv Exp Med Biol 2020; 1213:135-147. PMID: 32030668; DOI: 10.1007/978-3-030-33128-3_9.
Abstract
This chapter focuses on modern deep learning techniques proposed for automatically recognizing and segmenting multiple organ regions in three-dimensional (3D) computed tomography (CT) images. CT images are widely used in clinical medicine to visualize 3D anatomical structures composed of multiple organ regions inside the human body. Automatic recognition and segmentation of multiple organs on CT images is a fundamental processing step for computer-aided diagnosis, surgery, and radiation therapy systems, which aim to achieve precision and personalized medicine. In this chapter, we introduce our recent work addressing multiple organ segmentation on 3D CT images using deep learning, a completely novel approach, instead of conventional segmentation methods originating from traditional digital image processing techniques. We evaluated and compared the segmentation performance of two deep learning approaches based on 2D and 3D deep convolutional neural networks (CNNs), without and with a pre-processing step. A conventional method based on a probabilistic atlas algorithm, which showed the best performance among the conventional approaches, was adopted as a baseline for comparison. A dataset containing 240 CT scans of different portions of human bodies was used for training the CNNs and validating the segmentation results. Up to 17 types of organ regions in each CT scan were segmented automatically and validated against human annotations using the intersection-over-union (IoU) ratio as the criterion. Our experimental results showed mean IoUs of 79% and 67%, averaged over the 17 organ types, for the proposed 3D and 2D deep CNNs, respectively. All results using the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on the probabilistic atlas algorithm, demonstrating the effectiveness and usefulness of deep learning approaches for multiple organ segmentation on 3D CT images.
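The IoU criterion used for validation above is a simple ratio of mask overlap to mask union. A minimal NumPy sketch with synthetic masks:

```python
import numpy as np

def iou(pred, ref):
    """Intersection over union between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, ref).sum() / union

pred = np.zeros((4, 4), bool); pred[0:2, 0:2] = True   # 4 voxels
ref = np.zeros((4, 4), bool);  ref[0:2, 0:4] = True    # 8 voxels, 4 shared
# iou(pred, ref) == 4 / 8 == 0.5
```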
8
Ahn SH, Yeo AU, Kim KH, Kim C, Goh Y, Cho S, Lee SB, Lim YK, Kim H, Shin D, Kim T, Kim TH, Youn SH, Oh ES, Jeong JH. Comparative clinical evaluation of atlas and deep-learning-based auto-segmentation of organ structures in liver cancer. Radiat Oncol 2019; 14:213. PMID: 31775825; PMCID: PMC6880380; DOI: 10.1186/s13014-019-1392-z.
Abstract
BACKGROUND Accurate and standardized delineation of organs at risk (OARs) is essential for treatment planning and evaluation in radiation therapy. Traditionally, physicians have contoured patient images manually, which is time-consuming and subject to inter-observer variability. This study aims to (a) investigate whether customized, deep-learning-based auto-segmentation can overcome the limitations of manual contouring and (b) compare its performance against a typical atlas-based auto-segmentation method for organ structures in liver cancer. METHODS On-contrast computed tomography image sets of 70 liver cancer patients were used, and four OARs (heart, liver, kidney, and stomach) were manually delineated by three experienced physicians as reference structures. Atlas-based and deep learning auto-segmentations were performed with MIM Maestro 6.5 (MIM Software Inc., Cleveland, OH) and with a deep convolutional neural network (DCNN), respectively. The Hausdorff distance (HD), Dice similarity coefficient (DSC), volume overlap error (VOE), and relative volume difference (RVD) were used to quantitatively evaluate the auto-segmentation results against the reference OAR structures. RESULTS The atlas-based method yielded the following average DSC and standard deviation (SD) values for the heart, liver, right kidney, left kidney, and stomach: 0.92 ± 0.04, 0.93 ± 0.02, 0.86 ± 0.07, 0.85 ± 0.11, and 0.60 ± 0.13, respectively. The deep-learning-based method yielded corresponding values of 0.94 ± 0.01, 0.93 ± 0.01, 0.88 ± 0.03, 0.86 ± 0.03, and 0.73 ± 0.09. The segmentation results show that the deep learning framework is superior to the atlas-based framework except in the case of the liver. Specifically, for the stomach, the DSC, VOE, and RVD showed maximum differences of 21.67%, 25.11%, and 28.80%, respectively. CONCLUSIONS In this study, we demonstrated that a deep learning framework can be used more effectively and efficiently than atlas-based auto-segmentation for most OARs in liver cancer. Extended use of the deep-learning-based framework is anticipated for auto-segmentation of other body sites.
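The four evaluation metrics above (DSC, VOE, RVD, and HD) can all be computed from a pair of binary masks. A brute-force NumPy sketch suitable only for small synthetic masks (a production implementation would compute HD on surface voxels, e.g. via SciPy's `directed_hausdorff`):

```python
import numpy as np

def seg_metrics(pred, ref):
    """DSC, VOE, RVD, and symmetric Hausdorff distance for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dsc = 2.0 * inter / (pred.sum() + ref.sum())
    voe = 1.0 - inter / union                        # volume overlap error
    rvd = (pred.sum() - ref.sum()) / ref.sum()       # relative volume difference
    p = np.argwhere(pred).astype(float)
    r = np.argwhere(ref).astype(float)
    d = np.linalg.norm(p[:, None, :] - r[None, :, :], axis=2)
    hd = max(d.min(axis=1).max(), d.min(axis=0).max())  # symmetric Hausdorff
    return dsc, voe, rvd, hd

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True   # 16 voxels
ref = np.zeros((8, 8), bool);  ref[2:6, 2:7] = True    # 20 voxels
dsc, voe, rvd, hd = seg_metrics(pred, ref)
# dsc = 32/36 ~ 0.889, voe = 0.2, rvd = -0.2, hd = 1.0
```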
Affiliation(s)
- Sang Hee Ahn
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Adam Unjin Yeo
- Peter MacCallum Cancer Centre, Melbourne, VIC, Australia
- Kwang Hyeon Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Chankyu Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Youngmoon Goh
- Department of Radiation Oncology, Asan Medical Center, Seoul, South Korea
- Shinhaeng Cho
- Department of Radiation Oncology, Chonnam National University Medical School, Gwangju, South Korea
- Se Byeong Lee
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Young Kyung Lim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Haksoo Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Dongho Shin
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Taeyoon Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Tae Hyun Kim
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Sang Hee Youn
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Eun Sang Oh
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
- Jong Hwi Jeong
- Department of Radiation Oncology, Proton Therapy Center, National Cancer Center, 323, Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, South Korea
9
Zhang C, Sun M, Wei Y, Zhang H, Xie S, Liu T. Automatic segmentation of arterial tree from 3D computed tomographic pulmonary angiography (CTPA) scans. Comput Assist Surg (Abingdon) 2019; 24:79-86. PMID: 31401886; DOI: 10.1080/24699322.2019.1649077.
Affiliation(s)
- Chi Zhang
- School of Biological Science and Medical Engineering, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Mingxia Sun
- School of Biological Science and Medical Engineering, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Yinan Wei
- School of Biological Science and Medical Engineering, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Haoyuan Zhang
- School of Biological Science and Medical Engineering, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China
- Sheng Xie
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Tongxi Liu
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
10
Ruiz-España S, Domingo J, Díaz-Parra A, Dura E, D'Ocón-Alcañiz V, Arana E, Moratal D. Automatic segmentation of the spine by means of a probabilistic atlas with a special focus on ribs suppression. Med Phys 2017. DOI: 10.1002/mp.12431.
Affiliation(s)
- Silvia Ruiz-España
- Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
- Juan Domingo
- Department of Informatics, Universitat de València, 46100 Burjasot, Spain
- Antonio Díaz-Parra
- Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
- Esther Dura
- Department of Informatics, Universitat de València, 46100 Burjasot, Spain
- Víctor D'Ocón-Alcañiz
- Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
- Estanislao Arana
- Radiology Department, Fundación Instituto Valenciano de Oncología, 46009 Valencia, Spain
- David Moratal
- Center for Biomaterials and Tissue Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
11
Zhou X, Takayama R, Wang S, Hara T, Fujita H. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method. Med Phys 2017; 44:5221-5233. PMID: 28730602; DOI: 10.1002/mp.12480.
Abstract
PURPOSE We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. METHODS We simplify the segmentation of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs), which consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentation in CT cases of different sizes that cover arbitrary scan regions, without any adjustment. RESULTS The proposed network was trained and validated on the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (the lumen and contents of the stomach). Some of these structures had never been reported in previous research on CT segmentation. A database consisting of 240 3D CT scans (95% for training and 5% for testing), together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% of voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. CONCLUSIONS We propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the deep learning of the different 2D sectional appearances of 3D anatomical structures in CT cases, with majority voting over the 3D segmentation results from multiple crossed 2D sections, which achieves better efficiency, generality, and flexibility than conventional segmentation methods that must be guided by human expertise.
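The majority-voting scheme described above picks, per voxel, the most frequent label among the per-orientation segmentations. A minimal NumPy sketch with three tiny synthetic label volumes standing in for the re-stacked FCN outputs:

```python
import numpy as np

def vote_3d(pred_axial, pred_coronal, pred_sagittal):
    """Per-voxel majority vote over three per-orientation label volumes."""
    votes = np.stack([pred_axial, pred_coronal, pred_sagittal])
    n_labels = int(votes.max()) + 1
    # Count votes per label, then pick the winning label per voxel.
    counts = np.stack([(votes == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

a = np.array([[[1, 0]], [[1, 1]]])   # axial-stack prediction
c = np.array([[[1, 0]], [[0, 1]]])   # coronal-stack prediction
s = np.array([[[1, 1]], [[1, 0]]])   # sagittal-stack prediction
fused = vote_3d(a, c, s)
# fused == [[[1, 0]], [[1, 1]]]: each voxel takes the label 2-of-3 views agree on.
```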
Affiliation(s)
- Xiangrong Zhou
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
- Ryosuke Takayama
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
- Song Wang
- Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, 29208, USA
- Takeshi Hara
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
- Hiroshi Fujita
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
12
Esfandiarkhani M, Foruzan AH. A generalized active shape model for segmentation of liver in low-contrast CT volumes. Comput Biol Med 2017; 82:59-70. [DOI: 10.1016/j.compbiomed.2017.01.009] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Revised: 12/24/2016] [Accepted: 01/17/2017] [Indexed: 10/20/2022]
13
Xu Y, Xu C, Kuang X, Wang H, Chang EIC, Huang W, Fan Y. 3D-SIFT-Flow for atlas-based CT liver image segmentation. Med Phys 2017; 43:2229. [PMID: 27147335 DOI: 10.1118/1.4945021] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In this paper, the authors propose a new 3D registration algorithm, 3D scale-invariant feature transform (SIFT) Flow, for multi-atlas-based liver segmentation in computed tomography (CT) images. METHODS For registration, the authors developed a method that exploits dense correspondence using the informative and robust SIFT feature. They computed dense SIFT features for the source and target images and designed an objective function to obtain the correspondence between the two. Labels of the source image were then mapped to the target image according to this correspondence, yielding an accurate segmentation. For fusion, the 2D nonparametric label-transfer method was extended to 3D to fuse the registered 3D atlases. RESULTS Compared with existing registration algorithms, 3D-SIFT-Flow is particularly advantageous in matching anatomical structures (such as the liver) that exhibit large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX and ANTS, and over multi-atlas fusion methods such as joint label fusion. Experimental results on liver segmentation in the MICCAI 2007 Grand Challenge are encouraging, e.g., a Dice overlap ratio of 96.27% ± 0.96% for the proposed method versus the previous state-of-the-art result of 94.90% ± 2.86%. CONCLUSIONS Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, where tissue deformation is large and boundaries are blurry, and that 3D label transfer is effective and efficient for improving registration accuracy.
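The label-transfer step — carrying atlas labels to the target grid through a dense correspondence field — can be sketched roughly like this (a simplified nearest-neighbor warp in NumPy; the array layout and rounding policy are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def transfer_labels(source_labels, flow):
    """Warp atlas labels onto the target grid via a dense correspondence field.

    source_labels: (Z, Y, X) integer atlas labels.
    flow: (Z, Y, X, 3) displacements; target voxel p is assumed to
          correspond to source voxel p + flow[p] (nearest-neighbor rounding,
          since labels cannot be interpolated linearly).
    """
    grids = np.meshgrid(*[np.arange(s) for s in source_labels.shape],
                        indexing="ij")
    coords = np.stack(grids, axis=-1) + np.rint(flow).astype(int)
    # Clamp so out-of-volume correspondences still index a valid voxel.
    for d, size in enumerate(source_labels.shape):
        coords[..., d] = np.clip(coords[..., d], 0, size - 1)
    return source_labels[coords[..., 0], coords[..., 1], coords[..., 2]]
```

In the paper the flow field comes from minimizing a SIFT-feature matching objective, and multiple warped atlases are then combined by nonparametric label fusion; here only the warp itself is shown.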
Affiliation(s)
- Yan Xu, State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191, China; Research Institute of Beihang University in Shenzhen; and Microsoft Research, Beijing 100080, China
- Chenchao Xu, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Xiao Kuang, School of Biological Science and Medical Engineering, Beihang University, Beijing 100191, China
- Hongkai Wang, Department of Biomedical Engineering, Dalian University of Technology, Dalian 116024, China
- Weimin Huang, Institute for Infocomm Research (I2R), Singapore 138632
- Yubo Fan, Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191, China
14
Zheng Y, Ai D, Mu J, Cong W, Wang X, Zhao H, Yang J. Automatic liver segmentation based on appearance and context information. Biomed Eng Online 2017; 16:16. [PMID: 28088195 PMCID: PMC5237528 DOI: 10.1186/s12938-016-0296-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Accepted: 12/08/2016] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND Automated image segmentation reduces clinicians' workload, speeds diagnosis, and standardizes it. METHODS This study proposes an automatic liver segmentation approach based on appearance and context information. The relationship between neighboring pixels within blocks is used to estimate appearance information, which trains the first classifier and yields a probability distribution map. That map supplies context information which, together with the appearance features, trains the next classifier. After several iterations, the prior probability distribution map is obtained and refined through an improved random walk, producing a liver segmentation without user interaction. RESULTS Evaluated on CT images against eight contemporary approaches, the proposed approach achieves the best VOE, RVD, ASD, RMSD, and MSD, and a high average score of 76 under the MICCAI 2007 Grand Challenge scoring system. CONCLUSIONS Experimental results show that the proposed method is superior to eight other state-of-the-art methods.
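The appearance-plus-context iteration can be sketched as follows (a toy NumPy version in which a least-squares linear model stands in for the trained classifier and a 4-neighborhood mean of the probability map stands in for the paper's context features — both are simplifying assumptions for illustration):

```python
import numpy as np

def smooth(prob):
    """Context feature: mean probability over each pixel's 4-neighborhood."""
    p = np.pad(prob, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

def auto_context(intensity, labels, n_iters=3):
    """Iteratively refine a per-pixel foreground probability map.

    The first stage classifies on appearance (intensity) alone; each later
    stage also sees the previous stage's smoothed probability map as a
    context feature, so spatial consistency improves with every pass.
    """
    prob = np.full(intensity.shape, 0.5)      # uninformative initial map
    for _ in range(n_iters):
        x = np.stack([intensity, smooth(prob)], axis=-1)
        X = np.c_[x.reshape(-1, 2), np.ones(intensity.size)]  # + bias column
        w, *_ = np.linalg.lstsq(X, labels.ravel().astype(float), rcond=None)
        prob = np.clip((X @ w).reshape(intensity.shape), 0.0, 1.0)
    return prob
```

In the paper the final map is additionally refined by an improved random walk; this sketch shows only the classifier-iteration idea.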
Affiliation(s)
- Yongchang Zheng, Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, 100730 China
- Danni Ai, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081 China
- Jinrong Mu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081 China
- Weijian Cong, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081 China
- Xuan Wang, Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, 100730 China
- Haitao Zhao, Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, 100730 China
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081 China
15
Reaungamornrat S, De Silva T, Uneri A, Goerres J, Jacobson M, Ketcha M, Vogt S, Kleinszig G, Khanna AJ, Wolinsky JP, Prince JL, Siewerdsen JH. Performance evaluation of MIND demons deformable registration of MR and CT images in spinal interventions. Phys Med Biol 2016; 61:8276-8297. [PMID: 27811396 DOI: 10.1088/0031-9155/61/23/8276] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Accurate intraoperative localization of target anatomy and adjacent nervous and vascular tissue is essential to safe, effective surgery, and multimodality deformable registration can identify such anatomy by fusing preoperative CT or MR images with intraoperative images. A deformable registration method has been developed to estimate viscoelastic diffeomorphisms between preoperative MR and intraoperative CT using modality-independent neighborhood descriptors (MIND) and a Huber metric for robust registration. The method, called MIND Demons, optimizes a constrained symmetric energy functional incorporating priors on smoothness, geodesics, and invertibility by alternating between Gauss-Newton optimization and Tikhonov regularization in a multiresolution scheme. Registration performance of the symmetric MIND Demons energy formulation was evaluated against an asymmetric form, and sensitivity to anisotropic MR voxel size was analyzed in phantom experiments emulating image-guided spine surgery, in comparison to a free-form deformation (FFD) method using local mutual information (LMI). Performance was validated in a clinical study of 15 patients undergoing intervention of the cervical, thoracic, and lumbar spine. The target registration error (TRE) of the symmetric MIND Demons formulation (1.3 ± 0.8 mm, median ± interquartile range) outperformed the asymmetric form (3.6 ± 4.4 mm). The method showed only minor sensitivity to anisotropic MR voxel size, with median TRE of 1.3-2.9 mm for MR slice thicknesses of 0.9-9.9 mm, compared with TRE of 3.2-4.1 mm for LMI FFD over the same range. Evaluation on clinical data demonstrated sub-voxel TRE (<2 mm) in all fifteen cases, with realistic deformations that preserved topology, sub-voxel invertibility (0.001 mm), and positive-determinant spatial Jacobians. The approach therefore appears robust against realistic anisotropic resolution characteristics in MR and yields registration accuracy suitable for image-guided spine surgery.
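A simplified 2D version of the MIND descriptor at the heart of this method can be sketched as follows (single-pixel "patches", a 4-neighborhood, and wrap-around image boundaries are simplifying assumptions; the published descriptor uses patch-based distances over a larger search region):

```python
import numpy as np

def mind_2d(img, offsets=((0, 1), (1, 0), (0, -1), (-1, 0)), eps=1e-6):
    """Simplified 2D modality-independent neighborhood descriptor.

    For each pixel x and offset r, the descriptor channel is
    exp(-(I(x) - I(x+r))^2 / V(x)), where V(x) is the mean squared
    difference over all offsets. Because only local self-similarity is
    encoded, structurally similar MR and CT regions yield similar
    descriptors, which is what enables cross-modality matching.
    """
    diffs = []
    for dy, dx in offsets:
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wrap-around
        diffs.append((img - shifted) ** 2)
    d = np.stack(diffs, axis=-1)                 # (H, W, n_offsets)
    v = d.mean(axis=-1, keepdims=True) + eps     # local variance estimate
    desc = np.exp(-d / v)
    return desc / desc.max(axis=-1, keepdims=True)  # per-pixel normalization
```

Registration then proceeds by comparing these descriptor vectors across modalities (with a Huber metric in the paper) instead of raw intensities.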
Affiliation(s)
- S Reaungamornrat, Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA