1. Yin Y, Tang Z, Weng H. Application of visual transformer in renal image analysis. Biomed Eng Online 2024;23:27. [PMID: 38439100] [PMCID: PMC10913284] [DOI: 10.1186/s12938-024-01209-z]
Abstract
The Deep Self-Attention Network (Transformer) is an encoder-decoder architecture that excels at establishing long-distance dependencies and was first applied in natural language processing. Because its strengths complement the inductive bias of the convolutional neural network (CNN), the Transformer has gradually been applied to medical image processing, including kidney image processing, and has become a hot research topic in recent years. To explore new ideas and directions in the field of renal image processing, this paper outlines the characteristics of the Transformer network model; summarizes the application of Transformer-based models in renal image segmentation, classification, detection, electronic medical records, and decision-making systems; compares them with CNN-based renal image processing algorithms; and analyzes the advantages and disadvantages of this technique in renal image processing. In addition, this paper gives an outlook on the development trend of the Transformer in renal image processing, providing a valuable reference for renal image analysis.
Affiliation(s)
- Yuwei Yin: The College of Health Sciences and Engineering, University of Shanghai for Science and Technology, 516 Jungong Highway, Yangpu Area, Shanghai, 200093, China; The College of Medical Technology, Shanghai University of Medicine & Health Sciences, 279 Zhouzhu Highway, Pudong New Area, Shanghai, 201318, China
- Zhixian Tang: The College of Medical Technology, Shanghai University of Medicine & Health Sciences, 279 Zhouzhu Highway, Pudong New Area, Shanghai, 201318, China
- Huachun Weng: The College of Health Sciences and Engineering, University of Shanghai for Science and Technology, 516 Jungong Highway, Yangpu Area, Shanghai, 200093, China; The College of Medical Technology, Shanghai University of Medicine & Health Sciences, 279 Zhouzhu Highway, Pudong New Area, Shanghai, 201318, China
2. Gao H, Lyu M, Zhao X, Yang F, Bai X. Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation. Med Image Anal 2023;87:102838. [PMID: 37196536] [DOI: 10.1016/j.media.2023.102838]
Abstract
Accurate delineation of multiple organs is a critical process for various medical procedures, and manual delineation can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, may not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to handle class variability with class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on this dataset validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 3.63 mm 95% Hausdorff Distance and an 83.32% Dice Similarity Coefficient on average.
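The class-wise convolution idea above restricts each organ class to its own filters, so organ-specific responses are not mixed across classes. A toy NumPy sketch of that restriction as a per-class 2D convolution (the function name, shapes, and kernels are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def classwise_conv(features, kernels):
    """Convolve each class channel with its own kernel (no cross-class mixing).

    features: (C, H, W) per-class feature maps; kernels: (C, k, k), one
    small spatial kernel per organ class; zero padding keeps the size.
    """
    C, H, W = features.shape
    k = kernels.shape[-1]
    pad = k // 2
    padded = np.pad(features, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(features)
    for c in range(C):           # each class sees only its own kernel
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(padded[c, i:i + k, j:j + k] * kernels[c])
    return out

# Identity kernels (center tap = 1) leave every class channel unchanged.
feats = np.arange(24.0).reshape(2, 3, 4)
ident = np.zeros((2, 3, 3))
ident[:, 1, 1] = 1.0
assert np.allclose(classwise_conv(feats, ident), feats)
```

In a real network each per-class branch would be a learned grouped convolution; the loop form here only makes the "one kernel per organ class" constraint explicit.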
Affiliation(s)
- Hongjian Gao: Image Processing Center, Beihang University, Beijing 102206, China
- Mengyao Lyu: School of Software, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Xinyue Zhao: School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, China
- Fan Yang: Image Processing Center, Beihang University, Beijing 102206, China
- Xiangzhi Bai: Image Processing Center, Beihang University, Beijing 102206, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
3. Zhang G, Yang Z, Huo B, Chai S, Jiang S. Multiorgan segmentation from partially labeled datasets with conditional nnU-Net. Comput Biol Med 2021;136:104658. [PMID: 34311262] [DOI: 10.1016/j.compbiomed.2021.104658]
Abstract
Accurate and robust multiorgan abdominal CT segmentation plays a significant role in numerous clinical applications, such as therapy treatment planning and treatment delivery. Almost all existing segmentation networks rely on fully annotated data with strong supervision. However, producing fully annotated multiorgan CT data is both laborious and time-consuming, while massive partially labeled datasets are usually easily accessible. In this paper, we propose a conditional nnU-Net trained on the union of partially labeled datasets for multiorgan segmentation. The deep model employs the state-of-the-art nnU-Net as the backbone and introduces a conditioning strategy by feeding auxiliary information into the decoder architecture as an additional input layer. This model leverages the prior conditional information to identify the organ class at the pixel level and encourages recovery of organs' spatial information. Furthermore, we adopt a deep supervision mechanism to refine the outputs at different scales and apply a combination of Dice loss and Focal loss to optimize the training model. Our proposed method is evaluated on seven publicly available datasets of the liver, pancreas, spleen, and kidney, achieving promising segmentation performance. The proposed conditional nnU-Net breaks down the barriers between nonoverlapping labeled datasets and further alleviates the problem of data hunger in multiorgan segmentation.
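The training objective above combines Dice loss and Focal loss. A minimal NumPy sketch of such a combination for a binary probability map (the weights and gamma value are illustrative, not the paper's settings):

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss (1 - Dice) for a binary probability map."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

def focal_loss(prob, target, gamma=2.0, eps=1e-6):
    """Binary focal loss: down-weights easy, well-classified voxels."""
    prob = np.clip(prob, eps, 1.0 - eps)
    pt = np.where(target == 1, prob, 1.0 - prob)  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt)))

def combined_loss(prob, target, w_dice=1.0, w_focal=1.0):
    """Weighted sum of Dice and Focal losses."""
    return w_dice * dice_loss(prob, target) + w_focal * focal_loss(prob, target)

# A confident, correct prediction should incur a smaller loss than a poor one.
target = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.95, 0.90, 0.05, 0.10])
bad = np.array([0.30, 0.40, 0.60, 0.50])
assert combined_loss(good, target) < combined_loss(bad, target)
```

Dice loss handles class imbalance at the region level, while the focal term concentrates the pixel-wise gradient on hard voxels, which is why the two are often summed.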
Affiliation(s)
- Guobin Zhang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Bin Huo: Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shude Chai: Department of Oncology, Tianjin Medical University Second Hospital, Tianjin, 300211, China
- Shan Jiang: School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
4. Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021;117:102109. [PMID: 34127239] [DOI: 10.1016/j.artmed.2021.102109]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks: in addition to the discriminator, which pushes the model to create realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Fine-tuning encoders pre-trained on a large amount of non-medical images alleviates data-scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Employed for healthy liver, kidney, and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. In the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge, organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR, and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs with good generalization capability. The comprehensive evaluation provided suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making.
Affiliation(s)
- Pierre-Henri Conze: IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Ali Emre Kavur: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Emilie Cornec-Le Gall: Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Naciye Sinem Gezer: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Yannick Le Meur: Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
- M Alper Selver: Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- François Rousseau: IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
5. Barat M, Hoeffel C, Aissaoui M, Dohan A, Oudjit A, Dautry R, Paisant A, Malgras B, Cottereau AS, Soyer P. Focal splenic lesions: Imaging spectrum of diseases on CT, MRI and PET/CT. Diagn Interv Imaging 2021;102:501-513. [PMID: 33965354] [DOI: 10.1016/j.diii.2021.03.006]
Abstract
The spleen can be affected by a variety of diseases. Some are readily identified as variations of normal anatomy or benign disease on imaging. However, for a substantial number of focal splenic abnormalities the diagnosis can be difficult, so that histopathologic analysis may be required for a definite diagnosis. In this review, the typical splenic abnormalities that can be diagnosed with imaging with a high degree of confidence are illustrated. The complementary roles of computed tomography (CT), magnetic resonance imaging, and positron emission tomography/CT in the diagnostic approach are discussed. Finally, current applications and future trends of radiomics and artificial intelligence for the diagnosis of splenic diseases are addressed.
Affiliation(s)
- Maxime Barat: Department of Radiology, Hôpital Cochin, AP-HP, 75014 Paris, France; Université de Paris, 75006 Paris, France
- Christine Hoeffel: Department of Radiology, Reims University Hospital, 51092 Reims, France; CRESTIC, University of Reims Champagne-Ardenne, 51100 Reims, France
- Anthony Dohan: Department of Radiology, Hôpital Cochin, AP-HP, 75014 Paris, France; Université de Paris, 75006 Paris, France
- Amar Oudjit: Department of Radiology, Hôpital Cochin, AP-HP, 75014 Paris, France
- Raphael Dautry: Department of Radiology, Hôpital Cochin, AP-HP, 75014 Paris, France
- Anita Paisant: Department of Radiology, University Hospital of Angers, 49100 Angers, France; Faculté de Médecine, Université d'Angers, 49045 Angers, France
- Brice Malgras: Department of Digestive and Endocrine Surgery, Bégin Army Training Hospital, 94160 Saint-Mandé, France; École du Val-de-Grâce, 75005 Paris, France
- Anne-Ségolène Cottereau: Université de Paris, 75006 Paris, France; Department of Nuclear Medicine, Hôpital Cochin, AP-HP, 75014 Paris, France
- Philippe Soyer: Department of Radiology, Hôpital Cochin, AP-HP, 75014 Paris, France; Université de Paris, 75006 Paris, France
6. Yang Y, Tang Y, Gao R, Bao S, Huo Y, McKenna MT, Savona MR, Abramson RG, Landman BA. Validation and estimation of spleen volume via computer-assisted segmentation on clinically acquired CT scans. J Med Imaging (Bellingham) 2021;8:014004. [PMID: 33634205] [PMCID: PMC7893322] [DOI: 10.1117/1.jmi.8.1.014004]
Abstract
Purpose: Deep learning is a promising technique for spleen segmentation. Our study aims to validate the reproducibility of deep learning-based spleen volume estimation by performing spleen segmentation on clinically acquired computed tomography (CT) scans from patients with myeloproliferative neoplasms. Approach: As approved by the institutional review board, we obtained 138 de-identified abdominal CT scans. A sum of voxel volumes over an expert annotator's segmentations establishes the ground truth (estimation 1). We used our deep convolutional neural network (estimation 2) alongside traditional linear estimations (estimations 3 and 4) to estimate spleen volumes independently. Dice coefficient, Hausdorff distance, R² coefficient, Pearson R coefficient, absolute difference in volume, and relative difference in volume were calculated for estimations 2 to 4 against the ground truth to compare and assess the methods' performances. We re-labeled a scan-rescan subset of 40 studies to evaluate method reproducibility. Results: Calculated against the ground truth, the R² coefficients for our method (estimation 2) and the linear methods (estimations 3 and 4) are 0.998, 0.954, and 0.973, respectively. The Pearson R coefficients for the estimations against the ground truth are 0.999, 0.963, and 0.978, respectively (paired t-tests produced p < 0.05 between estimations 2 and 3 and between 2 and 4). Conclusion: The deep convolutional neural network algorithm shows excellent potential in rendering more precise spleen volume estimations. Our computer-aided segmentation exhibits reasonable improvements in splenic volume estimation accuracy.
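Estimation 1 above is a sum of voxel volumes over the expert segmentation. A small NumPy sketch of that computation, together with the Dice coefficient used for comparison (the toy mask and voxel spacing are illustrative, not study data):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation: voxel count x voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy "spleen": 1000 voxels at 1 x 1 x 2 mm spacing -> 2000 mm^3 = 2 mL.
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:15, 5:15, 5:15] = True
assert mask_volume_ml(mask, (1.0, 1.0, 2.0)) == 2.0
assert dice(mask, mask) == 1.0
```

With a real CT segmentation the spacing would come from the image header (e.g., slice thickness and in-plane pixel spacing), and the volume estimate follows directly from the labeled voxels.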
Affiliation(s)
- Yiyuan Yang: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Yucheng Tang: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Riqiang Gao: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Shunxing Bao: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Yuankai Huo: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Matthew T. McKenna: Vanderbilt University School of Medicine, Vanderbilt-Ingram Cancer Center, Nashville, Tennessee, United States; Vanderbilt University School of Medicine, Department of Surgery, Nashville, Tennessee, United States
- Michael R. Savona: Vanderbilt University School of Medicine, Vanderbilt-Ingram Cancer Center, Nashville, Tennessee, United States; Vanderbilt University School of Medicine, Department of Medicine, Nashville, Tennessee, United States; Vanderbilt University School of Medicine, Program in Cancer Biology, Nashville, Tennessee, United States
- Bennett A. Landman: Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States; Vanderbilt University School of Medicine, Vanderbilt-Ingram Cancer Center, Nashville, Tennessee, United States; Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
7. Zhu Q, Li L, Hao J, Zha Y, Zhang Y, Cheng Y, Liao F, Li P. Selective information passing for MR/CT image segmentation. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-05407-3]
8. Lin D, Li Y, Nwe TL, Dong S, Oo ZM. RefineU-Net: Improved U-Net with progressive global feedbacks and residual attention guided local refinement for medical image segmentation. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.07.013]
9. Tang O, Xu Y, Tang Y, Lee HH, Chen Y, Gao D, Han S, Gao R, Savona MR, Abramson RG, Huo Y, Landman BA. Validation and Optimization of Multi-Organ Segmentation on Clinical Imaging Archives. Proc SPIE Int Soc Opt Eng 2020;11313:1131320. [PMID: 34040277] [PMCID: PMC8148084] [DOI: 10.1117/12.2549035]
Abstract
Segmentation of abdominal computed tomography (CT) provides spatial context, morphological properties, and a framework for tissue-specific radiomics to guide quantitative radiological assessment. A 2015 MICCAI challenge spurred substantial innovation in multi-organ abdominal CT segmentation with both traditional and deep learning methods. Recent innovations in deep methods have driven performance toward levels at which clinical translation is appealing. However, continued cross-validation on open datasets presents the risk of indirect knowledge contamination and could result in circular reasoning. Moreover, "real world" segmentations can be challenging due to the wide variability of abdominal physiology among patients. Herein, we perform two data retrievals to capture clinically acquired, deidentified abdominal CT cohorts with respect to a recently published variation on 3D U-Net (the baseline algorithm). First, we retrieved 2004 deidentified studies on 476 patients with diagnosis codes involving spleen abnormalities (cohort A). Second, we retrieved 4313 deidentified studies on 1754 patients without diagnosis codes involving spleen abnormalities (cohort B). Prospective evaluation of the existing algorithm on the two cohorts yielded failure rates of 13% and 8%, respectively. We then identified 51 subjects in cohort A with segmentation failures and manually corrected the liver and gallbladder labels. Re-training the model with the added manual labels improved performance, reducing the failure rates to 9% and 6% for cohorts A and B, respectively. In summary, the performance of the baseline on the prospective cohorts was similar to that on previously published datasets. Moreover, adding data from the first cohort substantively improved performance when evaluated on the second, withheld validation cohort.
Affiliation(s)
- Olivia Tang: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Yuchen Xu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Yucheng Tang: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Ho Hin Lee: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Dashan Gao: 12 Sigma Technologies, San Diego, CA 92130, USA
- Riqiang Gao: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Michael R. Savona: Hematology and Oncology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Bennett A. Landman: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37212, USA; Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
10. Lenchik L, Heacock L, Weaver AA, Boutin RD, Cook TS, Itri J, Filippi CG, Gullapalli RP, Lee J, Zagurovskaya M, Retson T, Godwin K, Nicholson J, Narayana PA. Automated Segmentation of Tissues Using CT and MRI: A Systematic Review. Acad Radiol 2019;26:1695-1706. [PMID: 31405724] [PMCID: PMC6878163] [DOI: 10.1016/j.acra.2019.07.006]
Abstract
RATIONALE AND OBJECTIVES: The automated segmentation of organs and tissues throughout the body using computed tomography and magnetic resonance imaging has been rapidly increasing. Research into many medical conditions has benefited greatly from these approaches by allowing the development of more rapid and reproducible quantitative imaging markers. These markers have been used to help diagnose disease, determine prognosis, select patients for therapy, and follow responses to therapy. Because some of these tools are now transitioning from research environments to clinical practice, it is important for radiologists to become familiar with various methods used for automated segmentation. MATERIALS AND METHODS: The Radiology Research Alliance of the Association of University Radiologists convened an Automated Segmentation Task Force to conduct a systematic review of the peer-reviewed literature on this topic. RESULTS: The systematic review presented here includes 408 studies and discusses various approaches to automated segmentation using computed tomography and magnetic resonance imaging for neurologic, thoracic, abdominal, musculoskeletal, and breast imaging applications. CONCLUSION: These insights should help prepare radiologists to better evaluate automated segmentation tools and apply them not only to research, but eventually to clinical practice.
Affiliation(s)
- Leon Lenchik: Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Laura Heacock: Department of Radiology, NYU Langone, New York, New York
- Ashley A Weaver: Department of Biomedical Engineering, Wake Forest School of Medicine, Winston-Salem, North Carolina
- Robert D Boutin: Department of Radiology, University of California Davis School of Medicine, Sacramento, California
- Tessa S Cook: Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Jason Itri: Department of Radiology, Wake Forest School of Medicine, Medical Center Boulevard, Winston-Salem, NC 27157
- Christopher G Filippi: Department of Radiology, Donald and Barbara School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, New York, New York
- Rao P Gullapalli: Department of Radiology, University of Maryland School of Medicine, Baltimore, Maryland
- James Lee: Department of Radiology, University of Kentucky, Lexington, Kentucky
- Tara Retson: Department of Radiology, University of California San Diego, San Diego, California
- Kendra Godwin: Medical Library, Memorial Sloan Kettering Cancer Center, New York, New York
- Joey Nicholson: NYU Health Sciences Library, NYU School of Medicine, NYU Langone Health, New York, New York
- Ponnada A Narayana: Department of Diagnostic and Interventional Imaging, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas
11. Huo Y, Xu Z, Bao S, Bermudez C, Moon H, Parvathaneni P, Moyo TK, Savona MR, Assad A, Abramson RG, Landman BA. Splenomegaly Segmentation on Multi-Modal MRI Using Deep Convolutional Networks. IEEE Trans Med Imaging 2019;38:1185-1196. [PMID: 30442602] [PMCID: PMC7194446] [DOI: 10.1109/tmi.2018.2881110]
Abstract
Splenomegaly, abnormal enlargement of the spleen, is a non-invasive clinical biomarker for liver and spleen diseases. Automated segmentation methods are essential to efficiently quantify splenomegaly from clinically acquired abdominal magnetic resonance imaging (MRI) scans. However, the task is challenging due to: 1) large anatomical and spatial variations of splenomegaly; 2) large inter- and intra-scan intensity variations on multi-modal MRI; and 3) limited numbers of labeled splenomegaly scans. In this paper, we propose the Splenomegaly Segmentation Network (SS-Net) to introduce deep convolutional neural network (DCNN) approaches to multi-modal MRI splenomegaly segmentation. Large convolutional kernel layers were used to address the spatial and anatomical variations, while conditional generative adversarial networks were employed to enhance the segmentation performance of SS-Net in an end-to-end manner. A clinically acquired cohort containing both T1-weighted (T1w) and T2-weighted (T2w) MRI splenomegaly scans was used to train and evaluate the performance of multi-atlas segmentation (MAS), 2D DCNNs, and a 3D DCNN. In the experimental results, the DCNN methods achieved performance superior to the state-of-the-art MAS method, and the proposed SS-Net achieved the highest median and mean Dice scores among the investigated baseline DCNN methods.
Affiliation(s)
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Zhoubing Xu: Department of Electrical Engineering and Computer Science, Vanderbilt University, TN 37235, USA
- Shunxing Bao: Department of Electrical Engineering and Computer Science, Vanderbilt University, TN 37235, USA
- Camilo Bermudez: Department of Biomedical Engineering, Vanderbilt University, TN 37235, USA
- Hyeonsoo Moon: Department of Electrical Engineering and Computer Science, Vanderbilt University, TN 37235, USA
- Prasanna Parvathaneni: Department of Electrical Engineering and Computer Science, Vanderbilt University, TN 37235, USA
- Tamara K. Moyo: Department of Medicine, Vanderbilt University Medical Center, TN 37235, USA
- Michael R. Savona: Department of Medicine, Vanderbilt University Medical Center, TN 37235, USA
- Richard G. Abramson: Department of Radiology and Radiological Science, Vanderbilt University Medical Center, TN 37235, USA
- Bennett A. Landman: Department of Electrical Engineering and Computer Science, Vanderbilt University, TN 37235, USA
12. Moon H, Huo Y, Abramson RG, Peters RA, Assad A, Moyo TK, Savona MR, Landman BA. Acceleration of spleen segmentation with end-to-end deep learning method and automated pipeline. Comput Biol Med 2019;107:109-117. [PMID: 30798219] [PMCID: PMC7086455] [DOI: 10.1016/j.compbiomed.2019.01.018]
Abstract
Delineation of Computed Tomography (CT) abdominal anatomical structures, specifically spleen segmentation, is useful not only for measuring tissue volume and biomarkers but also for monitoring interventions. Recently, deep learning segmentation algorithms have been widely used to reduce the time humans spend labeling CT data. However, computerized segmentation has two major difficulties: managing intermediate results (e.g., resampled scans, 2D sliced images for deep learning), and setting up the system environments and packages for autonomous execution. To overcome these issues, we propose an automated pipeline for abdominal spleen segmentation. This pipeline provides an end-to-end synthesized process that allows users to avoid installing any packages and to handle the intermediate results locally. The pipeline has three major stages: pre-processing of the input data; segmentation of the spleen using deep learning; and 3D reconstruction with the generated labels by matching the segmentation results with the original image dimensions, which can then be used for display or demonstration. Given the same volume scan, the approach described here takes about 50 s on average, whereas manual segmentation takes about 30 min on average. Even including all subsidiary processes, such as preprocessing and necessary setup, the whole pipeline requires about 20 min on average from beginning to end.
Affiliation(s)
- Hyeonsoo Moon: Department of Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37235, USA
- Yuankai Huo: Department of Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37235, USA
- Richard G Abramson: Vanderbilt University Institute of Imaging Science, 161 21st Avenue South, Nashville, TN 37232, USA; Vanderbilt-Ingram Cancer Center, 2220 Pierce Ave, Nashville, TN 37232, USA
- Richard Alan Peters: Department of Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37235, USA
- Albert Assad: Incyte Corporation, 1801 Augustine Cut Off, Wilmington, DE 19803, USA
- Tamara K Moyo: Department of Medicine, 250 25th Ave N, Suite 412, Nashville, TN 37203, USA
- Michael R Savona: Department of Medicine, 250 25th Ave N, Suite 412, Nashville, TN 37203, USA; Vanderbilt Institute for Clinical and Translational Research, 2525 West End Ave, Nashville, TN 37235, USA
- Bennett A Landman: Department of Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN 37235, USA; Vanderbilt University Institute of Imaging Science, 161 21st Avenue South, Nashville, TN 37232, USA
13. Tang Y, Huo Y, Xiong Y, Moon H, Assad A, Moyo TK, Savona MR, Abramson R, Landman BA. Improving Splenomegaly Segmentation by Learning from Heterogeneous Multi-Source Labels. Proc SPIE Int Soc Opt Eng 2019;10949:1094908. [PMID: 31762532] [PMCID: PMC6874226] [DOI: 10.1117/12.2512842]
Abstract
Splenomegaly segmentation on computed tomography (CT) abdominal scans is essential for identifying spleen biomarkers and has applications for quantitative assessment in patients with liver and spleen disease. Deep convolutional neural network automated segmentation has shown promising performance for splenomegaly segmentation. However, manual labeling of abdominal structures is resource intensive, so labeled abdominal imaging data are a rare resource despite their essential role in algorithm training. Hence, the number of annotated labels (e.g., spleen only) is typically limited within a single study. However, with the development of data sharing techniques, more and more publicly available labeled cohorts are available from different sources. A key new challenge is to co-learn from the multi-source data, even with different numbers of labeled abdominal organs in each study. Thus, it is appealing to design a co-learning strategy to train a deep network from heterogeneously labeled scans. In this paper, we propose a new deep convolutional neural network (DCNN) based method that integrates heterogeneous multi-source labeled cohorts for splenomegaly segmentation. To enable the proposed approach, a novel loss function is introduced based on the Dice similarity coefficient to adaptively learn multi-organ information from different sources. Three cohorts were employed in our experiments: the first cohort (98 CT scans) has only splenomegaly labels, while the second training cohort (100 CT scans) has 15 distinct anatomical labels with normal spleens. A separate, independent cohort consisting of 19 splenomegaly CT scans with labeled spleens was used as the testing cohort.
The proposed method achieved the highest median Dice similarity coefficient value (0.94), which is superior (p-value < 0.01 against each other method) to the baselines of multi-atlas segmentation (0.86), SS-Net segmentation with only spleen labels (0.90), and U-Net segmentation with multi-organ training (0.91). Our approach for adapting the loss function and training structure is not specific to the abdominal context and may be beneficial in other situations where datasets with varied label sets are available.
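The co-learning idea above, a Dice-based loss that adapts to whichever organ labels each cohort actually provides, can be sketched as follows. This is an illustrative masking scheme, not the paper's exact loss: the per-organ availability flags and uniform averaging are assumptions.

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Soft Dice similarity between two same-shaped maps."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def adaptive_dice_loss(pred, target, label_available):
    """pred, target: (organs, H, W) probability/binary maps.
    label_available: per-organ booleans for this scan's cohort; organs the
    cohort never labeled are simply excluded, so they contribute no penalty."""
    terms = [1.0 - dice(pred[k], target[k])
             for k in range(pred.shape[0]) if label_available[k]]
    return float(np.mean(terms)) if terms else 0.0
```

With this masking, a spleen-only cohort and a 15-organ cohort can share one network: each scan only back-propagates error through the organs it was annotated with.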
Collapse
Affiliation(s)
- Yucheng Tang
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Yuankai Huo
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Yunxi Xiong
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Hyeonsoo Moon
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | | | - Tamara K. Moyo
- Hematology and Oncology, Vanderbilt University Medical Center, Nashville, TN, USA 37235
| | - Michael R. Savona
- Hematology and Oncology, Vanderbilt University Medical Center, Nashville, TN, USA 37235
| | - Richard Abramson
- Radiology, Vanderbilt University Medical Center, Nashville, TN, USA 37235
| | - Bennett A. Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Radiology, Vanderbilt University Medical Center, Nashville, TN, USA 37235
| |
Collapse
|
14
|
Bobo MF, Bao S, Huo Y, Yao Y, Virostko J, Plassard AJ, Lyu I, Assad A, Abramson RG, Hilmes MA, Landman BA. Fully Convolutional Neural Networks Improve Abdominal Organ Segmentation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 10574:105742V. [PMID: 29887665 PMCID: PMC5992909 DOI: 10.1117/12.2293751] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Abdominal image segmentation is a challenging, yet important clinical problem. Variations in body size, position, and relative organ positions greatly complicate the segmentation process. Historically, multi-atlas methods have achieved leading results across imaging modalities and anatomical targets. However, deep learning is rapidly overtaking classical approaches for image segmentation. Recently, Zhou et al. showed that fully convolutional networks produce excellent results in abdominal organ segmentation of computed tomography (CT) scans. Yet, deep learning approaches have not been applied to whole-abdomen magnetic resonance imaging (MRI) segmentation. Herein, we evaluate the applicability of an existing fully convolutional neural network (FCNN) designed for CT imaging to segment abdominal organs on T2-weighted (T2w) MRIs with two examples. In the primary example, we compare a classical multi-atlas approach with the FCNN on forty-five T2w MRIs acquired from splenomegaly patients with five organs labeled (liver, spleen, left kidney, right kidney, and stomach). Thirty-six images were used for training while nine were used for testing. The FCNN resulted in a Dice similarity coefficient (DSC) of 0.930 for spleens, 0.730 for left kidneys, 0.780 for right kidneys, 0.913 for livers, and 0.556 for stomachs. The performance measures for livers, spleens, right kidneys, and stomachs were significantly better than multi-atlas (p < 0.05, Wilcoxon rank-sum test). In a secondary example, we compare the multi-atlas approach with the FCNN on 138 distinct T2w MRIs with manually labeled pancreases (one label). On the pancreas dataset, the FCNN resulted in a median DSC of 0.691 for pancreases versus 0.287 for multi-atlas. The results are highly promising given relatively limited training data and without specific training of the FCNN model, and illustrate the potential of deep learning approaches to transcend imaging modalities.
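The DSC values reported above are computed per organ from integer label volumes. A minimal sketch of that evaluation metric (organ label codes here are illustrative, not the datasets' actual coding):

```python
import numpy as np

def dsc(pred_labels, true_labels, organ_id, eps=1e-6):
    """Per-organ Dice similarity coefficient from integer label volumes:
    2 * |P ∩ T| / (|P| + |T|) for the voxels assigned to organ_id."""
    p = (pred_labels == organ_id)
    t = (true_labels == organ_id)
    return (2.0 * np.sum(p & t) + eps) / (np.sum(p) + np.sum(t) + eps)
```

A DSC of 1.0 means perfect voxel-wise agreement; the abstract's per-organ scores (e.g., 0.930 for spleens) are this quantity averaged over the test scans.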
Collapse
Affiliation(s)
- Meg F Bobo
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Shunxing Bao
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Yuankai Huo
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Yuang Yao
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Jack Virostko
- Department of Medicine, Dell Medical School, University of Texas at Austin, Austin, TX 78712
| | | | - Ilwoo Lyu
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | | | - Richard G Abramson
- Radiology and Radiological Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Melissa A Hilmes
- Radiology and Radiological Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Radiology and Radiological Science, Vanderbilt University, Nashville, TN, USA 37235
- Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| |
Collapse
|
15
|
Huo Y, Xu Z, Bao S, Bermudez C, Plassard AJ, Liu J, Yao Y, Assad A, Abramson RG, Landman BA. Splenomegaly Segmentation using Global Convolutional Kernels and Conditional Generative Adversarial Networks. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2018; 10574:1057409. [PMID: 29887666 PMCID: PMC5992918 DOI: 10.1117/12.2293406] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Spleen volume estimation using automated image segmentation techniques may be used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance Imaging (MRI) scans. In recent years, Deep Convolutional Neural Network (DCNN) segmentation methods have demonstrated advantages for abdominal organ segmentation. However, variations in both the size and shape of the spleen on MRI images may result in many false positive and false negative labels when deploying DCNN-based methods. In this paper, we propose the Splenomegaly Segmentation Network (SSNet) to address spatial variations when segmenting extraordinarily large spleens. SSNet was designed based on the framework of image-to-image conditional generative adversarial networks (cGAN). Specifically, the Global Convolutional Network (GCN) was used as the generator to reduce false negatives, while the Markovian discriminator (PatchGAN) was used to alleviate false positives. A cohort of clinically acquired 3D MRI scans (both T1-weighted and T2-weighted) from patients with splenomegaly was used to train and test the networks. The experimental results demonstrated a mean Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet on independently tested MRI volumes of patients with splenomegaly.
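The PatchGAN component above scores local patches rather than emitting one real/fake decision per image, which is what lets it penalize localized false positives. A minimal sketch of that idea (the per-patch "score" here is just the patch mean, a hypothetical stand-in for the discriminator's learned convolutional stack; the least-squares objective is one common cGAN choice, not necessarily the paper's):

```python
import numpy as np

def patch_scores(image, patch=8):
    """Tile the map into non-overlapping patches and emit one score per patch,
    mimicking a Markovian (PatchGAN) discriminator's grid of outputs."""
    h, w = image.shape
    grid = image[:h - h % patch, :w - w % patch].reshape(
        h // patch, patch, w // patch, patch)
    return grid.mean(axis=(1, 3))

def adversarial_loss(scores, is_real):
    """Least-squares GAN loss averaged over the patch grid: real patches are
    pushed toward 1, generated patches toward 0."""
    target = 1.0 if is_real else 0.0
    return float(np.mean((scores - target) ** 2))
```

Because every patch is scored independently, a spurious blob in one corner of a generated segmentation raises the loss for that patch alone, giving the generator a spatially localized training signal.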
Collapse
Affiliation(s)
- Yuankai Huo
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Zhoubing Xu
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | - Shunxing Bao
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Camilo Bermudez
- Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
| | | | - Jiaqi Liu
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Yuang Yao
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
| | | | - Richard G Abramson
- Radiology and Radiological Science, Vanderbilt University, Nashville, TN, USA 37235
| | - Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Computer Science, Vanderbilt University, Nashville, TN, USA 37235
- Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Radiology and Radiological Science, Vanderbilt University, Nashville, TN, USA 37235
| |
Collapse
|