101
Heller N, Tejpaul R, Isensee F, Benidir T, Hofmann M, Blake P, Rengal Z, Moore K, Sathianathen N, Kalapara A, Rosenberg J, Peterson S, Walczak E, Kutikov A, Uzzo RG, Palacios DA, Remer EM, Campbell SC, Papanikolopoulos N, Weight CJ. Computer-Generated R.E.N.A.L. Nephrometry Scores Yield Comparable Predictive Results to Those of Human-Expert Scores in Predicting Oncologic and Perioperative Outcomes. J Urol 2022; 207:1105-1115. [PMID: 34968146] [PMCID: PMC8995335] [DOI: 10.1097/ju.0000000000002390]
Abstract
PURPOSE We sought to automate R.E.N.A.L. (for radius, exophytic/endophytic, nearness of tumor to collecting system, anterior/posterior, location relative to polar line) nephrometry scoring of preoperative computerized tomography scans and create an artificial intelligence-generated score (AI-score). Subsequently, we aimed to evaluate its ability to predict meaningful oncologic and perioperative outcomes as compared to expert human-generated nephrometry scores (H-scores). MATERIALS AND METHODS A total of 300 patients with preoperative computerized tomography were identified from a cohort of 544 consecutive patients undergoing surgical extirpation for suspected renal cancer at a single institution. A deep neural network approach was used to automatically segment kidneys and tumors, and geometric algorithms were developed to estimate components of the R.E.N.A.L. nephrometry score. Tumors were independently scored by medical personnel blinded to AI-scores. AI- and H-score agreement was assessed using Lin's concordance correlation, and their predictive abilities for both oncologic and perioperative outcomes were assessed using areas under the curve. RESULTS Median age was 60 years (IQR 51-68), and 40% of patients were female. Median tumor size was 4.2 cm, and 91.3% had malignant tumors, including 27%, 37% and 24% with high stage, grade and necrosis, respectively. There was significant agreement between H-scores and AI-scores (Lin's ρ = 0.59). Both AI- and H-scores similarly predicted meaningful oncologic outcomes (p <0.001), including presence of malignancy, necrosis, and high-grade and -stage disease (p <0.003). They also predicted surgical approach (p <0.004) and specific perioperative outcomes (p <0.05). CONCLUSIONS Fully automated AI-generated R.E.N.A.L. scores are comparable to human-generated R.E.N.A.L. scores and predict a wide variety of meaningful patient-centered outcomes.
This unambiguous artificial intelligence-based scoring is intended to facilitate wider adoption of the R.E.N.A.L. score.
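As background for the agreement statistic reported above, Lin's concordance correlation coefficient (CCC) penalises both poor correlation and systematic offset between two raters' scores. A minimal pure-Python sketch (the function name and sample data are illustrative, not from the study):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n           # population variance of x
    vy = sum((b - my) ** 2 for b in y) / n           # population variance of y
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

print(round(lins_ccc([4, 7, 9, 10], [4, 7, 9, 10]), 3))  # identical scores → 1.0
```

Unlike Pearson's r, a constant offset between AI- and human-generated scores lowers the CCC even when the rankings agree perfectly.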
Affiliation(s)
- N Heller
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota
- R Tejpaul
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota
- F Isensee
- German Cancer Research Center (DKFZ) Heidelberg, University of Heidelberg, Heidelberg, Germany
- T Benidir
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio
- M Hofmann
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio
- P Blake
- University of Minnesota School of Medicine, Minneapolis, Minnesota
- Z Rengal
- University of Minnesota School of Medicine, Minneapolis, Minnesota
- K Moore
- Carleton College, Northfield, Minnesota
- N Sathianathen
- Department of Urology, University of Melbourne, Melbourne, Australia
- A Kalapara
- Department of Urology, University of Melbourne, Melbourne, Australia
- J Rosenberg
- University of Minnesota School of Medicine, Minneapolis, Minnesota
- E Walczak
- University of Minnesota School of Medicine, Minneapolis, Minnesota
- A Kutikov
- Urology, Fox Chase Cancer Center, Philadelphia, Pennsylvania
- R G Uzzo
- Urology, Fox Chase Cancer Center, Philadelphia, Pennsylvania
- D A Palacios
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio
- E M Remer
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio
- Department of Diagnostic Radiology, Imaging Institute Cleveland Clinic, Cleveland, Ohio
- S C Campbell
- Glickman Urological and Kidney Institute, Cleveland Clinic, Cleveland, Ohio
- N Papanikolopoulos
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, Minnesota
102
Zhang Y, Yuan N, Zhang Z, Du J, Wang T, Liu B, Yang A, Lv K, Ma G, Lei B. Unsupervised Domain Selective Graph Convolutional Network for Preoperative Prediction of Lymph Node Metastasis in Gastric Cancer. Med Image Anal 2022; 79:102467. [PMID: 35537338] [DOI: 10.1016/j.media.2022.102467]
103
Kittipongdaja P, Siriborvornratanakul T. Automatic kidney segmentation using 2.5D ResUNet and 2.5D DenseUNet for malignant potential analysis in complex renal cyst based on CT images. EURASIP J Image Video Process 2022; 2022:5. [PMID: 35340560] [PMCID: PMC8938741] [DOI: 10.1186/s13640-022-00581-x]
Abstract
The Bosniak renal cyst classification has been widely used in determining the complexity of a renal cyst. However, it turns out that about half of patients undergoing surgery for Bosniak category III cysts take surgical risks that reward them with no clinical benefit at all, because their pathological results reveal that the cysts are actually benign, not malignant. This problem inspires us to use recently popular deep learning techniques and study alternative analytics methods for precise binary classification (benign or malignant tumor) on computerized tomography (CT) images. To achieve our goal, two consecutive steps are required: segmenting kidneys or lesions from CT images, then classifying the segmented kidneys. In this paper, we propose a study of kidney segmentation using 2.5D ResUNet and 2.5D DenseUNet for efficiently extracting intra-slice and inter-slice features. Our models are trained and validated on the public data set from the Kidney Tumor Segmentation (KiTS19) challenge in two different training environments. All experimental models achieve high mean kidney Dice scores of at least 95% on the KiTS19 validation set consisting of 60 patients. Apart from the KiTS19 data set, we also conduct separate experiments on abdominal CT images of four Thai patients. On these four patients, our experimental models show a drop in performance, where the best mean kidney Dice score is 87.60%.
Affiliation(s)
- Parin Kittipongdaja
- Graduate School of Applied Statistics, National Institute of Development Administration, Bangkok, Thailand
104
Sun P, Mo Z, Hu F, Liu F, Mo T, Zhang Y, Chen Z. Kidney Tumor Segmentation Based on FR2PAttU-Net Model. Front Oncol 2022; 12:853281. [PMID: 35372025] [PMCID: PMC8968695] [DOI: 10.3389/fonc.2022.853281]
Abstract
The incidence rate of kidney tumors increases year by year, especially for incidental small tumors, and it is challenging for doctors to segment kidney tumors from kidney CT images. This paper therefore proposes a deep learning model, FR2PAttU-Net, to help doctors process many CT images quickly and efficiently and save medical resources. FR2PAttU-Net is not a new CNN structure but focuses on improving the segmentation of kidney tumors, even when the tumors are not clear. Firstly, we use the R2Att network in the "U" structure of the original U-Net and add parallel convolution to construct the FR2PAttU-Net model, increasing the width of the model, improving its adaptability to features at different scales of the image, and avoiding the failure of deeper networks to learn valuable features. Then, we use a fuzzy set enhancement algorithm to enhance the input image so that it yields more prominent features suited to the model. Finally, we use the KiTS19 data set and, taking the size of the kidney tumor as the category judgment standard, augment the small-sample data to balance the data set. We tested the segmentation performance of the model at different convolution widths and depths, obtaining a 0.948 kidney Dice score and a 0.911 tumor Dice score, for a 0.930 composite score, showing a good segmentation effect.
Affiliation(s)
- Peng Sun
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Zengnan Mo
- Center for Genomic and Personalized Medicine, Guangxi Medical University, Nanning, China
- Fangrong Hu
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Fang Liu
- College of Life and Environment Science, Guilin University of Electronic Technology, Guilin, China
- Taiping Mo
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Yewei Zhang
- Hepatopancreatobiliary Center, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Zhencheng Chen
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
105
Improving segmentation and classification of renal tumors in small sample 3D CT images using transfer learning with convolutional neural networks. Int J Comput Assist Radiol Surg 2022; 17:1303-1311. [PMID: 35290645] [DOI: 10.1007/s11548-022-02587-2]
Abstract
PURPOSE Computed tomography (CT) images can display internal organs of patients and are particularly suitable for preoperative surgical diagnoses. The increasing demands for computer-aided systems in recent years have facilitated the development of many automated algorithms, especially deep convolutional neural networks, to segment organs and tumors or identify diseases from CT images. However, performances of some systems are highly affected by the amount of training data, while the sizes of medical image data sets, especially three-dimensional (3D) data sets, are usually small. This condition limits the application of deep learning. METHODS In this study, given a practical clinical data set that has 3D CT images of 20 patients with renal carcinoma, we designed a pipeline employing transfer learning to alleviate the detrimental effect of the small sample size. A dual-channel fine segmentation network (FS-Net) was constructed to segment kidney and tumor regions, with 210 publicly available 3D images from a competition employed during the training phase. We also built discriminative classifiers to classify the benign and malignant tumors based on the segmented regions, where both handcrafted and deep features were tested. RESULTS Our experimental results showed that the Dice values of segmented kidney and tumor regions were 0.9662 and 0.7685, respectively, which were better than those of state-of-the-art methods. The classification model using radiomics features can classify most of the tumors correctly. CONCLUSIONS The designed FS-Net was demonstrated to be more effective than simply fine-tuning on the practical small size data set given that the model can borrow knowledge from large auxiliary data without diluting the signal in primary data. For the small data set, radiomics features outperformed deep features in the classification of benign and malignant tumors. 
This work highlights the importance of architecture design in transfer learning, and the proposed pipeline is anticipated to provide a reference and inspiration for small data analysis.
106
Fan C, Sun K, Min X, Cai W, Lv W, Ma X, Li Y, Chen C, Zhao P, Qiao J, Lu J, Guo Y, Xia L. Discriminating malignant from benign testicular masses using machine-learning based radiomics signature of apparent diffusion coefficient maps: Comparing with conventional mean and minimum ADC values. Eur J Radiol 2022; 148:110158. [DOI: 10.1016/j.ejrad.2022.110158]
107
Klepaczko A, Majos M, Stefańczyk L, Eikefjord E, Lundervold A. Whole kidney and renal cortex segmentation in contrast-enhanced MRI using a joint classification and segmentation convolutional neural network. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.02.002]
108
Rathi N, Palacios DA, Abramczyk E, Tanaka H, Ye Y, Li J, Yasuda Y, Abouassaly R, Eltemamy M, Wee A, Weight C, Campbell SC. Predicting GFR after radical nephrectomy: the importance of split renal function. World J Urol 2022; 40:1011-1018. [PMID: 35022828] [DOI: 10.1007/s00345-021-03918-9]
Abstract
PURPOSE To evaluate a conceptually simple model to predict new baseline glomerular filtration rate (NBGFR) after radical nephrectomy (RN) based on split renal function (SRF) and renal functional compensation (RFC), and to compare its predictive accuracy against a validated non-SRF-based model. RN should only be considered when the tumor has increased oncologic potential and/or when there is concern about perioperative morbidity with partial nephrectomy (PN) due to increased tumor complexity. In these circumstances, accurate prediction of NBGFR after RN can be important, with a threshold NBGFR > 45 ml/min/1.73 m2 correlating with improved overall survival. METHODS 236 RCC patients who underwent RN (2010-2012) with preoperative imaging (CT/MRI) and relevant functional data were included. NBGFR was defined as GFR 3-12 months post-RN. SRF was determined using semi-automated software that provides differential parenchymal volume analysis (PVA) from preoperative imaging. Our SRF-based model was: Predicted NBGFR = 1.24 × global GFR pre-RN × contralateral SRF, with 1.24 representing the mean RFC estimate from independent analyses. A non-SRF-based model was also assessed: Predicted NBGFR = 17 + 0.65 × preoperative GFR - 0.25 × age + 3 (if tumor > 7 cm) - 2 (if diabetes). Alignment between predicted and observed NBGFR was assessed by comparing correlation coefficients and area-under-the-curve (AUC) analyses. RESULTS The correlation coefficients (r) were 0.87 and 0.72 for the SRF-based and non-SRF-based models, respectively (p = 0.005). For prediction of NBGFR > 45 ml/min/1.73 m2, the SRF-based and non-SRF-based models provided AUCs of 0.94 and 0.87, respectively (p = 0.044). CONCLUSION Previous non-SRF-based models to predict NBGFR post-RN are complex and omit two important parameters: SRF and RFC. Our proposed model prioritizes these parameters and provides a conceptually simple, accurate, and clinically implementable approach to predict NBGFR post-RN.
SRF can be easily obtained using PVA software that is affordable, readily available (FUJIFILM Medical Systems), and more accurate than nuclear renal scans. The SRF-based model demonstrates greater predictive accuracy than the non-SRF-based model, including at the clinically important predictive threshold of NBGFR > 45 ml/min/1.73 m2.
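The two prediction models quoted in the abstract are simple enough to express directly. A sketch using the stated coefficients (function names are illustrative; the formulas are as given in the abstract above):

```python
def predict_nbgfr_srf(global_gfr_pre_rn, srf_contralateral, rfc=1.24):
    """SRF-based model: mean renal functional compensation (1.24)
    times global preoperative GFR times contralateral split renal function."""
    return rfc * global_gfr_pre_rn * srf_contralateral

def predict_nbgfr_non_srf(gfr_pre, age, tumor_gt_7cm=False, diabetes=False):
    """Non-SRF-based comparison model quoted in the abstract."""
    return (17 + 0.65 * gfr_pre - 0.25 * age
            + (3 if tumor_gt_7cm else 0) - (2 if diabetes else 0))

# e.g. global GFR of 80 ml/min/1.73 m2 with 50% contralateral SRF:
print(round(predict_nbgfr_srf(80, 0.50), 1))  # → 49.6
```

Whether a predicted NBGFR clears the 45 ml/min/1.73 m2 threshold discussed above can then be checked directly.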
Affiliation(s)
- Nityam Rathi
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Diego A Palacios
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Emily Abramczyk
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Hajime Tanaka
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Yunlin Ye
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Department of Urology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Jianbo Li
- Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH, USA
- Yosuke Yasuda
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Department of Urology, Tokyo Medical and Dental University, Tokyo, Japan
- Robert Abouassaly
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Mohamed Eltemamy
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Alvin Wee
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Christopher Weight
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
- Steven C Campbell
- Center for Urologic Oncology, Glickman Urological and Kidney Institute, Cleveland Clinic, Room Q10-120, 9500 Euclid Avenue, Cleveland, OH, 44195, USA
109
Rao P, Chatterjee S, Sharma S. Weight pruning-UNet: Weight pruning UNet with depth-wise separable convolutions for semantic segmentation of kidney tumors. J Med Signals Sens 2022; 12:108-113. [PMID: 35755976] [PMCID: PMC9215835] [DOI: 10.4103/jmss.jmss_108_21]
Abstract
Background: Accurate semantic segmentation of kidney tumors in computed tomography (CT) images is difficult because tumors feature varied forms and occasionally look alike. The KiTS19 challenge sets the groundwork for future advances in kidney tumor segmentation. Methods: We present weight pruning (WP)-UNet, a lightweight, small-scale deep network model; it involves few parameters, with a quick inference time and a low floating-point computational complexity. Results: We trained and evaluated the model with CT images from 210 patients. The model achieved a training Dice score of 0.98 for the kidney tumor region. The proposed model uses only 1,297,441 parameters and 7.2e floating-point operations, three times lower than those of other network models. Conclusions: The results confirm that the proposed architecture is smaller than UNet, involves less computational complexity, and yields good accuracy, indicating its potential applicability in kidney tumor imaging.
110
Artificial Intelligence in Urology. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_172]
111
Moreau N, Rousseau C, Fourcade C, Santini G, Brennan A, Ferrer L, Lacombe M, Guillerminet C, Colombié M, Jézéquel P, Campone M, Normand N, Rubeaux M. Automatic Segmentation of Metastatic Breast Cancer Lesions on 18F-FDG PET/CT Longitudinal Acquisitions for Treatment Response Assessment. Cancers (Basel) 2021; 14:101. [PMID: 35008265] [PMCID: PMC8750371] [DOI: 10.3390/cancers14010101]
Abstract
Metastatic breast cancer patients receive lifelong medication and are regularly monitored for disease progression. The aim of this work was to (1) propose networks to segment breast cancer metastatic lesions on longitudinal whole-body PET/CT and (2) extract imaging biomarkers from the segmentations and evaluate their potential to determine treatment response. Baseline and follow-up PET/CT images of 60 patients from the EPICUREseinmeta study were used to train two deep-learning models to segment breast cancer metastatic lesions: one for baseline images and one for follow-up images. From the automatic segmentations, four imaging biomarkers were computed and evaluated: SULpeak, Total Lesion Glycolysis (TLG), PET Bone Index (PBI) and PET Liver Index (PLI). The first network obtained a mean Dice score of 0.66 on baseline acquisitions. The second network obtained a mean Dice score of 0.58 on follow-up acquisitions. SULpeak, with a 32% decrease between baseline and follow-up, was the biomarker best able to assess patients' response (sensitivity 87%, specificity 87%), followed by TLG (43% decrease, sensitivity 73%, specificity 81%) and PBI (8% decrease, sensitivity 69%, specificity 69%). Our networks constitute promising tools for the automatic segmentation of lesions in patients with metastatic breast cancer allowing treatment response assessment with several biomarkers.
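As an aside on how biomarker-based response assessment like the above is scored, sensitivity and specificity follow from thresholding the relative biomarker change and comparing against ground-truth response labels. A pure-Python sketch (the 32% SULpeak decrease is taken from the abstract; the function name and sample data are illustrative assumptions):

```python
def sens_spec(rel_changes, responders, threshold=-0.32):
    """Sensitivity/specificity of calling 'response' when the relative
    biomarker change (e.g. SULpeak) drops by at least the threshold."""
    tp = fn = tn = fp = 0
    for change, is_responder in zip(rel_changes, responders):
        predicted = change <= threshold  # e.g. a >= 32% decrease
        if is_responder:
            tp += predicted
            fn += not predicted
        else:
            fp += predicted
            tn += not predicted
    return tp / (tp + fn), tn / (tn + fp)

# two true responders, two non-responders (illustrative data)
print(sens_spec([-0.50, -0.10, -0.40, 0.20], [True, True, False, False]))  # → (0.5, 0.5)
```

Sweeping the threshold and plotting sensitivity against 1 - specificity yields the ROC curves from which such operating points are typically chosen.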
Affiliation(s)
- Noémie Moreau
- LS2N, University of Nantes, CNRS, 44000 Nantes, France; (C.F.); (N.N.)
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France; (G.S.); (A.B.); (M.R.)
- Caroline Rousseau
- CRCINA, University of Nantes, INSERM UMR1232, CNRS-ERL6001, 44000 Nantes, France; (C.R.); (P.J.)
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- Constance Fourcade
- LS2N, University of Nantes, CNRS, 44000 Nantes, France; (C.F.); (N.N.)
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France; (G.S.); (A.B.); (M.R.)
- Gianmarco Santini
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France; (G.S.); (A.B.); (M.R.)
- Aislinn Brennan
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France; (G.S.); (A.B.); (M.R.)
- Ludovic Ferrer
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- CRCINA, University of Angers, INSERM UMR1232, CNRS-ERL6001, 49000 Angers, France
- Marie Lacombe
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- Mathilde Colombié
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- Pascal Jézéquel
- CRCINA, University of Nantes, INSERM UMR1232, CNRS-ERL6001, 44000 Nantes, France; (C.R.); (P.J.)
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- Mario Campone
- ICO Cancer Center, 49000 Angers, France; (L.F.); (M.L.); (C.G.); (M.C.); (M.C.)
- CRCINA, University of Angers, INSERM UMR1232, CNRS-ERL6001, 49000 Angers, France
- Nicolas Normand
- LS2N, University of Nantes, CNRS, 44000 Nantes, France; (C.F.); (N.N.)
- Mathieu Rubeaux
- Keosys Medical Imaging, 13 Imp. Serge Reggiani, 44815 Saint-Herblain, France; (G.S.); (A.B.); (M.R.)
112
Oreiller V, Andrearczyk V, Jreige M, Boughdad S, Elhalawani H, Castelli J, Vallières M, Zhu S, Xie J, Peng Y, Iantsen A, Hatt M, Yuan Y, Ma J, Yang X, Rao C, Pai S, Ghimire K, Feng X, Naser MA, Fuller CD, Yousefirizi F, Rahmim A, Chen H, Wang L, Prior JO, Depeursinge A. Head and neck tumor segmentation in PET/CT: The HECKTOR challenge. Med Image Anal 2021; 77:102336. [PMID: 35016077] [DOI: 10.1016/j.media.2021.102336]
Abstract
This paper relates the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task is the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered to the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods proved to successfully leverage the wealth of metabolic and structural properties of combined PET and CT modalities, significantly outperforming human inter-observer agreement level, semi-automatic thresholding based on PET images as well as other single modality-based methods. This promising performance is one step forward towards large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
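For reference, the ranking metric used in the challenge, the Dice Score Coefficient (DSC), measures overlap between a predicted and a ground-truth mask. A minimal sketch on flat binary masks (illustrative only, not the organisers' evaluation code):

```python
def dice_score(pred, target):
    """Dice Score Coefficient between two flat binary masks (lists of 0/1)."""
    inter = sum(p and t for p, t in zip(pred, target))  # overlapping voxels
    denom = sum(pred) + sum(target)
    return 1.0 if denom == 0 else 2.0 * inter / denom   # two empty masks agree

print(dice_score([1, 1, 0, 0], [1, 0, 0, 1]))  # → 0.5
```

The challenge average of per-case DSCs across the 53 held-out cases gives the leaderboard scores quoted above (0.7591 for the winner versus 0.61 inter-observer agreement).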
Affiliation(s)
- Valentin Oreiller
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland.
- Vincent Andrearczyk
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Mario Jreige
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Sarah Boughdad
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Joel Castelli
- Radiotherapy Department, Cancer Institute Eugène Marquis, Rennes, France
- Martin Vallières
- Department of Computer Science, Université de Sherbrooke, Sherbrooke, Québec, Canada
- Simeng Zhu
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Juanying Xie
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Ying Peng
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, PR China
- Andrei Iantsen
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University Brest, Brest, France
- Yading Yuan
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jun Ma
- Department of Mathematics, Nanjing University of Science and Technology, Jiangsu, China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Jiangsu, China
- Chinmay Rao
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Suraj Pai
- Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Xue Feng
- Carina Medical, Lexington, KY, 40513, USA; Department of Biomedical Engineering, University of Virginia, Charlottesville VA 22903, USA
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, USA
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver BC, Canada
- Huai Chen
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- Lisheng Wang
- Department of Automation, Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China
- John O Prior
- Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
- Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Department of Nuclear Medicine and Molecular Imaging, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
113
Yeung M, Sala E, Schönlieb CB, Rundo L. Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput Med Imaging Graph 2021; 95:102026. [PMID: 34953431] [PMCID: PMC8785124] [DOI: 10.1016/j.compmedimag.2021.102026]
Abstract
Automatic segmentation methods are an important advancement in medical image analysis. Machine learning techniques, and deep neural networks in particular, are the state-of-the-art for most medical image segmentation tasks. Issues with class imbalance pose a significant challenge in medical datasets, with lesions often occupying a considerably smaller volume relative to the background. Loss functions used in the training of deep learning algorithms differ in their robustness to class imbalance, with direct consequences for model convergence. The most commonly used loss functions for segmentation are based on either the cross entropy loss, Dice loss or a combination of the two. We propose the Unified Focal loss, a new hierarchical framework that generalises Dice and cross entropy-based losses for handling class imbalance. We evaluate our proposed loss function on five publicly available, class imbalanced medical imaging datasets: CVC-ClinicDB, Digital Retinal Images for Vessel Extraction (DRIVE), Breast Ultrasound 2017 (BUS2017), Brain Tumour Segmentation 2020 (BraTS20) and Kidney Tumour Segmentation 2019 (KiTS19). We compare our loss function performance against six Dice or cross entropy-based loss functions, across 2D binary, 3D binary and 3D multiclass segmentation tasks, demonstrating that our proposed loss function is robust to class imbalance and consistently outperforms the other loss functions. Source code is available at: https://github.com/mlyg/unified-focal-loss.
Highlights:
- Loss function choice is crucial for class-imbalanced medical imaging datasets.
- Understanding the relationship between loss functions is key to inform choice.
- Unified Focal loss generalises Dice and cross-entropy based loss functions.
- Unified Focal loss outperforms various Dice and cross-entropy based loss functions.
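To make the baseline concrete: the losses this framework generalises combine a region-based (Dice) term with a distribution-based (cross entropy) term. A minimal pure-Python sketch of such a compound loss on flat probability maps (the weighting `lam` and the flat-list interface are illustrative assumptions, not the paper's API; the actual Unified Focal loss adds focal-style modulation on top of this combination):

```python
import math

def soft_dice_loss(probs, targets, eps=1e-7):
    """1 - soft Dice overlap between predicted probabilities and binary targets."""
    inter = sum(p * t for p, t in zip(probs, targets))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)

def bce_loss(probs, targets, eps=1e-7):
    """Mean binary cross entropy; eps guards against log(0)."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, targets)) / len(probs)

def combo_loss(probs, targets, lam=0.5):
    """Weighted sum of the region-based and distribution-based terms."""
    return lam * soft_dice_loss(probs, targets) + (1 - lam) * bce_loss(probs, targets)
```

Lower is better; as the predicted probabilities approach the binary targets, both terms approach zero, and `lam` trades off overlap quality against per-voxel calibration.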
Affiliation(s)
- Michael Yeung: Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, United Kingdom.
- Evis Sala: Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, United Kingdom.
- Carola-Bibiane Schönlieb: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom.
- Leonardo Rundo: Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, United Kingdom; Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA 84084, Italy.
114
Gao Y, Tang Y, Ren D, Cheng S, Wang Y, Yi L, Peng S. Deep Learning Plus Three-Dimensional Printing in the Management of Giant (>15 cm) Sporadic Renal Angiomyolipoma: An Initial Report. Front Oncol 2021; 11:724986. [PMID: 34868918 PMCID: PMC8634108 DOI: 10.3389/fonc.2021.724986]
Abstract
Objective To evaluate the feasibility and effectiveness of deep learning (DL) plus three-dimensional (3D) printing in the management of giant sporadic renal angiomyolipoma (RAML). Methods The medical records of patients with giant (>15 cm) RAML were retrospectively reviewed from January 2011 to December 2020. 3D visualized and 3D printed kidney models were generated using DL algorithms and 3D printing technology, respectively. Patient demographics and intra- and postoperative outcomes were compared between those with 3D-assisted surgery (3D group) and those with routine surgery (control group). Results Among 372 sporadic RAML patients, 31 with giant tumors were eligible for analysis. The median age was 40.6 (18-70) years, and the median tumor size was 18.2 (15-28) cm. Seventeen of 31 (54.8%) underwent surgical kidney removal. Overall, 11 underwent 3D-assisted surgery and 20 underwent routine surgery. A significantly higher success rate of partial nephrectomy (PN) was noted in the 3D group (72.7% vs. 30.0%). Patients in the 3D group presented a lower reduction in renal function but experienced a longer operation time, greater estimated blood loss, and higher postoperative morbidity. Subgroup analysis was conducted between patients undergoing PN with or without 3D assistance. Although the differences were not significant, patients with 3D-assisted PN had a slightly larger tumor size and a higher nephrometry score, possibly contributing to a relatively higher rate of complications. However, 3D-assisted PN led to a shorter warm ischemia time and a lower renal function loss, without reaching significance. Another subgroup analysis between patients undergoing 3D-assisted PN or 3D-assisted radical nephrectomy (RN) showed no statistically significant difference. However, the nearness of the tumor to the second branch of the renal artery was relatively shorter in the 3D-assisted PN subgroup than in the 3D-assisted RN subgroup, and the difference between them was close to significant.
Conclusions 3D visualized and printed kidney models appear to be useful additional tools to assist operative management and avoid a high rate of kidney removal for giant sporadic RAMLs.
Affiliation(s)
- Yunliang Gao: Department of Urology, The Second Xiangya Hospital, Central South University, Changsha, China
- Yuanyuan Tang: Department of Oncology, The Second Xiangya Hospital, Central South University, Changsha, China
- Da Ren: Department of Urology, The Second Xiangya Hospital, Central South University, Changsha, China
- Shunhua Cheng: Hunan Engineering Research Center of Smart and Precise Medicine, Changsha, China
- Yinhuai Wang: Department of Urology, The Second Xiangya Hospital, Central South University, Changsha, China
- Lu Yi: Department of Urology, The Second Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Smart and Precise Medicine, Changsha, China
- Shuang Peng: Clinical Nursing Teaching and Research Section, The Second Xiangya Hospital, Central South University, Changsha, China
115
Kong F, Wilson N, Shadden S. A deep-learning approach for direct whole-heart mesh reconstruction. Med Image Anal 2021; 74:102222. [PMID: 34543913 PMCID: PMC9503710 DOI: 10.1016/j.media.2021.102222]
Abstract
Automated construction of surface geometries of cardiac structures from volumetric medical images is important for a number of clinical applications. While deep-learning-based approaches have demonstrated promising reconstruction precision, they have mostly focused on voxel-wise segmentation followed by surface reconstruction and post-processing. Such approaches suffer from a number of limitations, including disconnected regions or incorrect surface topology due to erroneous segmentation, and stair-case artifacts due to limited segmentation resolution. We propose a novel deep-learning-based approach that directly predicts whole-heart surface meshes from volumetric CT and MR image data. Our approach leverages a graph convolutional neural network to predict deformations of the vertices of a pre-defined template mesh to reconstruct multiple anatomical structures in a 3D image volume. Our method generated whole-heart reconstructions with accuracy as good as or better than that of prior deep-learning-based methods on both CT and MR data. Furthermore, by deforming a template mesh, our method can generate whole-heart geometries with better anatomical consistency and produce high-resolution geometries from lower-resolution input image data. Our method was also able to produce temporally consistent surface mesh predictions for heart motion from CT or MR cine sequences, and can therefore potentially be applied to efficiently constructing 4D whole-heart dynamics. Our code and pre-trained networks are available at https://github.com/fkong7/MeshDeformNet.
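The key property of this approach (deforming a fixed template rather than segmenting voxels) is that mesh connectivity is set once and never changes, so the output can never have holes or disconnected parts. A toy numpy sketch of that idea (not the authors' graph-network code; the displacement field here is fabricated for illustration):

```python
import numpy as np

# Toy template: a unit square "mesh" with fixed connectivity (edge list).
template_vertices = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # topology is defined once, up front

def deform_template(vertices, offsets):
    """Apply predicted per-vertex offsets; the edge list is untouched, so
    the deformed mesh keeps the template's topology by construction."""
    assert vertices.shape == offsets.shape
    return vertices + offsets

# In the paper the offsets come from a graph convolutional network conditioned
# on image features; here we stand in a fixed displacement field.
predicted_offsets = np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [0.0, -0.1]])
deformed = deform_template(template_vertices, predicted_offsets)
```

Because only vertex positions move, surface reconstruction and topology-repair post-processing steps are unnecessary, which is the contrast the abstract draws with voxel-wise pipelines.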
Affiliation(s)
- Fanwei Kong: Mechanical Engineering Department, University of California, Berkeley, Berkeley, CA 94709, United States.
- Nathan Wilson: Open Source Medical Software Corporation, Santa Monica, CA, United States.
- Shawn Shadden: Mechanical Engineering Department, University of California, Berkeley, Berkeley, CA 94709, United States.
116
Bones IK, Bos C, Moonen C, Hendrikse J, van Stralen M. Workflow for automatic renal perfusion quantification using ASL-MRI and machine learning. Magn Reson Med 2021; 87:800-809. [PMID: 34672029 PMCID: PMC9297892 DOI: 10.1002/mrm.29016]
Abstract
PURPOSE Clinical applicability of renal arterial spin labeling (ASL) MRI is hampered by time-consuming and observer-dependent post-processing, including manual segmentation of the cortex to obtain cortical renal blood flow (RBF). Machine learning has proven its value in medical image segmentation, including that of the kidneys. This study presents a fully automatic workflow for renal cortex perfusion quantification that includes machine learning-based segmentation. METHODS A fully automatic workflow was achieved by constructing a cascade of three U-Nets to replace manual segmentation in ASL quantification. All 1.5T ASL-MRI data, including M0, T1, and ASL label-control images, from 10 healthy volunteers were used for training (dataset 1). Performance of the trained cascade was validated on 4 additional volunteers (dataset 2). Manual segmentations were generated by 2 observers, yielding reference and second-observer segmentations. To validate the intended use of the automatic segmentations, manual and automatic RBF values in mL/min/100 g were compared. RESULTS Good agreement was found between automatic and manual segmentations on dataset 1 (Dice score = 0.78 ± 0.04), which was in line with inter-observer variability (Dice score = 0.77 ± 0.02). Good agreement was confirmed on dataset 2 (Dice score = 0.75 ± 0.03). Moreover, similar cortical RBF was obtained with automatic or manual segmentations, on average and at the subject level, at 211 ± 31 mL/min/100 g and 208 ± 31 mL/min/100 g (P < .05), respectively, with narrow limits of agreement at -11 and 4.6 mL/min/100 g. RBF accuracy with automated segmentations was confirmed on dataset 2. CONCLUSION Our proposed method automates ASL quantification without compromising RBF accuracy. With quick processing and no observer dependence, renal ASL-MRI becomes more attractive for clinical application as well as for longitudinal and multi-center studies.
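The Dice score used above to compare automatic and manual cortex masks has a standard definition, independent of the authors' code; a minimal sketch:

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Reading the results through this lens: an automatic-vs-manual Dice of 0.78 ± 0.04 that matches the inter-observer Dice of 0.77 ± 0.02 means the model disagrees with a human about as much as two humans disagree with each other.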
Affiliation(s)
- Isabell K Bones: Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Clemens Bos: Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Chrit Moonen: Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Jeroen Hendrikse: Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
- Marijn van Stralen: Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
117
Healthy Kidney Segmentation in the DCE-MR Images Using a Convolutional Neural Network and Temporal Signal Characteristics. Sensors 2021; 21:6714. [PMID: 34695931 PMCID: PMC8538657 DOI: 10.3390/s21206714]
Abstract
Quantification of renal perfusion based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) requires determination of signal intensity time courses in the region of renal parenchyma. Thus, selection of voxels representing the kidney must be accomplished with special care and constitutes one of the major technical limitations hampering wider usage of this technique as a standard clinical routine. Manual segmentation of renal compartments, even if performed by experts, is a common source of decreased repeatability and reproducibility. In this paper, we present a processing framework for automatic kidney segmentation in DCE-MR images. The framework consists of two stages. First, kidney masks are generated using a convolutional neural network. Then, mask voxels are classified into one of three regions (cortex, medulla, and pelvis) based on DCE-MRI signal intensity time courses. The proposed approach was evaluated on a cohort of 10 healthy volunteers who underwent DCE-MRI examination. MRI scanning was repeated in two sessions within a 10-day interval. For the semantic segmentation task we employed a classic U-Net architecture, whereas experiments on voxel classification were performed using three alternative algorithms (support vector machines, logistic regression, and extreme gradient boosting trees), among which SVM produced the most accurate results. Both segmentation and classification steps were accomplished by a series of models, each trained separately for a given subject using the data from the other participants only. The mean accuracy of whole-kidney segmentation was 94% in terms of the IoU coefficient. Cortex, medulla, and pelvis were segmented with IoU ranging from 90 to 93%, depending on the tissue and body side. The results were also validated by comparing image-derived perfusion parameters with ground-truth measurements of glomerular filtration rate (GFR).
The repeatability of GFR calculation, as assessed by the coefficient of variation, was 14.5% and 17.5% for the left and right kidney, respectively, and it improved relative to manual segmentation. Reproducibility, in turn, was evaluated by measuring agreement between image-derived and iohexol-based GFR values. The estimated absolute mean differences were 9.4 and 12.9 mL/min/1.73 m2 for scanning sessions 1 and 2, respectively, with the proposed automated segmentation method. The result for session 2 was comparable with manual segmentation, whereas for session 1 reproducibility of the automatic pipeline was weaker.
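The two metrics carrying the quantitative results above, IoU for segmentation overlap and the coefficient of variation for test-retest repeatability, have standard definitions that can be sketched in a few lines (these are generic formulations, not the authors' pipeline):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union (Jaccard index) of two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(a, b).sum() / union

def coefficient_of_variation(values):
    """Repeatability of repeated measurements as CV (%):
    sample standard deviation / mean * 100."""
    values = np.asarray(values, dtype=float)
    return float(np.std(values, ddof=1) / np.mean(values) * 100.0)
```

For example, the reported left-kidney CV of 14.5% corresponds to a session-to-session spread of roughly 14.5% of the mean GFR value.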
118
Khodabakhshi Z, Amini M, Mostafaei S, Haddadi Avval A, Nazari M, Oveisi M, Shiri I, Zaidi H. Overall Survival Prediction in Renal Cell Carcinoma Patients Using Computed Tomography Radiomic and Clinical Information. J Digit Imaging 2021. [PMID: 34382117 DOI: 10.1007/s10278-021-00500-y]
Abstract
The aim of this work is to investigate the applicability of radiomic features alone and in combination with clinical information for the prediction of renal cell carcinoma (RCC) patients' overall survival after partial or radical nephrectomy. Clinical studies of 210 RCC patients from The Cancer Imaging Archive (TCIA) who underwent either partial or radical nephrectomy were included in this study. Regions of interest (ROIs) were manually defined on CT images. A total of 225 radiomic features were extracted and analyzed along with the 59 clinical features. An elastic net penalized Cox regression was used for feature selection. Accelerated failure time (AFT) with the shared frailty model was used to determine the effects of the selected features on the overall survival time. Eleven radiomic and twelve clinical features were selected based on their non-zero coefficients. Tumor grade, tumor malignancy, and pathology t-stage were the most significant predictors of overall survival (OS) among the clinical features (p < 0.002, < 0.02, and < 0.018, respectively). The most significant predictors of OS among the selected radiomic features were flatness, area density, and median (p < 0.02, < 0.02, and < 0.05, respectively). Along with important clinical features, such as tumor heterogeneity and tumor grade, imaging biomarkers such as tumor flatness, area density, and median are significantly correlated with OS of RCC patients.
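The feature-selection step described above rests on the elastic net penalty, which shrinks some coefficients exactly to zero so that only features with non-zero coefficients survive. A minimal sketch of the penalty and the selection rule, under the usual textbook formulation (not the authors' Cox regression pipeline):

```python
import numpy as np

def elastic_net_penalty(beta, alpha=1.0, l1_ratio=0.5):
    """Elastic net penalty: alpha * (l1_ratio * ||beta||_1
    + (1 - l1_ratio) / 2 * ||beta||_2^2). The L1 part drives sparsity,
    the L2 part stabilises correlated features."""
    beta = np.asarray(beta, dtype=float)
    l1 = np.sum(np.abs(beta))
    l2 = 0.5 * np.sum(beta ** 2)
    return alpha * (l1_ratio * l1 + (1.0 - l1_ratio) * l2)

def selected_features(names, beta, tol=1e-8):
    """Keep only features whose fitted coefficient is non-zero,
    mirroring how the 11 radiomic and 12 clinical features were chosen."""
    return [n for n, b in zip(names, beta) if abs(b) > tol]
```

In practice the penalty is added to the Cox partial likelihood and optimised jointly; the sketch only shows the selection mechanics.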
Affiliation(s)
- Zahra Khodabakhshi: Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Science, Tehran, Iran
- Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Shayan Mostafaei: Department of Biostatistics, School of Health, Kermanshah University of Medical Sciences, Kermanshah, Iran; Epidemiology and Biostatistics Unit, Rheumatology Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Mostafa Nazari: Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehrdad Oveisi: Department of Computer Science, University of British Columbia, Vancouver, BC, Canada; Comprehensive Cancer Centre, School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences & Medicine, Kings College London, London, UK
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
119
Pandey M, Gupta A. A systematic review of the automatic kidney segmentation methods in abdominal images. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.10.006]
120
Rickman J, Struyk G, Simpson B, Byun BC, Papanikolopoulos N. The Growing Role for Semantic Segmentation in Urology. Eur Urol Focus 2021; 7:692-695. [PMID: 34417153 DOI: 10.1016/j.euf.2021.07.017]
Abstract
As the quantity and quality of cross-sectional imaging data increase, it is important to be able to make efficient use of the information. Semantic segmentation is an emerging technology that promises to improve the speed, reproducibility, and accuracy of analysis of medical imaging, and to allow visualization methods that were previously impossible. Manual image segmentation often requires expert knowledge and is both time- and cost-prohibitive in many clinical situations. However, automated methods, especially those using deep learning, show promise in alleviating this burden to make segmentation a standard tool for clinical intervention in the future. It is therefore important for clinicians to have a functional understanding of what segmentation is and to be aware of its uses. Here we include a number of examples of ways in which semantic segmentation has been put into practice in urology. PATIENT SUMMARY: This mini-review highlights the growing role of segmentation methods for medical images in urology to inform clinical practice. Segmentation methods show promise in improving the reliability of diagnosis and aiding in visualization, which may become a tool for patient education.
Affiliation(s)
- Jack Rickman: Minnesota Robotics Institute, University of Minnesota College of Science and Engineering, Minneapolis, MN, USA.
- Griffin Struyk: University of Minnesota Medical School, Twin Cities Campus, Minneapolis, MN, USA
- Benjamin Simpson: University of Minnesota Medical School, Twin Cities Campus, Minneapolis, MN, USA
- Benjamin C Byun: University of Minnesota Medical School, Twin Cities Campus, Minneapolis, MN, USA
- Nikolaos Papanikolopoulos: Minnesota Robotics Institute, University of Minnesota College of Science and Engineering, Minneapolis, MN, USA
121
"The Algorithm Will See You Now": The Role of Artificial (and Real) Intelligence in the Future of Urology. Eur Urol Focus 2021; 7:669-671. [PMID: 34417152 DOI: 10.1016/j.euf.2021.07.010] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 07/29/2021] [Indexed: 12/23/2022]
Abstract
Modern AI systems have achieved impressive performance and are poised to have a substantial impact on urology. It is important for clinicians to become actively involved in the development and validation of these systems to ensure that their impact is positive.
122
Hagen F, Mair A, Bitzer M, Bösmüller H, Horger M. Fully automated whole-liver volume quantification on CT-image data: Comparison with manual volumetry using enhanced and unenhanced images as well as two different radiation dose levels and two reconstruction kernels. PLoS One 2021; 16:e0255374. [PMID: 34339472 PMCID: PMC8328340 DOI: 10.1371/journal.pone.0255374]
Abstract
OBJECTIVES To evaluate the accuracy of fully automated liver volume quantification vs. manual quantification using unenhanced as well as enhanced CT-image data, at two different radiation dose levels and with two image reconstruction kernels. MATERIAL AND METHODS The local ethics board approved this retrospective data analysis. Automated liver volume quantification was performed in 300 consecutive livers of 164 male and 103 female oncologic patients (64±12 y) at our institution (between January 2020 and May 2020) using two different dual-energy helical protocols: a portal-venous phase enhanced scan, ref. tube current 300 mAs (CARE Dose4D) for tube A (100 kV) and ref. 232 mAs for tube B (Sn140kV), slice collimation 0.6 mm, reconstruction kernel I30f/1, reconstruction thickness 0.6 mm and 5 mm, 80-100 mL iodine contrast agent 350 mg/mL (flow 2 mL/s); and an unenhanced scan, ref. tube current 100 mAs (CARE Dose4D) for tube A (100 kV) and ref. 77 mAs for tube B (Sn140kV), slice collimation 0.6 mm (kernel Q40f). The post-processing tool (syngo.CT Liver Analysis) is already FDA-approved. Two resident radiologists, with no CT experience and 1 year of CT experience, respectively, performed the automated measurements independently of each other. Results were compared with those of manual liver volume quantification using the same software, supervised by a senior radiologist with 30 years of CT experience (ground truth). RESULTS Overall, a correlation of 98% was obtained for liver volumetry based on enhanced and unenhanced data sets compared to manual liver quantification. Radiologists #1 and #2 achieved an inter-reader agreement of 99.8% for manual liver segmentation (p<0.0001). Automated liver volumetry resulted in overestimation (>5% deviation) in 3.7% of unenhanced CT-image data and 4.0% of contrast-enhanced CT images. Underestimation (deviation below -5%) of liver volume occurred in 2.0% of unenhanced and 1.3% of enhanced images after automated liver volumetry.
The number and distribution of erroneous volume measurements using either thin or thick slice reconstructions were exactly the same for both the enhanced and the unenhanced image data sets (p>0.05). CONCLUSION The results of fully automated liver volume quantification are accurate and comparable with those of manual liver volume quantification, and the technique appears reliable even when unenhanced, lower-dose CT image data are used.
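The over-/underestimation bins reported above follow from a simple percent-deviation rule against the manual ground truth; a sketch (the 5% threshold is the one the study uses, the function name is ours):

```python
def volume_deviation_label(auto_ml, manual_ml, threshold=5.0):
    """Classify an automated volume against the manual ground truth:
    deviation above +threshold% counts as overestimation, below
    -threshold% as underestimation, otherwise as acceptable agreement."""
    deviation_pct = (auto_ml - manual_ml) / manual_ml * 100.0
    if deviation_pct > threshold:
        return "overestimation"
    if deviation_pct < -threshold:
        return "underestimation"
    return "within tolerance"
```

Applying this rule per case and tallying the labels yields the reported rates, e.g. overestimation in 3.7% of the unenhanced scans.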
Affiliation(s)
- Florian Hagen: Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Tübingen, Germany
- Antonia Mair: Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Tübingen, Germany
- Michael Bitzer: Department of Internal Medicine I, University Hospital Tübingen, Tübingen, Germany
- Hans Bösmüller: Department of Pathology and Neuropathology, University Hospital Tübingen and Eberhard Karls University Tübingen, Tübingen, Germany
- Marius Horger: Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Tübingen, Germany
123
Guan Y, Aamir M, Rahman Z, Ali A, Abro WA, Dayo ZA, Bhutta MS, Hu Z. A framework for efficient brain tumor classification using MRI images. Math Biosci Eng 2021; 18:5790-5815. [PMID: 34517512 DOI: 10.3934/mbe.2021292]
Abstract
A brain tumor is an abnormal growth of brain cells inside the head, which reduces the patient's survival chance if it is not diagnosed at an earlier stage. Brain tumors vary in size and type, are irregular in shape, and require distinct therapies for different patients. Manual diagnosis of brain tumors is less efficient, prone to error, and time-consuming. Besides, it is a strenuous task that depends on the radiologist's experience and proficiency. Therefore, a modern, efficient, automated computer-assisted diagnosis (CAD) system that can appropriately address these problems with high accuracy is presently needed. Aiming to enhance performance and minimise human effort, in this manuscript, first, the brain MRI image is pre-processed to improve its visual quality, and sample images are augmented to avoid over-fitting in the network. Second, the tumor proposals or locations are obtained with an agglomerative clustering-based method. Third, the image proposals and the enhanced input image are passed to a backbone architecture for feature extraction. Fourth, high-quality image proposals are retained based on a refinement network, and the others are discarded. Next, these refined proposals are aligned to the same size and, finally, passed to the head network to achieve the desired classification task. The proposed method is a potent tumor grading tool assessed on a publicly available brain tumor dataset. Extensive experimental results show that the proposed method outperformed the existing approaches evaluated on the same dataset, achieving an overall classification accuracy of 98.04%. Besides, the model yielded accuracies of 98.17%, 98.66%, and 99.24%, sensitivities (recall) of 96.89%, 97.82%, and 99.24%, and specificities of 98.55%, 99.38%, and 99.25% for the Meningioma, Glioma, and Pituitary classes, respectively.
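The per-class sensitivity and specificity figures quoted above come from one-vs-rest confusion counts; a minimal sketch of that standard computation (generic, not the authors' evaluation code):

```python
import numpy as np

def per_class_sens_spec(y_true, y_pred, cls):
    """One-vs-rest sensitivity (recall) and specificity for a single class:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pos = y_true == cls          # samples that truly belong to cls
    neg = ~pos
    tp = np.sum(pos & (y_pred == cls))
    fn = np.sum(pos & (y_pred != cls))
    tn = np.sum(neg & (y_pred != cls))
    fp = np.sum(neg & (y_pred == cls))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity
```

Running this once per class (Meningioma, Glioma, Pituitary) reproduces the kind of per-class breakdown the abstract reports.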
Affiliation(s)
- Yurong Guan: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
- Muhammad Aamir: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
- Ziaur Rahman: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
- Ammara Ali: Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Norway
- Waheed Ahmed Abro: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
- Zaheer Ahmed Dayo: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
- Muhammad Shoaib Bhutta: Binjiang College, Nanjing University of Information Science & Technology, Wuxi 214105, China
- Zhihua Hu: Department of Computer Science, Huanggang Normal University, Huangzhou 438000, China
124
Huo L, Hu X, Xiao Q, Gu Y, Chu X, Jiang L. Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images. Magn Reson Imaging 2021; 82:31-41. [PMID: 34147598 DOI: 10.1016/j.mri.2021.06.017]
Abstract
PURPOSE Segmentation of the whole breast and fibroglandular tissue (FGT) is important for quantitatively analyzing the breast cancer risk in the dynamic contrast-enhanced magnetic resonance (DCE-MR) images. The purpose of this study is to improve the accuracy and efficiency of the segmentation of the whole breast and FGT in 3-D fat-suppressed DCE-MR images with a versatile deep learning (DL) framework. METHODS We randomly collected 100 breast DCE-MR scans from Shanghai Cancer Hospital of Fudan University. The MR scans in the dataset were different in both the spatial resolution and the MR scanners employed. Furthermore, four breast density categories were assessed by radiologists based on Breast Imaging Reporting and Data System (BI-RADS) of American College of Radiology. The dataset was separated into the training and the testing sets, while keeping a balanced distribution of scans with different imaging parameters and density categories. The nnU-Net has been recently proposed to automatically adapt preprocessing strategies and network architectures for a given medical image dataset, thus showing a great potential in the systematic adaptation of DL methods to different datasets. In this study, we applied the nnU-Net to segment the whole breast and FGT in 3-D fat-suppressed DCE-MR images. Five-fold cross validation was employed to train and validate the segmentation method. RESULTS The segmentation performance was evaluated with the volume and surface agreement metrics between the DL-based automatic and the manually delineated masks, as quantified with the following measures: the average Dice volume overlap (0.968 ± 0.017 and 0.877 ± 0.081), the average surface distances (0.201 ± 0.080 mm and 0.310 ± 0.043 mm), and the Pearson correlation coefficient of masks (0.995 and 0.972) between the automatic and the manually delineated masks, as calculated for the whole breast and the FGT segmentation, respectively. 
The correlation coefficient between the breast densities obtained with the DL-based segmentation and the manual delineation was 0.981. There was a positive bias of 0.8% (DL-based relative to manual) in breast density measurement on the Bland-Altman plot. The execution time of the DL-based segmentation was approximately 20 s for the whole breast segmentation and 15 s for the FGT segmentation. CONCLUSIONS Our DL-based segmentation framework using nnU-Net robustly achieved high accuracy and efficiency across variable MR imaging settings without extra pre- or post-processing procedures. It would be useful for developing DCE-MR-based CAD systems to quantify breast cancer risk and could be integrated into the clinical workflow.
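The Bland-Altman bias quoted above (0.8%, DL relative to manual) is the mean of the paired differences; the plot's 95% limits of agreement are that bias ± 1.96 sample standard deviations. A sketch of the standard computation (not the authors' code):

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics for paired measurements:
    returns the mean difference (bias) and the 95% limits of agreement
    (bias - 1.96*sd, bias + 1.96*sd) of the differences a - b."""
    a = np.asarray(measure_a, dtype=float)
    b = np.asarray(measure_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A positive bias means the first method (here, the DL segmentation) systematically reads slightly higher than the second.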
Affiliation(s)
- Lu Huo: Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; University of Chinese Academy of Sciences, No.19 Yuquan Road, Beijing 100049, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Xiaoxin Hu: Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Qin Xiao: Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Yajia Gu: Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Xu Chu: Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Luan Jiang: Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China.
125
Uhm KH, Jung SW, Choi MH, Shin HK, Yoo JI, Oh SW, Kim JY, Kim HG, Lee YJ, Youn SY, Hong SH, Ko SJ. Deep learning for end-to-end kidney cancer diagnosis on multi-phase abdominal computed tomography. NPJ Precis Oncol 2021; 5:54. [PMID: 34145374 PMCID: PMC8213852 DOI: 10.1038/s41698-021-00195-y]
Abstract
In 2020, an estimated 73,750 kidney cancer cases were diagnosed, and 14,830 people died from the disease in the United States. Preoperative multi-phase abdominal computed tomography (CT) is often used for detecting lesions and classifying histologic subtypes of renal tumors to avoid unnecessary biopsy or surgery. However, inter-observer variability exists due to subtle differences in the imaging features of tumor subtypes, which makes treatment decisions challenging. While deep learning has recently been applied to the automated diagnosis of renal tumors, classification across a wide range of subtype classes has not yet been sufficiently studied. In this paper, we propose an end-to-end deep learning model for the differential diagnosis of five major histologic subtypes of renal tumors, including both benign and malignant tumors, on multi-phase CT. Our model is a unified framework that simultaneously identifies lesions and classifies subtypes for the diagnosis without manual intervention. We trained and tested the model using CT data from 308 patients who underwent nephrectomy for renal tumors. The model achieved an area under the curve (AUC) of 0.889 and outperformed radiologists for most subtypes. We further validated the model on an independent dataset of 184 patients from The Cancer Imaging Archive (TCIA). The AUC for this dataset was 0.855, and the model performed comparably to the radiologists. These results indicate that our model can achieve similar or better diagnostic performance than radiologists in differentiating a wide range of renal tumors on multi-phase CT.
Affiliation(s)
- Kwang-Hyun Uhm: Department of Electrical Engineering, Korea University, Seoul, South Korea
- Seung-Won Jung: Department of Electrical Engineering, Korea University, Seoul, South Korea
- Moon Hyung Choi: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Hong-Kyu Shin: Department of Electrical Engineering, Korea University, Seoul, South Korea
- Jae-Ik Yoo: Department of Electrical Engineering, Korea University, Seoul, South Korea
- Se Won Oh: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Jee Young Kim: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Hyun Gi Kim: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Young Joon Lee: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Seo Yeon Youn: Department of Radiology, The Catholic University of Korea, Seoul, South Korea
- Sung-Hoo Hong: Department of Urology, The Catholic University of Korea, Seoul, South Korea
- Sung-Jea Ko: Department of Electrical Engineering, Korea University, Seoul, South Korea
126
Choi J, Cho HH, Kwon J, Lee HY, Park H. A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT. Diagnostics (Basel) 2021; 11:1047. [PMID: 34200270] [PMCID: PMC8229025] [DOI: 10.3390/diagnostics11061047]
Abstract
BACKGROUND AND AIM Tumor staging in non-small cell lung cancer (NSCLC) is important for treatment and prognosis. Staging involves expert interpretation of imaging, which we aim to automate with deep learning (DL). We propose a cascaded DL method comprising two steps to classify between early- and advanced-stage NSCLC using pretreatment computed tomography. METHODS We developed and tested a DL model to classify between early and advanced stages using training (n = 90), validation (n = 8), and two test (n = 37, n = 26) cohorts obtained from the public domain. The first step adopted an autoencoder network to compress the imaging data into latent variables, and the second step used the latent variables to classify the stages with a convolutional neural network (CNN). Other DL- and machine learning-based approaches were compared. RESULTS Our model was tested in two test cohorts, CPTAC and TCGA. In CPTAC, our model achieved an accuracy of 0.8649, sensitivity of 0.8000, specificity of 0.9412, and area under the curve (AUC) of 0.8206, outperforming other approaches (AUC 0.6824-0.7206) in classifying between early and advanced stages. In TCGA, our model achieved an accuracy of 0.8077, sensitivity of 0.7692, specificity of 0.8462, and AUC of 0.8343. CONCLUSION Our cascaded DL model for classifying NSCLC patients into early and advanced stages showed promising results and could support future NSCLC research.
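For context, accuracy, sensitivity, and specificity all derive from one 2x2 confusion matrix. The counts below are a hypothetical reconstruction consistent with the CPTAC cohort size (n = 37) and the reported metrics, not the authors' published confusion matrix:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix
    (positive class = advanced stage)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts for a 37-patient cohort (16 + 1 + 16 + 4 = 37);
# hypothetical, chosen only to be consistent with the reported figures.
acc, sen, spe = binary_metrics(tp=16, fp=1, tn=16, fn=4)
```

With these counts, acc ≈ 0.8649, sen = 0.8000, and spe ≈ 0.9412, matching the figures quoted above.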
Affiliation(s)
- Jieun Choi: Department of Artificial Intelligence, Sungkyunkwan University, Suwon 16419, Korea
- Hwan-ho Cho: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, Korea
- Junmo Kwon: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, Korea
- Ho Yun Lee: Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Korea; Department of Health Sciences and Technology, Samsung Advanced Institute for Health Sciences & Technology (SAIHST), Sungkyunkwan University, Seoul 06351, Korea
- Hyunjin Park: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, Korea
127
Caroli A, Remuzzi A, Lerman LO. Basic principles and new advances in kidney imaging. Kidney Int 2021; 100:1001-1011. [PMID: 33984338] [DOI: 10.1016/j.kint.2021.04.032]
Abstract
Over the past few years, clinical renal imaging has seen great advances, allowing assessments of kidney structure and morphology, perfusion, function and metabolism, and oxygenation, as well as microstructure and the interstitium. Medical imaging is becoming increasingly important in the evaluation of kidney physiology and pathophysiology, showing promise in management of patients with renal disease, in particular with regard to diagnosis, classification, and prediction of disease development and progression, monitoring response to therapy, detection of drug toxicity, and patient selection for clinical trials. A variety of imaging modalities, ranging from routine to advanced tools, are currently available to probe the kidney both spatially and temporally, particularly ultrasonography, computed tomography, positron emission tomography, renal scintigraphy, and multiparametric magnetic resonance imaging. Given that the range is broad and varied, kidney imaging techniques should be chosen based on the clinical question and the specific underlying pathologic mechanism, taking into account contraindications and possible adverse effects. Integration of various modalities providing complementary information will likely provide the greatest insight into renal pathophysiology. This review aims to highlight major recent advances in key tools that are currently available or potentially relevant for clinical kidney imaging, with a focus on non-oncological applications. The review also outlines the context of use, limitations, and advantages of various techniques, and highlights gaps to be filled with future development and clinical adoption.
Affiliation(s)
- Anna Caroli: Bioengineering Department, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
- Andrea Remuzzi: Department of Management, Information and Production Engineering, University of Bergamo, Dalmine (Bergamo), Italy
- Lilach O Lerman: Division of Nephrology and Hypertension, Mayo Clinic, Rochester, Minnesota, USA
128
Schmitz R, Madesta F, Nielsen M, Krause J, Steurer S, Werner R, Rösch T. Multi-scale fully convolutional neural networks for histopathology image segmentation: From nuclear aberrations to the global tissue architecture. Med Image Anal 2021; 70:101996. [DOI: 10.1016/j.media.2021.101996]
129
Boundary Loss-Based 2.5D Fully Convolutional Neural Networks Approach for Segmentation: A Case Study of the Liver and Tumor on Computed Tomography. Algorithms 2021. [DOI: 10.3390/a14050144]
Abstract
Image segmentation plays an important role in the field of image processing, helping to understand images and recognize objects. However, most existing methods are often unable to effectively explore the spatial information in 3D image segmentation, and they neglect the information from the contours and boundaries of the observed objects. In addition, shape boundaries can help to locate the positions of the observed objects, but most existing loss functions neglect the information from the boundaries. To overcome these shortcomings, this paper presents a new cascaded 2.5D fully convolutional networks (FCNs) learning framework to segment 3D medical images. A new boundary loss that incorporates distance, area, and boundary information is also proposed for the cascaded FCNs to learn more boundary and contour features from the 3D medical images. Moreover, an effective post-processing method is developed to further improve the segmentation accuracy. We verified the proposed method on the LiTS and 3DIRCADb datasets, which include the liver and tumors. The experimental results show that the proposed method outperforms existing methods, with a Dice per case score of 74.5% for tumor segmentation, indicating its effectiveness.
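A minimal sketch of a compound segmentation loss in this spirit, pairing a soft Dice term with a signed-distance-map boundary term (in the style of Kervadec et al.); the paper's exact distance/area/boundary weighting is not reproduced here, and the weight `alpha` is an illustrative choice:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over probability maps."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_loss(pred, signed_dist):
    """Boundary term: probabilities weighted by a signed distance map of the
    ground-truth boundary (negative inside the object, positive outside)."""
    return float((pred * signed_dist).mean())

def compound_loss(pred, target, signed_dist, alpha=0.01):
    """Dice plus a small boundary term; alpha is an illustrative weight."""
    return dice_loss(pred, target) + alpha * boundary_loss(pred, signed_dist)

# Toy 2x2 example: a perfect prediction has zero Dice loss, and its boundary
# term is negative because all mass lies inside the object.
pred = np.array([[1.0, 0.0], [1.0, 1.0]])
gt = pred.copy()
signed_dist = np.array([[-1.0, 1.0], [-1.0, -1.0]])
loss = compound_loss(pred, gt, signed_dist)
```

Because the distance map is negative inside the object, the boundary term rewards probability mass placed inside the ground truth and penalizes mass outside it.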
130
Ma J, Chen J, Ng M, Huang R, Li Y, Li C, Yang X, Martel AL. Loss odyssey in medical image segmentation. Med Image Anal 2021; 71:102035. [PMID: 33813286] [DOI: 10.1016/j.media.2021.102035]
Abstract
The loss function is an important component in deep learning-based segmentation methods. Over the past five years, many loss functions have been proposed for various segmentation tasks. However, a systematic study of the utility of these loss functions is missing. In this paper, we present a comprehensive review of segmentation loss functions in an organized manner. We also conduct the first large-scale analysis of 20 general loss functions on four typical 3D segmentation tasks involving six public datasets from 10+ medical centers. The results show that none of the losses can consistently achieve the best performance on the four segmentation tasks, but compound loss functions (e.g. Dice with TopK loss, focal loss, Hausdorff distance loss, and boundary loss) are the most robust losses. Our code and segmentation results are publicly available and can serve as a loss function benchmark. We hope this work will also provide insights on new loss function development for the community.
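One of the compound losses the review highlights as robust, Dice combined with a TopK cross-entropy term, can be sketched as follows (the equal weighting `w` is an illustrative choice, not the paper's tuned setting):

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss over foreground probabilities."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def topk_ce_loss(probs, target, k=0.1):
    """TopK loss: mean binary cross-entropy over the hardest k fraction of pixels."""
    ce = -(target * np.log(probs) + (1.0 - target) * np.log(1.0 - probs)).ravel()
    n = max(1, int(np.ceil(k * ce.size)))
    return float(np.sort(ce)[-n:].mean())

def dice_topk(probs, target, k=0.1, w=1.0):
    """Compound Dice + TopK loss; the weight w is an illustrative choice."""
    return dice_loss(probs, target) + w * topk_ce_loss(probs, target, k)

# Confident, correct predictions on a 3-pixel toy image give a small loss.
probs = np.array([0.99, 0.99, 0.01])
target = np.array([1.0, 1.0, 0.0])
loss = dice_topk(probs, target)
```

The TopK term keeps gradient signal focused on the hardest pixels, which is one reason such compound losses behave robustly across tasks.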
Affiliation(s)
- Jun Ma: Department of Mathematics, Nanjing University of Science and Technology, Nanjing, China
- Jianan Chen: Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Matthew Ng: Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Rui Huang: Department of Medical Biophysics, University of Toronto, Toronto, Canada
- Yu Li: Department of Mathematics, Nanjing University of Science and Technology, Nanjing, China
- Chen Li: Department of Mathematics, Nanjing University, Nanjing, China
- Xiaoping Yang: Department of Mathematics, Nanjing University, Nanjing, China
- Anne L Martel: Department of Medical Biophysics, University of Toronto, Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
131
Lin Z, Cui Y, Liu J, Sun Z, Ma S, Zhang X, Wang X. Automated segmentation of kidney and renal mass and automated detection of renal mass in CT urography using 3D U-Net-based deep convolutional neural network. Eur Radiol 2021; 31:5021-5031. [PMID: 33439313] [DOI: 10.1007/s00330-020-07608-9]
Abstract
OBJECTIVES To develop a 3D U-Net-based deep learning model for automated segmentation of the kidney and renal masses, and detection of renal masses, in the corticomedullary phase of computed tomography urography (CTU). METHODS Data on 882 kidneys obtained from the CTU scans of 441 patients with renal masses were used to train and evaluate the deep learning model. The CTU data of 35 patients with small renal tumors (diameter ≤ 1.5 cm) were used for additional testing. The ground truth for the kidney, renal tumor, and cyst was manually annotated on corticomedullary phase images of CTU. The proposed segmentation model for the kidney and renal mass was constructed based on a 3D U-Net. Segmentation accuracy was evaluated with the Dice similarity coefficient (DSC). The volume of the maximum 3D volume of interest of the renal tumor and cyst in the model's predicted segmentation was used as an identification indicator, while the detection performance of the model was evaluated by the area under the receiver operating characteristic curve. RESULTS The proposed model showed high accuracy in segmentation of the kidney and renal tumor, with average DSCs of 0.973 and 0.844, respectively. It performed moderately in renal cyst segmentation, with an average DSC of 0.536 in the test set. The model also showed good performance in detecting renal tumors and cysts. CONCLUSIONS The proposed automated segmentation and detection model based on 3D U-Net shows promising results for the segmentation of the kidney and renal tumor, and the detection of renal tumors and cysts. KEY POINTS • The segmentation model based on 3D U-Net showed high accuracy in segmentation of the kidney and renal neoplasms, and good detection performance for renal neoplasms and cysts in the corticomedullary phase of CTU. • The segmentation model based on 3D U-Net is a fully automated diagnostic aid that could reduce the workload of radiologists and improve diagnostic accuracy.
• The segmentation model based on 3D U-Net would be helpful to provide quantitative information for diagnosis, treatment, surgical planning, etc.
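The DSC reported in this and several of the entries above can be sketched for binary masks as follows (a minimal sketch; the convention of returning 1.0 for two empty masks is one common choice):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 1-D masks: 2 overlapping voxels out of 3 + 3 labeled voxels -> DSC = 2/3.
dsc = dice_coefficient(np.array([1, 1, 1, 0]), np.array([0, 1, 1, 1]))
```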
Affiliation(s)
- Zhiyong Lin: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Jia Liu: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Shuai Ma: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, No. 8, Xishiku Street, Xicheng District, Beijing, 100034, China
132
Chu KY, Tradewell MB. Artificial Intelligence in Urology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_172-1]
133
Langner T, Östling A, Maldonis L, Karlsson A, Olmo D, Lindgren D, Wallin A, Lundin L, Strand R, Ahlström H, Kullberg J. Kidney segmentation in neck-to-knee body MRI of 40,000 UK Biobank participants. Sci Rep 2020; 10:20963. [PMID: 33262432] [PMCID: PMC7708493] [DOI: 10.1038/s41598-020-77981-4]
Abstract
The UK Biobank is collecting extensive data on health-related characteristics of over half a million volunteers. The biological samples of blood and urine can provide valuable insight on kidney function, with important links to cardiovascular and metabolic health. Further information on kidney anatomy could be obtained by medical imaging. In contrast to the brain, heart, liver, and pancreas, no dedicated Magnetic Resonance Imaging (MRI) is planned for the kidneys. An image-based assessment is nonetheless feasible in the neck-to-knee body MRI intended for abdominal body composition analysis, which also covers the kidneys. In this work, a pipeline for automated segmentation of parenchymal kidney volume in UK Biobank neck-to-knee body MRI is proposed. The underlying neural network reaches a relative error of 3.8%, with Dice score 0.956 in validation on 64 subjects, close to the 2.6% and Dice score 0.962 for repeated segmentation by one human operator. The released MRI of about 40,000 subjects can be processed within one day, yielding volume measurements of left and right kidney. Algorithmic quality ratings enabled the exclusion of outliers and potential failure cases. The resulting measurements can be studied and shared for large-scale investigation of associations and longitudinal changes in parenchymal kidney volume.
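The relative volume error quoted above (3.8% in validation) can be illustrated with a minimal sketch; the 200 ml reference volume is invented for illustration:

```python
def relative_error(pred_vol_ml, ref_vol_ml):
    """Absolute volume difference as a percentage of the reference volume."""
    return 100.0 * abs(pred_vol_ml - ref_vol_ml) / ref_vol_ml

# A 3.8% relative error corresponds, e.g., to predicting 207.6 ml
# against a hypothetical 200 ml reference kidney volume.
err = relative_error(207.6, 200.0)
```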
Affiliation(s)
- Taro Langner: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Andreas Östling: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Lukas Maldonis: Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Albin Karlsson: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Daniel Olmo: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Dag Lindgren: Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Andreas Wallin: Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Lowe Lundin: Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Robin Strand: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Department of Information Technology, Uppsala University, 751 85, Uppsala, Sweden
- Håkan Ahlström: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Joel Kullberg: Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
134
Fang X, Yan P. Multi-Organ Segmentation Over Partially Labeled Datasets With Multi-Scale Feature Abstraction. IEEE Trans Med Imaging 2020; 39:3619-3629. [PMID: 32746108] [PMCID: PMC7665851] [DOI: 10.1109/tmi.2020.3001036]
Abstract
Shortage of fully annotated datasets has been a limiting factor in developing deep learning based image segmentation algorithms and the problem becomes more pronounced in multi-organ segmentation. In this paper, we propose a unified training strategy that enables a novel multi-scale deep neural network to be trained on multiple partially labeled datasets for multi-organ segmentation. In addition, a new network architecture for multi-scale feature abstraction is proposed to integrate pyramid input and feature analysis into a U-shape pyramid structure. To bridge the semantic gap caused by directly merging features from different scales, an equal convolutional depth mechanism is introduced. Furthermore, we employ a deep supervision mechanism to refine the outputs in different scales. To fully leverage the segmentation features from all the scales, we design an adaptive weighting layer to fuse the outputs in an automatic fashion. All these mechanisms together are integrated into a Pyramid Input Pyramid Output Feature Abstraction Network (PIPO-FAN). Our proposed method was evaluated on four publicly available datasets, including BTCV, LiTS, KiTS and Spleen, where very promising performance has been achieved. The source code of this work is publicly shared at https://github.com/DIAL-RPI/PIPO-FAN to facilitate others to reproduce the work and build their own models using the introduced mechanisms.
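The adaptive weighting layer described above, which fuses per-scale outputs automatically, can be sketched generically as softmax-weighted fusion with one learnable scalar per scale; this is a schematic under that assumption, not the exact PIPO-FAN layer:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D vector of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_scales(outputs, logits):
    """Fuse equally-shaped per-scale probability maps with softmax-normalized
    scalar weights, one learnable logit per scale."""
    w = softmax(np.asarray(logits, dtype=float))
    maps = [np.asarray(m, dtype=float) for m in outputs]
    return sum(wi * m for wi, m in zip(w, maps))

# Two hypothetical scales producing constant maps; equal logits give the average.
fused = fuse_scales([np.full((2, 2), 0.4), np.full((2, 2), 0.8)], [0.0, 0.0])
```

During training the logits would be learned end-to-end, letting the network emphasize whichever scale is most reliable.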