1
Becker J, Woźnicki P, Decker JA, Risch F, Wudy R, Kaufmann D, Canalini L, Wollny C, Scheurig-Muenkler C, Kroencke T, Bette S, Schwarz F. Radiomics signature for automatic hydronephrosis detection in unenhanced Low-Dose CT. Eur J Radiol 2024; 179:111677. PMID: 39178684. DOI: 10.1016/j.ejrad.2024.111677.
Abstract
PURPOSE To investigate the diagnostic performance of an automatic pipeline for detecting hydronephrosis from the renal parenchyma on unenhanced low-dose CT of the abdomen. METHODS This retrospective study included 95 patients with confirmed unilateral hydronephrosis on unenhanced low-dose CT of the abdomen. Data were split into training (n = 67) and test (n = 28) cohorts. Both kidneys of each case were included in further analyses, with the kidney without hydronephrosis serving as control. Using the training cohort, we developed a pipeline consisting of a deep-learning model for automatic segmentation of the renal parenchyma (a convolutional neural network based on the nnU-Net architecture) and a radiomics classifier to detect hydronephrosis. The models were assessed using standard classification metrics, such as area under the ROC curve (AUC), sensitivity and specificity, as well as semantic segmentation metrics, including the Dice coefficient and Jaccard index. RESULTS Using manual segmentation of the renal parenchyma, hydronephrosis was detected with an AUC of 0.84, a sensitivity of 75%, a specificity of 82%, a PPV of 81% and an NPV of 77%. Automatic kidney segmentation achieved mean Dice scores of 0.87 and 0.91 for the right and left kidney, respectively. With automatic segmentation, the classifier achieved an AUC of 0.83, a sensitivity of 86%, a specificity of 64%, a PPV of 71% and an NPV of 82%. CONCLUSION Our proposed radiomics signature with automatic segmentation of the renal parenchyma allows accurate hydronephrosis detection on unenhanced low-dose CT scans of the abdomen, independently of a widened renal pelvis. This method could be used in clinical routine to highlight hydronephrosis to radiologists and clinicians, especially in patients with concurrent parapelvic cysts, and might reduce the time and costs associated with diagnosing hydronephrosis.
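The overlap metrics reported above (Dice coefficient, Jaccard index) recur throughout this listing. As a quick reference, here is a minimal sketch, not the authors' code, of both metrics for binary segmentation masks:

```python
import numpy as np

def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for boolean masks (1.0 if both empty)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard(pred, truth):
    """Jaccard = |A∩B| / |A∪B|; relates to Dice via J = D / (2 - D)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0

# Toy 2x3 masks: intersection = 2 voxels, |pred| = |truth| = 3.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# dice(a, b) = 4/6 ≈ 0.667, jaccard(a, b) = 2/4 = 0.5
```

Both functions generalize unchanged to 3D volumes, since the reductions run over the whole array.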
Affiliation(s)
- Judith Becker
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Piotr Woźnicki
- Diagnostic and Interventional Radiology, University Hospital Würzburg, Josef-Schneider-Straße 2, 97080 Würzburg, Germany
- Josua A Decker
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Franka Risch
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Ramona Wudy
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- David Kaufmann
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Luca Canalini
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Claudia Wollny
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Christian Scheurig-Muenkler
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Thomas Kroencke
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany; Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Universitätsstr. 2, 86159 Augsburg, Germany
- Stefanie Bette
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Florian Schwarz
- Centre for Diagnostic Imaging and Interventional Therapy, Donau-Isar-Klinikum, Perlasberger Straße 41, 94469 Deggendorf, Germany; Medical Faculty, Ludwig Maximilian University Munich, Bavariaring 19, 80336 Munich, Germany
2
Liu Y, Zhao Y, Wang M, Hao Y, Wang X, Wang L. MBD-Net: Multi-Branch Dilated Convolutional Network With Cyst Discriminator for Renal Multi-Structure Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082702. DOI: 10.1109/embc40787.2023.10341054.
Abstract
In surgery-based renal cancer treatment, one of the most essential tasks is three-dimensional (3D) kidney parsing on computed tomography angiography (CTA) images. In this paper, we propose an end-to-end convolutional neural network-based framework to segment multiple renal structures, including kidneys, kidney tumors, arteries, and veins, from arterial-phase CT images. Our method consists of two collaborative modules. First, we propose an encoding-decoding network, named Multi-Branch Dilated Convolutional Network (MBD-Net), consisting of residual, hybrid dilated convolutional, and reduced-dimensional convolutional structures, which improves feature extraction with relatively few network parameters. Given that renal tumors and cysts have easily confused geometric structures, we also design a Cyst Discriminator that effectively distinguishes tumors from cysts without labeling information, using gray-scale curves and radiographic features. We quantitatively evaluated our approach on the publicly available dataset of the MICCAI 2022 Kidney Parsing for Renal Cancer Treatment Challenge (KiPA2022), achieving mean Dice similarity coefficients (DSC) of 96.18%, 90.99%, 88.66% and 80.35% for the kidneys, kidney tumors, arteries, and veins, respectively, a stable, top-ranking performance in the challenge. Clinical relevance: the proposed CNN-based framework can automatically segment 3D kidneys, renal tumors, arteries, and veins for kidney parsing, benefiting surgery-based renal cancer treatment.
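The dilated convolutions named above widen a network's receptive field without adding parameters. A minimal 1-D sketch (illustrative only, not the MBD-Net code) shows the effect of the dilation rate:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution whose kernel taps are spaced `dilation` apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        taps = x[i : i + span : dilation]  # every d-th input sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(8, dtype=float)              # [0, 1, ..., 7]
k = np.array([1.0, 1.0, 1.0])              # same 3 weights in both calls
y1 = dilated_conv1d(x, k, dilation=1)      # spans 3 samples: x[i]+x[i+1]+x[i+2]
y2 = dilated_conv1d(x, k, dilation=2)      # spans 5 samples: x[i]+x[i+2]+x[i+4]
```

With three taps, dilation 2 covers five input positions using the same three weights, which is why stacking hybrid dilation rates grows coverage cheaply.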
3
Pandey M, Gupta A. Tumorous kidney segmentation in abdominal CT images using active contour and 3D-UNet. Ir J Med Sci 2022. PMID: 35930139. DOI: 10.1007/s11845-022-03113-8.
Abstract
BACKGROUND AND PURPOSE The precise segmentation of the kidneys in computed tomography (CT) images is vital in urology for diagnosis, treatment, and surgical planning. Segmentation can assist medical experts by providing information about kidney malformations in terms of shape and size. Manual segmentation is slow, tedious, and not reproducible. An automatic computer-aided system is a solution to this problem. This paper presents an automated kidney segmentation technique based on active contours and deep learning. MATERIALS AND METHODS In this work, 210 CTs from the KiTS19 repository were used. The dataset was divided into a training set (168 CTs), test set (21 CTs), and validation set (21 CTs). The suggested technique has four broad phases: (1) extraction of kidney regions using active contours, (2) preprocessing, (3) kidney segmentation using 3D U-Net, and (4) reconstruction of the segmented CT images. RESULTS The proposed segmentation method achieved a Dice score of 97.62%, a Jaccard index of 95.74%, an average sensitivity of 98.28%, a specificity of 99.95%, and an accuracy of 99.93% on the validation dataset. CONCLUSION The proposed method can efficiently solve the problem of tumorous kidney segmentation in CT images using active contours and deep learning. The active contour was used to select kidney regions, and the 3D U-Net was used to precisely segment the tumorous kidney.
Affiliation(s)
- Mohit Pandey
- School of Computer Science & Engineering, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, Jammu & Kashmir, India
- Abhishek Gupta
- School of Computer Science & Engineering, Shri Mata Vaishno Devi University, Kakryal, Katra-182320, Jammu & Kashmir, India
4
Langner T, Östling A, Maldonis L, Karlsson A, Olmo D, Lindgren D, Wallin A, Lundin L, Strand R, Ahlström H, Kullberg J. Kidney segmentation in neck-to-knee body MRI of 40,000 UK Biobank participants. Sci Rep 2020; 10:20963. PMID: 33262432. PMCID: PMC7708493. DOI: 10.1038/s41598-020-77981-4.
Abstract
The UK Biobank is collecting extensive data on health-related characteristics of over half a million volunteers. The biological samples of blood and urine can provide valuable insight on kidney function, with important links to cardiovascular and metabolic health. Further information on kidney anatomy could be obtained by medical imaging. In contrast to the brain, heart, liver, and pancreas, no dedicated Magnetic Resonance Imaging (MRI) is planned for the kidneys. An image-based assessment is nonetheless feasible in the neck-to-knee body MRI intended for abdominal body composition analysis, which also covers the kidneys. In this work, a pipeline for automated segmentation of parenchymal kidney volume in UK Biobank neck-to-knee body MRI is proposed. The underlying neural network reaches a relative error of 3.8% with a Dice score of 0.956 in validation on 64 subjects, close to the 2.6% relative error and Dice score of 0.962 achieved for repeated segmentation by one human operator. The released MRI of about 40,000 subjects can be processed within one day, yielding volume measurements of the left and right kidneys. Algorithmic quality ratings enabled the exclusion of outliers and potential failure cases. The resulting measurements can be studied and shared for large-scale investigation of associations and longitudinal changes in parenchymal kidney volume.
Affiliation(s)
- Taro Langner
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Andreas Östling
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Lukas Maldonis
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Albin Karlsson
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Daniel Olmo
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Dag Lindgren
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Andreas Wallin
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Lowe Lundin
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Robin Strand
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Department of Information Technology, Uppsala University, 751 85, Uppsala, Sweden
- Håkan Ahlström
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Joel Kullberg
- Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden; Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
5
Bennai MT, Guessoum Z, Mazouzi S, Cormier S, Mezghiche M. A stochastic multi-agent approach for medical-image segmentation: Application to tumor segmentation in brain MR images. Artif Intell Med 2020; 110:101980. DOI: 10.1016/j.artmed.2020.101980.
6
Yang G, Wang C, Yang J, Chen Y, Tang L, Shao P, Dillenseger JL, Shu H, Luo L. Weakly-supervised convolutional neural networks of renal tumor segmentation in abdominal CTA images. BMC Med Imaging 2020; 20:37. PMID: 32293303. PMCID: PMC7161012. DOI: 10.1186/s12880-020-00435-w.
Abstract
Background Renal cancer is one of the 10 most common cancers in humans. Laparoscopic partial nephrectomy (LPN) is an effective way to treat renal cancer. Localization and delineation of the renal tumor from pre-operative CT angiography (CTA) is an important step in LPN surgery planning. Recently, with the development of deep learning, deep neural networks can be trained to provide accurate pixel-wise renal tumor segmentation in CTA images. However, constructing a training dataset with a large amount of pixel-wise annotations is a time-consuming task for radiologists. Therefore, weakly-supervised approaches attract increasing research interest. Methods In this paper, we propose a novel weakly-supervised convolutional neural network (CNN) for renal tumor segmentation. A three-stage framework is introduced to train the CNN with weak annotations of renal tumors, i.e. their bounding boxes. The framework includes pseudo-mask generation, group training, and weighted training phases. Clinical abdominal CT angiographic images of 200 patients were used for the evaluation. Results Extensive experimental results show that the proposed method achieves a Dice coefficient (DSC) of 0.826, higher than two existing weakly-supervised deep neural networks. Furthermore, the segmentation performance is close to that of a fully supervised deep CNN. Conclusions The proposed strategy improves not only the efficiency of network training but also the precision of the segmentation.
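The weak-supervision pipeline above starts from box-level labels. As an illustrative sketch only (the paper's actual pseudo-mask stage is more sophisticated; `pseudo_mask` and its threshold are hypothetical names), one way to derive a pseudo segmentation mask from a bounding box is to threshold intensities inside it:

```python
import numpy as np

def pseudo_mask(image, box, threshold):
    """Label voxels >= threshold inside box=(row0, row1, col0, col1) (half-open)
    as foreground; everything outside the box stays background."""
    r0, r1, c0, c1 = box
    mask = np.zeros(image.shape, dtype=np.uint8)
    patch = image[r0:r1, c0:c1]
    mask[r0:r1, c0:c1] = (patch >= threshold).astype(np.uint8)
    return mask

# Toy image: three bright "tumor" voxels inside the annotated box.
img = np.array([
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 2, 0],
    [0, 0, 0, 0],
], dtype=float)
m = pseudo_mask(img, (1, 3, 1, 3), threshold=5.0)
# m marks exactly the three voxels with intensity >= 5 inside the box
```

Such pseudo masks are noisy by construction, which is what the group and weighted training phases described above are meant to compensate for.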
Affiliation(s)
- Guanyu Yang
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Chuanxia Wang
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Yang Chen
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Lijun Tang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Pengfei Shao
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jean-Louis Dillenseger
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France; University Rennes, Inserm, LTSI - UMR1099, F-35000, Rennes, France
- Huazhong Shu
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Limin Luo
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
7
Hsu CY, Doubrovin M, Hua CH, Mohammed O, Shulkin BL, Kaste S, Federico S, Metzger M, Krasin M, Tinkle C, Merchant TE, Lucas JT. Radiomics Features Differentiate Between Normal and Tumoral High-Fdg Uptake. Sci Rep 2018; 8:3913. PMID: 29500442. PMCID: PMC5834444. DOI: 10.1038/s41598-018-22319-4.
Abstract
Identification of FDG-avid neoplasms may be obscured by high-uptake normal tissues, thus limiting inferences about the natural history of disease. We introduce an FDG-PET radiomics tissue classifier for differentiating FDG-avid normal tissues from tumor. Thirty-three scans from 15 patients with Hodgkin lymphoma and 68 scans from 23 patients with Ewing sarcoma treated on two prospective clinical trials were retrospectively analyzed. Disease volumes were manually segmented on FDG-PET and CT scans. Brain, heart, kidney, bladder, and tumor volumes were automatically segmented on PET images. Standardized-uptake-value (SUV)-derived shape and first-order radiomics features were computed to build a random forest classifier. Manually segmented volumes were compared to automatically segmented tumor volumes. Classifier accuracy for normal tissues was 90%. Classifier performance varied across normal tissue types (brain, left kidney, bladder, heart, and right kidney: 100%, 96%, 97%, 83% and 87%, respectively). Automatically segmented tumor volumes showed high concordance with the manually segmented tumor volumes (R2 = 0.97). Inclusion of texture-based radiomics features contributed minimally to classifier performance. Accurate normal tissue segmentation and classification facilitates accurate identification of FDG-avid tissues and classification of those tissues as either tumor or normal tissue.
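A hedged sketch of the classifier idea above (the synthetic data, feature choice, and hyperparameters are assumptions for illustration, not the study's): first-order SUV statistics per region feed a random forest that labels the region as normal tissue or tumor.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def first_order_features(suv_values):
    """Simple first-order statistics of the SUVs in a segmented region."""
    return np.array([suv_values.mean(), suv_values.max(), suv_values.std()])

# Synthetic regions: "normal" SUVs around 2, "tumor" SUVs around 8.
normal = [first_order_features(rng.normal(2.0, 0.5, 100)) for _ in range(40)]
tumor = [first_order_features(rng.normal(8.0, 1.5, 100)) for _ in range(40)]
X = np.vstack(normal + tumor)
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# An unseen high-uptake region should land on the tumor side.
new_region = first_order_features(rng.normal(7.5, 1.5, 100))
pred = clf.predict(new_region.reshape(1, -1))[0]
```

In the study's setting the regions would come from the automatic PET segmentations, with shape features alongside these intensity statistics.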
Affiliation(s)
- Chih-Yang Hsu
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Mike Doubrovin
- Department of Diagnostic Imaging, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Omar Mohammed
- University of Tennessee Health Sciences College of Medicine, 910 Madison Ave # 1002, Memphis, TN, 38103, USA
- Barry L Shulkin
- Department of Diagnostic Imaging, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Sue Kaste
- Department of Diagnostic Imaging, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA; Department of Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA; Department of Radiology, University of Tennessee Health Sciences, Memphis, TN, USA
- Sara Federico
- Department of Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Monica Metzger
- Department of Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Matthew Krasin
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Christopher Tinkle
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- Thomas E Merchant
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
- John T Lucas
- Department of Radiation Oncology, St. Jude Children's Research Hospital, 262 Danny Thomas Place, Memphis, TN, 38105, USA
8
3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models. Comput Math Methods Med 2017; 2017:9818506. PMID: 28280519. PMCID: PMC5322574. DOI: 10.1155/2017/9818506.
Abstract
Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for the inhomogeneities of CT images, we employ discriminative features extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, the higher-order spatial model adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built using a set of training CT data and is updated during segmentation using not only region labels but also voxel appearances in neighboring spatial locations. Our framework's performance was evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative evaluation between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
9
Iglesias JE, Sabuncu MR. Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. PMID: 26201875. PMCID: PMC4532640. DOI: 10.1016/j.media.2015.06.012.
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing, et al. (2004), Klein, et al. (2005), and Heckemann, et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
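The simplest label-fusion strategy in the MAS family the survey covers is majority voting over propagated atlas labels. A minimal sketch, assuming the atlases have already been registered to the target and their label maps propagated:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse propagated label maps (all the same shape) by per-voxel majority.
    Ties resolve to the lowest label index via argmax."""
    stacked = np.stack(atlas_labels)       # shape: (n_atlases, ...)
    n_labels = stacked.max() + 1
    # Count votes per label at each voxel, then take the winning label.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 atlas label maps (0 = background, 1 = organ).
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 1]])
fused = majority_vote([a1, a2, a3])
# per-voxel majorities: [[0, 1], [0, 1]]
```

More sophisticated fusion schemes discussed in the survey (weighted voting, STAPLE-style probabilistic fusion) replace the uniform vote counts with atlas- or voxel-specific weights.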
Affiliation(s)
- Mert R Sabuncu
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA