1. Langkilde F, Masaba P, Edenbrandt L, Gren M, Halil A, Hellström M, Larsson M, Naeem AA, Wallström J, Maier SE, Jäderling F. Manual prostate MRI segmentation by readers with different experience: a study of the learning progress. Eur Radiol 2024; 34:4801-4809. PMID: 38165432; PMCID: PMC11213744; DOI: 10.1007/s00330-023-10515-4
Abstract
OBJECTIVE To evaluate the learning progress of less experienced readers in prostate MRI segmentation. MATERIALS AND METHODS One hundred bi-parametric prostate MRI scans were retrospectively selected from the Göteborg Prostate Cancer Screening 2 Trial (single center). Nine readers with varying degrees of segmentation experience were involved: one expert radiologist, two experienced radiology residents, two inexperienced radiology residents, and four novices. The task was to segment the whole prostate gland, with the expert's segmentations used as reference. For all readers except three of the novices, the 100 MRI scans were divided into five rounds (cases 1-10, 11-25, 26-50, 51-75, 76-100); these three novices segmented only 50 cases (three rounds). After each round, a one-on-one feedback session between the expert and the reader was held, with feedback on systematic errors and potential improvements for the next round. A Dice similarity coefficient (DSC) > 0.8 was considered accurate. RESULTS Using DSC > 0.8 as the threshold, the novices had a total of 194 accurate segmentations out of 250 (77.6%), and the residents had 397/400 (99.2%). In round 1, the novices had 19/40 (47.5%) accurate segmentations, in round 2, 41/60 (68.3%), and in round 3, 84/100 (84.0%), indicating learning progress. CONCLUSIONS Radiology residents, regardless of prior experience, showed high segmentation accuracy. Novices showed larger interindividual variation and lower segmentation accuracy than radiology residents. To prepare datasets for artificial intelligence (AI) development, employing radiology residents seems safe and provides a good balance between cost-effectiveness and segmentation accuracy; employing novices should only be considered on an individual basis. CLINICAL RELEVANCE STATEMENT Employing radiology residents for prostate MRI segmentation seems safe and can potentially reduce the workload of expert radiologists. Employing novices should only be considered on an individual basis. KEY POINTS • Using less experienced readers for prostate MRI segmentation is cost-effective but may reduce quality. • Radiology residents provided high-accuracy segmentations, while novices showed large inter-reader variability. • To prepare datasets for AI development, employing radiology residents seems safe and might provide a good balance between cost-effectiveness and segmentation accuracy, while novices should only be employed on an individual basis.
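The DSC > 0.8 accuracy criterion above compares each reader's binary mask against the expert reference. A minimal sketch of that check, assuming NumPy binary masks of equal shape; the array names and toy data are illustrative, not from the study's code.

```python
import numpy as np

def dice_coefficient(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    ref = reference.astype(bool)
    cand = candidate.astype(bool)
    intersection = np.logical_and(ref, cand).sum()
    denominator = ref.sum() + cand.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denominator

# Toy example: a reader's segmentation judged against the expert reference.
expert = np.zeros((8, 8), dtype=np.uint8)
expert[2:6, 2:6] = 1
reader = np.zeros((8, 8), dtype=np.uint8)
reader[2:6, 2:7] = 1  # slightly over-segmented

dsc = dice_coefficient(expert, reader)
print(f"DSC = {dsc:.3f}, accurate = {dsc > 0.8}")
```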
Affiliation(s)
- Fredrik Langkilde: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Patrick Masaba: Department of Molecular Medicine and Surgery (MMK), Karolinska Institutet, Stockholm, Sweden
- Lars Edenbrandt: Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Magnus Gren: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Airin Halil: Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Mikael Hellström: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Ameer Ali Naeem: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Jonas Wallström: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Stephan E Maier: Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Fredrik Jäderling: Department of Molecular Medicine and Surgery (MMK), Karolinska Institutet, Stockholm, Sweden; Department of Diagnostic Radiology, Capio S:T Göran's Hospital, Stockholm, Sweden
2. Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. PMID: 37822196; DOI: 10.1016/j.diii.2023.08.001
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with various experiences independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference. Variability is minimal for three readers, which represent a tipping point in its evolution, with both pairwise-based metrics and metrics computed with respect to a reference. Accordingly, three readers may represent an optimal number to determine references for artificial intelligence applications.
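The two evaluation strategies above (pairwise consistency versus conformity to a consensus reference) can be illustrated with a rough NumPy sketch that computes pairwise DSC between readers and DSC against a simple majority-vote consensus. The reader masks below are synthetic, and STAPLE, which the study used alongside majority voting, is not reimplemented here.

```python
import itertools
import numpy as np

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Synthetic example: three readers delineating roughly the same structure.
rng = np.random.default_rng(0)
base = np.zeros((32, 32), dtype=bool)
base[8:24, 8:24] = True
readers = [np.logical_xor(base, rng.random(base.shape) < 0.02) for _ in range(3)]

# Pairwise consistency: DSC over all reader pairs.
pairwise = [dsc(m1, m2) for m1, m2 in itertools.combinations(readers, 2)]
print("mean pairwise DSC:", np.mean(pairwise))

# Conformity: DSC of each reader against a majority-vote consensus.
consensus = np.sum(readers, axis=0) >= (len(readers) // 2 + 1)
conformity = [dsc(m, consensus) for m in readers]
print("mean DSC vs consensus:", np.mean(conformity))
```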
Affiliation(s)
- Sébastien Molière: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France
- Dimitri Hamzaoui: Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger: Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne: Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai: Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette: Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna: Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
3. Chen X, Peng Y, Li D, Sun J. DMCA-GAN: Dual Multilevel Constrained Attention GAN for MRI-Based Hippocampus Segmentation. J Digit Imaging 2023; 36:2532-2553. PMID: 37735310; PMCID: PMC10584805; DOI: 10.1007/s10278-023-00854-5
Abstract
Precise segmentation of the hippocampus is essential for studies of human brain activity and neurological disorders. To overcome the small size of the hippocampus and the low contrast of MR images, a dual multilevel constrained attention GAN for MRI-based hippocampus segmentation is proposed in this paper, which provides a relatively effective balance between suppressing noise interference and enhancing feature learning. First, we design the dual-GAN backbone to effectively compensate for the spatial information damage caused by multiple pooling operations in the feature generation stage. Specifically, dual-GAN performs joint adversarial learning on the multiscale feature maps at the end of the generator, which yields an average Dice coefficient (DSC) gain of 5.95% over the baseline. Next, to suppress MRI high-frequency noise interference, a multilayer information constraint unit is introduced before feature decoding, which improves the sensitivity of the decoder to forecast features by 5.39% and effectively alleviates the network overfitting problem. Then, to refine the boundary segmentation, we construct a multiscale feature attention restraint mechanism, which forces the network to concentrate more on effective multiscale details, thus improving the robustness. Furthermore, the dual discriminators D1 and D2 also effectively prevent the negative migration phenomenon. The proposed DMCA-GAN obtained a DSC of 90.53% on the Medical Segmentation Decathlon (MSD) dataset with tenfold cross-validation, which is superior to the backbone by 3.78%.
Affiliation(s)
- Xue Chen: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yanjun Peng: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China; Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Dapeng Li: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Jindong Sun: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
4. Nagy E, Marterer R, Hržić F, Sorantin E, Tschauner S. Learning rate of students detecting and annotating pediatric wrist fractures in supervised artificial intelligence dataset preparations. PLoS One 2022; 17:e0276503. PMID: 36264961; PMCID: PMC9584407; DOI: 10.1371/journal.pone.0276503
Abstract
The use of artificial intelligence (AI) in image analysis is an intensively debated topic in the radiology community these days. AI computer vision algorithms typically rely on large-scale image databases annotated by specialists. Developing and maintaining them is time-consuming; thus, involving non-experts in the annotation workflow should be considered. We assessed the learning rate of inexperienced evaluators regarding correct labeling of pediatric wrist fractures on digital radiographs. Students with and without a medical background labeled wrist fractures with bounding boxes in 7,000 radiographs over ten days. Pediatric radiologists regularly discussed their mistakes. We found F1 scores, as a measure of detection rate, to increase substantially under specialist feedback (mean 0.61 ± 0.19 at day 1 to 0.97 ± 0.02 at day 10, p < 0.001), but not the Intersection over Union as a parameter of labeling precision (mean 0.27 ± 0.29 at day 1 to 0.53 ± 0.25 at day 10, p < 0.001). The time needed to correct the students' annotations decreased significantly (mean 22.7 ± 6.3 seconds per image at day 1 to 8.9 ± 1.2 seconds at day 10, p < 0.001) and was substantially lower than when the radiologists annotated alone. In conclusion, our data showed that involving undergraduate students in the annotation of pediatric wrist radiographs enables substantial time savings for specialists and should therefore be considered.
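The two metrics reported above are standard detection measures: Intersection over Union (IoU) between a student box and the reference box, and the F1 score over matched detections. A minimal sketch, assuming axis-aligned boxes given as (x1, y1, x2, y2) tuples and an illustrative IoU matching threshold of 0.5 that is not taken from the paper:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def f1_score(pred_boxes, ref_boxes, iou_threshold=0.5):
    """F1 over greedy one-to-one matches between predicted and reference boxes."""
    unmatched_refs = list(ref_boxes)
    tp = 0
    for p in pred_boxes:
        best = max(unmatched_refs, key=lambda r: iou(p, r), default=None)
        if best is not None and iou(p, best) >= iou_threshold:
            tp += 1
            unmatched_refs.remove(best)
    fp = len(pred_boxes) - tp
    fn = len(unmatched_refs)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Toy example: one student box against one reference fracture box.
student = [(10, 10, 50, 40)]
reference = [(12, 12, 48, 42)]
print("IoU:", round(iou(student[0], reference[0]), 3))
print("F1:", f1_score(student, reference))
```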
Affiliation(s)
- Eszter Nagy: Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Robert Marterer: Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Franko Hržić: Faculty of Engineering, University of Rijeka, Rijeka, Croatia
- Erich Sorantin: Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Sebastian Tschauner: Division of Pediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
5. Seo K, Lim JH, Seo J, Nguon LS, Yoon H, Park JS, Park S. Semantic Segmentation of Pancreatic Cancer in Endoscopic Ultrasound Images Using Deep Learning Approach. Cancers (Basel) 2022; 14:5111. PMID: 36291895; PMCID: PMC9600976; DOI: 10.3390/cancers14205111
Abstract
Endoscopic ultrasonography (EUS) plays an important role in diagnosing pancreatic cancer. Surgical therapy is critical to pancreatic cancer survival and can be planned properly once the characteristics of the target cancer are determined. The physical characteristics of pancreatic cancer, such as size, location, and shape, can be determined by semantic segmentation of EUS images. This study proposes a deep learning approach for the segmentation of pancreatic cancer in EUS images. EUS images were acquired from 150 patients diagnosed with pancreatic cancer. A network with deep attention features (DAF-Net) is proposed for pancreatic cancer segmentation using EUS images. The performance of the deep learning models (U-Net, Attention U-Net, and DAF-Net) was evaluated by 5-fold cross-validation. The evaluation metrics were the Dice similarity coefficient (DSC), intersection over union (IoU), receiver operating characteristic (ROC) curve, and area under the curve (AUC). Statistical analysis was performed for different stages and locations of the cancer. DAF-Net demonstrated superior segmentation performance, with DSC, IoU, AUC, sensitivity, specificity, and precision of 82.8%, 72.3%, 92.7%, 89.0%, 98.1%, and 85.1%, respectively. The proposed deep learning approach can provide accurate segmentation of pancreatic cancer in EUS images and can effectively assist in the planning of surgical therapies.
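The abstract does not state how the 5-fold cross-validation folds were constructed; a common choice is to split at the patient level so that all images from one patient stay in the same fold. A brief sketch of that setup using scikit-learn's GroupKFold, with placeholder image and patient identifiers rather than the study's data:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Placeholder data: 20 images belonging to 10 patients (2 images each).
image_ids = np.arange(20)
patient_ids = np.repeat(np.arange(10), 2)  # groups for the splitter

# GroupKFold keeps every patient's images inside a single fold.
splitter = GroupKFold(n_splits=5)
for fold, (train_idx, test_idx) in enumerate(
    splitter.split(image_ids, groups=patient_ids)
):
    train_patients = set(patient_ids[train_idx])
    test_patients = set(patient_ids[test_idx])
    assert train_patients.isdisjoint(test_patients)  # no patient-level leakage
    print(f"fold {fold}: {len(train_idx)} train images, {len(test_idx)} test images")
```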
Affiliation(s)
- Kangwon Seo: Department of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Korea
- Jung-Hyun Lim: Division of Gastroenterology, Department of Internal Medicine, Inha University School of Medicine, Incheon 22332, Korea
- Jeongwung Seo: Department of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Korea
- Leang Sim Nguon: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
- Hongeun Yoon: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
- Jin-Seok Park: Division of Gastroenterology, Department of Internal Medicine, Inha University School of Medicine, Incheon 22332, Korea
- Suhyun Park: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
6. Salvi M, De Santi B, Pop B, Bosco M, Giannini V, Regge D, Molinari F, Meiburger KM. Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images. J Imaging 2022; 8:133. PMID: 35621897; PMCID: PMC9146644; DOI: 10.3390/jimaging8050133
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, automated algorithms for prostate segmentation can relieve physicians of a large part of this workload. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images, in which an initial segmentation of the prostate volume produced by a custom-made 3D deep network (VNet-T2) is refined using an Active Shape Model (ASM). While the deep network focuses on the three-dimensional spatial coherence of the shape, the ASM relies on local image information, and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. In the test set, the proposed method shows excellent segmentation performance, achieving a mean Dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
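The Hausdorff distance reported above is the maximum of the two directed surface-to-surface distances between the automatic and reference contours. A minimal sketch using SciPy on point sets of boundary coordinates; the toy point sets are illustrative and, unlike the paper, no voxel spacing is applied.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, D) point sets."""
    forward, _, _ = directed_hausdorff(points_a, points_b)
    backward, _, _ = directed_hausdorff(points_b, points_a)
    return max(forward, backward)

# Toy example: boundary points of an automatic and a reference contour (2D here).
auto_contour = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
ref_contour = np.array([[0.1, 0.0], [1.0, 0.2], [1.2, 1.0], [0.0, 0.9]])
print("Hausdorff distance:", hausdorff_distance(auto_contour, ref_contour))
```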
Affiliation(s)
- Massimo Salvi: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Bruno De Santi: Multi-Modality Medical Imaging (M3I), Technical Medical Centre, University of Twente, PB217, 7500 AE Enschede, The Netherlands
- Bianca Pop: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Martino Bosco: Department of Pathology, Ospedale Michele e Pietro Ferrero, 12060 Verduno, Italy
- Valentina Giannini: Department of Surgical Sciences, University of Turin, 10126 Turin, Italy; Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Daniele Regge: Department of Surgical Sciences, University of Turin, 10126 Turin, Italy; Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, 10060 Candiolo, Italy
- Filippo Molinari: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Kristen M. Meiburger: Biolab, PolitoBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
7. Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. PMID: 35300345; PMCID: PMC8920492; DOI: 10.1117/1.jmi.9.2.024001
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate, leading to a consistent zonal location and sectorial position of lesions, and can therefore be used as a helping tool for PCa diagnosis.
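The spatial and channelwise attention mentioned above can take many forms; the sketch below is a generic squeeze-and-excitation style channel attention followed by a simple spatial attention map, written in PyTorch. It is not the UFNet implementation, only an illustration of the two mechanism types; all module and parameter names are made up.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel attention (squeeze-and-excitation) plus spatial attention."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: global average pool -> bottleneck MLP -> sigmoid gates.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 1-channel map from channel-pooled features.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # reweight channels
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)  # reweight spatial locations

# Toy usage on a feature map of shape (batch, channels, H, W).
features = torch.randn(1, 32, 64, 64)
print(ChannelSpatialAttention(32)(features).shape)
```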
Affiliation(s)
- Dimitri Hamzaoui: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne: Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna: Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette: Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
8. Rouvière O, Moldovan PC, Vlachomitrou A, Gouttard S, Riche B, Groth A, Rabotnikov M, Ruffion A, Colombel M, Crouzet S, Weese J, Rabilloud M. Combined model-based and deep learning-based automated 3D zonal segmentation of the prostate on T2-weighted MR images: clinical evaluation. Eur Radiol 2022; 32:3248-3259. PMID: 35001157; DOI: 10.1007/s00330-021-08408-5
Abstract
OBJECTIVE To train and test an existing algorithm, already trained for whole-gland segmentation, for prostate zonal segmentation. METHODS The algorithm, combining model-based and deep learning-based approaches, was trained for zonal segmentation using the NCI-ISBI-2013 dataset and 70 T2-weighted datasets acquired at an academic centre. Test datasets were randomly selected among examinations performed at this centre on one of two scanners (General Electric, 1.5 T; Philips, 3 T) not used for training. Automated segmentations were corrected by two independent radiologists. When segmentation was initiated outside the prostate, images were cropped and segmentation was repeated. Factors influencing the algorithm's mean Dice similarity coefficient (DSC) and its precision were assessed using beta regression. RESULTS Eighty-two test datasets were selected; one was excluded. In 13/81 datasets, segmentation started outside the prostate, but zonal segmentation was possible after image cropping. Depending on the radiologist chosen as reference, the algorithm's median DSCs were 96.4/97.4%, 91.8/93.0% and 79.9/89.6% for whole-gland, central gland and anterior fibromuscular stroma (AFMS) segmentations, respectively. DSCs comparing radiologists' delineations were 95.8%, 93.6% and 81.7%, respectively. For all segmentation tasks, the scanner used for imaging significantly influenced the mean DSC and its precision, and the mean DSC was significantly lower in cases with initial segmentation outside the prostate. For central gland segmentation, the mean DSC was also significantly lower in larger prostates. The radiologist chosen as reference had no significant impact, except for AFMS segmentation. CONCLUSIONS The algorithm's performance fell within the range of inter-reader variability but remained significantly impacted by the scanner used for imaging. KEY POINTS • Median Dice similarity coefficients obtained by the algorithm fell within human inter-reader variability for the three segmentation tasks (whole gland, central gland, anterior fibromuscular stroma). • The scanner used for imaging significantly impacted the performance of the automated segmentation for the three segmentation tasks. • The performance of the automated segmentation of the anterior fibromuscular stroma was highly variable across patients and also showed high variability between the two radiologists.
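The crop-and-retry step described above (re-running segmentation on an image cropped around the gland when the initial result lands outside the prostate) can be sketched with SimpleITK's RegionOfInterest filter. The segment function, the stand-in volume, and the bounding-box values below are placeholders, not the algorithm evaluated in the study.

```python
import SimpleITK as sitk

def crop_to_box(image: sitk.Image, start_index, size) -> sitk.Image:
    """Crop a 3D image to a voxel-index bounding box (start index + size)."""
    return sitk.RegionOfInterest(image, size, start_index)

def segment(image: sitk.Image) -> sitk.Image:
    """Placeholder for the zonal segmentation model; returns a label image."""
    return sitk.Image(list(image.GetSize()), sitk.sitkUInt8)  # dummy empty labels

image = sitk.Image(256, 256, 32, sitk.sitkInt16)  # stand-in for a T2w volume
labels = segment(image)

# If the first pass misses the gland, crop around a rough prostate box and retry.
if sitk.GetArrayViewFromImage(labels).sum() == 0:
    cropped = crop_to_box(image, start_index=[64, 64, 4], size=[128, 128, 24])
    labels = segment(cropped)
```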
Affiliation(s)
- Olivier Rouvière: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France; Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; INSERM, LabTau, U1032, Lyon, France
- Paul Cezar Moldovan: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Anna Vlachomitrou: Philips France, 33 rue de Verdun, CS 60 055, 92156, Suresnes Cedex, France
- Sylvain Gouttard: Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Benjamin Riche: Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France; Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
- Alexandra Groth: Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Alain Ruffion: Department of Urology, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, F-69310, Pierre-Bénite, France
- Marc Colombel: Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Sébastien Crouzet: Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Juergen Weese: Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Muriel Rabilloud: Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France; Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
9. Montagne S, Hamzaoui D, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Challenge of prostate MRI segmentation on T2-weighted images: inter-observer variability and impact of prostate morphology. Insights Imaging 2021; 12:71. PMID: 34089410; PMCID: PMC8179870; DOI: 10.1186/s13244-021-01010-9
Abstract
Background Accurate prostate zonal segmentation on magnetic resonance images (MRI) is a critical prerequisite for automated prostate cancer detection. We aimed to assess the variability of manual prostate zonal segmentation by radiologists on T2-weighted (T2W) images, and to study factors that may influence it. Methods Seven radiologists of varying levels of experience segmented the whole prostate gland (WG) and the transition zone (TZ) on 40 axial T2W prostate MR images (3D T2W images for all patients, and both 3D and 2D images for a subgroup of 12 patients). Segmentation variability was evaluated based on: anatomical and morphological variation of the prostate (volume, retro-urethral lobe, intensity contrast between zones, presence of a PI-RADS ≥ 3 lesion), variation in image acquisition (3D vs 2D T2W images), and readers' experience. Several metrics, including the Dice score (DSC) and Hausdorff distance, were used to evaluate differences, with both a pairwise and a consensus (STAPLE reference) comparison. Results DSC was 0.92 (± 0.02) and 0.94 (± 0.03) for WG, and 0.88 (± 0.05) and 0.91 (± 0.05) for TZ, with pairwise comparison and consensus reference, respectively. Variability was significantly (p < 0.05) lower for the mid-gland (DSC 0.95 (± 0.02)), higher for the apex (0.90 (± 0.06)) and the base (0.87 (± 0.06)), and higher for smaller prostates (p < 0.001) and when contrast between zones was low (p < 0.05). The impact of the other studied factors was non-significant. Conclusions Variability is higher in the extreme parts of the gland, is influenced by changes in prostate morphology (volume, zone intensity ratio), and is relatively unaffected by the radiologist's level of expertise.
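The regional comparison above (apex versus mid-gland versus base) can be approximated by splitting the gland's occupied slice range into thirds along the axial direction and computing a DSC per region. A rough NumPy sketch under that assumption; how the study actually defined the three regions may differ, and the masks below are synthetic.

```python
import numpy as np

def dsc(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

def regional_dsc(mask_a: np.ndarray, mask_b: np.ndarray) -> dict:
    """DSC per region, splitting the occupied slice range into thirds (axis 0)."""
    slices = np.where(np.logical_or(mask_a, mask_b).any(axis=(1, 2)))[0]
    lo, hi = slices.min(), slices.max() + 1
    bounds = np.linspace(lo, hi, 4).astype(int)
    names = ("base", "mid", "apex")  # ordering depends on acquisition direction
    return {
        name: dsc(mask_a[s0:s1], mask_b[s0:s1])
        for name, s0, s1 in zip(names, bounds[:-1], bounds[1:])
    }

# Toy 3D masks (slices, rows, cols) from two readers.
reader1 = np.zeros((12, 32, 32), dtype=bool)
reader1[2:10, 8:24, 8:24] = True
reader2 = np.zeros_like(reader1)
reader2[2:10, 9:25, 8:24] = True
print(regional_dsc(reader1, reader2))
```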
Affiliation(s)
- Sarah Montagne: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, Paris, France; Sorbonne Universités, GRC n° 5, Oncotype-Uro, Paris, France
- Dimitri Hamzaoui: Inria, Epione Team, Université Côte D'Azur, Sophia Antipolis, Nice, France
- Alexandre Allera: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Malek Ezziane: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Anna Luzurier: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Raphaelle Quint: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Mehdi Kalai: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Nicholas Ayache: Inria, Epione Team, Université Côte D'Azur, Sophia Antipolis, Nice, France
- Hervé Delingette: Inria, Epione Team, Université Côte D'Azur, Sophia Antipolis, Nice, France
- Raphaële Renard-Penna: Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, Paris, France; Sorbonne Universités, GRC n° 5, Oncotype-Uro, Paris, France
10. Casati M, Piffer S, Calusi S, Marrazzo L, Simontacchi G, Di Cataldo V, Greto D, Desideri I, Vernaleone M, Francolini G, Livi L, Pallotta S. Methodological approach to create an atlas using a commercial auto-contouring software. J Appl Clin Med Phys 2020; 21:219-230. PMID: 33236827; PMCID: PMC7769405; DOI: 10.1002/acm2.13093
Abstract
PURPOSE The aim of this work was to establish a methodological approach for the creation and optimization of an atlas for auto-contouring, using the commercial software MIM MAESTRO (MIM Software Inc., Cleveland, OH). METHODS A computed tomography (CT) male pelvis atlas was created and optimized to evaluate how different tools and options impact the accuracy of automatic segmentation. Pelvic lymph nodes (PLN), rectum, bladder, and femurs of 55 subjects were reviewed for consistency by a senior consultant radiation oncologist with 15 years of experience. Several atlas and workflow options were tuned to optimize the accuracy of auto-contours. The deformable image registration (DIR), the finalization method, the number k of best-matching atlas subjects, and several post-processing options were studied. To test atlas performance, automatic and reference manual contours of 20 test subjects were statistically compared based on the Dice similarity coefficient (DSC) and mean distance to agreement (MDA). The effect of field of view (FOV) reduction on auto-contouring time was also investigated. RESULTS With the optimized atlas and workflow, DSC and MDA median values of bladder, rectum, PLN, and femurs were 0.91 and 1.6 mm, 0.85 and 1.6 mm, 0.85 and 1.8 mm, and 0.96 and 0.5 mm, respectively. Auto-contouring time was more than halved by strictly cropping the FOV of the subject to be contoured to the pelvic region. CONCLUSION A statistically significant improvement in auto-contour accuracy was obtained using our atlas and optimized workflow instead of the MIM Software pelvic atlas.
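The mean distance to agreement (MDA) used above averages, over the surface voxels of one contour, the distance to the nearest surface voxel of the other contour, and is often symmetrized. A sketch using SciPy's Euclidean distance transform on binary masks; the toy masks and the 1 mm voxel spacing are illustrative, not the study's data.

```python
import numpy as np
from scipy import ndimage

def surface(mask: np.ndarray) -> np.ndarray:
    """Boolean surface of a binary mask (voxels removed by a one-step erosion)."""
    return mask & ~ndimage.binary_erosion(mask)

def mean_distance_to_agreement(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface distance between two binary masks, in mm."""
    surf_a, surf_b = surface(mask_a.astype(bool)), surface(mask_b.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    a_to_b = dist_to_b[surf_a]
    b_to_a = dist_to_a[surf_b]
    return float(np.concatenate([a_to_b, b_to_a]).mean())

# Toy example: two slightly different bladder-like masks on a 1 mm grid.
auto = np.zeros((20, 20, 20), dtype=bool)
auto[5:15, 5:15, 5:15] = True
manual = np.zeros_like(auto)
manual[5:15, 5:16, 5:15] = True
print("MDA (mm):", mean_distance_to_agreement(auto, manual))
```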
Affiliation(s)
- Marta Casati: Department of Medical Physics, Careggi University Hospital, Florence, Italy
- Stefano Piffer: Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy; National Institute of Nuclear Physics (INFN), Florence, Italy
- Silvia Calusi: Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Livia Marrazzo: Department of Medical Physics, Careggi University Hospital, Florence, Italy
- Daniela Greto: Department of Radiation Oncology, Careggi University Hospital, Florence, Italy
- Isacco Desideri: Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Marco Vernaleone: Department of Radiation Oncology, Careggi University Hospital, Florence, Italy
- Giulio Francolini: Department of Radiation Oncology, Careggi University Hospital, Florence, Italy
- Lorenzo Livi: Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Stefania Pallotta: Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
11. Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. Comput Math Methods Med 2020; 2020:8861035. PMID: 33144873; PMCID: PMC7596462; DOI: 10.1155/2020/8861035
Abstract
Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy. However, manual segmentation of the prostate is subjective and time-consuming. Many monomodal deep learning networks have been developed for automatic whole-prostate segmentation from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice similarity coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 in the whole gland (WG), respectively. Multimodal HighRes3DNet and ScaleNet yielded higher DSC, with statistically significant differences in PZ and CG only, compared to monomodal DenseVNet, indicating that multimodal networks added value by generating better segmentation between the PZ and CG regions but did not improve WG segmentation. No significant difference was observed in the apex and base of WG segmentation between monomodal and multimodal networks, indicating that the segmentations at the apex and base were more affected by the general network architecture. The number of training data was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet was able to yield a DSC higher than 0.65 even for special cases, such as TURP or abnormal prostate, whereas HighRes3DNet's performance fluctuated with no trend despite being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variations in segmentation compared to the matched monomodal network.
12. Wagner MW, Bilbily A, Beheshti M, Shammas A, Vali R. Artificial intelligence and radiomics in pediatric molecular imaging. Methods 2020; 188:37-43. PMID: 32544594; DOI: 10.1016/j.ymeth.2020.06.008
Abstract
In the past decade, a new approach for quantitative analysis of medical images and prognostic modelling has emerged. Defined as the extraction and analysis of a large number of quantitative parameters from medical images, radiomics is an evolving field in precision medicine with the ultimate goal of the discovery of new imaging biomarkers for disease. Radiomics has already shown promising results in extracting diagnostic, prognostic, and molecular information latent in medical images. After acquisition of the medical images as part of the standard of care, a region of interest is defined, often via a manual or semi-automatic approach. An algorithm then extracts and computes quantitative radiomics parameters from the region of interest. Whereas radiomics captures quantitative values of shape and texture based on predefined mathematical terms, neural networks have recently been used to directly learn and identify predictive features from medical images. Thereby, neural networks largely forego the need for so-called "hand-engineered" features, which appears to result in significantly improved performance and reliability. Opportunities for radiomics and neural networks in pediatric nuclear medicine/radiology/molecular imaging are broad and can be thought of in three categories: automating well-defined administrative or clinical tasks, augmenting broader administrative or clinical tasks, and unlocking new methods of generating value. Specific applications include intelligent order sets, automated protocoling, improved image acquisition, computer-aided triage and detection of abnormalities, next-generation voice dictation systems, biomarker development, and therapy planning.
Affiliation(s)
- Matthias W Wagner: Department of Diagnostic Imaging, Division of Neuroradiology, The Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Alexander Bilbily: Department of Diagnostic Imaging, Division of Nuclear Medicine, The Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Mohsen Beheshti: Department of Nuclear Medicine, University Hospital, RWTH University, Aachen, Germany; Department of Nuclear Medicine & Endocrinology, Paracelsus Medical University, Salzburg, Austria
- Amer Shammas: Department of Diagnostic Imaging, Division of Nuclear Medicine, The Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Reza Vali: Department of Diagnostic Imaging, Division of Nuclear Medicine, The Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
13. Variability of manual segmentation of the prostate in axial T2-weighted MRI: A multi-reader study. Eur J Radiol 2019; 121:108716. DOI: 10.1016/j.ejrad.2019.108716
14. Jensen C, Sørensen KS, Jørgensen CK, Nielsen CW, Høy PC, Langkilde NC, Østergaard LR. Prostate zonal segmentation in 1.5T and 3T T2W MRI using a convolutional neural network. J Med Imaging (Bellingham) 2019; 6:014501. PMID: 30820440; DOI: 10.1117/1.jmi.6.1.014501
Abstract
Zonal segmentation of the prostate gland using magnetic resonance imaging (MRI) is clinically important for prostate cancer (PCa) diagnosis and image-guided treatments. A two-dimensional convolutional neural network (CNN) based on the U-net architecture was evaluated for segmentation of the central gland (CG) and peripheral zone (PZ) using a dataset of 40 patients (34 PCa positive and 6 PCa negative) scanned on two different MRI scanners (1.5T GE and 3T Siemens). Images were cropped around the prostate gland to exclude surrounding tissues, resampled to 0.5 × 0.5 × 0.5 mm voxels, and z-score normalized before being propagated through the CNN. Performance was evaluated using the Dice similarity coefficient (DSC) and mean absolute distance (MAD) in a fivefold cross-validation setup. Overall performance showed DSC of 0.794 and 0.692, and MADs of 3.349 and 2.993 for CG and PZ, respectively. Dividing the gland into apex, mid, and base showed higher DSC for the midgland compared to apex and base for both CG and PZ. We found no significant difference in DSC between the two scanners. A larger dataset, preferably with multivendor scanners, is necessary for validation of the proposed algorithm; however, our results are promising and have clinical potential.
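The preprocessing described above (crop around the gland, resample to 0.5 × 0.5 × 0.5 mm voxels, z-score normalize) is a common pipeline; a sketch with SimpleITK and NumPy follows. The crop box and the stand-in volume are placeholders, and any parameters beyond those stated in the abstract are assumptions.

```python
import numpy as np
import SimpleITK as sitk

def resample_isotropic(image: sitk.Image, new_spacing=(0.5, 0.5, 0.5)) -> sitk.Image:
    """Resample an image to the given voxel spacing with linear interpolation."""
    old_spacing, old_size = image.GetSpacing(), image.GetSize()
    new_size = [
        int(round(osz * osp / nsp))
        for osz, osp, nsp in zip(old_size, old_spacing, new_spacing)
    ]
    return sitk.Resample(
        image, new_size, sitk.Transform(), sitk.sitkLinear,
        image.GetOrigin(), new_spacing, image.GetDirection(), 0.0,
        image.GetPixelID(),
    )

# Placeholder T2w volume; in practice this would come from sitk.ReadImage(path).
volume = sitk.Image(128, 128, 24, sitk.sitkFloat32)
volume.SetSpacing((0.6, 0.6, 3.0))

# Crop around an assumed prostate bounding box (size, then start index).
cropped = sitk.RegionOfInterest(volume, [96, 96, 20], [16, 16, 2])
resampled = resample_isotropic(cropped)

array = sitk.GetArrayFromImage(resampled).astype(np.float32)
normalized = (array - array.mean()) / (array.std() + 1e-8)  # z-score
print(normalized.shape)
```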
Affiliation(s)
- Carina Jensen: Aalborg University Hospital, Department of Medical Physics, Department of Oncology, Aalborg, Denmark
- Pia Christine Høy: Aalborg University, Department of Health Science and Technology, Aalborg, Denmark
15. Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. Proc SPIE Int Soc Opt Eng 2019; 10951:109512I. PMID: 32528212; PMCID: PMC7289512; DOI: 10.1117/12.2512282
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that could be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between the prostate surface points to model the prostate shape variation based on statistical point distribution modeling. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing the algorithm's performance. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC). The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
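The statistical point-distribution shape model described above is typically built by applying principal component analysis to corresponding surface-point coordinates across training subjects. A minimal NumPy sketch under that standard formulation; the training shapes here are random stand-ins, and shape alignment (Procrustes) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in training data: 23 subjects x 100 corresponding surface points (x, y, z).
n_subjects, n_points = 23, 100
shapes = rng.normal(size=(n_subjects, n_points * 3))

# Point distribution model: mean shape + principal modes of variation.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)
variances = singular_values**2 / (n_subjects - 1)

# Keep the modes explaining ~95% of shape variance.
cum = np.cumsum(variances) / variances.sum()
n_modes = int(np.searchsorted(cum, 0.95) + 1)

# Reconstruct a plausible shape from mode weights b (here: +1 std along each mode).
b = np.sqrt(variances[:n_modes])
new_shape = (mean_shape + components[:n_modes].T @ b).reshape(n_points, 3)
print(n_modes, new_shape.shape)
```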
Affiliation(s)
- Maysam Shahedi: Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek: Department of Bioengineering, The University of Texas at Dallas, Richardson, TX; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li: Department of Bioengineering, The University of Texas at Dallas, Richardson, TX; Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu: State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang: Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma: Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M. Schuster: Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei: Department of Bioengineering, The University of Texas at Dallas, Richardson, TX; Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
16. Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. PMID: 29787940; DOI: 10.1016/j.compbiomed.2018.05.018
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but it can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been successfully used in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published works on deep learning methods that can be applied to radiotherapy, which are classified into seven categories related to the patient workflow, and can provide some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between these two communities to develop dedicated radiotherapy applications.
Affiliation(s)
- Philippe Meyer: Department of Medical Physics, Paul Strauss Center, Strasbourg, France
17. Piert M, Shankar PR, Montgomery J, Kunju LP, Rogers V, Siddiqui J, Rajendiran T, Hearn J, George A, Shao X, Davenport MS. Accuracy of tumor segmentation from multi-parametric prostate MRI and 18F-choline PET/CT for focal prostate cancer therapy applications. EJNMMI Res 2018; 8:23. PMID: 29589155; PMCID: PMC5869349; DOI: 10.1186/s13550-018-0377-5
Abstract
BACKGROUND The study aims to assess the accuracy of multi-parametric prostate MRI (mpMRI) and 18F-choline PET/CT in tumor segmentation for clinically significant prostate cancer. 18F-choline PET/CT and 3 T mpMRI were performed in 10 prospective subjects prior to prostatectomy. All subjects had a single biopsy-confirmed focus of Gleason ≥ 3+4 cancer. Two radiologists (readers 1 and 2) determined tumor boundaries based on in vivo mpMRI sequences, with clinical and pathologic data available. 18F-choline PET data were co-registered to T2-weighted 3D sequences and a semi-automatic segmentation routine was used to define tumor volumes. Registration of whole-mount surgical pathology to in vivo imaging was conducted utilizing two ex vivo prostate specimen MRIs, followed by gross sectioning of the specimens within a custom-made 3D-printed plastic mold. Overlap and similarity coefficients of manual segmentations (seg1, seg2) and 18F-choline-based segmented lesions (seg3) were compared to the pathologic reference standard. RESULTS All segmentation methods greatly underestimated the true tumor volumes. Human readers (seg1, seg2) and the PET-based segmentation (seg3) underestimated an average of 79, 80, and 58% of the tumor volumes, respectively. Combining segmentation volumes (union of seg1, seg2, seg3 = seg4) decreased the mean underestimated tumor volume to 42% of the true tumor volume. When using the combined segmentation with 5 mm contour expansion, the mean underestimated tumor volume was significantly reduced to 0.03 ± 0.05 mL (2.04 ± 2.84%). Substantial safety margins up to 11-15 mm were needed to include all tumors when the initial segmentation boundaries were drawn by human readers or the semi-automated 18F-choline segmentation tool. Combining MR-based human segmentations with the metabolic information based on 18F-choline PET reduced the necessary safety margin to a maximum of 9 mm to cover all tumors entirely. CONCLUSIONS To improve the outcome of focal therapies for significant prostate cancer, it is imperative to recognize the full extent of the underestimation of tumor volumes by mpMRI. Combining metabolic information from 18F-choline with MRI-based segmentation can improve tumor coverage. However, this approach requires confirmation in further clinical studies.
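The combination step above (the union of seg1-seg3 plus a fixed contour expansion) maps naturally onto binary mask operations; a sketch with NumPy and SciPy follows. The masks, voxel spacing, and the 5 mm margin applied here are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import ndimage

def expand_margin(mask: np.ndarray, margin_mm: float, spacing_mm) -> np.ndarray:
    """Dilate a binary mask by an approximately isotropic margin in millimetres."""
    dist = ndimage.distance_transform_edt(~mask, sampling=spacing_mm)
    return mask | (dist <= margin_mm)

# Toy reader/PET segmentations on a grid with 3 x 1 x 1 mm voxels.
spacing = (3.0, 1.0, 1.0)  # (slice, row, col) spacing in mm
seg1 = np.zeros((16, 64, 64), dtype=bool)
seg1[6:10, 20:30, 20:30] = True
seg2 = np.zeros_like(seg1)
seg2[6:10, 22:32, 20:30] = True
seg3 = np.zeros_like(seg1)
seg3[5:9, 20:30, 22:34] = True

seg4 = seg1 | seg2 | seg3                          # combined segmentation (union)
seg4_expanded = expand_margin(seg4, 5.0, spacing)  # add a 5 mm safety margin
print(seg4.sum(), seg4_expanded.sum())
```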
Affiliation(s)
- Morand Piert: Radiology Department, University of Michigan, Ann Arbor, MI, USA; Department of Radiology, Division of Nuclear Medicine, University of Michigan Health System, University Hospital B1G505C, 1500 E. Medical Center Drive, Ann Arbor, MI 48109-0028, USA
- Virginia Rogers: Radiology Department, University of Michigan, Ann Arbor, MI, USA
- Javed Siddiqui: Pathology Department, University of Michigan, Ann Arbor, MI, USA
- Jason Hearn: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Arvin George: Urology Department, University of Michigan, Ann Arbor, MI, USA
- Xia Shao: Radiology Department, University of Michigan, Ann Arbor, MI, USA