1
Alavi H, Seifi M, Rouhollahei M, Rafati M, Arabfard M. Development of Local Software for Automatic Measurement of Geometric Parameters in the Proximal Femur Using a Combination of a Deep Learning Approach and an Active Shape Model on X-ray Images. Journal of Imaging Informatics in Medicine 2024; 37:633-652. [PMID: 38343246; PMCID: PMC11031524; DOI: 10.1007/s10278-023-00953-3]
Abstract
Proximal femur geometry is an important risk factor for diagnosing and predicting hip and femur injuries. Hence, the development of an automated approach for measuring these parameters could help physicians with the early identification of hip and femur ailments. This paper presents a technique that combines the active shape model (ASM) and deep learning methodologies. First, the femur boundary is extracted by a deep learning neural network. Then, the femur's anatomical landmarks are fitted to the extracted border using the ASM method. Finally, the geometric parameters of the proximal femur, including femur neck axis length (FNAL), femur head diameter (FHD), femur neck width (FNW), shaft width (SW), neck shaft angle (NSA), and alpha angle (AA), are calculated by measuring the distances and angles between the landmarks. The dataset of hip radiographic images consisted of 428 images, with 208 men and 220 women. These images were split into training and testing sets for analysis. The deep learning network and ASM were subsequently trained on the training dataset. In the testing dataset, the automatic measurement of FNAL, FHD, FNW, SW, NSA, and AA parameters resulted in mean errors of 1.19%, 1.46%, 2.28%, 2.43%, 1.95%, and 4.53%, respectively.
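The final measurement step described above reduces to distances and angles between landmark coordinates. A minimal sketch of that computation (the landmark names and 2D coordinates are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def distance(p, q):
    """Euclidean distance between two landmarks, e.g. for FHD, FNW, or SW."""
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

def angle_deg(u, v):
    """Angle in degrees between two direction vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def neck_shaft_angle(head_center, neck_base, shaft_top, shaft_bottom):
    """NSA: angle between the femoral neck axis and the femoral shaft axis."""
    neck_axis = np.asarray(head_center, dtype=float) - np.asarray(neck_base, dtype=float)
    shaft_axis = np.asarray(shaft_top, dtype=float) - np.asarray(shaft_bottom, dtype=float)
    return angle_deg(neck_axis, shaft_axis)
```

For instance, FHD would be the distance between two diametrically opposite points on the fitted head contour, and NSA the angle between the neck and shaft axes derived from pairs of ASM landmarks.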
Affiliation(s)
- Hamid Alavi
- Department of Radiology, Health Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Mehdi Seifi
- Department of Radiology, Health Research Center, Life Style Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Mahboubeh Rouhollahei
- School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Chemical Injuries Research Center, Systems Biology and Poisonings Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
- Mehravar Rafati
- Department of Medical Physics and Radiology, Faculty of Paramedicine, Kashan University of Medical Sciences, Kashan, Iran
- Masoud Arabfard
- Chemical Injuries Research Center, Systems Biology and Poisonings Institute, Baqiyatallah University of Medical Sciences, Tehran, Iran
2
Utaibi KA, Ahmad U, Sait SM, Iqbal S. Medical imaging and nano-engineering advances with artificial intelligence. Proceedings of the Institution of Mechanical Engineers, Part N: Journal of Nanomaterials, Nanoengineering and Nanosystems 2023. [DOI: 10.1177/23977914231161443]
Abstract
Medical imaging is a broad field of research, and the use of artificial intelligence to analyze such images is termed AI imaging. AI imaging is further divided into sub-branches spanning computational, theoretical, and practical experiments in wet and dry labs. This research covers the background of medical imaging, recent advances in medical imaging for oncology, and the field's challenges and possible solutions. Several computational and programming tools are outlined. Image segmentation is an important process because it allows medical images to be explored in greater detail. The steps involved in image segmentation are outlined, and numerical experiments are performed on a set of breast cancer medical images. The research concludes that achievements in this domain are driven by smart programming and computational tools and by computer vision. Step-by-step deep learning protocols designed for different types of medical imaging, such as X-rays, CT scans, and MRI, are also documented to provide a comprehensive understanding that can help bridge the domains of medicine and computer vision in a reliable and fruitful manner.
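As a concrete instance of the intensity-based segmentation steps the abstract refers to, here is a minimal from-scratch Otsu thresholding sketch (a generic illustration only; the paper's actual pipeline is not specified here):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the intensity threshold that maximizes between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist).astype(float)             # pixel count in class 0 (<= cut)
    w1 = w0[-1] - w0                               # pixel count in class 1 (> cut)
    cum = np.cumsum(hist * centers)
    mu0 = cum / np.maximum(w0, 1e-12)              # class-0 mean intensity
    mu1 = (cum[-1] - cum) / np.maximum(w1, 1e-12)  # class-1 mean intensity
    between = w0 * w1 * (mu0 - mu1) ** 2           # between-class variance per cut
    return centers[int(np.argmax(between[:-1]))]   # skip the empty class-1 cut

def segment(img):
    """Binary foreground mask: pixels brighter than the Otsu threshold."""
    return img > otsu_threshold(img)
```

Real pipelines add preprocessing (denoising, normalization) and post-processing (morphology, connected components) around such a core step.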
Affiliation(s)
- Khalid Al Utaibi
- Computer Science and Software Engineering Department, University of Ha’il, Ha’il, Saudi Arabia
- Usama Ahmad
- Department of Mathematics, COMSATS University Islamabad, Islamabad, Pakistan
- Sadiq M Sait
- Center for Communications and IT Research, Research Institute, King Fahd University of Petroleum & Minerals, Dhahran, Saudi Arabia
- Sohail Iqbal
- Department of General Medicine, Shahdara Hospital, Lahore, Pakistan
3
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. Information Fusion 2023. [DOI: 10.1016/j.inffus.2022.09.031]
4
5
A Synopsis of Machine and Deep Learning in Medical Physics and Radiology. Journal of Basic and Clinical Health Sciences 2022. [DOI: 10.30621/jbachs.960154]
Abstract
Machine learning (ML) and deep learning (DL) technologies introduced in the fields of medical physics, radiology, and oncology have made great strides in the past few years. Many applications have proven to be efficacious automated diagnosis and radiotherapy systems. This paper outlines DL's general concepts and principles, key computational methods, and resources, as well as the implementation of automated models in diagnostic radiology and radiation oncology research. The potential challenges of DL technology and possible solutions are also discussed.
6
Chen A, Chen F, Li X, Zhang Y, Chen L, Chen L, Zhu J. A Feasibility Study of Deep Learning-Based Auto-Segmentation Directly Used in VMAT Planning Design and Optimization for Cervical Cancer. Front Oncol 2022; 12:908903. [PMID: 35719942; PMCID: PMC9198405; DOI: 10.3389/fonc.2022.908903]
Abstract
Purpose: To investigate the dosimetric impact on target volumes and organs at risk (OARs) when unmodified auto-segmented OAR contours are used directly in the design of treatment plans.
Materials and Methods: A total of 127 patients with cervical cancer were collected for retrospective analysis, including 105 patients in the training set and 22 patients in the testing set. The 3D U-Net architecture was used for model training and auto-segmentation of nine types of organs at risk. The auto-segmented and manually segmented organ contours were used for treatment plan optimization to obtain the AS-VMAT (automatic segmentations VMAT) plan and the MS-VMAT (manual segmentations VMAT) plan, respectively. Geometric accuracy between the manual and predicted contours was evaluated using the Dice similarity coefficient (DSC), mean distance-to-agreement (MDA), and Hausdorff distance (HD). The dose-volume histogram (DVH) and the gamma passing rate were used to identify the dose differences between the AS-VMAT and MS-VMAT plans.
Results: Average DSC, MDA, and HD95 across all OARs were 0.82–0.96, 0.45–3.21 mm, and 2.30–17.31 mm on the testing set, respectively. The D99% in the rectum and the Dmean in the spinal cord were 6.04 Gy (P = 0.037) and 0.54 Gy (P = 0.026) higher, respectively, in the AS-VMAT plans than in the MS-VMAT plans. The V20, V30, and V40 in the rectum increased by 1.35% (P = 0.027), 1.73% (P = 0.021), and 1.96% (P = 0.008), respectively, whereas the V10 in the spinal cord increased by 1.93% (P = 0.011). The differences in other dosimetry parameters were not statistically significant. The gamma passing rates in the clinical target volume (CTV) were 92.72% and 98.77% using the 2%/2 mm and 3%/3 mm criteria, respectively, which satisfied the clinical requirements.
Conclusions: The dose distributions of target volumes were unaffected when auto-segmented organ contours were used in the design of treatment plans, whereas the impact of automated segmentation on the doses to OARs was more complicated. We suggest that auto-segmented contours of tissues in close proximity to the target volume be carefully checked and corrected where necessary.
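The geometric accuracy metrics used in studies like this one (DSC, HD) can be computed directly from binary masks. A small self-contained sketch using brute-force distances (fine for toy arrays, far too slow for full CT volumes, where a KD-tree or distance transform would be used instead):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between foreground voxels of two masks."""
    pa = np.argwhere(a) * spacing           # foreground coordinates of mask a
    pb = np.argwhere(b) * spacing           # foreground coordinates of mask b
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(),         # directed HD a -> b
               d.min(axis=0).max())         # directed HD b -> a
```

HD95 replaces the final `max` over nearest-neighbor distances with their 95th percentile, which makes the metric robust to outlier voxels.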
Affiliation(s)
- Along Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Fei Chen
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Xiaofang Li
- Department of Radiation Oncology, The Second Affiliated Hospital of Zunyi Medical University, Zunyi, China
- Yazhi Zhang
- Department of Oncology and Hematology, The Six People’s Hospital of Huizhou City, Huiyang Hospital Affiliated to Southern Medical University, Huizhou, China
- Li Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Lixin Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- *Correspondence: Lixin Chen; Jinhan Zhu
- Jinhan Zhu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- *Correspondence: Lixin Chen; Jinhan Zhu
7
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345; PMCID: PMC8920492; DOI: 10.1117/1.jmi.9.2.024001]
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI.
Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce the prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess the model's performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels.
Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved.
Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate, leading to consistent zonal locations and sectorial positions of lesions, and can therefore be used as a helping tool for PCa diagnosis.
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
8
Liu Y, Miao Q, Surawech C, Zheng H, Nguyen D, Yang G, Raman SS, Sung K. Deep Learning Enables Prostate MRI Segmentation: A Large Cohort Evaluation With Inter-Rater Variability Analysis. Front Oncol 2021; 11:801876. [PMID: 34993152; PMCID: PMC8724207; DOI: 10.3389/fonc.2021.801876]
Abstract
Whole-prostate gland (WPG) segmentation plays a significant role in prostate volume measurement, treatment, and biopsy planning. This study evaluated a previously developed automatic WPG segmentation, the deep attentive neural network (DANN), on a large, consecutive patient cohort to test its feasibility in a clinical setting. With IRB approval and HIPAA compliance, the study cohort included 3,698 3T MRI scans acquired between 2016 and 2020. In total, 335 MRI scans were used to train the model, and 3,210 and 100 were used for the qualitative and quantitative evaluations of the model, respectively. In addition, DANN-enabled prostate volume estimation was evaluated on 50 MRI scans in comparison with manual prostate volume estimation. For the qualitative evaluation, visual grading by two abdominal radiologists was used to assess the WPG segmentation, and DANN demonstrated either acceptable or excellent performance in over 96% of the testing cohort on the WPG and on each prostate sub-portion (apex, midgland, or base). The two radiologists reached substantial agreement on WPG and midgland segmentation (κ = 0.75 and 0.63) and moderate agreement on apex and base segmentation (κ = 0.56 and 0.60). For the quantitative evaluation, DANN demonstrated a Dice similarity coefficient of 0.93 ± 0.02, significantly higher than baseline methods such as DeepLab v3+ and U-Net (both p values < 0.05). For the volume measurement, the differences between DANN-enabled and manual volume measurements fell within the 95% limits of agreement for 96% of the evaluation cohort. In conclusion, the study showed that the DANN achieved sufficient and consistent WPG segmentation on a large, consecutive study cohort, demonstrating its great potential to serve as a tool to measure prostate volume.
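The inter-rater agreement values (κ) reported above are Cohen's kappa. For two raters assigning categorical grades to the same cases it can be computed as follows (a generic sketch, not the study's code):

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels of the same cases."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    labels = np.unique(np.concatenate([r1, r2]))
    po = np.mean(r1 == r2)  # observed agreement
    # chance agreement: product of each rater's marginal label frequencies
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in labels)
    return (po - pe) / (1.0 - pe)
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred over simple accuracy for grading tasks.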
Affiliation(s)
- Yongkai Liu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Physics and Biology in Medicine Interdisciplinary Program (IDP), David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Qi Miao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang City, China
- Chuthaporn Surawech
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Haoxin Zheng
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Department of Computer Science, Henry Samueli School of Engineering and Applied Science, University of California, Los Angeles, CA, United States
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Steven S. Raman
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Kyunghyun Sung
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
- Physics and Biology in Medicine Interdisciplinary Program (IDP), David Geffen School of Medicine, University of California, Los Angeles, CA, United States
9
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310; PMCID: PMC8625809; DOI: 10.3390/diagnostics11111964]
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
10
Multiparametric MRI and Radiomics in Prostate Cancer: A Review of the Current Literature. Diagnostics (Basel) 2021; 11:1829. [PMID: 34679527; PMCID: PMC8534893; DOI: 10.3390/diagnostics11101829]
Abstract
Prostate cancer (PCa) represents the fourth most common cancer and the fifth leading cause of cancer death of men worldwide. Multiparametric MRI (mp-MRI) has high sensitivity and specificity in the detection of PCa, and it is currently the most widely used imaging technique for tumor localization and cancer staging. mp-MRI plays a key role in risk stratification of naïve patients, in active surveillance for low-risk patients, and in monitoring recurrence after definitive therapy. Radiomics is an emerging and promising tool which allows a quantitative tumor evaluation from radiological images via conversion of digital images into mineable high-dimensional data. The purpose of radiomics is to increase the features available to detect PCa, to avoid unnecessary biopsies, to define tumor aggressiveness, and to monitor post-treatment recurrence of PCa. The integration of radiomics data, including different imaging modalities (such as PET-CT) and other clinical and histopathological data, could improve the prediction of tumor aggressiveness as well as guide clinical decisions and patient management. The purpose of this review is to describe the current research applications of radiomics in PCa on MR images.
11
Andrew J, Mhatesh T, Sebastin RD, Sagayam KM, Eunice J, Pomplun M, Dang H. Super-resolution reconstruction of brain magnetic resonance images via lightweight autoencoder. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100713]
12
Baessler B. [Artificial Intelligence in Radiology - Definition, Potential and Challenges]. Praxis 2021; 110:48-53. [PMID: 33406927; DOI: 10.1024/1661-8157/a003597]
Abstract
Artificial Intelligence (AI) is omnipresent. It has neatly permeated our daily life, even if we are not always fully aware of its ubiquitous presence. The healthcare sector in particular is experiencing a revolution that will change our daily routine considerably in the near future. Due to its advanced digitization and its historical affinity for technology, radiology is especially prone to these developments. But what exactly is AI, and what makes it so potent that established medical disciplines such as radiology worry about their future job prospects? What are the assets of AI in radiology today, and what are the major challenges? This review article tries to answer these questions.
Affiliation(s)
- Bettina Baessler
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsspital Zürich
13
Rhee DJ, Jhingran A, Rigaud B, Netherton T, Cardenas CE, Zhang L, Vedam S, Kry S, Brock KK, Shaw W, O’Reilly F, Parkes J, Burger H, Fakie N, Trauernicht C, Simonds H, Court LE. Automatic contouring system for cervical cancer using convolutional neural networks. Med Phys 2020; 47:5648-5658. [PMID: 32964477; PMCID: PMC7756586; DOI: 10.1002/mp.14467]
Abstract
PURPOSE: To develop a tool for the automatic contouring of clinical treatment volumes (CTVs) and normal tissues for radiotherapy treatment planning in cervical cancer patients.
METHODS: An auto-contouring tool based on convolutional neural networks (CNNs) was developed to delineate three cervical CTVs and 11 normal structures (seven OARs and four bony structures) in cervical cancer treatment, for use with the Radiation Planning Assistant, a web-based automatic plan generation system. A total of 2254 retrospective clinical computed tomography (CT) scans from a single cancer center and 210 CT scans from a segmentation challenge were used to train and validate the CNN-based auto-contouring tool. The accuracy of the tool was evaluated by calculating the Sørensen-Dice similarity coefficient (DSC) and the mean surface and Hausdorff distances between the automatically generated contours and physician-drawn contours on 140 internal CT scans. A radiation oncologist scored the automatically generated contours on 30 external CT scans from three South African hospitals.
RESULTS: The average DSC, mean surface distance, and Hausdorff distance of our CNN-based tool were 0.86/0.19 cm/2.02 cm for the primary CTV, 0.81/0.21 cm/2.09 cm for the nodal CTV, 0.76/0.27 cm/2.00 cm for the PAN CTV, 0.89/0.11 cm/1.07 cm for the bladder, 0.81/0.18 cm/1.66 cm for the rectum, 0.90/0.06 cm/0.65 cm for the spinal cord, 0.94/0.06 cm/0.60 cm for the left femur, 0.93/0.07 cm/0.66 cm for the right femur, 0.94/0.08 cm/0.76 cm for the left kidney, 0.95/0.07 cm/0.84 cm for the right kidney, 0.93/0.05 cm/1.06 cm for the pelvic bone, 0.91/0.07 cm/1.25 cm for the sacrum, 0.91/0.07 cm/0.53 cm for the L4 vertebral body, and 0.90/0.08 cm/0.68 cm for the L5 vertebral body. On average, 80% of the CTV, 97% of the organ-at-risk, and 98% of the bony structure contours in the external test dataset were clinically acceptable based on physician review.
CONCLUSIONS: Our CNN-based auto-contouring tool performed well on both internal and external datasets and had a high rate of clinical acceptability.
Affiliation(s)
- Dong Joo Rhee
- MD Anderson UTHealth Graduate School, Houston, TX, USA
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Anuja Jhingran
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Bastien Rigaud
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Tucker Netherton
- MD Anderson UTHealth Graduate School, Houston, TX, USA
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Carlos E. Cardenas
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lifei Zhang
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Sastry Vedam
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stephen Kry
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kristy K. Brock
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- William Shaw
- Department of Medical Physics (G68), University of the Free State, Bloemfontein, South Africa
- Frederika O’Reilly
- Department of Medical Physics (G68), University of the Free State, Bloemfontein, South Africa
- Jeannette Parkes
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Hester Burger
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Nazia Fakie
- Division of Radiation Oncology and Medical Physics, University of Cape Town and Groote Schuur Hospital, Cape Town, South Africa
- Chris Trauernicht
- Division of Medical Physics, Stellenbosch University, Tygerberg Academic Hospital, Cape Town, South Africa
- Hannah Simonds
- Division of Radiation Oncology, Stellenbosch University, Tygerberg Academic Hospital, Cape Town, South Africa
- Laurence E. Court
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
14
Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. Computational and Mathematical Methods in Medicine 2020; 2020:8861035. [PMID: 33144873; PMCID: PMC7596462; DOI: 10.1155/2020/8861035]
Abstract
Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy. However, manual segmentation of the prostate is subjective and time-consuming. Many monomodal deep learning networks have been developed for automatic segmentation of the whole prostate gland (WG) from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice score coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 in the WG, respectively. Multimodal HighRes3DNet and ScaleNet yielded higher DSC, with statistically significant differences, in the PZ and CG only compared to monomodal DenseVNet, indicating that multimodal networks added value by better separating the PZ and CG regions but did not improve WG segmentation. No significant difference was observed between monomodal and multimodal networks in segmenting the apex and base of the WG, indicating that segmentation at the apex and base was affected more by the general network architecture. The number of training images was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet yielded a DSC higher than 0.65 even for special cases, such as prostates after TURP or otherwise abnormal prostates, whereas HighRes3DNet's performance fluctuated with no clear trend despite being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variation in segmentation compared to the matched monomodal network.
15
Liu Q, Dou Q, Yu L, Heng PA. MS-Net: Multi-Site Network for Improving Prostate Segmentation With Heterogeneous MRI Data. IEEE Transactions on Medical Imaging 2020; 39:2713-2724. [PMID: 32078543; DOI: 10.1109/tmi.2020.2974574]
Abstract
Automated prostate segmentation in MRI is in high demand for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Because medical images are scarce, it is important to aggregate data from multiple sites effectively for robust model training, to alleviate the insufficiency of single-site samples. However, prostate MRIs from different sites are heterogeneous due to differences in scanners and imaging protocols, which makes it challenging to aggregate multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) that improves prostate segmentation by learning robust representations from multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing shared knowledge from multiple datasets, a novel learning paradigm, Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels so that they extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that MS-Net consistently improves performance across all datasets and outperforms state-of-the-art methods for multi-site learning.
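The Domain-Specific Batch Normalization idea described above, shared convolution kernels but separate normalization statistics and affine parameters per site, can be sketched in a few lines of NumPy (a conceptual illustration of the mechanism, not the authors' implementation, which is a full segmentation network):

```python
import numpy as np

class DomainSpecificBatchNorm:
    """Batch normalization with one set of statistics and affine params per site."""

    def __init__(self, num_features, num_sites, eps=1e-5):
        self.eps = eps
        self.gamma = np.ones((num_sites, num_features))         # per-site scale
        self.beta = np.zeros((num_sites, num_features))         # per-site shift
        self.running_mean = np.zeros((num_sites, num_features))
        self.running_var = np.ones((num_sites, num_features))

    def forward(self, x, site, momentum=0.1):
        """Training-mode forward pass; x: (batch, features), all from one site."""
        mu, var = x.mean(axis=0), x.var(axis=0)
        # update only this site's running statistics
        self.running_mean[site] = (1 - momentum) * self.running_mean[site] + momentum * mu
        self.running_var[site] = (1 - momentum) * self.running_var[site] + momentum * var
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma[site] * x_hat + self.beta[site]
```

The shared layers before and after such a module learn site-invariant features, while each site's scanner- and protocol-specific intensity statistics are absorbed by its own normalization branch.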
|
16
|
Hiremath A, Shiradkar R, Merisaari H, Prasanna P, Ettala O, Taimen P, Aronen HJ, Boström PJ, Jambor I, Madabhushi A. Test-retest repeatability of a deep learning architecture in detecting and segmenting clinically significant prostate cancer on apparent diffusion coefficient (ADC) maps. Eur Radiol 2020; 31:379-391. [PMID: 32700021 DOI: 10.1007/s00330-020-07065-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/22/2020] [Accepted: 07/02/2020] [Indexed: 12/16/2022]
Abstract
OBJECTIVES To evaluate short-term test-retest repeatability of a deep learning architecture (U-Net) in slice- and lesion-level detection and segmentation of clinically significant prostate cancer (csPCa: Gleason grade group > 1) using diffusion-weighted imaging fitted with a monoexponential function (ADCm). METHODS One hundred twelve patients with prostate cancer (PCa) underwent two prostate MRI examinations on the same day. PCa areas were annotated using whole-mount prostatectomy sections. Two U-Net-based convolutional neural networks were trained on three different ADCm b value settings for (a) slice- and (b) lesion-level detection and (c) segmentation of csPCa. Short-term test-retest repeatability was estimated using the intra-class correlation coefficient (ICC(3,1)), proportionate agreement, and the Dice similarity coefficient (DSC). A 3-fold cross-validation was performed on the training set (N = 78 patients), and performance and repeatability were evaluated on the testing data (N = 34 patients). RESULTS For the three ADCm b value settings, repeatability of mean ADCm of csPCa lesions was ICC(3,1) = 0.86-0.98. The two CNNs with U-Net-based architecture demonstrated ICC(3,1) in the range of 0.80-0.83, agreement of 66-72%, and DSC of 0.68-0.72 for slice- and lesion-level detection and segmentation of csPCa. Bland-Altman plots suggest that there is no systematic bias in agreement between inter-scan ground truth segmentation repeatability and segmentation repeatability of the networks. CONCLUSIONS For the three ADCm b value settings, the two CNNs with U-Net-based architecture were repeatable for slice-level detection of csPCa. The networks' repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
KEY POINTS • For the three ADCm b value settings, two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice-level. • The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
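ICC(3,1), the repeatability index used above, comes from a two-way ANOVA decomposition: ICC(3,1) = (MSR − MSE) / (MSR + (k − 1)·MSE), where MSR is the between-subject mean square and MSE the residual mean square over k repeated measurements. A small self-contained sketch (not the authors' code):

```python
def icc_3_1(scores):
    """ICC(3,1): two-way mixed-effects, consistency, single measurement.

    scores: list of [subject][measurement] values, e.g. test/retest pairs.
    ICC(3,1) = (MSR - MSE) / (MSR + (k - 1) * MSE)
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((scores[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # sessions
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Retest = test + constant offset: perfectly consistent, so ICC(3,1) = 1
pairs = [[1.0, 1.5], [2.0, 2.5], [3.0, 3.5], [4.0, 4.5]]
print(icc_3_1(pairs))  # 1.0
```

Because ICC(3,1) measures consistency, a constant session offset (as in the toy data) does not lower it, unlike absolute-agreement ICC forms.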
Affiliation(s)
- Amogh Hiremath
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA.
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Harri Merisaari
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Prateek Prasanna
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Otto Ettala
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Pekka Taimen
- Institute of Biomedicine, Department of Pathology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Medical Imaging Centre of Southwest Finland, Turku University Hospital, Turku, Finland
- Peter J Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA
|
17
|
da Silva GLF, Diniz PS, Ferreira JL, França JVF, Silva AC, de Paiva AC, de Cavalcanti EAA. Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans. Med Biol Eng Comput 2020; 58:1947-1964. [DOI: 10.1007/s11517-020-02199-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 05/22/2020] [Indexed: 10/24/2022]
|
18
|
Vu CC, Siddiqui ZA, Zamdborg L, Thompson AB, Quinn TJ, Castillo E, Guerrero TM. Deep convolutional neural networks for automatic segmentation of thoracic organs-at-risk in radiation oncology - use of non-domain transfer learning. J Appl Clin Med Phys 2020; 21:108-113. [PMID: 32602187 PMCID: PMC7324695 DOI: 10.1002/acm2.12871] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 11/11/2019] [Accepted: 02/29/2020] [Indexed: 12/20/2022] Open
Abstract
PURPOSE Segmentation of organs-at-risk (OARs) is an essential component of the radiation oncology workflow. Commonly segmented thoracic OARs include the heart, esophagus, spinal cord, and lungs. This study evaluated a convolutional neural network (CNN) for automatic segmentation of these OARs. METHODS The dataset was created retrospectively from consecutive radiotherapy plans containing all five OARs of interest, including 22,411 CT slices from 168 patients. Patients were divided into training, validation, and test datasets according to a 66%/17%/17% split. We trained a modified U-Net, applying transfer learning from a VGG16 image classification model trained on ImageNet. The Dice coefficient and 95% Hausdorff distance on the test set for each organ were compared to a commercial atlas-based segmentation model using the Wilcoxon signed-rank test. RESULTS On the test dataset, the median Dice coefficients for the CNN model vs. the multi-atlas model were 71% vs. 67% for the spinal cord, 96% vs. 94% for the right lung, 96% vs. 94% for the left lung, 91% vs. 85% for the heart, and 63% vs. 37% for the esophagus. The median 95% Hausdorff distances were 9.5 mm vs. 25.3 mm, 5.1 mm vs. 8.1 mm, 4.0 mm vs. 8.0 mm, 9.8 mm vs. 15.8 mm, and 9.2 mm vs. 20.0 mm for the respective organs. The results all favored the CNN model (P < 0.05). CONCLUSIONS A 2D CNN can achieve superior results to commercial atlas-based software for OAR segmentation utilizing non-domain transfer learning, which has potential utility for quality assurance and expediting patient care.
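The 95% Hausdorff distance reported above replaces the maximum of the classical Hausdorff distance with a 95th percentile, making the metric robust to single stray voxels. A sketch over 2-D point sets (illustrative; real implementations work on 3-D mask surfaces, and percentile conventions vary between toolkits):

```python
import math

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two point sets.

    For each point, take the distance to the closest point of the other
    set; pool both directed distance lists and return their 95th
    percentile (nearest-rank method), which discards the outlier spikes
    that dominate the classical (max) Hausdorff distance.
    """
    def directed(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]

    d = sorted(directed(points_a, points_b) + directed(points_b, points_a))
    rank = max(0, math.ceil(0.95 * len(d)) - 1)
    return d[rank]

# Two contours 1 mm apart, plus one stray point 20 mm away
a = [(x, 0.0) for x in range(20)] + [(0.0, 20.0)]
b = [(x, 1.0) for x in range(20)]
print(hd95(a, b))  # ~1.0: the single outlier is ignored
```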
Affiliation(s)
- Charles C. Vu
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Zaid A. Siddiqui
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Leonid Zamdborg
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Andrew B. Thompson
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Thomas J. Quinn
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Edward Castillo
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Thomas M. Guerrero
- Beaumont Artificial Intelligence Research Laboratory, Beaumont Health System, Royal Oak, MI, USA
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
|
19
|
Liechti MR, Muehlematter UJ, Schneider AF, Eberli D, Rupp NJ, Hötker AM, Donati OF, Becker AS. Manual prostate cancer segmentation in MRI: interreader agreement and volumetric correlation with transperineal template core needle biopsy. Eur Radiol 2020; 30:4806-4815. [PMID: 32306078 DOI: 10.1007/s00330-020-06786-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Revised: 02/16/2020] [Accepted: 03/02/2020] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To assess interreader agreement of manual prostate cancer lesion segmentation on multiparametric MR images (mpMRI). The secondary aim was to compare tumor volume estimates between MRI segmentation and transperineal template saturation core needle biopsy (TTSB). METHODS We retrospectively reviewed patients who had undergone mpMRI of the prostate at our institution and who had received TTSB within 190 days of the examination. Seventy-eight cancer lesions with a Gleason score of at least 3 + 4 = 7 were manually segmented in T2-weighted images by three radiologists and one medical student. Twenty lesions were also segmented in apparent diffusion coefficient (ADC) and dynamic contrast-enhanced (DCE) series. First, 20 volumetric similarity scores were computed to quantify interreader agreement. Second, manually segmented cancer lesion volumes were compared with TTSB-derived estimates by Bland-Altman analysis and Wilcoxon testing. RESULTS Interreader agreement across all readers was only moderate, with a mean T2 Dice score of 0.57 (95% CI 0.39-0.70), a volumetric similarity coefficient of 0.74 (0.48-0.89), and a Hausdorff distance of 5.23 mm (3.17-9.32 mm). The discrepancy of volume estimates between MRI and TTSB increased with tumor size. The discrepancy differed significantly between tumors with a Gleason score of 3 + 4 and higher-grade tumors (0.66 ml vs. 0.78 ml; p = 0.007). There were no significant differences between T2, ADC, and DCE segmentations. CONCLUSIONS We found at best moderate interreader agreement of manual prostate cancer segmentation in mpMRI. Additionally, our study suggests a systematic discrepancy between the tumor volume estimate by MRI segmentation and TTSB core length, especially for large and high-grade tumors. KEY POINTS • Manual prostate cancer segmentation in mpMRI shows moderate interreader agreement. • There are no significant differences between T2, ADC, and DCE segmentation agreements.
• There is a systematic difference between volume estimates derived from biopsy and MRI.
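Bland-Altman analysis, as used above, summarizes agreement between two volume estimates by the mean difference (bias) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch with made-up volumes:

```python
import math

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement between two
    paired measurements (e.g. MRI-derived vs. biopsy-derived volumes)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired volumes in ml (not data from the study)
mri_ml = [0.8, 1.2, 2.0, 3.1, 4.5]
biopsy_ml = [0.7, 1.0, 1.9, 2.6, 3.6]
bias, lo, hi = bland_altman(mri_ml, biopsy_ml)
```

Plotting the differences against the pair means (the usual Bland-Altman plot) additionally reveals whether the discrepancy grows with lesion size, as the study reports.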
Affiliation(s)
- Marc R Liechti
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Urs J Muehlematter
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Aurelia F Schneider
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Daniel Eberli
- Department of Urology, University Hospital of Zurich, Zurich, Switzerland
- Niels J Rupp
- Department of Pathology and Molecular Pathology, University Hospital of Zurich, Zurich, Switzerland
- Andreas M Hötker
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Olivio F Donati
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Anton S Becker
- Institute of Diagnostic and Interventional Radiology, University Hospital of Zurich, Zurich, Switzerland
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
|
20
|
Eppenhof K, Maspero M, Savenije M, de Boer J, van der Voort van Zyp J, Raaymakers B, Raaijmakers A, Veta M, van den Berg C, Pluim J. Fast contour propagation for MR-guided prostate radiotherapy using convolutional neural networks. Med Phys 2020; 47:1238-1248. [PMID: 31876300 PMCID: PMC7079098 DOI: 10.1002/mp.13994] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 12/09/2019] [Accepted: 12/18/2019] [Indexed: 12/27/2022] Open
Abstract
PURPOSE To quickly and automatically propagate organ contours from pretreatment to fraction images in magnetic resonance (MR)-guided prostate external-beam radiotherapy. METHODS Five prostate cancer patients underwent 20 fractions of image-guided external-beam radiotherapy on a 1.5 T MR-Linac system. For each patient, a pretreatment T2-weighted three-dimensional (3D) MR imaging (MRI) scan was used to delineate the clinical target volume (CTV) contours. The same scan was repeated during each fraction, with the CTV contour being manually adapted if necessary. A convolutional neural network (CNN) was trained for combined image registration and contour propagation. The network estimated the propagated contour and a deformation field between the two input images. The training set consisted of a synthetically generated ground truth of randomly deformed images and prostate segmentations. We performed a leave-one-out cross-validation on the five patients and propagated the prostate segmentations from the pretreatment to the fraction scans. Three variants of the CNN, aimed at investigating supervision based on optimizing segmentation overlap, optimizing the registration, or a combination of the two, were compared with results from the open-source deformable registration software package Elastix. RESULTS The neural networks trained on segmentation overlap or the combined objective achieved significantly better Hausdorff distances between predicted and ground truth contours than Elastix, at the much faster registration speed of 0.5 s. The CNN variant trained to optimize both the prostate overlap and the deformation field, and the variant trained to only maximize the prostate overlap, produced the best propagation results. CONCLUSIONS A CNN trained on maximizing prostate overlap and minimizing registration errors provides a fast and accurate method for deformable contour propagation for prostate MR-guided radiotherapy.
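Once the network has estimated a deformation field, contour propagation amounts to warping the pretreatment segmentation into fraction-image coordinates. A toy 2-D numpy sketch using backward nearest-neighbour warping (illustrative conventions; the paper's network operates on 3-D volumes and learns the field jointly with the contour):

```python
import numpy as np

def propagate(seg, disp):
    """Warp a binary segmentation with a dense displacement field.

    seg:  (H, W) binary mask on the pretreatment image.
    disp: (2, H, W) displacement (dy, dx) mapping each fraction-image
          voxel back to pretreatment coordinates (backward warping).
    Nearest-neighbour sampling keeps the output strictly binary.
    """
    h, w = seg.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(yy + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xx + disp[1]).astype(int), 0, w - 1)
    return seg[src_y, src_x]

seg = np.zeros((6, 6), dtype=np.uint8)
seg[1:3, 1:3] = 1                      # small square 'CTV' mask
disp = np.zeros((2, 6, 6))
disp[1] = -2.0                         # pure translation: 2 voxels right
moved = propagate(seg, disp)           # mask now occupies columns 3-4
```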
Affiliation(s)
- K.A.J. Eppenhof
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- M. Maspero
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- M.H.F. Savenije
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- J.C.J. de Boer
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- B.W. Raaymakers
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- A.J.E. Raaijmakers
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- M. Veta
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- C.A.T. van den Berg
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- J.P.W. Pluim
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, The Netherlands
|
21
|
Liang Y, Schott D, Zhang Y, Wang Z, Nasief H, Paulson E, Hall W, Knechtges P, Erickson B, Li XA. Auto-segmentation of pancreatic tumor in multi-parametric MRI using deep convolutional neural networks. Radiother Oncol 2020; 145:193-200. [PMID: 32045787 DOI: 10.1016/j.radonc.2020.01.021] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 01/16/2020] [Accepted: 01/21/2020] [Indexed: 02/07/2023]
Abstract
PURPOSE The recently introduced MR-Linac enables MRI-guided Online Adaptive Radiation Therapy (MRgOART) of pancreatic cancer, for which fast and accurate segmentation of the gross tumor volume (GTV) is essential. This work aims to develop an algorithm allowing automatic segmentation of the pancreatic GTV based on multi-parametric MRI using deep neural networks. METHODS We employed a square-window based convolutional neural network (CNN) architecture with three convolutional layer blocks. The model was trained, with data augmentation, using about 245,000 normal and 230,000 tumor patches extracted from 37 DCE MRI sets acquired in 27 patients. These images were bias corrected, intensity standardized, and resampled to a fixed voxel size of 1 × 1 × 3 mm3. The trained model was tested on 19 DCE MRI sets from another 13 patients, and the model-generated GTVs were compared with GTVs manually segmented by an experienced radiologist and radiation oncologists based on the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and Mean Surface Distance (MSD). RESULTS The mean values and standard deviations of the performance metrics on the test set were DSC = 0.73 ± 0.09, HD = 8.11 ± 4.09 mm, and MSD = 1.82 ± 0.84 mm. The interobserver variations were estimated to be DSC = 0.71 ± 0.08, HD = 7.36 ± 2.72 mm, and MSD = 1.78 ± 0.66 mm, which did not differ significantly from the model performance (p = 0.6, 0.52, and 0.88, respectively). CONCLUSION We developed a CNN-based model for auto-segmentation of pancreatic GTV in multi-parametric MRI. Model performance was comparable to that of expert radiation oncologists. This model provides a framework to incorporate multimodality images and daily MRI for GTV auto-segmentation in MRgOART.
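A patch-based classifier like the one above needs labeled square windows sampled from each slice. A numpy sketch of such extraction (window size, stride, and labeling threshold here are arbitrary illustrations, not the paper's settings):

```python
import numpy as np

def extract_patches(image, mask, size=16, stride=8, tumor_frac=0.5):
    """Slide a square window over a 2-D slice; label a patch 'tumor' (1)
    when at least `tumor_frac` of its voxels lie inside the GTV mask.
    Returns (patches, labels)."""
    patches, labels = [], []
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            win = image[y:y + size, x:x + size]
            frac = mask[y:y + size, x:x + size].mean()
            patches.append(win)
            labels.append(int(frac >= tumor_frac))
    return np.stack(patches), np.array(labels)

img = np.random.rand(64, 64)          # stand-in for a DCE MRI slice
gtv = np.zeros((64, 64)); gtv[16:48, 16:48] = 1
X, y = extract_patches(img, gtv)      # 49 patches of 16x16 voxels
```

In practice the tumor class is heavily outnumbered, which is why the study balances the training set to roughly equal numbers of normal and tumor patches.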
Affiliation(s)
- Ying Liang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- Diane Schott
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- Ying Zhang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- Zhiwu Wang
- Department of Chemoradiotherapy, Tangshan People's Hospital, PR China
- Haidy Nasief
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- Eric Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- William Hall
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- Paul Knechtges
- Department of Radiology, Medical College of Wisconsin, Milwaukee, USA
- Beth Erickson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, USA
|
22
|
Chen CM, Huang YS, Fang PW, Liang CW, Chang RF. A computer-aided diagnosis system for differentiation and delineation of malignant regions on whole-slide prostate histopathology image using spatial statistics and multidimensional DenseNet. Med Phys 2020; 47:1021-1033. [PMID: 31834623 DOI: 10.1002/mp.13964] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 11/26/2019] [Accepted: 12/04/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE Prostate cancer (PCa) is a major health concern in aging males, and proper management of the disease depends on accurately interpreting pathology specimens. However, reading prostatectomy histopathology slides, which is done primarily for staging, is usually time-consuming and differs from reading small biopsy specimens, which are used mainly for diagnosis. Generally, each prostatectomy specimen generates tens of large tissue sections, and for each section, the malignant region needs to be delineated to assess the amount of tumor and its burden. With the aim of reducing the workload of pathologists, in this study, we focus on developing a computer-aided diagnosis (CAD) system based on a densely connected convolutional neural network (DenseNet) that outlines the malignant regions on whole-slide histopathology images. METHODS We use an efficient color normalization process based on ranklet transformation to automatically correct the intensity of the images. Additionally, we use spatial probability to segment the tissue structure regions for different tissue recognition patterns. Based on the segmentation, we incorporate a multidimensional structure into DenseNet to determine whether a particular prostatic region is benign or malignant. RESULTS As demonstrated by experimental results on a test set of 2,663 images from 32 whole-slide prostate histopathology images, our proposed system achieved average Dice coefficient, Jaccard similarity coefficient, and Boundary F1 scores of 0.726, 0.6306, and 0.5209, respectively. The accuracy, sensitivity, specificity, and area under the ROC curve (AUC) of the proposed classification method were 95.0% (2544/2663), 96.7% (1210/1251), 93.9% (1334/1412), and 0.9831, respectively.
DISCUSSIONS We provide a detailed discussion of how our proposed system achieves considerable improvement over similar methods in previous research, as well as how it can be used to delineate malignant regions.
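The accuracy, sensitivity, and specificity figures above are simple ratios of confusion-matrix counts. A minimal sketch with toy counts (not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on the malignant class), and
    specificity (recall on the benign class) from confusion-matrix
    counts: tp/fp/tn/fn = true/false positives and negatives."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Toy counts: 100 malignant and 100 benign patches
acc, sens, spec = classification_metrics(tp=90, fp=15, tn=85, fn=10)
```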
Affiliation(s)
- Chiao-Min Chen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Pei-Wei Fang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- Cher-Wei Liang
- Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City, Taiwan
- School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City, Taiwan
- Graduate Institute of Pathology, College of Medicine, National Taiwan University, Taipei, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
|
23
|
Tang X. The role of artificial intelligence in medical imaging research. BJR Open 2019; 2:20190031. [PMID: 33178962 PMCID: PMC7594889 DOI: 10.1259/bjro.20190031] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 10/01/2019] [Accepted: 11/13/2019] [Indexed: 12/22/2022] Open
Abstract
Without doubt, artificial intelligence (AI) is the most discussed topic today in medical imaging research, in both diagnostic and therapeutic applications. For diagnostic imaging alone, the number of publications on AI has increased from about 100-150 per year in 2007-2008 to 1000-1100 per year in 2017-2018. Researchers have applied AI to automatically recognize complex patterns in imaging data and provide quantitative assessments of radiographic characteristics. In radiation oncology, AI has been applied to the different image modalities used at different stages of treatment, e.g., tumor delineation and treatment assessment. Radiomics, the high-throughput extraction of a large number of image features from radiation images, is one of the most popular research topics today in medical imaging research. AI supplies the computational power to process massive numbers of medical images and can therefore uncover disease characteristics that escape the naked eye. The objectives of this paper are to review the history of AI in medical imaging research, its current role, the challenges that need to be resolved before AI can be adopted widely in the clinic, and its potential future.
|
24
|
Schick U, Lucia F, Dissaux G, Visvikis D, Badic B, Masson I, Pradier O, Bourbonne V, Hatt M. MRI-derived radiomics: methodology and clinical applications in the field of pelvic oncology. Br J Radiol 2019; 92:20190105. [PMID: 31538516 DOI: 10.1259/bjr.20190105] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
Personalized medicine aims at offering optimized treatment options and improved survival for cancer patients based on individual variability. The success of precision medicine depends on robust biomarkers. Recently, the requirement for improved non-biologic biomarkers that reflect tumor biology has emerged, and there has been growing interest in the automatic extraction of quantitative features from medical images, denoted radiomics. Radiomics as a methodological approach can be applied to any image, and most studies have focused on PET, CT, ultrasound, and MRI. Here, we present an overview of the radiomics workflow and its major challenges, with special emphasis on the use of multiparametric MRI datasets. We then review recent studies on radiomics in the field of pelvic oncology, including prostate, cervical, and colorectal cancer.
Affiliation(s)
- Ulrike Schick
- Radiation Oncology Department, University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Faculté de Médecine et des Sciences de la Santé, Université de Bretagne Occidentale, Brest, France
- François Lucia
- Radiation Oncology Department, University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Gurvan Dissaux
- Radiation Oncology Department, University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Faculté de Médecine et des Sciences de la Santé, Université de Bretagne Occidentale, Brest, France
- Dimitris Visvikis
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Bogdan Badic
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Department of General and Digestive Surgery, University Hospital, Brest, France
- Ingrid Masson
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Olivier Pradier
- Radiation Oncology Department, University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Faculté de Médecine et des Sciences de la Santé, Université de Bretagne Occidentale, Brest, France
- Vincent Bourbonne
- Radiation Oncology Department, University Hospital, Brest, France
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, ISBAM, UBO, UBL, Brest, France
|
25
|
Cheng R, Alexandridi NA, Smith RM, Shen A, Gandler W, McCreedy E, McAuliffe MJ, Sheehan FT. Fully automated patellofemoral MRI segmentation using holistically nested networks: Implications for evaluating patellofemoral osteoarthritis, pain, injury, pathology, and adolescent development. Magn Reson Med 2019; 83:139-153. [PMID: 31402520 DOI: 10.1002/mrm.27920] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 07/05/2019] [Accepted: 07/06/2019] [Indexed: 12/26/2022]
Abstract
PURPOSE Our clinical understanding of the relationship between 3D bone morphology and knee osteoarthritis, as well as our ability to investigate potential causative factors of osteoarthritis, has been hampered by the time-intensive nature of manually segmenting bone from MR images. Thus, we aim to develop and validate a fully automated deep learning framework for segmenting the patella and distal femur cortex, in both adults and actively growing adolescents. METHODS Data from 93 subjects, obtained from an institutional review board-approved protocol, formed the study database. 3D sagittal gradient recalled echo and gradient recalled echo with fat saturation images and manual models of the outer cortex were available for 86 femurs and 90 patellae. A deep-learning-based 2D holistically nested network (HNN) architecture was developed to automatically segment the patella and distal femur using both single-plane (sagittal, uniplanar) and three-cardinal-plane (triplanar) methodologies. Surface-to-surface distance errors and the Dice coefficient were the primary measures used to quantitatively evaluate segmentation accuracy using a 9-fold cross-validation. RESULTS Average absolute errors for segmenting both the patella and femur were 0.33 mm. The Dice coefficients were 97% and 94% for the femur and patella, respectively. The uniplanar methodology produced slightly superior segmentation relative to the triplanar one. Neither the presence of active growth plates nor pathology influenced segmentation accuracy. CONCLUSION The proposed HNN with multi-feature architecture provides a fully automatic technique capable of delineating the often indistinct interfaces between bone and other joint structures, with an accuracy better than nearly all previously presented techniques, even when active growth plates are present.
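The 9-fold cross-validation above is split at the subject level, so that all images of one subject stay on the same side of the split. A small sketch of such a split (hypothetical subject IDs, not the study's code):

```python
import random

def kfold_splits(subject_ids, k=9, seed=0):
    """Partition subjects into k folds and yield (train, test) pairs.

    Splitting at the subject level, rather than the slice level, keeps
    near-duplicate neighboring slices of a subject from leaking between
    the training and test sets, which would inflate accuracy estimates.
    """
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::k] for i in range(k)]
    for i in range(k):
        test = set(folds[i])
        train = [s for s in ids if s not in test]
        yield train, sorted(test)

subjects = [f"subj_{i:02d}" for i in range(93)]  # 93 subjects, as above
splits = list(kfold_splits(subjects))            # 9 disjoint test folds
```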
Affiliation(s)
- Ruida Cheng
- Biomedical Imaging Research Services Section (BIRSS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, Maryland
- Natalia A Alexandridi
- Functional and Applied Biomechanics, Department of Rehabilitation Medicine, NIH, Bethesda, Maryland
- Richard M Smith
- Functional and Applied Biomechanics, Department of Rehabilitation Medicine, NIH, Bethesda, Maryland
- Aricia Shen
- Functional and Applied Biomechanics, Department of Rehabilitation Medicine, NIH, Bethesda, Maryland
- University of California Irvine School of Medicine, Irvine, California
- William Gandler
- Biomedical Imaging Research Services Section (BIRSS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, Maryland
- Evan McCreedy
- Biomedical Imaging Research Services Section (BIRSS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, Maryland
- Matthew J McAuliffe
- Biomedical Imaging Research Services Section (BIRSS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, Maryland
- Frances T Sheehan
- Functional and Applied Biomechanics, Department of Rehabilitation Medicine, NIH, Bethesda, Maryland
26
Cheng R, Lay N, Roth HR, Turkbey B, Jin D, Gandler W, McCreedy ES, Pohida T, Pinto P, Choyke P, McAuliffe MJ, Summers RM. Fully automated prostate whole gland and central gland segmentation on MRI using holistically nested networks with short connections. J Med Imaging (Bellingham) 2019; 6:024007. [PMID: 31205977 DOI: 10.1117/1.jmi.6.2.024007] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 05/15/2019] [Indexed: 11/14/2022] Open
Abstract
Accurate and automated prostate whole gland and central gland segmentations on MR images are essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D based segmentation of the prostate from each 2-D slice. Those methods suffer from over-segmentation or under-segmentation of the prostate at the apex and base, which is a major contributor to errors. The proposed method leverages the orthogonal context to effectively reduce the apex and base segmentation ambiguities. It also avoids the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches, such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a Dice similarity coefficient (DSC) of 92.4% ± 3% for the prostate and 90.1% ± 4.6% for the central gland, without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections method for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
Affiliation(s)
- Ruida Cheng
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Nathan Lay
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Holger R Roth
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Baris Turkbey
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Dakai Jin
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- William Gandler
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Evan S McCreedy
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Tom Pohida
- National Institutes of Health, Center for Information Technology, Computational Bioscience and Engineering Laboratory, Bethesda, Maryland, United States
- Peter Pinto
- National Cancer Institute, Center for Cancer Research, Urologic Oncology Branch, Bethesda, Maryland, United States
- Peter Choyke
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Matthew J McAuliffe
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Ronald M Summers
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
27
Zabihollahy F, Schieda N, Krishna Jeyaraj S, Ukwatta E. Automated segmentation of prostate zonal anatomy on T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images using U-Nets. Med Phys 2019; 46:3078-3090. [PMID: 31002381 DOI: 10.1002/mp.13550] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 04/07/2019] [Accepted: 04/08/2019] [Indexed: 01/21/2023] Open
Abstract
PURPOSE Accurate regional segmentation of the prostate boundaries on magnetic resonance (MR) images is a fundamental requirement before automated prostate cancer diagnosis can be achieved. In this paper, we describe a novel methodology to segment the prostate whole gland (WG), central gland (CG), and peripheral zone (PZ), where PZ + CG = WG, from T2W and apparent diffusion coefficient (ADC) map prostate MR images. METHODS We designed two similar models, each made up of two U-Nets, to delineate the WG, CG, and PZ from T2W and ADC map MR images separately. The U-Net, a modified version of a fully convolutional neural network, includes contracting and expanding paths with convolutional, pooling, and upsampling layers. Pooling and upsampling layers help to capture and localize image features with high spatial consistency. We used a dataset consisting of 225 patients (153 with and 72 without clinically significant prostate cancer) imaged with multiparametric MRI at 3 Tesla. RESULTS AND CONCLUSION Our proposed model for prostate zonal segmentation from T2W images was trained and tested using 1154 and 1587 slices of 100 and 125 patients, respectively. Median Dice similarity coefficients (DSC) on the test dataset for the prostate WG, CG, and PZ were 95.33 ± 7.77%, 93.75 ± 8.91%, and 86.78 ± 3.72%, respectively. The model designed for regional prostate delineation from ADC map images was trained and validated using 812 and 917 slices from 100 and 125 patients. This model yielded median DSCs of 92.09 ± 8.89%, 89.89 ± 10.69%, and 86.1 ± 9.56% for the prostate WG, CG, and PZ on test samples, respectively. Further investigation indicated that the proposed algorithm reported high DSC for prostate WG segmentation from both T2W and ADC map MR images irrespective of WG size. In addition, segmentation accuracy in terms of DSC does not vary significantly among patients with or without significant tumors.
SIGNIFICANCE We describe a method for automated prostate zonal segmentation using T2W and ADC map MR images that is independent of prostate size and the presence or absence of tumor. Our results are important from a clinical perspective, as fully automated methods for ADC map images, which are considered among the most important sequences for prostate cancer detection in the PZ and CG, have not been reported previously.
Affiliation(s)
- Fatemeh Zabihollahy
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada
- Nicola Schieda
- Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Eranga Ukwatta
- School of Engineering, University of Guelph, Guelph, ON, Canada
28
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (ie, radiomics, dosimetric), can depend on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas-based and hybrid techniques (third generation) being considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep-learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and quality assurance) of auto-segmentation software are provided.
29
What the radiologist should know about artificial intelligence - an ESR white paper. Insights Imaging 2019; 10:44. [PMID: 30949865 PMCID: PMC6449411 DOI: 10.1186/s13244-019-0738-2] [Citation(s) in RCA: 171] [Impact Index Per Article: 34.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2019] [Accepted: 03/20/2019] [Indexed: 02/08/2023] Open
Abstract
This paper aims to provide a review of the basis for the application of AI in radiology, to discuss its immediate ethical and professional impact in radiology, and to consider possible future evolution. Even if AI does add significant value to image interpretation, there are implications outside the traditional radiology activities of lesion detection and characterisation. In radiomics, AI can foster the analysis of features and help in the correlation with other omics data. Imaging biobanks would become a necessary infrastructure to organise and share the image data from which AI models can be trained. AI can be used as an optimising tool to assist the technologist and radiologist in choosing a personalised patient protocol, tracking the patient's dose parameters, and providing an estimate of the radiation risks. AI can also aid the reporting workflow and help link words, images, and quantitative data. Finally, AI coupled with clinical decision support (CDS) can improve the decision process and thereby optimise clinical and radiological workflow.
30
Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10951:109512I. [PMID: 32528212 PMCID: PMC7289512 DOI: 10.1117/12.2512282] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that could be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between the prostate surface points to model the prostate shape variation based on a statistical point distribution modeling. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training, and 14 images for testing the algorithm performance. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC). The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm illustrated a fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
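The surface-distance metrics reported above, mean absolute distance (MAD) and Hausdorff distance (HDist), can be sketched for two surface point clouds. This is an illustrative NumPy version using the usual symmetric definitions, not the authors' implementation:

```python
import numpy as np

def pairwise_dists(a, b):
    # (n, m) matrix of Euclidean distances between point sets a (n,3) and b (m,3)
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def mad_and_hausdorff(a, b):
    """Symmetric mean absolute distance and Hausdorff distance (mm)."""
    d = pairwise_dists(a, b)
    a_to_b = d.min(axis=1)  # nearest-neighbour distance from each point of a to b
    b_to_a = d.min(axis=0)
    mad = 0.5 * (a_to_b.mean() + b_to_a.mean())
    hdist = max(a_to_b.max(), b_to_a.max())
    return mad, hdist

# Two unit segments offset by 1 mm along z
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]])
mad, hd = mad_and_hausdorff(a, b)
print(mad, hd)  # 1.0 1.0
```

MAD summarizes the average boundary error, while the Hausdorff distance captures the single worst local disagreement, which is why the reported HDist (8.5 mm) is much larger than the MAD (1.4 mm).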
Affiliation(s)
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu
- State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma
- Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
31
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 717] [Impact Index Per Article: 119.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with, and perhaps contributing to, the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
- Arvid Lundervold
- Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway
32
Tang Z, Wang M, Song Z. Rotationally resliced 3D prostate segmentation of MR images using Bhattacharyya similarity and active band theory. Phys Med 2018; 54:56-65. [PMID: 30337011 DOI: 10.1016/j.ejmp.2018.09.005] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Revised: 09/16/2018] [Accepted: 09/18/2018] [Indexed: 11/24/2022] Open
Abstract
PURPOSE In this article, we propose a novel, semi-automatic segmentation method to process 3D MR images of the prostate using the Bhattacharyya coefficient and active band theory, with the goal of providing technical support for computer-aided diagnosis and surgery of the prostate. METHODS Our method consecutively segments a stack of rotationally resliced 2D slices of a prostate MR image by assessing the similarity of the shape and intensity distribution in neighboring slices. 2D segmentation is first performed on an initial slice by manually selecting several points on the prostate boundary, after which the segmentation results are propagated consecutively to neighboring slices. A framework of iterative graph cuts is used to optimize the energy function, which contains a global term for the Bhattacharyya coefficient with the help of an auxiliary function. Our method does not require previously segmented data for training or for building statistical models, and manual intervention can be applied flexibly and intuitively, indicating the potential utility of this method in the clinic. RESULTS We tested our method on 3D T2-weighted MR images of 129 patients from the ISBI and PROMISE12 datasets, and the Dice similarity coefficients were 90.34 ± 2.21% and 89.32 ± 3.08%, respectively. A comparison was performed with several state-of-the-art methods, and the results demonstrate that the proposed method is robust and accurate, achieving similar or higher accuracy than other methods without requiring training. CONCLUSION The proposed algorithm for segmenting 3D MR images of the prostate is accurate, robust, and readily applicable to a clinical environment for computer-aided surgery or diagnosis.
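The Bhattacharyya coefficient at the core of the energy function measures the overlap of two intensity distributions: it is 1 for identical distributions and 0 for distributions with disjoint support. A minimal sketch on normalized histograms (illustrative only; the toy histogram values are made up, and the paper embeds this term in a graph-cut energy rather than computing it in isolation):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap of two discrete distributions: sum_i sqrt(p_i * q_i)."""
    p = np.asarray(p, dtype=float); p = p / p.sum()  # normalize to a distribution
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy intensity histograms from two neighboring slices (hypothetical values)
h1 = [10, 30, 40, 20]
h2 = [12, 28, 38, 22]
print(bhattacharyya_coefficient(h1, h2))  # close to 1.0: similar distributions
```

Maximizing this coefficient between neighboring slices rewards segmentations whose interior intensity distributions stay consistent as the reslicing rotates around the prostate.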
Affiliation(s)
- Zhixian Tang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China