1. Islam R, Imran A, Rabbi MF. Prostate Cancer Detection from MRI Using Efficient Feature Extraction with Transfer Learning. Prostate Cancer 2024;2024:1588891. PMID: 38783970; PMCID: PMC11115994; DOI: 10.1155/2024/1588891.
Abstract
Prostate cancer is a common cancer with significant implications for global health, and prompt, precise identification is crucial for effective treatment planning and better patient outcomes. This study investigates machine learning techniques for diagnosing prostate cancer, using deep learning models (VGG16, VGG19, ResNet50, and ResNet50V2) to extract relevant features, which a random forest classifier then uses for classification. The study begins with a thorough comparison of the above architectures to evaluate their effectiveness at extracting significant characteristics from prostate cancer imaging data, assessed with metrics such as sensitivity, specificity, and accuracy. With an accuracy of 99.64%, ResNet50 outperformed the other tested models at identifying important features in prostate cancer images. An analysis of interpretability factors aims to offer insight into the decision-making process, addressing a critical barrier to acceptance in clinical practice. The extracted features are then fed to a random forest classifier, a powerful ensemble learning method known for its adaptability and ability to handle intricate datasets, which identifies patterns in the feature space and produces precise predictions of the presence or absence of prostate cancer. The study also tackles the restricted availability of datasets by using transfer learning to refine the deep learning models on a small amount of annotated prostate cancer data, aiming to improve generalization across patient populations and clinical situations.
These results are useful because they show how well VGG16, VGG19, ResNet50, and ResNet50V2 extract features for prostate cancer diagnosis when paired with random forest classification. They provide a basis for reliable, interpretable machine learning-based diagnostic tools for prostate cancer detection, improving the prospects for early and precise diagnosis in clinical settings. Index terms: deep learning, machine learning, prostate cancer, cancer identification, cancer classification.
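The two-stage pipeline this abstract describes (a pretrained CNN as feature extractor, a random forest as classifier) can be sketched as follows. This is a minimal illustration, not the authors' code: the random vectors stand in for pooled ResNet50 embeddings, which in a real pipeline would come from a frozen, ImageNet-pretrained backbone applied to the MRI slices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for 2048-dim pooled ResNet50 embeddings of MRI slices.
# A class-dependent shift is added so the two classes are separable;
# in practice these vectors come from the frozen pretrained backbone.
n, dim = 200, 2048
labels = rng.integers(0, 2, size=n)                 # 0 = benign, 1 = cancer
features = rng.normal(size=(n, dim)) + labels[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=0)

# Stage 2: ensemble classifier on the extracted features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Swapping the random stand-ins for real embeddings leaves the second stage unchanged, which is the appeal of this decoupled design.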
Affiliation(s)
- Rafiqul Islam
- Department of IoT and Robotics Engineering, Bangabandhu Sheikh Mujibur Rahman Digital University, Gazipur, Bangladesh
- Al Imran
- Department of Computer Science and Engineering, Green University of Bangladesh, Dhaka, Bangladesh
- Md. Fazle Rabbi
- Department of Computer Science and Engineering, Green University of Bangladesh, Dhaka, Bangladesh
2. Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024;16:1809. PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis, focusing on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted based on inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL for MR-based purposes, datasets acquired at magnetic field strengths of 3 T, 1.5 T, and 3/1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively. Six of the 7 studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies analyzing DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. Three studies analyzing DL for prostate biopsy were performed with single- and multi-institutional datasets; TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging all the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
3. Jin L, Yu Z, Gao F, Li M. T2-weighted imaging-based deep-learning method for noninvasive prostate cancer detection and Gleason grade prediction: a multicenter study. Insights Imaging 2024;15:111. PMID: 38713377; PMCID: PMC11076444; DOI: 10.1186/s13244-024-01682-z.
Abstract
OBJECTIVES To noninvasively detect prostate cancer and predict the Gleason grade using single-modality T2-weighted imaging with a deep-learning approach. METHODS Patients with prostate cancer, confirmed by histopathology, who underwent magnetic resonance imaging examinations at our hospital between September 2015 and June 2022 were retrospectively included in an internal dataset. An external dataset from another medical center and a public challenge dataset were used for external validation. A deep-learning approach was designed for prostate cancer detection and Gleason grade prediction, and the area under the curve (AUC) was calculated to compare model performance. RESULTS For prostate cancer detection, the internal dataset comprised data from 195 healthy individuals (age: 57.27 ± 14.45 years) and 302 patients (age: 72.20 ± 8.34 years) diagnosed with prostate cancer. The AUC of our model for prostate cancer detection in the validation set (n = 96, 19.7%) was 0.918. For Gleason grade prediction, datasets comprising data from 283 of the 302 patients with prostate cancer were used, with 227 (age: 72.06 ± 7.98 years) and 56 (age: 72.78 ± 9.49 years) patients used for training and testing, respectively. The external and public challenge datasets comprised data from 48 patients (age: 72.19 ± 7.81 years) and 91 patients (age unavailable), respectively. The AUC of our model for Gleason grade prediction in the training set (n = 227) was 0.902, whereas those of the validation (n = 56), external validation (n = 48), and public challenge validation sets (n = 91) were 0.854, 0.776, and 0.838, respectively. CONCLUSION Through multicenter dataset validation, our proposed deep-learning method detected prostate cancer and predicted the Gleason grade better than human experts. CRITICAL RELEVANCE STATEMENT Precise prostate cancer detection and Gleason grade prediction have great significance for clinical treatment and decision making.
KEY POINTS Prostate segmentation is easier to annotate than prostate cancer lesions for radiologists. Our deep-learning method detected prostate cancer and predicted the Gleason grade, outperforming human experts. Non-invasive Gleason grade prediction can reduce the number of unnecessary biopsies.
Affiliation(s)
- Liang Jin
- Radiology Department, Huashan Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Zhuo Yu
- School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China
- Feng Gao
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Ming Li
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Institute of Functional and Molecular Medical Imaging, Shanghai, 200040, China
4. Zeevi T, Leapman MS, Sprenkle PC, Venkataraman R, Staib LH, Onofrey JA. Reliable Prostate Cancer Risk Mapping From MRI Using Targeted and Systematic Core Needle Biopsy Histopathology. IEEE Trans Biomed Eng 2024;71:1084-1091. PMID: 37874731; PMCID: PMC10901528; DOI: 10.1109/tbme.2023.3326799.
Abstract
OBJECTIVE To compute a dense prostate cancer risk map for the individual patient post-biopsy from magnetic resonance imaging (MRI) and to provide a more reliable evaluation of its fitness in prostate regions that were not identified as suspicious for cancer by a human reader in pre- and intra-biopsy imaging analysis. METHODS Low-level pre-biopsy MRI biomarkers from targeted and non-targeted biopsy locations were extracted and statistically tested for representativeness against biomarkers from non-biopsied prostate regions. A probabilistic machine learning classifier was optimized to map biomarkers to their core-level pathology, followed by extrapolation of pathology scores to non-biopsied prostate regions. Goodness-of-fit was assessed at targeted and non-targeted biopsy locations for the post-biopsy individual patient. RESULTS Our experiments showed high predictability of imaging biomarkers in differentiating histopathology scores in thousands of non-targeted core-biopsy locations (ROC-AUCs: 0.85-0.88), but also high variability between patients (median ROC-AUC [IQR]: 0.81-0.89 [0.29-0.40]). CONCLUSION The sparseness of prostate biopsy data makes validating a whole-gland risk mapping a non-trivial task. Previous studies (i) focused on targeted-biopsy locations, although biopsy specimens drawn from systematically scattered locations across the prostate constitute a more representative sample of non-biopsied regions, and (ii) estimated prediction power across predicted instances (e.g., biopsy specimens) with no patient distinction, which may lead to unreliable estimation of model fitness to the individual patient due to variation between patients in instance count, imaging characteristics, and pathologies. SIGNIFICANCE This study proposes a personalized whole-gland prostate cancer risk mapping post-biopsy to allow clinicians to better stage and personalize focal therapy treatment plans.
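The per-patient evaluation this abstract argues for (a median ROC-AUC with IQR across patients, rather than one pooled AUC over all biopsy cores) can be sketched as follows. The per-core labels and scores here are random stand-ins, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical per-core pathology labels and model scores, grouped by patient.
# A single pooled AUC would hide how much performance varies across patients.
patients = {f"p{i}": (rng.integers(0, 2, 12), rng.random(12)) for i in range(30)}

aucs = []
for labels, scores in patients.values():
    if len(set(labels)) < 2:        # AUC is undefined for single-class patients
        continue
    aucs.append(roc_auc_score(labels, scores))

median_auc = float(np.median(aucs))
q1, q3 = np.percentile(aucs, [25, 75])
iqr = float(q3 - q1)
```

A wide IQR with a high median is exactly the pattern the paper reports: good average fitness, unreliable fitness for some individual patients.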
5. Mahbub T, Obeid A, Javed S, Dias J, Hassan T, Werghi N. Center-Focused Affinity Loss for Class Imbalance Histology Image Classification. IEEE J Biomed Health Inform 2024;28:952-963. PMID: 37999960; DOI: 10.1109/jbhi.2023.3336372.
Abstract
Early-stage cancer diagnosis potentially improves the chances of survival for many cancer patients worldwide. Manual examination of Whole Slide Images (WSIs) to analyze the tumor microenvironment is a time-consuming task. To overcome this limitation, the conjunction of deep learning with computational pathology has been proposed to assist pathologists in efficiently prognosing cancerous spread. Nevertheless, existing deep learning methods are ill-equipped to handle fine-grained histopathology datasets, because these models are constrained by the conventional softmax loss function, which does not drive them to learn distinct representational embeddings of similarly textured WSIs under an imbalanced data distribution. To address this problem, we propose a novel center-focused affinity loss (CFAL) function that 1) constructs uniformly distributed class prototypes in the feature space, 2) penalizes difficult samples, 3) minimizes intra-class variations, and 4) places greater emphasis on learning minority-class features. We evaluated the performance of the proposed CFAL loss function on two publicly available breast and colon cancer datasets with varying levels of class imbalance. The proposed CFAL function shows better discrimination ability than popular loss functions such as ArcFace, CosFace, and Focal loss. Moreover, it outperforms several SOTA methods for histology image classification across both datasets.
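Of the four ingredients listed, the intra-class-variation term (point 3) is the easiest to make concrete. The sketch below implements the classic center-loss idea of pulling embeddings toward their class centers; it is not the authors' full CFAL, which additionally spaces class prototypes uniformly and re-weights difficult and minority-class samples.

```python
import numpy as np

def center_pull_loss(features, labels, centers):
    """Mean squared distance of each embedding to its class center.

    This is only the intra-class-variation term that losses like CFAL
    build on: embeddings of the same class are pulled together, which
    tightens clusters that a plain softmax loss leaves diffuse.
    """
    diffs = features - centers[labels]          # gather each row's class center
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

rng = np.random.default_rng(2)
feats = rng.normal(size=(8, 4))                 # toy 4-dim embeddings
labs = np.array([0, 0, 1, 1, 0, 1, 0, 1])
cents = np.stack([feats[labs == c].mean(axis=0) for c in (0, 1)])
loss = center_pull_loss(feats, labs, cents)
```

In training, this term is added to a classification loss and the centers are updated alongside the network weights.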
6. Haq I, Mazhar T, Asif RN, Ghadi YY, Ullah N, Khan MA, Al-Rasheed A. YOLO and residual network for colorectal cancer cell detection and counting. Heliyon 2024;10:e24403. PMID: 38304780; PMCID: PMC10831604; DOI: 10.1016/j.heliyon.2024.e24403.
Abstract
The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers are introducing new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer (HT-29) cells. The colorectal cancer cell line was procured commercially; the cells were then cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting in medical images is performed by integrating the YOLOv2, ResNet-50, and ResNet-18 architectures. The accuracy achieved by ResNet-18 is 98.70% and by ResNet-50 is 96.66%. The study achieves its primary objective by focusing on detecting and quantifying congested and overlapping colorectal cancer cells within the images. This work constitutes a significant development in overlapping cancer cell detection and counting, paving the way for novel advancements and opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize object detection performance; further investigation into real-time deployment strategies will enhance the practical applicability of these models.
Affiliation(s)
- Inayatul Haq
- School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Tehseen Mazhar
- Department of Computer Science, Virtual University of Pakistan, Lahore, 55150, Pakistan
- Rizwana Naz Asif
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan
- Yazeed Yasin Ghadi
- Department of Computer Science and Software Engineering, Al Ain University, Abu Dhabi, 12555, United Arab Emirates
- Najib Ullah
- Faculty of Pharmacy and Health Sciences, Department of Pharmacy, University of Balochistan, Quetta, 08770, Pakistan
- Muhammad Amir Khan
- School of Computing Sciences, College of Computing, Informatics and Mathematics, Universiti Teknologi MARA, 40450, Shah Alam, Selangor, Malaysia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
7. Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024;181:105279. PMID: 37977054; DOI: 10.1016/j.ijmedinf.2023.105279.
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer can provide effective treatment for patients and greatly reduce mortality. The main medical imaging tools for screening prostate cancer are MRI, CT, and ultrasound. Over the past 20 years, these imaging methods have made great progress with machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected, through search engines such as Web of Science, PubMed, and Google Scholar, studies of medical image processing methods for the prostate and prostate cancer on MR, CT, and ultrasound images, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate gland between images of different modalities, and detection of prostate cancer lesions in the prostate. CONCLUSION The collated papers show that current research on the diagnosis and staging of prostate cancer using machine learning and deep learning is in its infancy. Most existing studies address the diagnosis of prostate cancer and classification of lesions, with accuracy still limited: the best results achieve an accuracy below 0.95. Studies on staging are fewer. Research is focused mainly on MR images and much less on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
8. Kovacs B, Netzer N, Baumgartner M, Schrader A, Isensee F, Weißer C, Wolf I, Görtz M, Jaeger PF, Schütz V, Floca R, Gnirs R, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D, Maier-Hein KH. Addressing image misalignments in multi-parametric prostate MRI for enhanced computer-aided diagnosis of prostate cancer. Sci Rep 2023;13:19805. PMID: 37957250; PMCID: PMC10643562; DOI: 10.1038/s41598-023-46747-z.
Abstract
Prostate cancer (PCa) diagnosis on multi-parametric magnetic resonance images (MRI) requires radiologists with a high level of expertise. Misalignments between the MRI sequences can be caused by patient movement, elastic soft-tissue deformation, and imaging artifacts, and they further increase the complexity of the interpretation task for radiologists. Recently, computer-aided diagnosis (CAD) tools have demonstrated potential for PCa diagnosis, typically relying on complex co-registration of the input modalities. However, there is no consensus among research groups on whether CAD systems profit from registration, and alternative strategies for handling multi-modal misalignments have not been explored so far. Our study introduces and compares different strategies to cope with image misalignments and evaluates them with regard to their direct effect on the diagnostic accuracy of PCa. In addition to established registration algorithms, we propose 'misalignment augmentation' as a concept to increase CAD robustness. As the results demonstrate, misalignment augmentation can not only compensate for a complete lack of registration but, used in conjunction with registration, can also improve overall performance on an independent test set.
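The core of the misalignment-augmentation idea is to perturb each MRI sequence independently during training so the detector learns to tolerate residual inter-sequence misregistration. A minimal sketch, under the assumption of 2-D slices and integer shifts (a real pipeline would use interpolated sub-voxel translations, rotations, and deformations on 3-D volumes):

```python
import numpy as np

rng = np.random.default_rng(3)

def misalignment_augmentation(sequences, max_shift=3):
    """Translate each MRI sequence independently by a small random offset.

    Each modality (e.g. T2w, ADC) gets its own shift, deliberately breaking
    their alignment; integer np.roll shifts keep the example dependency-free.
    """
    out = {}
    for name, img in sequences.items():
        offsets = rng.integers(-max_shift, max_shift + 1, size=img.ndim)
        out[name] = np.roll(img, tuple(offsets), axis=tuple(range(img.ndim)))
    return out

# Hypothetical 64x64 slices standing in for co-registered MRI sequences.
seqs = {"t2w": rng.random((64, 64)), "adc": rng.random((64, 64))}
augmented = misalignment_augmentation(seqs)
```

Applied on the fly to each training batch, this exposes the model to many plausible misalignments of the same case instead of a single fixed registration.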
Affiliation(s)
- Balint Kovacs
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Nils Netzer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Michael Baumgartner
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Adrian Schrader
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Cedric Weißer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Ivo Wolf
- Mannheim University of Applied Sciences, Mannheim, Germany
- Magdalena Görtz
- Junior Clinical Cooperation Unit 'Multiparametric Methods for Early Detection of Prostate Cancer', German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Paul F Jaeger
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Victoria Schütz
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Ralf Floca
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Regula Gnirs
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Albrecht Stenzinger
- Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner
- Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- David Bonekamp
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Im Neuenheimer Feld 223, 69120, Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), DKFZ, Core Center Heidelberg, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
9. Mehmood M, Abbasi SH, Aurangzeb K, Majeed MF, Anwar MS, Alhussein M. A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI. Front Oncol 2023;13:1225490. PMID: 38023149; PMCID: PMC10666634; DOI: 10.3389/fonc.2023.1225490.
Abstract
Prostate cancer (PCa) is a major global concern, particularly for men, emphasizing the urgency of early detection to reduce mortality. As PCa is the second leading cause of cancer-related male deaths worldwide, precise and efficient diagnostic methods are crucial. Given the high resolution and multiparametric nature of MRI in PCa, computer-aided diagnostic (CAD) methods have emerged to assist radiologists in identifying anomalies. The rapid advancement of medical technology has, moreover, led to the adoption of deep learning methods, which enhance diagnostic efficiency, reduce observer variability, and consistently outperform traditional approaches. Distinguishing aggressive from non-aggressive cancer under resource constraints remains a significant problem in PCa treatment. This study aims to identify PCa from MRI images by combining deep learning and transfer learning (TL). Researchers have explored numerous CNN-based deep learning methods for classifying MRI images related to PCa; here, we have developed an approach for PCa classification using transfer learning on a limited number of images to achieve high performance and help radiologists identify PCa instantly. The proposed methodology adopts the EfficientNet architecture, pre-trained on the ImageNet dataset, and incorporates three branches for feature extraction from different MRI sequences. The extracted features are then combined, significantly enhancing the model's ability to distinguish MRI images accurately. Our model demonstrated remarkable results in classifying prostate cancer, achieving an accuracy of 88.89%. Furthermore, comparative results indicate that our approach achieves higher accuracy than both traditional hand-crafted feature techniques and existing deep learning techniques in PCa classification. The proposed methodology can learn more distinctive features in prostate images and correctly identify cancer.
Affiliation(s)
- Mubashar Mehmood
- Department of Computer Science, COMSATS Institute of Information Technology, Islamabad, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
10. Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023;165:110887. PMID: 37245342; DOI: 10.1016/j.ejrad.2023.110887.
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences, and interobserver variability. While efforts have been made to standardize image acquisition and interpretation through the development of systems such as PI-RADS and PI-QUAL, these scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite this potential, thorough validation is required before AI can be implemented in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
- Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
- Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
- Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
Collapse
|
11
|
Hong S, Kim SH, Yoo B, Kim JY. Deep Learning Algorithm for Tumor Segmentation and Discrimination of Clinically Significant Cancer in Patients with Prostate Cancer. Curr Oncol 2023; 30:7275-7285. [PMID: 37623009 PMCID: PMC10453750 DOI: 10.3390/curroncol30080528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2023] [Revised: 07/05/2023] [Accepted: 07/25/2023] [Indexed: 08/26/2023] Open
Abstract
BACKGROUND We investigated the feasibility of a deep learning algorithm (DLA) based on apparent diffusion coefficient (ADC) maps for the segmentation and discrimination of clinically significant cancer (CSC, Gleason score ≥ 7) from non-CSC in patients with prostate cancer (PCa). METHODS Data from a total of 149 consecutive patients who had undergone 3T-MRI and been pathologically diagnosed with PCa were initially collected. The labelled data (148 images for GS6, 580 images for GS7) were applied for tumor segmentation using a convolutional neural network (CNN). For classification, 93 images for GS6 and 372 images for GS7 were used. For external validation, 22 consecutive patients from five different institutions (25 images for GS6, 70 images for GS7) representing different MR machines were recruited. RESULTS Regarding segmentation and classification, U-Net and DenseNet were used, respectively. The tumor Dice scores for internal and external validation were 0.822 and 0.7776, respectively. As for classification, the accuracies of internal and external validation were 73 and 75%, respectively. For external validation, diagnostic predictive values for CSC (sensitivity, specificity, positive predictive value and negative predictive value) were 84, 48, 82 and 52%, respectively. CONCLUSIONS Tumor segmentation and discrimination of CSC from non-CSC is feasible using a DLA developed based on ADC maps (b2000) alone.
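The Dice score used for segmentation and the four diagnostic predictive values reported for external validation reduce to simple arithmetic on mask overlap and a 2x2 confusion matrix. A minimal sketch with toy masks and hypothetical counts (not the study's data):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def diagnostic_values(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

pred  = [1, 1, 0, 0, 1, 0]   # toy predicted mask
truth = [1, 0, 0, 0, 1, 1]   # toy ground-truth mask
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
print(diagnostic_values(tp=8, fp=2, fn=2, tn=8))
```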
Affiliation(s)
- Sujin Hong
  - Department of Radiology, Inje University, College of Medicine, Haeundae Paik Hospital, Busan 48108, Republic of Korea
- Seung Ho Kim
  - Department of Radiology, Inje University, College of Medicine, Haeundae Paik Hospital, Busan 48108, Republic of Korea
- Joo Yeon Kim
  - Department of Pathology, Inje University, College of Medicine, Haeundae Paik Hospital, Busan 48108, Republic of Korea

12
Karagoz A, Alis D, Seker ME, Zeybel G, Yergin M, Oksuz I, Karaarslan E. Anatomically guided self-adapting deep neural network for clinically significant prostate cancer detection on bi-parametric MRI: a multi-center study. Insights Imaging 2023; 14:110. [PMID: 37337101 DOI: 10.1186/s13244-023-01439-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 04/17/2023] [Indexed: 06/21/2023] Open
Abstract
OBJECTIVE To evaluate the effectiveness of a self-adapting deep network, trained on large-scale bi-parametric MRI data, in detecting clinically significant prostate cancer (csPCa) in external multi-center data from men of diverse demographics; to investigate the advantages of transfer learning. METHODS We used two samples: (i) Publicly available multi-center and multi-vendor Prostate Imaging: Cancer AI (PI-CAI) training data, consisting of 1500 bi-parametric MRI scans, along with its unseen validation and testing samples; (ii) In-house multi-center testing and transfer learning data, comprising 1036 and 200 bi-parametric MRI scans. We trained a self-adapting 3D nnU-Net model using probabilistic prostate masks on the PI-CAI data and evaluated its performance on the hidden validation and testing samples and the in-house data with and without transfer learning. We used the area under the receiver operating characteristic (AUROC) curve to evaluate patient-level performance in detecting csPCa. RESULTS The PI-CAI training data had 425 scans with csPCa, while the in-house testing and fine-tuning data had 288 and 50 scans with csPCa, respectively. The nnU-Net model achieved an AUROC of 0.888 and 0.889 on the hidden validation and testing data. The model performed with an AUROC of 0.886 on the in-house testing data, with a slight decrease in performance to 0.870 using transfer learning. CONCLUSIONS The state-of-the-art deep learning method using prostate masks trained on large-scale bi-parametric MRI data provides high performance in detecting csPCa in internal and external testing data with different characteristics, demonstrating the robustness and generalizability of deep learning within and across datasets. 
CLINICAL RELEVANCE STATEMENT A self-adapting deep network, utilizing prostate masks and trained on large-scale bi-parametric MRI data, is effective in accurately detecting clinically significant prostate cancer across diverse datasets, highlighting the potential of deep learning methods for improving prostate cancer detection in clinical practice.
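The patient-level AUROC used throughout this study can be read as the probability that a randomly chosen csPCa case receives a higher model score than a randomly chosen negative case (the Mann-Whitney interpretation). A minimal sketch with hypothetical scores:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of positive/negative pairs ranked correctly
    (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Patient-level csPCa scores (hypothetical): higher = more suspicious.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auroc(scores, labels))  # 8 of 9 pairs ranked correctly: 8/9 ≈ 0.889
```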
Affiliation(s)
- Ahmet Karagoz
  - Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
  - Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Deniz Alis
  - Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
  - Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mustafa Ege Seker
  - School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Gokberk Zeybel
  - School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey
- Mert Yergin
  - Artificial Intelligence and Information Technologies, Hevi AI Health, Istanbul, Turkey
- Ilkay Oksuz
  - Department of Computer Engineering, Istanbul Technical University, Istanbul, Turkey
- Ercan Karaarslan
  - Department of Radiology, School of Medicine, Acibadem Mehmet Ali Aydinlar University, Istanbul, Turkey

13
Alis D, Kartal MS, Seker ME, Guroz B, Basar Y, Arslan A, Sirolu S, Kurtcan S, Denizoglu N, Tuzun U, Yildirim D, Oksuz I, Karaarslan E. Deep learning for assessing image quality in bi-parametric prostate MRI: A feasibility study. Eur J Radiol 2023; 165:110924. [PMID: 37354768 DOI: 10.1016/j.ejrad.2023.110924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 05/15/2023] [Accepted: 06/09/2023] [Indexed: 06/26/2023]
Abstract
BACKGROUND Although systems such as Prostate Imaging Quality (PI-QUAL) have been proposed for quality assessment, visual evaluations by human readers remain somewhat inconsistent, particularly among less-experienced readers. OBJECTIVES To assess the feasibility of deep learning (DL) for the automated assessment of image quality in bi-parametric MRI scans and compare its performance to that of less-experienced readers. METHODS We used bi-parametric prostate MRI scans from the PI-CAI dataset in this study. A 3-point Likert scale, consisting of poor, moderate, and excellent, was utilized for assessing image quality. Three expert readers established the ground-truth labels for the development (500) and testing sets (100). We trained a 3D DL model on the development set using probabilistic prostate masks and an ordinal loss function. Four less-experienced readers scored the testing set for performance comparison. RESULTS The kappa scores between the DL model and the expert consensus for T2W images and ADC maps were 0.42 and 0.61, representing moderate and good levels of agreement. The kappa scores between the less-experienced readers and the expert consensus for T2W images and ADC maps ranged from 0.39 to 0.56 (fair to moderate) and from 0.39 to 0.62 (fair to good). CONCLUSIONS Deep learning (DL) can offer performance comparable to that of less-experienced readers when assessing image quality in bi-parametric prostate MRI, making it a viable option for an automated quality assessment tool. We suggest that DL models trained on more representative datasets, annotated by a larger group of experts, could yield reliable image quality assessment and potentially substitute or assist visual evaluations by human readers.
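The kappa scores quoted above measure chance-corrected agreement between the model (or a reader) and the expert consensus. A minimal sketch of plain, unweighted Cohen's kappa on hypothetical 3-point quality scores; for ordinal labels like these, the study may well have used a weighted variant:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical 3-point Likert quality scores (0=poor, 1=moderate, 2=excellent).
model  = [2, 2, 1, 0, 1, 2, 0, 1]
expert = [2, 1, 1, 0, 1, 2, 0, 2]
print(round(cohen_kappa(model, expert), 3))  # 0.619: "good" agreement
```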
Affiliation(s)
- Deniz Alis
  - Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, 34457, Turkey
- Mustafa Ege Seker
  - Acibadem Mehmet Ali Aydinlar University, School of Medicine, Istanbul, 34752, Turkey
- Batuhan Guroz
  - Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, 34457, Turkey
- Yeliz Basar
  - Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Aydan Arslan
  - Umraniye Training and Research Hospital, Department of Radiology, Istanbul, 34764, Turkey
- Sabri Sirolu
  - Istanbul Sisli Hamidiye Etfal Training and Research Hospital, Department of Radiology, Istanbul, 34396, Turkey
- Serpil Kurtcan
  - Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Nurper Denizoglu
  - Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Umit Tuzun
  - Neolife, Radiology Center, Istanbul, 34340, Turkey
- Duzgun Yildirim
  - Acibadem Mehmet Ali Aydinlar University, School of Vocational Sciences, Department of Radiology, Istanbul, 34457, Turkey
- Ilkay Oksuz
  - Istanbul Technical University, Department of Computer Engineering, Istanbul, 34467, Turkey
- Ercan Karaarslan
  - Cumhuriyet University, School of Medicine, Sivas, 581407, Turkey

14
Liu Y, Zhu Y, Xin Y, Zhang Y, Yang D, Xu T. MESTrans: Multi-scale embedding spatial transformer for medical image segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 233:107493. [PMID: 36965298 DOI: 10.1016/j.cmpb.2023.107493] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Revised: 03/15/2023] [Accepted: 03/16/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Transformers profiting from global information modeling derived from the self-attention mechanism have recently achieved remarkable performance in computer vision. In this study, a novel transformer-based medical image segmentation network called the multi-scale embedding spatial transformer (MESTrans) was proposed for medical image segmentation. METHODS First, a dataset called COVID-DS36 was created from 4369 computed tomography (CT) images of 36 patients from a partner hospital, of which 18 had COVID-19 and 18 did not. Subsequently, a novel medical image segmentation network was proposed, which introduced a self-attention mechanism to improve the inherent limitation of convolutional neural networks (CNNs) and was capable of adaptively extracting discriminative information in both global and local content. Specifically, based on U-Net, a multi-scale embedding block (MEB) and multi-layer spatial attention transformer (SATrans) structure were designed, which can dynamically adjust the receptive field in accordance with the input content. The spatial relationship between multi-level and multi-scale image patches was modeled, and the global context information was captured effectively. To make the network concentrate on the salient feature region, a feature fusion module (FFM) was established, which performed global learning and soft selection between shallow and deep features, adaptively combining the encoder and decoder features. Four datasets comprising CT images, magnetic resonance (MR) images, and H&E-stained slide images were used to assess the performance of the proposed network. RESULTS Experiments were performed using four different types of medical image datasets. For the COVID-DS36 dataset, our method achieved a Dice similarity coefficient (DSC) of 81.23%. For the GlaS dataset, 89.95% DSC and 82.39% intersection over union (IoU) were obtained. On the Synapse dataset, the average DSC was 77.48% and the average Hausdorff distance (HD) was 31.69 mm. For the I2CVB dataset, 92.3% DSC and 85.8% IoU were obtained. CONCLUSIONS The experimental results demonstrate that the proposed model has an excellent generalization ability and outperforms other state-of-the-art methods. It is expected to be a potent tool to assist clinicians in auxiliary diagnosis and to promote the development of medical intelligence technology.
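The self-attention mechanism at the core of the SATrans blocks lets every image patch attend to every other patch, which is how transformer-based networks capture the global context that plain convolutions miss. A minimal, dependency-free sketch of scaled dot-product self-attention on toy 2-d patch embeddings (illustrative only, not the MESTrans implementation):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: each query is answered by a
    softmax-weighted mix of all values, weighted by query-key similarity."""
    d = len(keys[0])
    out = []
    for q in queries:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(logits)                          # subtract max for stability
        exp = [math.exp(l - m) for l in logits]
        z = sum(exp)
        weights = [e / z for e in exp]
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three 2-d patch embeddings
y = attention(x, x, x)                      # self-attention: Q = K = V = x
print(y[0])  # output for patch 0 mixes all patches, weighted by similarity
```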
Affiliation(s)
- Yatong Liu
  - School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Yu Zhu
  - School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
  - Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai 200237, China
- Ying Xin
  - Department of Pulmonary and Critical Care Medicine, the Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Yanan Zhang
  - Department of Pulmonary and Critical Care Medicine, the Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China
- Dawei Yang
  - Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai 200237, China
  - Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Tao Xu
  - Department of Pulmonary and Critical Care Medicine, the Affiliated Hospital of Qingdao University, Qingdao, Shandong 266000, China

16
Zhong J, Staib LH, Venkataraman R, Onofrey JA. Integrating Prostate Specific Antigen Density Biomarker into Deep Learning Prostate MRI Lesion Segmentation Models. Proc IEEE Int Symp Biomed Imaging 2023; 2023:10.1109/isbi53787.2023.10230418. [PMID: 38090633 PMCID: PMC10711801 DOI: 10.1109/isbi53787.2023.10230418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/11/2024]
Abstract
Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolution neural networks have been widely utilized for lesion segmentation. However, these methods fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically-meaningful prostate specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space, and thus control the size of lesion prediction. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. Results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in both Dice coefficient and centroid distance metric.
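Feature-wise transformations of the kind described here are commonly implemented as FiLM-style layers: the scalar PSAD value is mapped to a per-channel scale (gamma) and shift (beta) applied to the latent feature map, so the biomarker can modulate the network's lesion-size prediction. A minimal sketch with hypothetical weights (in the paper the conditioning mapping is learned, not hand-set):

```python
def film(features, psad, w_gamma, b_gamma, w_beta, b_beta):
    """Feature-wise linear modulation: a scalar biomarker conditions
    each latent channel via out = gamma(psad) * feature + beta(psad)."""
    gamma = [w * psad + b for w, b in zip(w_gamma, b_gamma)]
    beta  = [w * psad + b for w, b in zip(w_beta, b_beta)]
    return [[g * f + bt for f in channel]
            for g, bt, channel in zip(gamma, beta, features)]

# Two latent channels, three spatial positions each (toy values).
features = [[0.5, 1.0, -0.5], [2.0, 0.0, 1.0]]
out = film(features, psad=0.15,
           w_gamma=[2.0, 1.0], b_gamma=[1.0, 1.0],
           w_beta=[0.5, -0.5], b_beta=[0.0, 0.0])
print(out)  # each channel rescaled and shifted according to PSAD
```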
Affiliation(s)
- Jiayang Zhong
  - Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Lawrence H Staib
  - Department of Biomedical Engineering, Yale University, New Haven, CT, USA
  - Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
  - Department of Electrical Engineering, Yale University, New Haven, CT, USA
- John A Onofrey
  - Department of Biomedical Engineering, Yale University, New Haven, CT, USA
  - Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
  - Department of Urology, Yale University, New Haven, CT, USA

17
Ramamurthy K, Varikuti AR, Gupta B, Aswani N. A deep learning network for Gleason grading of prostate biopsies using EfficientNet. BIOMED ENG-BIOMED TE 2022; 68:187-198. [PMID: 36332194 DOI: 10.1515/bmt-2022-0201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 10/23/2022] [Indexed: 11/06/2022]
Abstract
Objectives
The most crucial part in the diagnosis of cancer is severity grading. Gleason’s score is a widely used grading system for prostate cancer. Manual examination of the microscopic images and grading them is tiresome and consumes a lot of time. Hence to automate the Gleason grading process, a novel deep learning network is proposed in this work.
Methods
In this work, a deep learning network for Gleason grading of prostate cancer is proposed based on EfficientNet architecture. It applies a compound scaling method to balance the dimensions of the underlying network. Also, an additional attention branch is added to EfficientNet-B7 for precise feature weighting.
Results
To the best of our knowledge, this is the first work that integrates an additional attention branch with EfficientNet architecture for Gleason grading. The proposed models were trained using H&E-stained samples from prostate cancer Tissue Microarrays (TMAs) in the Harvard Dataverse dataset.
Conclusions
The proposed network outperformed the existing methods, achieving a Kappa score of 0.5775.
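EfficientNet's compound scaling, mentioned above, ties network depth, width, and input resolution to a single coefficient phi. A sketch using the base coefficients from the original EfficientNet paper (alpha = 1.2, beta = 1.1, gamma = 1.15, chosen so that alpha * beta^2 * gamma^2 is roughly 2, i.e. FLOPs roughly double per unit of phi):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    """EfficientNet compound scaling: depth, width and resolution
    grow together as alpha**phi, beta**phi and gamma**phi."""
    return alpha ** phi, beta ** phi, gamma ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```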
Affiliation(s)
- Karthik Ramamurthy
  - Centre for Cyber Physical Systems, School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Abinash Reddy Varikuti
  - School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Bhavya Gupta
  - School of Computer Science Engineering, Vellore Institute of Technology, Chennai, India
- Nehal Aswani
  - School of Electronics Engineering, Vellore Institute of Technology, Chennai, India

18
Liu Y, Zhu Y, Wang W, Zheng B, Qin X, Wang P. Multi-scale discriminative network for prostate cancer lesion segmentation in multiparametric MR images. Med Phys 2022; 49:7001-7015. [PMID: 35851482 DOI: 10.1002/mp.15861] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 06/30/2022] [Accepted: 07/03/2022] [Indexed: 01/01/2023] Open
Abstract
PURPOSE The accurate and reliable segmentation of prostate cancer (PCa) lesions using multiparametric magnetic resonance imaging (mpMRI) sequences is crucial to the image-guided intervention and treatment of prostate disease. For PCa lesion segmentation, it is essential to reliably combine local and global information to retain the features of small targets at multiple scales. Therefore, this study proposes a multi-scale segmentation network with a cascading pyramid convolution module (CPCM) and a double-input channel attention module (DCAM) for the automated and accurate segmentation of PCa lesions using mpMRI. METHODS First, the region of interest was extracted from the data by clipping to enlarge the target region and reduce the background noise interference. Next, four CPCMs with large convolution kernels in their skip connection paths were designed to improve the feature extraction capability of the network for small targets. At the same time, a convolution decomposition was applied to reduce the computational complexity. Finally, the DCAM was adopted in the decoder to provide bottom-up semantic discriminative guidance; it can use the semantic information of the network's deep features to guide the shallow output of features with a higher discriminant ability. A residual refinement module (RRM) was also designed to strengthen the recognition ability of each stage. The feature maps of the skip connection and the decoder all go through the RRM. RESULTS For the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) dataset, our proposed model achieved a Dice similarity coefficient (DSC) of 79.31% and an average boundary distance (ABD) of 4.15 mm. For the Prostate Multiparametric MRI (PROMM) dataset, our method greatly improved the DSC to 82.11% and obtained an ABD of 3.64 mm. CONCLUSIONS The experimental results of two different mpMRI prostate datasets demonstrate that our model is more accurate and reliable on small targets. In addition, it outperforms other state-of-the-art methods.
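A common form of the convolution decomposition mentioned for the CPCM replaces a k x k kernel with a 1 x k convolution followed by a k x 1 convolution, cutting per-position cost from k^2 to 2k. A back-of-the-envelope sketch, assuming equal input and output channel counts (the paper's exact factorization may differ):

```python
def full_cost(k, c):
    """Multiply-accumulates per output position for a dense k x k
    convolution with c input and c output channels."""
    return k * k * c * c

def separable_cost(k, c):
    """Cost when the kernel is decomposed into 1 x k followed by k x 1."""
    return 2 * k * c * c

k, c = 7, 64
print(full_cost(k, c), separable_cost(k, c))   # 200704 57344
print(separable_cost(k, c) / full_cost(k, c))  # 2/7 ≈ 0.286 of the dense cost
```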
Affiliation(s)
- Yatong Liu
  - School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Yu Zhu
  - School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
  - Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, P. R. China
- Wei Wang
  - Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
- Bingbing Zheng
  - School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Xiangxiang Qin
  - School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Peijun Wang
  - Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China

19
Sun SW, Xu X, Liu QP, Chen JN, Zhu FP, Liu XS, Zhang YD, Wang J. LiSNet: An artificial intelligence-based tool for liver imaging staging of hepatocellular carcinoma aggressiveness. Med Phys 2022; 49:6903-6913. [PMID: 36134900 DOI: 10.1002/mp.15972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Revised: 08/01/2022] [Accepted: 08/30/2022] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Presurgical assessment of hepatocellular carcinoma (HCC) aggressiveness can benefit patients' treatment options and prognosis. PURPOSE To develop an artificial intelligence (AI) tool, namely, LiSNet, in the task of scoring and interpreting HCC aggressiveness with computed tomography (CT) imaging. METHODS A total of 358 patients with HCC undergoing curative liver resection were retrospectively included. Three subspecialists were recruited to pixel-wise annotate and grade tumor aggressiveness based on CT imaging. LiSNet was trained and validated in 193 and 61 patients with a deep neural network to emulate the diagnostic acumen of subspecialists for staging HCC. The test set comprised 104 independent patients. We subsequently compared LiSNet with an experience-based binary diagnosis scheme and human-AI partnership that combined binary diagnosis and LiSNet for assessing tumor aggressiveness. We also assessed the efficiency of LiSNet for predicting survival outcomes. RESULTS At the pixel-wise level, the agreement rate of LiSNet with subspecialists was 0.658 (95% confidence interval [CI]: 0.490-0.779), 0.595 (95% CI: 0.406-0.734), and 0.369 (95% CI: 0.134-0.566), for scoring HCC aggressiveness grades I, II, and III, respectively. Additionally, LiSNet was comparable to subspecialists for predicting histopathological microvascular invasion (area under the curve: LiSNet: 0.668 [95% CI: 0.559-0.776] versus subspecialists: 0.699 [95% CI: 0.591-0.806], p > 0.05). In a human-AI partnered diagnosis, combining LiSNet and experience-based binary diagnosis can achieve the best predictive ability for microvascular invasion (area under the curve: 0.705 [95% CI: 0.589-0.820]). Furthermore, LiSNet was able to indicate overall survival after surgery. CONCLUSION The designed LiSNet tool warrants evaluation as an alternative tool for radiologists to conduct automatic staging of HCC aggressiveness at the pixel-wise level with CT imaging. Its prognostic value might benefit patients' treatment options and survival prediction.
Affiliation(s)
- Shu Wen Sun
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Xun Xu
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Qiu Ping Liu
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Jie Neng Chen
  - The College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Fei Peng Zhu
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Xi Sheng Liu
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Yu Dong Zhang
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Jie Wang
  - Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China

20
Gao W, Wang C, Li Q, Zhang X, Yuan J, Li D, Sun Y, Chen Z, Gu Z. Application of medical imaging methods and artificial intelligence in tissue engineering and organ-on-a-chip. Front Bioeng Biotechnol 2022; 10:985692. [PMID: 36172022 PMCID: PMC9511994 DOI: 10.3389/fbioe.2022.985692] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 08/08/2022] [Indexed: 12/02/2022] Open
Abstract
Organ-on-a-chip (OOC) is a new type of biochip technology. Various types of OOC systems have been developed rapidly in the past decade and found important applications in drug screening and precision medicine. However, due to the complexity in the structure of both the chip-body itself and the engineered-tissue inside, the imaging and analysis of OOC have still been a big challenge for biomedical researchers. Considering that medical imaging is moving towards higher spatial and temporal resolution and has more applications in tissue engineering, this paper aims to review medical imaging methods, including CT, micro-CT, MRI, small animal MRI, and OCT, and introduces the application of 3D printing in tissue engineering and OOC in which medical imaging plays an important role. The achievements of medical imaging assisted tissue engineering are reviewed, and the potential applications of medical imaging in organoids and OOC are discussed. Moreover, artificial intelligence - especially deep learning - has demonstrated its excellence in the analysis of medical imaging; we will also present the application of artificial intelligence in the image analysis of 3D tissues, especially for organoids developed in novel OOC systems.
Affiliation(s)
- Wanying Gao
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Chunyan Wang
  - State Key Laboratory of Space Medicine Fundamentals and Application, Chinese Astronaut Science Researching and Training Center, Beijing, China
- Qiwei Li
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Xijing Zhang
  - Central Research Institute, United Imaging Group, Shanghai, China
- Jianmin Yuan
  - Central Research Institute, United Imaging Group, Shanghai, China
- Dianfu Li
  - The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Yu Sun
  - International Children’s Medical Imaging Research Laboratory, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Zaozao Chen
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Zhongze Gu
  - State Key Laboratory of Bioelectronics, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China

21
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780 DOI: 10.1016/j.compbiomed.2022.105817] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Revised: 06/12/2022] [Accepted: 07/03/2022] [Indexed: 11/03/2022]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4; 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7; 0.71/26.3/2.2 for the peripheral zone. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
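The three segmentation metrics used for the Prostate158 baselines beyond Dice, the Hausdorff distance (HD) and the average surface distance (ASD), are both functions of nearest-neighbour distances between boundary point sets. A minimal 2-D sketch with toy point sets (real evaluations run on 3-D voxel surfaces, typically in millimetres):

```python
def _d(a, b):
    """Euclidean distance between two 2-D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def hausdorff(A, B):
    """Symmetric Hausdorff distance: the largest nearest-neighbour
    distance in either direction between two boundary point sets."""
    h_ab = max(min(_d(a, b) for b in B) for a in A)
    h_ba = max(min(_d(b, a) for a in A) for b in B)
    return max(h_ab, h_ba)

def avg_surface_distance(A, B):
    """Average surface distance: mean nearest-neighbour distance, both ways."""
    d_ab = [min(_d(a, b) for b in B) for a in A]
    d_ba = [min(_d(b, a) for a in A) for b in B]
    return (sum(d_ab) + sum(d_ba)) / (len(A) + len(B))

A = [(0, 0), (0, 1), (1, 0)]   # toy predicted boundary
B = [(0, 0), (0, 1), (3, 0)]   # toy reference boundary
print(hausdorff(A, B), avg_surface_distance(A, B))  # 2.0 0.5
```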
Collapse
Affiliation(s)
- Lisa C Adams
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
- Marcus R Makowski
- Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
- Günther Engel
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
- Maximilian Rattunde
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Felix Busch
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Patrick Asbach
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Stefan M Niehues
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Geert Litjens
- Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Keno K Bressem
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
21
Rojas-Domínguez A, Valdez SI, Ornelas-Rodríguez M, Carpio M. Improved training of deep convolutional networks via minimum-variance regularized adaptive sampling. Soft Comput 2022. [DOI: 10.1007/s00500-022-07131-7]
22
Kaneko M, Fukuda N, Nagano H, Yamada K, Yamada K, Konishi E, Sato Y, Ukimura O. Artificial intelligence trained with integration of multiparametric MR-US imaging data and fusion biopsy trajectory-proven pathology data for 3D prediction of prostate cancer: A proof-of-concept study. Prostate 2022; 82:793-803. [PMID: 35192229] [DOI: 10.1002/pros.24321]
Abstract
BACKGROUND We aimed to develop an artificial intelligence (AI) algorithm that predicts the volume and location of clinically significant cancer (CSCa) using a convolutional neural network (CNN) trained on integrated multiparametric MR-US image data and MRI-US fusion prostate biopsy (MRI-US PBx) trajectory-proven pathology data. METHODS Twenty consecutive patients prospectively underwent MRI-US PBx, followed by robot-assisted radical prostatectomy (RARP). The AI algorithm was trained on MR-US image data integrated with MRI-US PBx trajectory-proven pathology. Agreement with the 3D cancer mapping of RARP specimens was compared between the AI-suggested 3D CSCa mapping and an experienced radiologist's 3D CSCa mapping on MRI alone according to the Prostate Imaging Reporting and Data System (PI-RADS) version 2. The characteristics of tumors detected and missed by the AI were compared across 22,968 images. The relationships between CSCa volumes and the volumes predicted by the AI and by the radiologist's PI-RADS-based reading were analyzed. RESULTS The concordance of the CSCa center with that in RARP specimens was significantly higher for the AI prediction than for the radiologist's reading (83% vs. 54%, p = 0.036). CSCa volumes predicted by the AI were more accurate (r = 0.90, p < 0.001) than the radiologist's reading. A limitation is that the elastic fusion technology has its own registration error. CONCLUSIONS We present a novel pilot AI algorithm for 3D prediction of PCa, trained by integrating multiparametric MR-US image data with fusion biopsy trajectory-proven pathology data. This deep learning model may predict the 3D mapping of CSCa, in both volume and center location, more precisely than a radiologist's reading based on PI-RADS version 2, and has potential in the planning of focal therapy.
Affiliation(s)
- Masatomo Kaneko
- Department of Urology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Norio Fukuda
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Hitomi Nagano
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Kaori Yamada
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Kei Yamada
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Eiichi Konishi
- Department of Surgical Pathology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Osamu Ukimura
- Department of Urology, Kyoto Prefectural University of Medicine, Kyoto, Japan
23
Ayyad SM, Badawy MA, Shehata M, Alksas A, Mahmoud A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. A New Framework for Precise Identification of Prostatic Adenocarcinoma. Sensors 2022; 22:1848. [PMID: 35270995] [PMCID: PMC8915102] [DOI: 10.3390/s22051848]
Abstract
Prostate cancer, also known as prostatic adenocarcinoma, is an unconstrained growth of epithelial cells in the prostate and has become one of the leading causes of cancer-related death worldwide. The survival of patients with prostate cancer relies on detection at an early, treatable stage. In this paper, we introduce a new comprehensive framework to precisely differentiate between malignant and benign prostate cancer. This framework proposes a noninvasive computer-aided diagnosis system that integrates two MR imaging modalities (diffusion-weighted (DW) and T2-weighted (T2W)). For the first time, it combines functional features, represented by apparent diffusion coefficient (ADC) maps estimated from DW-MRI of the whole prostate, with first- and second-order texture features extracted from T2W-MRI of the whole prostate and with shape features, represented by spherical harmonics constructed for the lesion inside the prostate, integrated with PSA screening results. The dataset presented in the paper includes 80 biopsy-confirmed patients with a mean age of 65.7 years (43 benign prostatic hyperplasia, 37 prostatic carcinomas). Experiments were conducted using well-known machine learning approaches, including support vector machine (SVM), random forest (RF), decision tree (DT), and linear discriminant analysis (LDA) classification models, to study the impact of different feature sets on identification of prostatic adenocarcinoma.
Using a leave-one-out cross-validation approach, the diagnostic results obtained with the SVM classification model and the combined feature set after feature selection (88.75% accuracy, 81.08% sensitivity, 95.35% specificity, and 0.8821 AUC) indicated that integrating and reducing the different feature sets enhanced diagnostic performance compared with each individual feature set and with the other machine learning classifiers. In addition, the developed diagnostic system provided consistent diagnostic performance under 10-fold and 5-fold cross-validation, which confirms the reliability, generalization ability, and robustness of the developed system.
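Leave-one-out cross-validation, as used above, fits the model n times, each time holding out a single patient for testing. A minimal sketch; the one-feature threshold "classifier" is a hypothetical stand-in for the paper's SVM:

```python
def leave_one_out_accuracy(features, labels, fit, predict):
    """Leave-one-out CV: each sample is held out once, the model is fit on
    the remaining n-1 samples, and the held-out sample is predicted."""
    correct = 0
    for i in range(len(features)):
        train_x = features[:i] + features[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = fit(train_x, train_y)
        correct += predict(model, features[i]) == labels[i]
    return correct / len(features)


def fit(xs, ys):
    # Hypothetical classifier: threshold at the midpoint of the class means.
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2


def predict(threshold, x):
    return int(x > threshold)


X = [1.0, 1.2, 0.9, 3.0, 3.2, 2.9]  # toy one-feature dataset
y = [0, 0, 0, 1, 1, 1]
print(leave_one_out_accuracy(X, y, fit, predict))  # 1.0
```

With n samples this trains n models, which is why it is usually reserved for small cohorts like the 80 patients here.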
Affiliation(s)
- Sarah M. Ayyad
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohamed A. Badawy
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohamed Shehata
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Alksas
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohammed Ghazal
- Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt
- Nahla B. Abdel-Hamid
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Labib M. Labib
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- H. Arafat Ali
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35516, Egypt
- Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
24
Li D, Han X, Gao J, Zhang Q, Yang H, Liao S, Guo H, Zhang B. Deep Learning in Prostate Cancer Diagnosis Using Multiparametric Magnetic Resonance Imaging With Whole-Mount Histopathology Referenced Delineations. Front Med (Lausanne) 2022; 8:810995. [PMID: 35096899] [PMCID: PMC8793798] [DOI: 10.3389/fmed.2021.810995]
Abstract
Background: Multiparametric magnetic resonance imaging (mpMRI) plays an important role in the diagnosis of prostate cancer (PCa) in the current clinical setting. However, the performance of mpMRI usually varies with the experience level of the radiologist; thus, the demand for MRI interpretation warrants further analysis. In this study, we developed a deep learning (DL) model to improve PCa diagnostic ability using mpMRI and whole-mount histopathology data. Methods: A total of 739 patients, including 466 with PCa and 273 without PCa, were enrolled from January 2017 to December 2019. The mpMRI data (T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient sequences) were randomly divided into training (n = 659) and validation (n = 80) datasets. Using the whole-mount histopathology as reference, a DL model comprising independent segmentation and classification networks was developed to extract the gland and PCa area for PCa diagnosis. The area under the curve (AUC) was used to evaluate the performance of the prostate classification network. The proposed DL model was subsequently used in clinical practice (independent test dataset; n = 200), and the PCa detection and diagnosis performance of the DL model and of radiologists at different experience levels was evaluated based on sensitivity, specificity, precision, and accuracy. Results: The AUC of the prostate classification network was 0.871 in the validation dataset and 0.797 in the test dataset. The sensitivity, specificity, precision, and accuracy of the DL model for diagnosing PCa in the test dataset were 0.710, 0.690, 0.696, and 0.700, respectively. For the junior radiologist without and with DL model assistance, these values were 0.590, 0.700, 0.663, and 0.645 versus 0.790, 0.720, 0.738, and 0.755, respectively. For the senior radiologist, the values were 0.690, 0.770, 0.750, and 0.730 vs. 0.810, 0.840, 0.835, and 0.825, respectively.
Diagnoses made with DL model assistance were significantly more accurate than those made without assistance (P < 0.05). Conclusion: The diagnostic performance of the DL model is higher than that of junior radiologists, and the model can improve PCa diagnostic accuracy for both junior and senior radiologists.
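The sensitivity, specificity, precision, and accuracy figures above all derive from a 2x2 confusion matrix. A minimal sketch; the counts are hypothetical, chosen only to reproduce the DL model's reported test-set values, not taken from the paper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on disease-positive cases
        "specificity": tn / (tn + fp),   # recall on disease-negative cases
        "precision": tp / (tp + fp),     # fraction of positive calls that are right
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }


# Hypothetical counts for a 200-case test set (100 positive, 100 negative).
m = diagnostic_metrics(tp=71, fp=31, tn=69, fn=29)
print(m)  # sensitivity 0.71, specificity 0.69, precision ~0.696, accuracy 0.70
```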
Affiliation(s)
- Danyan Li
- Department of Radiology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China; Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Xiaowei Han
- Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Jie Gao
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Qing Zhang
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Haibo Yang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Shu Liao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Hongqian Guo
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China; Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
25
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:289. [PMID: 35204380] [PMCID: PMC8870978] [DOI: 10.3390/diagnostics12020289]
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
26
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
27
Sushentsev N, Rundo L, Blyuss O, Nazarenko T, Suvorov A, Gnanapragasam VJ, Sala E, Barrett T. Comparative performance of MRI-derived PRECISE scores and delta-radiomics models for the prediction of prostate cancer progression in patients on active surveillance. Eur Radiol 2022; 32:680-689. [PMID: 34255161] [PMCID: PMC8660717] [DOI: 10.1007/s00330-021-08151-x]
Abstract
OBJECTIVES To compare the performance of the PRECISE scoring system against several MRI-derived delta-radiomics models for predicting histopathological prostate cancer (PCa) progression in patients on active surveillance (AS). METHODS The study included AS patients with biopsy-proven PCa with a minimum follow-up of 2 years and at least one repeat targeted biopsy. Histopathological progression was defined as grade group progression from diagnostic biopsy. The control group included patients with both radiologically and histopathologically stable disease. PRECISE scores were applied prospectively by four uro-radiologists with 5-16 years' experience. T2WI- and ADC-derived delta-radiomics features were computed using baseline and latest available MRI scans, with the predictive modelling performed using the parenclitic networks (PN), least absolute shrinkage and selection operator (LASSO) logistic regression, and random forests (RF) algorithms. Standard measures of discrimination and areas under the ROC curve (AUCs) were calculated, with AUCs compared using DeLong's test. RESULTS The study included 64 patients (27 progressors and 37 non-progressors) with a median follow-up of 46 months. PRECISE scores had the highest specificity (94.7%) and positive predictive value (90.9%), whilst RF had the highest sensitivity (92.6%) and negative predictive value (92.6%) for predicting disease progression. The AUC for PRECISE (84.4%) was non-significantly higher than AUCs of 81.5%, 78.0%, and 80.9% for PN, LASSO regression, and RF, respectively (p = 0.64, 0.43, and 0.57, respectively). No significant differences were observed between AUCs of the three delta-radiomics models (p-value range 0.34-0.77). CONCLUSIONS PRECISE and delta-radiomics models achieved comparably good performance for predicting PCa progression in AS patients. 
KEY POINTS • The observed high specificity and PPV of PRECISE are complemented by the high sensitivity and NPV of delta-radiomics, suggesting a possible synergy between the two image assessment approaches. • The comparable performance of delta-radiomics to PRECISE scores applied by expert readers highlights the prospective use of the former as an objective and standardisable quantitative tool for MRI-guided AS follow-up. • The marginally superior performance of parenclitic networks compared to conventional machine learning algorithms warrants its further use in radiomics research.
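Delta-radiomics features, as used above, quantify the change in each radiomic feature between the baseline and latest MRI scan. A sketch of one common formulation (relative change); the study's exact definition may differ, and the feature names and values here are hypothetical:

```python
def delta_features(baseline, followup):
    """Relative change of each radiomic feature between scans:
    delta = (followup - baseline) / baseline."""
    return {
        name: (followup[name] - baseline[name]) / baseline[name]
        for name in baseline
    }


# Hypothetical T2WI/ADC-derived feature values for one patient.
baseline = {"t2_entropy": 4.0, "adc_mean": 1200.0}
latest = {"t2_entropy": 5.0, "adc_mean": 900.0}
print(delta_features(baseline, latest))  # {'t2_entropy': 0.25, 'adc_mean': -0.25}
```

The resulting delta vector, one value per feature, is what the parenclitic networks, LASSO, and random forest models would take as input.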
Affiliation(s)
- Nikita Sushentsev
- Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge, UK
- Department of Radiology, University of Cambridge School of Clinical Medicine, Box 218, Cambridge Biomedical Campus, Cambridge, CB2 0QQ, UK
- Leonardo Rundo
- Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Oleg Blyuss
- School of Physics, Engineering & Computer Science, University of Hertfordshire, Hatfield, UK
- Department of Paediatrics and Paediatric Infectious Diseases, Sechenov First Moscow State Medical University, Moscow, Russia
- Department of Applied Mathematics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Tatiana Nazarenko
- Department of Mathematics and Institute for Women's Health, University College London, London, UK
- Aleksandr Suvorov
- World-Class Research Center "Digital Biodesign and Personalised Healthcare", Sechenov First Moscow State Medical University, Moscow, Russia
- Vincent J Gnanapragasam
- Division of Urology, Department of Surgery, University of Cambridge, Cambridge, UK
- Cambridge Urology Translational Research and Clinical Trials Office, University of Cambridge, Cambridge, UK
- Evis Sala
- Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
- Tristan Barrett
- Department of Radiology, Addenbrooke's Hospital and University of Cambridge, Cambridge, UK
28
Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Med Image Anal 2022; 75:102288. [PMID: 34784540] [PMCID: PMC8678366] [DOI: 10.1016/j.media.2021.102288]
Abstract
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern ≥ 4) and indolent (Gleason Pattern = 3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 who underwent radical prostatectomy and 24 with normal prostate MRI. CorrSigNIA was tested on three independent test sets comprising 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81 ± 0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82 ± 0.31 and 0.86 ± 0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts.
In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
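The lesion-level ROC-AUC reported above can be computed as the probability that a randomly chosen positive lesion scores higher than a randomly chosen negative one (the Mann-Whitney formulation, with ties counted as half). A minimal sketch with hypothetical detector scores:

```python
def roc_auc(scores_pos, scores_neg):
    """ROC-AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))


# Hypothetical per-lesion scores from a detector.
cancer = [0.9, 0.8, 0.7, 0.4]
no_cancer = [0.5, 0.3, 0.2, 0.1]
print(roc_auc(cancer, no_cancer))  # 0.9375
```

The quadratic pair loop is fine for small cohorts; a rank-based O(n log n) version gives the same value on larger ones.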
29
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310] [PMCID: PMC8625809] [DOI: 10.3390/diagnostics11111964]
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
30
Okawa K, Inoue M, Sakae T. Development of a tracking error prediction system for the CyberKnife Synchrony Respiratory Tracking System with use of support vector regression. Med Biol Eng Comput 2021; 59:2409-2418. [PMID: 34655052] [DOI: 10.1007/s11517-021-02445-4]
Abstract
PURPOSE The accuracy of the CyberKnife Synchrony Respiratory Tracking System (SRTS) depends on the patient's breathing pattern, so the tracking error must be determined for each patient. Support vector regression (SVR) can be used to identify each patient's tracking error easily. This study aimed to develop a system with SVR that predicts the tracking error from a patient's respiratory waveform. METHODS Respiratory waveforms from 93 patients were obtained. The feature variables were the variation in respiration amplitude, the tumor velocity, and the phase shift between the tumor and the chest wall; the target variable was the tracking error. The learning model was evaluated with tenfold cross-validation. We documented the difference between the predicted and actual tracking errors and assessed the correlation coefficient and coefficient of determination. RESULTS The average and maximum differences between the actual and predicted tracking errors were 0.57 ± 0.63 mm and 2.1 mm, respectively. The correlation coefficient and coefficient of determination were 0.86 and 0.74, respectively. CONCLUSION We developed a system that uses SVR to predict the SRTS tracking error in the CyberKnife Robotic Radiosurgery System from breathing parameters. Its accuracy is clinically useful, and it allows the tracking error to be evaluated easily.
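The two evaluation statistics used above, the correlation coefficient and the coefficient of determination, can be computed directly from the actual and predicted tracking errors. A minimal sketch with hypothetical values in mm (not the study's data):

```python
def evaluate_predictions(actual, predicted):
    """Pearson correlation r between actual and predicted values, and the
    coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    n = len(actual)
    mean_a = sum(actual) / n
    mean_p = sum(predicted) / n
    cov = sum((a - mean_a) * (p - mean_p) for a, p in zip(actual, predicted))
    var_a = sum((a - mean_a) ** 2 for a in actual)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / (var_a ** 0.5 * var_p ** 0.5)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    r2 = 1 - ss_res / var_a
    return r, r2


# Hypothetical actual vs. SVR-predicted tracking errors (mm).
actual = [1.0, 2.0, 3.0, 4.0]
predicted = [1.2, 1.8, 3.1, 3.9]
r, r2 = evaluate_predictions(actual, predicted)
print(round(r, 3), round(r2, 3))  # 0.991 0.98
```

Note that r measures linear association while R^2 penalizes systematic offsets, which is why the paper reports both (0.86 and 0.74).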
Affiliation(s)
- Kohei Okawa
- Department of Radiotherapy Quality Management, Yokohama CyberKnife Center, Ichizawa-cho 574-1, Asahi-ku, Yokohama, 241-0014, Japan
- Graduate School of Comprehensive Human Science, University of Tsukuba, Ibaraki, 305-8577, Japan
- Mitsuhiro Inoue
- Department of Radiotherapy Quality Management, Yokohama CyberKnife Center, Ichizawa-cho 574-1, Asahi-ku, Yokohama, 241-0014, Japan
- Takeji Sakae
- Proton Medical Research Center, University of Tsukuba Hospital, Ibaraki, 305-8576, Japan
- Faculty of Medicine, University of Tsukuba, Ibaraki, 305-8577, Japan
|
31
|
Hoar D, Lee PQ, Guida A, Patterson S, Bowen CV, Merrimen J, Wang C, Rendon R, Beyea SD, Clarke SE. Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images. Comput Methods Programs Biomed 2021; 210:106375. [PMID: 34500139 DOI: 10.1016/j.cmpb.2021.106375] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 08/22/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE Multiparametric MRI (mp-MRI) is a widely used tool for diagnosing and staging prostate cancer. The purpose of this study was to evaluate whether transfer learning, unsupervised pre-training and test-time augmentation significantly improved the performance of a convolutional neural network (CNN) for pixel-by-pixel prediction of cancer vs. non-cancer using mp-MRI datasets. METHODS 154 subjects undergoing mp-MRI were prospectively recruited, 16 of whom subsequently underwent radical prostatectomy. Logistic regression, random forest and CNN models were trained on mp-MRI data using histopathology as the gold standard. Transfer learning, unsupervised pre-training and test-time augmentation were used to boost CNN performance. Models were evaluated using Dice score and area under the receiver operating curve (AUROC) with leave-one-subject-out cross validation. Permutation feature importance testing was performed to evaluate the relative value of each MR contrast to CNN model performance. Statistical significance (p<0.05) was determined using the paired Wilcoxon signed rank test with Benjamini-Hochberg correction for multiple comparisons. RESULTS Baseline CNN outperformed logistic regression and random forest models. Transfer learning and unsupervised pre-training did not significantly improve CNN performance over baseline; however, test-time augmentation resulted in significantly higher Dice scores over both baseline CNN and CNN plus either of transfer learning or unsupervised pre-training. The best performing model was CNN with transfer learning and test-time augmentation (Dice score of 0.59 and AUROC of 0.93). The most important contrast was apparent diffusion coefficient (ADC), followed by Ktrans and T2, although each contributed significantly to classifier performance. CONCLUSIONS The addition of transfer learning and test-time augmentation resulted in significant improvement in CNN segmentation performance in a small set of prostate cancer mp-MRI data. Results suggest that these techniques may be more broadly useful for the optimization of deep learning algorithms applied to the problem of semantic segmentation in biomedical image datasets. However, further work is needed to improve the generalizability of the specific model presented herein.
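Test-time augmentation, the one technique that significantly improved the CNN in this study, can be sketched as follows: run the model on geometrically transformed copies of the input, invert each transform on the output map, and average the results. The `predict` function below is a hypothetical stand-in for a trained per-pixel segmentation network, not the authors' model.

```python
import numpy as np

def predict(image):
    """Hypothetical stand-in for a trained CNN's per-pixel cancer
    probability map: a fixed deterministic function for illustration."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Probability rises toward the image centre, scaled by mean intensity.
    centre = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (0.1 * h * w))
    return centre * image.mean()

def predict_with_tta(image):
    """Test-time augmentation: predict on flipped copies of the input,
    undo each flip on the output map, then average the maps."""
    transforms = [
        (lambda a: a, lambda a: a),                          # identity
        (np.fliplr, np.fliplr),                              # horizontal flip
        (np.flipud, np.flipud),                              # vertical flip
        (lambda a: a[::-1, ::-1], lambda a: a[::-1, ::-1]),  # both flips
    ]
    maps = [undo(predict(apply(image))) for apply, undo in transforms]
    return np.mean(maps, axis=0)

image = np.random.default_rng(1).uniform(size=(64, 64))
tta_map = predict_with_tta(image)
print(tta_map.shape)  # (64, 64)
```

Because each flip is its own inverse, the same function serves as both the forward and the inverse transform; averaging the de-augmented maps smooths out orientation-dependent errors.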
Affiliation(s)
- David Hoar
- Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada
- Peter Q Lee
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Alessandro Guida
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Steven Patterson
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Chris V Bowen
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Cheng Wang
- Department of Pathology, Dalhousie University, Halifax, NS, Canada
- Ricardo Rendon
- Department of Urology, Dalhousie University, Halifax, NS, Canada
- Steven D Beyea
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Sharon E Clarke
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
|
32
|
Interactive, Up-to-date Meta-Analysis of MRI in the Management of Men with Suspected Prostate Cancer. J Digit Imaging 2021; 33:586-594. [PMID: 31898035 DOI: 10.1007/s10278-019-00312-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
The aim of this study was to test an interactive up-to-date meta-analysis (iu-ma) of studies on MRI in the management of men with suspected prostate cancer. Based on the findings of recently published systematic reviews and meta-analyses, two freely accessible dynamic meta-analyses (https://iu-ma.org) were designed using the programming language R in combination with the package "shiny." The first iu-ma compares the performance of the MRI-stratified pathway and the systematic transrectal ultrasound-guided biopsy pathway for the detection of clinically significant prostate cancer, while the second iu-ma focuses on the use of biparametric versus multiparametric MRI for the diagnosis of prostate cancer. Our iu-mas allow for the effortless addition of new studies and data, thereby enabling physicians to keep track of the most recent scientific developments without having to resort to classical static meta-analyses that may become outdated in a short period of time. Furthermore, the iu-mas enable in-depth subgroup analyses by a wide variety of selectable parameters. Such an analysis is not only tailored to the needs of the reader but is also far more comprehensive than a classical meta-analysis. In that respect, following multiple subgroup analyses, we found that even for various subgroups, detection rates of prostate cancer are not different between biparametric and multiparametric MRI. Secondly, we could confirm the favorable influence of MRI biopsy stratification for multiple clinical scenarios. For the future, we envisage the use of this technology in addressing further clinical questions of other organ systems.
|
33
|
Stanzione A, Ponsiglione A, Di Fiore GA, Picchi SG, Di Stasi M, Verde F, Petretta M, Imbriaco M, Cuocolo R. Prostate Volume Estimation on MRI: Accuracy and Effects of Ellipsoid and Bullet-Shaped Measurements on PSA Density. Acad Radiol 2021; 28:e219-e226. [PMID: 32553281 DOI: 10.1016/j.acra.2020.05.014] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2020] [Revised: 05/13/2020] [Accepted: 05/14/2020] [Indexed: 12/14/2022]
Abstract
RATIONALE AND OBJECTIVES PSA density (PSAd), an important decision-making parameter for patients with suspected prostate cancer (PCa), is dependent on magnetic resonance imaging prostate volume (PV) estimation. We aimed to compare the accuracy of the ellipsoid and bullet-shaped formulas with manual whole-gland segmentation as the reference standard and to evaluate the corresponding PSAd diagnostic accuracy in predicting clinically significant PCa. MATERIALS AND METHODS We retrospectively analysed 195 patients with suspected PCa who underwent magnetic resonance imaging and prostate biopsy. Patients with PCa were categorized according to ISUP score. PV and corresponding PSAd were calculated with manual segmentation (mPV and mPSAd) as well as with the ellipsoid (ePV and ePSAd) and bullet-shaped (bPV and bPSAd) formulas. Inter- and intra-reader reproducibility were assessed with Lin's concordance correlation coefficient and the intraclass correlation coefficient (ICC). A 2-way analysis of variance with post-hoc Bonferroni test was used for assessing PV differences. Predictive values of PSAd calculated with the different methods for detecting clinically significant PCa were evaluated by receiver operating characteristic curve analysis and Youden's index. RESULTS Both intra- (ρ = 0.99, ICC = 0.99) and inter-reader (ρ = 0.98, ICC = 0.98) reproducibility were excellent. No significant difference was found between ePV and the reference standard (p = 1.00). bPV was significantly different from both (p = 0.00). PSAd (mPSAd/ePSAd cut-off ≥ 0.15, bPSAd cut-off ≥ 0.12) had sensitivity = 69-70%, specificity = 72-75%, areas under the curve = 0.757-0.760 (p = 0.70-0.88). CONCLUSIONS Our work shows that when using the bullet-shaped formula, a different PSAd cut-off must be considered to avoid PCa under-diagnosis and inaccurate risk-stratification.
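As a worked illustration of the two volume formulas and the resulting PSA-density shift: the ellipsoid formula uses a π/6 coefficient, while the bullet-shaped formula is commonly given with a 5π/24 coefficient (an assumption here; the paper should be consulted for its exact formula). The measurements below are hypothetical.

```python
from math import pi

def ellipsoid_volume(length_cm, width_cm, height_cm):
    """Prolate-ellipsoid prostate volume: V = L * W * H * pi/6."""
    return length_cm * width_cm * height_cm * pi / 6

def bullet_volume(length_cm, width_cm, height_cm):
    """Bullet-shaped formula, commonly given as V = L * W * H * 5*pi/24
    (a cylinder capped by half an ellipsoid)."""
    return length_cm * width_cm * height_cm * 5 * pi / 24

def psa_density(psa_ng_ml, volume_ml):
    """PSAd = serum PSA divided by prostate volume (1 cm^3 = 1 mL)."""
    return psa_ng_ml / volume_ml

L, W, H, psa = 4.5, 4.0, 3.5, 7.2  # hypothetical measurements
e_pv, b_pv = ellipsoid_volume(L, W, H), bullet_volume(L, W, H)
print(f"ellipsoid PV {e_pv:.1f} mL -> PSAd {psa_density(psa, e_pv):.3f}")
print(f"bullet    PV {b_pv:.1f} mL -> PSAd {psa_density(psa, b_pv):.3f}")
```

Because the bullet coefficient (5π/24) is exactly 5/4 of the ellipsoid one (π/6), the bullet volume is 25% larger and the corresponding PSAd proportionally lower, which is consistent with the lower bullet-formula PSAd cut-off (0.12 vs. 0.15) reported in the abstract.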
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Andrea Ponsiglione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Stefano Giusto Picchi
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Martina Di Stasi
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Mario Petretta
- Department of Translational Medical Sciences, University of Naples "Federico II", Naples, Italy
- Massimo Imbriaco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Renato Cuocolo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
|
34
|
Huang P, Xu L, Xie Y. Biomedical Applications of Electromagnetic Detection: A Brief Review. Biosensors 2021; 11:225. [PMID: 34356696 PMCID: PMC8301974 DOI: 10.3390/bios11070225] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/28/2021] [Revised: 06/29/2021] [Accepted: 07/03/2021] [Indexed: 01/01/2023]
Abstract
This paper reviews recent biomedical applications of electromagnetic detection. First, the thermal, non-thermal, and cumulative thermal effects of electromagnetic fields on organisms, together with their biological mechanisms, are introduced. According to electromagnetic biological theory, the main parameters governing electromagnetic biological effects are frequency and intensity. The review then briefly surveys related biomedical applications of electromagnetic detection and biosensors, organized by frequency, such as health monitoring, food preservation, and disease treatment. In addition, electromagnetic detection combined with machine learning (ML) technology has been used in clinical diagnosis because of its powerful feature-extraction capabilities, so relevant research applying ML technology to electromagnetic medical images is summarized. Finally, future developments in electromagnetic detection for biomedical applications are presented.
Affiliation(s)
- Pu Huang
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Lijun Xu
- Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing 100191, China
- Yuedong Xie
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing 100191, China
|
35
|
MRI-derived radiomics model for baseline prediction of prostate cancer progression on active surveillance. Sci Rep 2021; 11:12917. [PMID: 34155265 PMCID: PMC8217549 DOI: 10.1038/s41598-021-92341-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 06/03/2021] [Indexed: 02/05/2023] Open
Abstract
Nearly half of patients with prostate cancer (PCa) harbour low- or intermediate-risk disease considered suitable for active surveillance (AS). However, up to 44% of patients discontinue AS within the first five years, highlighting the unmet clinical need for robust baseline risk-stratification tools that enable timely and accurate prediction of tumour progression. In this proof-of-concept study, we sought to investigate the added value of MRI-derived radiomic features to standard-of-care clinical parameters for improving baseline prediction of PCa progression in AS patients. Tumour T2-weighted imaging (T2WI) and apparent diffusion coefficient radiomic features were extracted, with rigorous calibration and pre-processing methods applied to select the most robust features for predictive modelling. Following leave-one-out cross-validation, the addition of T2WI-derived radiomic features to clinical variables alone improved the area under the ROC curve for predicting progression from 0.61 (95% confidence interval [CI] 0.481-0.743) to 0.75 (95% CI 0.64-0.86). These exploratory findings demonstrate the potential benefit of MRI-derived radiomics to add incremental benefit to clinical data only models in the baseline prediction of PCa progression on AS, paving the way for future multicentre studies validating the proposed model and evaluating its impact on clinical outcomes.
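The evaluation design described above (leave-one-out cross-validation, comparing a clinical-only model with a clinical-plus-radiomics model by AUC) can be sketched as follows; the synthetic features, model choice (logistic regression), and effect sizes are illustrative assumptions, not the study's data or pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 80
y = rng.integers(0, 2, n)                                  # progression yes/no
clinical = y[:, None] * 0.5 + rng.normal(0, 1.0, (n, 2))   # weak signal
radiomic = y[:, None] * 1.0 + rng.normal(0, 1.0, (n, 4))   # stronger signal

def loocv_auc(X, y):
    """Pool the single held-out prediction from each leave-one-out fold,
    then compute one AUC over all cases (as in the study design)."""
    preds = np.empty(len(y))
    for train, test in LeaveOneOut().split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
        preds[test] = clf.predict_proba(X[test])[:, 1]
    return roc_auc_score(y, preds)

auc_clin = loocv_auc(clinical, y)
auc_both = loocv_auc(np.hstack([clinical, radiomic]), y)
print(f"clinical only AUC {auc_clin:.2f}, clinical + radiomics AUC {auc_both:.2f}")
```

The point of pooling out-of-fold predictions before computing a single AUC is that each case is scored by a model that never saw it, mirroring the paper's leave-one-out estimate of added radiomic value.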
|
36
|
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:959. [PMID: 34073627 PMCID: PMC8229869 DOI: 10.3390/diagnostics11060959] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/19/2021] [Accepted: 05/21/2021] [Indexed: 12/14/2022] Open
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) for prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of the 59 included studies showed that most research has been conducted on PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported on stand-alone diagnostic accuracy, whereas only 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof of the clinical utility of PCa AI applications. In order to introduce AI within the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated using external validation and clinical workflow experiments.
|
37
|
Hou Y, Zhang YH, Bao J, Bao ML, Yang G, Shi HB, Song Y, Zhang YD. Artificial intelligence is a promising prospect for the detection of prostate cancer extracapsular extension with mpMRI: a two-center comparative study. Eur J Nucl Med Mol Imaging 2021; 48:3805-3816. [PMID: 34018011 DOI: 10.1007/s00259-021-05381-5] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 04/25/2021] [Indexed: 12/23/2022]
Abstract
PURPOSE Balancing the preservation of urinary continence and sexual potency against achieving negative surgical margins is clinically important but difficult to accomplish. Accurate detection of extracapsular extension (ECE) of prostate cancer (PCa) is thus crucial for determining appropriate treatment options. We aimed to develop and validate an artificial intelligence (AI)-based tool for detecting ECE of PCa using multiparametric magnetic resonance imaging (mpMRI). METHODS Eight hundred and forty-nine consecutive PCa patients who underwent mpMRI and prostatectomy without previous radio- or hormonal therapy at two medical centers were retrospectively included. The AI tool was built on a ResNeXt network embedded with a spatial attention map of experts' prior knowledge (PAGNet) from 596 training patients. Model validation was performed in 150 internal and 103 external patients. Performance comparison was made between AI, two experts using a criteria-based ECE grading system, and expert-AI interaction. RESULTS An index PAGNet model using a single-slice image yielded the highest areas under the receiver operating characteristic curve (AUC) of 0.857 (95% confidence interval [CI], 0.827-0.884), 0.807 (95% CI, 0.735-0.867), and 0.728 (95% CI, 0.631-0.811) in the training, internal, and external validation data, respectively. The performance of the two experts (AUC, 0.632 to 0.741 vs 0.715 to 0.857) was lower (paired comparison, all p values < 0.05) than that of AI assessment. When the experts' interpretations were adjusted by AI assessments, their performance improved. CONCLUSION Our AI tool, showing improved accuracy, offers a promising alternative to human experts for ECE staging using mpMRI.
Affiliation(s)
- Ying Hou
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, No. 300, Guangzhou Road, Nanjing, 210029, Jiangsu Province, China
- Yi-Hong Zhang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, 3663 N. Zhongshan Rd., Shanghai, 200062, China
- Jie Bao
- Department of Radiology, The First Affiliated Hospital of Soochow University, No. 188, Shizi Road, Suzhou, 215006, Jiangsu Province, China
- Mei-Ling Bao
- Department of Pathology, The First Affiliated Hospital of Nanjing Medical University, No. 300, Guangzhou Road, Nanjing, 210029, Jiangsu Province, China
- Guang Yang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, 3663 N. Zhongshan Rd., Shanghai, 200062, China
- Hai-Bin Shi
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, No. 300, Guangzhou Road, Nanjing, 210029, Jiangsu Province, China
- Yang Song
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, 3663 N. Zhongshan Rd., Shanghai, 200062, China
- Yu-Dong Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, No. 300, Guangzhou Road, Nanjing, 210029, Jiangsu Province, China
|
38
|
Lai CC, Wang HK, Wang FN, Peng YC, Lin TP, Peng HH, Shen SH. Autosegmentation of Prostate Zones and Cancer Regions from Biparametric Magnetic Resonance Images by Using Deep-Learning-Based Neural Networks. Sensors 2021; 21:2709. [PMID: 33921451 PMCID: PMC8070192 DOI: 10.3390/s21082709] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 04/01/2021] [Accepted: 04/09/2021] [Indexed: 12/21/2022]
Abstract
The accuracy in diagnosing prostate cancer (PCa) has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) was found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zones and cancer region by using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into the three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI) and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC images exhibited the best performance, with a dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
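The headline segmentation metric here, the Dice similarity coefficient, can be computed for a pair of binary masks as follows (the toy masks are illustrative):

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    denom = pred.sum() + true.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, true).sum() / denom

# Toy example: two overlapping square "lesions" on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 pixels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 pixels
print(f"DSC = {dice(a, b):.2f}")  # overlap is 2x2 = 4 pixels -> 2*4/32 = 0.25
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which is why the 90.45% TZ score above indicates near-complete agreement while the 52.73% PCa-region score reflects only partial overlap.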
Affiliation(s)
- Chih-Ching Lai
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu 300044, Taiwan
- Hsin-Kai Wang
- Department of Radiology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Fu-Nien Wang
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu 300044, Taiwan
- Yu-Ching Peng
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Tzu-Ping Lin
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Department of Urology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- Hsu-Hsia Peng
- Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu 300044, Taiwan
- Correspondence: Tel.: +886-3-571-5131 (ext. 80189) (H.-H.P.); +886-2-28757350 (S.-H.S.)
- Shu-Huei Shen
- Department of Radiology, Taipei Veterans General Hospital, Taipei 112201, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Correspondence: Tel.: +886-3-571-5131 (ext. 80189) (H.-H.P.); +886-2-28757350 (S.-H.S.)
|
39
|
Popov GV, Chub AA, Lerner YV, Tsoy LV, Dubinina AV, Varshavsky VA. [Artificial intelligence in the diagnosis of prostate cancer]. Arkh Patol 2021; 83:38-45. [PMID: 33822553 DOI: 10.17116/patol20218302138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
OBJECTIVE To discuss the possibilities and prospects of using artificial intelligence (AI) in the diagnosis of prostate cancer (PC). The laboratory diagnosis of PC is considered and prostate images are analyzed according to transrectal ultrasound and magnetic resonance imaging using AI algorithms. Particular emphasis is placed on prostate histologic evaluation.
Affiliation(s)
- G V Popov
- PathVision.ai Corporation, Moscow, Russia
- A A Chub
- PathVision.ai Corporation, Moscow, Russia
- Yu V Lerner
- I.M. Sechenov First Moscow State Medical University (Sechenov University) of the Ministry of Health of Russia, Moscow, Russia
- L V Tsoy
- I.M. Sechenov First Moscow State Medical University (Sechenov University) of the Ministry of Health of Russia, Moscow, Russia
- A V Dubinina
- N.N. Blokhin National Medical Research Center of Oncology of the Ministry of Health of Russia, Moscow, Russia
- V A Varshavsky
- I.M. Sechenov First Moscow State Medical University (Sechenov University) of the Ministry of Health of Russia, Moscow, Russia
|
40
|
Sobecki P, Jóźwiak R, Sklinda K, Przelaskowski A. Effect of domain knowledge encoding in CNN model architecture-a prostate cancer study using mpMRI images. PeerJ 2021; 9:e11006. [PMID: 33732553 PMCID: PMC7953869 DOI: 10.7717/peerj.11006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Accepted: 02/02/2021] [Indexed: 11/20/2022] Open
Abstract
Background Prostate cancer is one of the most common cancers worldwide. Currently, convolution neural networks (CNNs) are achieving remarkable success in various computer vision tasks, and in medical imaging research. Various CNN architectures and methodologies have been applied in the field of prostate cancer diagnosis. In this work, we evaluate the impact of the adaptation of a state-of-the-art CNN architecture on domain knowledge related to problems in the diagnosis of prostate cancer. The architecture of the final CNN model was optimised on the basis of the Prostate Imaging Reporting and Data System (PI-RADS) standard, which is currently the best available indicator in the acquisition, interpretation, and reporting of prostate multi-parametric magnetic resonance imaging (mpMRI) examinations. Methods A dataset containing 330 suspicious findings identified using mpMRI was used. Two CNN models were subjected to comparative analysis. Both implement the concept of decision-level fusion for mpMRI data, providing a separate network for each multi-parametric series. The first model implements a simple fusion of multi-parametric features to formulate the final decision. The architecture of the second model reflects the diagnostic pathway of PI-RADS methodology, using information about a lesion's primary anatomic location within the prostate gland. Both networks were experimentally tuned to successfully classify prostate cancer changes. Results The optimised knowledge-encoded model achieved slightly better classification results compared with the traditional model architecture (AUC = 0.84 vs. AUC = 0.82). We found the proposed model to achieve convergence significantly faster. Conclusions The final knowledge-encoded CNN model provided more stable learning performance and faster convergence to optimal diagnostic accuracy. The results fail to demonstrate that PI-RADS-based modelling of CNN architecture can significantly improve performance of prostate cancer recognition using mpMRI.
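Decision-level fusion, the concept both compared models share (a separate network per mpMRI series, with outputs combined only at the decision stage), can be sketched with stand-in per-modality scorers. Simple averaging is used here as an illustrative fusion rule, and each per-branch "network" is a hypothetical placeholder, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_probability(series, weights):
    """Hypothetical stand-in for one per-modality network: reduce a series
    to a few summary features and squash a linear score to a probability."""
    feats = np.array([series.mean(), series.std(), series.max()])
    return 1.0 / (1.0 + np.exp(-(feats @ weights)))

# One hypothetical lesion represented as three registered mpMRI series.
finding = {
    "T2W": rng.normal(0.4, 0.1, (32, 32)),
    "ADC": rng.normal(0.3, 0.1, (32, 32)),
    "DWI": rng.normal(0.6, 0.1, (32, 32)),
}
weights = {m: rng.normal(size=3) for m in finding}

# Decision-level fusion: each branch produces its own probability;
# the final decision combines only these branch-level outputs.
branch_probs = {m: branch_probability(finding[m], weights[m]) for m in finding}
fused = float(np.mean(list(branch_probs.values())))
print({m: round(p, 3) for m, p in branch_probs.items()}, "fused:", round(fused, 3))
```

The design point is that fusion happens after each modality has been interpreted independently; the knowledge-encoded variant in the paper additionally conditions this fusion stage on the lesion's anatomic zone, as PI-RADS does.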
Affiliation(s)
- Piotr Sobecki
- Applied Artificial Intelligence Laboratory, National Information Processing Institute, Warsaw, Mazowieckie, Poland
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Rafał Jóźwiak
- Applied Artificial Intelligence Laboratory, National Information Processing Institute, Warsaw, Mazowieckie, Poland
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Katarzyna Sklinda
- Department of Radiology, Centre of Postgraduate Medical Education, Warsaw, Poland
- Artur Przelaskowski
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
|
41
|
AI applications in robotics, diagnostic image analysis and precision medicine: Current limitations, future trends, guidelines on CAD systems for medicine. INFORMATICS IN MEDICINE UNLOCKED 2021. [DOI: 10.1016/j.imu.2021.100596] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
|
42
|
Pandey B, Kumar Pandey D, Pratap Mishra B, Rhmann W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2021.01.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
43
|
Weikert T, Noordtzij LA, Bremerich J, Stieltjes B, Parmar V, Cyriac J, Sommer G, Sauter AW. Assessment of a Deep Learning Algorithm for the Detection of Rib Fractures on Whole-Body Trauma Computed Tomography. Korean J Radiol 2020; 21:891-899. [PMID: 32524789 PMCID: PMC7289702 DOI: 10.3348/kjr.2019.0653] [Citation(s) in RCA: 51] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 02/12/2020] [Accepted: 02/19/2020] [Indexed: 12/03/2022] Open
Abstract
Objective To assess the diagnostic performance of a deep learning-based algorithm for automated detection of acute and chronic rib fractures on whole-body trauma CT. Materials and Methods We retrospectively identified all whole-body trauma CT scans referred from the emergency department of our hospital from January to December 2018 (n = 511). Scans were categorized as positive (n = 159) or negative (n = 352) for rib fractures according to the clinically approved written CT reports, which served as the index test. The bone kernel series (1.5-mm slice thickness) served as an input for a detection prototype algorithm trained to detect both acute and chronic rib fractures based on a deep convolutional neural network. It had previously been trained on an independent sample from eight other institutions (n = 11455). Results All CTs except one were successfully processed (510/511). The algorithm achieved a sensitivity of 87.4% and specificity of 91.5% on a per-examination level [per CT scan: rib fracture(s): yes/no]. There were 0.16 false-positives per examination (= 81/510). On a per-finding level, there were 587 true-positive findings (sensitivity: 65.7%) and 307 false-negatives. Furthermore, 97 true rib fractures were detected that were not mentioned in the written CT reports. A major factor associated with correct detection was displacement. Conclusion We found good performance of a deep learning-based prototype algorithm detecting rib fractures on trauma CT on a per-examination level at a low rate of false-positives per case. A potential area for clinical application is its use as a screening tool to avoid false-negative radiology reports.
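The per-examination metrics used in this evaluation follow directly from a confusion matrix. The counts below are hypothetical, chosen only to be roughly consistent with the reported per-examination sensitivity and specificity; they are not the study's actual cell counts.

```python
def per_exam_metrics(tp, fp, tn, fn):
    """Per-examination sensitivity, specificity, and false positives per
    examination (FP count divided by all processed examinations)."""
    n_exams = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),   # detected positives / all positives
        "specificity": tn / (tn + fp),   # correct negatives / all negatives
        "fp_per_exam": fp / n_exams,     # false alarms per processed scan
    }

# Hypothetical confusion counts for a 510-examination cohort.
m = per_exam_metrics(tp=139, fp=30, tn=321, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```

With these illustrative counts, sensitivity is 139/159 ≈ 0.874 and specificity 321/351 ≈ 0.915, matching the reported per-examination figures; note that the abstract's 0.16 false positives per examination is a separate figure computed from finding-level false positives.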
Collapse
Affiliation(s)
- Thomas Weikert, Luca Andre Noordtzij, Jens Bremerich, Bram Stieltjes, Victor Parmar, Joshy Cyriac, Gregor Sommer, Alexander Walter Sauter - Clinic of Radiology and Nuclear Medicine, University Hospital Basel, University of Basel, Basel, Switzerland
44
Chen Y, Xing L, Yu L, Bagshaw HP, Buyyounouski MK, Han B. Automatic intraprostatic lesion segmentation in multiparametric magnetic resonance images with proposed multiple branch UNet. Med Phys 2020; 47:6421-6429. [PMID: 33012016 DOI: 10.1002/mp.14517] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 09/24/2020] [Accepted: 09/25/2020] [Indexed: 12/13/2022] Open
Abstract
PURPOSE Contouring intraprostatic lesions is a prerequisite for dose-escalating these lesions in radiotherapy to improve local cancer control. In this study, a deep learning-based approach was developed for automatic intraprostatic lesion segmentation in multiparametric magnetic resonance imaging (mpMRI) images for use in clinical practice. METHODS mpMRI images from 136 patient cases were collected from our institution; all cases contained suspicious lesions with a Prostate Imaging Reporting and Data System (PI-RADS) score ≥ 4. Contours of the lesion and prostate were manually created on axial T2-weighted (T2W), apparent diffusion coefficient (ADC), and high b-value diffusion-weighted imaging (DWI) images to provide ground truth data. A multiple branch UNet (MB-UNet) was then proposed for segmentation of an indistinct target in multi-modality MRI images. An encoder module was designed with three branches, one per MRI modality, to fully extract the high-level features provided by the different modalities; an input module was added with three sub-branches for three consecutive image slices, to account for contour consistency among slices; and a deep supervision strategy was integrated into the network to speed up convergence and improve performance. The network outputs probability maps of the background, normal prostate, and lesion to generate the lesion segmentation, and performance was evaluated using the dice similarity coefficient (DSC) as the main metric. RESULTS A total of 162 lesions were contoured on 652 image slices: 119 in the peripheral zone, 38 in the transition zone, four in the central zone, and one in the anterior fibromuscular stroma. All prostates were also contoured, on 1,264 image slices.
For segmentation of lesions in the testing set, MB-UNet achieved a per-case DSC of 0.6333, specificity of 0.9993, and sensitivity of 0.7056, and a global DSC of 0.7205, specificity of 0.9993, and sensitivity of 0.7409. All three deep learning strategies adopted in this study contributed to the performance improvement of MB-UNet. Omitting the DWI modality degraded segmentation performance more markedly than omitting either of the other two modalities. CONCLUSIONS A deep learning-based approach with the proposed MB-UNet was developed to automatically segment suspicious lesions in mpMRI images. This study makes it more feasible to adopt dose boosting of intraprostatic lesions in clinical practice to achieve better outcomes.
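The dice similarity coefficient used as the main metric above is straightforward to compute on binary masks; a minimal sketch (ours, not the paper's code), assuming NumPy arrays of 0/1 voxels:

```python
import numpy as np

# Illustrative sketch (ours, not the paper's code): Dice similarity
# coefficient between a predicted and a ground-truth binary mask.

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A (pred) and B (gt)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```

The "per case" DSC in the abstract averages this score over patients, while the "global" DSC pools all voxels before computing one score, which is why the two values differ.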
Affiliation(s)
- Yizheng Chen, Lei Xing, Lequan Yu, Hilary P Bagshaw, Mark K Buyyounouski, Bin Han - Department of Radiation Oncology, Stanford University, Stanford, 94305, USA
45
Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.07.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
46
Barateau A, De Crevoisier R, Largent A, Mylona E, Perichon N, Castelli J, Chajon E, Acosta O, Simon A, Nunes JC, Lafond C. Comparison of CBCT-based dose calculation methods in head and neck cancer radiotherapy: from Hounsfield unit to density calibration curve to deep learning. Med Phys 2020; 47:4683-4693. [PMID: 32654160 DOI: 10.1002/mp.14387] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Revised: 06/16/2020] [Accepted: 06/23/2020] [Indexed: 01/26/2023] Open
Abstract
PURPOSE Anatomical variations occur during head and neck (H&N) radiotherapy treatment. kV cone-beam computed tomography (CBCT) images can be used for daily dose monitoring to assess dose variations owing to anatomic changes. Deep learning methods (DLMs) have recently been proposed to generate pseudo-CT (pCT) from CBCT to perform dose calculation. This study aims to evaluate the accuracy of a DLM and to compare it with three existing methods of dose calculation from CBCT in H&N cancer radiotherapy. METHODS Forty-four patients received VMAT for H&N cancer (70-63-56 Gy). For each patient, reference CT (Bigbore, Philips) and CBCT images (XVI, Elekta) were acquired. The DLM was based on a generative adversarial network. The three compared methods were: (a) a method using a density to Hounsfield unit (HU) relation from a phantom CBCT image (HU-D curve method), (b) a water-air-bone density assignment method (DAM), and (c) a method using deformable image registration (DIR). The imaging endpoints were the mean absolute error (MAE) and mean error (ME) of HU from pCT and reference CT (CTref). The dosimetric endpoints were dose discrepancies and 3D gamma analyses (local, 2%/2 mm, 30% dose threshold). Dose discrepancies were defined as the mean absolute differences between DVHs calculated from the CTref and the pCT of each method. RESULTS In the entire body, the MAEs and MEs were 82.4 and 17.1 HU for the DLM, 266.6 and 208.9 HU for the HU-D curve method, 113.2 and 14.2 HU for the DAM, and 95.5 and -36.6 HU for the DIR method. The MAE obtained using the DLM differed significantly from those of the other methods (Wilcoxon, P ≤ 0.05). The DLM dose discrepancies were 7 ± 8 cGy (maximum = 44 cGy) for the ipsilateral parotid gland mean dose (Dmean) and 5 ± 6 cGy (maximum = 26 cGy) for the contralateral parotid gland Dmean. For the parotid gland Dmean, no significant dose difference was observed between the DLM and the other methods.
The mean 3D gamma pass rate ± standard deviation was 98.1 ± 1.2% for the DLM, 91.0 ± 5.3% for the HU-D method, 97.9 ± 1.6% for the DAM, and 98.8 ± 0.7% for the DIR method. The gamma pass rates and mean gamma results of the HU-D curve method, DAM, and DIR method differed significantly from those of the DLM. CONCLUSIONS For H&N radiotherapy, the DIR method and the DLM appear to be the most appealing CBCT-based dose calculation methods among the four in terms of both dose accuracy and calculation time. Using the DIR method or the DLM with CBCT images enables dose monitoring in the parotid glands during the treatment course and may be used to trigger replanning.
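The MAE and ME imaging endpoints reported above are simple voxel-wise statistics between the pseudo-CT and the reference CT; a hedged sketch (our illustration, assuming co-registered HU volumes of equal shape):

```python
import numpy as np

# Illustrative sketch (ours): the MAE and ME imaging endpoints between a
# pseudo-CT and the reference CT, assuming co-registered HU volumes.

def mae_me(pct: np.ndarray, ct_ref: np.ndarray):
    """Return (mean absolute error, mean error) in HU over all voxels."""
    diff = pct.astype(float) - ct_ref.astype(float)
    return float(np.mean(np.abs(diff))), float(np.mean(diff))
```

Comparing the two endpoints is informative: a small ME with a large MAE (as for the DAM above) indicates errors that cancel on average but are large voxel by voxel.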
Affiliation(s)
- Anaïs Barateau, Renaud De Crevoisier, Axel Largent, Eugenia Mylona, Nicolas Perichon, Joël Castelli, Enrique Chajon, Oscar Acosta, Antoine Simon, Jean-Claude Nunes, Caroline Lafond - Univ. Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, F-35000, France
47
Schelb P, Wang X, Radtke JP, Wiesenfarth M, Kickingereder P, Stenzinger A, Hohenfellner M, Schlemmer HP, Maier-Hein KH, Bonekamp D. Simulated clinical deployment of fully automatic deep learning for clinical prostate MRI assessment. Eur Radiol 2020; 31:302-313. [PMID: 32767102 PMCID: PMC7755653 DOI: 10.1007/s00330-020-07086-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Revised: 06/03/2020] [Accepted: 07/20/2020] [Indexed: 01/12/2023]
Abstract
Objectives To simulate clinical deployment, evaluate performance, and establish quality assurance of a deep learning algorithm (U-Net) for detection, localization, and segmentation of clinically significant prostate cancer (sPC), ISUP grade group ≥ 2, using bi-parametric MRI. Methods In 2017, 284 consecutive men in active surveillance, biopsy-naïve or pre-biopsied, received targeted and extended systematic MRI/transrectal US-fusion biopsy, after examination on a single MRI scanner (3 T). A prospective adjustment scheme was evaluated comparing the performance of the Prostate Imaging Reporting and Data System (PI-RADS) and U-Net using sensitivity, specificity, predictive values, and the Dice coefficient. Results In the 259 eligible men (median 64 [IQR 61–72] years), PI-RADS had a sensitivity of 98% [106/108]/84% [91/108] with a specificity of 17% [25/151]/58% [88/151], for thresholds at ≥ 3/≥ 4 respectively. U-Net using dynamic threshold adjustment had a sensitivity of 99% [107/108]/83% [90/108] (p > 0.99/> 0.99) with a specificity of 24% [36/151]/55% [83/151] (p > 0.99/> 0.99) for probability thresholds d3 and d4 emulating PI-RADS ≥ 3 and ≥ 4 decisions respectively, not statistically different from PI-RADS. Co-occurrence of a radiological PI-RADS ≥ 4 examination and U-Net ≥ d3 assessment significantly improved the positive predictive value from 59 to 63% (p = 0.03), on a per-patient basis. Conclusions U-Net has similar performance to PI-RADS in simulated continued clinical use. Regular quality assurance should be implemented to ensure desired performance. Key Points • U-Net maintained similar diagnostic performance compared to radiological assessment of PI-RADS ≥ 4 when applied in a simulated clinical deployment. • Application of our proposed prospective dynamic calibration method successfully adjusted U-Net performance within acceptable limits of the PI-RADS reference over time, while not being limited to PI-RADS as a reference. 
• Simultaneous detection by U-Net and radiological assessment significantly improved the positive predictive value on a per-patient and per-lesion basis, while the negative predictive value remained unchanged. Electronic supplementary material The online version of this article (10.1007/s00330-020-07086-z) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Patrick Schelb - Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Xianfeng Wang - Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Radiology, Affiliated Hospital of Guilin Medical University, Guilin, Guangxi, People's Republic of China
- Jan Philipp Radtke - Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Manuel Wiesenfarth - Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Philipp Kickingereder - Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
- Albrecht Stenzinger - Institute of Pathology, University of Heidelberg Medical Center, Heidelberg, Germany
- Markus Hohenfellner - Department of Urology, University of Heidelberg Medical Center, Heidelberg, Germany
- Heinz-Peter Schlemmer - Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; German Cancer Consortium (DKTK), Heidelberg, Germany
- Klaus H Maier-Hein - German Cancer Consortium (DKTK), Heidelberg, Germany; Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- David Bonekamp - Division of Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany; German Cancer Consortium (DKTK), Heidelberg, Germany
48
Using decision curve analysis to benchmark performance of a magnetic resonance imaging-based deep learning model for prostate cancer risk assessment. Eur Radiol 2020; 30:6867-6876. [PMID: 32591889 DOI: 10.1007/s00330-020-07030-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 06/10/2020] [Indexed: 10/24/2022]
Abstract
OBJECTIVES To benchmark the performance of a calibrated 3D convolutional neural network (CNN) applied to multiparametric MRI (mpMRI) for risk assessment of clinically significant prostate cancer (csPCa) using decision curve analysis (DCA). METHODS We retrospectively analyzed 499 patients who had positive mpMRI (PI-RADSv2 ≥ 3) and MRI-targeted biopsy. The training cohort comprised 449 men, including a calibration set of 50 men. Biopsy decision strategies included using risk estimates from the CNN (original and calibrated), performing biopsy only in men with PI-RADSv2 ≥ 4, or additionally in men with PI-RADSv2 3 and PSA density (PSAd) ≥ 0.15 ng/ml/ml. Discrimination, calibration, and clinical usefulness in the unseen test cohort (n = 50) were assessed using the C-statistic, calibration plots, and DCA, respectively. RESULTS The calibrated CNN achieved moderate calibration (Hosmer-Lemeshow calibration test, p = 0.41) and good discrimination (C = 0.85). DCA revealed consistently higher net benefit and net reduction in biopsies for the calibrated CNN compared with the original CNN, PI-RADSv2 ≥ 4, and the combined strategy of PI-RADSv2 and PSAd. Original CNN predictions were severely miscalibrated (p < 0.0001), resulting in net harm compared with a 'biopsy-all' strategy. At-risk thresholds ≥ 10% using the calibrated CNN and the combined strategy reduced the number of biopsies by an estimated 201 and 55 men, respectively, per 1000 men at risk, without missing csPCa, while the original CNN and PI-RADSv2 ≥ 4 could not achieve a net reduction in biopsies. CONCLUSIONS DCA revealed that our calibrated 3D-CNN resulted in fewer unnecessary biopsies compared with using PI-RADSv2 alone or in combination with PSAd. CNN calibration is important in achieving clinical utility. KEY POINTS • A 3D deep learning model applied to multiparametric MRI may help to prevent unnecessary prostate biopsies in patients eligible for MRI-targeted biopsy.
• Owing to miscalibration, original risk estimates by the deep learning model require prior calibration to enable clinical utility. • Decision curve analysis confirmed a net benefit of using our calibrated deep learning model for biopsy decisions compared with alternative strategies, including PI-RADSv2 alone and in combination with prostate-specific antigen density.
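Decision curve analysis, as used in this study, compares biopsy strategies by their net benefit at each risk threshold pt, defined as TP/n - (FP/n) * pt / (1 - pt). A small illustrative implementation (ours, not the paper's code):

```python
# Illustrative sketch (ours, not the paper's code): net benefit at a risk
# threshold pt, the quantity compared across strategies in decision curve
# analysis: NB = TP/n - (FP/n) * pt / (1 - pt).

def net_benefit(tp: int, fp: int, n: int, pt: float) -> float:
    """Net benefit of a biopsy strategy at risk threshold pt (0 < pt < 1)."""
    return tp / n - (fp / n) * pt / (1 - pt)
```

The weight pt / (1 - pt) encodes the harm of an unnecessary biopsy relative to the benefit of detecting one cancer, which is why a miscalibrated model can show net harm relative to biopsying everyone even when its discrimination is good.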
49
Hectors SJ, Said D, Gnerre J, Tewari A, Taouli B. Luminal Water Imaging: Comparison With Diffusion-Weighted Imaging (DWI) and PI-RADS for Characterization of Prostate Cancer Aggressiveness. J Magn Reson Imaging 2020; 52:271-279. [PMID: 31961049 DOI: 10.1002/jmri.27050] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Revised: 12/14/2019] [Accepted: 12/16/2019] [Indexed: 12/19/2022] Open
Abstract
BACKGROUND Luminal water imaging (LWI), a multicomponent T2 mapping technique, has shown promise for prostate cancer (PCa) detection and characterization. PURPOSE To 1) quantify LWI parameters and apparent diffusion coefficient (ADC) in PCa and benign peripheral zone (PZ) tissues; and 2) evaluate the diagnostic performance of LWI, ADC, and PI-RADS parameters for differentiation between low- and high-grade PCa lesions. STUDY TYPE Prospective. SUBJECTS Twenty-six PCa patients undergoing prostatectomy (mean age 59 years, range 46-72 years). FIELD STRENGTH/SEQUENCE Multiparametric MRI at 3.0T, including diffusion-weighted imaging (DWI) and LWI T2 mapping. ASSESSMENT LWI parameters and ADC were quantified in index PCa lesions and benign PZ. STATISTICAL TESTS Differences in MRI parameters between PCa and benign PZ were assessed using Wilcoxon signed-rank tests. Spearman correlation of pathological grade group (GG) with LWI parameters, ADC, and PI-RADS was evaluated. The utility of each parameter for differentiation between low-grade (GG ≤ 2) and high-grade (GG ≥ 3) PCa was determined by Mann-Whitney U tests and ROC analyses. RESULTS Twenty-six index lesions were analyzed (mean size 1.7 ± 0.8 cm; GG 1 [n = 1, 4%], 2 [n = 14, 54%], 3 [n = 8, 31%], 5 [n = 3, 12%]). LWI parameters and ADC both showed high diagnostic performance for differentiation between benign PZ and PCa (highest area under the curve [AUC] for the LWI parameter T2,short [AUC = 0.98, P < 0.001]). The LWI parameters luminal water fraction (LWF) and the amplitude of the long T2 component (Along) correlated significantly with GG (r = -0.441, P = 0.024 and r = -0.414, P = 0.036, respectively), while PI-RADS, ADC, and the other LWI parameters did not (P = 0.132-0.869). LWF and Along also showed significant differences between low-grade and high-grade PCa (AUC = 0.776, P = 0.008 and AUC = 0.758, P = 0.027, respectively).
Maximum diagnostic performance for discrimination of high-grade PCa was achieved by combining LWI parameters (AUC = 0.891, P = 0.001). DATA CONCLUSION LWI parameters, particularly in combination, showed superior diagnostic performance for differentiation between low-grade and high-grade PCa compared with ADC and PI-RADS assessment. J. Magn. Reson. Imaging 2020;52:271-279.
Affiliation(s)
- Stefanie J Hectors - BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiology, Weill Cornell Medicine, New York, New York, USA
- Daniela Said - BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiology, Universidad de los Andes, Santiago, Chile
- Jeffrey Gnerre - Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Ashutosh Tewari - Department of Urology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Bachir Taouli - BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
50
Deep learning for fully automated tumor segmentation and extraction of magnetic resonance radiomics features in cervical cancer. Eur Radiol 2019; 30:1297-1305. [DOI: 10.1007/s00330-019-06467-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 08/20/2019] [Accepted: 09/19/2019] [Indexed: 12/13/2022]