1. Ahmad I, Alqurashi F. Early cancer detection using deep learning and medical imaging: A survey. Crit Rev Oncol Hematol 2024;204:104528. PMID: 39413940. DOI: 10.1016/j.critrevonc.2024.104528.
Abstract
Cancer, characterized by the uncontrolled division of abnormal cells that damage body tissue, must be detected early for treatment to be effective. Medical imaging is crucial for identifying many cancers, yet its manual interpretation by radiologists is often subjective, labour-intensive, and time-consuming. There is therefore a critical need for automated decision support to enhance cancer detection and diagnosis. Previous surveys of cancer detection methods have mostly focused on specific cancers and a limited range of techniques. This study presents a comprehensive survey of cancer detection methods, reviewing 99 research articles collected from the Web of Science, IEEE, and Scopus databases and published between 2020 and 2024. The scope encompasses 12 cancer types: breast, cervical, ovarian, prostate, esophageal, liver, pancreatic, colon, lung, oral, brain, and skin. The study discusses the components of cancer detection pipelines, including medical imaging data, image preprocessing, segmentation, feature extraction, deep learning and transfer learning methods, and evaluation metrics. We then summarise the datasets and techniques together with research challenges and limitations, and conclude with future directions for enhancing cancer detection techniques.
Affiliation(s)
- Istiak Ahmad
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Information and Communication Technology, Griffith University, Queensland 4111, Australia.
- Fahad Alqurashi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2. Magoulianitis V, Yang J, Yang Y, Xue J, Kaneko M, Cacciamani G, Abreu A, Duddalwar V, Kuo CCJ, Gill IS, Nikias C. PCa-RadHop: A transparent and lightweight feed-forward method for clinically significant prostate cancer segmentation. Comput Med Imaging Graph 2024;116:102408. PMID: 38908295. DOI: 10.1016/j.compmedimag.2024.102408.
Abstract
Prostate cancer is one of the most frequently occurring cancers in men, with a low survival rate if it is not diagnosed early. PI-RADS reading has a high false positive rate, which increases diagnostic costs and patient discomfort. Deep learning (DL) models achieve high segmentation performance but require a large model size and high complexity. DL models also lack feature interpretability and are perceived as "black boxes" in the medical field. The PCa-RadHop pipeline proposed in this work aims to provide a more transparent feature extraction process using a linear model. It adopts the recently introduced Green Learning (GL) paradigm, which offers a small model size and low complexity. PCa-RadHop consists of two stages: stage-1 extracts data-driven radiomics features from the bi-parametric magnetic resonance imaging (bp-MRI) input and predicts an initial heatmap. To reduce the false positive rate, a subsequent stage-2 refines the predictions by including more contextual information and radiomics features from each already detected region of interest (ROI). Experiments on the largest publicly available dataset, PI-CAI, show that the proposed method is competitive with other DL models, achieving an area under the curve (AUC) of 0.807 on a cohort of 1,000 patients. Moreover, PCa-RadHop maintains a model size and complexity that are orders of magnitude smaller.
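The AUC figure reported above can be computed without any ML library: it equals the Mann-Whitney U statistic, i.e. the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal stdlib Python sketch (not the authors' code; the `auc` helper and the sample scores are illustrative):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs the model ranks
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative lesion scores: two ranking errors out of four pairs.
print(auc([0.9, 0.8, 0.4, 0.3], [1, 0, 0, 1]))  # 0.5
```

This pairwise definition is exactly what a threshold-sweep ROC integration converges to, which is why AUC is a ranking metric rather than a calibration metric.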
Affiliation(s)
- Vasileios Magoulianitis
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA.
- Jiaxin Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Yijing Yang
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Jintang Xue
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Masatomo Kaneko
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Giovanni Cacciamani
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Andre Abreu
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Vinay Duddalwar
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA; Department of Radiology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- C-C Jay Kuo
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
- Inderbir S Gill
- Department of Urology, Keck School of Medicine, University of Southern California (USC), 1975 Zonal Ave., Los Angeles, 90033, CA, USA
- Chrysostomos Nikias
- Electrical and Computer Engineering Department, University of Southern California (USC), 3740 McClintock Ave., Los Angeles, 90089, CA, USA
3. Rippa M, Schulze R, Kenyon G, Himstedt M, Kwiatkowski M, Grobholz R, Wyler S, Cornelius A, Schindera S, Burn F. Evaluation of Machine Learning Classification Models for False-Positive Reduction in Prostate Cancer Detection Using MRI Data. Diagnostics (Basel) 2024;14:1677. PMID: 39125553. PMCID: PMC11311676. DOI: 10.3390/diagnostics14151677.
Abstract
In this work, several machine learning (ML) algorithms, spanning both classical ML and modern deep learning, were investigated for their ability to improve the performance of a pipeline for the segmentation and classification of prostate lesions in MRI data. The algorithms performed a binary classification of benign and malignant tissue visible in MRI sequences. The classical models include support vector machines (SVMs), random decision forests (RDFs), and multi-layer perceptrons (MLPs), operating on radiomic features reduced by PCA or mRMR feature selection. Modern CNN-based architectures, such as ConvNeXt, ConvNet, and ResNet, were also evaluated in various setups, including transfer learning. To optimize performance, the different approaches were compared and applied to whole images as well as to gland, peripheral zone (PZ), and lesion segmentations. The contribution of this study is an exhaustive examination of how these ML approaches perform in prostate cancer (PCa) diagnosis pipelines, yielding insights into their applicability in this context. The outcome is a recommendation for which machine learning model or family of models is best suited to optimize an existing pipeline when the model is applied as an upstream filter.
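Of the two feature-reduction steps mentioned, mRMR (minimum redundancy, maximum relevance) is the less familiar one: it greedily adds the feature most informative about the label while penalizing overlap with features already chosen. A toy stdlib Python sketch of the criterion, not the study's implementation (the column-wise feature layout, first-wins tie-breaking, and the plug-in discrete mutual-information estimate are simplifying assumptions):

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    cx, cy = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (cx[x] * cy[y]))
               for (x, y), c in joint.items())

def mrmr(features, labels, k):
    """Greedy mRMR: repeatedly add the feature column maximizing
    relevance (MI with the label) minus mean redundancy (MI with the
    already-selected features)."""
    chosen, remaining = [], list(range(len(features)))
    for _ in range(min(k, len(remaining))):
        def score(i):
            rel = mutual_info(features[i], labels)
            red = (sum(mutual_info(features[i], features[j]) for j in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy data: feature 0 is predictive, feature 1 duplicates it
# (redundant), feature 2 is weakly predictive but non-redundant.
label = [0, 0, 1, 1, 0, 1, 0, 1]
f0 = [0, 0, 1, 1, 0, 1, 0, 0]
f1 = list(f0)
f2 = [0, 0, 0, 1, 1, 1, 0, 1]
print(mrmr([f0, f1, f2], label, 2))  # [0, 2] — the duplicate is skipped
```

PCA, by contrast, is unsupervised: it keeps directions of maximal variance regardless of the label, which is why the study treats the two as alternative reduction strategies.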
Affiliation(s)
- Malte Rippa
- Institute for Medical Informatics, University of Lübeck, 23562 Lübeck, Germany
- Fuse-AI GmbH, 20457 Hamburg, Germany
- Georgia Kenyon
- Australian Institute of Machine Learning, University of Adelaide, Adelaide, SA 5005, Australia
- Precision Imaging Beacon, University of Nottingham, Nottingham NG7 2RD, UK
- Marian Himstedt
- Institute for Medical Informatics, University of Lübeck, 23562 Lübeck, Germany
- Maciej Kwiatkowski
- Department of Urology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- Medical Faculty, University Hospital Basel, 4056 Basel, Switzerland
- Department of Urology, Academic Hospital Braunschweig, 38126 Brunswick, Germany
- Rainer Grobholz
- Institute of Pathology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- Medical Faculty, University of Zurich, 8032 Zurich, Switzerland
- Stephen Wyler
- Department of Urology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- Medical Faculty, University Hospital Basel, 4056 Basel, Switzerland
- Alexander Cornelius
- Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- Sebastian Schindera
- Medical Faculty, University Hospital Basel, 4056 Basel, Switzerland
- Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- Felice Burn
- Institute of Radiology, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
- AI & Data Science CoE, Cantonal Hospital Aarau, 5001 Aarau, Switzerland
4. Newson KS, Benoit DM, Beavis AW. Encoder-decoder convolutional neural network for simple CT segmentation of COVID-19 infected lungs. PeerJ Comput Sci 2024;10:e2178. PMID: 39145207. PMCID: PMC11323195. DOI: 10.7717/peerj-cs.2178.
Abstract
This work presents the application of an encoder-decoder convolutional neural network (ED-CNN) model to automatically segment COVID-19 computed tomography (CT) data. This provides an alternative to models in the current literature that is easy to follow and reproduce, making it more accessible for real-world applications, as little training is required to use it. Our simple approach achieves results comparable to those of previously published studies that use more complex deep-learning networks. We demonstrate high-quality automated segmentation of thoracic CT scans that correctly delineates the infected regions of the lungs. In contrast to manual contouring, a time-consuming process in which a professional contours each patient's scan one by one for later checking by another professional, this automation can speed up the contouring process: it can check manual contours in place of a peer check when one is not possible, or give a rapid indication of infection for referral to further treatment, saving time and resources. The proposed model uses approximately 49 k parameters, while comparable models average over 1,000 times more. Because the approach relies on a very compact model, training times are short, which makes it possible to retrain the model easily on other data and potentially support "personalised medicine" workflows. The model achieves similarity scores of specificity (Sp) = 0.996 ± 0.001, accuracy (Acc) = 0.994 ± 0.002 and mean absolute error (MAE) = 0.0075 ± 0.0005.
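The three similarity scores quoted are simple voxel-wise counts over the predicted and ground-truth masks. A minimal Python sketch, not the paper's code (the flattened 0/1 mask representation is an assumption):

```python
def seg_metrics(pred, truth):
    """Specificity, accuracy, and mean absolute error for a binary
    segmentation, given flattened 0/1 masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    sp = tn / (tn + fp)              # true-negative rate
    acc = (tp + tn) / len(truth)     # fraction of voxels correct
    mae = sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)
    return sp, acc, mae
```

Note that for lung CT, where infected voxels are a small minority, specificity and accuracy are dominated by the easy background class, which is why segmentation papers usually report them alongside overlap measures such as Dice.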
Affiliation(s)
- Kiri S. Newson
- Department of Physics and Mathematics, University of Hull, Hull, United Kingdom
- David M. Benoit
- E. A. Milne Centre for Astrophysics, Department of Physics and Mathematics, University of Hull, Hull, United Kingdom
- Andrew W. Beavis
- Medical Physics Department, Queen’s Centre for Oncology, Hull University Teaching Hospitals NHS Trust, Cottingham, Hull, United Kingdom
- Medical Physics and Biomedical Engineering, University College London, University of London, London, United Kingdom
- Hull York Medical School, University of Hull, Hull, United Kingdom
5. Riaz IB, Harmon S, Chen Z, Naqvi SAA, Cheng L. Applications of Artificial Intelligence in Prostate Cancer Care: A Path to Enhanced Efficiency and Outcomes. Am Soc Clin Oncol Educ Book 2024;44:e438516. PMID: 38935882. DOI: 10.1200/edbk_438516.
Abstract
The landscape of prostate cancer care has rapidly evolved. We have transitioned from conventional imaging, radical surgeries, and single-agent androgen deprivation therapy to an era of advanced imaging, precision diagnostics, genomics, and targeted treatment options. Concurrently, the emergence of large language models (LLMs) has dramatically transformed the paradigm for artificial intelligence (AI). This convergence of advancements in prostate cancer management and AI provides a compelling rationale for a comprehensive review of the current state of AI applications in prostate cancer care. Here, we review advancements in AI-driven applications across the continuum of the prostate cancer patient journey, from early interception to survivorship care. We then discuss the role of AI in prostate cancer drug discovery, clinical trials, and clinical practice guidelines. In the localized disease setting, deep learning models have demonstrated impressive performance in detecting and grading prostate cancer using imaging and pathology data. For biochemically recurrent disease, machine learning approaches are being tested for improved risk stratification and treatment decisions. In advanced prostate cancer, deep learning can potentially improve prognostication and assist in clinical decision making. Furthermore, LLMs are poised to revolutionize information summarization and extraction, clinical trial design and operations, drug development, evidence synthesis, and clinical practice guidelines. Synergistic multimodal data integration and human-AI collaboration are emerging as key strategies to unlock the full potential of AI in prostate cancer care.
Affiliation(s)
- Irbaz Bin Riaz
- Division of Hematology and Oncology, Department of Internal Medicine, Mayo Clinic, Phoenix, AZ
- Department of AI and Informatics, Mayo Clinic, Rochester, MN
- Stephanie Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD
- Zhijun Chen
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD
- Liang Cheng
- Department of Pathology and Laboratory Medicine, Department of Surgery (Urology), Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI
6. Jin L, Yu Z, Gao F, Li M. T2-weighted imaging-based deep-learning method for noninvasive prostate cancer detection and Gleason grade prediction: a multicenter study. Insights Imaging 2024;15:111. PMID: 38713377. PMCID: PMC11076444. DOI: 10.1186/s13244-024-01682-z.
Abstract
OBJECTIVES To noninvasively detect prostate cancer and predict the Gleason grade using single-modality T2-weighted imaging with a deep-learning approach. METHODS Patients with prostate cancer, confirmed by histopathology, who underwent magnetic resonance imaging examinations at our hospital during September 2015-June 2022 were retrospectively included in an internal dataset. An external dataset from another medical center and a public challenge dataset were used for external validation. A deep-learning approach was designed for prostate cancer detection and Gleason grade prediction. The area under the curve (AUC) was calculated to compare the model performance. RESULTS For prostate cancer detection, the internal datasets comprised data from 195 healthy individuals (age: 57.27 ± 14.45 years) and 302 patients (age: 72.20 ± 8.34 years) diagnosed with prostate cancer. The AUC of our model for prostate cancer detection in the validation set (n = 96, 19.7%) was 0.918. For Gleason grade prediction, datasets comprising data from 283 of 302 patients with prostate cancer were used, with 227 (age: 72.06 ± 7.98 years) and 56 (age: 72.78 ± 9.49 years) patients being used for training and testing, respectively. The external and public challenge datasets comprised data from 48 (age: 72.19 ± 7.81 years) and 91 patients (unavailable information on age), respectively. The AUC of our model for Gleason grade prediction in the training set (n = 227) was 0.902, whereas those of the validation (n = 56), external validation (n = 48), and public challenge validation sets (n = 91) were 0.854, 0.776, and 0.838, respectively. CONCLUSION Through multicenter dataset validation, our proposed deep-learning method could detect prostate cancer and predict the Gleason grade better than human experts. CRITICAL RELEVANCE STATEMENT Precise prostate cancer detection and Gleason grade prediction have great significance for clinical treatment and decision making. 
KEY POINTS Prostate segmentation is easier for radiologists to annotate than prostate cancer lesions. Our deep-learning method detected prostate cancer and predicted the Gleason grade, outperforming human experts. Noninvasive Gleason grade prediction can reduce the number of unnecessary biopsies.
Affiliation(s)
- Liang Jin
- Radiology Department, Huashan Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Zhuo Yu
- School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, China
- Feng Gao
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Ming Li
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai, 200040, China
- Institute of Functional and Molecular Medical Imaging, Shanghai, 200040, China
7. Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024;51:131-161. PMID: 37945098. DOI: 10.1016/j.ucl.2023.08.003.
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source datasets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 datasets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
8. Mehmood M, Abbasi SH, Aurangzeb K, Majeed MF, Anwar MS, Alhussein M. A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI. Front Oncol 2023;13:1225490. PMID: 38023149. PMCID: PMC10666634. DOI: 10.3389/fonc.2023.1225490.
Abstract
Prostate cancer (PCa) is a major global concern, particularly for men, emphasizing the urgency of early detection to reduce mortality. As the second leading cause of cancer-related male deaths worldwide, PCa demands precise and efficient diagnostic methods. Given the high-resolution, multi-sequence MRI used in PCa assessment, computer-aided diagnostic (CAD) methods have emerged to assist radiologists in identifying anomalies. The rapid advancement of medical technology has further driven the adoption of deep learning methods, which enhance diagnostic efficiency, reduce observer variability, and consistently outperform traditional approaches. Distinguishing aggressive from non-aggressive cancer under resource constraints remains a significant problem in PCa treatment. This study aims to identify PCa in MRI images by combining deep learning and transfer learning (TL). Researchers have explored numerous CNN-based deep learning methods for classifying MRI images related to PCa. In this study, we developed an approach for PCa classification using transfer learning on a limited number of images to achieve high performance and help radiologists identify PCa instantly. The proposed methodology adopts the EfficientNet architecture, pre-trained on the ImageNet dataset, and incorporates three branches for feature extraction from different MRI sequences. The extracted features are then combined, significantly enhancing the model's ability to distinguish MRI images accurately. Our model demonstrated remarkable results in classifying prostate cancer, achieving an accuracy of 88.89%. Furthermore, comparative results indicate that our approach achieves higher accuracy in PCa classification than both traditional hand-crafted feature techniques and existing deep learning techniques. The proposed methodology can learn more distinctive features in prostate images and correctly identify cancer.
Affiliation(s)
- Mubashar Mehmood
- Department of Computer Science, COMSATS Institute of Information Technology, Islamabad, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
9. Belue MJ, Harmon SA, Lay NS, Daryanani A, Phelps TE, Choyke PL, Turkbey B. The Low Rate of Adherence to Checklist for Artificial Intelligence in Medical Imaging Criteria Among Published Prostate MRI Artificial Intelligence Algorithms. J Am Coll Radiol 2023;20:134-145. PMID: 35922018. PMCID: PMC9887098. DOI: 10.1016/j.jacr.2022.05.022.
Abstract
OBJECTIVE To determine the rigor, generalizability, and reproducibility of published classification and detection artificial intelligence (AI) models for prostate cancer (PCa) on MRI using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines, a 42-item checklist that is considered a measure of best practice for presenting and reviewing medical imaging AI research. MATERIALS AND METHODS This review searched the English literature for studies proposing PCa AI detection and classification models on MRI. Each study was evaluated with the CLAIM checklist. Additional outcomes for which data were sought included measures of AI model performance (eg, area under the curve [AUC], sensitivity, specificity, free-response operating characteristic curves), training, validation, and testing group sample sizes, AI approach, detection versus classification AI, public data set utilization, MRI sequences used, and definition of the gold standard for ground truth. The percentage of CLAIM checklist fulfillment was used to stratify studies into quartiles. Wilcoxon's rank-sum test was used for pair-wise comparisons. RESULTS In all, 75 studies were identified, and 53 studies qualified for analysis. The original CLAIM items that most studies did not fulfill include item 12 (77% no): de-identification methods; item 13 (68% no): handling missing data; item 15 (47% no): rationale for choosing the ground truth reference standard; item 18 (55% no): measurements of inter- and intrareader variability; item 31 (60% no): inclusion of validated interpretability maps; and item 37 (92% no): inclusion of failure analysis to elucidate AI model weaknesses. Comparing AUC across CLAIM-fulfillment quartiles revealed significant differences in mean AUC between quartile 1 and quartile 2 (0.78 versus 0.86, P = .034) and between quartile 1 and quartile 4 (0.78 versus 0.89, P = .003). Based on additional information and outcome metrics gathered in this study, additional measures of best practice are defined. These new items include disclosure of public dataset usage, definition of ground truth in comparison to other referenced works in the defined task, and sample size power calculation. CONCLUSION A large proportion of AI studies do not fulfill key items in the CLAIM guidelines within their methods and results sections. The percentage of CLAIM checklist fulfillment is weakly associated with improved AI model performance. Additions or supplementations to CLAIM are recommended to improve publishing standards and aid reviewers in determining study rigor.
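The quartile stratification used in the analysis above is straightforward to reproduce: score each study as the percentage of applicable CLAIM items it fulfills, then split the sorted cohort into four equal groups. A stdlib Python sketch (the dict-based study records and `None`-for-not-applicable convention are illustrative assumptions, not the paper's data format):

```python
def fulfillment_pct(items):
    """Percent of applicable checklist items fulfilled; `items` maps a
    CLAIM item number -> True/False, or None for not applicable."""
    applicable = [v for v in items.values() if v is not None]
    return 100.0 * sum(applicable) / len(applicable)

def quartile_groups(studies):
    """Sort studies by fulfillment percentage and split them into four
    groups; group 0 (quartile 1) holds the lowest-scoring studies."""
    ranked = sorted(studies, key=fulfillment_pct)
    n = len(ranked)
    return [ranked[i * n // 4:(i + 1) * n // 4] for i in range(4)]

# Four hypothetical studies with 0%, 50%, 100%, 100% fulfillment.
cohort = [{1: False}, {1: True, 2: False}, {1: True}, {1: True, 2: True}]
print([fulfillment_pct(s) for s in quartile_groups(cohort)[0]])  # [0.0]
```

The per-quartile mean AUCs could then be compared pairwise, as the study does with Wilcoxon's rank-sum test.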
Affiliation(s)
- Mason J Belue
- Medical Research Scholars Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Stephanie A Harmon
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Nathan S Lay
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Asha Daryanani
- Intramural Research Training Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Tim E Phelps
- Postdoctoral Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Peter L Choyke
- Artificial Intelligence Resource, Chief of Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Baris Turkbey
- Senior Clinician/Director, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland.
10
Hu J, Shen A, Qiao X, Zhou Z, Qian X, Zheng Y, Bao J, Wang X, Dai Y. Dual attention guided multiscale neural network trained with curriculum learning for noninvasive prediction of Gleason Grade Group from MRI. Med Phys 2022; 50:2279-2289. [PMID: 36412164] [DOI: 10.1002/mp.16102]
Abstract
BACKGROUND The Gleason Grade Group (GG) is essential in assessing the malignancy of prostate cancer (PCa) and is typically obtained by invasive biopsy procedures in which sampling errors could lead to inaccurately scored GGs. With the gradually recognized value of bi-parametric magnetic resonance imaging (bpMRI) in PCa, it is beneficial to noninvasively predict GGs from bpMRI for early diagnosis and treatment planning of PCa. However, it is challenging to establish the connection between bpMRI features and GGs. PURPOSE In this study, we propose a dual attention-guided multiscale neural network (DAMS-Net) to predict the 5-scored GG from bpMRI and design a training curriculum to further improve the prediction performance. METHODS The proposed DAMS-Net incorporates a feature pyramid network (FPN) to fully extract the multiscale features for lesions of varying sizes and a dual attention module to focus on lesion and surrounding regions while avoiding the influence of irrelevant ones. Furthermore, to enhance the differential ability for lesions with inter-grade similarity and intra-grade variation in bpMRI, the training process employs a specially designed curriculum based on the differences between the radiological evaluations and the ground truth GGs. RESULTS Extensive experiments were conducted on a private dataset of 382 patients and the public PROSTATEx-2 dataset. For the private dataset, the experimental results showed that the proposed network performed better than the plain baseline model for GG prediction, achieving a mean quadratic weighted Kappa (Kw) of 0.4902 and a mean positive predictive value for predicting clinically significant cancer (PPVGG>1) of 0.9098. With the application of curriculum learning, the mean Kw and PPVGG>1 further increased to 0.5144 and 0.9118, respectively. For the public dataset, the proposed method achieved state-of-the-art results of 0.5413 Kw and 0.9747 PPVGG>1.
CONCLUSION The proposed DAMS-Net trained with curriculum learning can effectively predict GGs from bpMRI, which may assist clinicians in early diagnosis and treatment planning for PCa patients.
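The headline metric above, the quadratic weighted Kappa (Kw), penalizes ordinal GG misclassifications by the squared distance between predicted and true grades. A self-contained NumPy sketch of the standard computation, with illustrative labels rather than the study's data:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa for ordinal labels 0..n_classes-1."""
    # Observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: w_ij = (i - j)^2 / (n - 1)^2
    idx = np.arange(n_classes)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected confusion under chance agreement, preserving the marginals
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# Illustrative 5-class Grade Group labels (0 = GG1 ... 4 = GG5)
y_true = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
y_pred = [0, 1, 1, 2, 2, 3, 3, 4, 4, 4]
kw = quadratic_weighted_kappa(y_true, y_pred, 5)
```

Perfect agreement yields Kw = 1, chance-level agreement yields Kw near 0, and near-miss errors (adjacent grades) are penalized far less than distant ones.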
Affiliation(s)
- Jisu Hu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Ao Shen
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Xiaomeng Qiao
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Zhiyong Zhou
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Xusheng Qian
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Suzhou, Jiangsu, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Yi Zheng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
- Jie Bao
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Ximing Wang
- Department of Radiology, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Yakang Dai
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
11
Saliency Transfer Learning and Central-Cropping Network for Prostate Cancer Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10999-z]
12
Jaen-Lorites JM, Ruiz-Espana S, Pineiro-Vidal T, Santabarbara JM, Maceira AM, Moratal D. Multiclass Classification of Prostate Tumors Following an MR Image Analysis-Based Radiomics Approach. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1436-1439. [PMID: 36086478] [DOI: 10.1109/embc48229.2022.9871746]
Abstract
Prostate cancer is one of the most common cancers in men, with symptoms that may be confused with those caused by benign prostatic hyperplasia. One of the key aspects of treating prostate cancer is its early detection, which increases life expectancy and improves patients' quality of life. However, the tests performed are often invasive, typically culminating in a biopsy. A non-invasive alternative is the magnetic resonance imaging (MRI)-based PI-RADS v2 classification. The aim of this work was to find objective biomarkers that allow the PI-RADS classification of prostate lesions using a radiomics approach on multiparametric MRI. A total of 90 subjects were analyzed. From each segmented lesion, 609 different texture features were extracted using five different statistical methods. Two feature selection methods and eight multiclass predictive models were evaluated. This was a multiclass study in which the best AUC result was 0.7442 ± 0.0880, achieved with the Naïve Bayes model using a subset of 120 features. Valuable results were also obtained with the Random Forests model, which reached an AUC of 0.7394 ± 0.0965 with a smaller number of features (52). Clinical Relevance: This study establishes a methodology for classifying prostate cancer and supporting clinical decision-making in a fast and efficient manner, avoiding additional invasive procedures, using MRI.
13
Keenan KE, Delfino JG, Jordanova KV, Poorman ME, Chirra P, Chaudhari AS, Baessler B, Winfield J, Viswanath SE, deSouza NM. Challenges in ensuring the generalizability of image quantitation methods for MRI. Med Phys 2022; 49:2820-2835. [PMID: 34455593] [PMCID: PMC8882689] [DOI: 10.1002/mp.15195]
Abstract
Image quantitation methods including quantitative MRI, multiparametric MRI, and radiomics offer great promise for clinical use. However, many of these methods have limited clinical adoption, in part due to issues of generalizability, that is, the ability to translate methods and models across institutions. Researchers can assess generalizability through measurement of repeatability and reproducibility, thus quantifying different aspects of measurement variance. In this article, we review the challenges to ensuring repeatability and reproducibility of image quantitation methods as well as present strategies to minimize their variance to enable wider clinical implementation. We present possible solutions for achieving clinically acceptable performance of image quantitation methods and briefly discuss the impact of minimizing variance and achieving generalizability towards clinical implementation and adoption.
Affiliation(s)
- Kathryn E. Keenan
- Physical Measurement Laboratory, National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Jana G. Delfino
- Center for Devices and Radiological Health, US Food and Drug Administration, 10993 New Hampshire Ave, Silver Spring, MD 20993, USA
- Kalina V. Jordanova
- Physical Measurement Laboratory, National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Megan E. Poorman
- Physical Measurement Laboratory, National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA
- Prathyush Chirra
- Dept of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106, USA
- Akshay S. Chaudhari
- Department of Radiology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
- Department of Biomedical Data Science, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
- Bettina Baessler
- University Hospital of Zurich and University of Zurich, Raemistrasse 100, 8091 Zurich, Switzerland
- Jessica Winfield
- Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Road, London, SW7 3RP, UK
- MRI Unit, Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey, SM2 5PT, UK
- Satish E. Viswanath
- Dept of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106, USA
- Nandita M. deSouza
- Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Road, London, SW7 3RP, UK
- MRI Unit, Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey, SM2 5PT, UK
14
SVseg: Stacked Sparse Autoencoder-Based Patch Classification Modeling for Vertebrae Segmentation. Mathematics 2022. [DOI: 10.3390/math10050796]
Abstract
Precise vertebrae segmentation is essential for the image-related analysis of spine pathologies such as vertebral compression fractures and other abnormalities, as well as for clinical diagnostic treatment and surgical planning. An automatic and objective system for vertebra segmentation is required, but its development is likely to run into difficulties such as low segmentation accuracy and the requirement of prior knowledge or human intervention. Recently, vertebral segmentation methods have focused on deep learning-based techniques. To mitigate the challenges involved, we propose deep learning primitives and stacked Sparse autoencoder-based patch classification modeling for Vertebrae segmentation (SVseg) from Computed Tomography (CT) images. After data preprocessing, we extract overlapping patches from CT images as input to train the model. The stacked sparse autoencoder learns high-level features from unlabeled image patches in an unsupervised way. Furthermore, we employ supervised learning to refine the feature representation to improve the discriminability of learned features. These high-level features are fed into a logistic regression classifier to fine-tune the model. A sigmoid classifier is added to the network to discriminate the vertebrae patches from non-vertebrae patches by selecting the class with the highest probabilities. We validated our proposed SVseg model on the publicly available MICCAI Computational Spine Imaging (CSI) dataset. After configuration optimization, our proposed SVseg model achieved impressive performance, with 87.39% in Dice Similarity Coefficient (DSC), 77.60% in Jaccard Similarity Coefficient (JSC), 91.53% in precision (PRE), and 90.88% in sensitivity (SEN). The experimental results demonstrated the method’s efficiency and significant potential for diagnosing and treating clinical spinal diseases.
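The Dice (DSC) and Jaccard (JSC) coefficients reported above can be computed directly from binary segmentation masks. A minimal NumPy sketch with toy masks (hypothetical data, for illustration only):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice (DSC) and Jaccard (JSC) coefficients for boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()   # |A ∩ B|
    union = np.logical_or(pred, gt).sum()    # |A ∪ B|
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    jsc = inter / union
    return dsc, jsc

# Toy 2x2 masks: exactly one pixel overlaps
dsc, jsc = dice_jaccard([[1, 1], [0, 0]], [[1, 0], [1, 0]])
```

The two metrics are monotonically related (DSC = 2·JSC / (1 + JSC)), so they rank segmentations identically; papers typically report both for comparability.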
15
Rouvière O, Souchon R, Lartizien C, Mansuy A, Magaud L, Colom M, Dubreuil-Chambardel M, Debeer S, Jaouen T, Duran A, Rippert P, Riche B, Monini C, Vlaeminck-Guillem V, Haesebaert J, Rabilloud M, Crouzet S. Detection of ISUP ≥2 prostate cancers using multiparametric MRI: prospective multicentre assessment of the non-inferiority of an artificial intelligence system as compared to the PI-RADS V.2.1 score (CHANGE study). BMJ Open 2022; 12:e051274. [PMID: 35140147] [PMCID: PMC8830410] [DOI: 10.1136/bmjopen-2021-051274]
Abstract
INTRODUCTION Prostate multiparametric MRI (mpMRI) has shown good sensitivity in detecting cancers with an International Society of Urological Pathology (ISUP) grade of ≥2. However, it lacks specificity, and its inter-reader reproducibility remains moderate. Biomarkers, such as the Prostate Health Index (PHI), may help select patients for prostate biopsy. Computer-aided diagnosis/detection (CAD) systems may also improve mpMRI interpretation. Different prototypes of CAD systems are currently developed under the Recherche Hospitalo-Universitaire en Santé / Personalized Focused Ultrasound Surgery of Localized Prostate Cancer (RHU PERFUSE) research programme, tackling challenging issues such as robustness across imaging protocols and magnetic resonance (MR) vendors, and the ability to characterise cancer aggressiveness. The study's primary objective is to evaluate the non-inferiority of the area under the receiver operating characteristic curve of the final CAD system as compared with the Prostate Imaging-Reporting and Data System V.2.1 (PI-RADS V.2.1) in predicting the presence of ISUP ≥2 prostate cancer in patients undergoing prostate biopsy. METHODS This prospective, multicentre, non-inferiority trial will include 420 men with suspected prostate cancer, a prostate-specific antigen level of ≤30 ng/mL and a clinical stage ≤T2c. Included men will undergo prostate mpMRI that will be interpreted using the PI-RADS V.2.1 score. Then, they will undergo systematic and targeted biopsy. PHI will be assessed before biopsy. At the end of patient inclusion, MR images will be assessed by the final version of the CAD system developed under the RHU PERFUSE programme. Key secondary outcomes include the prediction of ISUP grade ≥2 prostate cancer during a 3-year follow-up, and the number of biopsy procedures saved and ISUP grade ≥2 cancers missed by several diagnostic pathways combining PHI and MRI findings.
ETHICS AND DISSEMINATION Ethical approval was obtained from the Comité de Protection des Personnes Nord Ouest III (ID-RCB: 2020-A02785-34). After publication of the results, access to MR images will be possible for testing other CAD systems. TRIAL REGISTRATION NUMBER NCT04732156.
Affiliation(s)
- Olivier Rouvière
- Université Lyon 1, Université de Lyon, Lyon, France
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- LabTau, INSERM U1032, Lyon, France
- Carole Lartizien
- CREATIS, INSERM U1294, Villeurbanne, France
- CNRS UMR 5220, INSA-Lyon, Villeurbanne, France
- Adeline Mansuy
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Laurent Magaud
- Service Recherche et Epidémiologie Cliniques, Pôle Santé Publique, Hospices Civils de Lyon, Lyon, France
- Matthieu Colom
- Direction de la Recherche Clinique et de l'Innovation, Hospices Civils de Lyon, Lyon, France
- Marine Dubreuil-Chambardel
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Sabine Debeer
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
- Audrey Duran
- CREATIS, INSERM U1294, Villeurbanne, France
- CNRS UMR 5220, INSA-Lyon, Villeurbanne, France
- Pascal Rippert
- Service Recherche et Epidémiologie Cliniques, Pôle Santé Publique, Hospices Civils de Lyon, Lyon, France
- Benjamin Riche
- Service de Biostatistique-Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, Lyon, France
- Laboratoire de Biométrie et Biologie Évolutive CNRS UMR 5558, Équipe Biostatistiques Santé, Université de Lyon, Lyon, France
- Virginie Vlaeminck-Guillem
- Université Lyon 1, Université de Lyon, Lyon, France
- Service de Biochimie et Biologie Moléculaire Sud, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, Pierre Bénite, France
- Julie Haesebaert
- Université Lyon 1, Université de Lyon, Lyon, France
- Service Recherche et Epidémiologie Cliniques, Pôle Santé Publique, Hospices Civils de Lyon, Lyon, France
- Research on Healthcare Performance (RESHAPE), INSERM U1290, Lyon, France
- Muriel Rabilloud
- Université Lyon 1, Université de Lyon, Lyon, France
- Service de Biostatistique-Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, Lyon, France
- Laboratoire de Biométrie et Biologie Évolutive CNRS UMR 5558, Équipe Biostatistiques Santé, Université de Lyon, Lyon, France
- Sébastien Crouzet
- Université Lyon 1, Université de Lyon, Lyon, France
- LabTau, INSERM U1032, Lyon, France
- Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, Lyon, France
16
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:289. [PMID: 35204380] [PMCID: PMC8870978] [DOI: 10.3390/diagnostics12020289]
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
- Correspondence:
17
Prostate Segmentation via Dynamic Fusion Model. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-021-06502-w]
18
Duran A, Dussert G, Rouvière O, Jaouen T, Jodoin PM, Lartizien C. ProstAttention-Net: a deep attention model for prostate cancer segmentation by aggressiveness in MRI scans. Med Image Anal 2022; 77:102347. [DOI: 10.1016/j.media.2021.102347]
19
Tan XJ, Mustafa N, Mashor MY, Rahman KSA. Automated knowledge-assisted mitosis cells detection framework in breast histopathology images. Math Biosci Eng 2022; 19:1721-1745. [PMID: 35135226] [DOI: 10.3934/mbe.2022081]
Abstract
Based on the Nottingham Histopathology Grading (NHG) system, mitosis cell detection is one of the important criteria for determining the grade of breast carcinoma. It is a challenging task due to the heterogeneous microenvironment of breast histopathology images. Recognition of complex and inconsistent objects in medical images can be achieved by incorporating domain knowledge from the field of interest. In this study, the strategies of the histopathologist and a domain knowledge approach were used to guide the development of an image processing framework for automated mitosis cell detection in breast histopathology images. The detection framework starts with color normalization and hyperchromatic nucleus segmentation. Then, a knowledge-assisted false positive reduction method is proposed to eliminate false positives (i.e., non-mitosis cells). This stage aims to minimize the percentage of false positives and thus increase the F1-score. Next, feature extraction was performed, and the mitosis candidates were classified using a Support Vector Machine (SVM) classifier. For evaluation purposes, the knowledge-assisted detection framework was tested on two datasets: a custom dataset and a publicly available dataset (the MITOS dataset). The proposed knowledge-assisted false positive reduction method eliminated at least 87.1% of false positives in both datasets, yielding promising F1-scores. Experimental results demonstrate that the knowledge-assisted detection framework achieves promising F1-scores (custom dataset: 89.1%; MITOS dataset: 88.9%) and outperforms recent works.
Affiliation(s)
- Xiao Jian Tan
- Centre for Multimodal Signal Processing, Department of Electrical and Electronic Engineering, Faculty of Engineering and Technology, Tunku Abdul Rahman University College (TARUC), Jalan Genting Kelang, Setapak 53300, Kuala Lumpur, Malaysia
- Nazahah Mustafa
- Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Mohd Yusoff Mashor
- Biomedical Electronic Engineering Programme, Faculty of Electronic Engineering Technology, Universiti Malaysia Perlis (UniMAP), 02600 Arau, Perlis, Malaysia
- Khairul Shakir Ab Rahman
- Department of Pathology, Hospital Tuanku Fauziah, 01000 Jalan Tun Abdul Razak, Kangar, Perlis, Malaysia
20
Yu G, Chen Z, Wu J, Tan Y. A diagnostic prediction framework on auxiliary medical system for breast cancer in developing countries. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107459]
21
Mendes B, Domingues I, Silva A, Santos J. Prostate Cancer Aggressiveness Prediction Using CT Images. Life (Basel) 2021; 11:1164. [PMID: 34833040] [PMCID: PMC8618689] [DOI: 10.3390/life11111164]
Abstract
Prostate Cancer (PCa) is mostly asymptomatic and often painless at an early stage, requiring active surveillance screening. Transrectal Ultrasound Guided Biopsy (TRUS) is the principal method to diagnose PCa, following a histological examination that observes cell pattern irregularities and assigns the Gleason Score (GS) according to the recommended guidelines. This procedure presents sampling errors and, being invasive, may cause complications to the patients. External Beam Radiotherapy Treatment (EBRT) is presented as a curative option for localised and locally advanced disease, as a palliative option for metastatic low-volume disease, or, after prostatectomy, for prostate bed and pelvic node salvage. In the EBRT workflow, a Computed Tomography (CT) scan is performed as the basis for dose calculations and volume delineations. In this work, we evaluated the use of data-characterization algorithms (radiomics) on CT images for PCa aggressiveness assessment. The fundamental motivation relies on the wide availability of CT images and the need to provide tools to assess EBRT effectiveness. We used Pyradiomics and Local Image Features Extraction (LIFEx) to extract features and search for a radiomic signature within CT images. Finally, applying Principal Component Analysis (PCA) to the features yielded promising results.
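The PCA step described above reduces a wide radiomic feature table to a few uncorrelated components. A minimal SVD-based sketch; the matrix shape is hypothetical, standing in for features extracted per lesion by tools such as Pyradiomics:

```python
import numpy as np

def pca_project(X, n_components):
    """Project the rows of X onto the leading principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # scores in PC space

# Hypothetical radiomic feature matrix: 60 lesions x 100 features
rng = np.random.default_rng(42)
X = rng.normal(size=(60, 100))
Z = pca_project(X, n_components=5)
```

Because the singular values are returned in descending order, the first score column always carries the largest variance, which is what makes the leading components candidates for a compact radiomic signature.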
Affiliation(s)
- Bruno Mendes
- Centro de Investigação do Instituto Português de Oncologia do Porto (CI-IPOP), Grupo de Física Médica, Radiobiologia e Protecção Radiológica, 4200-072 Porto, Portugal
- Faculdade de Engenharia da Universidade do Porto (FEUP), 4200-465 Porto, Portugal
- Correspondence:
- Inês Domingues
- Centro de Investigação do Instituto Português de Oncologia do Porto (CI-IPOP), Grupo de Física Médica, Radiobiologia e Protecção Radiológica, 4200-072 Porto, Portugal
- Instituto Superior de Engenharia de Coimbra (ISEC), 3030-199 Coimbra, Portugal
- Augusto Silva
- IEETA, Universidade de Aveiro (UA), 3810-193 Aveiro, Portugal
- João Santos
- Centro de Investigação do Instituto Português de Oncologia do Porto (CI-IPOP), Grupo de Física Médica, Radiobiologia e Protecção Radiológica, 4200-072 Porto, Portugal
- Instituto de Ciências Biomédicas Abel Salazar (ICBAS), 4050-313 Porto, Portugal
22
DDV: A Taxonomy for Deep Learning Methods in Detecting Prostate Cancer. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10485-y]
23
Spohn SK, Bettermann AS, Bamberg F, Benndorf M, Mix M, Nicolay NH, Fechter T, Hölscher T, Grosu R, Chiti A, Grosu AL, Zamboglou C. Radiomics in prostate cancer imaging for a personalized treatment approach - current aspects of methodology and a systematic review on validated studies. Theranostics 2021; 11:8027-8042. [PMID: 34335978] [PMCID: PMC8315055] [DOI: 10.7150/thno.61207]
Abstract
Prostate cancer (PCa) is one of the most frequently diagnosed malignancies in men worldwide. Because of the variety of treatment options across risk groups, proper diagnosis and risk stratification are pivotal in the treatment of PCa. The development of precise medical imaging procedures, together with improvements in big data analysis, has led to the establishment of radiomics - a computer-based method of extracting and analyzing image features quantitatively. This approach bears the potential to assess and improve PCa detection, tissue characterization, and clinical outcome prediction. This article gives an overview of current methodological aspects and systematically reviews the available literature on radiomics in PCa patients, showing its potential for personalized therapy approaches. The qualitative synthesis includes all imaging modalities and focuses on validated studies, putting forward future directions.
Affiliation(s)
- Simon K.B. Spohn
- Department of Radiation Oncology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Berta-Ottenstein-Programme, Faculty of Medicine, University of Freiburg, Germany
- Alisa S. Bettermann
- Department of Radiation Oncology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Fabian Bamberg
- Department of Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Matthias Benndorf
- Department of Radiology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Michael Mix
- Department of Nuclear Medicine, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Nils H. Nicolay
- Department of Radiation Oncology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Tobias Fechter
- Department of Radiation Oncology - Division of Medical Physics, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- Tobias Hölscher
- Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden
- OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Radu Grosu
- Institute of Computer Engineering, Vienna University of Technology, Vienna, Austria
- Arturo Chiti
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, 20090 Pieve Emanuele - Milan, Italy
- IRCCS Humanitas Research Hospital, Via Manzoni 56, 20089 Rozzano - Milan, Italy
- Anca L. Grosu
- Department of Radiation Oncology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Constantinos Zamboglou
- Department of Radiation Oncology, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Germany
- German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Berta-Ottenstein-Programme, Faculty of Medicine, University of Freiburg, Germany
- German Oncology Center, European University of Cyprus, Limassol, Cyprus
24
Vente CD, Vos P, Hosseinzadeh M, Pluim J, Veta M. Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI. IEEE Trans Biomed Eng 2021; 68:374-383. [DOI: 10.1109/tbme.2020.2993528]
25
Shao L, Yan Y, Liu Z, Ye X, Xia H, Zhu X, Zhang Y, Zhang Z, Chen H, He W, Liu C, Lu M, Huang Y, Ma L, Sun K, Zhou X, Yang G, Lu J, Tian J. Radiologist-like artificial intelligence for grade group prediction of radical prostatectomy for reducing upgrading and downgrading from biopsy. Theranostics 2020; 10:10200-10212. [PMID: 32929343] [PMCID: PMC7481433] [DOI: 10.7150/thno.48706]
Abstract
Rationale: To reduce upgrading and downgrading between needle biopsy (NB) and radical prostatectomy (RP) by predicting patient-level Gleason grade groups (GGs) of RP, thereby avoiding over- and under-treatment. Methods: In this study, we retrospectively enrolled 575 patients from two medical institutions. All patients received prebiopsy magnetic resonance (MR) examinations, and pathological evaluations of NB and RP were available. A total of 12,708 slices of original male pelvic MR images (T2-weighted sequences with fat suppression, T2WI-FS) containing 5,405 slices of prostate tissue, and 2,753 tumor annotations (only T2WI-FS were annotated, using RP pathological sections as ground truth) were analyzed for the prediction of patient-level RP GGs. We present a prostate cancer (PCa) framework, PCa-GGNet, that mimics radiologist behavior based on deep reinforcement learning (DRL), and we developed and validated it in a multi-center setting. Results: The accuracy (ACC) of our model exceeded that of NB (0.815 [95% confidence interval (CI): 0.773-0.857] vs. 0.437 [95% CI: 0.335-0.539]). PCa-GGNet also scored higher (kappa value: 0.761) than NB (kappa value: 0.289). Our model significantly reduced the upgrading rate by 27.9% (P < 0.001) and the downgrading rate by 6.4% (P = 0.029). Conclusions: DRL using MRI can be applied to the prediction of patient-level RP GGs to reduce upgrading and downgrading from biopsy, potentially improving the clinical benefit of prostate cancer oncologic control.
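The kappa values quoted above (0.761 vs. 0.289) measure chance-corrected agreement between predicted and pathological grade groups. As an illustration of how that statistic is computed (not code from the paper), Cohen's kappa can be sketched in plain Python:

```python
def cohens_kappa(y_true, y_pred, labels):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    n = len(y_true)
    # observed agreement: fraction of exact matches
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # expected chance agreement from the marginal label frequencies
    pe = sum(
        (sum(t == c for t in y_true) / n) * (sum(p == c for p in y_pred) / n)
        for c in labels
    )
    return (po - pe) / (1 - pe)
```

A kappa of 1 means perfect agreement, 0 means agreement no better than chance.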
Affiliation(s)
- Lizhi Shao
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Ye Yan
- Department of Urology, Peking University Third Hospital, Beijing, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100080, China
- Xiongjun Ye
- Urology and Lithotripsy Center, Peking University People's Hospital, Beijing, China
- Haizhui Xia
- Department of Urology, Peking University Third Hospital, Beijing, China
- Xuehua Zhu
- Department of Urology, Peking University Third Hospital, Beijing, China
- Yuting Zhang
- Department of Urology, Peking University Third Hospital, Beijing, China
- Zhiying Zhang
- Department of Urology, Peking University Third Hospital, Beijing, China
- Huiying Chen
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Wei He
- Department of Radiology, Peking University Third Hospital, Beijing, China
- Cheng Liu
- Department of Urology, Peking University Third Hospital, Beijing, China
- Min Lu
- Department of Pathology, Peking University Third Hospital, Beijing, China
- Yi Huang
- Department of Urology, Peking University Third Hospital, Beijing, China
- Lulin Ma
- Department of Urology, Peking University Third Hospital, Beijing, China
- Kai Sun
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Xuezhi Zhou
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Guanyu Yang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jian Lu
- Department of Urology, Peking University Third Hospital, Beijing, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China
- Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
26
Hiremath A, Shiradkar R, Merisaari H, Prasanna P, Ettala O, Taimen P, Aronen HJ, Boström PJ, Jambor I, Madabhushi A. Test-retest repeatability of a deep learning architecture in detecting and segmenting clinically significant prostate cancer on apparent diffusion coefficient (ADC) maps. Eur Radiol 2020; 31:379-391. [PMID: 32700021] [DOI: 10.1007/s00330-020-07065-4]
Abstract
OBJECTIVES To evaluate short-term test-retest repeatability of a deep learning architecture (U-Net) in slice- and lesion-level detection and segmentation of clinically significant prostate cancer (csPCa: Gleason grade group > 1) using diffusion-weighted imaging fitted with a monoexponential function (ADCm). METHODS One hundred twelve patients with prostate cancer (PCa) underwent two prostate MRI examinations on the same day. PCa areas were annotated using whole-mount prostatectomy sections. Two U-Net-based convolutional neural networks (CNNs) were trained on three different ADCm b-value settings for (a) slice- and (b) lesion-level detection and (c) segmentation of csPCa. Short-term test-retest repeatability was estimated using the intra-class correlation coefficient (ICC(3,1)), proportionate agreement, and the Dice similarity coefficient (DSC). A 3-fold cross-validation was performed on the training set (N = 78 patients) and evaluated for performance and repeatability on the testing data (N = 34 patients). RESULTS For the three ADCm b-value settings, repeatability of mean ADCm of csPCa lesions was ICC(3,1) = 0.86-0.98. The two U-Net-based CNNs demonstrated ICC(3,1) in the range of 0.80-0.83, agreement of 66-72%, and DSC of 0.68-0.72 for slice- and lesion-level detection and segmentation of csPCa. Bland-Altman plots suggest no systematic bias between inter-scan ground truth segmentation repeatability and the segmentation repeatability of the networks. CONCLUSIONS For the three ADCm b-value settings, the two U-Net-based CNNs were repeatable for detection of csPCa at the slice level. The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
KEY POINTS • For the three ADCm b-value settings, two CNNs with U-Net-based architecture were repeatable for detection of csPCa at the slice level. • The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
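The DSC and ICC(3,1) metrics used in this study can be sketched as follows; this is an illustrative implementation of the standard formulas (Dice overlap, and ICC(3,1) in its two-way mixed, consistency, single-measurement form), not the authors' code.

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))

def icc3_1(ratings):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `ratings` is a list of n subjects, each with k repeated measurements."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    meas_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((v - grand) ** 2 for r in ratings for v in r)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)   # between subjects
    ss_meas = n * sum((m - grand) ** 2 for m in meas_means)   # between measurements
    ms_subj = ss_subj / (n - 1)
    ms_err = (ss_total - ss_subj - ss_meas) / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)
```

For test-retest analysis, k = 2 (the two same-day scans) and each subject contributes one value per scan (e.g. mean lesion ADCm).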
Affiliation(s)
- Amogh Hiremath
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Harri Merisaari
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Prateek Prasanna
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Otto Ettala
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Pekka Taimen
- Institute of Biomedicine, Department of Pathology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Medical Imaging Centre of Southwest Finland, Turku University Hospital, Turku, Finland
- Peter J Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland; Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
27
Automated Classification of Significant Prostate Cancer on MRI: A Systematic Review on the Performance of Machine Learning Applications. Cancers (Basel) 2020; 12:1606. [PMID: 32560558] [PMCID: PMC7352160] [DOI: 10.3390/cancers12061606]
Abstract
Significant prostate carcinoma (sPCa) classification based on MRI using radiomics or deep learning approaches has gained much interest, due to the potential application in assisting clinical decision-making. OBJECTIVE To systematically review the literature (i) to determine which algorithms are most frequently used for sPCa classification, (ii) to investigate whether there exists a relation between the performance and the method or the MRI sequences used, (iii) to assess what study design factors affect the performance on sPCa classification, and (iv) to determine whether performance had been evaluated in a clinical setting. METHODS The databases Embase and Ovid MEDLINE were searched for studies describing machine learning or deep learning classification methods discriminating between significant and nonsignificant PCa on multiparametric MRI that performed a valid validation procedure. Quality was assessed using the modified radiomics quality score. We computed the median area under the receiver operating characteristic curve (AUC) across methods, with the interquartile range. RESULTS From 2846 potentially relevant publications, 27 were included. The most frequently used algorithms in the literature for PCa classification are logistic regression (22%) and convolutional neural networks (CNNs) (22%). The median AUC was 0.79 (interquartile range: 0.77-0.87). No significant effect of the number of included patients, image sequences, or reference standard on the reported performance was found. Three studies described an external validation, and none of the papers described a validation in a prospective clinical trial. CONCLUSIONS To unlock the promising potential of machine and deep learning approaches, validation studies and prospective clinical studies should be performed with an established protocol to assess their added value in decision-making.
28
Deep neural network for semi-automatic classification of term and preterm uterine recordings. Artif Intell Med 2020; 105:101861. [PMID: 32505424] [DOI: 10.1016/j.artmed.2020.101861]
Abstract
Pregnancy is a complex process, and the prediction of premature birth is uncertain. Many researchers are exploring non-invasive approaches to enhance its predictability. Electrohysterography (EHG) and tocography (TOCO) are real-time, non-invasive technologies that can be employed to predict preterm birth. For this purpose, a sparse autoencoder-based deep neural network (SAE-based DNN) is developed. The network has three layers: a stacked sparse autoencoder (SSAE) with two hidden layers and a final softmax layer. To this end, the bursts of all 26 recordings of the publicly available TPEHGT DS database corresponding to uterine contraction intervals and non-contraction (dummy) intervals were manually segmented. Twenty features were extracted by two feature extraction algorithms, sample entropy and wavelet entropy. Afterwards, the SSAE network is adopted to learn high-level features from the raw features by unsupervised learning, and the softmax layer is added on top of the SSAE network for classification. To verify the effectiveness of the proposed method, this study used 10-fold cross-validation and four indicators to evaluate classification performance. Experimental results show that the deep neural network achieves a sensitivity of 98.2%, specificity of 97.74%, and accuracy of 97.9% on the public TPEHGT DS database, outperforming comparison models including deep belief networks (DBN) and the hierarchical extreme learning machine (H-ELM). Finally, the results reveal that the proposed method can be validly applied to semi-automatic identification of term and preterm uterine recordings.
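The evaluation indicators reported above include sensitivity, specificity, and accuracy, which follow directly from the confusion matrix; a minimal illustrative sketch (not the authors' code), treating label 1 as the positive (preterm) class:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),       # true positive rate
        "specificity": tn / (tn + fp),       # true negative rate
        "accuracy": (tp + tn) / len(y_true),
    }
```

In k-fold cross-validation these metrics are computed per fold and then averaged.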
29
Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. Comput Methods Programs Biomed 2020; 189:105316. [PMID: 31951873] [DOI: 10.1016/j.cmpb.2020.105316]
Abstract
Prostate cancer is today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy in which multiple imaging techniques are combined to reach an acceptable diagnostic performance. However, the reviewing, weighing and coupling of multiple images not only places an additional burden on the radiologist but also complicates the diagnostic process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnostic (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades, with special attention to the deep-learning techniques designed in the last few years. Moreover, we elaborate on and compare the methods employed to deliver the CAD output to the operator for further medical decision making.
Affiliation(s)
- Rogier R Wildeboer
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Ruud J G van Sloun
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Hessel Wijkstra
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Massimo Mischi
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
30
Stanzione A, Gambardella M, Cuocolo R, Ponsiglione A, Romeo V, Imbriaco M. Prostate MRI radiomics: A systematic review and radiomic quality score assessment. Eur J Radiol 2020; 129:109095. [PMID: 32531722] [DOI: 10.1016/j.ejrad.2020.109095]
Abstract
BACKGROUND Radiomics has the potential to further increase the value of MRI in prostate cancer management. However, implementation in clinical practice is still distant, and concerns have been raised regarding the methodological quality of radiomic studies. We therefore aimed to systematically review the literature to assess the quality of prostate MRI radiomic studies using the radiomics quality score (RQS). METHODS Multiple medical literature archives (PubMed, Web of Science and EMBASE) were searched to retrieve original investigations focused on prostate MRI radiomic approaches up to the end of June 2019. Three researchers independently assessed each paper using the RQS. Data from the most experienced researcher were used for descriptive analysis. Inter-rater reproducibility was assessed using the intraclass correlation coefficient (ICC) on the total RQS score. RESULTS 73 studies were included in the analysis. The average RQS total score was 7.93 ± 5.13 out of a maximum of 36 points, corresponding to an average of 23 ± 13%. Among the most critical items were the lack of feature robustness testing strategies and of external validation datasets. The ICC was poor to moderate, with an average value of 0.57 (95% confidence interval: 0.44-0.69). CONCLUSIONS Current studies on prostate MRI radiomics still lack the quality required to allow their introduction into clinical practice.
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
- Michele Gambardella
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
- Renato Cuocolo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
- Andrea Ponsiglione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
- Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
- Massimo Imbriaco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via Pansini 5, 80131, Naples, Italy
31
E Elahi GMM, Kalra S, Zinman L, Genge A, Korngut L, Yang YH. Texture classification of MR images of the brain in ALS using M-CoHOG: A multi-center study. Comput Med Imaging Graph 2019; 79:101659. [PMID: 31786374] [DOI: 10.1016/j.compmedimag.2019.101659]
Abstract
Gradient-based texture analysis methods have become popular in computer vision and image processing and have many applications, including medical image analysis. This motivates us to develop a texture feature extraction method to discriminate Amyotrophic Lateral Sclerosis (ALS) patients from controls. However, the lack of data in ALS research is a major constraint; it can be mitigated by using data from multiple centers. Multi-center data, in turn, introduces further challenges, such as differing scanner parameters and variation in the intensity of the medical images, which motivate the development of the proposed method. To address these challenges, we propose a gradient-based texture feature extraction method called Modified Co-occurrence Histograms of Oriented Gradients (M-CoHOG) to extract texture features from 2D Magnetic Resonance Images (MRI). We also propose a new feature-normalization technique, applied before feeding the normalized M-CoHOG features into an ensemble of classifiers, which can accommodate variation in data from different centers. ALS datasets from four different centers are used in the experiments. We analyze the classification accuracy on single-center as well as multi-center data. We observe that texture features extracted from downsampled images are more discriminative in distinguishing between patients and controls. Moreover, an ensemble of classifiers improves classification accuracy over a single classifier on multi-center data. The proposed method outperforms the state-of-the-art methods by a significant margin.
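A plain CoHOG-style feature, of which M-CoHOG is a modification, pairs quantized gradient orientations at a fixed pixel offset. The sketch below is a simplified illustration of that idea on a small 2D image (list of lists of intensities), not the authors' M-CoHOG implementation:

```python
import math

def cohog(image, n_bins=8, offset=(0, 1)):
    """Co-occurrence histogram of oriented gradients (simplified sketch).
    Quantizes the gradient orientation at each interior pixel, then counts
    pairs of orientation bins at pixels separated by `offset`."""
    h, w = len(image), len(image[0])
    # quantized orientation per interior pixel (central differences)
    bins = [[None] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            ang = math.atan2(gy, gx) % (2 * math.pi)
            bins[y][x] = min(int(ang / (2 * math.pi) * n_bins), n_bins - 1)
    # co-occurrence matrix over pairs of quantized orientations
    cm = [[0] * n_bins for _ in range(n_bins)]
    dy, dx = offset
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            y2, x2 = y + dy, x + dx
            if 1 <= y2 < h - 1 and 1 <= x2 < w - 1:
                cm[bins[y][x]][bins[y2][x2]] += 1
    return cm
```

The flattened matrices over several offsets form the texture feature vector fed to a classifier.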
Affiliation(s)
- G M Mashrur E Elahi
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Sanjay Kalra
- Departments of Medicine (Neurology) and Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Lorne Zinman
- Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Angela Genge
- Montreal Neurological Institute and Hospital, McGill University, Montreal, Quebec, Canada
- Lawrence Korngut
- Department of Clinical Neurosciences, University of Calgary, Calgary, Alberta, Canada
- Yee-Hong Yang
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
32
Image Processing-Based Detection of Pipe Corrosion Using Texture Analysis and Metaheuristic-Optimized Machine Learning Approach. Comput Intell Neurosci 2019; 2019:8097213. [PMID: 31379936] [PMCID: PMC6657638] [DOI: 10.1155/2019/8097213]
Abstract
To maintain the serviceability of buildings, owners need to be informed about the current condition of the water supply and waste disposal systems. Timely and accurate detection of corrosion on pipe surfaces is therefore a crucial task. The conventional manual surveying process performed by human inspectors is notoriously time-consuming and labor-intensive. Hence, this study proposes an image processing-based method for automating the task of pipe corrosion detection. Image texture, including statistical measurements of image colors, the gray-level co-occurrence matrix, and gray-level run length, is employed to extract features of the pipe surface. A support vector machine optimized by differential flower pollination is then used to construct a decision boundary that can recognize corroded and intact pipe surfaces. A dataset consisting of 2000 image samples was collected and utilized to train and test the proposed hybrid model. Experimental results, supported by the Wilcoxon signed-rank test, confirm that the proposed method is highly suitable for the task of interest, with an accuracy rate of 92.81%. Thus, the proposed model can be a promising tool to assist building maintenance agents during pipe system surveys.
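As an illustration of the gray-level co-occurrence matrix (GLCM) features mentioned above, the sketch below builds a symmetric, normalized GLCM for one offset and derives the Haralick contrast feature; it is a minimal example, not the study's implementation:

```python
def glcm(image, levels, offset=(0, 1)):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset.
    `image` is a 2D list of integer gray levels in [0, levels)."""
    h, w = len(image), len(image[0])
    dy, dx = offset
    m = [[0.0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                i, j = image[y][x], image[y2][x2]
                m[i][j] += 1   # pair (i, j)
                m[j][i] += 1   # symmetric counterpart
    total = sum(sum(row) for row in m)
    return [[v / total for v in row] for row in m]

def contrast(m):
    """Haralick contrast: weights co-occurrences by squared gray-level distance."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))
```

Features such as contrast, energy, and homogeneity computed from several GLCMs form the texture descriptor passed to the SVM classifier.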
33
Abraham B, Nair MS. Computer-aided grading of prostate cancer from MRI images using Convolutional Neural Networks. J Intell Fuzzy Syst 2019. [DOI: 10.3233/jifs-169913]
Affiliation(s)
- Bejoy Abraham
- Department of Computer Science, University of Kerala, Kariavattom, Thiruvananthapuram 695581, Kerala, India
- Department of Computer Science and Engineering, College of Engineering Perumon, Kollam 691601, Kerala, India
- Madhu S. Nair
- Department of Computer Science, Cochin University of Science and Technology, Kochi 682022, Kerala, India
34
Wang L, Shi Y, Suk HI, Noble A, Hamarneh G. Special issue on machine learning in medical imaging. Comput Med Imaging Graph 2019; 74:10-11. [PMID: 30908957] [DOI: 10.1016/j.compmedimag.2019.03.003]
Affiliation(s)
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA
- Yinghuan Shi
- Department of Computer Science and Technology, Nanjing University, PR China
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Alison Noble
- Biomedical Engineering, University of Oxford, UK
35
Ravi D, Ghavami N, Alexander DC, Ianus A. Current Applications and Future Promises of Machine Learning in Diffusion MRI. Computational Diffusion MRI 2019. [DOI: 10.1007/978-3-030-05831-9_9]
36
Abraham B, Nair MS. Automated grading of prostate cancer using convolutional neural network and ordinal class classifier. Inform Med Unlocked 2019. [DOI: 10.1016/j.imu.2019.100256]