1
Li Y, Wynne J, Wang J, Roper J, Chang CW, Patel AB, Shelton J, Liu T, Mao H, Yang X. MRI-based prostate cancer classification using 3D efficient capsule network. Med Phys 2024;51:4748-4758. PMID: 38346111. DOI: 10.1002/mp.16975.
Abstract
BACKGROUND Prostate cancer (PCa) is the most common cancer in men and the second leading cause of male cancer-related death. The Gleason score (GS) is the primary driver of PCa risk stratification and medical decision-making, but at present it can only be assessed via biopsy under anesthesia. Magnetic resonance imaging (MRI) is a promising non-invasive method to further characterize PCa, providing additional anatomical and functional information. Meanwhile, the diagnostic power of MRI is limited by qualitative or, at best, semi-quantitative interpretation criteria, leading to inter-reader variability. PURPOSE Computer-aided diagnosis employing quantitative MRI analysis has yielded promising results in non-invasive prediction of GS. However, convolutional neural networks (CNNs) do not implicitly impose a frame of reference on objects and therefore do not properly encode positional information, limiting robustness against simple image variations such as flipping, scaling, or rotation. The capsule network (CapsNet) has been proposed to address this limitation and has achieved promising results in this domain. In this study, we develop a 3D Efficient CapsNet to stratify GS-derived PCa risk using T2-weighted (T2W) MRI images. METHODS In our method, we use 3D CNN modules to extract spatial features and primary capsule layers to encode vector features. We then integrate fully-connected capsule layers (FC Caps) to create a deeper hierarchy for PCa grading prediction. FC Caps comprise a secondary capsule layer, which routes active primary capsules, and a final capsule layer, which outputs PCa risk. To account for data imbalance, we propose a novel dynamic weighted margin loss. We evaluate our method on a public PCa T2W MRI dataset from The Cancer Imaging Archive containing data from 976 patients.
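The abstract names a dynamic weighted margin loss but does not give its form. A minimal sketch, assuming the standard CapsNet margin loss with per-class weights derived from inverse label frequency; the weighting scheme, margin values, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weighted_margin_loss(v_norms, targets, class_weights,
                         m_pos=0.9, m_neg=0.1, lam=0.5):
    """Class-weighted CapsNet-style margin loss.

    v_norms: (batch, n_classes) lengths of output capsule vectors.
    targets: (batch, n_classes) one-hot labels.
    class_weights: (n_classes,) weights, e.g. inverse class frequency.
    """
    pos = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    neg = lam * (1.0 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    per_class = pos + neg                     # (batch, n_classes)
    return float(np.mean(per_class @ class_weights))

def dynamic_weights(label_counts):
    """Inverse-frequency weights, renormalized to sum to n_classes."""
    inv = 1.0 / np.asarray(label_counts, dtype=float)
    return inv * (len(inv) / inv.sum())
```

A rarer class receives a proportionally larger weight, so errors on the minority risk grade contribute more to the loss.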
RESULTS Two groups of experiments were performed: (1) we first identified high-risk disease by classifying low + medium risk versus high risk; (2) we then stratified disease in one-versus-one fashion: low versus high risk, medium versus high risk, and low versus medium risk. Five-fold cross-validation was performed. Our model achieved an area under the receiver operating characteristic curve (AUC) of 0.83 and an F1-score of 0.64 for low versus high grade; 0.79 AUC and 0.75 F1-score for low + medium versus high grade; 0.75 AUC and 0.69 F1-score for medium versus high grade; and 0.59 AUC and 0.57 F1-score for low versus medium grade. Our method outperformed state-of-the-art radiomics-based classification and deep learning methods, achieving the highest metrics in each experiment. Our divide-and-conquer strategy achieved a weighted Cohen's kappa of 0.41, suggesting moderate agreement with ground-truth PCa risks. CONCLUSIONS In this study, we proposed a novel 3D Efficient CapsNet for PCa risk stratification and demonstrated its feasibility. This tool provides a non-invasive approach to assessing PCa risk from T2W MR images, with the potential to personalize PCa treatment and reduce the number of unnecessary biopsies.
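The weighted Cohen's kappa reported above can be computed as follows. This sketch assumes linear disagreement weights over the three ordinal risk levels; the abstract does not state which weighting scheme was used:

```python
import numpy as np

def weighted_kappa(y_true, y_pred, n_classes=3):
    """Cohen's kappa with linear disagreement weights.

    Classes are ordinal risk levels, e.g. 0 = low, 1 = medium, 2 = high.
    """
    # observed confusion matrix
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # expected matrix under chance agreement (outer product of marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    # linear weights: penalty grows with distance between ordinal classes
    i, j = np.indices((n_classes, n_classes))
    W = np.abs(i - j) / (n_classes - 1)
    return 1.0 - (W * O).sum() / (W * E).sum()
```

Unlike unweighted kappa, confusing low with medium risk is penalized less than confusing low with high risk, which matches the ordinal nature of PCa grades.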
Affiliation(s)
- Yuheng Li
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, USA
- Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ashish B Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Joseph Shelton
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Hui Mao
- The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, USA
- Department of Radiology and Imaging Science and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia, USA
2
Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024;16:1809. PMID: 38791888. PMCID: PMC11119252. DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis, focusing on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted according to inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among studies describing DL use for MR-based purposes, datasets with magnetic field strengths of 3 T, 1.5 T, and 3/1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies, respectively. Six of the seven studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies analyzing DL in the context of ADT met the inclusion criteria; both used a single-institution dataset with only manual labeling of training data. Of the six studies analyzing DL for prostate biopsy, three were performed with single-institutional and three with multi-institutional datasets. TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
3
Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024;51:131-161. PMID: 37945098. DOI: 10.1016/j.ucl.2023.08.003.
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent to traditional interpretation. Open-source datasets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 datasets, 12 were specific to prostate cancer detection/classification, with 52 studies meeting the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
4
Mehmood M, Abbasi SH, Aurangzeb K, Majeed MF, Anwar MS, Alhussein M. A classifier model for prostate cancer diagnosis using CNNs and transfer learning with multi-parametric MRI. Front Oncol 2023;13:1225490. PMID: 38023149. PMCID: PMC10666634. DOI: 10.3389/fonc.2023.1225490.
Abstract
Prostate cancer (PCa) is a major global health concern, particularly for men, emphasizing the urgency of early detection to reduce mortality. As PCa is the second leading cause of cancer-related male deaths worldwide, precise and efficient diagnostic methods are crucial. Because high-resolution, multiparametric MRI is available for PCa, computer-aided diagnosis (CAD) methods have emerged to assist radiologists in identifying anomalies. The rapid advancement of medical technology has further led to the adoption of deep learning methods, which enhance diagnostic efficiency, reduce observer variability, and consistently outperform traditional approaches. Distinguishing aggressive from non-aggressive cancer under resource constraints is a significant problem in PCa treatment. This study aims to identify PCa in MRI images by combining deep learning and transfer learning (TL). Researchers have explored numerous CNN-based deep learning methods for classifying MRI images related to PCa. In this study, we developed an approach for PCa classification using transfer learning on a limited number of images to achieve high performance and help radiologists identify PCa quickly. The proposed methodology adopts the EfficientNet architecture, pre-trained on the ImageNet dataset, and incorporates three branches for feature extraction from different MRI sequences. The extracted features are then combined, significantly enhancing the model's ability to distinguish MRI images accurately. Our model demonstrated remarkable results in classifying prostate cancer, achieving an accuracy of 88.89%. Furthermore, comparative results indicate that our approach achieves higher accuracy than both traditional hand-crafted feature techniques and existing deep learning techniques in PCa classification. The proposed methodology can learn more distinctive features in prostate images and correctly identify cancer.
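The three-branch feature-fusion idea can be sketched in miniature: each branch stands in for a per-sequence feature extractor, and the concatenated features feed a linear classifier. The sequence names (t2w, adc, dwi) and the toy projections below are illustrative assumptions, not the paper's EfficientNet backbones:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(x, W):
    """Stand-in for one per-sequence backbone: a projection plus ReLU."""
    return np.maximum(0.0, x @ W)

def fused_logits(t2w, adc, dwi, params):
    """Concatenate per-sequence features, then apply a linear classifier."""
    f = np.concatenate([branch_features(t2w, params["Wt"]),
                        branch_features(adc, params["Wa"]),
                        branch_features(dwi, params["Wd"])], axis=1)
    return f @ params["Wc"] + params["b"]

# toy shapes: 4 cases, 32-dim input per sequence, 8-dim features, 2 classes
params = {"Wt": rng.normal(size=(32, 8)), "Wa": rng.normal(size=(32, 8)),
          "Wd": rng.normal(size=(32, 8)),
          "Wc": rng.normal(size=(24, 2)), "b": np.zeros(2)}
logits = fused_logits(rng.normal(size=(4, 32)), rng.normal(size=(4, 32)),
                      rng.normal(size=(4, 32)), params)
```

Concatenation (rather than averaging) lets the classifier weight each sequence's features independently, which is one common rationale for this fusion style.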
Affiliation(s)
- Mubashar Mehmood
- Department of Computer Science, COMSATS Institute of Information Technology, Islamabad, Pakistan
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Musaed Alhussein
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
5
Belue MJ, Harmon SA, Lay NS, Daryanani A, Phelps TE, Choyke PL, Turkbey B. The Low Rate of Adherence to Checklist for Artificial Intelligence in Medical Imaging Criteria Among Published Prostate MRI Artificial Intelligence Algorithms. J Am Coll Radiol 2023;20:134-145. PMID: 35922018. PMCID: PMC9887098. DOI: 10.1016/j.jacr.2022.05.022.
Abstract
OBJECTIVE To determine the rigor, generalizability, and reproducibility of published classification and detection artificial intelligence (AI) models for prostate cancer (PCa) on MRI using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines, a 42-item checklist considered a measure of best practice for presenting and reviewing medical imaging AI research. MATERIALS AND METHODS This review searched the English literature for studies proposing PCa AI detection and classification models on MRI. Each study was evaluated with the CLAIM checklist. Additional outcomes for which data were sought included measures of AI model performance (eg, area under the curve [AUC], sensitivity, specificity, free-response operating characteristic curves), training, validation, and testing group sample sizes, AI approach, detection versus classification AI, public dataset utilization, MRI sequences used, and definition of the gold standard for ground truth. The percentage of CLAIM checklist fulfillment was used to stratify studies into quartiles. Wilcoxon's rank-sum test was used for pair-wise comparisons. RESULTS In all, 75 studies were identified, and 53 studies qualified for analysis. The original CLAIM items that most studies did not fulfill include item 12 (77% no): de-identification methods; item 13 (68% no): handling missing data; item 15 (47% no): rationale for choosing the ground truth reference standard; item 18 (55% no): measurements of inter- and intrareader variability; item 31 (60% no): inclusion of validated interpretability maps; and item 37 (92% no): inclusion of failure analysis to elucidate AI model weaknesses. Comparing AUC across CLAIM-fulfillment quartiles revealed significantly different mean AUC scores between quartile 1 and quartile 2 (0.78 versus 0.86, P = .034) and between quartile 1 and quartile 4 (0.78 versus 0.89, P = .003). Based on the additional information and outcome metrics gathered in this study, additional measures of best practice are defined. These new items include disclosure of public dataset usage, definition of ground truth in comparison to other referenced works on the defined task, and sample size power calculation. CONCLUSION A large proportion of AI studies do not fulfill key items in the CLAIM guidelines within their methods and results sections. The percentage of CLAIM checklist fulfillment is weakly associated with improved AI model performance. Additions or supplementations to CLAIM are recommended to improve publishing standards and aid reviewers in determining study rigor.
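The fulfillment-percentage and quartile stratification described above can be sketched as follows. The helper names are hypothetical, and treating not-applicable items as excluded from the denominator is an assumption, since the study does not specify its handling:

```python
import numpy as np

def claim_fulfillment(checklist):
    """Fraction of applicable CLAIM items a study fulfills.

    checklist: dict mapping item number -> True/False/None (None = N/A).
    """
    answered = [v for v in checklist.values() if v is not None]
    return sum(answered) / len(answered)

def quartile_labels(scores):
    """Assign each study a fulfillment quartile (1 = lowest scores)."""
    scores = np.asarray(scores, dtype=float)
    q1, q2, q3 = np.percentile(scores, [25, 50, 75])
    return np.searchsorted([q1, q2, q3], scores, side="right") + 1
```

Per-quartile AUC distributions could then be compared pair-wise, as the study does with Wilcoxon's rank-sum test.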
Affiliation(s)
- Mason J Belue
- Medical Research Scholars Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Stephanie A Harmon
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Nathan S Lay
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Asha Daryanani
- Intramural Research Training Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Tim E Phelps
- Postdoctoral Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Peter L Choyke
- Artificial Intelligence Resource, Chief of Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Baris Turkbey
- Senior Clinician/Director, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
6
Tsai CF, Huang CH, Wu FH, Lin CH, Lee CH, Yu SS, Chan YK, Jan FJ. Intelligent image analysis recognizes important orchid viral diseases. Front Plant Sci 2022;13:1051348. PMID: 36531380. PMCID: PMC9755359. DOI: 10.3389/fpls.2022.1051348.
Abstract
Phalaenopsis orchids are one of the most important export commodities for Taiwan. Most orchids are planted and grown in greenhouses. Early detection of orchid diseases is crucial for orchid farmers during cultivation. At present, orchid viral diseases are generally identified through manual observation and the grower's experienced judgment. The most commonly used assays for virus identification are nucleic acid amplification and serology; however, these are neither time- nor cost-efficient. Therefore, this study aimed to create a system for automatically identifying the common viral diseases in orchids from orchid images. Our method comprises the following steps: image preprocessing by color-space transformation and gamma correction, detection of leaves by a U-net model, removal of non-leaf fragment areas by connected-component labeling, acquisition of leaf texture features, and disease identification by a two-stage model integrating a random forest model and an Inception network (deep learning) model. The proposed system achieved accuracies of 0.9707 for image segmentation of orchid leaves and 0.9180 for disease identification. Furthermore, the system outperformed naked-eye identification on the easily misidentified categories [cymbidium mosaic virus (CymMV) and odontoglossum ringspot virus (ORSV)], with an accuracy of 0.842 for the two-stage model versus 0.667 for naked-eye identification. This system would benefit orchid disease recognition in Phalaenopsis cultivation.
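The gamma-correction preprocessing step mentioned above follows the standard power-law form on a normalized 8-bit image; the default gamma value here is an arbitrary illustration, not the study's setting:

```python
import numpy as np

def gamma_correct(img, gamma=1.5):
    """Power-law gamma correction for an 8-bit image.

    Normalize to [0, 1], apply x ** (1 / gamma), then rescale to [0, 255].
    gamma > 1 brightens mid-tones; gamma < 1 darkens them.
    """
    x = np.asarray(img, dtype=float) / 255.0
    return np.clip(255.0 * x ** (1.0 / gamma), 0, 255).astype(np.uint8)
```

Black (0) and white (255) are fixed points of the transform, while mid-tone contrast shifts, which can make leaf texture more discernible before feature extraction.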
Affiliation(s)
- Cheng-Feng Tsai
- Department of Management Information Systems, National Chung Hsing University, Taichung, Taiwan
- Chih-Hung Huang
- Department of Plant Pathology, National Chung Hsing University, Taichung, Taiwan
- Advanced Plant Biotechnology Center, National Chung Hsing University, Taichung, Taiwan
- Fu-Hsing Wu
- Department of Health Services Administration, China Medical University, Taichung, Taiwan
- Chuen-Horng Lin
- Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung, Taiwan
- Chia-Hwa Lee
- Department of Plant Pathology, National Chung Hsing University, Taichung, Taiwan
- Ph.D. Program in Microbial Genomics, National Chung Hsing University and Academia Sinica, Taichung, Taipei, Taiwan
- Shyr-Shen Yu
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung, Taiwan
- Yung-Kuan Chan
- Department of Management Information Systems, National Chung Hsing University, Taichung, Taiwan
- Advanced Plant Biotechnology Center, National Chung Hsing University, Taichung, Taiwan
- Fuh-Jyh Jan
- Department of Plant Pathology, National Chung Hsing University, Taichung, Taiwan
- Advanced Plant Biotechnology Center, National Chung Hsing University, Taichung, Taiwan
- Ph.D. Program in Microbial Genomics, National Chung Hsing University and Academia Sinica, Taichung, Taipei, Taiwan
7
Ferreira H, Serranho P, Guimarães P, Trindade R, Martins J, Moreira PI, Ambrósio AF, Castelo-Branco M, Bernardes R. Stage-independent biomarkers for Alzheimer's disease from the living retina: an animal study. Sci Rep 2022;12:13667. PMID: 35953633. PMCID: PMC9372147. DOI: 10.1038/s41598-022-18113-y.
Abstract
The early diagnosis of neurodegenerative disorders is still an open issue despite the many efforts to address this problem. In particular, Alzheimer's disease (AD) remains undiagnosed for over a decade before the first symptoms appear. Optical coherence tomography (OCT) is now common and widely available and has been used to image the retina of AD patients and healthy controls in search of biomarkers of neurodegeneration. However, early diagnosis tools would need to rely on images of patients in early AD stages, which are not available due to late diagnosis. To shed light on how to overcome this obstacle, we resorted to 57 wild-type mice and 57 mice of the triple-transgenic mouse model of AD, training a network on mice aged 3, 4, and 8 months and classifying mice at the ages of 1, 2, and 12 months. To this end, we computed fundus images from OCT data and trained a convolutional neural network (CNN) to classify them into the wild-type or transgenic group. CNN accuracy ranged from 80% to 88% for mice outside the training ages, raising the possibility of diagnosing AD before the first symptoms through non-invasive imaging of the retina.
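The abstract says fundus images were computed from OCT data without specifying how; a common approach is an en-face projection of the OCT volume along the depth (A-scan) axis, sketched here under that assumption:

```python
import numpy as np

def fundus_from_oct(volume):
    """En-face fundus image as the mean projection along the depth axis.

    volume: (depth, height, width) OCT intensity volume; the result is a
    2D (height, width) image suitable as CNN input.
    """
    return np.asarray(volume, dtype=float).mean(axis=0)
```

Collapsing depth discards layer-resolved information but yields a single 2D image per eye, which keeps the downstream CNN small and the training set per-image.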
Affiliation(s)
- Hugo Ferreira
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
| | - Pedro Serranho
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Department of Sciences and Technology, Universidade Aberta, Rua da Escola Politécnica, n.º 147, 1269-001, Lisboa, Portugal
| | - Pedro Guimarães
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
| | - Rita Trindade
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
| | - João Martins
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
| | - Paula I Moreira
- Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Laboratory of Physiology, Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Center for Neuroscience and Cell Biology (CNC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
| | - António Francisco Ambrósio
- Coimbra Institute for Clinical and Biomedical Research (iCBR), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Center for Innovative Biomedicine and Biotechnology (CIBB), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal
- Rui Bernardes
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), Institute for Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal.
- Clinical Academic Center of Coimbra (CACC), Faculty of Medicine (FMUC), University of Coimbra, Azinhaga de Santa Comba, 3000-548, Coimbra, Portugal.
8
Basu S, Agarwal R, Srivastava V. Deep discriminative learning model with calibrated attention map for the automated diagnosis of diffuse large B-cell lymphoma. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103728]
9

10
Kaneko M, Fukuda N, Nagano H, Yamada K, Yamada K, Konishi E, Sato Y, Ukimura O. Artificial intelligence trained with integration of multiparametric MR-US imaging data and fusion biopsy trajectory-proven pathology data for 3D prediction of prostate cancer: A proof-of-concept study. Prostate 2022; 82:793-803. [PMID: 35192229] [DOI: 10.1002/pros.24321]
Abstract
BACKGROUND We aimed to develop an artificial intelligence (AI) algorithm that predicts the volume and location of clinically significant cancer (CSCa) using a convolutional neural network (CNN) trained by integrating multiparametric MR-US image data with MRI-US fusion prostate biopsy (MRI-US PBx) trajectory-proven pathology data. METHODS Twenty consecutive patients prospectively underwent MRI-US PBx, followed by robot-assisted radical prostatectomy (RARP). The AI algorithm was trained by integrating MR-US image data with MRI-US PBx trajectory-proven pathology. Agreement with the 3D cancer mapping of RARP specimens was compared between the AI-suggested 3D CSCa mapping and an experienced radiologist's 3D CSCa mapping on MRI alone according to the Prostate Imaging Reporting and Data System (PI-RADS) version 2. The characteristics of tumors detected and missed by the AI were compared across 22,968 images. The relationships between CSCa volumes and the volumes predicted by the AI, as well as by the radiologist's PI-RADS-based reading, were analyzed. RESULTS The concordance of the CSCa center with that in RARP specimens was significantly higher for the AI prediction than for the radiologist's reading (83% vs. 54%, p = 0.036). CSCa volumes predicted by the AI were more accurate (r = 0.90, p < 0.001) than the radiologist's reading. A limitation is that the elastic fusion technology has its own registration error. CONCLUSIONS We presented a novel pilot AI algorithm for 3D prediction of PCa, trained by integrating multiparametric MR-US image data and fusion biopsy trajectory-proven pathology data. This deep learning model may predict the 3D mapping of CSCa, in both volume and center location, more precisely than a radiologist's reading based on PI-RADS version 2, and has potential in the planning of focal therapy.
Affiliation(s)
- Masatomo Kaneko
- Department of Urology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Norio Fukuda
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Hitomi Nagano
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Kaori Yamada
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Kei Yamada
- Department of Radiology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Eiichi Konishi
- Department of Surgical Pathology, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yoshinobu Sato
- Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Osamu Ukimura
- Department of Urology, Kyoto Prefectural University of Medicine, Kyoto, Japan
11
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051] [PMCID: PMC9007400] [DOI: 10.1186/s12880-022-00793-7]
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging knowledge learned in advance on similar tasks. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been configured arbitrarily in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approach for the medical image classification task. METHODS 425 peer-reviewed articles published in English up until December 31, 2020 were retrieved from two databases, PubMed and Web of Science. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for paper selection, and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches, including feature extractor, feature extractor hybrid, fine-tuning, and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The remaining studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored. Only a few studies applied feature extractor hybrid (n = 7) or fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite data scarcity. We encourage data scientists and practitioners to use deep models (e.g., ResNet or Inception) as feature extractors, which can save computational costs and time without degrading predictive power.
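The "feature extractor" TL approach favored in the conclusion above can be sketched as follows. This is a toy illustration under stated assumptions: a fixed random projection stands in for a frozen pretrained backbone such as ResNet or Inception, and the data are synthetic; it is not the review's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (e.g. ResNet up to the
# penultimate layer): a fixed nonlinear projection, never updated.
W_backbone = rng.normal(size=(64, 16))  # 64 "pixels" -> 16 features

def extract_features(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen ReLU features

# Synthetic two-class "images": class 1 is shifted along a random direction.
direction = rng.normal(size=64)
X = rng.normal(size=(200, 64)) + np.outer(np.repeat([0, 1], 100), direction)
y = np.repeat([0, 1], 100)

# Extract once, standardize, then train ONLY the classification head
# (logistic regression) by gradient descent.
F = extract_features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    w -= 0.1 * (F.T @ (p - y) / len(y))     # cross-entropy gradient step
    b -= 0.1 * float(np.mean(p - y))

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

Only `w` and `b` are updated; the backbone weights stay frozen, which is what saves the computation the authors highlight.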
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany.
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
12
Current Value of Biparametric Prostate MRI with Machine-Learning or Deep-Learning in the Detection, Grading, and Characterization of Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2022; 12:799. [PMID: 35453847] [PMCID: PMC9027206] [DOI: 10.3390/diagnostics12040799]
Abstract
Prostate cancer detection with magnetic resonance imaging is based on a standardized MRI protocol according to the PI-RADS guidelines, including morphologic imaging, diffusion-weighted imaging, and perfusion. To facilitate data acquisition and analysis, contrast-enhanced perfusion is often omitted, resulting in a biparametric prostate MRI protocol. The intention of this review is to analyze the current value of biparametric prostate MRI combined with machine-learning and deep-learning methods in the detection, grading, and characterization of prostate cancer; where available, a direct comparison with human radiologist performance was performed. PubMed was systematically queried, and 29 appropriate studies were identified and retrieved. The data show that detecting clinically significant prostate cancer and differentiating prostate cancer from non-cancerous tissue using machine learning and deep learning is feasible, with promising results. Some machine-learning and deep-learning techniques currently appear to be as good as human radiologists in classifying single lesions according to the PI-RADS score.
13

14
Kandel I, Castelli M. Improving convolutional neural networks performance for image classification using test time augmentation: a case study using MURA dataset. Health Inf Sci Syst 2021; 9:33. [PMID: 34349982] [PMCID: PMC8325732] [DOI: 10.1007/s13755-021-00163-7]
Abstract
Bone fractures are one of the main reasons for emergency room (ER) visits; the primary method for detecting bone fractures is X-ray imaging. X-ray images require an experienced radiologist to classify them; however, an experienced radiologist is not always available in the ER. An accurate automatic X-ray image classifier in the ER can help reduce error rates by providing an instant second opinion to the emergency doctor. Deep learning is an emerging trend in artificial intelligence, where an automatic classifier can be trained to classify musculoskeletal images. Image augmentation techniques have proven useful in increasing deep learning models' performance. Usually, in the image classification domain, augmentation techniques are used during training and not during the testing phase. Test time augmentation (TTA) can improve model predictions by providing, at negligible computational cost, several transformations of the same image. In this paper, we investigated the effect of TTA on image classification performance on the MURA dataset. Nine different augmentation techniques were evaluated to determine their performance compared to predictions without TTA. Two ensemble techniques were assessed as well: the majority vote and the average vote. Based on our results, TTA increased classification performance significantly, especially for models with a low score.
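The two TTA ensembling schemes assessed above (average vote and majority vote) can be sketched as follows. This is a minimal sketch, assuming a hypothetical stand-in `model` (intensity-based, with noise) rather than a network trained on MURA, and a small set of label-preserving transforms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained classifier: probability of "abnormal" for an image.
# A toy stand-in keyed on mean intensity plus noise, so the smoothing
# effect of averaging over transforms is visible.
def model(img):
    return 1.0 / (1.0 + np.exp(-(img.mean() * 4 + rng.normal(scale=0.5))))

# Label-preserving test-time transforms: identity, flips, 90-degree rotation.
transforms = [
    lambda im: im,
    lambda im: np.fliplr(im),
    lambda im: np.flipud(im),
    lambda im: np.rot90(im),
]

def tta_predict(img, threshold=0.5):
    probs = np.array([model(t(img)) for t in transforms])
    avg_vote = probs.mean() > threshold                    # average the soft scores
    maj_vote = (probs > threshold).sum() > len(probs) / 2  # majority of hard votes
    return avg_vote, maj_vote

img = rng.normal(loc=0.8, scale=1.0, size=(32, 32))  # clearly "abnormal" toy image
print(tta_predict(img))
```

At inference each image costs one forward pass per transform, which is the "negligible computational cost" trade-off described in the abstract.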
Affiliation(s)
- Ibrahem Kandel
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Mauro Castelli
- Nova Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
15
Bougias H, Georgiadou E, Malamateniou C, Stogiannos N. Identifying cardiomegaly in chest X-rays: a cross-sectional study of evaluation and comparison between different transfer learning methods. Acta Radiol 2021; 62:1601-1609. [PMID: 33203215] [DOI: 10.1177/0284185120973630]
Abstract
BACKGROUND Cardiomegaly is a relatively common incidental finding on chest X-rays; if left untreated, it can result in significant complications. Using artificial intelligence for diagnosing cardiomegaly could be beneficial, as this pathology may be underreported or overlooked, especially in busy or under-staffed settings. PURPOSE To explore the feasibility of applying four different transfer learning methods to identify the presence of cardiomegaly in chest X-rays and to compare their diagnostic performance using the radiologists' report as the gold standard. MATERIAL AND METHODS Two thousand chest X-rays were utilized in the current study: 1000 were normal and 1000 had confirmed cardiomegaly. Of these exams, 80% were used for training and 20% as a holdout test dataset. A total of 2048 deep features were extracted using Google's Inception V3, VGG16, VGG19, and SqueezeNet networks. A logistic regression algorithm optimized in regularization terms was used to classify chest X-rays into those with presence or absence of cardiomegaly. RESULTS Diagnostic accuracy is reported by means of sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), with the VGG19 network providing the best values of sensitivity (84%), specificity (83%), PPV (83%), NPV (84%), and overall accuracy (84.5%). The other networks presented sensitivity of 64.1%-82%, specificity of 77.1%-81.1%, PPV of 74%-81.4%, NPV of 68%-82%, and overall accuracy of 71%-81.3%. CONCLUSION Deep learning using transfer learning methods based on the VGG19 network can be used for the automatic detection of cardiomegaly on chest X-ray images. However, further validation and training of each method is required before application to clinical cases.
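The metrics reported above (sensitivity, specificity, PPV, NPV, accuracy) all follow directly from confusion-matrix counts. A minimal sketch; the counts below are invented for illustration (a 400-image holdout split 200/200) and are not the study's actual confusion matrix.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on diseased cases
        "specificity": tn / (tn + fp),  # recall on healthy cases
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Illustrative counts only: 200 cardiomegaly and 200 normal holdout images.
m = diagnostic_metrics(tp=168, fp=34, tn=166, fn=32)
print(m)  # sensitivity 0.84, specificity 0.83, accuracy 0.835
```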
Affiliation(s)
- Haralabos Bougias
- Department of Clinical Radiology, Ioannina University Hospital, Ioannina, Greece
- Eleni Georgiadou
- Department of Medical Imaging, Metaxa Anticancer Hospital, Athens, Greece
- Christina Malamateniou
- Division of Midwifery and Radiography, School of Health Sciences, City University of London, London, UK
- Nikolaos Stogiannos
- Division of Midwifery and Radiography, School of Health Sciences, City University of London, London, UK
- Department of Medical Imaging, Corfu General Hospital, Corfu, Greece
16
Hoar D, Lee PQ, Guida A, Patterson S, Bowen CV, Merrimen J, Wang C, Rendon R, Beyea SD, Clarke SE. Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images. Comput Methods Programs Biomed 2021; 210:106375. [PMID: 34500139] [DOI: 10.1016/j.cmpb.2021.106375]
Abstract
PURPOSE Multiparametric MRI (mp-MRI) is a widely used tool for diagnosing and staging prostate cancer. The purpose of this study was to evaluate whether transfer learning, unsupervised pre-training, and test-time augmentation significantly improved the performance of a convolutional neural network (CNN) for pixel-by-pixel prediction of cancer vs. non-cancer using mp-MRI datasets. METHODS 154 subjects undergoing mp-MRI were prospectively recruited, 16 of whom subsequently underwent radical prostatectomy. Logistic regression, random forest, and CNN models were trained on mp-MRI data using histopathology as the gold standard. Transfer learning, unsupervised pre-training, and test-time augmentation were used to boost CNN performance. Models were evaluated using the Dice score and the area under the receiver operating characteristic curve (AUROC) with leave-one-subject-out cross-validation. Permutation feature importance testing was performed to evaluate the relative value of each MR contrast to CNN model performance. Statistical significance (p < 0.05) was determined using the paired Wilcoxon signed rank test with Benjamini-Hochberg correction for multiple comparisons. RESULTS The baseline CNN outperformed the logistic regression and random forest models. Transfer learning and unsupervised pre-training did not significantly improve CNN performance over baseline; however, test-time augmentation resulted in significantly higher Dice scores than both the baseline CNN and the CNN plus either transfer learning or unsupervised pre-training. The best performing model was the CNN with transfer learning and test-time augmentation (Dice score of 0.59 and AUROC of 0.93). The most important contrast was the apparent diffusion coefficient (ADC), followed by Ktrans and T2, although each contributed significantly to classifier performance. CONCLUSIONS The addition of transfer learning and test-time augmentation resulted in significant improvement in CNN segmentation performance on a small set of prostate cancer mp-MRI data.
Results suggest that these techniques may be more broadly useful for the optimization of deep learning algorithms applied to the problem of semantic segmentation in biomedical image datasets. However, further work is needed to improve the generalizability of the specific model presented herein.
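The Dice score used above to compare segmentations is twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch on toy binary masks (the 8x8 "lesions" are invented for illustration):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two toy 8x8 binary "lesion" masks that partially overlap.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16 pixels
b = np.zeros((8, 8), dtype=int); b[4:8, 4:8] = 1   # 16 pixels
print(round(dice(a, b), 3))  # overlap = 4 pixels -> 2*4/(16+16) = 0.25
```

The `eps` term keeps the score defined when both masks are empty, a common convention when evaluating slices with no lesion.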
Affiliation(s)
- David Hoar
- Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada
- Peter Q Lee
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Alessandro Guida
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Steven Patterson
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Chris V Bowen
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Cheng Wang
- Department of Pathology, Dalhousie University, Halifax, NS, Canada
- Ricardo Rendon
- Department of Urology, Dalhousie University, Halifax, NS, Canada
- Steven D Beyea
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Sharon E Clarke
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada.
17
Challenges in the Use of Artificial Intelligence for Prostate Cancer Diagnosis from Multiparametric Imaging Data. Cancers (Basel) 2021; 13:3944. [PMID: 34439099] [PMCID: PMC8391234] [DOI: 10.3390/cancers13163944]
Abstract
Simple Summary: Prostate cancer is one of the main threats to men's health. Its accurate diagnosis is crucial to properly treat patients depending on the cancer's level of aggressiveness. Tumor risk-stratification is still a challenging task due to the difficulties met when reading multi-parametric Magnetic Resonance Images. Artificial Intelligence models may help radiologists in staging the aggressiveness of equivocal lesions, reducing inter-observer variability and evaluation time. However, these algorithms need many high-quality images to work efficiently, bringing up overfitting and lack of standardization and reproducibility as emerging issues to be addressed. This study illustrates the state of the art of current research on Artificial Intelligence methods to stratify prostate cancer for its clinical significance, suggesting how widespread use of public databases could be a possible solution to these issues.
Abstract: Many efforts have been carried out toward standardizing the evaluation of multiparametric Magnetic Resonance (mp-MR) images to detect Prostate Cancer (PCa), and specifically to differentiate levels of aggressiveness, a crucial aspect for clinical decision-making. The Prostate Imaging—Reporting and Data System (PI-RADS) has contributed noteworthily to this aim. Nevertheless, as pointed out by the European Association of Urology (EAU 2020), PI-RADS still has limitations, mainly due to the moderate inter-reader reproducibility of mp-MRI. In recent years, many aspects of cancer diagnosis have taken advantage of Artificial Intelligence (AI), such as detection, segmentation of organs and/or lesions, and characterization. Here we focus on AI as a potentially important tool for standardization and reproducibility in the characterization of PCa by mp-MRI. AI includes Machine Learning and Deep Learning techniques that have been shown to classify mp-MR images successfully, with performances similar to those of radiologists. Nevertheless, they perform differently depending on the acquisition system and protocol used. Moreover, these methods need a large number of samples covering most of the variability in lesion appearance and zone to avoid overfitting. The use of publicly available datasets could improve AI performance and achieve a higher level of generalizability by exploiting large numbers of cases and a wide range of image variability. Here we explore the promise and advantages, as well as the pitfalls and warnings, outlined in some recent studies that attempted to classify clinically significant PCa and indolent lesions using AI methods. Specifically, we focus on the overfitting issue due to data scarcity and the lack of standardization and reproducibility in every step of mp-MR image acquisition and classifier implementation. Finally, we point out that a solution can be found in the use of publicly available datasets, whose usage has already been promoted by some important initiatives. Our future perspective is that AI models may become reliable tools for clinicians in PCa diagnosis, reducing inter-observer variability and evaluation time.
18
Şerbănescu MS, Manea NC, Streba L, Belciug S, Pleşea IE, Pirici I, Bungărdean RM, Pleşea RM. Automated Gleason grading of prostate cancer using transfer learning from general-purpose deep-learning networks. Rom J Morphol Embryol 2021; 61:149-155. [PMID: 32747906] [PMCID: PMC7728132] [DOI: 10.47162/rjme.61.1.17]
Abstract
Two deep-learning algorithms designed to classify images according to the Gleason grading system, using transfer learning from two well-known general-purpose image classification networks (AlexNet and GoogleNet), were trained on Hematoxylin–Eosin-stained histopathology microscopy images of prostate cancer. The dataset consisted of 439 images asymmetrically distributed across four Gleason grading groups. Mean±standard deviation accuracy was 61.17±7 for the AlexNet-derived network and 60.9±7.4 for the GoogleNet-derived network. The similar results obtained by the two networks with very different architectures, together with the normal distribution of classification error for both algorithms, suggest that we have reached a maximum classification rate on this dataset. Taking all constraints into consideration, we conclude that the resulting networks could assist pathologists in this field by providing first or second opinions on Gleason grading, thus presenting an objective opinion in a grading system that has shown a great deal of interobserver variability over time.
19
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:959. [PMID: 34073627] [PMCID: PMC8229869] [DOI: 10.3390/diagnostics11060959]
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) in prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies published between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of the 59 included studies showed that most research has been conducted on PCa lesion classification (66%), followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported stand-alone diagnostic accuracy, whereas only 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof of the clinical utility of PCa AI applications. In order to introduce AI into the clinical workflow of PCa assessment, the robustness and generalizability of AI applications need to be further validated using external validation and clinical workflow experiments.
20
Abstract
PURPOSE OF REVIEW Over the last decade, major advancements in artificial intelligence technology have emerged and revolutionized the extent to which physicians are able to personalize treatment modalities and care for their patients. Artificial intelligence technologies aimed at mimicking or simulating human mental processes, such as deep learning artificial neural networks (ANNs), are composed of collections of individual units known as 'artificial neurons'. These 'neurons', when arranged and interconnected in complex architectural layers, are capable of analyzing the most complex patterns. The aim of this systematic review is to give a comprehensive summary of the contemporary applications of deep learning ANNs in urological medicine. RECENT FINDINGS Fifty-five articles were included in this systematic review, and each article was assigned an 'intermediate' score based on its overall quality. Of these 55 articles, nine studies were prospective, but no randomized controlled trials were identified. SUMMARY In urological medicine, the application of novel artificial intelligence technologies, particularly ANNs, has been considered a promising step toward improving physicians' diagnostic capabilities, especially with regard to predicting the aggressiveness and recurrence of various disorders. For benign urological disorders, for example, the use of highly predictive and reliable algorithms could be helpful for improving the diagnosis of male infertility, urinary tract infections, and pediatric malformations. In addition, articles with anecdotal experiences shed light on the potential of artificial intelligence-assisted surgeries, such as with the aid of virtual reality or augmented reality.
21
Muhammad K, Khan S, Ser JD, Albuquerque VHCD. Deep Learning for Multigrade Brain Tumor Classification in Smart Healthcare Systems: A Prospective Survey. IEEE Trans Neural Netw Learn Syst 2021; 32:507-522. [PMID: 32603291] [DOI: 10.1109/tnnls.2020.2995800]
Abstract
Brain tumors are among the most dangerous cancers in people of all ages, and grade recognition is a challenging problem for radiologists in health monitoring and automated diagnosis. Recently, numerous deep learning-based methods have been presented in the literature for brain tumor classification (BTC) in order to assist radiologists in better diagnostic analysis. In this overview, we present an in-depth review of the surveys published so far and of recent deep learning-based methods for BTC. Our survey covers the main steps of deep learning-based BTC methods, including preprocessing, feature extraction, and classification, along with their achievements and limitations. We also investigate state-of-the-art convolutional neural network models for BTC by performing extensive experiments using transfer learning with and without data augmentation. Furthermore, this overview describes available benchmark datasets used for the evaluation of BTC. Finally, this survey not only looks into the past literature on the topic but also builds on it to delve into the future of this area, enumerating some research directions that should be followed in the future, especially for personalized and smart healthcare.
22
Hiremath A, Shiradkar R, Merisaari H, Prasanna P, Ettala O, Taimen P, Aronen HJ, Boström PJ, Jambor I, Madabhushi A. Test-retest repeatability of a deep learning architecture in detecting and segmenting clinically significant prostate cancer on apparent diffusion coefficient (ADC) maps. Eur Radiol 2020; 31:379-391. [PMID: 32700021] [DOI: 10.1007/s00330-020-07065-4]
Abstract
OBJECTIVES To evaluate short-term test-retest repeatability of a deep learning architecture (U-Net) in slice- and lesion-level detection and segmentation of clinically significant prostate cancer (csPCa: Gleason grade group > 1) using diffusion-weighted imaging fitted with a monoexponential function, ADCm. METHODS One hundred twelve patients with prostate cancer (PCa) underwent 2 prostate MRI examinations on the same day. PCa areas were annotated using whole mount prostatectomy sections. Two U-Net-based convolutional neural networks were trained on three different ADCm b value settings for (a) slice- and (b) lesion-level detection and (c) segmentation of csPCa. Short-term test-retest repeatability was estimated using the intra-class correlation coefficient (ICC(3,1)), proportionate agreement, and the Dice similarity coefficient (DSC). A 3-fold cross-validation was performed on the training set (N = 78 patients) and evaluated for performance and repeatability on the testing data (N = 34 patients). RESULTS For the three ADCm b value settings, repeatability of mean ADCm of csPCa lesions was ICC(3,1) = 0.86-0.98. The two CNNs with U-Net-based architecture demonstrated ICC(3,1) in the range of 0.80-0.83, agreement of 66-72%, and DSC of 0.68-0.72 for slice- and lesion-level detection and segmentation of csPCa. Bland-Altman plots suggest that there is no systematic bias in agreement between inter-scan ground truth segmentation repeatability and segmentation repeatability of the networks. CONCLUSIONS For the three ADCm b value settings, the two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice level. The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
KEY POINTS • For the three ADCm b value settings, two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice-level. • The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
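The two repeatability metrics used in this study, ICC(3,1) for scalar measurements such as mean ADCm, and the Dice similarity coefficient for segmentation masks, can be sketched as follows. This is a minimal NumPy illustration of the standard formulas, not the authors' code; array shapes and names are assumptions.

```python
import numpy as np

def icc_3_1(scores: np.ndarray) -> float:
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    `scores` has shape (n_subjects, k_sessions), e.g. a lesion-level
    metric measured on test and retest scans (k = 2).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition.
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Identical test-retest measurements give ICC(3,1) = 1, and identical masks give DSC = 1; values fall as inter-scan variability grows.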
Affiliation(s)
- Amogh Hiremath
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Harri Merisaari
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Prateek Prasanna
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Otto Ettala
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Pekka Taimen
- Institute of Biomedicine, Department of Pathology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Medical Imaging Centre of Southwest Finland, Turku University Hospital, Turku, Finland
- Peter J Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA
23
Zhang J, Cui W, Guo X, Wang B, Wang Z. Classification of digital pathological images of non-Hodgkin's lymphoma subtypes based on the fusion of transfer learning and principal component analysis. Med Phys 2020; 47:4241-4253. [PMID: 32593219 DOI: 10.1002/mp.14357] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Revised: 05/31/2020] [Accepted: 06/19/2020] [Indexed: 11/06/2022] Open
Abstract
PURPOSE Non-Hodgkin's lymphoma (NHL) is a serious malignant disease. Delayed diagnosis can cause anemia, increased intracranial pressure, and organ failure, and may even lead to death. The current main trend in this area is to use deep learning (DL) for disease diagnosis. Extracting classification information from digital pathology images with DL may enable automated qualitative and quantitative analysis of NHL. DL has previously been used to classify NHL digital pathology images with some success; however, shortcomings remain in the data preprocessing methods and feature extraction. Therefore, this paper presents a method for classifying NHL subtypes based on the fusion of transfer learning (TL) and principal component analysis (PCA). METHODS First, the NHL digital pathology images were preprocessed by image division and segmentation and then input into the transfer models for fine-tuning and feature extraction. Second, PCA was used to map the extracted features. Finally, a neural network was used as a classifier to classify the mapped features. During fine-tuning of the transfer models, two strategies, freezing all feature extraction layers and fine-tuning all layers, were employed to select the model with the best classification result among the preselected transfer models. On this basis, the effect of the location of the frozen layers was discussed and analyzed. RESULTS The proposed method achieved average fivefold cross-validation accuracies of 100%, 99.73%, and 99.20% for chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL) tumors, with standard deviations of 0.00, 0.53, and 0.65, respectively, on the NHL reference dataset. The overall fivefold cross-validation accuracy is 98.93%, a 1.26% increase over the most recently reported methods, with a lower standard deviation (1.00). 
CONCLUSION The method proposed in this paper achieves a high classification accuracy and strong model generalization for the classification of NHL, which makes it possible to conduct intelligent classification of NHL in clinical practice. Our proposed method has definite clinical value and research significance.
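The TL + PCA pipeline described above (deep features from a fine-tuned transfer model, mapped by PCA, then classified by a neural network under fivefold cross-validation) can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's implementation: random Gaussian clusters substitute for the CNN features, and the class labels are assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Stand-in for deep features from a fine-tuned transfer model: in the
# paper these would be extracted from CLL / FL / MCL pathology tiles.
# Here we draw three well-separated Gaussian clusters so the sketch
# runs end to end.
rng = np.random.default_rng(0)
n_per_class, n_features = 60, 512
X = np.vstack([
    rng.normal(loc=4.0 * c, scale=1.0, size=(n_per_class, n_features))
    for c in range(3)
])
y = np.repeat([0, 1, 2], n_per_class)  # 0=CLL, 1=FL, 2=MCL (assumed labels)

# PCA maps the high-dimensional deep features to a compact space;
# a small neural network then classifies the mapped features.
clf = make_pipeline(
    PCA(n_components=16),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)  # fivefold cross-validation
print(scores.mean())
```

On real pathology features the PCA step serves the same role shown here: reducing the dimensionality of the transferred features before the final classifier.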
Affiliation(s)
- Jianfei Zhang
- School of Computer and Control Engineering, Qiqihar University, Qiqihar, 161006, China
- Wensheng Cui
- School of Computer and Control Engineering, Qiqihar University, Qiqihar, 161006, China
- Xiaoyan Guo
- School of Computer and Control Engineering, Qiqihar University, Qiqihar, 161006, China
- Bo Wang
- School of Computer and Control Engineering, Qiqihar University, Qiqihar, 161006, China
- Zhen Wang
- School of Computer and Control Engineering, Qiqihar University, Qiqihar, 161006, China
24
Automated Classification of Significant Prostate Cancer on MRI: A Systematic Review on the Performance of Machine Learning Applications. Cancers (Basel) 2020; 12:cancers12061606. [PMID: 32560558 PMCID: PMC7352160 DOI: 10.3390/cancers12061606] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Revised: 06/13/2020] [Accepted: 06/14/2020] [Indexed: 11/16/2022] Open
Abstract
Significant prostate carcinoma (sPCa) classification based on MRI using radiomics or deep learning approaches has gained much interest, owing to its potential application in assisting clinical decision-making. OBJECTIVE To systematically review the literature (i) to determine which algorithms are most frequently used for sPCa classification, (ii) to investigate whether there is a relation between the performance and the method or the MRI sequences used, (iii) to assess which study design factors affect the performance on sPCa classification, and (iv) to determine whether performance has been evaluated in a clinical setting. METHODS The databases Embase and Ovid MEDLINE were searched for studies describing machine learning or deep learning classification methods discriminating between significant and nonsignificant PCa on multiparametric MRI that performed a valid validation procedure. Quality was assessed with the modified radiomics quality score. We computed the median area under the receiver operating characteristic curve (AUC) across methods and the interquartile range. RESULTS From 2846 potentially relevant publications, 27 were included. The algorithms most frequently used in the literature for PCa classification are logistic regression (22%) and convolutional neural networks (CNNs) (22%). The median AUC was 0.79 (interquartile range: 0.77-0.87). No significant effect of the number of included patients, image sequences, or reference standard on the reported performance was found. Three studies described an external validation, and none of the papers described validation in a prospective clinical trial. CONCLUSIONS To unlock the promising potential of machine and deep learning approaches, validation studies and prospective clinical studies should be performed with an established protocol to assess the added value in decision-making.
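The review's summary statistic, the median AUC with its interquartile range pooled over the included studies, is computed as follows. The AUC values below are illustrative placeholders, not the review's actual data.

```python
import numpy as np

# Hypothetical per-study AUCs pooled across included studies.
aucs = np.array([0.72, 0.77, 0.78, 0.79, 0.83, 0.87, 0.91])

median_auc = np.median(aucs)                # central performance estimate
q1, q3 = np.percentile(aucs, [25, 75])      # interquartile range bounds
print(median_auc, (q1, q3))
```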
25
Bardis MD, Houshyar R, Chang PD, Ushinsky A, Glavis-Bloom J, Chahine C, Bui TL, Rupasinghe M, Filippi CG, Chow DS. Applications of Artificial Intelligence to Prostate Multiparametric MRI (mpMRI): Current and Emerging Trends. Cancers (Basel) 2020; 12:E1204. [PMID: 32403240 PMCID: PMC7281682 DOI: 10.3390/cancers12051204] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 05/02/2020] [Accepted: 05/08/2020] [Indexed: 01/13/2023] Open
Abstract
Prostate carcinoma is one of the most prevalent cancers worldwide. Multiparametric magnetic resonance imaging (mpMRI) is a non-invasive tool that can improve prostate lesion detection, classification, and volume quantification. Machine learning (ML), a branch of artificial intelligence, can rapidly and accurately analyze mpMRI images. ML could provide better standardization and consistency in identifying prostate lesions and enhance prostate carcinoma management. This review summarizes ML applications to prostate mpMRI and focuses on prostate organ segmentation, lesion detection and segmentation, and lesion characterization. A literature search was conducted to find studies that have applied ML methods to prostate mpMRI. To date, prostate organ segmentation and volume approximation have been well executed using various ML techniques. Prostate lesion detection and segmentation are much more challenging tasks for ML and were attempted in several studies. They largely remain unsolved problems due to data scarcity and the limitations of current ML algorithms. By contrast, prostate lesion characterization has been successfully completed in several studies because of better data availability. Overall, ML is well situated to become a tool that enhances radiologists' accuracy and speed.
Affiliation(s)
- Michelle D. Bardis
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Roozbeh Houshyar
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Peter D. Chang
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Alexander Ushinsky
- Mallinckrodt Institute of Radiology, Washington University Saint Louis, St. Louis, MO 63110, USA
- Justin Glavis-Bloom
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Chantal Chahine
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Thanh-Lan Bui
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Mark Rupasinghe
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
- Daniel S. Chow
- Department of Radiology, University of California, Irvine, Orange, CA 92868-3201, USA
26
Zheng D, Hong JC, Wang C, Zhu X. Radiotherapy Treatment Planning in the Age of AI: Are We Ready Yet? Technol Cancer Res Treat 2020; 18:1533033819894577. [PMID: 31858890 PMCID: PMC6927195 DOI: 10.1177/1533033819894577] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Affiliation(s)
- Dandan Zheng
- Department of Radiation Oncology, University of Nebraska Medical Center, Omaha, NE, USA
- Julian C Hong
- Department of Radiation Oncology, University of California, San Francisco, CA, USA
- Chunhao Wang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Xiaofeng Zhu
- Department of Radiation Oncology, Georgetown University Hospital, Rockville, MD, USA
27
Her EJ, Haworth A, Rowshanfarzad P, Ebert MA. Progress towards Patient-Specific, Spatially-Continuous Radiobiological Dose Prescription and Planning in Prostate Cancer IMRT: An Overview. Cancers (Basel) 2020; 12:E854. [PMID: 32244821 PMCID: PMC7226478 DOI: 10.3390/cancers12040854] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 03/12/2020] [Accepted: 03/27/2020] [Indexed: 01/30/2023] Open
Abstract
Advances in imaging have enabled the identification of prostate cancer foci, with an initial application to focal dose escalation using subvolumes created from image intensity thresholds. Through quantitative imaging techniques, correlations between image parameters and tumour characteristics have been identified. Mathematical functions are typically used to relate image parameters to prescription dose to improve the clinical relevance of the resulting dose distribution. However, these relationships have remained speculative or unvalidated. In contrast, the use of radiobiological models during treatment planning optimisation, termed biological optimisation, has the advantage of directly accounting for the biological effect of the resulting dose distribution. This has led to an increased interest in the accurate derivation of radiobiological parameters from quantitative imaging to inform the models. This article reviews the progress in treatment planning using image-informed tumour biology, from focal dose escalation to the current trend of individualised biological treatment planning using image-derived radiobiological parameters, with a focus on prostate intensity-modulated radiotherapy (IMRT).
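As an example of the radiobiological models this review discusses, a Poisson tumour control probability (TCP) under the linear-quadratic cell-survival model can be written in a few lines. This is a generic textbook form with illustrative parameter values, not the specific image-derived, patient-specific models the review covers.

```python
import math

def tcp_lq_poisson(n0: float, alpha: float, beta: float,
                   dose_per_fraction: float, n_fractions: int) -> float:
    """Poisson TCP under the linear-quadratic model.

    n0    : initial number of tumour clonogens
    alpha : Gy^-1 linear radiosensitivity
    beta  : Gy^-2 quadratic radiosensitivity
    """
    d, n = dose_per_fraction, n_fractions
    # Surviving fraction after n fractions of dose d (no repopulation term).
    surviving_fraction = math.exp(-n * (alpha * d + beta * d * d))
    # Probability that zero clonogens survive (Poisson statistics).
    return math.exp(-n0 * surviving_fraction)
```

In biological optimisation, expressions of this kind enter the planning objective directly, so escalating dose to an imaging-defined subvolume raises the modelled TCP rather than merely matching an intensity threshold.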
Affiliation(s)
- Emily Jungmin Her
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Camperdown, NSW 2050, Australia
- Pejman Rowshanfarzad
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Martin A. Ebert
- Department of Physics, University of Western Australia, Crawley, WA 6009, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Nedlands, WA 6009, Australia
- 5D Clinics, Claremont, WA 6010, Australia