1
Gao L, Wang W, Meng X, Zhang S, Xu J, Ju S, Wang YC. TPA: Two-stage progressive attention segmentation framework for hepatocellular carcinoma on multi-modality MRI. Med Phys 2024; 51:4936-4947. [PMID: 38306473] [DOI: 10.1002/mp.16968]
Abstract
BACKGROUND Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a crucial role in the diagnosis and measurement of hepatocellular carcinoma (HCC). The multi-modality information contained in the multi-phase images of DCE-MRI is important for improving segmentation. However, this remains a challenging task due to the heterogeneity of HCC, which may cause one HCC lesion to have a varied imaging appearance in each phase of DCE-MRI. In particular, inconsistent lesion sizes and boundaries across phases weaken the correlation between modalities and may lead to inaccurate segmentation results. PURPOSE We aim to design a multi-modality segmentation model that can learn meaningful inter-phase correlation to achieve HCC segmentation. METHODS In this study, we propose a two-stage progressive attention segmentation framework (TPA) for HCC based on the transformer and the decision-making process of radiologists. Specifically, the first stage fuses features from multi-phase images to identify HCC and provide a localization region. In the second stage, a multi-modality attention transformer module (MAT) is designed to focus on the features that represent the actual lesion size. RESULTS We conducted training, validation, and testing on a single-center dataset (386 cases), followed by external testing on a batch of multi-center datasets (83 cases). Furthermore, we analyzed a subgroup of data with weak inter-phase correlation in the test set. The proposed model achieves Dice coefficients of 0.822 and 0.772 on the internal and external test sets, respectively, and 0.829 and 0.791 in the corresponding subgroups. The experimental results demonstrate that our model outperforms state-of-the-art models, particularly within the subgroup. CONCLUSIONS The proposed TPA provides the best segmentation results, and utilizing clinical prior knowledge for network design is practical and feasible.
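The Dice coefficient reported in these results measures the overlap between a predicted mask and a reference mask. A minimal NumPy sketch with toy binary masks (illustrative only, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 2D masks standing in for HCC segmentations
pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # 16 pixels
truth[3:7, 3:7] = True   # 16 pixels, 9 of them overlapping with pred
print(round(dice_coefficient(pred, truth), 4))  # 2*9 / (16+16) ≈ 0.5625
```

A Dice of 1.0 means perfect overlap; the `eps` term only guards against division by zero for empty masks.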
Affiliation(s)
- Lei Gao
  - Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Weilang Wang
  - Department of Radiology, Zhongda Hospital, Jiangsu Key Laboratory of Molecular and Functional Imaging, School of Medicine, Southeast University, Nanjing, China
- Xiangpan Meng
  - Department of Radiology, Zhongda Hospital, Jiangsu Key Laboratory of Molecular and Functional Imaging, School of Medicine, Southeast University, Nanjing, China
- Shuhang Zhang
  - Department of Radiology, Zhongda Hospital, Jiangsu Key Laboratory of Molecular and Functional Imaging, School of Medicine, Southeast University, Nanjing, China
- Jun Xu
  - Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Shenghong Ju
  - Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
  - Department of Radiology, Zhongda Hospital, Jiangsu Key Laboratory of Molecular and Functional Imaging, School of Medicine, Southeast University, Nanjing, China
- Yuan-Cheng Wang
  - Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
  - Department of Radiology, Zhongda Hospital, Jiangsu Key Laboratory of Molecular and Functional Imaging, School of Medicine, Southeast University, Nanjing, China
2
Sheikh TS, Cho M. Segmentation of Variants of Nuclei on Whole Slide Images by Using Radiomic Features. Bioengineering (Basel) 2024; 11:252. [PMID: 38534526] [DOI: 10.3390/bioengineering11030252]
Abstract
The histopathological segmentation of nuclear types is a challenging task because nuclei exhibit distinct morphologies, textures, and staining characteristics. Accurate segmentation is critical because it affects the diagnostic workflow for patient assessment. In this study, a framework was proposed for segmenting various types of nuclei from different organs of the body. The proposed framework improved the segmentation performance for each nuclear type using radiomics. First, we used distinct radiomic features to extract and analyze quantitative information about each type of nucleus, and then trained various classifiers on the best input sub-features of each radiomic feature, selected by a LASSO operator. Second, we fed the outputs of the best classifier into various segmentation models to learn the variants of nuclei. Using the MoNuSAC2020 dataset, we achieved state-of-the-art segmentation performance for each category of nuclei despite complex, overlapping, and obscure regions. The generalized adaptability of the proposed framework was verified by the consistent performance obtained on whole slide images from different organs of the body and across radiomic features.
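The first step this abstract describes, selecting the best sub-features with a LASSO operator and then training a classifier on them, can be sketched with scikit-learn. The synthetic feature matrix, the `alpha` value, and the random-forest classifier below are stand-in assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                   # 200 nuclei, 50 radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label driven by two features

# LASSO zeroes out uninformative coefficients; SelectFromModel keeps the
# surviving features, and the downstream classifier sees only those.
selector = SelectFromModel(Lasso(alpha=0.05))
clf = make_pipeline(selector, RandomForestClassifier(random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print(round(scores.mean(), 3))
```

Wrapping the selector and classifier in one pipeline keeps feature selection inside each cross-validation fold, avoiding selection leakage.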
Affiliation(s)
- Taimoor Shakeel Sheikh
  - AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
- Migyung Cho
  - AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
3
Li J, Jiang P, An Q, Wang GG, Kong HF. Medical image identification methods: A review. Comput Biol Med 2024; 169:107777. [PMID: 38104516] [DOI: 10.1016/j.compbiomed.2023.107777]
Abstract
The identification of medical images is an essential task in computer-aided diagnosis, medical image retrieval, and mining. Medical image data mainly include electronic health record data and gene information data, among others. Although intelligent imaging provides a better scheme for medical image analysis than traditional methods that rely on handcrafted features, it remains challenging due to the diversity of imaging modalities and clinical pathologies. The concepts behind these methods, such as machine learning, deep learning, convolutional neural networks, transfer learning, and other image processing technologies for medical images, are analyzed and summarized in this paper. We reviewed recent studies to provide a comprehensive overview of how these methods are applied in various medical image analysis tasks, such as object detection, image classification, image registration, and segmentation. In particular, we emphasize the latest progress and contributions of different methods, summarized by application scenario, including classification, segmentation, detection, and image registration. In addition, the applications of different methods are summarized by anatomical area, including the lung, brain, digital pathology, skin, kidney, breast, neuromyelitis, vertebrae, and the musculoskeletal system, among others. A critical discussion of open challenges and directions for future research is finally presented. In particular, excellent algorithms from computer vision, natural language processing, and autonomous driving are expected to be applied to medical image recognition in the future.
Affiliation(s)
- Juan Li
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
  - School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
  - Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Pan Jiang
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
- Qing An
  - School of Artificial Intelligence, Wuchang University of Technology, Wuhan, 430223, China
- Gai-Ge Wang
  - School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China
- Hua-Feng Kong
  - School of Information Engineering, Wuhan Business University, Wuhan, 430056, China
4
Majumder S, Katz S, Kontos D, Roshkovan L. State of the art: radiomics and radiomics-related artificial intelligence on the road to clinical translation. BJR Open 2024; 6:tzad004. [PMID: 38352179] [PMCID: PMC10860524] [DOI: 10.1093/bjro/tzad004]
Abstract
Radiomics and artificial intelligence carry the promise of increased precision in oncologic imaging assessments through their ability to harness thousands of occult digital imaging features embedded in conventional medical imaging data. While powerful, these technologies suffer from a number of sources of variability that currently impede clinical translation. To overcome this impediment, these sources of variability must be controlled through harmonization of imaging data acquisition across institutions, construction of standardized imaging protocols that maximize the acquisition of these features, harmonization of post-processing techniques, and big data resources to properly power studies for hypothesis testing. For this to be accomplished, multidisciplinary and multi-institutional collaboration will be critical.
Affiliation(s)
- Shweta Majumder
  - Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Sharyn Katz
  - Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Despina Kontos
  - Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
- Leonid Roshkovan
  - Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, United States
5
Huang TL, Lu NH, Huang YH, Twan WH, Yeh LR, Liu KY, Chen TB. Transfer learning with CNNs for efficient prostate cancer and BPH detection in transrectal ultrasound images. Sci Rep 2023; 13:21849. [PMID: 38071254] [PMCID: PMC10710441] [DOI: 10.1038/s41598-023-49159-1]
Abstract
Early detection of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) is crucial for maintaining the health and well-being of aging male populations. This study aims to evaluate the performance of transfer learning with convolutional neural networks (CNNs) for efficient classification of PCa and BPH in transrectal ultrasound (TRUS) images. A retrospective experimental design was employed in this study, with 1380 TRUS images for PCa and 1530 for BPH. Seven state-of-the-art deep learning (DL) methods were employed as classifiers with transfer learning applied to popular CNN architectures. Performance indices, including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), Kappa value, and Hindex (Youden's index), were used to assess the feasibility and efficacy of the CNN methods. The CNN methods with transfer learning demonstrated a high classification performance for TRUS images, with all accuracy, specificity, sensitivity, PPV, NPV, Kappa, and Hindex values surpassing 0.9400. The optimal accuracy, sensitivity, and specificity reached 0.9987, 0.9980, and 0.9980, respectively, as evaluated using twofold cross-validation. The investigated CNN methods with transfer learning showcased their efficiency and ability for the classification of PCa and BPH in TRUS images. Notably, the EfficientNetV2 with transfer learning displayed a high degree of effectiveness in distinguishing between PCa and BPH, making it a promising tool for future diagnostic applications.
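The performance indices this abstract lists (sensitivity, specificity, PPV, NPV, accuracy, Cohen's kappa, and Youden's index, which the authors call the Hindex) all derive from the binary confusion matrix. A small pure-Python sketch, with made-up counts for illustration:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Classification indices computed from confusion-matrix counts."""
    n = tp + fp + tn + fn
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    acc = (tp + tn) / n              # accuracy (observed agreement)
    # Cohen's kappa: agreement corrected for chance agreement pe
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (acc - pe) / (1 - pe)
    youden = sens + spec - 1         # Youden's J ("Hindex" in the abstract)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "kappa": kappa, "youden": youden}

m = binary_metrics(tp=98, fp=2, tn=96, fn=4)
print(round(m["sensitivity"], 4), round(m["specificity"], 4), round(m["kappa"], 4))
```

With these toy counts, all indices land in the >0.94 range the study reports for its CNNs.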
Affiliation(s)
- Te-Li Huang
  - Department of Radiology, Kaohsiung Veterans General Hospital, No. 386, Dazhong 1st Rd., Zuoying Dist., Kaohsiung, 81362, Taiwan
- Nan-Han Lu
  - Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
  - Department of Pharmacy, Tajen University, No. 20, Weixin Rd., Yanpu Township, Pingtung, 90741, Taiwan
  - Department of Radiology, E-DA Hospital, I-Shou University, No. 1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Yung-Hui Huang
  - Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Wen-Hung Twan
  - Department of Life Sciences, National Taitung University, No. 369, Sec. 2, University Rd., Taitung, 95092, Taiwan
- Li-Ren Yeh
  - Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, No. 1, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
- Kuo-Ying Liu
  - Department of Radiology, E-DA Hospital, I-Shou University, No. 1, Yida Rd., Jiao-Su Village, Yan-Chao District, Kaohsiung, 82445, Taiwan
- Tai-Been Chen
  - Department of Medical Imaging and Radiological Science, I-Shou University, No. 8, Yida Rd., Jiaosu Village, Yanchao District, Kaohsiung, 82445, Taiwan
  - Institute of Statistics, National Yang Ming Chiao Tung University, No. 1001, University Road, Hsinchu, 30010, Taiwan
6
Sun Z, Wu P, Cui Y, Liu X, Wang K, Gao G, Wang H, Zhang X, Wang X. Deep-Learning Models for Detection and Localization of Visible Clinically Significant Prostate Cancer on Multi-Parametric MRI. J Magn Reson Imaging 2023; 58:1067-1081. [PMID: 36825823] [DOI: 10.1002/jmri.28608]
Abstract
BACKGROUND Deep learning for diagnosing clinically significant prostate cancer (csPCa) is feasible but needs further evaluation in patients with prostate-specific antigen (PSA) levels of 4-10 ng/mL. PURPOSE To explore diffusion-weighted imaging (DWI), alone and in combination with T2-weighted imaging (T2WI), for deep-learning-based models to detect and localize visible csPCa. STUDY TYPE Retrospective. POPULATION One thousand six hundred twenty-eight patients with systematic and cognitive-targeted biopsy confirmation (1007 csPCa, 621 non-csPCa) were divided into model development (N = 1428) and hold-out test (N = 200) datasets. FIELD STRENGTH/SEQUENCE DWI with a diffusion-weighted single-shot gradient echo planar imaging sequence and T2WI with a T2-weighted fast spin echo sequence at 3.0 T and 1.5 T. ASSESSMENT The ground truth of csPCa was annotated by two radiologists in consensus. A diffusion model, with DWI and apparent diffusion coefficient (ADC) maps as input, and a biparametric model, with DWI, ADC, and T2WI as input, were trained based on U-Net. Three radiologists provided the PI-RADS (version 2.1) assessment. The performances were determined at the lesion, location, and patient levels. STATISTICAL TESTS Performance was evaluated using the areas under the ROC curves (AUCs), sensitivity, specificity, and accuracy. A P value <0.05 was considered statistically significant. RESULTS The lesion-level sensitivities of the diffusion model, the biparametric model, and the PI-RADS assessment were 89.0%, 85.3%, and 90.8% (P = 0.289-0.754). At the patient level, the diffusion model had significantly higher sensitivity than the biparametric model (96.0% vs. 90.0%), while there was no significant difference in specificity (77.0% vs. 85.0%, P = 0.096). For location analysis, there were no significant differences in AUCs between the models (sextant-level, 0.895 vs. 0.893, P = 0.777; zone-level, 0.931 vs. 0.917, P = 0.282), and both models had significantly higher AUCs than the PI-RADS assessment (sextant-level, 0.734; zone-level, 0.863). DATA CONCLUSION The diffusion model achieved the best performance in detecting and localizing csPCa in patients with PSA levels of 4-10 ng/mL. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Zhaonan Sun
  - Department of Radiology, Peking University First Hospital, Beijing, China
- Pengsheng Wu
  - Beijing Smart Tree Medical Technology Co. Ltd, Beijing, China
- Yingpu Cui
  - Department of Nuclear Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong, China
  - State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Xiang Liu
  - Department of Radiology, Peking University First Hospital, Beijing, China
- Kexin Wang
  - School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Ge Gao
  - Department of Radiology, Peking University First Hospital, Beijing, China
- Huihui Wang
  - Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaodong Zhang
  - Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
  - Department of Radiology, Peking University First Hospital, Beijing, China
7
Pachetti E, Colantonio S. 3D-Vision-Transformer Stacking Ensemble for Assessing Prostate Cancer Aggressiveness from T2w Images. Bioengineering (Basel) 2023; 10:1015. [PMID: 37760117] [PMCID: PMC10525095] [DOI: 10.3390/bioengineering10091015]
Abstract
Vision transformers represent the cutting edge in computer vision and are usually employed on two-dimensional data following a transfer learning approach. In this work, we propose a trained-from-scratch stacking ensemble of 3D vision transformers to assess prostate cancer aggressiveness from T2-weighted images, to help radiologists diagnose this disease without performing a biopsy. We trained 18 3D vision transformers on T2-weighted axial acquisitions and combined them into two- and three-model stacking ensembles. We defined two metrics for measuring model prediction confidence, and we trained all the ensemble combinations with five-fold cross-validation, evaluating their accuracy, confidence in predictions, and calibration. In addition, we optimized the 18 base ViTs and compared the best-performing base and ensemble models by re-training them on a 100-sample bootstrapped training set and evaluating each model on the hold-out test set. We compared the two distributions by calculating the median and the 95% confidence interval and performing a Wilcoxon signed-rank test. The best-performing 3D-vision-transformer stacking ensemble provided state-of-the-art results in terms of area under the receiver operating characteristic curve (0.89 [0.61-1]) and exceeded the area under the precision-recall curve of the base model by 22% (p < 0.001). However, it was less confident in classifying the positive class.
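The stacking idea described here, base models whose out-of-fold predictions feed a meta-learner, can be sketched with scikit-learn's `StackingClassifier`. Simple classical classifiers stand in for the trained 3D ViTs, and the dataset is synthetic; this is a sketch of the ensembling technique, not the paper's pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for per-case image features
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Base estimators play the role of the individually trained 3D ViTs; the
# meta-learner combines their out-of-fold probability estimates.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("svc", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),
    cv=5,
)
mean_acc = cross_val_score(stack, X, y, cv=5).mean()
print(round(mean_acc, 3))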
Affiliation(s)
- Eva Pachetti
  - “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), 56127 Pisa, Italy
  - Department of Information Engineering (DII), University of Pisa, 56122 Pisa, Italy
- Sara Colantonio
  - “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), 56127 Pisa, Italy
8
Dasari Y, Duffin J, Sayin ES, Levine HT, Poublanc J, Para AE, Mikulis DJ, Fisher JA, Sobczyk O, Khamesee MB. Convolutional Neural Networks to Assess Steno-Occlusive Disease Using Cerebrovascular Reactivity. Healthcare (Basel) 2023; 11:2231. [PMID: 37628429] [PMCID: PMC10454585] [DOI: 10.3390/healthcare11162231]
Abstract
Cerebrovascular reactivity (CVR) is a provocative test used with blood-oxygenation-level-dependent (BOLD) magnetic resonance imaging (MRI) studies, in which a vasoactive stimulus is applied and the corresponding changes in cerebral blood flow (CBF) are measured. The most common clinical application is the assessment of cerebral perfusion insufficiency in patients with steno-occlusive disease (SOD). Globally, millions of people suffer from cerebrovascular diseases, and SOD is the most common cause of ischemic stroke. Therefore, CVR analyses can play a vital role in early diagnosis and in guiding clinical treatment. This study develops a convolutional neural network (CNN)-based clinical decision support system to facilitate the screening of SOD patients by discriminating between healthy and unhealthy CVR maps. The networks were trained on a confidential CVR dataset with two classes: 68 healthy control subjects and 163 SOD patients. This original dataset was split in a ratio of 80%-10%-10% for training, validation, and testing, respectively, and image augmentations were applied to the training and validation sets. Additionally, popular pre-trained networks were imported and customized for the classification task to conduct transfer learning experiments. Results indicate that a customized CNN with a double-stacked convolution layer architecture produces the best results, consistent with expert clinical readings.
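The 80%-10%-10% split described above can be reproduced with two chained stratified splits. A sketch assuming scikit-learn, with index arrays standing in for the 231 CVR maps (the authors' actual splitting code is not published here):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(231).reshape(-1, 1)        # 231 CVR maps (68 healthy + 163 SOD)
y = np.array([0] * 68 + [1] * 163)       # 0 = healthy control, 1 = SOD patient

# First carve off 80% for training, then split the remaining 20% evenly into
# validation and test, stratifying so both classes appear in every subset.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(
    X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=0)
print(len(X_tr), len(X_val), len(X_te))
```

Stratification matters here because the classes are imbalanced (68 vs. 163); without it, a small validation or test subset could end up with very few healthy controls.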
Affiliation(s)
- Yashesh Dasari
  - Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- James Duffin
  - Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
  - Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
- Ece Su Sayin
  - Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
  - Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
- Harrison T. Levine
  - Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
  - Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
- Julien Poublanc
  - Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
- Andrea E. Para
  - Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
- David J. Mikulis
  - Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
  - Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
- Joseph A. Fisher
  - Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
  - Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
  - Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
- Olivia Sobczyk
  - Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
  - Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
- Mir Behrad Khamesee
  - Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
9
Kim H, Kang SW, Kim JH, Nagar H, Sabuncu M, Margolis DJA, Kim CK. The role of AI in prostate MRI quality and interpretation: Opportunities and challenges. Eur J Radiol 2023; 165:110887. [PMID: 37245342] [DOI: 10.1016/j.ejrad.2023.110887]
Abstract
Prostate MRI plays an important role in imaging the prostate gland and surrounding tissues, particularly in the diagnosis and management of prostate cancer. With the widespread adoption of multiparametric magnetic resonance imaging in recent years, the concerns surrounding the variability of imaging quality have garnered increased attention. Several factors contribute to the inconsistency of image quality, such as acquisition parameters, scanner differences and interobserver variabilities. While efforts have been made to standardize image acquisition and interpretation via the development of systems, such as PI-RADS and PI-QUAL, the scoring systems still depend on the subjective experience and acumen of humans. Artificial intelligence (AI) has been increasingly used in many applications, including medical imaging, due to its ability to automate tasks and lower human error rates. These advantages have the potential to standardize the tasks of image interpretation and quality control of prostate MRI. Despite its potential, thorough validation is required before the implementation of AI in clinical practice. In this article, we explore the opportunities and challenges of AI, with a focus on the interpretation and quality of prostate MRI.
Affiliation(s)
- Heejong Kim
  - Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Shin Won Kang
  - Research Institute for Future Medicine, Samsung Medical Center, Republic of Korea
- Jae-Hun Kim
  - Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
- Himanshu Nagar
  - Department of Radiation Oncology, Weill Cornell Medical College, 525 E 68th St, New York, NY 10021, United States
- Mert Sabuncu
  - Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Daniel J A Margolis
  - Department of Radiology, Weill Cornell Medical College, 525 E 68th St Box 141, New York, NY 10021, United States
- Chan Kyo Kim
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Republic of Korea
10
Zhang Y, Li W, Zhang Z, Xue Y, Liu YL, Nie K, Su MY, Ye Q. Differential diagnosis of prostate cancer and benign prostatic hyperplasia based on DCE-MRI using bi-directional CLSTM deep learning and radiomics. Med Biol Eng Comput 2023; 61:757-771. [PMID: 36598674] [PMCID: PMC10548872] [DOI: 10.1007/s11517-022-02759-x]
Abstract
Dynamic contrast-enhanced MRI (DCE-MRI) has long been routinely included in the prostate MRI protocol, but its role has been questioned. It provides rich spatial and temporal information; however, the information it contains cannot be fully extracted through radiologists' visual evaluation, and more sophisticated computer algorithms are needed to extract the higher-order information. The purpose of this study was to apply a new deep learning algorithm, the bi-directional convolutional long short-term memory (CLSTM) network, together with radiomics analysis, for the differential diagnosis of prostate cancer (PCa) and benign prostatic hyperplasia (BPH). To systematically investigate the optimal amount of peritumoral tissue for improving diagnosis, a total of 9 ROIs were delineated using 3 different methods. The results showed that bi-directional CLSTM with a ±20% region-growing peritumoral ROI achieved a mean AUC of 0.89, better than the mean AUC of 0.84 obtained using the tumor alone without any peritumoral tissue (p = 0.25, not significant). For all 9 ROIs, deep learning had a higher AUC than radiomics, reaching significance only for the ±20% region-growing peritumoral ROI (0.89 vs. 0.79, p = 0.04). In conclusion, the kinetic information extracted from DCE-MRI using bi-directional CLSTM may provide helpful supplementary information for the diagnosis of PCa.
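A +20% peritumoral ROI of the kind compared above can be approximated by dilating the tumor mask until its area grows by 20%. A NumPy-only sketch on a toy mask; the authors' exact region-growing method may differ:

```python
import numpy as np

def dilate_once(mask: np.ndarray) -> np.ndarray:
    """One step of 4-neighbour binary dilation, NumPy only."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def grow_roi(mask: np.ndarray, factor: float = 1.2) -> np.ndarray:
    """Dilate the tumor mask until its area reaches `factor` x the original,
    approximating a +20% region-growing peritumoral ROI."""
    target = factor * mask.sum()
    grown = mask
    while grown.sum() < target:
        grown = dilate_once(grown)
    return grown

tumor = np.zeros((32, 32), dtype=bool)
tumor[12:20, 12:20] = True   # 64-pixel toy tumor
roi = grow_roi(tumor)
print(tumor.sum(), roi.sum())
```

The grown ROI always contains the original tumor, so the CLSTM (or radiomics) input includes both the lesion and a thin rim of surrounding tissue.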
Affiliation(s)
- Yang Zhang
  - Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
  - Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Weikang Li
  - Department of Radiology, The Children's Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Zhao Zhang
  - Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yingnan Xue
  - Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yan-Lin Liu
  - Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Ke Nie
  - Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Min-Ying Su
  - Department of Radiological Sciences, University of California, 164 Irvine Hall, Irvine, CA, 92697, USA
- Qiong Ye
  - High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, 350 Shushanhu Road, Hefei, 230031, Anhui, People's Republic of China
11
Ragab M, Kateb F, El-Sawy EK, Binyamin SS, Al-Rabia MW, Mansouri RA. Archimedes Optimization Algorithm with Deep Learning-Based Prostate Cancer Classification on Magnetic Resonance Imaging. Healthcare (Basel) 2023; 11:590. [PMID: 36833124] [PMCID: PMC9957347] [DOI: 10.3390/healthcare11040590]
Abstract
Prostate cancer (PCa) is becoming one of the most frequently occurring cancers among men and causes an even greater number of deaths. Due to the complexity of tumor masses, radiologists find it difficult to identify PCa accurately. Over the years, several PCa-detection methods have been formulated, but these methods cannot identify cancer efficiently. Artificial intelligence (AI) encompasses information technologies that simulate natural or biological phenomena and human intelligence to address problems. AI technologies have been broadly implemented in the healthcare domain, including 3D printing, disease diagnosis, health monitoring, hospital scheduling, clinical decision support, classification and prediction, and medical data analysis. These applications significantly boost the cost-effectiveness and accuracy of healthcare services. This article introduces an Archimedes Optimization Algorithm with Deep Learning-based Prostate Cancer Classification (AOADLB-P2C) model for MRI images. The presented AOADLB-P2C model examines MRI images for the identification of PCa. To accomplish this, the model performs pre-processing in two stages: adaptive median filtering (AMF)-based noise removal and contrast enhancement. It then extracts features via a densely connected network (DenseNet-161) with a root-mean-square propagation (RMSProp) optimizer. Finally, it classifies PCa using the AOA with a least-squares support vector machine (LS-SVM). The presented model is evaluated on a benchmark MRI dataset, and the comparative experimental results demonstrate its improvements over other recent approaches.
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Department of Mathematics, Faculty of Science, Al-Azhar University, Cairo 11884, Egypt
- Correspondence:
- Faris Kateb
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- E. K. El-Sawy
- Faculty of Earth Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Geology Department, Faculty of Science, Al-Azhar University (Assiut branch), Assiut 71524, Egypt
- Sami Saeed Binyamin
- Computer and Information Technology Department, The Applied College, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mohammed W. Al-Rabia
- Department of Medical Microbiology and Parasitology, Faculty of Medicine, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Health Promotion Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Rasha A. Mansouri
- Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Department of Biochemistry, Faculty of Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
12
González-Patiño D, Villuendas-Rey Y, Saldaña-Pérez M, Argüelles-Cruz AJ. A Novel Bioinspired Algorithm for Mixed and Incomplete Breast Cancer Data Classification. Int J Environ Res Public Health 2023; 20:3240. [PMID: 36833936 PMCID: PMC9965500 DOI: 10.3390/ijerph20043240]
Abstract
The pre-diagnosis of cancer has been approached from various perspectives, so it is imperative to continue improving classification algorithms to achieve early diagnosis of the disease and improve patient survival. In the medical field, data are often lost for various reasons, and many datasets mix numerical and categorical values; very few algorithms can classify datasets with such characteristics. Therefore, this study proposes the modification of an existing algorithm for the classification of cancer. The modified algorithm showed excellent results compared with classical classification algorithms. The AISAC-MMD (Mixed and Missing Data) is based on the AISAC and was modified to work with datasets containing missing and mixed values. It showed significantly better performance than bio-inspired or classical classification algorithms: statistical analysis established that the AISAC-MMD significantly outperformed the Nearest Neighbor, C4.5, Naïve Bayes, ALVOT, Naïve Associative Classifier, AIRS1, Immunos1, and CLONALG algorithms in breast cancer classification.
Affiliation(s)
- David González-Patiño
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Ciudad de México 07738, Mexico
- Yenny Villuendas-Rey
- Instituto Politécnico Nacional, Centro de Innovación y Desarrollo Tecnológico en Cómputo, Ciudad de México 07700, Mexico
- Magdalena Saldaña-Pérez
- Centro de Investigación en Computación, Instituto Politécnico Nacional, Ciudad de México 07738, Mexico
13
Jamshidi G, Abbasian Ardakani A, Ghafoori M, Babapour Mofrad F, Saligheh Rad H. Radiomics-based machine-learning method to diagnose prostate cancer using mp-MRI: a comparison between conventional and fused models. MAGMA 2023; 36:55-64. [PMID: 36114898 DOI: 10.1007/s10334-022-01037-z]
Abstract
OBJECTIVES Multiparametric MRI (mp-MRI) is widely used for detection, localization, and staging of prostate cancer (PCa). However, its assessment suffers from poor reproducibility among readers. The aim of this study was to evaluate radiomics models for diagnosing PCa using high-resolution T2-weighted (T2-W) and dynamic contrast-enhanced (DCE) MRI. MATERIALS AND METHODS Thirty-two patients with high prostate-specific antigen levels were recruited. Prostate biopsies served as the reference standard to differentiate between 66 benign and 36 malignant prostate lesions. 181 features were extracted from each modality. K-nearest neighbors, artificial neural network, decision tree, and linear discriminant analysis classifiers were used for the machine-learning study. The leave-one-out cross-validation method was used to prevent overfitting and build robust models. RESULTS Radiomics analysis showed that T2-W images were more effective for PCa detection than DCE images. Local binary pattern features and speeded-up robust features had the highest predictive ability in T2-W and DCE images, respectively. Classifier fusion using the decision template method showed the highest performance, with accuracy, specificity, and sensitivity of 100%. DISCUSSION The findings of this framework provide PCa researchers with a promising method for reliable detection of prostate lesions in MR images by a fused model.
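The leave-one-out protocol described above can be sketched with a toy 1-nearest-neighbor classifier (illustrative only; the study evaluated several classifiers on 181 radiomic features):

```python
import numpy as np

def loo_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out cross-validation of a 1-nearest-neighbor classifier.

    Each sample is held out in turn and classified by its nearest
    neighbor (Euclidean distance) among the remaining samples.
    """
    n = len(y)
    correct = 0
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf          # exclude the held-out sample itself
        correct += y[np.argmin(dists)] == y[i]
    return correct / n

# Two well-separated clusters: LOO 1-NN classifies every sample correctly.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_accuracy(X, y)
```

With only 32 patients, leave-one-out makes maximal use of the data at the cost of n model fits.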
Affiliation(s)
- Ghazaleh Jamshidi
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ali Abbasian Ardakani
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mahyar Ghafoori
- Department of Radiology, School of Medicine, Hazrat Rasoul Akram Hospital, Iran University of Medical Sciences, Tehran, Iran
- Farshid Babapour Mofrad
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Hamidreza Saligheh Rad
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Quantitative MR Imaging and Spectroscopy Group, Research Center for Cellular and Molecular Imaging, Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
14
Belue MJ, Harmon SA, Lay NS, Daryanani A, Phelps TE, Choyke PL, Turkbey B. The Low Rate of Adherence to Checklist for Artificial Intelligence in Medical Imaging Criteria Among Published Prostate MRI Artificial Intelligence Algorithms. J Am Coll Radiol 2023; 20:134-145. [PMID: 35922018 PMCID: PMC9887098 DOI: 10.1016/j.jacr.2022.05.022]
Abstract
OBJECTIVE To determine the rigor, generalizability, and reproducibility of published classification and detection artificial intelligence (AI) models for prostate cancer (PCa) on MRI using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines, a 42-item checklist considered a measure of best practice for presenting and reviewing medical imaging AI research. MATERIALS AND METHODS This review searched the English literature for studies proposing PCa AI detection and classification models on MRI. Each study was evaluated with the CLAIM checklist. Additional outcomes for which data were sought included measures of AI model performance (eg, area under the curve [AUC], sensitivity, specificity, free-response operating characteristic curves), training, validation, and testing group sample sizes, AI approach, detection versus classification AI, public dataset utilization, MRI sequences used, and definition of the gold standard for ground truth. The percentage of CLAIM checklist fulfillment was used to stratify studies into quartiles. Wilcoxon's rank-sum test was used for pair-wise comparisons. RESULTS In all, 75 studies were identified, and 53 studies qualified for analysis. The original CLAIM items that most studies did not fulfill include item 12 (77% no): de-identification methods; item 13 (68% no): handling missing data; item 15 (47% no): rationale for choosing the ground truth reference standard; item 18 (55% no): measurements of inter- and intrareader variability; item 31 (60% no): inclusion of validated interpretability maps; and item 37 (92% no): inclusion of failure analysis to elucidate AI model weaknesses. Comparing AUC across quartiles of CLAIM fulfillment revealed significantly different mean AUC scores between quartile 1 and quartile 2 (0.78 versus 0.86, P = .034) and between quartile 1 and quartile 4 (0.78 versus 0.89, P = .003). Based on the additional information and outcome metrics gathered in this study, additional measures of best practice are defined; these new items include disclosure of public dataset usage, definition of ground truth in comparison with other referenced works in the defined task, and sample-size power calculation. CONCLUSION A large proportion of AI studies do not fulfill key items in the CLAIM guidelines within their methods and results sections. The percentage of CLAIM checklist fulfillment is weakly associated with improved AI model performance. Additions or supplementations to CLAIM are recommended to improve publishing standards and aid reviewers in determining study rigor.
Affiliation(s)
- Mason J Belue
- Medical Research Scholars Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Stephanie A Harmon
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Nathan S Lay
- Staff Scientist, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Asha Daryanani
- Intramural Research Training Program Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Tim E Phelps
- Postdoctoral Fellow, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Peter L Choyke
- Artificial Intelligence Resource, Chief of Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
- Baris Turkbey
- Senior Clinician/Director, Artificial Intelligence Resource, Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, Maryland
15
Olaniyi EO, Komolafe TE, Oyedotun OK, Oyemakinde TT, Abdelaziz M, Khashman A. Eye Melanoma Diagnosis System using Statistical Texture Feature Extraction and Soft Computing Techniques. J Biomed Phys Eng 2023; 13:77-88. [PMID: 36818006 PMCID: PMC9923246 DOI: 10.31661/jbpe.v0i0.2101-1268]
Abstract
BACKGROUND Eye melanoma is a malignancy that grows and develops in the tissues of the middle layer of the eyeball, producing dark spots on the iris and changes in the size and shape of the pupil and in vision. OBJECTIVE The current study aims to diagnose eye melanoma using a gray-level co-occurrence matrix (GLCM) for texture extraction together with soft computing techniques, making diagnosis faster, saving time, and preventing the misdiagnosis that can result from the physician's manual approach. MATERIAL AND METHODS In this experimental study, two models are proposed for the diagnosis of eye melanoma: a backpropagation neural network (BPNN) and a radial basis function network (RBFN). The images used for training and validation were obtained from the eye-cancer database. RESULTS Based on our experiments, the proposed models achieve recognition rates of 92.31% and 94.70% for GLCM+BPNN and GLCM+RBFN, respectively. CONCLUSION Compared with previously proposed models, the models used in the current study perform better.
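The GLCM texture-extraction step above can be sketched in NumPy; this simplified version counts horizontally adjacent pixel pairs and derives only the contrast feature, one of the several Haralick-style statistics typically taken from a GLCM:

```python
import numpy as np

def glcm_contrast(img: np.ndarray, levels: int) -> float:
    """Contrast feature from a gray-level co-occurrence matrix (GLCM).

    Counts horizontally adjacent pixel pairs (offset (0, 1)), normalizes
    the counts to probabilities p(i, j), and returns
    sum_{i,j} (i - j)^2 * p(i, j).
    """
    glcm = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# A flat image has zero contrast; a 0/1 checkerboard maximizes it.
flat = np.zeros((4, 4), dtype=int)
checker = np.indices((4, 4)).sum(axis=0) % 2
```

Other GLCM statistics (energy, homogeneity, correlation) are different weighted sums over the same probability matrix `p`.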
Affiliation(s)
- Ebenezer Obaloluwa Olaniyi
- Center for Quantum Computational System, Department of Electrical and Electronics Engineering, Adeleke University, Osun State, Nigeria
- European Centre for Research and Academic Affairs, Lefkosa, Turkey
- Temitope Emmanuel Komolafe
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Oyebade Kayode Oyedotun
- Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg
- Mohamed Abdelaziz
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Adnan Khashman
- European Centre for Research and Academic Affairs, Turkey
16
Role of Ensemble Deep Learning for Brain Tumor Classification in Multiple Magnetic Resonance Imaging Sequence Data. Diagnostics (Basel) 2023; 13:481. [PMID: 36766587 PMCID: PMC9914433 DOI: 10.3390/diagnostics13030481]
Abstract
Biopsy is the gold standard method for tumor grading. However, due to its invasive nature, it has sometimes proved fatal for brain tumor patients. As a result, a non-invasive computer-aided diagnosis (CAD) tool is required. Recently, many magnetic resonance imaging (MRI)-based CAD tools have been proposed for brain tumor grading. MRI has several sequences, each of which can express tumor structure in a different way; however, the most suitable MRI sequence for brain tumor classification is not yet known. The most common brain tumor is glioma, which is also the most fatal form. Therefore, in the proposed study, to maximize the ability to classify low-grade versus high-grade glioma, three datasets were designed comprising three MRI sequences: T1-weighted (T1W), T2-weighted (T2W), and fluid-attenuated inversion recovery (FLAIR). Five well-established convolutional neural networks (AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50) were adopted for tumor classification, and an ensemble algorithm (MajVot) was proposed using the majority vote of these five deep learning (DL) models to produce more consistent and improved results than any individual model. A five-fold cross-validation (K5-CV) protocol was adopted for training and testing. For the proposed ensembled classifier with K5-CV, the highest test accuracies of 98.88 ± 0.63%, 97.98 ± 0.86%, and 94.75 ± 0.61% were achieved for FLAIR, T2W, and T1W MRI data, respectively. FLAIR MRI data were found to be the most significant for brain tumor classification, showing accuracy improvements of 4.17% and 0.91% over the T1W and T2W sequence data, respectively. The proposed MajVot ensemble improved the average accuracy across the three datasets by 3.60%, 2.84%, 1.64%, 4.27%, and 1.14% relative to AlexNet, VGG16, ResNet18, GoogleNet, and ResNet50, respectively.
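The majority-vote ensembling described above (MajVot) can be sketched as follows, assuming each of the five models outputs a hard class label per case:

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Combine class predictions from several models by majority vote.

    `predictions` has shape (n_models, n_samples); the returned array
    holds, for each sample, the most frequently predicted class
    (ties resolved in favor of the smallest class label).
    """
    n_models, n_samples = predictions.shape
    voted = np.empty(n_samples, dtype=predictions.dtype)
    for s in range(n_samples):
        voted[s] = np.bincount(predictions[:, s]).argmax()
    return voted

# Five models vote on three cases (0 = low-grade, 1 = high-grade glioma).
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0],
                  [0, 1, 1]])
consensus = majority_vote(preds)
```

With an odd number of voters and binary labels, the vote is never tied, which is one practical reason to ensemble five rather than four models.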
17
Deep Learning to Classify AL versus ATTR Cardiac Amyloidosis MR Images. Biomedicines 2023; 11:193. [PMID: 36672702 PMCID: PMC9855341 DOI: 10.3390/biomedicines11010193]
Abstract
The aim of this work was to compare the classification of cardiac MR images of AL versus ATTR amyloidosis by neural networks and by experienced human readers. Cine-MR images and late gadolinium enhancement (LGE) images of 120 patients were studied (70 AL and 50 ATTR). A VGG16 convolutional neural network (CNN) was trained with a 5-fold cross-validation process, taking care to assign all images of a given patient strictly to either the training group or the test group. The analysis was performed at the patient level by averaging the predictions obtained for each image. The classification accuracy obtained between AL and ATTR amyloidosis was 0.750 for cine-CNN, 0.611 for gado-CNN, and between 0.617 and 0.675 for human readers. The corresponding AUC of the ROC curve was 0.839 for cine-CNN, 0.679 for gado-CNN (p < 0.004 vs. cine), and 0.714 for the best human reader (p < 0.007 vs. cine). Logistic regression with cine-CNN and gado-CNN, as well as analysis focused on the specific orientation plane, did not change the overall results. We conclude that cine-CNN leads to significantly better discrimination between AL and ATTR amyloidosis than gado-CNN or human readers, but with lower performance than reported in studies where visual diagnosis is easy, and is currently suboptimal for clinical practice.
18
Beyond Multiparametric MRI and towards Radiomics to Detect Prostate Cancer: A Machine Learning Model to Predict Clinically Significant Lesions. Cancers (Basel) 2022; 14:6156. [PMID: 36551642 PMCID: PMC9776977 DOI: 10.3390/cancers14246156]
Abstract
The risk of misclassifying clinically significant prostate cancer (csPCa) on multiparametric magnetic resonance imaging persists even with the updated PI-RADS score, and although definitions of csPCa vary, patients with Gleason Grade group (GG) ≥ 3 have a significantly worse prognosis. This study aims to develop a machine learning model predicting csPCa (i.e., any GG ≥ 3 lesion at target biopsy) from mpMRI radiomic features and to analyze similarities between GG groups. One hundred and two patients with 117 PI-RADS ≥ 3 lesions at mpMRI underwent target+systematic biopsy, providing a histologic diagnosis of PCa: 61 GG < 3 and 56 GG ≥ 3. Radiomic features were generated locally from apparent diffusion coefficient maps and reduced to four features using the LASSO method and the Wilcoxon rank-sum test (p < 0.001). After data augmentation, the features were used to train a support vector machine classifier, subsequently validated on a test set. To assess the results, Kruskal-Wallis and Wilcoxon rank-sum tests (p < 0.001) and receiver operating characteristic (ROC)-related metrics were used. GG1 and GG2 were equivalent (p = 0.26), whilst clear separations exist between either GG[1,2] and GG ≥ 3 (p < 10⁻⁶). On the test set, the area under the curve = 0.88 (95% CI, 0.68-0.94), with positive and negative predictive values both 84%. The features retain a histological interpretation. Our model hints at GG2 being much more similar to GG1 than to GG ≥ 3.
19
Shao L, Liu Z, Liu J, Yan Y, Sun K, Liu X, Lu J, Tian J. Patient-level grading prediction of prostate cancer from mp-MRI via GMINet. Comput Biol Med 2022; 150:106168. [PMID: 36240594 DOI: 10.1016/j.compbiomed.2022.106168]
Abstract
Magnetic resonance imaging (MRI) is considered the best imaging modality for non-invasive observation of prostate cancer. However, existing quantitative analysis methods still face challenges in patient-level prediction, including accuracy, interpretability, context understanding, dependence on tumor delineation, and fusion of multiple sequences. Therefore, we propose a topological graph-guided multi-instance network (GMINet) to capture the global contextual information of multi-parametric MRI for patient-level prediction. We integrate visual information from multi-slice MRI with slice-to-slice correlations for a more complete context, and propose a novel attention-flowing strategy to fuse the different MRI-based network branches for mp-MRI. Our method achieves state-of-the-art performance for prostate cancer grading on a multi-center dataset (N = 478) and a public dataset (N = 204): from the test sets of five-fold cross-validation, the five-class Grade Group accuracy is 81.1 ± 1.8% on the multi-center dataset, and the area under the curve for detecting clinically significant prostate cancer is 0.801 ± 0.018 on the public dataset. The model also achieves tumor detection through attention analysis, which improves its interpretability. This novel method holds promise for further improving the predictive accuracy of MRI in the diagnosis and treatment of prostate cancer.
Affiliation(s)
- Lizhi Shao
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Jiangang Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China and Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
- Ye Yan
- Department of Urology, Peking University Third Hospital, Beijing, 100191, China
- Kai Sun
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Xiangyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Jian Lu
- Department of Urology, Peking University Third Hospital, Beijing, 100191, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, China and Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
20
Wang X, He D, Feng F, Ashton-Miller JA, DeLancey JOL, Luo J. Multi-label classification of pelvic organ prolapse using stress magnetic resonance imaging with deep learning. Int Urogynecol J 2022; 33:2869-2877. [PMID: 35083500 PMCID: PMC9325920 DOI: 10.1007/s00192-021-05064-7]
Abstract
INTRODUCTION AND HYPOTHESIS We aimed to develop a deep learning-based multi-label classification model to simultaneously diagnose three types of pelvic organ prolapse using stress magnetic resonance imaging (MRI). METHODS Our dataset consisted of 213 labeled midsagittal MR images at maximum Valsalva. For each MR image, the two endpoints of the sacrococcygeal inferior-pubic point line were auto-localized. Based on this line, a region of interest was automatically selected as input to a modified deep learning model, ResNet-50, for diagnosis. An unlabeled MRI dataset, a public dataset, and a synthetic dataset were used along with the labeled image dataset to train the model through a novel training strategy. We conducted fivefold cross-validation and evaluated the classification results using precision, recall, F1 score, and area under the curve (AUC). RESULTS The average precision, recall, F1 score, and AUC of our proposed multi-label classification model for the three types of prolapse were 0.84, 0.72, 0.77, and 0.91 respectively, improved from 0.64, 0.53, 0.57, and 0.83 with the original ResNet-50. Classification took 0.18 s per patient. CONCLUSIONS The proposed deep learning-based model was shown to be feasible and fast in simultaneously diagnosing three types of prolapse from pelvic floor stress MRI, which could facilitate computer-aided prolapse diagnosis and treatment planning.
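The evaluation metrics above are computed per label in a multi-label setting (each prolapse type is a separate binary decision) and then averaged; a minimal sketch for one label:

```python
import numpy as np

def precision_recall_f1(y_true: np.ndarray, y_pred: np.ndarray):
    """Precision, recall, and F1 for one binary label of a multi-label task.

    In a multi-label setting these are computed per label (here: one type
    of prolapse present/absent) and then averaged across labels.
    """
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Toy predictions for one prolapse type across 8 patients.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

F1 is the harmonic mean of precision and recall, so it penalizes a model that trades one heavily for the other.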
Affiliation(s)
- Xinyi Wang
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- Da He
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- Fei Feng
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- James A Ashton-Miller
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, 48109, USA
- John O L DeLancey
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, 48109, USA
- Jiajia Luo
- Biomedical Engineering Department, Peking University, Beijing, 100191, China
21
Zhu L, Gao G, Zhu Y, Han C, Liu X, Li D, Liu W, Wang X, Zhang J, Zhang X, Wang X. Fully automated detection and localization of clinically significant prostate cancer on MR images using a cascaded convolutional neural network. Front Oncol 2022; 12:958065. [PMID: 36249048 PMCID: PMC9558117 DOI: 10.3389/fonc.2022.958065]
Abstract
Purpose To develop a cascaded deep learning model trained with apparent diffusion coefficient (ADC) and T2-weighted imaging (T2WI) for fully automated detection and localization of clinically significant prostate cancer (csPCa). Methods This retrospective study included 347 consecutive patients (235 csPCa, 112 non-csPCa) with high-quality prostate MRI data, randomly split for training, validation, and testing. The ground truth was obtained by manual csPCa lesion segmentation according to pathological results. The proposed cascaded model, based on Res-UNet, takes prostate MR images (T2WI+ADC or ADC alone) as input and automatically segments the whole prostate gland, the anatomic zones, and the csPCa region step by step. The performance of the models was evaluated and compared with PI-RADS (version 2.1) assessment using sensitivity, specificity, accuracy, and the Dice similarity coefficient (DSC) in the held-out test set. Results In the test set, the per-lesion sensitivities of the biparametric (ADC + T2WI) model, the ADC model, and PI-RADS assessment were 95.5% (84/88), 94.3% (83/88), and 94.3% (83/88), respectively (all p > 0.05). The mean DSCs for csPCa lesions were 0.64 ± 0.24 and 0.66 ± 0.23 for the biparametric and ADC models, respectively. The sensitivity, specificity, and accuracy of the biparametric model were 95.6% (108/113), 91.5% (665/727), and 92.0% (773/840) per sextant, and 98.6% (68/69), 64.8% (46/71), and 81.4% (114/140) per patient. The biparametric model performed similarly to PI-RADS assessment (p > 0.05) and had higher per-sextant specificity than the ADC model (86.8% [631/727], p < 0.001). Conclusion The cascaded deep learning model trained with ADC and T2WI achieves good performance for automated csPCa detection and localization.
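The Dice similarity coefficient used above to score the overlap between predicted and ground-truth lesion masks can be sketched as:

```python
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical).
    """
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# Predicted lesion mask covering 3 of 4 ground-truth pixels.
gt = np.array([[1, 1], [1, 1]])
seg = np.array([[1, 1], [1, 0]])
score = dice(seg, gt)
```

Unlike pixel accuracy, Dice is insensitive to the large true-negative background, which is why it is the standard score for small-lesion segmentation.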
Affiliation(s)
- Lina Zhu
- Department of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Ge Gao
- Department of Radiology, Peking University First Hospital, Beijing, China
- Yi Zhu
- Department of Clinical & Technical Support, Philips Healthcare, Beijing, China
- Chao Han
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiang Liu
- Department of Radiology, Peking University First Hospital, Beijing, China
- Derun Li
- Department of Urology, Peking University First Hospital, Beijing, China
- Weipeng Liu
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Jingyuan Zhang
- Department of Development and Research, Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Beijing, China
- *Correspondence: Xiaoying Wang,
22
Saliency Transfer Learning and Central-Cropping Network for Prostate Cancer Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10999-z]
23
Zhang KS, Schelb P, Netzer N, Tavakoli AA, Keymling M, Wehrse E, Hog R, Rotkopf LT, Wennmann M, Glemser PA, Thierjung H, von Knebel Doeberitz N, Kleesiek J, Görtz M, Schütz V, Hielscher T, Stenzinger A, Hohenfellner M, Schlemmer HP, Maier-Hein K, Bonekamp D. Pseudoprospective Paraclinical Interaction of Radiology Residents With a Deep Learning System for Prostate Cancer Detection: Experience, Performance, and Identification of the Need for Intermittent Recalibration. Invest Radiol 2022; 57:601-612. [PMID: 35467572 DOI: 10.1097/rli.0000000000000878]
Abstract
OBJECTIVES The aim of this study was to estimate the prospective utility of a previously retrospectively validated convolutional neural network (CNN) for prostate cancer (PC) detection on prostate magnetic resonance imaging (MRI). MATERIALS AND METHODS The biparametric (T2-weighted and diffusion-weighted) portion of clinical multiparametric prostate MRI from consecutive men included between November 2019 and September 2020 was fully automatically and individually analyzed by a CNN shortly after image acquisition (pseudoprospective design). Radiology residents performed 2 research Prostate Imaging Reporting and Data System (PI-RADS) assessments of the multiparametric dataset independent of clinical reporting (paraclinical design) before and after review of the CNN results and completed a survey. Presence of clinically significant PC was determined by the presence of an International Society of Urological Pathology grade 2 or higher PC on combined targeted and extended systematic transperineal MRI/transrectal ultrasound fusion biopsy. Sensitivities and specificities on a patient and prostate sextant basis were compared using the McNemar test and compared with the receiver operating characteristic (ROC) curve of the CNN. Survey results were summarized as absolute counts and percentages. RESULTS A total of 201 men were included. The CNN achieved an ROC area under the curve of 0.77 on a patient basis. Using the PI-RADS ≥3-emulating probability threshold (c3), the CNN had a patient-based sensitivity of 81.8% and specificity of 54.8%, not statistically different from the current clinical routine PI-RADS ≥4 assessment at 90.9% and 54.8%, respectively (P = 0.30 / P = 1.0). In general, residents achieved similar sensitivity and specificity before and after CNN review. On a prostate sextant basis, clinical assessment possessed the highest ROC area under the curve of 0.82, higher than the CNN (AUC = 0.76, P = 0.21) and significantly higher than resident performance before and after CNN review (AUC = 0.76 / 0.76, P ≤ 0.03). The resident survey indicated the CNN to be helpful and clinically useful. CONCLUSIONS Pseudoprospective paraclinical integration of fully automated CNN-based detection of suspicious lesions on prostate multiparametric MRI was demonstrated and showed good acceptance among residents, whereas no significant improvement in resident performance was found. General CNN performance was preserved despite an observed shift in CNN calibration, identifying the requirement for continuous quality control and recalibration.
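The paired comparison above rests on the McNemar test, which looks only at the discordant patients between two methods. As a minimal illustration (not the authors' code, and the counts below are made up), an exact two-sided McNemar test needs just the two discordant counts:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on discordant pair counts.

    b: patients correctly classified by method A but missed by method B
    c: patients missed by method A but correctly classified by method B
    Under H0 (equal performance), discordant pairs split 50/50.
    """
    n = b + c
    if n == 0:
        return 1.0
    # two-sided exact binomial tail probability at p = 0.5
    p = 2 * sum(comb(n, i) for i in range(min(b, c) + 1)) * 0.5 ** n
    return min(p, 1.0)

# e.g. 3 vs 9 discordant patients between two readings (illustrative numbers)
p_value = mcnemar_exact(3, 9)
```

Concordant patients cancel out of the statistic entirely, which is why the test is well suited to comparing two readers (or a reader and a CNN) on the same cohort.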
Affiliation(s)
- Kevin Sun Zhang
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Myriam Keymling
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Eckhard Wehrse
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Robert Hog
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Markus Wennmann
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Heidi Thierjung
- From the Division of Radiology, German Cancer Research Center (DKFZ)
- Viktoria Schütz
- Department of Urology, University of Heidelberg Medical Center
|
24
|
Zhang L, Yin FF, Lu K, Moore B, Han S, Cai J. Improving liver tumor image contrast and synthesizing novel tissue contrasts by adaptive multiparametric MRI fusion. PRECISION RADIATION ONCOLOGY 2022; 6:190-198. [PMID: 36590077 PMCID: PMC9797133 DOI: 10.1002/pro6.1167] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 06/23/2022] [Indexed: 01/05/2023] Open
Abstract
Purpose Multiparametric MRI contains rich and complementary anatomical and functional information, which is often utilized separately. This study aims to propose an adaptive multiparametric MRI (mpMRI) fusion method and examine its capability to improve tumor contrast and synthesize novel tissue contrasts among liver cancer patients. Methods An adaptive mpMRI fusion method was developed with five components: image pre-processing, fusion algorithm, database, adaptation rules, and fused MRI. A linear-weighted summation algorithm was used for fusion. Weight-driven and feature-driven adaptations were designed for different applications. A clinician-friendly graphical user interface (GUI) was developed in MATLAB and used for mpMRI fusion. Twelve liver cancer patients and a digital human phantom were included in the study. Synthesis of novel image contrast and enhancement of image signal and contrast were examined in the patient cases. Tumor contrast-to-noise ratio (CNR) and liver signal-to-noise ratio (SNR) were evaluated and compared before and after mpMRI fusion. Results The fusion platform was applicable in both the XCAT phantom and patient cases. Novel image contrasts, including enhancement of soft-tissue boundaries, vertebral bodies, and tumors, and composition of multiple image features in a single image, were achieved. Tumor CNR improved from -1.70 ± 2.57 to 4.88 ± 2.28 (p < 0.0001) for T1-w, from 3.39 ± 1.89 to 7.87 ± 3.47 (p < 0.01) for T2-w, and from 1.42 ± 1.66 to 7.69 ± 3.54 (p < 0.001) for T2/T1-w MRI. Liver SNR improved from 2.92 ± 2.39 to 9.96 ± 8.60 (p < 0.05) for DWI. The coefficient of variation (CV) of tumor CNR decreased from 1.57, 0.56, and 1.17 to 0.47, 0.44, and 0.46 for T1-w, T2-w, and T2/T1-w MRI, respectively. Conclusion A multiparametric MRI fusion method was proposed and a prototype was developed. The method showed potential in improving clinically relevant features such as tumor contrast and liver signal. Synthesis of novel image contrasts, including the composition of multiple image features into a single image set, was achieved.
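The pipeline's core operations are the linear-weighted summation and the CNR evaluation. A generic numpy sketch of both (not the authors' MATLAB implementation; function and mask names are illustrative):

```python
import numpy as np

def fuse(images, weights):
    """Linear-weighted summation of co-registered, intensity-normalized images."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return sum(wi * img for wi, img in zip(w, images))

def tumor_cnr(img, tumor_mask, liver_mask):
    """Tumor contrast-to-noise ratio relative to surrounding liver tissue."""
    liver = img[liver_mask]
    return (img[tumor_mask].mean() - liver.mean()) / liver.std()
```

For example, `fuse([t1w, t2w], [0.3, 0.7])` would emphasize T2-like contrast; the paper's weight-driven and feature-driven adaptation rules would adjust such weights per application.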
Affiliation(s)
- Lei Zhang
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316 China
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316 China
- Ke Lu
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710, USA
- Brittany Moore
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710, USA
- Silu Han
- Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27710, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
|
25
|
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348 DOI: 10.1002/mp.15936] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 08/01/2022] [Accepted: 08/04/2022] [Indexed: 11/06/2022] Open
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting the deep learning technique, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms have empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Peng Cheng Laboratory, Shenzhen, 518066, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
|
26
|
Yin HL, Jiang Y, Xu Z, Jia HH, Lin GW. Combined diagnosis of multiparametric MRI-based deep learning models facilitates differentiating triple-negative breast cancer from fibroadenoma magnetic resonance BI-RADS 4 lesions. J Cancer Res Clin Oncol 2022; 149:2575-2584. [PMID: 35771263 DOI: 10.1007/s00432-022-04142-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 06/13/2022] [Indexed: 02/05/2023]
Abstract
PURPOSE To investigate the value of the combined diagnosis of multiparametric MRI-based deep learning models to differentiate triple-negative breast cancer (TNBC) from fibroadenoma magnetic resonance Breast Imaging-Reporting and Data System category 4 (BI-RADS 4) lesions and to evaluate whether the combined diagnosis of these models could improve the diagnostic performance of radiologists. METHODS A total of 319 female patients with 319 pathologically confirmed BI-RADS 4 lesions were randomly divided into training, validation, and testing sets in this retrospective study. The three models were established based on contrast-enhanced T1-weighted imaging, diffusion-weighted imaging, and T2-weighted imaging using the training and validation sets. The artificial intelligence (AI) combination score was calculated according to the results of three models. The diagnostic performances of four radiologists with and without AI assistance were compared with the AI combination score on the testing set. The area under the curve (AUC), sensitivity, specificity, accuracy, and weighted kappa value were calculated to assess the performance. RESULTS The AI combination score yielded an excellent performance (AUC = 0.944) on the testing set. With AI assistance, the AUC for the diagnosis of junior radiologist 1 (JR1) increased from 0.833 to 0.885, and that for JR2 increased from 0.823 to 0.876. The AUCs of senior radiologist 1 (SR1) and SR2 slightly increased from 0.901 and 0.950 to 0.925 and 0.975 after AI assistance, respectively. CONCLUSION Combined diagnosis of multiparametric MRI-based deep learning models to differentiate TNBC from fibroadenoma magnetic resonance BI-RADS 4 lesions can achieve comparable performance to that of SRs and improve the diagnostic performance of JRs.
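The comparisons above are all phrased in terms of AUC over per-lesion model scores. For reference, AUC can be computed directly from ranks via the Mann-Whitney identity; a minimal sketch (not the authors' code, and independent of any particular model):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney U) identity, averaging ranks over ties."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):           # tied scores share their mean rank
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    # sum of positive-class ranks, shifted and normalized, equals the AUC
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

The identity makes explicit what an AUC of 0.944 means: the probability that a randomly chosen TNBC lesion scores higher than a randomly chosen fibroadenoma.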
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Zihan Xu
- Lung Cancer Center, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Hui-Hui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
|
27
|
Xiang Y, Dong X, Zeng C, Liu J, Liu H, Hu X, Feng J, Du S, Wang J, Han Y, Luo Q, Chen S, Li Y. Clinical Variables, Deep Learning and Radiomics Features Help Predict the Prognosis of Adult Anti-N-methyl-D-aspartate Receptor Encephalitis Early: A Two-Center Study in Southwest China. Front Immunol 2022; 13:913703. [PMID: 35720336 PMCID: PMC9199424 DOI: 10.3389/fimmu.2022.913703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 04/26/2022] [Indexed: 11/17/2022] Open
Abstract
Objective To develop a fusion model combining clinical variables, deep learning (DL), and radiomics features to predict functional outcomes early in patients with adult anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis in Southwest China. Methods From January 2012, a two-center study of anti-NMDAR encephalitis was initiated to collect clinical and MRI data from acute patients in Southwest China. Two experienced neurologists independently assessed the patients' prognosis at 24 months based on the modified Rankin Scale (mRS) (good outcome defined as mRS 0–2; bad outcome defined as mRS 3–6). Risk factors influencing the prognosis of patients with acute anti-NMDAR encephalitis were investigated using clinical data. Five DL- and radiomics-based models, trained on four MRI sequences (T1-weighted imaging, T2-weighted imaging, fluid-attenuated inversion recovery imaging, and diffusion-weighted imaging) individually or in combination, and a clinical model were developed to predict the prognosis of anti-NMDAR encephalitis. A fusion model combining the clinical model and two machine learning-based models was built. The performances of the fusion model, clinical model, DL-based models, and radiomics-based models were compared using the area under the receiver operating characteristic curve (AUC) and accuracy and then assessed by paired t-tests (P < 0.05 was considered significant). Results The fusion model achieved the significantly greatest predictive performance in the internal test dataset with an AUC of 0.963 [95% CI: (0.874-0.999)], and also exhibited an equally good performance in the external validation dataset, with an AUC of 0.927 [95% CI: (0.688-0.975)]. The radiomics_combined model (AUC: 0.889; accuracy: 0.857) provided significantly superior predictive performance to the DL_combined (AUC: 0.845; accuracy: 0.857) and clinical models (AUC: 0.840; accuracy: 0.905), whereas the clinical model showed significantly higher accuracy. Compared with all single-sequence models, the DL_combined model and the radiomics_combined model had significantly greater AUCs and accuracies. Conclusions The fusion model combining clinical variables and machine learning-based models may have early predictive value for poor outcomes associated with anti-NMDAR encephalitis.
Affiliation(s)
- Yayun Xiang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaoxuan Dong
- College of Computer and Information Science, Chongqing, China
- Chun Zeng
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Junhang Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Hanjing Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaofei Hu
- Department of Neurology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Jinzhou Feng
- Department of Neurology, First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Silin Du
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Jingjie Wang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Yongliang Han
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Qi Luo
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
|
28
|
|
29
|
Kim HE, Cosa-Linan A, Santhanam N, Jannesari M, Maros ME, Ganslandt T. Transfer learning for medical image classification: a literature review. BMC Med Imaging 2022; 22:69. [PMID: 35418051 PMCID: PMC9007400 DOI: 10.1186/s12880-022-00793-7] [Citation(s) in RCA: 89] [Impact Index Per Article: 44.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Accepted: 03/30/2022] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Transfer learning (TL) with convolutional neural networks aims to improve performance on a new task by leveraging the knowledge of similar tasks learned in advance. It has made a major contribution to medical image analysis, as it overcomes the data scarcity problem and saves time and hardware resources. However, transfer learning has been arbitrarily configured in the majority of studies. This review paper attempts to provide guidance for selecting a model and TL approaches for the medical image classification task. METHODS 425 peer-reviewed articles were retrieved from two databases, PubMed and Web of Science, published in English, up until December 31, 2020. Articles were assessed by two independent reviewers, with the aid of a third reviewer in the case of discrepancies. We followed the PRISMA guidelines for the paper selection and 121 studies were regarded as eligible for the scope of this review. We investigated articles focused on selecting backbone models and TL approaches including feature extractor, feature extractor hybrid, fine-tuning and fine-tuning from scratch. RESULTS The majority of studies (n = 57) empirically evaluated multiple models, followed by deep models (n = 33) and shallow models (n = 24). Inception, one of the deep models, was the most employed in the literature (n = 26). With respect to TL, the majority of studies (n = 46) empirically benchmarked multiple approaches to identify the optimal configuration. The rest of the studies applied only a single approach, for which feature extractor (n = 38) and fine-tuning from scratch (n = 27) were the two most favored approaches. Only a few studies applied feature extractor hybrid (n = 7) and fine-tuning (n = 3) with pretrained models. CONCLUSION The investigated studies demonstrated the efficacy of transfer learning despite the data scarcity. We encourage data scientists and practitioners to use deep models (e.g. ResNet or Inception) as feature extractors, which can save computational costs and time without degrading the predictive power.
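The "feature extractor" TL approach the review recommends freezes the pretrained backbone and trains only a small head on top of it. A toy numpy sketch of that division of labor (a fixed random projection stands in for a frozen ResNet/Inception backbone; all names and data here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed random projection + ReLU, standing in for a
# pretrained network truncated at its penultimate layer. Never updated.
W_frozen = rng.normal(size=(64, 16)) / np.sqrt(64)

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)

def train_head(feats, y, lr=0.1, epochs=200):
    """Train only a logistic-regression head on the frozen features."""
    w, b = np.zeros(feats.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
        g = p - y                          # gradient of the log-loss
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy "images": 200 samples of 64 raw features with binary labels.
x = rng.normal(size=(200, 64))
y = (x[:, 0] > 0).astype(float)
feats = extract_features(x)                # backbone is only run forward
w, b = train_head(feats, y)
p_hat = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
final_log_loss = -np.mean(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
```

Only the head's few parameters are fit, which is exactly why this approach "saves computational costs and time" on small medical datasets: the expensive backbone is computed once per image and never backpropagated through.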
Affiliation(s)
- Hee E Kim
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Alejandro Cosa-Linan
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Nandhini Santhanam
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mahboubeh Jannesari
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Mate E Maros
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Thomas Ganslandt
- Department of Biomedical Informatics at the Center for Preventive Medicine and Digital Health (CPD-BW), Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Wetterkreuz 15, 91058, Erlangen, Germany
|
30
|
Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022; 79:102444. [DOI: 10.1016/j.media.2022.102444] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 03/09/2022] [Accepted: 04/01/2022] [Indexed: 02/07/2023]
|
31
|
Current Value of Biparametric Prostate MRI with Machine-Learning or Deep-Learning in the Detection, Grading, and Characterization of Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2022; 12:diagnostics12040799. [PMID: 35453847 PMCID: PMC9027206 DOI: 10.3390/diagnostics12040799] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 03/19/2022] [Accepted: 03/23/2022] [Indexed: 02/04/2023] Open
Abstract
Prostate cancer detection with magnetic resonance imaging is based on a standardized MRI protocol according to the PI-RADS guidelines, including morphologic imaging, diffusion-weighted imaging, and perfusion. To facilitate data acquisition and analysis, the contrast-enhanced perfusion is often omitted, resulting in a biparametric prostate MRI protocol. The intention of this review is to analyze the current value of biparametric prostate MRI in combination with methods of machine learning and deep learning in the detection, grading, and characterization of prostate cancer; where available, a direct comparison with human radiologist performance was performed. PubMed was systematically queried and 29 appropriate studies were identified and retrieved. The data show that detection of clinically significant prostate cancer and differentiation of prostate cancer from non-cancerous tissue using machine learning and deep learning is feasible, with promising results. Some machine-learning and deep-learning techniques currently seem to be as good as human radiologists in terms of classification of single lesions according to the PI-RADS score.
|
32
|
Bertelli E, Mercatelli L, Marzi C, Pachetti E, Baccini M, Barucci A, Colantonio S, Gherardini L, Lattavo L, Pascali MA, Agostini S, Miele V. Machine and Deep Learning Prediction Of Prostate Cancer Aggressiveness Using Multiparametric MRI. Front Oncol 2022; 11:802964. [PMID: 35096605 PMCID: PMC8792745 DOI: 10.3389/fonc.2021.802964] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 12/07/2021] [Indexed: 12/24/2022] Open
Abstract
Prostate cancer (PCa) is the most frequent male malignancy and the assessment of PCa aggressiveness, for which a biopsy is required, is fundamental for patient management. Currently, multiparametric (mp) MRI is strongly recommended before biopsy. Quantitative assessment of mpMRI might provide the radiologist with an objective and noninvasive tool for supporting decision-making in clinical practice and decreasing intra- and inter-reader variability. In this view, high-dimensional radiomics features and Machine Learning (ML) techniques, along with Deep Learning (DL) methods working directly on raw images, could assist the radiologist in the clinical workflow. The aim of this study was to develop and validate ML/DL frameworks on mpMRI data to characterize PCas according to their aggressiveness. We optimized several ML/DL frameworks on T2w, ADC and T2w+ADC data, using a patient-based nested validation scheme. The dataset was composed of 112 patients (132 peripheral lesions with Prostate Imaging Reporting and Data System (PI-RADS) score ≥ 3) acquired following both PI-RADS 2.0 and 2.1 guidelines. Firstly, ML/DL frameworks trained and validated on PI-RADS 2.0 data were tested on both PI-RADS 2.0 and 2.1 data. Then, we trained, validated and tested ML/DL frameworks on a multi PI-RADS dataset. We report the performances in terms of area under the receiver operating characteristic curve (AUROC), specificity and sensitivity. The ML/DL frameworks trained on T2w data achieved the overall best performance. Notably, ML and DL frameworks trained and validated on PI-RADS 2.0 data obtained median AUROC values equal to 0.750 and 0.875, respectively, on the unseen PI-RADS 2.0 test set. Similarly, ML/DL frameworks trained and validated on multi PI-RADS T2w data showed median AUROC values equal to 0.795 and 0.750, respectively, on the unseen multi PI-RADS test set. Conversely, all the ML/DL frameworks trained and validated on PI-RADS 2.0 data achieved AUROC values no better than the chance level when tested on PI-RADS 2.1 data. Both ML and DL techniques applied on mpMRI seem to be a valid aid in predicting PCa aggressiveness. In particular, ML/DL frameworks fed with T2w image data (objective, fast and non-invasive) show good performance and might support decision-making in patient diagnostic and therapeutic management, reducing intra- and inter-reader variability.
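The "patient-based" qualifier in the nested validation scheme is what prevents leakage when one patient contributes several lesions (here, 132 lesions from 112 patients). A minimal sketch of patient-grouped fold assignment, the key step of such a scheme (illustrative, not the authors' code; the round-robin assignment is one simple choice):

```python
def patient_folds(patient_ids, k):
    """Assign each unique patient to one of k folds (round-robin), then map
    lesion indices to their patient's fold, so all lesions from one patient
    stay in the same fold and never straddle a train/test split."""
    fold_of = {p: i % k for i, p in enumerate(sorted(set(patient_ids)))}
    folds = [[] for _ in range(k)]
    for idx, p in enumerate(patient_ids):
        folds[fold_of[p]].append(idx)
    return folds

# one patient id per lesion; patients p1 and p3 have multiple lesions
lesion_patients = ["p1", "p1", "p2", "p3", "p3", "p3", "p4", "p5"]
folds = patient_folds(lesion_patients, 3)
```

Splitting at the lesion level instead would let two lesions of the same patient land in train and test simultaneously, inflating the reported AUROC.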
Affiliation(s)
- Elena Bertelli
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Laura Mercatelli
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Chiara Marzi
- "Nello Carrara" Institute of Applied Physics (IFAC), National Research Council of Italy (CNR), Sesto Fiorentino, Italy
- Eva Pachetti
- "Alessandro Faedo" Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
- Department of Information Engineering (DII), University of Pisa, Pisa, Italy
- Michela Baccini
- "Giuseppe Parenti" Department of Statistics, Computer Science, Applications (DiSIA), University of Florence, Florence, Italy
- Florence Center for Data Science, University of Florence, Florence, Italy
- Andrea Barucci
- "Nello Carrara" Institute of Applied Physics (IFAC), National Research Council of Italy (CNR), Sesto Fiorentino, Italy
- Sara Colantonio
- "Alessandro Faedo" Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
- Luca Gherardini
- "Giuseppe Parenti" Department of Statistics, Computer Science, Applications (DiSIA), University of Florence, Florence, Italy
- Lorenzo Lattavo
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Maria Antonietta Pascali
- "Alessandro Faedo" Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
- Simone Agostini
- Department of Radiology, Careggi University Hospital, Florence, Italy
- Vittorio Miele
- Department of Radiology, Careggi University Hospital, Florence, Italy
|
33
|
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
|
34
|
Duran A, Dussert G, Rouvière O, Jaouen T, Jodoin PM, Lartizien C. ProstAttention-Net: a deep attention model for prostate cancer segmentation by aggressiveness in MRI scans. Med Image Anal 2022; 77:102347. [DOI: 10.1016/j.media.2021.102347] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 12/20/2021] [Accepted: 12/31/2021] [Indexed: 11/27/2022]
|
35
|
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Collapse
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
| | - Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
| | - James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| | - Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
| |
|
36
|
|
37
|
Germain P, Vardazaryan A, Padoy N, Labani A, Roy C, Schindler TH, El Ghannudi S. Deep Learning Supplants Visual Analysis by Experienced Operators for the Diagnosis of Cardiac Amyloidosis by Cine-CMR. Diagnostics (Basel) 2021; 12:diagnostics12010069. [PMID: 35054236 PMCID: PMC8774777 DOI: 10.3390/diagnostics12010069] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 12/24/2021] [Accepted: 12/27/2021] [Indexed: 12/03/2022] Open
Abstract
Background: Diagnosing cardiac amyloidosis (CA) from cine-CMR (cardiac magnetic resonance) alone is not reliable. In this study, we tested whether a convolutional neural network (CNN) could outperform the visual diagnosis of experienced operators. Method: 119 patients with cardiac amyloidosis and 122 patients with left ventricular hypertrophy (LVH) of other origins were retrospectively selected. Diastolic and systolic cine-CMR images were preprocessed and labeled. A dual-input visual geometry group (VGG) model was used for binary image classification. All images belonging to the same patient were distributed in the same set. Accuracy and area under the curve (AUC) were calculated per frame and per patient from a 40% held-out test set. Results were compared to a visual analysis assessed by three experienced operators. Results: Frame-based comparisons between humans and a CNN provided an accuracy of 0.605 vs. 0.746 (p < 0.0008) and an AUC of 0.630 vs. 0.824 (p < 0.0001). Patient-based comparisons provided an accuracy of 0.660 vs. 0.825 (p < 0.008) and an AUC of 0.727 vs. 0.895 (p < 0.002). Conclusion: Based on cine-CMR images alone, a CNN is able to discriminate cardiac amyloidosis from LVH of other origins better than experienced human operators (15 to 20 points more in absolute value for accuracy and AUC), demonstrating a unique capability to identify what the eyes cannot see through classical radiological analysis.
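The frame-level to patient-level comparison described in this abstract implies an aggregation step: all frames of a patient are pooled into a single patient-level call. A minimal sketch of one such rule (mean of per-frame probabilities, thresholded at 0.5; both the mean rule and the threshold are illustrative assumptions, not the authors' published method):

```python
import numpy as np

def patient_level_scores(frame_probs, patient_ids):
    """Aggregate per-frame probabilities into one score per patient
    by averaging all frames belonging to the same patient."""
    ids = np.asarray(patient_ids)
    probs = np.asarray(frame_probs, dtype=float)
    return {pid: float(probs[ids == pid].mean()) for pid in np.unique(ids)}

# Three frames for patient "a", two for patient "b" (synthetic numbers)
scores = patient_level_scores([0.9, 0.8, 0.7, 0.2, 0.4], ["a", "a", "a", "b", "b"])
# Patient-level diagnosis at an assumed 0.5 operating point
calls = {pid: s >= 0.5 for pid, s in scores.items()}
```

Per-patient accuracy and AUC are then computed on `scores`/`calls` rather than on the individual frames, which is why the two sets of figures in the abstract differ.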
Affiliation(s)
- Philippe Germain
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France; (A.L.); (C.R.); (S.E.G.)
- Correspondence:
| | - Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France; (A.V.); (N.P.)
- IHU (Institut Hopitalo-Universitaire), 67000 Strasbourg, France
| | - Nicolas Padoy
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France; (A.V.); (N.P.)
- IHU (Institut Hopitalo-Universitaire), 67000 Strasbourg, France
| | - Aissam Labani
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France; (A.L.); (C.R.); (S.E.G.)
| | - Catherine Roy
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France; (A.L.); (C.R.); (S.E.G.)
| | - Thomas Hellmut Schindler
- Mallinckrodt Institute of Radiology, Division of Nuclear Medicine, Washington University School of Medicine, Saint Louis, MO 63110, USA;
| | - Soraya El Ghannudi
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France; (A.L.); (C.R.); (S.E.G.)
- Department of Nuclear Medicine, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
| |
|
38
|
Hoar D, Lee PQ, Guida A, Patterson S, Bowen CV, Merrimen J, Wang C, Rendon R, Beyea SD, Clarke SE. Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images. Comput Methods Programs Biomed 2021; 210:106375. [PMID: 34500139 DOI: 10.1016/j.cmpb.2021.106375] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 08/22/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE Multiparametric MRI (mp-MRI) is a widely used tool for diagnosing and staging prostate cancer. The purpose of this study was to evaluate whether transfer learning, unsupervised pre-training and test-time augmentation significantly improved the performance of a convolutional neural network (CNN) for pixel-by-pixel prediction of cancer vs. non-cancer using mp-MRI datasets. METHODS 154 subjects undergoing mp-MRI were prospectively recruited, 16 of whom subsequently underwent radical prostatectomy. Logistic regression, random forest and CNN models were trained on mp-MRI data using histopathology as the gold standard. Transfer learning, unsupervised pre-training and test-time augmentation were used to boost CNN performance. Models were evaluated using Dice score and area under the receiver operating characteristic curve (AUROC) with leave-one-subject-out cross validation. Permutation feature importance testing was performed to evaluate the relative value of each MR contrast to CNN model performance. Statistical significance (p<0.05) was determined using the paired Wilcoxon signed rank test with Benjamini-Hochberg correction for multiple comparisons. RESULTS Baseline CNN outperformed logistic regression and random forest models. Transfer learning and unsupervised pre-training did not significantly improve CNN performance over baseline; however, test-time augmentation resulted in significantly higher Dice scores over both baseline CNN and CNN plus either of transfer learning or unsupervised pre-training. The best performing model was CNN with transfer learning and test-time augmentation (Dice score of 0.59 and AUROC of 0.93). The most important contrast was apparent diffusion coefficient (ADC), followed by Ktrans and T2, although each contributed significantly to classifier performance. CONCLUSIONS The addition of transfer learning and test-time augmentation resulted in significant improvement in CNN segmentation performance in a small set of prostate cancer mp-MRI data.
Results suggest that these techniques may be more broadly useful for the optimization of deep learning algorithms applied to the problem of semantic segmentation in biomedical image datasets. However, further work is needed to improve the generalizability of the specific model presented herein.
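The two techniques credited with the gains above, the Dice score and test-time augmentation, are compact enough to sketch. The flip-only augmentation below is a minimal illustration of the idea (average predictions over transformed copies of the input), not the augmentation set used in the paper:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def predict_with_tta(model, image):
    """Average a segmentation model's probability maps over the identity
    and a horizontal flip -- a minimal test-time augmentation scheme."""
    p = model(image)
    p_flip = model(image[:, ::-1])[:, ::-1]  # flip input, predict, flip back
    return (p + p_flip) / 2.0
```

Thresholding the averaged probability map (e.g. at 0.5) yields the binary mask that is scored against histopathology-derived ground truth with `dice`.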
Affiliation(s)
- David Hoar
- Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada
| | - Peter Q Lee
- Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
| | - Alessandro Guida
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
| | - Steven Patterson
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
| | - Chris V Bowen
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
| | | | - Cheng Wang
- Department of Pathology, Dalhousie University, Halifax, NS, Canada
| | - Ricardo Rendon
- Department of Urology, Dalhousie University, Halifax, NS, Canada
| | - Steven D Beyea
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
| | - Sharon E Clarke
- Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada.
| |
|
39
|
Challenges in the Use of Artificial Intelligence for Prostate Cancer Diagnosis from Multiparametric Imaging Data. Cancers (Basel) 2021; 13:cancers13163944. [PMID: 34439099 PMCID: PMC8391234 DOI: 10.3390/cancers13163944] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 08/02/2021] [Accepted: 08/02/2021] [Indexed: 11/18/2022] Open
Abstract
Simple Summary Prostate Cancer is one of the main threats to men’s health. Its accurate diagnosis is crucial to properly treat patients depending on the cancer’s level of aggressiveness. Tumor risk-stratification is still a challenging task due to the difficulties met during the reading of multi-parametric Magnetic Resonance Images. Artificial Intelligence models may help radiologists in staging the aggressiveness of the equivocal lesions, reducing inter-observer variability and evaluation time. However, these algorithms need many high-quality images to work efficiently, bringing up overfitting and lack of standardization and reproducibility as emerging issues to be addressed. This study attempts to illustrate the state of the art of current research of Artificial Intelligence methods to stratify prostate cancer for its clinical significance suggesting how widespread use of public databases could be a possible solution to these issues. Abstract Many efforts have been carried out for the standardization of multiparametric Magnetic Resonance (mp-MR) images evaluation to detect Prostate Cancer (PCa), and specifically to differentiate levels of aggressiveness, a crucial aspect for clinical decision-making. Prostate Imaging—Reporting and Data System (PI-RADS) has contributed noteworthily to this aim. Nevertheless, as pointed out by the European Association of Urology (EAU 2020), the PI-RADS still has limitations mainly due to the moderate inter-reader reproducibility of mp-MRI. In recent years, many aspects in the diagnosis of cancer have taken advantage of the use of Artificial Intelligence (AI) such as detection, segmentation of organs and/or lesions, and characterization. Here a focus on AI as a potentially important tool for the aim of standardization and reproducibility in the characterization of PCa by mp-MRI is reported. 
AI includes machine learning and deep learning techniques that have been shown to be successful in classifying mp-MR images, with performances similar to those of radiologists. Nevertheless, they perform differently depending on the acquisition system and protocol used. In addition, these methods need a large number of samples that cover most of the variability in lesion aspect and zone to avoid overfitting. The use of publicly available datasets could improve AI performance and achieve a higher level of generalizability, exploiting large numbers of cases and a wide range of variability in the images. Here we explore the promise and the advantages, as well as the pitfalls and warnings, outlined in some recent studies that attempted to classify clinically significant PCa and indolent lesions using AI methods. Specifically, we focus on the overfitting issue due to the scarcity of data and on the lack of standardization and reproducibility in every step of mp-MR image acquisition and classifier implementation. In the end, we point out that a solution can be found in the use of publicly available datasets, whose usage has already been promoted by some important initiatives. Our future perspective is that AI models may become reliable tools for clinicians in PCa diagnosis, reducing inter-observer variability and evaluation time.
|
40
|
Khosravi P, Lysandrou M, Eljalby M, Li Q, Kazemi E, Zisimopoulos P, Sigaras A, Brendel M, Barnes J, Ricketts C, Meleshko D, Yat A, McClure TD, Robinson BD, Sboner A, Elemento O, Chughtai B, Hajirasouliha I. A Deep Learning Approach to Diagnostic Classification of Prostate Cancer Using Pathology-Radiology Fusion. J Magn Reson Imaging 2021; 54:462-471. [PMID: 33719168 PMCID: PMC8360022 DOI: 10.1002/jmri.27599] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 02/22/2021] [Accepted: 02/23/2021] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND A definitive diagnosis of prostate cancer requires a biopsy to obtain tissue for pathologic analysis, but this is an invasive procedure and is associated with complications. PURPOSE To develop an artificial intelligence (AI)-based model (named AI-biopsy) for the early diagnosis of prostate cancer using magnetic resonance (MR) images labeled with histopathology information. STUDY TYPE Retrospective. POPULATION Magnetic resonance imaging (MRI) data sets from 400 patients with suspected prostate cancer and with histological data (228 acquired in-house and 172 from external publicly available databases). FIELD STRENGTH/SEQUENCE 1.5 to 3.0 Tesla, T2-weighted image pulse sequences. ASSESSMENT MR images reviewed and selected by two radiologists (with 6 and 17 years of experience). The patient images were labeled with prostate biopsy including Gleason Score (6 to 10) or Grade Group (1 to 5) and reviewed by one pathologist (with 15 years of experience). Deep learning models were developed to distinguish 1) benign from cancerous tumor and 2) high-risk tumor from low-risk tumor. STATISTICAL TESTS To evaluate our models, we calculated negative predictive value, positive predictive value, specificity, sensitivity, and accuracy. We also calculated areas under the receiver operating characteristic (ROC) curves (AUCs) and Cohen's kappa. RESULTS Our computational method (https://github.com/ih-lab/AI-biopsy) achieved AUCs of 0.89 (95% confidence interval [CI]: [0.86-0.92]) and 0.78 (95% CI: [0.74-0.82]) to classify cancer vs. benign and high- vs. low-risk of prostate disease, respectively. DATA CONCLUSION AI-biopsy provided a data-driven and reproducible way to assess cancer risk from MR images and a personalized strategy to potentially reduce the number of unnecessary biopsies. AI-biopsy highlighted the regions of MR images that contained the predictive features the algorithm used for diagnosis using the class activation map method. 
It is a fully automatic method with a drag-and-drop web interface (https://ai-biopsy.eipm-research.org) that allows radiologists to review AI-assessed MR images in real time. LEVEL OF EVIDENCE 1 TECHNICAL EFFICACY STAGE: 2.
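The evaluation metrics listed under STATISTICAL TESTS (negative predictive value, positive predictive value, specificity, sensitivity, accuracy) can all be derived from the confusion matrix. A minimal sketch; the 0/1 label encoding (1 = cancer) is an assumption for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV and accuracy from binary
    ground-truth labels (1 = positive) and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / len(y_true),
    }
```

Sweeping the decision threshold of a probabilistic classifier and plotting sensitivity against 1 − specificity traces the ROC curve whose area the study reports.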
Affiliation(s)
- Pegah Khosravi
- Computational Oncology, Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Maria Lysandrou
- Neuroscience Institute, The University of Chicago, Chicago, Illinois, USA
| | - Mahmoud Eljalby
- Department of Urology, Weill Cornell Medicine of Cornell University, New York, New York, USA
| | - Qianzi Li
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Mathematics and Statistics Department, Carleton College, Northfield, Minnesota, USA
| | - Ehsan Kazemi
- Department of Electrical Engineering, Yale University
| | - Pantelis Zisimopoulos
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Alexandros Sigaras
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Matthew Brendel
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
| | - Josue Barnes
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Camir Ricketts
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Dmitry Meleshko
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| | - Andy Yat
- Department of Radiology, New York‐Presbyterian Hospital, New York, New York, USA
| | - Timothy D. McClure
- Department of Urology, Weill Cornell Medicine of Cornell University, New York, New York, USA
| | - Brian D. Robinson
- Department of Pathology, New York Presbyterian Hospital‐Weill Cornell Medical College, New York, New York, USA
| | - Andrea Sboner
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
- Department of Pathology, New York Presbyterian Hospital‐Weill Cornell Medical College, New York, New York, USA
| | - Olivier Elemento
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
- WorldQuant Initiative for Quantitative Prediction, Weill Cornell Medicine, New York, New York, USA
| | - Bilal Chughtai
- Department of Urology, Weill Cornell Medicine of Cornell University, New York, New York, USA
| | - Iman Hajirasouliha
- Department of Physiology and Biophysics, Institute for Computational Biomedicine, Weill Cornell Medicine of Cornell University, New York, New York, USA
- Caryl and Israel Englander Institute for Precision Medicine, The Meyer Cancer Center, Weill Cornell Medicine, New York, New York, USA
| |
|
41
|
Development and Validation of an Interpretable Artificial Intelligence Model to Predict 10-Year Prostate Cancer Mortality. Cancers (Basel) 2021; 13:cancers13123064. [PMID: 34205398 PMCID: PMC8234681 DOI: 10.3390/cancers13123064] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 06/03/2021] [Accepted: 06/17/2021] [Indexed: 01/31/2023] Open
Abstract
Simple Summary This article presents a gradient-boosted model that can predict 10-year prostate cancer mortality with high accuracy. The model was developed and validated on prospective multicenter data from the PLCO trial. Using XGBoost and Shapley values, it provides interpretability to understand its predictions. It can be used online to provide predictions and support informed decision-making in PCa treatment. Abstract Prostate cancer treatment strategies are guided by risk-stratification. This stratification can be difficult in some patients with known comorbidities. New models are needed to guide strategies and determine which patients are at risk of prostate cancer mortality. This article presents a gradient-boosting model to predict the risk of prostate cancer mortality within 10 years after a cancer diagnosis, and to provide an interpretable prediction. This work uses prospective data from the PLCO Cancer Screening Trial and selected patients who were diagnosed with prostate cancer. During follow-up, 8776 patients were diagnosed with prostate cancer. The dataset was randomly split into a training (n = 7021) and testing (n = 1755) dataset. Accuracy was 0.98 (±0.01), and the area under the receiver operating characteristic curve was 0.80 (±0.04). This model can be used to support informed decision-making in prostate cancer treatment. AI interpretability provides a novel understanding of the predictions to the users.
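The gradient-boosting idea underlying such models is additive: each round fits a weak learner to the residuals of the ensemble so far. A toy numpy sketch using depth-1 decision stumps; this stands in for, and is far simpler than, the XGBoost model the study actually used:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-threshold split of 1-D feature x against residuals r
    (least-squares criterion)."""
    best = None
    for s in np.unique(x):
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        lm, rm = left.mean(), right.mean()
        err = ((left - lm) ** 2).sum() + ((right - rm) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    _, s, lm, rm = best
    return lambda q: np.where(q <= s, lm, rm)

def gradient_boost(x, y, n_rounds=50, lr=0.1):
    """Boosted additive model: each round fits a stump to the residuals
    and adds it to the ensemble with a small learning rate."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred += lr * stump(x)
        stumps.append(stump)
    base = float(y.mean())
    return lambda q: base + lr * sum(st(q) for st in stumps)
```

Shapley-value tools (as used in the study) then attribute each prediction of such an ensemble to its input features, which is what makes the model interpretable to clinicians.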
|
42
|
Şerbănescu MS, Manea NC, Streba L, Belciug S, Pleşea IE, Pirici I, Bungărdean RM, Pleşea RM. Automated Gleason grading of prostate cancer using transfer learning from general-purpose deep-learning networks. Rom J Morphol Embryol 2021; 61:149-155. [PMID: 32747906 PMCID: PMC7728132 DOI: 10.47162/rjme.61.1.17] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Two deep-learning algorithms designed to classify images according to the Gleason grading system, using transfer learning from two well-known general-purpose image classification networks (AlexNet and GoogleNet), were trained on Hematoxylin–Eosin-stained histopathology microscopy images of prostate cancer. The dataset consisted of 439 images asymmetrically distributed across four Gleason grading groups. Mean and standard deviation of accuracy for the AlexNet-derived network were 61.17±7, and for the GoogleNet-derived network 60.9±7.4. The similar results obtained by the two networks with very different architectures, together with the normal distribution of classification error for both algorithms, show that we have reached a maximum classification rate on this dataset. Taking into consideration all the constraints, we conclude that the resulting networks could assist pathologists in this field, providing first or second opinions on Gleason grading, thus presenting an objective opinion in a grading system which has shown over time a great deal of interobserver variability.
|
43
|
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:diagnostics11060959. [PMID: 34073627 PMCID: PMC8229869 DOI: 10.3390/diagnostics11060959] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/19/2021] [Accepted: 05/21/2021] [Indexed: 12/14/2022] Open
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) for prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of 59 included studies showed that most research has been conducted for the task of PCa lesion classification (66%) followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging from 18 to 499 patients (median = 162), combined with different approaches for performance validation. Furthermore, 85% of the studies reported on the stand-alone diagnostic accuracy, whereas 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof for the clinical utility of PCa AI applications. In order to introduce AI within the clinical workflow of PCa assessment, robustness and generalizability of AI applications need to be further validated utilizing external validation and clinical workflow experiments.
|
44
|
Precise Identification of Prostate Cancer from DWI Using Transfer Learning. Sensors (Basel) 2021; 21:s21113664. [PMID: 34070290 PMCID: PMC8197382 DOI: 10.3390/s21113664] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/27/2021] [Revised: 05/17/2021] [Accepted: 05/18/2021] [Indexed: 12/23/2022]
Abstract
Background and Objective: The use of computer-aided detection (CAD) systems can help radiologists make objective decisions and reduce the dependence on invasive techniques. In this study, a CAD system that detects and identifies prostate cancer from diffusion-weighted imaging (DWI) is developed. Methods: The proposed system first uses non-negative matrix factorization (NMF) to integrate three different types of features for the accurate segmentation of prostate regions. Then, discriminatory features in the form of apparent diffusion coefficient (ADC) volumes are estimated from the segmented regions. The ADC maps that constitute these volumes are labeled by a radiologist to identify the ADC maps with malignant or benign tumors. Finally, transfer learning is used to fine-tune two different previously-trained convolutional neural network (CNN) models (AlexNet and VGGNet) for detecting and identifying prostate cancer. Results: Multiple experiments were conducted to evaluate the accuracy of different CNN models using DWI datasets acquired at nine distinct b-values that included both high and low b-values. The average accuracy of AlexNet at the nine b-values was 89.2±1.5% with average sensitivity and specificity of 87.5±2.3% and 90.9±1.9%. These results improved with the use of the deeper CNN model (VGGNet). The average accuracy of VGGNet was 91.2±1.3% with sensitivity and specificity of 91.7±1.7% and 90.1±2.8%. Conclusions: The results of the conducted experiments emphasize the feasibility and accuracy of the developed system and the improvement of this accuracy using the deeper CNN.
|
45
|
Seetharaman A, Bhattacharya I, Chen LC, Kunder CA, Shao W, Soerensen SJC, Wang JB, Teslovich NC, Fan RE, Ghanouni P, Brooks JD, Too KJ, Sonn GA, Rusu M. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med Phys 2021; 48:2960-2972. [PMID: 33760269 PMCID: PMC8360053 DOI: 10.1002/mp.14855] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2020] [Revised: 01/31/2021] [Accepted: 03/16/2021] [Indexed: 01/05/2023] Open
Abstract
PURPOSE While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS Our SPCNet model accurately detected aggressive prostate cancer.
Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
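The area under the receiver operating characteristic curve used to evaluate such lesion detectors equals the probability that a randomly chosen positive outscores a randomly chosen negative (the Mann-Whitney statistic, with ties counted as one half). A direct O(n·m) sketch, independent of the authors' evaluation code:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs in which the positive scores higher
    (ties contribute 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

For large lesion sets one would sort and use rank sums instead of the double loop, but the pairwise form makes the probabilistic meaning of figures like 0.75 and 0.80 explicit.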
Affiliation(s)
- Arun Seetharaman
- Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
| | - Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Leo C Chen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Christian A Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Simon J C Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
| | - Jeffrey B Wang
- Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Nikola C Teslovich
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Richard E Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - James D Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Katherine J Too
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA.,Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA, 94304, USA
| | - Geoffrey A Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA.,Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| | - Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
| |
46
Abstract
PURPOSE OF REVIEW Over the last decade, major advancements in artificial intelligence technology have emerged and revolutionized the extent to which physicians are able to personalize treatment modalities and care for their patients. Artificial intelligence technologies aimed at mimicking/simulating human mental processes, such as deep learning artificial neural networks (ANNs), are composed of a collection of individual units known as 'artificial neurons'. These 'neurons', when arranged and interconnected in complex architectural layers, are capable of analyzing the most complex patterns. The aim of this systematic review is to give a comprehensive summary of the contemporary applications of deep learning ANNs in urological medicine. RECENT FINDINGS Fifty-five articles were included in this systematic review and each article was assigned an 'intermediate' score based on its overall quality. Of these 55 articles, nine studies were prospective, but no nonrandomized controlled trials were identified. SUMMARY In urological medicine, the application of novel artificial intelligence technologies, particularly ANNs, has been considered a promising step toward improving physicians' diagnostic capabilities, especially with regard to predicting the aggressiveness and recurrence of various disorders. For benign urological disorders, for example, the use of highly predictive and reliable algorithms could help improve the diagnosis of male infertility, urinary tract infections, and pediatric malformations. In addition, articles with anecdotal experiences shed light on the potential of artificial intelligence-assisted surgeries, such as with the aid of virtual reality or augmented reality.
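The review's description of ANNs, individual artificial neurons arranged in interconnected layers, corresponds to a short forward-pass computation. The sketch below is a generic toy illustration of that structure (weights, layer sizes, and the ReLU/softmax choices are assumptions, not taken from any reviewed study):

```python
import numpy as np

# A minimal two-layer artificial neural network: each "neuron" is one row of a
# weight matrix; layers are interconnected by matrix-vector products.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)   # hidden layer: 16 neurons, 4 inputs
W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)    # output layer: 2 classes

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    """Forward pass: each layer applies weights, a bias, and a nonlinearity."""
    h = relu(W1 @ x + b1)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()   # class probabilities summing to 1

p = forward(np.array([0.5, -1.0, 2.0, 0.1]))
```

Deep networks used in the reviewed studies stack many more such layers (often convolutional), but the per-layer computation is the same pattern.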
47
Chen H, Guo S, Hao Y, Fang Y, Fang Z, Wu W, Liu Z, Li S. Auxiliary Diagnosis for COVID-19 with Deep Transfer Learning. J Digit Imaging 2021; 34:231-241. [PMID: 33634413 PMCID: PMC7906243 DOI: 10.1007/s10278-021-00431-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 01/21/2021] [Accepted: 02/02/2021] [Indexed: 12/30/2022] Open
Abstract
The aim was to assist physicians in identifying COVID-19 and its manifestations through automatic recognition and classification of COVID-19 in chest CT images with deep transfer learning. In this retrospective study, the chest CT image dataset covered 422 subjects, including 72 confirmed COVID-19 subjects (260 studies, 30,171 images), 252 other pneumonia subjects (252 studies, 26,534 images), comprising 158 viral pneumonia subjects and 94 pulmonary tuberculosis subjects, and 98 normal subjects (98 studies, 29,838 images). In the experiment, subjects were split into training (70%), validation (15%), and testing (15%) sets. We utilized the convolutional blocks of ResNets pretrained on public social image collections and modified the top fully connected layer to suit our task (COVID-19 recognition). In addition, we tested the proposed method on a fine-grained classification task; that is, the COVID-19 images were further split into three main manifestations (ground-glass opacity with 12,924 images, consolidation with 7,418 images, and fibrotic streaks with 7,338 images). The same 70%-15%-15% data partitioning strategy was adopted. The best performance obtained by the pretrained ResNet50 model was 94.87% sensitivity, 88.46% specificity, and 91.21% accuracy for COVID-19 versus all other groups, and an overall accuracy of 89.01% for the three-category classification in the testing set. Consistent performance was observed in the COVID-19 manifestation classification task on a per-image basis, where the best overall accuracy of 94.08% and AUC of 0.993 were obtained by the pretrained ResNet18 (P < 0.05). All the proposed models achieved satisfactory performance and are thus promising for both practical application and further study. Transfer learning is worth exploring for the recognition and classification of COVID-19 on CT images with limited training data.
The approach not only achieved higher sensitivity (COVID-19 vs. the rest) but also took far less time than radiologists, and is therefore expected to provide auxiliary diagnosis and reduce radiologists' workload.
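The transfer-learning recipe the abstract describes, keep the pretrained convolutional blocks frozen and retrain only a replaced top layer, can be sketched with NumPy. This is a deliberately tiny stand-in, not the paper's ResNet: the random "frozen" projection, toy data, learning rate, and class count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
FROZEN_W = rng.normal(size=(64, 32))   # stand-in for pretrained conv blocks; never updated

def frozen_features(x):
    """Feature extraction with the frozen backbone (ReLU of a fixed projection)."""
    return np.maximum(x @ FROZEN_W, 0.0)

# Toy data: 100 flattened "images", 3 classes
# (e.g. COVID-19 / other pneumonia / normal).
X = rng.normal(size=(100, 64))
y = rng.integers(0, 3, size=100)
F = frozen_features(X)                 # extract once; the backbone stays fixed

# Train only the replaced top layer (softmax regression) by gradient descent.
W_top = np.zeros((32, 3))
onehot = np.eye(3)[y]
losses = []
for _ in range(200):
    logits = F @ W_top
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    losses.append(-np.log(probs[np.arange(100), y]).mean())   # cross-entropy
    W_top -= 1e-3 * F.T @ (probs - onehot) / 100              # update head only
```

Because only the small top layer is trained, the approach needs far less labeled data than training a full network from scratch, which is the point the abstract makes about limited training data.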
Affiliation(s)
- Hongtao Chen
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Shuanshuan Guo
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Yanbin Hao
- School of Data Science, University of Science and Technology of China, Hefei, 230026, Anhui, China; Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China
- Yijie Fang
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
- Zhaoxiong Fang
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Wenhao Wu
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
- Zhigang Liu
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
- Shaolin Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
48
Avanzo M, Wei L, Stancanello J, Vallières M, Rao A, Morin O, Mattonen SA, El Naqa I. Machine and deep learning methods for radiomics. Med Phys 2021; 47:e185-e202. [PMID: 32418336 DOI: 10.1002/mp.13678] [Citation(s) in RCA: 205] [Impact Index Per Article: 68.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 05/22/2019] [Accepted: 06/13/2019] [Indexed: 12/12/2022] Open
Abstract
Radiomics is an emerging area in quantitative image analysis that aims to relate large-scale extracted imaging information to clinical and biological endpoints. The development of quantitative imaging methods along with machine learning has enabled the opportunity to move data science research towards translation for more personalized cancer treatments. Accumulating evidence has indeed demonstrated that noninvasive advanced imaging analytics, that is, radiomics, can reveal key components of tumor phenotype for multiple three-dimensional lesions at multiple time points over and beyond the course of treatment. These developments in the use of CT, PET, US, and MR imaging could augment patient stratification and prognostication, buttressing emerging targeted therapeutic approaches. In recent years, deep learning architectures have demonstrated their tremendous potential for image segmentation, reconstruction, recognition, and classification. Many powerful open-source and commercial platforms are currently available to embark on new research areas of radiomics. Quantitative imaging research, however, is complex, and key statistical principles should be followed to realize its full potential. The field of radiomics, in particular, requires a renewed focus on optimal study design/reporting practices and standardization of image acquisition, feature calculation, and rigorous statistical analysis for the field to move forward. In this article, the role of machine and deep learning as a major computational vehicle for advanced model building of radiomics-based signatures or classifiers, and diverse clinical applications, working principles, research opportunities, and available computational platforms for radiomics will be reviewed with examples drawn primarily from oncology. We also address issues related to common applications in medical physics, such as standardization, feature extraction, model building, and validation.
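The "feature calculation" step central to the radiomics pipeline can be illustrated with a few first-order intensity features computed over a region of interest. This is a minimal sketch of the idea, not a standardized implementation: the feature set, 32-bin histogram for entropy, and function name are assumptions, and real pipelines define many more first-order, texture, shape, and wavelet features.

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features from the intensities in an ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before taking logs
    return {
        "mean": x.mean(),
        "variance": x.var(),
        "skewness": ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12),
        "entropy": -(p * np.log2(p)).sum(),   # histogram-based intensity entropy
    }

rng = np.random.default_rng(0)
feats = first_order_features(rng.normal(size=(16, 16)))   # toy stand-in for an ROI
```

Standardizing exactly these definitions (bin width, normalization, discretization) across sites is one of the reproducibility issues the article highlights.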
Affiliation(s)
- Michele Avanzo
- Department of Medical Physics, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, Aviano, PN, 33081, Italy
- Lise Wei
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA
- Martin Vallières
- Medical Physics Unit, McGill University, Montreal, QC, Canada; Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, 94143, USA
- Arvind Rao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA; Department of Computational Medicine & Bioinformatics, University of Michigan, Ann Arbor, MI, 48103, USA
- Olivier Morin
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, 94143, USA
- Sarah A Mattonen
- Department of Radiology, Stanford University, Stanford, CA, 94305, USA
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA
49
Magnetic Resonance Imaging Based Radiomic Models of Prostate Cancer: A Narrative Review. Cancers (Basel) 2021; 13:cancers13030552. [PMID: 33535569 PMCID: PMC7867056 DOI: 10.3390/cancers13030552] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 01/18/2021] [Accepted: 01/27/2021] [Indexed: 12/11/2022] Open
Abstract
Simple Summary The increasing interest in implementing artificial intelligence in radiomic models has occurred alongside advancement in the tools used for computer-aided diagnosis. Such tools typically apply both statistical and machine learning methodologies to assess the various modalities used in medical image analysis. Specific to prostate cancer, the radiomics pipeline has multiple facets that are amenable to improvement. This review discusses the steps of a magnetic resonance imaging based radiomics pipeline. Present successes, existing opportunities for refinement, and the most pertinent pending steps leading to clinical validation are highlighted. Abstract The management of prostate cancer (PCa) is dependent on biomarkers of biological aggression. This includes an invasive biopsy to facilitate a histopathological assessment of the tumor's grade. This review explores the technical processes of applying magnetic resonance imaging based radiomic models to the evaluation of PCa. By exploring how a deep radiomics approach further optimizes the prediction of a PCa's grade group, it will become clear how this integration of artificial intelligence mitigates the major technological challenges faced by a traditional radiomic model: image acquisition, small data sets, image processing, labeling/segmentation, informative features, predicting molecular features, and incorporating predictive models. Other potential impacts of artificial intelligence on the personalized treatment of PCa will also be discussed. The role of deep radiomics analysis, a deep texture analysis that extracts features from convolutional neural network layers, will be highlighted. Existing clinical work and upcoming clinical trials will be reviewed, directing investigators to pertinent future directions in the field.
For future progress to result in clinical translation, the field will likely require multi-institutional collaboration in producing prospectively populated and expertly labeled imaging libraries.
50
Vente CD, Vos P, Hosseinzadeh M, Pluim J, Veta M. Deep Learning Regression for Prostate Cancer Detection and Grading in Bi-Parametric MRI. IEEE Trans Biomed Eng 2021; 68:374-383. [DOI: 10.1109/tbme.2020.2993528] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]