1. Rai HM, Yoo J, Razaque A. Comparative analysis of machine learning and deep learning models for improved cancer detection: A comprehensive review of recent advancements in diagnostic techniques. Expert Syst Appl 2024; 255:124838. DOI: 10.1016/j.eswa.2024.124838

2. Zhang J, Yin X, Wang K, Wang L, Yang Z, Zhang Y, Wu P, Zhao C. External validation of AI for detecting clinically significant prostate cancer using biparametric MRI. Abdom Radiol (NY) 2024. PMID: 39225718; DOI: 10.1007/s00261-024-04560-w
Affiliations:
- Jun Zhang: First Hospital of Qinhuangdao, Qinhuangdao, China; Beijing Friendship Hospital, Beijing, China
- Xuemei Yin: First Hospital of Qinhuangdao, Qinhuangdao, China; Tianjin Medical University General Hospital, Tianjin, China
- Kexin Wang: School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Liang Wang: Beijing Friendship Hospital, Beijing, China
- Yaofeng Zhang: Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Pengsheng Wu: Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China

3. Gunashekar DD, Bielak L, Oerther B, Benndorf M, Nedelcu A, Hickey S, Zamboglou C, Grosu AL, Bock M. Comparison of data fusion strategies for automated prostate lesion detection using mpMRI correlated with whole mount histology. Radiat Oncol 2024; 19:96. PMID: 39080735; PMCID: PMC11287985; DOI: 10.1186/s13014-024-02471-0
Abstract
BACKGROUND In this work, we compare input-level, feature-level, and decision-level data fusion techniques for automatic detection of clinically significant prostate lesions (csPCa). METHODS Multiple deep learning CNN architectures were developed using the U-Net as the baseline. The CNNs take as input either both multiparametric MRI images (T2W, ADC, and high b-value) and quantitative clinical data (prostate-specific antigen (PSA), PSA density (PSAD), prostate gland volume, and gross tumor volume (GTV)), or only mpMRI images (n = 118). In addition, co-registered ground-truth data from whole-mount histopathology images (n = 22) were used as a test set for evaluation. RESULTS For early/intermediate/late fusion, the CNNs achieved a precision of 0.41/0.51/0.61, a recall of 0.18/0.22/0.25, an average precision of 0.13/0.19/0.27, and F scores of 0.55/0.67/0.76. The Dice Sorensen Coefficient (DSC) was used to evaluate the influence of combining mpMRI with parametric clinical data on csPCa detection: compared against the ground truth, the CNNs trained with mpMRI plus parametric clinical data achieved a DSC of 0.30/0.34/0.36, versus 0.26/0.33/0.34 for the CNNs trained with only mpMRI images. Additionally, we evaluated the influence of each mpMRI input channel on csPCa detection and obtained a DSC of 0.14/0.25/0.28. CONCLUSION The results show that the decision-level fusion network performs better for the task of prostate lesion detection. Combining mpMRI data with quantitative clinical data does not show significant differences between these networks (p = 0.26/0.62/0.85). CNNs trained with all mpMRI data outperform CNNs with fewer input channels, which is consistent with current clinical protocols, where the same input is used for PI-RADS lesion scoring. TRIAL REGISTRATION The trial was registered retrospectively at the German Register for Clinical Studies (DRKS) under proposal numbers 476/14 and 476/19.
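
A minimal PyTorch sketch of the three fusion levels compared above; the layer sizes, names, and clinical-feature handling are illustrative assumptions, not the authors' implementation:

import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a U-Net-style encoder over mpMRI channels."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
    def forward(self, x):
        return self.net(x)

def input_level_fusion(mri, clinical):
    # Early fusion: broadcast scalar clinical values (PSA, PSAD, volumes)
    # to image-sized maps and stack them as extra input channels.
    maps = clinical[:, :, None, None].expand(-1, -1, *mri.shape[2:])
    return TinyEncoder(mri.shape[1] + clinical.shape[1])(torch.cat([mri, maps], dim=1))

def feature_level_fusion(mri, clinical):
    # Intermediate fusion: concatenate clinical data with encoded image features.
    return torch.cat([TinyEncoder(mri.shape[1])(mri), clinical], dim=1)

def decision_level_fusion(p_img, p_clin, w=0.5):
    # Late fusion: combine per-lesion probabilities from two separate branches.
    return w * p_img + (1 - w) * p_clin

mri = torch.randn(2, 3, 64, 64)   # T2W, ADC, high-b-value channels (toy data)
clinical = torch.randn(2, 4)      # PSA, PSAD, gland volume, GTV (toy data)
print(input_level_fusion(mri, clinical).shape)    # torch.Size([2, 16])
print(feature_level_fusion(mri, clinical).shape)  # torch.Size([2, 20])
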
Affiliations:
- Deepa Darshini Gunashekar: Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Lars Bielak: Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Benedict Oerther: Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Matthias Benndorf: Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andrea Nedelcu: Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Samantha Hickey: Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Constantinos Zamboglou: Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Oncology Center, European University Cyprus, Limassol, Cyprus
- Anca-Ligia Grosu: Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany
- Michael Bock: Division of Medical Physics, Department of Diagnostic and Interventional Radiology, University Medical Center Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany

4. Pahud de Mortanges A, Luo H, Shu SZ, Kamath A, Suter Y, Shelan M, Pöllinger A, Reyes M. Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging. NPJ Digit Med 2024; 7:195. PMID: 39039248; PMCID: PMC11263688; DOI: 10.1038/s41746-024-01190-w
Abstract
Explainable artificial intelligence (XAI) has experienced a vast increase in recognition over the last few years. While the technical developments are manifold, less focus has been placed on the clinical applicability and usability of such systems. Moreover, little attention has been given to XAI systems that can handle multimodal and longitudinal data, which we postulate are important features in many clinical workflows. In this study, we review, from a clinical perspective, the current state of XAI for multimodal and longitudinal datasets and highlight the associated challenges. Additionally, we propose the XAI orchestrator, a component intended to help clinicians synthesize multimodal and longitudinal data, the resulting AI predictions, and the corresponding explainability output. We propose several desirable properties of the XAI orchestrator, such as being adaptive, hierarchical, interactive, and uncertainty-aware.
Affiliations:
- Haozhe Luo: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Shelley Zixin Shu: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Amith Kamath: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Yannick Suter: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Mohamed Shelan: Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Alexander Pöllinger: Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland
- Mauricio Reyes: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland

5. Cui H, Zhao Y, Xiong S, Feng Y, Li P, Lv Y, Chen Q, Wang R, Xie P, Luo Z, Cheng S, Wang W, Li X, Xiong D, Cao X, Bai S, Yang A, Cheng B. Diagnosing Solid Lesions in the Pancreas With Multimodal Artificial Intelligence: A Randomized Crossover Trial. JAMA Netw Open 2024; 7:e2422454. PMID: 39028670; PMCID: PMC11259905; DOI: 10.1001/jamanetworkopen.2024.22454
Abstract
Importance Diagnosing solid lesions in the pancreas via endoscopic ultrasonographic (EUS) images is challenging. Artificial intelligence (AI) has the potential to help with such diagnosis, but existing AI models focus solely on a single modality. Objective To advance the clinical diagnosis of solid lesions in the pancreas through developing a multimodal AI model integrating both clinical information and EUS images. Design, Setting, and Participants In this randomized crossover trial conducted from January 1 to June 30, 2023, from 4 centers across China, 12 endoscopists of varying levels of expertise were randomly assigned to diagnose solid lesions in the pancreas with or without AI assistance. Endoscopic ultrasonographic images and clinical information of 439 patients from 1 institution who had solid lesions in the pancreas between January 1, 2014, and December 31, 2022, were collected to train and validate the joint-AI model, while 189 patients from 3 external institutions were used to evaluate the robustness and generalizability of the model. Intervention Conventional or AI-assisted diagnosis of solid lesions in the pancreas. Main Outcomes and Measures In the retrospective dataset, the performance of the joint-AI model was evaluated internally and externally. In the prospective dataset, diagnostic performance of the endoscopists with or without the AI assistance was compared. Results The retrospective dataset included 628 patients (400 men [63.7%]; mean [SD] age, 57.7 [27.4] years) who underwent EUS procedures. A total of 130 patients (81 men [62.3%]; mean [SD] age, 58.4 [11.7] years) were prospectively recruited for the crossover trial. The area under the curve of the joint-AI model ranged from 0.996 (95% CI, 0.993-0.998) in the internal test dataset to 0.955 (95% CI, 0.940-0.968), 0.924 (95% CI, 0.888-0.955), and 0.976 (95% CI, 0.942-0.995) in the 3 external test datasets, respectively. The diagnostic accuracy of novice endoscopists was significantly enhanced with AI assistance (0.69 [95% CI, 0.61-0.76] vs 0.90 [95% CI, 0.83-0.94]; P < .001), and the supplementary interpretability information alleviated the skepticism of the experienced endoscopists. Conclusions and Relevance In this randomized crossover trial of diagnosing solid lesions in the pancreas with or without AI assistance, the joint-AI model demonstrated positive human-AI interaction, which suggested its potential to facilitate a clinical diagnosis. Nevertheless, future randomized clinical trials are warranted. Trial Registration ClinicalTrials.gov Identifier: NCT05476978.
Affiliations:
- Haochen Cui: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yuchong Zhao: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Si Xiong: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yunlu Feng: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Peng Li: Department of Gastroenterology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Ying Lv: Department of Gastroenterology, Nanjing Drum Tower Hospital, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing, Jiangsu, China
- Qian Chen: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ronghua Wang: Department of Surgery, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Pengtao Xie: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla
- Zhenlong Luo: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Sideng Cheng: Department of Computer Science, Algoma University, Sault Ste. Marie, Ontario, Canada
- Wujun Wang: Wuhan EndoAngel Medical Technology Company, Wuhan, China
- Xing Li: Wuhan EndoAngel Medical Technology Company, Wuhan, China
- Dingkun Xiong: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Xinyuan Cao: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Shuya Bai: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Aiming Yang: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Bin Cheng: Department of Gastroenterology and Hepatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

6. Muksimova S, Umirzakova S, Kang S, Cho YI. CerviLearnNet: Advancing cervical cancer diagnosis with reinforcement learning-enhanced convolutional networks. Heliyon 2024; 10:e29913. PMID: 38694035; PMCID: PMC11061669; DOI: 10.1016/j.heliyon.2024.e29913
Abstract
Cervical cancer is one of the most dangerous diseases women can face, and it has many negative consequences. Regular screening and treatment of precancerous lesions play a vital role in the fight against cervical cancer. It is becoming increasingly common in medical practice to predict the early stages of serious illnesses, such as heart attacks, kidney failure, and cancer, using machine learning-based techniques. To address the challenges of early detection, we propose the use of auxiliary modules and a special residual block to capture contextual interactions between object classes and to support the object reference strategy. Unlike existing state-of-the-art classification methods, we create a new architecture called the Reinforcement Learning Cancer Network, "RL-CancerNet", which diagnoses cervical cancer with high accuracy. We trained and tested our method on two well-known publicly available datasets, SipaKMeD and Herlev, to assess it and enable comparisons with earlier methods. Images in these datasets had to be labeled manually. Our study shows that, compared to previous approaches to classifying cervical cancer from early cellular changes, the proposed approach produces more reliable and stable results across datasets of vastly different sizes, indicating that it should also be effective on other datasets.
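
For illustration, a generic residual block of the kind the abstract alludes to, in PyTorch; the actual RL-CancerNet block and its auxiliary modules are not reproduced here, so the structure and channel counts below are assumptions:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU()
    def forward(self, x):
        # The skip connection lets contextual information bypass the convolutions.
        return self.act(x + self.body(x))

x = torch.randn(1, 32, 128, 128)   # e.g. a cervical-cell image feature map (toy)
print(ResidualBlock(32)(x).shape)  # torch.Size([1, 32, 128, 128])
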
Affiliations:
- Shakhnoza Muksimova, Sabina Umirzakova, Seokwhan Kang, and Young Im Cho: Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, South Korea

7. Huang J, He C, Xu P, Song B, Zhao H, Yin B, He M, Lu X, Wu J, Wang H. Development and validation of a clinical-radiomics model for prediction of prostate cancer: a multicenter study. World J Urol 2024; 42:275. PMID: 38689190; DOI: 10.1007/s00345-024-04995-2
Abstract
PURPOSE To develop an early diagnosis model for prostate cancer based on clinical-radiomics features and improve the accuracy of imaging-based diagnosis of prostate cancer. METHODS The multicenter study enrolled a total of 449 patients with prostate cancer from December 2017 to January 2022. We retrospectively collected information from 342 patients who underwent prostate biopsy at Minhang Hospital. We extracted T2WI images with 3D Slicer and manually marked the prostate area using mask tools. The radiomics features were extracted in Python using the "Pyradiomics" module. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for dimensionality reduction and feature selection, and the radiomics score was calculated according to the coefficients. Multivariate logistic regression analysis was used to develop the predictive models. We incorporated the radiomics score, PI-RADS, and clinical features, and presented the result as a nomogram. The model was validated using a cohort of 107 patients from Xuhui Hospital. RESULTS In total, 110 effective radiomics features were extracted. Finally, 9 features were significantly associated with the diagnosis of prostate cancer, from which we calculated the radiomics score. The predictors contained in the individualized prediction nomogram included age, fPSA/tPSA, PI-RADS, and radiomics score. The clinical-radiomics model showed good discrimination in the validation cohort (C-index = 0.88). CONCLUSION This study presents a clinical-radiomics model that incorporates age, fPSA/tPSA, PI-RADS, and the radiomics score, which can be conveniently used for individualized prediction of prostate cancer before prostate biopsy.
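
A hedged scikit-learn sketch of the pipeline just described (LASSO feature selection, a coefficient-weighted radiomics score, then a logistic model with clinical predictors); the data and hyperparameters below are toy assumptions, not the study's:

import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(342, 110))   # 110 radiomics features (toy training cohort)
y = rng.integers(0, 2, size=342)      # biopsy outcome (toy labels)

lasso = LassoCV(cv=5).fit(X_rad, y)
selected = np.flatnonzero(lasso.coef_)               # features kept by LASSO
rad_score = X_rad @ lasso.coef_ + lasso.intercept_   # coefficient-weighted score

clinical = rng.normal(size=(342, 3))                 # age, fPSA/tPSA, PI-RADS (toy)
X_full = np.column_stack([clinical, rad_score])
model = LogisticRegression().fit(X_full, y)          # basis for a nomogram
print(len(selected), model.score(X_full, y))
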
Affiliations:
- Jiaqi Huang: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Chang He: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Peirong Xu: Department of Urology, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China; Department of Urology, Zhongshan Hospital, Fudan University, 180th Fengling Rd, Xuhui District, Shanghai, 200032, China
- Bin Song: Department of Radiology, Minhang Hospital, Fudan University, Shanghai, China
- Hainan Zhao: Department of Radiology, Minhang Hospital, Fudan University, Shanghai, China
- Bingde Yin: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Minke He: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Xuwei Lu: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Jiawen Wu: Department of Urology, Minhang Hospital, Fudan University, Shanghai, China
- Hang Wang: Department of Urology, Zhongshan Hospital, Fudan University, 180th Fengling Rd, Xuhui District, Shanghai, 200032, China

8. Yin X, Wang K, Wang L, Yang Z, Zhang Y, Wu P, Zhao C, Zhang J. Algorithms for classification of sequences and segmentation of prostate gland: an external validation study. Abdom Radiol (NY) 2024; 49:1275-1287. PMID: 38436698; DOI: 10.1007/s00261-024-04241-8
Abstract
OBJECTIVES The aim of the study was to externally validate two AI models for the classification of prostate mpMRI sequences and the segmentation of the prostate gland on T2WI. MATERIALS AND METHODS MpMRI data from 719 patients were retrospectively collected from two hospitals, using nine MR scanners from four different vendors, over the period from February 2018 to May 2022. The Med3D pretrained deep learning architecture was used for image classification, and UNet-3D was used to segment the prostate gland. The images were classified into one of nine image types by the mode. The segmentation model was validated using T2WI images, and segmentation accuracy was evaluated with the Dice similarity coefficient (DSC), volumetric similarity (VS), and average Hausdorff distance (AHD). Finally, the efficacy of the models was compared across MR field strengths and sequences. RESULTS In total, 20,551 image groups were obtained from 719 MR studies. The classification model accuracy was 99%, with a kappa of 0.932. The precision, recall, and F1 values for the nine image types showed statistically significant differences (all P < 0.001). The accuracy for 1.436-T, 1.5-T, and 3.0-T scanners was 87%, 86%, and 98%, respectively (P < 0.001). For the segmentation model, the median DSC ranged from 0.942 to 0.955, the median VS from 0.974 to 0.982, and the median AHD from 5.55 to 6.49 mm; these values also differed significantly across the three magnetic field strengths (all P < 0.001). CONCLUSION The AI models for mpMRI image classification and prostate segmentation demonstrated good performance during external validation and could enhance the efficiency of prostate volume measurement and cancer detection with mpMRI. CLINICAL RELEVANCE STATEMENT These models can greatly improve work efficiency in cancer detection, prostate volume measurement, and guided biopsies.
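
A minimal NumPy sketch of two of the overlap metrics named above, the Dice similarity coefficient (DSC) and volumetric similarity (VS), on toy masks; the AHD is omitted because it requires a distance transform:

import numpy as np

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def volumetric_similarity(pred, gt):
    return 1.0 - abs(int(pred.sum()) - int(gt.sum())) / (pred.sum() + gt.sum())

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True  # model output
gt = np.zeros((64, 64), dtype=bool); gt[12:42, 12:42] = True      # ground truth
print(round(dice(pred, gt), 3), round(volumetric_similarity(pred, gt), 3))
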
Affiliations:
- Xuemei Yin: Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China; Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Kexin Wang: School of Basic Medical Sciences, Capital Medical University, 100052, Beijing, China
- Liang Wang: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Zhenghan Yang: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Yaofeng Zhang: Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Pengsheng Wu: Beijing Smart Tree Medical Technology Co. Ltd, 100011, Beijing, China
- Chenglin Zhao: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, 100050, Beijing, China
- Jun Zhang: Department of Medical Imaging, First Hospital of Qinhuangdao, 066000, Qinhuangdao City, Hebei Province, China

9. Zheng H, Hung ALY, Miao Q, Song W, Scalzo F, Raman SS, Zhao K, Sung K. AtPCa-Net: anatomical-aware prostate cancer detection network on multi-parametric MRI. Sci Rep 2024; 14:5740. PMID: 38459100; PMCID: PMC10923873; DOI: 10.1038/s41598-024-56405-7
Abstract
Multi-parametric MRI (mpMRI) is widely used for prostate cancer (PCa) diagnosis. Deep learning models show good performance in detecting PCa on mpMRI, but domain-specific, PCa-related anatomical information is sometimes overlooked and not fully explored, even by state-of-the-art deep learning models, potentially causing suboptimal performance in PCa detection. Symmetry-related anatomical information is commonly used when distinguishing PCa lesions from visually similar but benign prostate tissue. In addition, different combinations of mpMRI findings are used to evaluate the aggressiveness of PCa for abnormal findings located in different prostate zones. In this study, we investigate these domain-specific anatomical properties in PCa diagnosis and how they can be adopted into a deep learning framework to improve the model's detection performance. We propose an anatomical-aware PCa detection network (AtPCa-Net) for PCa detection on mpMRI. Experiments show that AtPCa-Net can better utilize anatomy-related information, and the proposed anatomical-aware designs help improve overall model performance on both PCa detection and patient-level classification.
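
One simple, hypothetical way to expose the left-right symmetry cue mentioned above to a CNN is to append a horizontally mirrored copy of the input as extra channels; this is an illustration only, not the AtPCa-Net design:

import torch

def add_symmetry_channels(mpmri):               # mpmri: (B, C, H, W)
    mirrored = torch.flip(mpmri, dims=[-1])     # flip along the left-right axis
    return torch.cat([mpmri, mirrored], dim=1)  # (B, 2C, H, W)

x = torch.randn(2, 3, 96, 96)                   # T2W, ADC, DWI channels (toy)
print(add_symmetry_channels(x).shape)           # torch.Size([2, 6, 96, 96])
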
Affiliations:
- Haoxin Zheng: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA; Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
- Alex Ling Yu Hung: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA; Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA
- Qi Miao: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
- Weinan Song: Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, 90095, USA
- Fabien Scalzo: Computer Science, University of California, Los Angeles, Los Angeles, 90095, USA; The Seaver College, Pepperdine University, Los Angeles, 90363, USA
- Steven S Raman: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
- Kai Zhao: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA
- Kyunghyun Sung: Radiological Sciences, University of California, Los Angeles, Los Angeles, 90095, USA

10. Rai HM, Yoo J, Atif Moqurrab S, Dashkevych S. Advancements in traditional machine learning techniques for detection and diagnosis of fatal cancer types: Comprehensive review of biomedical imaging datasets. Measurement 2024; 225:114059. DOI: 10.1016/j.measurement.2023.114059

11. Tang J, Zheng X, Wang X, Mao Q, Xie L, Wang R. Computer-aided detection of prostate cancer in early stages using multi-parameter MRI: A promising approach for early diagnosis. Technol Health Care 2024; 32:125-133. PMID: 38759043; PMCID: PMC11191472; DOI: 10.3233/thc-248011
Abstract
BACKGROUND Transrectal ultrasound-guided prostate biopsy is the gold-standard diagnostic test for prostate cancer, but it is an invasive, non-targeted puncture examination with a high false-negative rate. OBJECTIVE In this study, we aimed to develop a computer-assisted prostate cancer diagnosis method based on multiparametric MRI (mpMRI) images. METHODS We retrospectively collected 106 patients who underwent radical prostatectomy after diagnosis by prostate biopsy. mpMRI images, including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) sequences, were analyzed. We extracted regions of interest (ROIs) covering the tumor and a benign area on the three sequential axial MRI images at the same level. ROI data from 433 mpMRI images were obtained, of which 202 were benign and 231 were malignant. Of those, 50 benign and 50 malignant images were used for training, and the remaining 333 images were used for verification. Five main feature groups, including histogram, GLCM, GLGCM, wavelet-based multi-fractional Brownian motion features, and Minkowski functional features, were extracted from the mpMRI images. The selected feature parameters were analyzed in MATLAB, and the three analysis methods with the highest accuracy were selected. RESULTS For prostate cancer identification based on mpMRI images, the system, using 58 texture features and 3 classification algorithms, namely Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Ensemble Learning (EL), performed well. In the T2WI-based classification results, the SVM achieved the optimal accuracy and AUC values of 64.3% and 0.67. In the DCE-based classification results, the SVM achieved the optimal accuracy and AUC values of 72.2% and 0.77. In the DWI-based classification results, ensemble learning achieved the optimal accuracy and AUC values of 75.1% and 0.82. In the classification results based on all data combined, the SVM achieved the optimal accuracy and AUC values of 66.4% and 0.73. CONCLUSION The proposed computer-aided diagnosis system provides a good assessment of prostate cancer diagnosis, which may reduce the burden on radiologists and improve the early diagnosis of prostate cancer.
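
A hedged sketch of one texture path described above: gray-level co-occurrence matrix (GLCM) statistics from an ROI feeding an SVM, using scikit-image and scikit-learn; the distances, angles, and toy data are illustrative choices, not the study's settings:

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(roi_u8):
    glcm = graycomatrix(roi_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(1)
rois = rng.integers(0, 256, size=(100, 32, 32), dtype=np.uint8)  # toy ROIs
X = np.array([glcm_features(r) for r in rois])
y = rng.integers(0, 2, size=100)                                 # benign/malignant
clf = SVC(probability=True).fit(X, y)
print(X.shape, clf.score(X, y))
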
Affiliations:
- Jianer Tang: Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China; Department of Urology, First Affiliated Hospital of Huzhou Teachers College, Huzhou, Zhejiang, China
- Xiangyi Zheng: Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Xiao Wang: Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Qiqi Mao: Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Liping Xie: Department of Urology, First Affiliated Hospital, Medical College of Zhejiang University, Hangzhou, Zhejiang, China
- Rongjiang Wang: Department of Urology, First Affiliated Hospital of Huzhou Teachers College, Huzhou, Zhejiang, China

12. Sun Z, Wu P, Cui Y, Liu X, Wang K, Gao G, Wang H, Zhang X, Wang X. Deep-Learning Models for Detection and Localization of Visible Clinically Significant Prostate Cancer on Multi-Parametric MRI. J Magn Reson Imaging 2023; 58:1067-1081. PMID: 36825823; DOI: 10.1002/jmri.28608
Abstract
BACKGROUND Deep learning for diagnosing clinically significant prostate cancer (csPCa) is feasible but needs further evaluation in patients with prostate-specific antigen (PSA) levels of 4-10 ng/mL. PURPOSE To explore diffusion-weighted imaging (DWI), alone and in combination with T2-weighted imaging (T2WI), for deep-learning-based models to detect and localize visible csPCa. STUDY TYPE Retrospective. POPULATION One thousand six hundred twenty-eight patients with systematic and cognitive-targeted biopsy confirmation (1007 csPCa, 621 non-csPCa) were divided into model development (N = 1428) and hold-out test (N = 200) datasets. FIELD STRENGTH/SEQUENCE DWI with a diffusion-weighted single-shot gradient echo planar imaging sequence and T2WI with a T2-weighted fast spin echo sequence at 3.0-T and 1.5-T. ASSESSMENT The ground truth of csPCa was annotated by two radiologists in consensus. A diffusion model, with DWI and apparent diffusion coefficient (ADC) as input, and a biparametric model (DWI, ADC, and T2WI as input) were trained based on U-Net. Three radiologists provided the PI-RADS (version 2.1) assessment. The performances were determined at the lesion, location, and patient levels. STATISTICAL TESTS Performance was evaluated using the areas under the ROC curves (AUCs), sensitivity, specificity, and accuracy. A P value <0.05 was considered statistically significant. RESULTS The lesion-level sensitivities of the diffusion model, the biparametric model, and the PI-RADS assessment were 89.0%, 85.3%, and 90.8% (P = 0.289-0.754). At the patient level, the diffusion model had significantly higher sensitivity than the biparametric model (96.0% vs. 90.0%), while there was no significant difference in specificity (77.0% vs. 85.0%, P = 0.096). For location analysis, there were no significant differences in AUCs between the models (sextant-level, 0.895 vs. 0.893, P = 0.777; zone-level, 0.931 vs. 0.917, P = 0.282), and both models had significantly higher AUCs than the PI-RADS assessment (sextant-level, 0.734; zone-level, 0.863). DATA CONCLUSION The diffusion model achieved the best performance in detecting and localizing csPCa in patients with PSA levels of 4-10 ng/mL. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
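
As a hedged illustration of the patient-level analysis above, one common heuristic (not necessarily the study's rule) turns a voxel-wise csPCa probability map from a U-Net into a patient-level score:

import torch

def patient_level_score(prob_map, top_k=100):
    # Average the top-k most suspicious voxels: more stable than a single max.
    flat = prob_map.flatten()
    return flat.topk(min(top_k, flat.numel())).values.mean()

prob_map = torch.rand(24, 128, 128)   # slices x H x W from a detection model (toy)
score = patient_level_score(prob_map)
print(float(score) > 0.5)             # threshold would be chosen on validation data
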
Affiliations:
- Zhaonan Sun: Department of Radiology, Peking University First Hospital, Beijing, China
- Pengsheng Wu: Beijing Smart Tree Medical Technology Co. Ltd, Beijing, China
- Yingpu Cui: Department of Nuclear Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong, China; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
- Xiang Liu: Department of Radiology, Peking University First Hospital, Beijing, China
- Kexin Wang: School of Basic Medical Sciences, Capital Medical University, Beijing, China
- Ge Gao: Department of Radiology, Peking University First Hospital, Beijing, China
- Huihui Wang: Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, Beijing, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, Beijing, China

13. Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023; 27:4110-4119. PMID: 37220032; DOI: 10.1109/jbhi.2023.3279096
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most existing work relies mainly on the extraction of image features while ignoring patients' clinical text information, which can fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize high-quality personalized classification models for each edge node. Subsequently, a Naïve Bayes classifier is devised for classifying patient metadata, and the image and metadata diagnosis results are then aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, simulation results illustrate that, compared with existing methods, our proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
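
A minimal sketch of the weighted decision aggregation just described: a Gaussian Naive Bayes model on patient metadata combined with image-branch probabilities; the weights and data are stand-ins, not the paper's values:

import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
meta = rng.normal(size=(200, 5))        # patient metadata, e.g. age, lesion site (toy)
y = rng.integers(0, 2, size=200)
nb = GaussianNB().fit(meta, y)

p_meta = nb.predict_proba(meta)[:, 1]   # metadata branch probability
p_img = rng.uniform(size=200)           # stand-in for the image-model probability

w_img, w_meta = 0.7, 0.3                # weights would be tuned per edge node
p_joint = w_img * p_img + w_meta * p_meta
print((p_joint > 0.5).mean())
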

14. Retson TA, Eghtedari M. Expanding Horizons: The Realities of CAD, the Promise of Artificial Intelligence, and Machine Learning's Role in Breast Imaging beyond Screening Mammography. Diagnostics (Basel) 2023; 13:2133. PMID: 37443526; PMCID: PMC10341264; DOI: 10.3390/diagnostics13132133
Abstract
Artificial intelligence (AI) applications in mammography have gained significant popular attention; however, AI has the potential to revolutionize other aspects of breast imaging beyond simple lesion detection. AI has the potential to enhance risk assessment by combining conventional factors with imaging and improve lesion detection through a comparison with prior studies and considerations of symmetry. It also holds promise in ultrasound analysis and automated whole breast ultrasound, areas marked by unique challenges. AI's potential utility also extends to administrative tasks such as MQSA compliance, scheduling, and protocoling, which can reduce the radiologists' workload. However, adoption in breast imaging faces limitations in terms of data quality and standardization, generalizability, benchmarking performance, and integration into clinical workflows. Developing methods for radiologists to interpret AI decisions, and understanding patient perspectives to build trust in AI results, will be key future endeavors, with the ultimate aim of fostering more efficient radiology practices and better patient care.
Affiliations:
- Tara A. Retson: Department of Radiology, University of California, San Diego, CA 92093, USA

15. Shetty S, Ananthanarayana VS, Mahale A. Multimodal medical tensor fusion network-based DL framework for abnormality prediction from the radiology CXRs and clinical text reports. Multimed Tools Appl 2023:1-48. PMID: 37362656; PMCID: PMC10119019; DOI: 10.1007/s11042-023-14940-x
Abstract
Pulmonary disease is a commonly occurring abnormality worldwide. Pulmonary diseases include tuberculosis, pneumothorax, cardiomegaly, pulmonary atelectasis, pneumonia, and others, and their timely prognosis is essential. Progress in deep learning (DL) techniques has significantly contributed to the medical domain, specifically in leveraging medical imaging for analysis, prognosis, and therapeutic decisions by clinicians. Many contemporary DL strategies for radiology focus on a single data modality, utilizing imaging features without considering the clinical context that provides more valuable complementary information for clinically consistent prognostic decisions. Also, the selection of the best data fusion strategy is crucial when performing machine learning (ML) or DL operations on multimodal heterogeneous data. We investigated multimodal medical fusion strategies leveraging DL techniques to predict pulmonary abnormality from heterogeneous radiology chest X-rays (CXRs) and clinical text reports. In this research, we propose two effective unimodal and multimodal subnetworks to predict pulmonary abnormality from CXRs and clinical reports. We conducted a comprehensive analysis and compared the performance of the unimodal and multimodal models. The proposed models were applied to standard augmented data and to generated synthetic data to check the models' ability to predict from new and unseen data. The proposed models were thoroughly assessed against the publicly available Indiana University dataset and data collected from a private medical hospital. The proposed multimodal models gave superior results compared to the unimodal models.
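
A minimal sketch of tensor fusion in the sense commonly used for image-text models: an outer product of the two modality embeddings (with bias terms) captures multiplicative cross-modal interactions; the dimensions are invented, and this is not the paper's exact network:

import torch

def tensor_fusion(img_feat, txt_feat):
    one = torch.ones(img_feat.shape[0], 1)
    zi = torch.cat([img_feat, one], dim=1)      # (B, di+1)
    zt = torch.cat([txt_feat, one], dim=1)      # (B, dt+1)
    fused = torch.einsum("bi,bj->bij", zi, zt)  # all pairwise interactions
    return fused.flatten(1)                     # (B, (di+1)*(dt+1))

img = torch.randn(4, 64)   # CXR embedding from an image encoder (toy)
txt = torch.randn(4, 32)   # clinical-report embedding from a text encoder (toy)
print(tensor_fusion(img, txt).shape)  # torch.Size([4, 2145])
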
Affiliations:
- Shashank Shetty: Department of Information Technology, National Institute of Technology Karnataka, Mangalore, 575025, Karnataka, India; Department of Computer Science and Engineering, Nitte (Deemed to be University), NMAM Institute of Technology (NMAMIT), Udupi, 574110, Karnataka, India
- Ananthanarayana V. S.: Department of Information Technology, National Institute of Technology Karnataka, Mangalore, 575025, Karnataka, India
- Ajit Mahale: Department of Radiology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Mangalore, 575001, Karnataka, India

16. Li K, Chen C, Cao W, Wang H, Han S, Wang R, Ye Z, Wu Z, Wang W, Cai L, Ding D, Yuan Z. DeAF: A multimodal deep learning framework for disease prediction. Comput Biol Med 2023; 156:106715. PMID: 36867898; DOI: 10.1016/j.compbiomed.2023.106715
Abstract
Multimodal deep learning models have been applied to disease prediction tasks, but training is difficult due to conflicts between sub-models and fusion modules. To alleviate this issue, we propose a framework that decouples feature alignment and fusion (DeAF), separating multimodal model training into two stages. In the first stage, unsupervised representation learning is conducted, and a modality adaptation (MA) module is used to align the features from the various modalities. In the second stage, a self-attention fusion (SAF) module combines the medical image features and clinical data using supervised learning. Moreover, we apply the DeAF framework to predict the postoperative efficacy of CRS for colorectal cancer and whether patients with mild cognitive impairment (MCI) progress to Alzheimer's disease. The DeAF framework achieves a significant improvement over previous methods. Furthermore, extensive ablation experiments demonstrate the rationality and effectiveness of our framework. In conclusion, our framework enhances the interaction between local medical image features and clinical data and derives more discriminative multimodal features for disease prediction. The framework implementation is available at https://github.com/cchencan/DeAF.
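
A hedged PyTorch sketch in the spirit of the self-attention fusion step described above: aligned image and clinical tokens attend to each other before pooling; the token dimensions and head count are assumptions, not the released DeAF code:

import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

img_tok = torch.randn(8, 1, 64)    # image feature projected to one token (toy)
clin_tok = torch.randn(8, 1, 64)   # clinical data projected to the same width (toy)
tokens = torch.cat([img_tok, clin_tok], dim=1)   # (B, 2, 64)

fused, _ = attn(tokens, tokens, tokens)          # self-attention across modalities
patient_repr = fused.mean(dim=1)                 # (B, 64) fed to the disease head
print(patient_repr.shape)
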
Affiliations:
- Kangshun Li: College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
- Can Chen: College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
- Wuteng Cao: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, 510000, China
- Hui Wang: Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China
- Shuai Han: General Surgery Center, Department of Gastrointestinal Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, 510000, China
- Renjie Wang: Department of Colorectal Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200000, China
- Zaisheng Ye: Department of Gastrointestinal Surgical Oncology, Fujian Cancer Hospital and Fujian Medical University Cancer Hospital, Fuzhou, 350000, China
- Zhijie Wu: Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China
- Wenxiang Wang: College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
- Leng Cai: College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510000, China
- Deyu Ding: Department of Economics, University of Konstanz, Konstanz, Germany
- Zixu Yuan: Department of Colorectal Surgery, Department of General Surgery, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510000, China

17. Mokoatle M, Marivate V, Mapiye D, Bornman R, Hayes VM. A review and comparative study of cancer detection using machine learning: SBERT and SimCSE application. BMC Bioinformatics 2023; 24:112. PMID: 36959534; PMCID: PMC10037872; DOI: 10.1186/s12859-023-05235-x
Abstract
BACKGROUND Using visual, biological, and electronic health record data as the sole input source, pretrained convolutional neural networks and conventional machine learning methods have been heavily employed for the identification of various malignancies. Initially, a series of preprocessing and image segmentation steps is performed to extract region-of-interest features from noisy data. The extracted features are then applied to several machine learning and deep learning methods for the detection of cancer. METHODS In this work, we review the methods that have been applied to develop machine learning algorithms for cancer detection. Of the more than 100 types of cancer, this study examines only research on the four most common and prevalent cancers worldwide: lung, breast, prostate, and colorectal cancer. Next, using state-of-the-art sentence transformers, namely SBERT (2019) and the unsupervised SimCSE (2021), this study proposes a new methodology for detecting cancer. This method requires raw DNA sequences of matched tumor/normal pairs as the only input. The learned DNA representations retrieved from SBERT and SimCSE are then passed to machine learning algorithms (XGBoost, Random Forest, LightGBM, and CNNs) for classification. As far as we are aware, SBERT and SimCSE transformers had not previously been applied to represent DNA sequences in cancer detection settings. RESULTS The XGBoost model, which had the highest overall accuracy of 73 ± 0.13% using SBERT embeddings and 75 ± 0.12% using SimCSE embeddings, was the best-performing classifier. In light of these findings, it can be concluded that incorporating sentence representations from SimCSE's sentence transformer only marginally improved the performance of the machine learning models.
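
A hedged sketch of that pipeline: DNA sequences tokenized into k-mer "sentences", embedded with a sentence transformer, and classified with XGBoost; the model name, the choice of k, and the toy data are assumptions, and the sentence-transformers and xgboost packages must be installed:

import numpy as np
from sentence_transformers import SentenceTransformer
from xgboost import XGBClassifier

def to_kmer_sentence(seq, k=6):
    # Split a DNA string into overlapping k-mers separated by spaces.
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

rng = np.random.default_rng(3)
seqs = ["".join(rng.choice(list("ACGT"), 60)) for _ in range(40)]  # toy reads
y = rng.integers(0, 2, size=40)            # tumor vs. matched normal (toy labels)

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode([to_kmer_sentence(s) for s in seqs])

clf = XGBClassifier(n_estimators=50, eval_metric="logloss").fit(X, y)
print(clf.predict_proba(X[:2]))
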
Affiliations:
- Mpho Mokoatle: Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Vukosi Marivate: Department of Computer Science, University of Pretoria, Pretoria, South Africa
- Riana Bornman: School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa
- Vanessa M Hayes: School of Medical Sciences, The University of Sydney, Sydney, Australia; School of Health Systems and Public Health, University of Pretoria, Pretoria, South Africa

18. Zhu XL, Tung TH, Li H, Wu S, Wang X, Wang L, Zhang M, Chen Z, Liu D, Li F. Using "Age and Total-PSA" as the Main Indicators: The Results of Taizhou Integrated Prostate Screening (No 2). Am J Mens Health 2023; 17:15579883231161292. PMID: 36998194; PMCID: PMC10068996; DOI: 10.1177/15579883231161292
Abstract
The aim of the study was to analyze population-based prostate cancer (PCa) screening and the incidence of PCa among males ≥50 years of age residing in the Luqiao district of Taizhou, China. From October to December 2020, male residents ≥50 years of age were screened for serum total prostate-specific antigen (t-PSA). If t-PSA re-test levels persisted above 4 μg/L, subjects underwent further noninvasive examinations, including digital rectal examination or multiparametric magnetic resonance imaging (mpMRI) of the prostate. Subjects then underwent prostate biopsy based on t-PSA and mpMRI results. A total of 3524 (49.1%) residents participated in this PCa screening study. In total, 285 (8.1%) subjects exhibited t-PSA levels ≥4.0 μg/L, and 112 (3.2%) underwent noninvasive examinations. Forty-two (1.2%) residents underwent prostate biopsy, of whom 16 (0.45%) were diagnosed with PCa. Of those diagnosed with PCa, three (19%) had localized PCa (cT1-cT2N0M0), six (37%) had locally advanced PCa (cT3a-cT4N0-1M0), and seven (44%) had advanced metastatic PCa (M1). Unfortunately, 3477 (48.5%) residents did not participate in the study, mainly due to a lack of awareness of PCa, based on feedback from local health centers. Age and t-PSA were used as primary screening indicators and, when further combined with mpMRI and prostate biopsy, confirmed the diagnosis of PCa among participating residents. Although this was a relatively economical and convenient screening method, education and knowledge should be further enhanced to increase participation in PCa screening programs.
Affiliations:
- Xiao-Liang Zhu, Tao-Hsin Tung, Haipin Li, Songjiang Wu, Xianyou Wang, Lijun Wang, Meixian Zhang, Zhixia Chen, Dingyi Liu, and Feipin Li: Department of Urology and Andrology, Taizhou Hospital of Zhejiang Province Affiliated to Wenzhou Medical University, Taizhou, China

19. Wang Y, Li X, Konanur M, Konkel B, Seyferth E, Brajer N, Liu JG, Bashir MR, Lafata KJ. Towards optimal deep fusion of imaging and clinical data via a model-based description of fusion quality. Med Phys 2022. PMID: 36548913; DOI: 10.1002/mp.16181
Abstract
BACKGROUND Due to intrinsic differences in data formatting, data structure, and underlying semantic information, the integration of imaging data with clinical data can be non-trivial. Optimal integration requires robust data fusion, that is, the process of integrating multiple data sources to produce more useful information than is captured by the individual sources. Here, we introduce the concept of fusion quality for deep learning problems involving imaging and clinical data. We first provide a general theoretical framework and numerical validation of our technique. To demonstrate real-world applicability, we then apply our technique to optimize the fusion of CT imaging and hepatic blood markers to estimate portal venous hypertension, which is linked to prognosis in patients with cirrhosis of the liver. PURPOSE To develop a method for measuring optimal data fusion quality in deep learning problems utilizing both imaging and clinical data. METHODS Our approach is based on modeling the fully connected layer (FCL) of a convolutional neural network (CNN) as a potential function whose distribution takes the form of the classical Gibbs measure. The features of the FCL are modeled as random variables governed by state functions, which are interpreted as the different data sources to be fused. The probability density of each source, relative to the probability density of the FCL, represents a quantitative measure of source bias. To minimize this source bias and optimize CNN performance, we implement a vector-growing encoding scheme called positional encoding, whereby low-dimensional clinical data are transcribed into a rich feature space that complements the high-dimensional imaging features. We first provide a numerical validation of our approach based on simulated Gaussian processes. We then applied our approach to patient data, where we optimized the fusion of CT images with blood markers to predict portal venous hypertension in patients with cirrhosis of the liver. This patient study was based on a modified ResNet-152 model that incorporates both images and blood markers as input. The two data sources were processed in parallel, fused into a single FCL, and optimized based on our fusion quality framework. RESULTS Numerical validation confirmed that the probability density function of a fused feature space converges to a source-specific probability density function when source data are improperly fused, and our results demonstrate that this phenomenon can be quantified as a measure of fusion quality. On patient data, the fused model consisting of imaging data and positionally encoded blood markers, at the theoretically optimal fusion quality metric, achieved an AUC of 0.74 and an accuracy of 0.71. This model was statistically better than the imaging-only model (AUC = 0.60; accuracy = 0.62), the blood-marker-only model (AUC = 0.58; accuracy = 0.60), and a variety of purposely sub-optimized fusion models (AUC = 0.61-0.70; accuracy = 0.58-0.69). CONCLUSIONS We introduced the concept of data fusion quality for multi-source deep learning problems involving both imaging and clinical data. We provided a theoretical framework, numerical validation, and a real-world application in abdominal radiology. Our data suggest that CT imaging and hepatic blood markers provide complementary diagnostic information when appropriately fused.
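
A minimal sketch of the positional-encoding idea above: each low-dimensional clinical scalar (e.g., a blood marker) is expanded into a sinusoidal feature vector so that it can compete with high-dimensional imaging features at the FCL; the exact transcription used in the paper may differ:

import torch

def positional_encode(values, dim=16):
    # values: (B, M) clinical scalars -> (B, M*dim) encoded features
    i = torch.arange(dim // 2, dtype=torch.float32)
    freqs = 10000 ** (-2 * i / dim)                        # (dim/2,)
    ang = values[..., None] * freqs                        # (B, M, dim/2)
    enc = torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)
    return enc.flatten(1)

blood = torch.tensor([[1.2, 0.4, 3.3]])   # three toy hepatic markers
print(positional_encode(blood).shape)     # torch.Size([1, 48])
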
Affiliations:
- Yuqi Wang: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Xiang Li: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Meghana Konanur: Department of Radiology, Duke University, Durham, North Carolina, USA
- Brandon Konkel: Department of Radiology, Duke University, Durham, North Carolina, USA
- Nathan Brajer: Department of Radiology, Duke University, Durham, North Carolina, USA
- Jian-Guo Liu: Department of Mathematics, Duke University, Durham, North Carolina, USA; Department of Physics, Duke University, Durham, North Carolina, USA
- Mustafa R Bashir: Department of Radiology, Duke University, Durham, North Carolina, USA; Department of Medicine, Gastroenterology, Duke University, Durham, North Carolina, USA
- Kyle J Lafata: Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA; Department of Radiology, Duke University, Durham, North Carolina, USA; Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
|
20
|
Adeoye J, Akinshipo A, Koohi-Moghadam M, Thomson P, Su YX. Construction of machine learning-based models for cancer outcomes in low and lower-middle income countries: A scoping review. Front Oncol 2022; 12:976168. [DOI: 10.3389/fonc.2022.976168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 11/14/2022] [Indexed: 12/05/2022] Open
Abstract
BackgroundThe impact and utility of machine learning (ML)-based prediction tools for cancer outcomes including assistive diagnosis, risk stratification, and adjunctive decision-making have been largely described and realized in the high income and upper-middle-income countries. However, statistical projections have estimated higher cancer incidence and mortality risks in low and lower-middle-income countries (LLMICs). Therefore, this review aimed to evaluate the utilization, model construction methods, and degree of implementation of ML-based models for cancer outcomes in LLMICs.MethodsPubMed/Medline, Scopus, and Web of Science databases were searched and articles describing the use of ML-based models for cancer among local populations in LLMICs between 2002 and 2022 were included. A total of 140 articles from 22,516 citations that met the eligibility criteria were included in this study.ResultsML-based models from LLMICs were often based on traditional ML algorithms than deep or deep hybrid learning. We found that the construction of ML-based models was skewed to particular LLMICs such as India, Iran, Pakistan, and Egypt with a paucity of applications in sub-Saharan Africa. Moreover, models for breast, head and neck, and brain cancer outcomes were frequently explored. Many models were deemed suboptimal according to the Prediction model Risk of Bias Assessment tool (PROBAST) due to sample size constraints and technical flaws in ML modeling even though their performance accuracy ranged from 0.65 to 1.00. While the development and internal validation were described for all models included (n=137), only 4.4% (6/137) have been validated in independent cohorts and 0.7% (1/137) have been assessed for clinical impact and efficacy.ConclusionOverall, the application of ML for modeling cancer outcomes in LLMICs is increasing. However, model development is largely unsatisfactory. We recommend model retraining using larger sample sizes, intensified external validation practices, and increased impact assessment studies using randomized controlled trial designsSystematic review registrationhttps://www.crd.york.ac.uk/prospero/display_record.php?RecordID=308345, identifier CRD42022308345.
Collapse
|
21
|
Liu H, Jiao ML, Xing XY, Ou-Yang HQ, Yuan Y, Liu JF, Li Y, Wang CJ, Lang N, Qian YL, Jiang L, Yuan HS, Wang XD. BgNet: Classification of benign and malignant tumors with MRI multi-plane attention learning. Front Oncol 2022; 12:971871. [DOI: 10.3389/fonc.2022.971871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Accepted: 10/05/2022] [Indexed: 11/13/2022] Open
Abstract
ObjectivesTo propose a deep learning-based classification framework, which can carry out patient-level benign and malignant tumors classification according to the patient’s multi-plane images and clinical information.MethodsA total of 430 cases of spinal tumor, including axial and sagittal plane images by MRI, of which 297 cases for training (14072 images), and 133 cases for testing (6161 images) were included. Based on the bipartite graph and attention learning, this study proposed a multi-plane attention learning framework, BgNet, for benign and malignant tumor diagnosis. In a bipartite graph structure, the tumor area in each plane is used as the vertex of the graph, and the matching between different planes is used as the edge of the graph. The tumor areas from different plane images are spliced at the input layer. And based on the convolutional neural network ResNet and visual attention learning model Swin-Transformer, this study proposed a feature fusion model named ResNetST for combining both global and local information to extract the correlation features of multiple planes. The proposed BgNet consists of five modules including a multi-plane fusion module based on the bipartite graph, input layer fusion module, feature layer fusion module, decision layer fusion module, and output module. These modules are respectively used for multi-level fusion of patient multi-plane image data to realize the comprehensive diagnosis of benign and malignant tumors at the patient level.ResultsThe accuracy (ACC: 79.7%) of the proposed BgNet with multi-plane was higher than that with a single plane, and higher than or equal to the four doctors’ ACC (D1: 70.7%, p=0.219; D2: 54.1%, p<0.005; D3: 79.7%, p=0.006; D4: 72.9%, p=0.178). Moreover, the diagnostic accuracy and speed of doctors can be further improved with the aid of BgNet, the ACC of D1, D2, D3, and D4 improved by 4.5%, 21.8%, 0.8%, and 3.8%, respectively.ConclusionsThe proposed deep learning framework BgNet can classify benign and malignant tumors effectively, and can help doctors improve their diagnostic efficiency and accuracy. The code is available at https://github.com/research-med/BgNet.
Collapse
|
22
|
Artificial intelligence-based methods for fusion of electronic health records and imaging data. Sci Rep 2022; 12:17981. [PMID: 36289266 PMCID: PMC9605975 DOI: 10.1038/s41598-022-22514-4] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 10/17/2022] [Indexed: 01/24/2023] Open
Abstract
Healthcare data are inherently multimodal, including electronic health records (EHR), medical images, and multi-omics data. Combining these multimodal data sources contributes to a better understanding of human health and provides optimal personalized healthcare. The most important question when using multimodal data is how to fuse them-a field of growing interest among researchers. Advances in artificial intelligence (AI) technologies, particularly machine learning (ML), enable the fusion of these different data modalities to provide multimodal insights. To this end, in this scoping review, we focus on synthesizing and analyzing the literature that uses AI techniques to fuse multimodal medical data for different clinical applications. More specifically, we focus on studies that only fused EHR with medical imaging data to develop various AI methods for clinical applications. We present a comprehensive analysis of the various fusion strategies, the diseases and clinical outcomes for which multimodal fusion was used, the ML algorithms used to perform multimodal fusion for each clinical application, and the available multimodal medical datasets. We followed the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. We searched Embase, PubMed, Scopus, and Google Scholar to retrieve relevant studies. After pre-processing and screening, we extracted data from 34 studies that fulfilled the inclusion criteria. We found that studies fusing imaging data with EHR are increasing and doubling from 2020 to 2021. In our analysis, a typical workflow was observed: feeding raw data, fusing different data modalities by applying conventional machine learning (ML) or deep learning (DL) algorithms, and finally, evaluating the multimodal fusion through clinical outcome predictions. Specifically, early fusion was the most used technique in most applications for multimodal learning (22 out of 34 studies). We found that multimodality fusion models outperformed traditional single-modality models for the same task. Disease diagnosis and prediction were the most common clinical outcomes (reported in 20 and 10 studies, respectively) from a clinical outcome perspective. Neurological disorders were the dominant category (16 studies). From an AI perspective, conventional ML models were the most used (19 studies), followed by DL models (16 studies). Multimodal data used in the included studies were mostly from private repositories (21 studies). Through this scoping review, we offer new insights for researchers interested in knowing the current state of knowledge within this research field.
Collapse
|
23
|
Lipkova J, Chen RJ, Chen B, Lu MY, Barbieri M, Shao D, Vaidya AJ, Chen C, Zhuang L, Williamson DFK, Shaban M, Chen TY, Mahmood F. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 2022; 40:1095-1110. [PMID: 36220072 PMCID: PMC10655164 DOI: 10.1016/j.ccell.2022.09.012] [Citation(s) in RCA: 129] [Impact Index Per Article: 64.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 07/12/2022] [Accepted: 09/15/2022] [Indexed: 02/07/2023]
Abstract
In oncology, the patient state is characterized by a whole spectrum of modalities, ranging from radiology, histology, and genomics to electronic health records. Current artificial intelligence (AI) models operate mainly in the realm of a single modality, neglecting the broader clinical context, which inevitably diminishes their potential. Integration of different data modalities provides opportunities to increase robustness and accuracy of diagnostic and prognostic models, bringing AI closer to clinical practice. AI models are also capable of discovering novel patterns within and across modalities suitable for explaining differences in patient outcomes or treatment resistance. The insights gleaned from such models can guide exploration studies and contribute to the discovery of novel biomarkers and therapeutic targets. To support these advances, here we present a synopsis of AI methods and strategies for multimodal data fusion and association discovery. We outline approaches for AI interpretability and directions for AI-driven exploration through multimodal data interconnections. We examine challenges in clinical adoption and discuss emerging solutions.
Collapse
Affiliation(s)
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
| | - Bowen Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Computer Science, Harvard University, Cambridge, MA, USA
| | - Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
| | - Matteo Barbieri
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Daniel Shao
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
| | - Anurag J Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Harvard-MIT Health Sciences and Technology (HST), Cambridge, MA, USA
| | - Chengkuan Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Luoting Zhuang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | - Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
| | - Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
24
|
Saliency Transfer Learning and Central-Cropping Network for Prostate Cancer Classification. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10999-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
25
|
Integrated multimodal artificial intelligence framework for healthcare applications. NPJ Digit Med 2022; 5:149. [PMID: 36127417 PMCID: PMC9489871 DOI: 10.1038/s41746-022-00689-4] [Citation(s) in RCA: 46] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Accepted: 08/31/2022] [Indexed: 11/24/2022] Open
Abstract
Artificial intelligence (AI) systems hold great promise to improve healthcare over the next decades. Specifically, AI systems leveraging multiple data sources and input modalities are poised to become a viable method to deliver more accurate results and deployable pipelines across a wide range of applications. In this work, we propose and evaluate a unified Holistic AI in Medicine (HAIM) framework to facilitate the generation and testing of AI systems that leverage multimodal inputs. Our approach uses generalizable data pre-processing and machine learning modeling stages that can be readily adapted for research and deployment in healthcare environments. We evaluate our HAIM framework by training and characterizing 14,324 independent models based on HAIM-MIMIC-MM, a multimodal clinical database (N = 34,537 samples) containing 7279 unique hospitalizations and 6485 patients, spanning all possible input combinations of 4 data modalities (i.e., tabular, time-series, text, and images), 11 unique data sources and 12 predictive tasks. We show that this framework can consistently and robustly produce models that outperform similar single-source approaches across various healthcare demonstrations (by 6–33%), including 10 distinct chest pathology diagnoses, along with length-of-stay and 48 h mortality predictions. We also quantify the contribution of each modality and data source using Shapley values, which demonstrates the heterogeneity in data modality importance and the necessity of multimodal inputs across different healthcare-relevant tasks. The generalizable properties and flexibility of our Holistic AI in Medicine (HAIM) framework could offer a promising pathway for future multimodal predictive systems in clinical and operational healthcare settings.
Collapse
|
26
|
Stahlschmidt SR, Ulfenborg B, Synnergren J. Multimodal deep learning for biomedical data fusion: a review. Brief Bioinform 2022; 23:bbab569. [PMID: 35089332 PMCID: PMC8921642 DOI: 10.1093/bib/bbab569] [Citation(s) in RCA: 92] [Impact Index Per Article: 46.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 12/06/2021] [Accepted: 12/11/2021] [Indexed: 02/06/2023] Open
Abstract
Biomedical data are becoming increasingly multimodal and thereby capture the underlying complex relationships among biological processes. Deep learning (DL)-based data fusion strategies are a popular approach for modeling these nonlinear relationships. Therefore, we review the current state-of-the-art of such methods and propose a detailed taxonomy that facilitates more informed choices of fusion strategies for biomedical applications, as well as research on novel methods. By doing so, we find that deep fusion strategies often outperform unimodal and shallow approaches. Additionally, the proposed subcategories of fusion strategies show different advantages and drawbacks. The review of current methods has shown that, especially for intermediate fusion strategies, joint representation learning is the preferred approach as it effectively models the complex interactions of different levels of biological organization. Finally, we note that gradual fusion, based on prior biological knowledge or on search strategies, is a promising future research path. Similarly, utilizing transfer learning might overcome sample size limitations of multimodal data sets. As these data sets become increasingly available, multimodal DL approaches present the opportunity to train holistic models that can learn the complex regulatory dynamics behind health and disease.
Collapse
Affiliation(s)
| | | | - Jane Synnergren
- Systems Biology Research Center, University of Skövde, Sweden
| |
Collapse
|
27
|
Ayyad SM, Badawy MA, Shehata M, Alksas A, Mahmoud A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. A New Framework for Precise Identification of Prostatic Adenocarcinoma. SENSORS 2022; 22:s22051848. [PMID: 35270995 PMCID: PMC8915102 DOI: 10.3390/s22051848] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 02/21/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023]
Abstract
Prostate cancer, which is also known as prostatic adenocarcinoma, is an unconstrained growth of epithelial cells in the prostate and has become one of the leading causes of cancer-related death worldwide. The survival of patients with prostate cancer relies on detection at an early, treatable stage. In this paper, we introduce a new comprehensive framework to precisely differentiate between malignant and benign prostate cancer. This framework proposes a noninvasive computer-aided diagnosis system that integrates two imaging modalities of MR (diffusion-weighted (DW) and T2-weighted (T2W)). For the first time, it utilizes the combination of functional features represented by apparent diffusion coefficient (ADC) maps estimated from DW-MRI for the whole prostate in combination with texture features with its first- and second-order representations, extracted from T2W-MRIs of the whole prostate, and shape features represented by spherical harmonics constructed for the lesion inside the prostate and integrated with PSA screening results. The dataset presented in the paper includes 80 biopsy confirmed patients, with a mean age of 65.7 years (43 benign prostatic hyperplasia, 37 prostatic carcinomas). Experiments were conducted using different well-known machine learning approaches including support vector machines (SVM), random forests (RF), decision trees (DT), and linear discriminant analysis (LDA) classification models to study the impact of different feature sets that lead to better identification of prostatic adenocarcinoma. Using a leave-one-out cross-validation approach, the diagnostic results obtained using the SVM classification model along with the combined feature set after applying feature selection (88.75% accuracy, 81.08% sensitivity, 95.35% specificity, and 0.8821 AUC) indicated that the system’s performance, after integrating and reducing different types of feature sets, obtained an enhanced diagnostic performance compared with each individual feature set and other machine learning classifiers. In addition, the developed diagnostic system provided consistent diagnostic performance using 10-fold and 5-fold cross-validation approaches, which confirms the reliability, generalization ability, and robustness of the developed system.
Collapse
Affiliation(s)
- Sarah M. Ayyad
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - Mohamed A. Badawy
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt; (M.A.B.); (M.A.E.-G.)
| | - Mohamed Shehata
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.A.); (A.M.)
| | - Ahmed Alksas
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.A.); (A.M.)
| | - Ali Mahmoud
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.A.); (A.M.)
| | - Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt; (M.A.B.); (M.A.E.-G.)
| | - Mohammed Ghazal
- Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates;
| | - Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt;
| | - Nahla B. Abdel-Hamid
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - Labib M. Labib
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - H. Arafat Ali
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
- Faulty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35516, Egypt
| | - Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.A.); (A.M.)
- Correspondence:
| |
Collapse
|
28
|
Bertelli E, Mercatelli L, Marzi C, Pachetti E, Baccini M, Barucci A, Colantonio S, Gherardini L, Lattavo L, Pascali MA, Agostini S, Miele V. Machine and Deep Learning Prediction Of Prostate Cancer Aggressiveness Using Multiparametric MRI. Front Oncol 2022; 11:802964. [PMID: 35096605 PMCID: PMC8792745 DOI: 10.3389/fonc.2021.802964] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 12/07/2021] [Indexed: 12/24/2022] Open
Abstract
Prostate cancer (PCa) is the most frequent male malignancy and the assessment of PCa aggressiveness, for which a biopsy is required, is fundamental for patient management. Currently, multiparametric (mp) MRI is strongly recommended before biopsy. Quantitative assessment of mpMRI might provide the radiologist with an objective and noninvasive tool for supporting the decision-making in clinical practice and decreasing intra- and inter-reader variability. In this view, high dimensional radiomics features and Machine Learning (ML) techniques, along with Deep Learning (DL) methods working on raw images directly, could assist the radiologist in the clinical workflow. The aim of this study was to develop and validate ML/DL frameworks on mpMRI data to characterize PCas according to their aggressiveness. We optimized several ML/DL frameworks on T2w, ADC and T2w+ADC data, using a patient-based nested validation scheme. The dataset was composed of 112 patients (132 peripheral lesions with Prostate Imaging Reporting and Data System (PI-RADS) score ≥ 3) acquired following both PI-RADS 2.0 and 2.1 guidelines. Firstly, ML/DL frameworks trained and validated on PI-RADS 2.0 data were tested on both PI-RADS 2.0 and 2.1 data. Then, we trained, validated and tested ML/DL frameworks on a multi PI-RADS dataset. We reported the performances in terms of Area Under the Receiver Operating curve (AUROC), specificity and sensitivity. The ML/DL frameworks trained on T2w data achieved the overall best performance. Notably, ML and DL frameworks trained and validated on PI-RADS 2.0 data obtained median AUROC values equal to 0.750 and 0.875, respectively, on unseen PI-RADS 2.0 test set. Similarly, ML/DL frameworks trained and validated on multi PI-RADS T2w data showed median AUROC values equal to 0.795 and 0.750, respectively, on unseen multi PI-RADS test set. Conversely, all the ML/DL frameworks trained and validated on PI-RADS 2.0 data, achieved AUROC values no better than the chance level when tested on PI-RADS 2.1 data. Both ML/DL techniques applied on mpMRI seem to be a valid aid in predicting PCa aggressiveness. In particular, ML/DL frameworks fed with T2w images data (objective, fast and non-invasive) show good performances and might support decision-making in patient diagnostic and therapeutic management, reducing intra- and inter-reader variability.
Collapse
Affiliation(s)
- Elena Bertelli
- Department of Radiology, Careggi University Hospital, Florence, Italy
| | - Laura Mercatelli
- Department of Radiology, Careggi University Hospital, Florence, Italy
| | - Chiara Marzi
- “Nello Carrara” Institute of Applied Physics (IFAC), National Research Council of Italy (CNR), Sesto Fiorentino, Italy
| | - Eva Pachetti
- “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
- Department of Information Engineering (DII), University of Pisa, Pisa, Italy
| | - Michela Baccini
- “Giuseppe Parenti” Department of Statistics, Computer Science, Applications(DiSIA), University of Florence, Florence, Italy
- Florence Center for Data Science, University of Florence, Florence, Italy
| | - Andrea Barucci
- “Nello Carrara” Institute of Applied Physics (IFAC), National Research Council of Italy (CNR), Sesto Fiorentino, Italy
| | - Sara Colantonio
- “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
| | - Luca Gherardini
- “Giuseppe Parenti” Department of Statistics, Computer Science, Applications(DiSIA), University of Florence, Florence, Italy
| | - Lorenzo Lattavo
- Department of Radiology, Careggi University Hospital, Florence, Italy
| | - Maria Antonietta Pascali
- “Alessandro Faedo” Institute of Information Science and Technologies (ISTI), National Research Council of Italy (CNR), Pisa, Italy
| | - Simone Agostini
- Department of Radiology, Careggi University Hospital, Florence, Italy
| | - Vittorio Miele
- Department of Radiology, Careggi University Hospital, Florence, Italy
| |
Collapse
|
29
|
Li B, Oka R, Xuan P, Yoshimura Y, Nakaguchi T. Robust multi-modal prostate cancer classification via feature autoencoder and dual attention. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.100923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
|
30
|
Tazin T, Sarker S, Gupta P, Ayaz FI, Islam S, Monirujjaman Khan M, Bourouis S, Idris SA, Alshazly H. A Robust and Novel Approach for Brain Tumor Classification Using Convolutional Neural Network. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:2392395. [PMID: 34970309 PMCID: PMC8714377 DOI: 10.1155/2021/2392395] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 12/04/2021] [Indexed: 11/25/2022]
Abstract
Brain tumors are the most common and aggressive illness, with a relatively short life expectancy in their most severe form. Thus, treatment planning is an important step in improving patients' quality of life. In general, image methods such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound images are used to assess tumors in the brain, lung, liver, breast, prostate, and so on. X-ray images, in particular, are utilized in this study to diagnose brain tumors. This paper describes the investigation of the convolutional neural network (CNN) to identify brain tumors from X-ray images. It expedites and increases the reliability of the treatment. Because there has been a significant amount of study in this field, the presented model focuses on boosting accuracy while using a transfer learning strategy. Python and Google Colab were utilized to perform this investigation. Deep feature extraction was accomplished with the help of pretrained deep CNN models, VGG19, InceptionV3, and MobileNetV2. The classification accuracy is used to assess the performance of this paper. MobileNetV2 had the accuracy of 92%, InceptionV3 had the accuracy of 91%, and VGG19 had the accuracy of 88%. MobileNetV2 has offered the highest level of accuracy among these networks. These precisions aid in the early identification of tumors before they produce physical adverse effects such as paralysis and other impairments.
Collapse
Affiliation(s)
- Tahia Tazin
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
| | - Sraboni Sarker
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
| | - Punit Gupta
- Department of Computer and Communication, Manipal University Jaipur, Jaipur, India
| | - Fozayel Ibn Ayaz
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
| | - Sumaia Islam
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
| | - Mohammad Monirujjaman Khan
- Department of Electrical and Computer Engineering, North South University, Bashundhara, Dhaka 1229, Bangladesh
| | - Sami Bourouis
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
| | - Sahar Ahmed Idris
- College of Industrial Engineering, King Khalid University, Abha, Saudi Arabia
| | - Hammam Alshazly
- Faculty of Computers and Information, South Valley University, Qena 83523, Egypt
| |
Collapse
|
31
|
Rahaman MA, Chen J, Fu Z, Lewis N, Iraji A, Calhoun VD. Multi-modal deep learning of functional and structural neuroimaging and genomic data to predict mental illness. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3267-3272. [PMID: 34891938 DOI: 10.1109/embc46164.2021.9630693] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Neuropsychiatric disorders such as schizophrenia are very heterogeneous in nature and typically diagnosed using self-reported symptoms. This makes it difficult to pose a confident prediction on the cases and does not provide insight into the underlying neural and biological mechanisms of these disorders. Combining neuroimaging and genomic data with a multi-modal 'predictome' paves the way for biologically informed markers and may improve prediction reliability. With that, we develop a multi-modal deep learning framework by fusing data from different modalities to capture the interaction between the latent features and evaluate their complementary information in characterizing schizophrenia. Our deep model uses structural MRI, functional MRI, and genome-wide polymorphism data to perform the classification task. It includes a multi-layer feed-forward network, an encoder, and a long short-term memory (LSTM) unit with attention to learn the latent features and adopt a joint training scheme capturing synergies between the modalities. The hybrid network also uses different regularizers for addressing the inherent overfitting and modality-specific bias in the multi-modal setup. Next, we run the network through a saliency model to analyze the learned features. Integrating modalities enhances the performance of the classifier, and our framework acquired 88% (P < 0.0001) accuracy on a dataset of 437 subjects. The trimodal accuracy is comparable to the state-of-the-art performance on a data collection of this size and outperforms the unimodal and bimodal baselines we compared. Model introspection was used to expose the salient neural features and genes/biological pathways associated with schizophrenia. To our best knowledge, this is the first approach that fuses genomic information with structural and functional MRI biomarkers for predicting schizophrenia. We believe this type of modality blending can better explain the disorder's dynamics by adding cross-modal prospects.Clinical Relevance- This study combinedly learns imaging and genomic features for the classification of schizophrenia. The data fusion scheme extracts modality interactions, and the saliency experiments report multiple functional and structural networks closely connected to the disorder.
Collapse
|
32
|
Hammouda K, Khalifa F, El-Melegy M, Ghazal M, Darwish HE, Abou El-Ghar M, El-Baz A. A Deep Learning Pipeline for Grade Groups Classification Using Digitized Prostate Biopsy Specimens. SENSORS (BASEL, SWITZERLAND) 2021; 21:6708. [PMID: 34695922 PMCID: PMC8538079 DOI: 10.3390/s21206708] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 10/01/2021] [Accepted: 10/04/2021] [Indexed: 11/16/2022]
Abstract
Prostate cancer is a significant cause of morbidity and mortality in the USA. In this paper, we develop a computer-aided diagnostic (CAD) system for automated grade groups (GG) classification using digitized prostate biopsy specimens (PBSs). Our CAD system aims to firstly classify the Gleason pattern (GP), and then identifies the Gleason score (GS) and GG. The GP classification pipeline is based on a pyramidal deep learning system that utilizes three convolution neural networks (CNN) to produce both patch- and pixel-wise classifications. The analysis starts with sequential preprocessing steps that include a histogram equalization step to adjust intensity values, followed by a PBSs' edge enhancement. The digitized PBSs are then divided into overlapping patches with the three sizes: 100 × 100 (CNNS), 150 × 150 (CNNM), and 200 × 200 (CNNL), pixels, and 75% overlap. Those three sizes of patches represent the three pyramidal levels. This pyramidal technique allows us to extract rich information, such as that the larger patches give more global information, while the small patches provide local details. After that, the patch-wise technique assigns each overlapped patch a label as GP categories (1 to 5). Then, the majority voting is the core approach for getting the pixel-wise classification that is used to get a single label for each overlapped pixel. The results after applying those techniques are three images of the same size as the original, and each pixel has a single label. We utilized the majority voting technique again on those three images to obtain only one. The proposed framework is trained, validated, and tested on 608 whole slide images (WSIs) of the digitized PBSs. The overall diagnostic accuracy is evaluated using several metrics: precision, recall, F1-score, accuracy, macro-averaged, and weighted-averaged. The (CNNL) has the best accuracy results for patch classification among the three CNNs, and its classification accuracy is 0.76. The macro-averaged and weighted-average metrics are found to be around 0.70-0.77. For GG, our CAD results are about 80% for precision, and between 60% to 80% for recall and F1-score, respectively. Also, it is around 94% for accuracy and NPV. To highlight our CAD systems' results, we used the standard ResNet50 and VGG-16 to compare our CNN's patch-wise classification results. As well, we compared the GG's results with that of the previous work.
Collapse
Affiliation(s)
- Kamal Hammouda
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (K.H.); (F.K.)
| | - Fahmi Khalifa
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (K.H.); (F.K.)
| | - Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71515, Egypt;
| | - Mohamed Ghazal
- Electrical and Computer Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates;
| | - Hanan E. Darwish
- Mathematics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt;
| | - Mohamed Abou El-Ghar
- Radiology Department, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt;
| | - Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (K.H.); (F.K.)
| |
Collapse
|
33
|
DDV: A Taxonomy for Deep Learning Methods in Detecting Prostate Cancer. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10485-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
34
|
Biopolymer and Biomaterial Conjugated Iron Oxide Nanomaterials as Prostate Cancer Theranostic Agents: A Comprehensive Review. Symmetry (Basel) 2021. [DOI: 10.3390/sym13060974] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Prostate cancer (PCa) is the most common malignancy in men and the leading cause of death for men all over the world. Early diagnosis is the key to start treatment at an early stage of PCa and to reduce the death toll. Generally, PCa expresses characteristic morphologic features and serum biomarkers; however, early diagnosis is challenging due to its heterogeneity and long-term indolent phase in the early stage. Following positive diagnosis, PCa patients receive conventional treatments including surgery, radiation therapy, androgen deprivation therapy, focal therapy, and chemotherapy to enhance survival time and alleviate PCa-related complications. However, these treatment strategies have both short and long-term side effects, notably impotence, urinary incontinence, erectile dysfunctions, and recurrence of cancer. These limitations warrant the quest for novel PCa theranostic agents with robust diagnostic and therapeutic potentials to lessen the burden of PCa-related suffering. Iron oxide nanoparticles (IONPs) have recently drawn attention for their symmetrical usage in the diagnosis and treatment of several cancer types. Here, we performed a systematic search in four popular online databases (PubMed, Google Scholar, Scopus, and Web of Science) for the articles regarding PCa and IONPs. Published literature confirmed that the surface modification of IONPs with biopolymers and diagnostic biomarkers improved the early diagnosis of PCa, even in the metastatic stage with reliable accuracy and sensitivity. Furthermore, fine-tuning of IONPs with biopolymers, nucleic acids, anticancer drugs, and bioactive compounds can improve the therapeutic efficacy of these anticancer agents against PCa. This review covers the symmetrical use of IONPs in the diagnosis and treatment of PCa, investigates their biocompatibility, and examines their potential as PCa theranostic agents.
Collapse
|
35
|
Twilt JJ, van Leeuwen KG, Huisman HJ, Fütterer JJ, de Rooij M. Artificial Intelligence Based Algorithms for Prostate Cancer Classification and Detection on Magnetic Resonance Imaging: A Narrative Review. Diagnostics (Basel) 2021; 11:diagnostics11060959. [PMID: 34073627 PMCID: PMC8229869 DOI: 10.3390/diagnostics11060959] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 05/19/2021] [Accepted: 05/21/2021] [Indexed: 12/14/2022] Open
Abstract
Due to the upfront role of magnetic resonance imaging (MRI) for prostate cancer (PCa) diagnosis, a multitude of artificial intelligence (AI) applications have been suggested to aid in the diagnosis and detection of PCa. In this review, we provide an overview of the current field, including studies between 2018 and February 2021, describing AI algorithms for (1) lesion classification and (2) lesion detection for PCa. Our evaluation of 59 included studies showed that most research has been conducted for the task of PCa lesion classification (66%) followed by PCa lesion detection (34%). Studies showed large heterogeneity in cohort sizes, ranging between 18 to 499 patients (median = 162) combined with different approaches for performance validation. Furthermore, 85% of the studies reported on the stand-alone diagnostic accuracy, whereas 15% demonstrated the impact of AI on diagnostic thinking efficacy, indicating limited proof for the clinical utility of PCa AI applications. In order to introduce AI within the clinical workflow of PCa assessment, robustness and generalizability of AI applications need to be further validated utilizing external validation and clinical workflow experiments.
Collapse
|
36
|
Shao Y, Zhang YX, Chen HH, Lu SS, Zhang SC, Zhang JX. Advances in the application of artificial intelligence in solid tumor imaging. Artif Intell Cancer 2021; 2:12-24. [DOI: 10.35713/aic.v2.i2.12] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 04/02/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Early diagnosis and timely treatment are crucial in reducing cancer-related mortality. Artificial intelligence (AI) has greatly relieved clinical workloads and changed the current medical workflows. We searched for recent studies, reports and reviews referring to AI and solid tumors; many reviews have summarized AI applications in the diagnosis and treatment of a single tumor type. We herein systematically review the advances of AI application in multiple solid tumors including esophagus, stomach, intestine, breast, thyroid, prostate, lung, liver, cervix, pancreas and kidney with a specific focus on the continual improvement on model performance in imaging practice.
Collapse
Affiliation(s)
- Ying Shao
- Department of Laboratory Medicine, People Hospital of Jiangying, Jiangying 214400, Jiangsu Province, China
| | - Yu-Xuan Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Huan-Huan Chen
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shan-Shan Lu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Shi-Chang Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| | - Jie-Xin Zhang
- Department of Laboratory Medicine, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, Jiangsu Province, China
| |
Collapse
|
37
|
Ayyad SM, Shehata M, Shalaby A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. Role of AI and Histopathological Images in Detecting Prostate Cancer: A Survey. SENSORS (BASEL, SWITZERLAND) 2021; 21:2586. [PMID: 33917035 PMCID: PMC8067693 DOI: 10.3390/s21082586] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/29/2021] [Accepted: 04/04/2021] [Indexed: 02/07/2023]
Abstract
Prostate cancer is one of the most identified cancers and second most prevalent among cancer-related deaths of men worldwide. Early diagnosis and treatment are substantial to stop or handle the increase and spread of cancer cells in the body. Histopathological image diagnosis is a gold standard for detecting prostate cancer as it has different visual characteristics but interpreting those type of images needs a high level of expertise and takes too much time. One of the ways to accelerate such an analysis is by employing artificial intelligence (AI) through the use of computer-aided diagnosis (CAD) systems. The recent developments in artificial intelligence along with its sub-fields of conventional machine learning and deep learning provide new insights to clinicians and researchers, and an abundance of research is presented specifically for histopathology images tailored for prostate cancer. However, there is a lack of comprehensive surveys that focus on prostate cancer using histopathology images. In this paper, we provide a very comprehensive review of most, if not all, studies that handled the prostate cancer diagnosis using histopathological images. The survey begins with an overview of histopathological image preparation and its challenges. We also briefly review the computing techniques that are commonly applied in image processing, segmentation, feature selection, and classification that can help in detecting prostate malignancies in histopathological images.
Collapse
Affiliation(s)
- Sarah M. Ayyad
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - Mohamed Shehata
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.S.)
| | - Ahmed Shalaby
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.S.)
| | - Mohamed Abou El-Ghar
- Department of Radiology, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt;
| | - Mohammed Ghazal
- Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates;
| | - Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt;
| | - Nahla B. Abdel-Hamid
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - Labib M. Labib
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - H. Arafat Ali
- Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt; (S.M.A.); (N.B.A.-H.); (L.M.L.); (H.A.A.)
| | - Ayman El-Baz
- BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA; (M.S.); (A.S.)
| |
Collapse
|
38
|
Wang Y, De Leon AC, Perera R, Abenojar E, Gopalakrishnan R, Basilion JP, Wang X, Exner AA. Molecular imaging of orthotopic prostate cancer with nanobubble ultrasound contrast agents targeted to PSMA. Sci Rep 2021; 11:4726. [PMID: 33633232 PMCID: PMC7907080 DOI: 10.1038/s41598-021-84072-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 02/11/2021] [Indexed: 12/15/2022] Open
Abstract
Ultrasound imaging is routinely used to guide prostate biopsies, yet delineation of tumors within the prostate gland is extremely challenging, even with microbubble (MB) contrast. A more effective ultrasound protocol is needed that can effectively localize malignancies for targeted biopsy or aid in patient selection and treatment planning for organ-sparing focal therapy. This study focused on evaluating the application of a novel nanobubble ultrasound contrast agent targeted to the prostate specific membrane antigen (PSMA-targeted NBs) in ultrasound imaging of prostate cancer (PCa) in vivo using a clinically relevant orthotopic tumor model in nude mice. Our results demonstrated that PSMA-targeted NBs had increased extravasation and retention in PSMA-expressing orthotopic mouse tumors. These processes are reflected in significantly different time intensity curve (TIC) and several kinetic parameters for targeted versus non-targeted NBs or LUMASON MBs. These, may in turn, lead to improved image-based detection and diagnosis of PCa in the future.
Collapse
Affiliation(s)
- Yu Wang
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
- Department of Ultrasound, Peking University People's Hospital, Beijing, 100044, China
| | - Al Christopher De Leon
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
| | - Reshani Perera
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
| | - Eric Abenojar
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
| | - Ramamurthy Gopalakrishnan
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
| | - James P Basilion
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA
- Department of Biomedical Engineering, Case Western Reserve University, 11100 Euclid Ave, Wearn Building B49, Cleveland, OH, 44106, USA
| | - Xinning Wang
- Department of Biomedical Engineering, Case Western Reserve University, 11100 Euclid Ave, Wearn Building B49, Cleveland, OH, 44106, USA.
| | - Agata A Exner
- Department of Radiology, Case Western Reserve University, 10900 Euclid Avenue, BRB 330, Cleveland, OH, 44106, USA.
- Department of Biomedical Engineering, Case Western Reserve University, 11100 Euclid Ave, Wearn Building B49, Cleveland, OH, 44106, USA.
| |
Collapse
|
39
|
Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. NPJ Digit Med 2020; 3:136. [PMID: 33083571 PMCID: PMC7567861 DOI: 10.1038/s41746-020-00341-z] [Citation(s) in RCA: 204] [Impact Index Per Article: 51.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 09/17/2020] [Indexed: 12/15/2022] Open
Abstract
Advancements in deep learning techniques carry the potential to make significant contributions to healthcare, particularly in fields that utilize medical imaging for diagnosis, prognosis, and treatment decisions. The current state-of-the-art deep learning models for radiology applications consider only pixel-value information without data informing clinical context. Yet in practice, pertinent and accurate non-imaging data based on the clinical history and laboratory data enable physicians to interpret imaging findings in the appropriate clinical context, leading to a higher diagnostic accuracy, informative clinical decision making, and improved patient outcomes. To achieve a similar goal using deep learning, medical imaging pixel-based models must also achieve the capability to process contextual data from electronic health records (EHR) in addition to pixel data. In this paper, we describe different data fusion techniques that can be applied to combine medical imaging with EHR, and systematically review medical data fusion literature published between 2012 and 2020. We conducted a systematic search on PubMed and Scopus for original research articles leveraging deep learning for fusion of multimodality data. In total, we screened 985 studies and extracted data from 17 papers. By means of this systematic review, we present current knowledge, summarize important results and provide implementation guidelines to serve as a reference for researchers interested in the application of multimodal fusion in medical imaging.
Collapse
|
40
|
Deep Learning in Radiation Oncology Treatment Planning for Prostate Cancer: A Systematic Review. J Med Syst 2020; 44:179. [DOI: 10.1007/s10916-020-01641-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Accepted: 08/05/2020] [Indexed: 12/11/2022]
|
41
|
Wildeboer RR, van Sloun RJG, Wijkstra H, Mischi M. Artificial intelligence in multiparametric prostate cancer imaging with focus on deep-learning methods. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 189:105316. [PMID: 31951873 DOI: 10.1016/j.cmpb.2020.105316] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 12/09/2019] [Accepted: 01/04/2020] [Indexed: 05/16/2023]
Abstract
Prostate cancer represents today the most typical example of a pathology whose diagnosis requires multiparametric imaging, a strategy where multiple imaging techniques are combined to reach an acceptable diagnostic performance. However, the reviewing, weighing and coupling of multiple images not only places additional burden on the radiologist, it also complicates the reviewing process. Prostate cancer imaging has therefore been an important target for the development of computer-aided diagnostic (CAD) tools. In this survey, we discuss the advances in CAD for prostate cancer over the last decades with special attention to the deep-learning techniques that have been designed in the last few years. Moreover, we elaborate and compare the methods employed to deliver the CAD output to the operator for further medical decision making.
Collapse
Affiliation(s)
- Rogier R Wildeboer
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Ruud J G van Sloun
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
- Hessel Wijkstra
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands; Department of Urology, Amsterdam University Medical Centers, University of Amsterdam, Meibergdreef 9, 1105 AZ, Amsterdam, the Netherlands
- Massimo Mischi
- Lab of Biomedical Diagnostics, Department of Electrical Engineering, Eindhoven University of Technology, De Zaale, 5600 MB, Eindhoven, the Netherlands
42
Parekh VS, Macura KJ, Harvey SC, Kamel IR, El-Khouli R, Bluemke DA, Jacobs MA. Multiparametric deep learning tissue signatures for a radiological biomarker of breast cancer: Preliminary results. Med Phys 2020; 47:75-88. [PMID: 31598978 PMC7003775 DOI: 10.1002/mp.13849] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 09/09/2019] [Accepted: 09/13/2019] [Indexed: 12/18/2022] Open
Abstract
PURPOSE Deep learning is emerging in radiology because of the increased computational capabilities available to reading rooms. These computational developments can mimic the radiologist and may allow more accurate characterization of normal and pathological tissue, assisting radiologists in defining different diseases. We introduce a novel tissue signature model based on tissue characteristics of breast tissue from multiparametric magnetic resonance imaging (mpMRI). The breast tissue signatures are used as inputs to a stacked sparse autoencoder (SSAE) multiparametric deep learning (MPDL) network for segmentation of breast mpMRI. METHODS We constructed the MPDL network from SSAEs with five layers and 10 nodes per layer. A total cohort of 195 breast cancer subjects was used for training and testing of the MPDL network: a training dataset of 145 subjects and an independent validation set of 50 subjects. After segmentation, we used a combined SAE-support vector machine (SAE-SVM) learning method for classification. Dice similarity (DS) metrics were calculated between the MPDL-segmented and dynamic contrast-enhanced (DCE) MRI-defined lesions. Sensitivity, specificity, and area under the curve (AUC) metrics were used to classify benign from malignant lesions. RESULTS The MPDL segmentation resulted in a high DS of 0.87 ± 0.05 for malignant lesions and 0.84 ± 0.07 for benign lesions. The MPDL achieved a sensitivity and specificity of 86% each, positive and negative predictive values of 92% and 73%, respectively, and an AUC of 0.90. CONCLUSIONS Using the new tissue signature model as input to the MPDL algorithm, we successfully validated MPDL in a large cohort of subjects and achieved results similar to those of radiologists.
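To illustrate the building block behind an SSAE, the following is a minimal sketch of a single sparse-autoencoder layer trained on per-voxel tissue-signature vectors, assuming PyTorch; the layer sizes, sparsity penalty, and training loop are illustrative assumptions, not the exact configuration reported above.

```python
# Minimal sketch of one sparse-autoencoder layer: reconstruct each voxel's
# multiparametric signature while an L1 penalty keeps the hidden code sparse.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_inputs: int, n_hidden: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_inputs)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Each row is one voxel's signature across mpMRI sequences (5 values here).
signatures = torch.rand(1024, 5)
model = SparseAE(n_inputs=5)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, l1_weight = nn.MSELoss(), 1e-3

for _ in range(200):
    recon, code = model(signatures)
    # Reconstruction error plus an L1 term pushing hidden codes toward sparsity.
    loss = mse(recon, signatures) + l1_weight * code.abs().mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Stacking repeats this greedily: the learned codes become the training input for the next autoencoder layer, and the final codes feed a classifier (the paper pairs an SAE with an SVM).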
Affiliation(s)
- Vishwa S. Parekh
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21208, USA
- Katarzyna J. Macura
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Susan C. Harvey
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Hologic Inc, 36 Apple Ridge Rd, Danbury, CT 06810, USA
- Ihab R. Kamel
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Riham El-Khouli
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Radiology and Radiological Sciences, University of Kentucky, Lexington, KY 40536, USA
- David A. Bluemke
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI 53726, USA
- Michael A. Jacobs
- The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
43
Polymeri E, Sadik M, Kaboteh R, Borrelli P, Enqvist O, Ulén J, Ohlsson M, Trägårdh E, Poulsen MH, Simonsen JA, Hoilund-Carlsen PF, Johnsson ÅA, Edenbrandt L. Deep learning-based quantification of PET/CT prostate gland uptake: association with overall survival. Clin Physiol Funct Imaging 2019; 40:106-113. [PMID: 31794112 PMCID: PMC7027436 DOI: 10.1111/cpf.12611] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 11/10/2019] [Accepted: 11/22/2019] [Indexed: 12/20/2022]
Abstract
Aim To validate a deep-learning (DL) algorithm for automated quantification of prostate cancer on positron emission tomography/computed tomography (PET/CT) and to explore the potential of PET/CT measurements as prognostic biomarkers. Material and methods Training of the DL algorithm for prostate volume was performed on manually segmented CT images in 100 patients. Validation of the DL algorithm was carried out in 45 patients with biopsy-proven hormone-naïve prostate cancer. The automated measurements of prostate volume were compared with manual measurements made independently by two observers. PET/CT measurements of tumour burden based on the volume and SUV of abnormal voxels were calculated automatically. Voxels in the co-registered 18F-choline PET images above a standardized uptake value (SUV) of 2.65 that corresponded to the prostate as defined by the automated segmentation of the CT images were defined as abnormal. Validation of the abnormal voxels was performed by manual segmentation of radiotracer uptake. Agreement between the algorithm and the observers regarding prostate volume was analysed with the Sørensen-Dice index (SDI). Associations between the automated PET/CT biomarkers and age, prostate-specific antigen (PSA), and Gleason score, as well as overall survival, were evaluated with a univariate Cox regression model. Results The SDI between the automated and the manual volume segmentations was 0.78 and 0.79 for the two observers, respectively. Automated PET/CT measures reflecting total lesion uptake and the ratio of abnormal-voxel volume to total prostate volume were significantly associated with overall survival (P = 0.02), whereas age, PSA, and Gleason score were not. Conclusion The automated PET/CT biomarkers showed good agreement with the manual measurements and were significantly associated with overall survival.
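For illustration, here is a minimal numpy sketch of the automated measures described above: abnormal voxels are those inside the prostate segmentation with SUV above 2.65, tumour burden is summarized by lesion volume, total lesion uptake, and the abnormal-to-prostate volume ratio, and segmentation agreement is scored with the Sørensen-Dice index. The array names and the voxel size are illustrative assumptions.

```python
# Minimal sketch: SUV thresholding inside a prostate mask plus a Dice score.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Sørensen-Dice index between two boolean masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

suv           = np.random.rand(32, 64, 64) * 5.0   # co-registered PET SUV map
prostate_mask = np.zeros_like(suv, dtype=bool)
prostate_mask[10:22, 20:44, 20:44] = True          # automated CT segmentation

# Abnormal voxels: inside the prostate and above the SUV cut-off of 2.65.
abnormal = prostate_mask & (suv > 2.65)
voxel_volume_ml = 4.0 / 1000.0                     # assumed 4 mm^3 voxels, in ml

lesion_volume_ml    = abnormal.sum() * voxel_volume_ml
total_lesion_uptake = suv[abnormal].sum() * voxel_volume_ml
volume_fraction     = abnormal.sum() / prostate_mask.sum()
print(lesion_volume_ml, total_lesion_uptake, volume_fraction)
```

The same `dice` function, applied to the automated and a manually drawn prostate mask, yields agreement values of the kind reported above (0.78 and 0.79).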
Affiliation(s)
- Eirini Polymeri
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- May Sadik
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Reza Kaboteh
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Pablo Borrelli
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Olof Enqvist
- Department of Electrical Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Mattias Ohlsson
- School of Information Technology, Halmstad Embedded and Intelligent Systems Research (EIS), CAISR - Centre for Applied Intelligent Systems Research, Halmstad University, Halmstad, Sweden
- Elin Trägårdh
- Department of Translational Medicine, Institute of Clinical Sciences, Lund University, Malmö, Sweden
- Mads H Poulsen
- Department of Urology, Odense University Hospital, Odense, Denmark
- Jane A Simonsen
- Department of Nuclear Medicine, Odense University Hospital, Odense, Denmark
- Åse A Johnsson
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Department of Radiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden
- Lars Edenbrandt
- Department of Clinical Physiology, Region Västra Götaland, Sahlgrenska University Hospital, Gothenburg, Sweden; Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
44
Shakeel P, Manogaran G. Prostate cancer classification from prostate biomedical data using ant rough set algorithm with radial trained extreme learning neural network. HEALTH AND TECHNOLOGY 2018. [DOI: 10.1007/s12553-018-0279-6] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
45
Abraham B, Nair MS. Computer-aided diagnosis of clinically significant prostate cancer from MRI images using sparse autoencoder and random forest classifier. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.06.009] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]