51. Robust segmentation of exudates from retinal surface using M-CapsNet via EM routing. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102770]
52. Enriquez JS, Chu Y, Pudakalakatti S, Hsieh KL, Salmon D, Dutta P, Millward NZ, Lurie E, Millward S, McAllister F, Maitra A, Sen S, Killary A, Zhang J, Jiang X, Bhattacharya PK, Shams S. Hyperpolarized Magnetic Resonance and Artificial Intelligence: Frontiers of Imaging in Pancreatic Cancer. JMIR Med Inform 2021; 9:e26601. [PMID: 34137725] [PMCID: PMC8277399] [DOI: 10.2196/26601]
Abstract
BACKGROUND There is an unmet need for noninvasive imaging markers that can help identify the aggressive subtype(s) of pancreatic ductal adenocarcinoma (PDAC) at diagnosis and at an earlier time point, and evaluate the efficacy of therapy prior to tumor reduction. In the past few years, there have been two major developments with potential for a significant impact in establishing imaging biomarkers for PDAC and pancreatic cancer premalignancy: (1) hyperpolarized (HP) metabolic magnetic resonance (MR), which increases the sensitivity of conventional MR by over 10,000-fold, enabling real-time metabolic measurements; and (2) applications of artificial intelligence (AI).
OBJECTIVE The objective of this review was to discuss these two exciting but independent developments (HP-MR and AI) in the realm of PDAC imaging and detection, based on the literature available to date.
METHODS A systematic review following the PRISMA extension for Scoping Reviews (PRISMA-ScR) guidelines was performed. Studies addressing the use of HP-MR and/or AI for early detection, assessment of aggressiveness, and interrogation of the early efficacy of therapy in patients with PDAC cited in recent clinical guidelines were extracted from the PubMed and Google Scholar databases. The studies were reviewed following predefined exclusion and inclusion criteria and grouped based on the use of HP-MR and/or AI in PDAC diagnosis.
RESULTS Part of the goal of this review was to highlight the knowledge gap in early detection of pancreatic cancer by any imaging modality, and to emphasize how AI and HP-MR can address this critical gap. We reviewed every paper published on HP-MR applications in PDAC, including six preclinical studies and one clinical trial, as well as several HP-MR-related articles describing new probes with many functional applications in PDAC. On the AI side, we reviewed all existing papers that met our inclusion criteria on AI applications for evaluating computed tomography (CT) and MR images in PDAC. With the emergence of AI and its unique capability to learn across multimodal data, along with sensitive metabolic imaging using HP-MR, this knowledge gap in PDAC can be adequately addressed. CT is an affordable, accessible, and widespread imaging modality worldwide; for this reason alone, most of the data discussed here are based on CT imaging datasets. Although relatively few MR-related papers were included in this review, we believe that with the rapid adoption of MR imaging and HP-MR, more clinical data on pancreatic cancer imaging will become available in the near future.
CONCLUSIONS Integration of AI, HP-MR, and multimodal imaging information in pancreatic cancer may lead to the development of real-time biomarkers for early detection, assessment of aggressiveness, and interrogation of the early efficacy of therapy in PDAC.
Affiliation(s)
- José S Enriquez: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Yan Chu: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Shivanand Pudakalakatti: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Kang Lin Hsieh: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Duncan Salmon: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Prasanta Dutta: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Niki Zacharias Millward: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Urology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Eugene Lurie: Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Steven Millward: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Florencia McAllister: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Clinical Cancer Prevention, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Anirban Maitra: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Subrata Sen: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Ann Killary: Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Jian Zhang: Division of Computer Science and Engineering, Louisiana State University, Baton Rouge, LA, United States
- Xiaoqian Jiang: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
- Pratip K Bhattacharya: Department of Cancer Systems Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, United States; Graduate School of Biomedical Sciences, University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Shayan Shams: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
53. Laoveeravat P, Abhyankar PR, Brenner AR, Gabr MM, Habr FG, Atsawarungruangkit A. Artificial intelligence for pancreatic cancer detection: Recent development and future direction. Artif Intell Gastroenterol 2021; 2:56-68. [DOI: 10.35712/aig.v2.i2.56]
Abstract
Artificial intelligence (AI) has been increasingly utilized in medical applications, especially in the field of gastroenterology. AI can assist gastroenterologists in imaging-based testing and prediction of clinical diagnosis, for example, detecting polyps during colonoscopy, identifying small bowel lesions on capsule endoscopy images, and predicting liver diseases based on clinical parameters. Given its high mortality rate, pancreatic cancer stands to benefit greatly from AI, since early detection of small lesions is difficult with conventional imaging techniques and current biomarkers. Endoscopic ultrasound (EUS) is a key diagnostic tool with high sensitivity for pancreatic adenocarcinoma and pancreatic cystic lesions, whereas the standard tumor markers have not been effective for diagnosis. Recent research has applied AI to EUS and to novel biomarkers for the early detection and differentiation of malignant pancreatic lesions, with impressive findings compared with traditional methods. Herein, we aim to explore the utility of AI in EUS and of novel serum and cyst fluid biomarkers for pancreatic cancer detection.
Affiliation(s)
- Passisd Laoveeravat: Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Priya R Abhyankar: Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Aaron R Brenner: Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Moamen M Gabr: Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
- Fadlallah G Habr: Division of Gastroenterology, Warren Alpert Medical School of Brown University, Providence, RI 02903, United States
- Amporn Atsawarungruangkit: Division of Gastroenterology, Warren Alpert Medical School of Brown University, Providence, RI 02903, United States
54. Caballo M, Hernandez AM, Lyu SH, Teuwen J, Mann RM, van Ginneken B, Boone JM, Sechopoulos I. Computer-aided diagnosis of masses in breast computed tomography imaging: deep learning model with combined handcrafted and convolutional radiomic features. J Med Imaging (Bellingham) 2021; 8:024501. [PMID: 33796604] [DOI: 10.1117/1.jmi.8.2.024501]
Abstract
Purpose: A computer-aided diagnosis (CADx) system for breast masses is proposed, which incorporates both handcrafted and convolutional radiomic features embedded into a single deep learning model.
Approach: The model combines handcrafted and convolutional radiomic signatures into a multi-view architecture, which retrieves three-dimensional (3D) image information by simultaneously processing multiple two-dimensional mass patches extracted along different planes through the 3D mass volume. Each patch is processed by a stream composed of two concatenated parallel branches: a multi-layer perceptron fed with automatically extracted handcrafted radiomic features, and a convolutional neural network, for which discriminant features are learned from the input patches. All streams are then concatenated into a final architecture, where all network weights are shared and learning occurs simultaneously for each stream and branch. The CADx system was developed and tested for diagnosis of breast masses (N = 284) using image datasets acquired with independent dedicated breast computed tomography systems from two different institutions. Its diagnostic classification performance was compared against other machine and deep learning architectures adopting handcrafted and convolutional approaches, and against three board-certified breast radiologists.
Results: On a test set of 82 masses (45 benign, 37 malignant), the proposed CADx system performed better than all other model architectures evaluated, with an increase in the area under the receiver operating characteristic curve (AUC) of 0.05 ± 0.02, achieving a final AUC of 0.947 and outperforming the three radiologists (AUC = 0.814-0.902).
Conclusions: The system demonstrated its potential usefulness in breast cancer diagnosis by improving mass malignancy assessment.
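To make the dual-branch stream design above concrete, here is a minimal PyTorch sketch of one stream: an MLP over precomputed handcrafted radiomic features in parallel with a small CNN over the image patch, with the two embeddings concatenated before classification. All layer sizes and names are illustrative assumptions, not the authors' implementation; in the paper, several such streams (one per patch plane) share weights and are concatenated further.

```python
import torch
import torch.nn as nn

class DualBranchStream(nn.Module):
    def __init__(self, n_handcrafted=100, n_classes=2):
        super().__init__()
        # MLP branch: handcrafted radiomic feature vector -> embedding
        self.mlp = nn.Sequential(
            nn.Linear(n_handcrafted, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # CNN branch: 2D mass patch -> learned embedding
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # concatenated embedding -> benign/malignant logits
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, patch, radiomics):
        z = torch.cat([self.cnn(patch), self.mlp(radiomics)], dim=1)
        return self.head(z)

model = DualBranchStream()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 100))
print(logits.shape)  # torch.Size([4, 2])
```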
Affiliation(s)
- Marco Caballo: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Andrew M Hernandez: University of California Davis, Department of Radiology, Sacramento, California, United States
- Su Hyun Lyu: University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Jonas Teuwen: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Ritse M Mann: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; The Netherlands Cancer Institute, Department of Radiology, Amsterdam, The Netherlands
- Bram van Ginneken: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- John M Boone: University of California Davis, Department of Radiology, Sacramento, California, United States; University of California Davis, Department of Biomedical Engineering, Sacramento, California, United States
- Ioannis Sechopoulos: Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; Dutch Expert Center for Screening, Nijmegen, The Netherlands
55. Zhou W, Jian W, Cen X, Zhang L, Guo H, Liu Z, Liang C, Wang G. Prediction of Microvascular Invasion of Hepatocellular Carcinoma Based on Contrast-Enhanced MR and 3D Convolutional Neural Networks. Front Oncol 2021; 11:588010. [PMID: 33854959] [PMCID: PMC8040801] [DOI: 10.3389/fonc.2021.588010]
Abstract
Background and Purpose Predicting microvascular invasion (MVI) of hepatocellular carcinoma (HCC) before surgery is extremely important: MVI is a key predictor of recurrence and helps determine the treatment strategy before liver resection or liver transplantation. In this study, we demonstrate that a deep learning approach based on contrast-enhanced MR and 3D convolutional neural networks (CNN) can be applied to better predict MVI in HCC patients.
Materials and Methods This retrospective study included 114 consecutive patients who underwent surgical resection from October 2012 to October 2018, with 117 histologically confirmed HCCs. MR sequences including 3.0T/LAVA (liver acquisition with volume acceleration) and 3.0T/e-THRIVE (enhanced T1 high resolution isotropic volume excitation) were used in image acquisition for each patient. First, numerous 3D patches were separately extracted from the region of each lesion for data augmentation. Then, a 3D CNN was utilized to extract discriminant deep features of HCC from each phase of contrast-enhanced MR separately. Furthermore, a loss function for deep supervision was designed to integrate deep features from multiple phases of contrast-enhanced MR. The dataset was divided into two parts: 77 HCCs were used as the training set, while the remaining 40 HCCs were used for independent testing. Receiver operating characteristic (ROC) curve analysis was adopted to assess the performance of MVI prediction. The output probability of the model was assessed by the independent Student's t-test or Mann-Whitney U test.
Results The mean AUC values of MVI prediction of HCC were 0.793 (p = 0.001) in the pre-contrast phase, 0.855 (p = 0.000) in the arterial phase, and 0.817 (p = 0.000) in the portal vein phase. Simple concatenation of deep features derived from all three phases using the 3D CNN improved the performance, with an AUC value of 0.906 (p = 0.000). By comparison, the proposed deep learning model with the deep supervision loss function produced the best results, with an AUC value of 0.926 (p = 0.000).
Conclusion A deep learning framework based on a 3D CNN and a deeply supervised net with contrast-enhanced MR could be effective for MVI prediction.
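As a hedged illustration of the multi-phase design with deep supervision described above, the following PyTorch sketch gives each contrast phase its own small 3D CNN and auxiliary classification head, and combines the per-phase cross-entropy terms with the loss on the fused features. Architecture sizes and the auxiliary weight are assumptions, not the published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phase_encoder():
    """Tiny 3D CNN mapping one MR phase volume to a 16-D feature vector."""
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class MultiPhaseNet(nn.Module):
    def __init__(self, n_phases=3):
        super().__init__()
        self.encoders = nn.ModuleList(phase_encoder() for _ in range(n_phases))
        self.aux_heads = nn.ModuleList(nn.Linear(16, 2) for _ in range(n_phases))
        self.fused_head = nn.Linear(16 * n_phases, 2)  # MVI-positive vs. negative

    def forward(self, phases):  # phases: list of (B, 1, D, H, W) tensors
        feats = [enc(x) for enc, x in zip(self.encoders, phases)]
        aux = [h(f) for h, f in zip(self.aux_heads, feats)]
        return self.fused_head(torch.cat(feats, dim=1)), aux

def deeply_supervised_loss(fused, aux, target, aux_weight=0.3):
    # supervise the fused prediction and each per-phase prediction
    loss = F.cross_entropy(fused, target)
    return loss + aux_weight * sum(F.cross_entropy(a, target) for a in aux)

model = MultiPhaseNet()
vols = [torch.randn(2, 1, 16, 32, 32) for _ in range(3)]
fused, aux = model(vols)
print(deeply_supervised_loss(fused, aux, torch.tensor([0, 1])))
```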
Affiliation(s)
- Wu Zhou: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wanwei Jian: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoping Cen: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou, China
- Lijuan Zhang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hui Guo: Department of Optometry, Guangzhou Aier Eye Hospital, Jinan University, Guangzhou, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Changhong Liang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Guangyi Wang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
56. Li J, Wang W, Liao L, Liu X. Analysis of the nonperfused volume ratio of adenomyosis from MRI images based on few-shot learning. Phys Med Biol 2021; 66:045019. [PMID: 33361557] [DOI: 10.1088/1361-6560/abd66b]
Abstract
The nonperfused volume (NPV) ratio is the key to the success of high-intensity focused ultrasound (HIFU) ablation treatment of adenomyosis. However, there are no qualitative interpretation standards for predicting the NPV ratio of adenomyosis using magnetic resonance imaging (MRI) before HIFU ablation treatment, which leads to inter-reader variability. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in automatic disease diagnosis from MRI. However, because HIFU treatment of adenomyosis is novel, there are not enough MRI data to support CNNs. We propose a novel few-shot learning framework that extends CNNs to predict the NPV ratio of HIFU ablation treatment for adenomyosis. We collected a dataset from 208 patients with adenomyosis who underwent MRI examination before and after HIFU treatment. The proposed method was trained and evaluated by fourfold cross-validation. The framework obtained sensitivities of 85.6%, 89.6%, and 92.8% at 0.799, 0.980, and 1.180 false positives per patient. In the receiver operating characteristic analysis of the NPV ratio of adenomyosis, the proposed method achieved areas under the curve of 0.8233, 0.8289, 0.8412, 0.8319, 0.7010, 0.7637, 0.8375, 0.8219, 0.8207, and 0.9812 for the classification of the NPV ratio intervals [0%-10%), [10%-20%), ..., [90%-100%], respectively. The present study demonstrates that few-shot learning for NPV ratio prediction of HIFU ablation treatment for adenomyosis may contribute to the selection of eligible patients and the pre-judgment of clinical efficacy.
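The abstract does not spell out the few-shot mechanism, so the sketch below shows one standard reading, a prototypical-network episode in PyTorch: class prototypes are the mean embeddings of the few labeled support images, and queries are scored by distance to the prototypes. The embedding network and all shapes are illustrative assumptions, not the authors' framework.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# toy embedding network: MRI slice -> 32-D vector
embed = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 32),
)

def prototypical_episode(support_x, support_y, query_x, n_classes):
    """One few-shot episode: prototypes = mean support embeddings per class;
    queries are classified by negative squared distance to each prototype."""
    s, q = embed(support_x), embed(query_x)              # (Ns, D), (Nq, D)
    protos = torch.stack([s[support_y == c].mean(0) for c in range(n_classes)])
    logits = -torch.cdist(q, protos) ** 2                # closer = higher score
    return F.log_softmax(logits, dim=1)

sx = torch.randn(10, 1, 64, 64)                          # 5 support images per class
sy = torch.tensor([0] * 5 + [1] * 5)
log_p = prototypical_episode(sx, sy, torch.randn(4, 1, 64, 64), n_classes=2)
print(log_p.shape)  # torch.Size([4, 2])
```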
Affiliation(s)
- Jiaqi Li: School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Wei Wang: Department of Ultrasound, Chinese PLA General Hospital, Beijing, People's Republic of China
- Lejian Liao: School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
- Xin Liu: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
57. Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. Mobile Netw Appl 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7]
58. Machine Learning Applied to the Analysis of Nonlinear Beam Dynamics Simulations for the CERN Large Hadron Collider and Its Luminosity Upgrade. Information 2021. [DOI: 10.3390/info12020053]
Abstract
A Machine Learning approach to scientific problems has been in use in Science and Engineering for decades. High-energy physics provided a natural domain of application of Machine Learning, profiting from these powerful tools for the advanced analysis of data from particle colliders. However, Machine Learning has been applied to Accelerator Physics only recently, with several laboratories worldwide deploying intense efforts in this domain. At CERN, Machine Learning techniques have been applied to beam dynamics studies related to the Large Hadron Collider and its luminosity upgrade, in domains including beam measurements and machine performance optimization. In this paper, the recent applications of Machine Learning to the analyses of numerical simulations of nonlinear beam dynamics are presented and discussed in detail. The key concept of dynamic aperture provides a number of topics that have been selected to probe Machine Learning. Indeed, the research presented here aims to devise efficient algorithms to identify outliers and to improve the quality of the fitted models expressing the time evolution of the dynamic aperture.
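As one hedged illustration of the outlier-identification task mentioned above, the sketch below fits a simple inverse-logarithmic model of dynamic-aperture evolution with turn number and flags points whose residuals exceed a robust (median/MAD) threshold. The model form, parameters, and threshold are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np
from scipy.optimize import curve_fit

def da_model(n_turns, d_inf, b, kappa):
    """Simple inverse-logarithmic model of dynamic aperture vs. turn number."""
    return d_inf + b / np.log(n_turns) ** kappa

rng = np.random.default_rng(0)
n = np.logspace(2, 5, 40)
da = da_model(n, 8.0, 30.0, 1.5) + rng.normal(0, 0.1, n.size)
da[10] += 2.0  # inject one bad simulation point

params, _ = curve_fit(da_model, n, da, p0=(5.0, 10.0, 1.0))
residuals = da - da_model(n, *params)
mad = np.median(np.abs(residuals - np.median(residuals)))
outliers = np.abs(residuals - np.median(residuals)) > 3 * 1.4826 * mad
print("fitted params:", params, "flagged:", np.flatnonzero(outliers))
```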
59. The integration of artificial intelligence models to augment imaging modalities in pancreatic cancer. J Pancreatol 2020. [DOI: 10.1097/jp9.0000000000000056]
60. Jing B, Deng Y, Zhang T, Hou D, Li B, Qiang M, Liu K, Ke L, Li T, Sun Y, Lv X, Li C. Deep learning for risk prediction in patients with nasopharyngeal carcinoma using multi-parametric MRIs. Comput Methods Programs Biomed 2020; 197:105684. [PMID: 32781421] [DOI: 10.1016/j.cmpb.2020.105684]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) is the main diagnostic tool for risk stratification and treatment decisions in nasopharyngeal carcinoma (NPC). However, the holistic feature information of multi-parametric MRIs has not been fully exploited by clinicians to accurately evaluate patients.
OBJECTIVE To help clinicians fully utilize this missed information to regroup patients, we built an end-to-end deep learning model to extract feature information from multi-parametric MRIs for predicting and stratifying the risk scores of NPC patients.
METHODS We propose an end-to-end multi-modality deep survival network (MDSN) to precisely predict the risk of disease progression of NPC patients. Extending a 3D DenseNet, the proposed MDSN extracts deep representations from multi-parametric MRIs (T1w, T2w, and T1c). Moreover, deep features and clinical stages are integrated through MDSN to more accurately predict the overall risk score (ORS) of individual NPC patients.
RESULTS A total of 1,417 individuals treated between January 2012 and December 2014 were included for training and validating the end-to-end MDSN. Results were then tested on a retrospective cohort of 429 patients from the same institution. The C-index of the proposed method with and without clinical stages was 0.672 and 0.651 on the test set, respectively, higher than that of stage grouping (0.610).
CONCLUSIONS The C-index of the model that integrated clinical stages with deep features was 0.062 higher than that of stage grouping alone (0.672 vs 0.610). We conclude that features extracted from multi-parametric MRIs based on MDSN can effectively complement clinical staging in regrouping patients.
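A minimal sketch of the deep-survival idea described above, under the assumption that the model is trained Cox-style: deep MRI features are concatenated with an encoded clinical stage, mapped to a scalar risk score, and optimized with the negative Cox partial log-likelihood (Breslow approximation). Names and dimensions are illustrative, not the authors' MDSN code.

```python
import torch

def cox_loss(risk, times, events):
    """Negative Cox partial log-likelihood (Breslow approximation).
    risk: (B,) scores; times: (B,) follow-up; events: (B,) 1 = progression."""
    order = torch.argsort(times, descending=True)   # risk set = later/equal times
    r, e = risk[order], events[order]
    log_risk_set = torch.logcumsumexp(r, dim=0)     # log-sum-exp over each risk set
    return -((r - log_risk_set) * e).sum() / e.sum().clamp(min=1)

head = torch.nn.Linear(64 + 4, 1)                   # 64 deep features + 4 stage bits
feats = torch.randn(32, 64)                         # e.g., pooled 3D DenseNet output
stage = torch.nn.functional.one_hot(torch.randint(0, 4, (32,)), 4).float()
risk = head(torch.cat([feats, stage], dim=1)).squeeze(1)

times = torch.rand(32) * 60                         # months of follow-up
events = torch.randint(0, 2, (32,)).float()
print(cox_loss(risk, times, events))
```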
Affiliation(s)
- Bingzhong Jing: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Yishu Deng: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Tao Zhang: Guangzhou Deepaint Intelligence Technology Co., Ltd., Guangzhou 510060, China
- Dan Hou: Guangzhou Deepaint Intelligence Technology Co., Ltd., Guangzhou 510060, China
- Bin Li: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Mengyun Qiang: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Kuiyuan Liu: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Liangru Ke: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Taihe Li: Shenzhen Annet Information System Co., Ltd., Guangzhou 510060, China
- Ying Sun: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Radiotherapy, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Xing Lv: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Nasopharyngeal Carcinoma, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Chaofeng Li: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, China; Department of Information, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
61. Karar ME, Hemdan EED, Shouman MA. Cascaded deep learning classifiers for computer-aided diagnosis of COVID-19 and pneumonia diseases in X-ray scans. Complex Intell Syst 2020; 7:235-247. [PMID: 34777953] [PMCID: PMC7507595] [DOI: 10.1007/s40747-020-00199-4]
Abstract
Computer-aided diagnosis (CAD) systems are considered a powerful tool for physicians to support identification of the novel Coronavirus Disease 2019 (COVID-19) using medical imaging modalities. Therefore, this article proposes a new framework of cascaded deep learning classifiers to enhance the performance of these CAD systems for highly suspected COVID-19 and pneumonia diseases in X-ray images. Our proposed deep learning framework constitutes two major advancements. First, the complicated multi-label classification of X-ray images is simplified into a series of binary classifiers, one for each tested health status, mimicking the clinical situation of diagnosing potential diseases for a patient. Second, the cascaded architecture of COVID-19 and pneumonia classifiers is flexible enough to use different fine-tuned deep learning models simultaneously, achieving the best performance in confirming infected cases. This study includes eleven pre-trained convolutional neural network models, such as the Visual Geometry Group Network (VGG) and Residual Neural Network (ResNet). They were successfully tested and evaluated on a public X-ray image dataset covering normal and three diseased cases. The results of the proposed cascaded classifiers showed that the VGG16, ResNet50V2, and Dense Neural Network (DenseNet169) models achieved the best detection accuracy for COVID-19, viral (non-COVID-19) pneumonia, and bacterial pneumonia images, respectively. Furthermore, the performance of our cascaded deep learning classifiers is superior to the multi-label classification methods of COVID-19 and pneumonia diseases in previous studies. Therefore, the proposed deep learning framework is a good option for clinical routine use to assist the diagnostic procedures for COVID-19 infection.
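The cascade logic described above can be sketched as a chain of binary classifiers, each ruling one condition in before the image is passed to the next. The sketch below uses a single torchvision backbone (VGG16) for all three stages purely for brevity; the class order, threshold, and backbones are assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def binary_head(backbone_fn):
    """Pre-trained backbone with a 1-logit head for one yes/no decision."""
    net = backbone_fn(weights="DEFAULT")
    net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 1)
    return net.eval()

# one binary classifier per stage of the cascade (all VGG16 here for brevity)
covid_net = binary_head(models.vgg16)   # COVID-19 vs. not
viral_net = binary_head(models.vgg16)   # viral (non-COVID-19) pneumonia vs. not
bact_net = binary_head(models.vgg16)    # bacterial pneumonia vs. not

@torch.no_grad()
def cascade_predict(x, thr=0.5):
    """x: one preprocessed chest X-ray, shape (1, 3, 224, 224)."""
    stages = [("COVID-19", covid_net),
              ("viral pneumonia", viral_net),
              ("bacterial pneumonia", bact_net)]
    for label, net in stages:
        if torch.sigmoid(net(x)).item() > thr:  # rule this disease in and stop
            return label
    return "normal"

print(cascade_predict(torch.randn(1, 3, 224, 224)))
```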
Affiliation(s)
- Mohamed Esmail Karar: Department of Computer Engineering and Networks, College of Computing and Information Technology, Shaqra University, Shaqra, Saudi Arabia; Department of Industrial Electronics and Control Engineering, Faculty of Electronic Engineering, Menoufia University, Minuf 32952, Egypt
- Ezz El-Din Hemdan: Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Minuf 32952, Egypt
- Marwa A. Shouman: Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Minuf 32952, Egypt
62. Ma H, Liu ZX, Zhang JJ, Wu FT, Xu CF, Shen Z, Yu CH, Li YM. Construction of a convolutional neural network classifier developed by computed tomography images for pancreatic cancer diagnosis. World J Gastroenterol 2020; 26:5156-5168. [PMID: 32982116] [PMCID: PMC7495037] [DOI: 10.3748/wjg.v26.i34.5156]
Abstract
BACKGROUND Efforts should be made to develop a deep-learning diagnosis system to distinguish pancreatic cancer from benign tissue due to the high morbidity of pancreatic cancer.
AIM To identify pancreatic cancer in computed tomography (CT) images automatically by constructing a convolutional neural network (CNN) classifier.
METHODS A CNN model was constructed using a dataset of 3494 CT images obtained from 222 patients with pathologically confirmed pancreatic cancer and 3751 CT images from 190 patients with a normal pancreas from June 2017 to June 2018. We established three datasets from these images according to the image phases, evaluated the approach in terms of binary classification (i.e., cancer or not) and ternary classification (i.e., no cancer, cancer at the tail/body, cancer at the head/neck of the pancreas) using 10-fold cross-validation, and measured the effectiveness of the model in terms of accuracy, sensitivity, and specificity.
RESULTS The overall diagnostic accuracy of the trained binary classifier was 95.47%, 95.76%, and 95.15% on the plain scan, arterial phase, and venous phase, respectively. The sensitivity was 91.58%, 94.08%, and 92.28% on the three phases, with no significant differences (χ2 = 0.914, P = 0.633). Considering that the plain phase offered the same sensitivity, easier access, and lower radiation exposure than the arterial and venous phases, it is sufficient for the binary classifier: its accuracy on plain scans was 95.47%, sensitivity 91.58%, and specificity 98.27%. The CNN and board-certified gastroenterologists achieved higher accuracies than trainees on plain scan diagnosis (χ2 = 21.534, P < 0.001 and χ2 = 9.524, P < 0.05, respectively), whereas the difference between the CNN and gastroenterologists was not significant (χ2 = 0.759, P = 0.384). For the trained ternary classifier, the overall diagnostic accuracy was 82.06%, 79.06%, and 78.80% on the plain, arterial, and venous phases, respectively. The sensitivity for detecting cancers in the tail was 52.51%, 41.10%, and 36.03%, while the sensitivity for cancers in the head was 46.21%, 85.24%, and 72.87% on the three phases, respectively. The difference in sensitivity for cancers in the head among the three phases was significant (χ2 = 16.651, P < 0.001), with the arterial phase having the highest sensitivity.
CONCLUSION We propose a deep learning-based pancreatic cancer classifier trained on medium-sized datasets of CT images. It is suitable for screening purposes in pancreatic cancer detection.
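As a sketch of the evaluation arithmetic reported above, the snippet below computes accuracy, sensitivity, and specificity from a binary confusion matrix and applies a chi-square test for differences in sensitivity across the three CT phases. All counts are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from a 2x2 confusion matrix."""
    return {"accuracy": (tp + tn) / (tp + fp + tn + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

print(binary_metrics(tp=204, fp=3, tn=187, fn=18))

# detected vs. missed cancers per phase (rows: plain, arterial, venous)
table = np.array([[204, 18],
                  [209, 13],
                  [205, 17]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.3f}, dof = {dof}")
```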
Affiliation(s)
- Han Ma: Department of Gastroenterology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- Zhong-Xin Liu: College of Computer Science and Technology, Zhejiang University, Hangzhou 310058, Zhejiang Province, China
- Jing-Jing Zhang: Department of Gastroenterology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- Feng-Tian Wu: State Key Laboratory for Diagnosis and Treatment of Infectious Disease, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- Cheng-Fu Xu: Department of Gastroenterology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- Zhe Shen: Department of Gastroenterology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- Chao-Hui Yu: Department of Gastroenterology, Zhejiang Provincial Key Laboratory of Pancreatic Disease, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
- You-Ming Li: Department of Gastroenterology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou 310003, Zhejiang Province, China
63. Zhang Z, Li S, Wang Z, Lu Y. A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer via CT Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1160-1164. [PMID: 33018193] [DOI: 10.1109/embc44109.2020.9176172]
Abstract
As Deep Convolutional Neural Networks (DCNNs) have shown robust performance in medical image analysis, a number of deep-learning-based tumor detection methods have been developed in recent years. Automatic detection of pancreatic tumors on contrast-enhanced Computed Tomography (CT) is now widely applied for the diagnosis and staging of pancreatic cancer. Traditional hand-crafted methods only extract low-level features, and normal convolutional neural networks fail to make full use of effective context information, which causes inferior detection results. In this paper, a novel and efficient pancreatic tumor detection framework is designed that fully exploits context information at multiple scales. More specifically, the proposed method consists of three main components: an Augmented Feature Pyramid network, Self-adaptive Feature Fusion, and a Dependencies Computation (DC) Module. First, a bottom-up path augmentation is established to fully extract and propagate low-level, accurately localized information. Then, Self-adaptive Feature Fusion encodes much richer context information at multiple scales based on the proposed regions. Finally, the DC Module is specifically designed to capture the interaction information between proposals and surrounding tissues. Experimental results achieve competitive detection performance with an AUC of 0.9455, which outperforms other state-of-the-art methods to the best of our knowledge, demonstrating that the proposed framework can detect pancreatic tumors efficiently and accurately.
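Of the three components named above, the self-adaptive fusion step lends itself to a compact sketch: per-level weights are learned and softmax-normalized, the pyramid levels are resampled to a common size, and their weighted sum is fused. This is a generic reading of the idea, not the authors' implementation; all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAdaptiveFusion(nn.Module):
    def __init__(self, n_levels, channels):
        super().__init__()
        self.scores = nn.Parameter(torch.zeros(n_levels))  # learned per-level weights
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, feats):  # feats: list of (B, C, H_i, W_i) pyramid levels
        size = feats[0].shape[-2:]
        aligned = [F.interpolate(f, size=size, mode="bilinear",
                                 align_corners=False) for f in feats]
        w = torch.softmax(self.scores, dim=0)              # normalize the weights
        fused = sum(wi * fi for wi, fi in zip(w, aligned)) # weighted multi-scale sum
        return self.proj(fused)

fusion = SelfAdaptiveFusion(n_levels=3, channels=16)
pyramid = [torch.randn(2, 16, s, s) for s in (32, 16, 8)]
print(fusion(pyramid).shape)  # torch.Size([2, 16, 32, 32])
```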
64. Revathi M, Jeya IJS, Deepa SN. Deep learning-based soft computing model for image classification application. Soft Comput 2020. [DOI: 10.1007/s00500-020-05048-7]
65. Lyu J, Bi X, Ling SH. Multi-Level Cross Residual Network for Lung Nodule Classification. Sensors (Basel) 2020; 20:2837. [PMID: 32429401] [PMCID: PMC7284728] [DOI: 10.3390/s20102837]
Abstract
Computer-aided algorithms play an important role in disease diagnosis through medical images. As one of the major cancers, lung cancer is commonly detected by computed tomography. To increase the survival rate of lung cancer patients, an early-stage diagnosis is necessary. In this paper, we propose a new structure, the multi-level cross residual convolutional neural network (ML-xResNet), to classify different types of lung nodule malignancies. ML-xResNet is constructed from three-level parallel ResNets with different convolution kernel sizes to extract multi-scale features from the inputs. Moreover, the residuals are connected not only within the current level but also across other levels in a crossover manner. To illustrate the performance of ML-xResNet, we apply the model to ternary classification (benign, indeterminate, and malignant lung nodules) and binary classification (benign and malignant lung nodules) of lung nodules. Based on the experimental results, the proposed ML-xResNet achieves the best results of 85.88% accuracy for ternary classification and 92.19% accuracy for binary classification, without any additional handcrafted preprocessing algorithm.
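A hedged sketch of the cross-residual idea follows: three parallel branches with different kernel sizes, where each branch's shortcut also mixes in the other branches' inputs. Channel counts and the mixing rule are illustrative assumptions, not the published ML-xResNet.

```python
import torch
import torch.nn as nn

class CrossResidualBlock(nn.Module):
    def __init__(self, c, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # one branch per level, each with a different receptive field
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, c, k, padding=k // 2),
                          nn.BatchNorm2d(c), nn.ReLU())
            for k in kernel_sizes)

    def forward(self, xs):  # xs: list of per-level inputs, each (B, C, H, W)
        outs = []
        for i, branch in enumerate(self.branches):
            # shortcut crosses levels: average of all branch inputs
            shortcut = sum(xs) / len(xs)
            outs.append(branch(xs[i]) + shortcut)
        return outs

block = CrossResidualBlock(c=8)
ys = block([torch.randn(2, 8, 32, 32) for _ in range(3)])
print([tuple(y.shape) for y in ys])  # three (2, 8, 32, 32) outputs
```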
Affiliation(s)
- Juan Lyu: College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
- Xiaojun Bi: College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China; College of Information Engineering, Minzu University of China, Beijing 100081, China
- Sai Ho Ling: School of Biomedical Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
66. Luo Y, Chen X, Chen J, Song C, Shen J, Xiao H, Chen M, Li ZP, Huang B, Feng ST. Preoperative Prediction of Pancreatic Neuroendocrine Neoplasms Grading Based on Enhanced Computed Tomography Imaging: Validation of Deep Learning with a Convolutional Neural Network. Neuroendocrinology 2020; 110:338-350. [PMID: 31525737] [DOI: 10.1159/000503291]
Abstract
INTRODUCTION The pathological grading of pancreatic neuroendocrine neoplasms (pNENs) is an independent predictor of survival and an indicator for treatment. Deep learning (DL) with a convolutional neural network (CNN) may improve the preoperative prediction of pNEN grading.
METHODS Ninety-three pNEN patients with preoperative contrast-enhanced computed tomography (CECT) from Hospital I were retrospectively enrolled. A CNN-based DL algorithm was applied to the CECT images to obtain three models (arterial, venous, and arterial/venous), whose performances were evaluated via an eightfold cross-validation technique. The CECT images of the optimal phase were used to compare the DL and traditional machine learning (TML) models in predicting the pathological grading of pNENs. The performance of radiologists using qualitative and quantitative computed tomography findings was also evaluated. The best DL model from the eightfold cross-validation was evaluated on an independent testing set of 19 patients from Hospital II who were scanned on a different scanner. Kaplan-Meier (KM) analysis was employed for survival analysis.
RESULTS The area under the curve (AUC, 0.81) of the arterial phase in the validation set was significantly higher than those of the venous (AUC 0.57, p = 0.03) and arterial/venous phases (AUC 0.70, p = 0.03) in predicting the pathological grading of pNENs. Compared with the TML models, the DL model gave a higher, although not significantly higher, AUC. The highest OR was achieved for the p ratio < 0.9, where the AUC and accuracy for diagnosing G3 pNENs were 0.80 and 79.1%, respectively. The DL algorithm achieved an AUC of 0.82 and an accuracy of 88.1% on the independent testing set. KM analysis showed a statistically significant difference between the predicted G1/2 and G3 groups in progression-free survival (p = 0.001) and overall survival (p < 0.001).
CONCLUSION The CNN-based DL method showed relatively robust performance in predicting the pathological grading of pNENs from CECT images.
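The survival comparison described in the results can be reproduced in outline with the lifelines library: Kaplan-Meier fits per predicted-grade group and a log-rank test between them. The arrays below are synthetic stand-ins for (follow-up time, event observed) pairs, not the study's data.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
t_low, e_low = rng.exponential(60, 50), rng.integers(0, 2, 50)    # predicted G1/2
t_high, e_high = rng.exponential(25, 30), rng.integers(0, 2, 30)  # predicted G3

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="predicted G1/2")
print(kmf.median_survival_time_)

# log-rank test between the two predicted groups
result = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {result.p_value:.4f}")
```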
Affiliation(s)
- Yanji Luo: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Xin Chen: School of Biomedical Engineering, Health Science Center, Shenzhen University, Block A2, Xili Campus of Shenzhen University, Shenzhen, China
- Jie Chen: Department of Gastroenterology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Chenyu Song: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jingxian Shen: Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Huanhui Xiao: School of Biomedical Engineering, Health Science Center, Shenzhen University, Block A2, Xili Campus of Shenzhen University, Shenzhen, China
- Minhu Chen: Department of Gastroenterology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Zi-Ping Li: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Bingsheng Huang: School of Biomedical Engineering, Health Science Center, Shenzhen University, Block A2, Xili Campus of Shenzhen University, Shenzhen, China
- Shi-Ting Feng: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
67. Deep neural network for automatic characterization of lesions on 68Ga-PSMA-11 PET/CT. Eur J Nucl Med Mol Imaging 2019; 47:603-613. [PMID: 31813050] [DOI: 10.1007/s00259-019-04606-y]
Abstract
PURPOSE This study proposes an automated prostate cancer (PC) lesion characterization method based on a deep neural network to determine tumor burden on 68Ga-PSMA-11 PET/CT and potentially facilitate the optimization of PSMA-directed radionuclide therapy.
METHODS We collected 68Ga-PSMA-11 PET/CT images from 193 patients with metastatic PC at three medical centers. For proof of concept, we focused on the detection of pelvic bone and lymph node lesions. A deep neural network (triple-combining 2.5D U-Net) was developed for the automated characterization of these lesions. The proposed method simultaneously extracts features from the axial, coronal, and sagittal planes, which mimics the workflow of physicians and reduces computational and memory requirements.
RESULTS Among all the labeled lesions, the network achieved 99% precision, 99% recall, and an F1 score of 99% on bone lesion detection, and 94% precision, 89% recall, and an F1 score of 92% on lymph node lesion detection. Segmentation accuracy was lower than detection accuracy. The performance of the network was correlated with the amount of training data.
CONCLUSION We developed a deep neural network to automatically characterize PC lesions on 68Ga-PSMA-11 PET/CT. The preliminary test within the pelvic area confirms the potential of deep learning methods. Increasing the amount of training data should further enhance the performance of the proposed method and may ultimately allow whole-body assessments.
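The triple-plane input construction implied by "triple-combining 2.5D U-Net" can be sketched simply: for a voxel of interest, take the axial, coronal, and sagittal slices through it and stack them as channels, so a 2D network sees three orthogonal views at once. The patch size and layout below are assumptions for illustration.

```python
import numpy as np

def extract_25d_patch(volume, center, size=64):
    """volume: (Z, Y, X) array; center: (z, y, x); returns (3, size, size)."""
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]   # slice through z
    coronal  = volume[z - h:z + h, y, x - h:x + h]   # slice through y
    sagittal = volume[z - h:z + h, y - h:y + h, x]   # slice through x
    return np.stack([axial, coronal, sagittal])      # three orthogonal views

vol = np.random.rand(128, 128, 128).astype(np.float32)
patch = extract_25d_patch(vol, center=(64, 64, 64))
print(patch.shape)  # (3, 64, 64)
```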
68. Vinsard DG, Mori Y, Misawa M, Kudo SE, Rastogi A, Bagci U, Rex DK, Wallace MB. Quality assurance of computer-aided detection and diagnosis in colonoscopy. Gastrointest Endosc 2019; 90:55-63. [PMID: 30926431] [DOI: 10.1016/j.gie.2019.03.019]
Abstract
Recent breakthroughs in artificial intelligence (AI), specifically via its emerging subfield of "deep learning," have direct implications for computer-aided detection and diagnosis (CADe and/or CADx) in colonoscopy. AI is expected to have at least two major roles in colonoscopy practice: polyp detection (CADe) and polyp characterization (CADx). CADe has the potential to decrease the polyp miss rate, contributing to improved adenoma detection, whereas CADx can improve the accuracy of colorectal polyp optical diagnosis, leading to a reduction of unnecessary polypectomies of non-neoplastic lesions, potential implementation of a resect-and-discard paradigm, and proper application of advanced resection techniques. A growing number of medical-engineering researchers are developing both CADe and CADx systems, some of which allow real-time recognition of polyps or in vivo identification of adenomas with over 90% accuracy. However, the quality of the developed AI systems, as well as that of the study designs, varies significantly, raising concerns about the generalizability of the proposed AI systems. Initial studies were conducted in an exploratory or retrospective fashion using stored images, and likely overestimated performance. These drawbacks potentially hinder smooth implementation of this novel technology into colonoscopy practice. The aim of this article is to review both the contributions and the limitations of recent machine-learning-based CADe and/or CADx colonoscopy studies and to propose principles that should underlie system development and clinical testing.
Affiliation(s)
- Daniela Guerrero Vinsard: Showa University International Center for Endoscopy, Showa University Northern Yokohama Hospital, Yokohama, Japan; Division of Internal Medicine, University of Connecticut Health Center, Farmington, Connecticut, USA
- Yuichi Mori: Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Masashi Misawa: Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Shin-Ei Kudo: Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
- Amit Rastogi: Division of Gastroenterology, University of Kansas Medical Center, Kansas City, Kansas, USA
- Ulas Bagci: Center for Research in Computer Vision, University of Central Florida, Orlando, Florida, USA
- Douglas K Rex: Division of Gastroenterology and Hepatology, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Michael B Wallace: Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, Florida, USA
69. Deepak S, Ameer PM. Brain tumor classification using deep CNN features via transfer learning. Comput Biol Med 2019; 111:103345. [PMID: 31279167] [DOI: 10.1016/j.compbiomed.2019.103345]
Abstract
Brain tumor classification is an important problem in computer-aided diagnosis (CAD) for medical applications. This paper focuses on a three-class classification problem: differentiating among glioma, meningioma, and pituitary tumors, three prominent types of brain tumor. The proposed classification system adopts the concept of deep transfer learning and uses a pre-trained GoogLeNet to extract features from brain MRI images. Proven classifier models are integrated to classify the extracted features. The experiment follows a patient-level five-fold cross-validation process on an MRI dataset from figshare. The proposed system records a mean classification accuracy of 98%, outperforming all state-of-the-art methods. Other performance measures used in the study are the area under the curve (AUC), precision, recall, F-score, and specificity. In addition, the paper addresses a practical aspect by evaluating the system with fewer training samples. The observations of the study imply that transfer learning is a useful technique when the availability of medical images is limited. The paper also provides an analytical discussion of misclassifications.
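The transfer-learning pipeline described above maps naturally to a short sketch: a pre-trained GoogLeNet with its final fully connected layer replaced by an identity acts as a fixed feature extractor, and a separate classifier (an SVM here, as one plausible choice of "proven classifier model") is trained on the features. Preprocessing and data loading are omitted; the SVM choice and dummy labels are assumptions.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

net = models.googlenet(weights="DEFAULT")
net.fc = torch.nn.Identity()   # emit 1024-D features instead of 1000 class logits
net.eval()

@torch.no_grad()
def extract_features(batch):   # batch: (B, 3, 224, 224), ImageNet-normalized
    return net(batch).numpy()

feats = extract_features(torch.randn(8, 3, 224, 224))

labels = [0, 1, 2, 0, 1, 2, 0, 1]  # glioma / meningioma / pituitary (dummy)
clf = SVC(kernel="linear").fit(feats, labels)
print(clf.predict(feats[:3]))
```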
Affiliation(s)
- S Deepak: Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India
- P M Ameer: Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India