1
Shen M, Jiang Z. Artificial Intelligence Applications in Lymphoma Diagnosis and Management: Opportunities, Challenges, and Future Directions. J Multidiscip Healthc 2024; 17:5329-5339. PMID: 39582879; PMCID: PMC11583773; DOI: 10.2147/jmdh.s485724.
Abstract
Lymphoma, a heterogeneous group of blood cancers, presents significant diagnostic and therapeutic challenges due to its complex subtypes and variable clinical outcomes. Artificial intelligence (AI) has emerged as a promising tool to enhance the accuracy and efficiency of lymphoma pathology. This review explores the potential of AI in lymphoma diagnosis, classification, prognosis prediction, and treatment planning, as well as addressing the challenges and future directions in this rapidly evolving field.
Affiliation(s)
- Miao Shen
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou City, Zhejiang Province, 310000, People’s Republic of China
- Department of Pathology, Deqing People’s Hospital, Huzhou City, Zhejiang Province, 313200, People’s Republic of China
- Zhinong Jiang
- Department of Pathology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou City, Zhejiang Province, 310000, People’s Republic of China
2
Monga M, Edwards NC, Rojanasarot S, Patel M, Turner E, White J, Bhattacharyya S. Artificial Intelligence in Endourology: Maximizing the Promise Through Consideration of the Principles of Diffusion of Innovation Theory. J Endourol 2024; 38:755-762. PMID: 38877816; DOI: 10.1089/end.2023.0680.
Abstract
Introduction: Diffusion of Innovation Theory explains how ideas or products gain momentum and diffuse (or spread) through specific populations or social systems over time. The theory analyzes primary influencers of the spread of new ideas, including the innovation itself, communication channels, time, and social systems. Methods: The current study reviewed published medical literature to identify studies and applications of artificial intelligence (AI) in endourology and used E.M. Rogers' Diffusion of Innovation Theory to analyze the primary influencers of the adoption of AI in endourological care. The insights gained were triaged and prioritized into AI application-related action items or "tips" for facilitating the appropriate diffusion of the most valuable endourological innovations. Results: Published medical literature indicates that AI is still a research-based tool in endourology and is not widely used in clinical practice. The published studies have presented AI models and algorithms to assist with stone disease detection (n = 17), the prediction of management outcomes (n = 18), the optimization of operative procedures (n = 9), and the elucidation of stone disease chemistry and composition (n = 24). Five tips for facilitating appropriate adoption of endourological AI are: (1) Develop/prioritize training programs to establish the foundation for effective use; (2) create appropriate data infrastructure for implementation, including its maintenance and evolution over time; (3) deliver AI transparency to gain the trust of endourology stakeholders; (4) adopt innovations in the context of continuous quality improvement Plan-Do-Study-Act cycles, as these approaches have proven track records for improving care quality; and (5) be realistic about what AI can and cannot currently do, and document this to establish the basis for shared understanding.
Conclusion: Diffusion of Innovation Theory provides a framework for analyzing the influencers of the adoption of AI in endourological care. The five tips identified through this research may be used to facilitate appropriate diffusion of the most valuable endourological innovations.
Affiliation(s)
- Manoj Monga
- UC San Diego Health, San Diego, California, USA
- Natalie C Edwards
- Health Services Consulting Corporation, Boxborough, Massachusetts, USA
- Sirikan Rojanasarot
- Boston Scientific, Health Economics and Market Access, Marlborough, Massachusetts, USA
- Mital Patel
- Boston Scientific, Health Economics and Market Access, Marlborough, Massachusetts, USA
- Erin Turner
- Boston Scientific, Health Economics and Market Access, Marlborough, Massachusetts, USA
- Jeni White
- Boston Scientific, Health Economics and Market Access, Marlborough, Massachusetts, USA
- Samir Bhattacharyya
- Boston Scientific, Health Economics and Market Access, Marlborough, Massachusetts, USA
3
Yang M, Yang M, Yang L, Wang Z, Ye P, Chen C, Fu L, Xu S. Deep learning for MRI lesion segmentation in rectal cancer. Front Med (Lausanne) 2024; 11:1394262. PMID: 38983364; PMCID: PMC11231084; DOI: 10.3389/fmed.2024.1394262.
Abstract
Rectal cancer (RC) is a globally prevalent malignant tumor, presenting significant challenges in its management and treatment. Magnetic resonance imaging (MRI) offers superior soft-tissue contrast without ionizing radiation, making it the most widely used and effective detection method for RC patients. In early screening, radiologists rely on patients' imaging characteristics and their own extensive clinical experience for diagnosis. However, diagnostic accuracy may be hindered by factors such as limited expertise, visual fatigue, and image clarity issues, resulting in misdiagnosis or missed diagnosis. Moreover, the organs surrounding the rectum are extensively distributed, and some have shapes similar to the tumor with unclear boundaries; these complexities greatly impede doctors' ability to diagnose RC accurately. With recent advancements in artificial intelligence, machine learning techniques such as deep learning (DL) have demonstrated immense potential and broad prospects in medical image analysis, significantly enhancing research in medical image classification, detection, and, in particular, segmentation. This review discusses the development of DL segmentation algorithms and their application to lesion segmentation on MRI images of RC, to provide theoretical guidance and support for further advancements in this field.
Affiliation(s)
- Mingwei Yang
- Department of General Surgery, Nanfang Hospital Zengcheng Campus, Guangzhou, Guangdong, China
- Miyang Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Lanlan Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Zhaochu Wang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Peiyun Ye
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Chujie Chen
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Liyuan Fu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Shangwen Xu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
4
Zhu M, Fu Q, Zang Y, Shao Z, Zhou Y, Jiang Z, Wang W, Shi B, Chen S, Zhu Y. Different diagnostic strategies combining prostate health index and magnetic resonance imaging for predicting prostate cancer: A multicentre study. Urol Oncol 2024; 42:159.e17-159.e23. PMID: 38480077; DOI: 10.1016/j.urolonc.2024.02.009.
Abstract
OBJECTIVE To explore how prostate health index (PHI) and multiparametric magnetic resonance imaging (mpMRI) should be used in concert to improve diagnostic capacity for clinically significant prostate cancers (CsCaP) in patients with prostate-specific antigen (PSA) between 4 and 20 ng/ml. METHODS A total of 426 patients fulfilling the inclusion criteria were included in this study. Univariable and multivariable logistic analyses were performed to analyze the association between the clinical indicators and CaP/CsCaP. We used the DeLong test to compare the differences in the area under the curve (AUC) values of four models for CaP and CsCaP. Decision curve analysis (DCA) and calibration plots were used to assess predictive performance. We compared clinical outcomes of different diagnostic strategies constructed using different combinations of the models by the chi-square test and the McNemar test. RESULTS The AUC of PHI-MRI (a risk prediction model based on PHI and mpMRI) was 0.859, which was significantly higher than those of PHI (AUC = 0.792, P < 0.001) and mpMRI (AUC = 0.797, P < 0.001). PHI-MRI had a higher net benefit on DCA for predicting CaP and CsCaP in comparison to PHI and mpMRI. Adding PHI-MRI to diagnostic strategies for CsCaP, such as using PHI-MRI alone or using PHI followed by PHI-MRI, could reduce the number of biopsies by approximately 20% compared to using PHI followed by mpMRI (256 vs 316 and 257 vs 316, respectively). CONCLUSIONS The PHI-MRI model was superior to PHI and mpMRI alone. It may reduce the number of biopsies while maintaining the detection rate of CsCaP at an appropriate sensitivity, at the cost of an increased number of MRI scans.
Affiliation(s)
- Meikai Zhu
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Qiang Fu
- Department of Urology, Shandong Provincial Hospital, Jinan, China
- Yunjiang Zang
- Department of Urology, Weifang People's Hospital, Weifang, China
- Zhiqiang Shao
- Department of Urology, Linyi People's Hospital, Linyi, China
- Yongheng Zhou
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Zhiwen Jiang
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Wenfu Wang
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Benkang Shi
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Shouzhen Chen
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
- Yaofeng Zhu
- Department of Urology, Qilu Hospital of Shandong University, Jinan, China
5
Miao Q, Wang X, Cui J, Zheng H, Xie Y, Zhu K, Chai R, Jiang Y, Feng D, Zhang X, Shi F, Tan X, Fan G, Liang K. Artificial intelligence to predict T4 stage of pancreatic ductal adenocarcinoma using CT imaging. Comput Biol Med 2024; 171:108125. PMID: 38340439; DOI: 10.1016/j.compbiomed.2024.108125.
Abstract
BACKGROUND The accurate assessment of T4 stage of pancreatic ductal adenocarcinoma (PDAC) has consistently presented a considerable difficulty for radiologists. This study aimed to develop and validate an automated artificial intelligence (AI) pipeline for the prediction of T4 stage of PDAC using contrast-enhanced CT imaging. METHODS The data were obtained retrospectively from consecutive patients with surgically resected and pathologically proven PDAC at two institutions between July 2017 and June 2022. Initially, a deep learning (DL) model was developed to segment PDAC. Subsequently, radiomics features were extracted from the automatically segmented region of interest (ROI), which encompassed both the tumor region and a 3 mm surrounding area, to construct a predictive model for determining T4 stage of PDAC. The assessment of the models' performance involved the calculation of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. RESULTS The study encompassed a cohort of 509 PDAC patients, with a median age of 62 years (interquartile range: 55-67). The proportion of patients with T4 stage disease in the cohort was 16.9%. The model achieved an AUC of 0.849 (95% CI: 0.753-0.940), a sensitivity of 0.875, and a specificity of 0.728 in predicting T4 stage of PDAC. The performance of the model was determined to be comparable to that of two experienced abdominal radiologists (AUCs: 0.849 vs. 0.834 and 0.857). CONCLUSION The automated AI pipeline utilizing tumor and peritumor-related radiomics features demonstrated comparable performance to that of senior abdominal radiologists in predicting T4 stage of PDAC.
Affiliation(s)
- Qi Miao
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jingjing Cui
- Department of Research and Development, United Imaging Intelligence (Beijing) Co., Ltd., Beijing, China
- Haoxin Zheng
- Department of Computer Science, University of California, Los Angeles, USA
- Yan Xie
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Kexin Zhu
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Ruimei Chai
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Yuanxi Jiang
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Dongli Feng
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Xin Zhang
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiaodong Tan
- Department of General Surgery/Pancreatic and Thyroid Surgery, Shengjing Hospital of China Medical University, Shenyang, China
- Guoguang Fan
- Department of Radiology, The First Hospital of China Medical University, Shenyang, China
- Keke Liang
- Department of General Surgery/Pancreatic and Thyroid Surgery, Shengjing Hospital of China Medical University, Shenyang, China
6
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. PMID: 38398374; PMCID: PMC10889757; DOI: 10.3390/jcm13041061.
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging since it offers useful visual data on the female reproductive system, leading to a deeper understanding of pathophysiological concepts. The applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields so far. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience for women's reproductive health. In this review, we aim to establish the current status of AI in gynecology, the upcoming developments in this area, and discuss the challenges facing its clinical implementation, namely the technological and ethical concerns for technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (M.B.); (P.C.); (G.M.); (T.M.)
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; (F.M.); (M.M.)
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
7
Huang Y, Cheung CY, Li D, Tham YC, Sheng B, Cheng CY, Wang YX, Wong TY. AI-integrated ocular imaging for predicting cardiovascular disease: advancements and future outlook. Eye (Lond) 2024; 38:464-472. PMID: 37709926; PMCID: PMC10858189; DOI: 10.1038/s41433-023-02724-4.
Abstract
Cardiovascular disease (CVD) remains the leading cause of death worldwide. Assessment of CVD risk plays an essential role in identifying individuals at higher risk and enables the implementation of targeted intervention strategies, leading to reduced CVD prevalence and improved patient survival rates. The ocular vasculature, particularly the retinal vasculature, has emerged as a potential means for CVD risk stratification due to the anatomical similarities and physiological characteristics it shares with other vital organs, such as the brain and heart. The integration of artificial intelligence (AI) into ocular imaging has the potential to overcome limitations associated with traditional semi-automated image analysis, including inefficiency and manual measurement errors. Furthermore, AI techniques may uncover novel and subtle features that contribute to the identification of ocular biomarkers associated with CVD. This review provides a comprehensive overview of advancements made in AI-based ocular image analysis for predicting CVD, including the prediction of CVD risk factors, the replacement of traditional CVD biomarkers (e.g., CT-measured coronary artery calcium score), and the prediction of symptomatic CVD events. The review covers a range of ocular imaging modalities, including colour fundus photography, optical coherence tomography, and optical coherence tomography angiography, as well as other image types such as external eye images. Additionally, the review addresses the current limitations of AI research in this field and discusses the challenges associated with translating AI algorithms into clinical practice.
Affiliation(s)
- Yu Huang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Dawei Li
- College of Future Technology, Peking University, Beijing, China
- Yih Chung Tham
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ching Yu Cheng
- Centre for Innovation and Precision Eye Health and Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Ya Xing Wang
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Tien Yin Wong
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Beijing Tsinghua Changgung Hospital, Beijing, China
8
Thirunavukarasu AJ, Elangovan K, Gutierrez L, Li Y, Tan I, Keane PA, Korot E, Ting DSW. Democratizing Artificial Intelligence Imaging Analysis With Automated Machine Learning: Tutorial. J Med Internet Res 2023; 25:e49949. PMID: 37824185; PMCID: PMC10603560; DOI: 10.2196/49949.
Abstract
Deep learning-based clinical imaging analysis underlies diagnostic artificial intelligence (AI) models, which can match or even exceed the performance of clinical experts, having the potential to revolutionize clinical practice. A wide variety of automated machine learning (autoML) platforms lower the technical barrier to entry to deep learning, extending AI capabilities to clinicians with limited technical expertise, and even autonomous foundation models such as multimodal large language models. Here, we provide a technical overview of autoML with descriptions of how autoML may be applied in education, research, and clinical practice. Each stage of the process of conducting an autoML project is outlined, with an emphasis on ethical and technical best practices. Specifically, data acquisition, data partitioning, model training, model validation, analysis, and model deployment are considered. The strengths and limitations of available code-free, code-minimal, and code-intensive autoML platforms are considered. AutoML has great potential to democratize AI in medicine, improving AI literacy by enabling "hands-on" education. AutoML may serve as a useful adjunct in research by facilitating rapid testing and benchmarking before significant computational resources are committed. AutoML may also be applied in clinical contexts, provided regulatory requirements are met. The abstraction by autoML of arduous aspects of AI engineering promotes prioritization of data set curation, supporting the transition from conventional model-driven approaches to data-centric development. To fulfill its potential, clinicians must be educated on how to apply these technologies ethically, rigorously, and effectively; this tutorial represents a comprehensive summary of relevant considerations.
Affiliation(s)
- Arun James Thirunavukarasu
- University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Kabilan Elangovan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Laura Gutierrez
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Yong Li
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Iris Tan
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Pearse A Keane
- Moorfields Eye Hospital NHS Foundation Trust, London, United Kingdom
- Edward Korot
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Retina Specialists of Michigan, Grand Rapids, MI, United States
- Daniel Shu Wei Ting
- Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore, Singapore
- Byers Eye Institute, Stanford University, Palo Alto, CA, United States
- Singapore National Eye Centre, Singapore, Singapore
9
Wang Z, Li Z, Li K, Mu S, Zhou X, Di Y. Performance of artificial intelligence in diabetic retinopathy screening: a systematic review and meta-analysis of prospective studies. Front Endocrinol (Lausanne) 2023; 14:1197783. PMID: 37383397; PMCID: PMC10296189; DOI: 10.3389/fendo.2023.1197783.
Abstract
Aims To systematically evaluate the diagnostic value of an artificial intelligence (AI) algorithm model for various types of diabetic retinopathy (DR) in prospective studies over the previous five years, and to explore the factors affecting its diagnostic effectiveness. Materials and methods A search was conducted in Cochrane Library, Embase, Web of Science, PubMed, and IEEE databases to collect prospective studies on AI models for the diagnosis of DR from January 2017 to December 2022. We used QUADAS-2 to evaluate the risk of bias in the included studies. Meta-analysis was performed using MetaDiSc and STATA 14.0 software to calculate the combined sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of various types of DR. Diagnostic odds ratios, summary receiver operating characteristic (SROC) plots, coupled forest plots, and subgroup analysis were performed according to the DR categories, patient source, region of study, and quality of literature, image, and algorithm. Results Finally, 21 studies were included. Meta-analysis showed that the pooled sensitivity, specificity, pooled positive likelihood ratio, pooled negative likelihood ratio, area under the curve, Cochrane Q index, and pooled diagnostic odds ratio of AI model for the diagnosis of DR were 0.880 (0.875-0.884), 0.912 (0.99-0.913), 13.021 (10.738-15.789), 0.083 (0.061-0.112), 0.9798, 0.9388, and 206.80 (124.82-342.63), respectively. The DR categories, patient source, region of study, sample size, quality of literature, image, and algorithm may affect the diagnostic efficiency of AI for DR. Conclusion AI model has a clear diagnostic value for DR, but it is influenced by many factors that deserve further study. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023389687.
10
Li H, Cao J, Grzybowski A, Jin K, Lou L, Ye J. Diagnosing Systemic Disorders with AI Algorithms Based on Ocular Images. Healthcare (Basel) 2023; 11:1739. PMID: 37372857; PMCID: PMC10298137; DOI: 10.3390/healthcare11121739.
Abstract
The advent of artificial intelligence (AI), especially the state-of-the-art deep learning frameworks, has begun a silent revolution in all medical subfields, including ophthalmology. Due to their specific microvascular and neural structures, the eyes are anatomically associated with the rest of the body. Hence, ocular image-based AI technology may be a useful alternative or additional screening strategy for systemic diseases, especially where resources are scarce. This review summarizes the current applications of AI related to the prediction of systemic diseases from multimodal ocular images, including cardiovascular diseases, dementia, chronic kidney diseases, and anemia. Finally, we also discuss the current predicaments and future directions of these applications.
Affiliation(s)
- Huimin Li
- Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China; (H.L.); (J.C.); (K.J.)
- Jing Cao
- Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China; (H.L.); (J.C.); (K.J.)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, 60-836 Poznan, Poland
- Kai Jin
- Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China; (H.L.); (J.C.); (K.J.)
- Lixia Lou
- Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China; (H.L.); (J.C.); (K.J.)
- Juan Ye
- Eye Center, The Second Affiliated Hospital School of Medicine Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou 310009, China; (H.L.); (J.C.); (K.J.)
11
|
Lin ZW, Dai WL, Lai QQ, Wu H. Deep learning-based computed tomography applied to the diagnosis of rib fractures. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2023. [DOI: 10.1016/j.jrras.2023.100558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/16/2023]
|
12
|
Chang X, Wang J, Zhang G, Yang M, Xi Y, Xi C, Chen G, Nie X, Meng B, Quan X. Predicting colorectal cancer microsatellite instability with a self-attention-enabled convolutional neural network. Cell Rep Med 2023; 4:100914. [PMID: 36720223 PMCID: PMC9975100 DOI: 10.1016/j.xcrm.2022.100914] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 09/01/2022] [Accepted: 12/29/2022] [Indexed: 01/31/2023]
Abstract
This study develops a method combining a convolutional neural network model, INSIGHT, with a self-attention model, WiseMSI, to predict microsatellite instability (MSI) from whole-slide image tiles of colorectal cancer patients in a multicenter Chinese cohort. After INSIGHT differentiates tumor tiles from normal tissue tiles in a whole-slide image, features of the tumor tiles are extracted with a ResNet model pre-trained on ImageNet. Attention-based pooling is adopted to aggregate tile-level features into a slide-level representation. INSIGHT achieves an area under the curve (AUC) of 0.985 for tumor patch classification. The Spearman correlation coefficient between the tumor cell fractions given by an expert pathologist and by INSIGHT is 0.7909. WiseMSI achieves a specificity of 94.7% (95% confidence interval [CI] 93.7%-95.7%), a sensitivity of 84.7% (95% CI 82.6%-86.9%), and an AUC of 0.954 (95% CI 0.948-0.960). Comparative analysis shows that this method outperforms five other classic deep learning methods.
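The attention-based pooling step described above can be sketched in a few lines: each tile's feature vector receives a scalar score, the scores are softmax-normalized into weights, and the slide-level representation is the weighted sum of tile features. This is a minimal, hypothetical illustration (the scoring function, names, and dimensions are assumptions; the paper's model learns its attention parameters end to end):

```python
import math

def attention_pool(tile_features, w_attn):
    """Aggregate tile-level feature vectors into one slide-level vector.

    Each tile gets a scalar score from a (here fixed) vector w_attn; a
    softmax over tiles turns scores into weights; the slide vector is the
    weighted sum of tile features.
    """
    scores = [sum(wi * fi for wi, fi in zip(w_attn, f)) for f in tile_features]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(tile_features[0])
    slide_vec = [sum(w * f[d] for w, f in zip(weights, tile_features))
                 for d in range(dim)]
    return slide_vec, weights

# Three tiles with 2-D features; the second tile aligns best with w_attn,
# so it should dominate the slide-level representation.
feats = [[1.0, 0.0], [5.0, 5.0], [0.0, 1.0]]
vec, w = attention_pool(feats, w_attn=[1.0, 1.0])
```

In practice the attention weights themselves are often inspected as a form of interpretability, highlighting which tiles drove the slide-level prediction.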
Collapse
Affiliation(s)
- Xiaona Chang
- Department of Pathology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
| | - Jianchao Wang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
| | - Guanjun Zhang
- Department of Pathology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
| | - Ming Yang
- Department of Pathology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
| | - Yanfeng Xi
- Department of Pathology, Shanxi Provincial Cancer Hospital, Taiyuan 030013, China
| | | | - Gang Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, China
| | - Xiu Nie
- Department of Pathology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China.
| | - Bin Meng
- Department of Pathology, National Clinical Research Center of Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China.
| | | |
Collapse
|
13
|
Li S, Zhou Z, Wu S, Wu W. A Review of Quantitative Ultrasound-Based Approaches to Thermometry and Ablation Zone Identification Over the Past Decade. ULTRASONIC IMAGING 2022; 44:213-228. [PMID: 35993226 DOI: 10.1177/01617346221120069] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Percutaneous thermal therapy is an important clinical treatment for some solid tumors. Effective image visualization techniques are critical for monitoring the therapy in real time, because precise control of the therapeutic zone directly affects the prognosis of tumor treatment. Ultrasound is used for thermal therapy monitoring because it is real-time, non-invasive, non-ionizing, and low-cost. This paper presents a review of nine quantitative ultrasound-based methods for thermal therapy monitoring and their advances over the past decade (since 2011). These methods were analyzed and compared with respect to two applications: ultrasonic thermometry and ablation zone identification. The advantages and limitations of these methods were compared and discussed, and future developments were suggested.
Collapse
Affiliation(s)
- Sinan Li
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
| | - Zhuhuang Zhou
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
| | - Shuicai Wu
- Department of Biomedical Engineering, Faculty of Environment and Life, Beijing University of Technology, Beijing, China
| | - Weiwei Wu
- College of Biomedical Engineering, Capital Medical University, Beijing, China
| |
Collapse
|
14
|
Huang X, Li Z, Zhang M, Gao S. Fusing hand-crafted and deep-learning features in a convolutional neural network model to identify prostate cancer in pathology images. Front Oncol 2022; 12:994950. [PMID: 36237311 PMCID: PMC9552083 DOI: 10.3389/fonc.2022.994950] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 09/09/2022] [Indexed: 11/13/2022] Open
Abstract
Prostate cancer can be diagnosed by prostate biopsy under transrectal ultrasound guidance. The high number of pathology images from biopsy tissues is a burden on pathologists, and analysis is subjective and susceptible to inter-rater variability. The use of machine learning techniques could make prostate histopathology diagnostics more precise, consistent, and efficient overall. This paper presents a new classification fusion network model created by fusing eight advanced image features: seven hand-crafted features and one deep-learning feature. These features are the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST (features from accelerated segment test) and rotated BRIEF (binary robust independent elementary features) (ORB) local features, shape and texture features of the cell nuclei, the histogram of oriented gradients (HOG) feature of the cavities, a color feature, and a convolutional deep-learning feature. Matching, integrated, and fusion networks are the three essential components of the proposed deep-learning network; the integrated network consists of both a backbone and an additional network. When classifying 1100 prostate pathology images using this fusion network with different backbones (ResNet-18/50, VGG-11/16, and DenseNet-121/201), the proposed model with the ResNet-18 backbone achieved the best performance in terms of accuracy (95.54%), specificity (93.64%), and sensitivity (97.27%), as well as the area under the receiver operating characteristic curve (98.34%). However, each of these features assessed separately scored below 90% on every criterion, which demonstrates that the proposed model combines differently derived characteristics effectively. Moreover, a Grad-CAM++ heatmap was used to observe the differences between the proposed model and ResNet-18 in terms of the regions of interest.
This map showed that the proposed model focused on cancerous cells better than ResNet-18 did. Hence, the proposed classification fusion network, which combines hand-crafted features with a deep-learning feature, is useful for computer-aided diagnosis based on pathology images of prostate cancer. Because feature engineering and deep learning are similar across different types of pathology images, the proposed method could also be applied to other pathology images, such as those of breast or thyroid cancer.
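A common way to combine hand-crafted descriptors with a learned embedding, as this entry describes, is late fusion: normalize each per-image feature vector and concatenate them before classification. The sketch below is a generic simplification under that assumption (the paper's fusion network is more elaborate, and the feature names here are illustrative):

```python
def l2_normalize(v):
    """Scale a feature vector to unit L2 norm so no descriptor dominates."""
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v] if n > 0 else list(v)

def fuse_features(feature_sets):
    """Concatenate several per-image feature vectors (e.g. a shape
    descriptor, a color histogram, a CNN embedding) after normalizing
    each one. The result feeds a downstream classifier."""
    fused = []
    for v in feature_sets:
        fused.extend(l2_normalize(v))
    return fused

hand_crafted = [[3.0, 4.0], [1.0, 0.0, 0.0]]   # e.g. shape + color features
deep = [[0.5, 0.5, 0.5, 0.5]]                  # e.g. a ResNet-style embedding
x = fuse_features(hand_crafted + deep)
```

Per-descriptor normalization is the key design choice: raw SIFT histograms and CNN activations live on very different scales, and without it the larger-magnitude block would dominate the fused representation.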
Collapse
Affiliation(s)
- Xinrui Huang
- Department of Biochemistry and Biophysics, School of Basic Medical Sciences, Peking University, Beijing, China
| | - Zhaotong Li
- Institute of Medical Technology, Health Science Center, Peking University, Beijing, China
- *Correspondence: Zhaotong Li; Song Gao

| | - Minghui Zhang
- Department of Pathology, Guangdong Provincial People’s Hospital, Guangzhou, China
| | - Song Gao
- Institute of Medical Technology, Health Science Center, Peking University, Beijing, China
- *Correspondence: Zhaotong Li; Song Gao
| |
Collapse
|
15
|
Lakkshmanan A, Ananth CA, Tiroumalmouroughane S. Multi-objective metaheuristics with intelligent deep learning model for pancreatic tumor diagnosis. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-221171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Pancreatic cancer is among the deadliest diseases, and early identification is needed to reduce its mortality rate. With this motivation, this study introduces a Multi-Objective Metaheuristics with Intelligent Deep Learning Model for Pancreatic Tumor Diagnosis (MOM-IDL). The proposed MOM-IDL technique uses an adaptive Wiener filter for pre-processing to enhance image quality and remove noise. In addition, multi-level thresholding-based segmentation using Kapur's entropy is employed, with the threshold values chosen optimally by the barnacles mating optimizer (BMO). A densely connected network (DenseNet-169) serves as the feature extractor and a fuzzy support vector machine (FSVM) as the classifier. To improve classification performance, the BMO technique is also used to fine-tune the parameters of the FSVM model. The design of the multi-objective BMO (MOBMO) algorithm for the threshold selection and parameter optimization processes constitutes the novelty of the work. A wide range of simulations on a benchmark dataset highlighted the enhanced performance of the MOM-IDL technique over recent state-of-the-art techniques.
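Kapur's entropy criterion mentioned above is concrete enough to sketch: for a candidate threshold, normalize the image histogram, split it into below- and above-threshold classes, and pick the threshold that maximizes the sum of the two class entropies. The single-threshold version below is a simplification (the paper optimizes several thresholds at once with BMO rather than by exhaustive search):

```python
import math

def kapur_threshold(hist):
    """Return the bin index that maximizes Kapur's entropy for a
    two-class split of a gray-level histogram (single-threshold case)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(p)):
        w0 = sum(p[:t])            # probability mass below the threshold
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        # Entropy of each class distribution, renormalized by its mass.
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_t, best_h = t, h0 + h1
    return best_t

# Bimodal 8-bin histogram: a dark peak (bins 0-1) and a bright peak (bins 6-7);
# a sensible threshold falls in the valley between them.
hist = [40, 30, 2, 1, 1, 2, 30, 40]
t = kapur_threshold(hist)
```

For multi-level thresholding the search space grows combinatorially, which is exactly why metaheuristics such as BMO are brought in to choose the threshold vector.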
Collapse
Affiliation(s)
| | - C. Anbu Ananth
- Department of CSE, FEAT, Annamalai University, Chidamabaram, Tamilnadu, India
| | - S. Tiroumalmouroughane
- Department of IT, Perunthalaivar Kamarajar Institute of Engineering and Technology, Karaikal, Tamilnadu, India
| |
Collapse
|
16
|
Abstract
Thanks to the proliferation of the Internet of Things (IoT), pervasive healthcare is gaining popularity day by day, as it offers health support to patients irrespective of their location; in emergency medical situations, medical aid can be sent quickly. Though not yet standardized, this research direction, the healthcare Internet of Things (H-IoT), attracts the attention of the research community, both academia and industry. In this article, we conduct a comprehensive survey of pervasive-computing H-IoT. We cover its wide range of applications and provide a broad vision of the key components, their roles, and their connections in the big picture. We classify the large body of publications into categories such as sensors, communication, artificial intelligence, infrastructure, and security. Intensively covering 118 research works, we survey (1) applications, (2) key components, their roles and connections, and (3) the challenges. Our survey also discusses potential solutions to overcome the challenges in this research field.
Collapse
|
17
|
Retinal Glaucoma Public Datasets: What Do We Have and What Is Missing? J Clin Med 2022; 11:jcm11133850. [PMID: 35807135 PMCID: PMC9267177 DOI: 10.3390/jcm11133850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 06/29/2022] [Accepted: 06/30/2022] [Indexed: 11/16/2022] Open
Abstract
Public databases for glaucoma studies contain color images of the retina, emphasizing the optic papilla. These databases are intended for research and for standardized automated methodologies such as deep learning techniques, which are used to solve complex problems in medical imaging, particularly the automated screening of glaucomatous disease. The development of deep learning techniques has demonstrated potential for implementing large-scale glaucoma screening protocols in the population, reducing diagnostic doubt among specialists and enabling early treatment to delay the onset of blindness. However, the images are obtained by different cameras, in distinct locations, from various population groups, and centered on different parts of the retina. Further limitations include the small amount of data and the lack of segmentation of the optic papilla and cup. This work offers contributions to the structure and presentation of public databases used in the automated screening of glaucomatous papillae, adding relevant information from a medical point of view. The gold-standard public databases present images with disc and cup segmentations made by experts and a division between training and test groups, serving as a reference for use in deep learning architectures. However, the data offered are not interchangeable: the quality and presentation of the images are heterogeneous. Moreover, the databases use different criteria for the binary classification of glaucoma status, do not offer simultaneous images of both eyes, and do not contain elements for early diagnosis.
Collapse
|
18
|
Betzler BK, Rim TH, Sabanayagam C, Cheng CY. Artificial Intelligence in Predicting Systemic Parameters and Diseases From Ophthalmic Imaging. Front Digit Health 2022; 4:889445. [PMID: 35706971 PMCID: PMC9190759 DOI: 10.3389/fdgth.2022.889445] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/06/2022] [Indexed: 12/14/2022] Open
Abstract
Artificial Intelligence (AI) analytics has been used to predict, classify, and aid clinical management of multiple eye diseases. Its robust performances have prompted researchers to expand the use of AI into predicting systemic, non-ocular diseases and parameters based on ocular images. Herein, we discuss the reasons why the eye is well-suited for systemic applications, and review the applications of deep learning on ophthalmic images in the prediction of demographic parameters, body composition factors, and diseases of the cardiovascular, hematological, neurodegenerative, metabolic, renal, and hepatobiliary systems. Three main imaging modalities are included—retinal fundus photographs, optical coherence tomographs and external ophthalmic images. We examine the range of systemic factors studied from ophthalmic imaging in current literature and discuss areas of future research, while acknowledging current limitations of AI systems based on ophthalmic images.
Collapse
Affiliation(s)
- Bjorn Kaijun Betzler
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
| | - Tyler Hyungtaek Rim
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Charumathi Sabanayagam
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| | - Ching-Yu Cheng
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Ophthalmology and Visual Sciences Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore, Singapore
| |
Collapse
|
19
|
Trends in using IoT with machine learning in smart health assessment. Int J Health Sci (Qassim) 2022. [DOI: 10.53730/ijhs.v6ns3.6404] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The Internet of Things (IoT) provides a rich source of information that can be uncovered using machine learning (ML). These hybrid technologies have aided decision-making processes in several industries, such as education, security, business, and healthcare. ML enhances the IoT for optimal prediction and recommendation systems. In the healthcare industry, IoT and ML are already being used to manage medical records, diagnose diseases, and monitor patients. Different datasets need different ML algorithms to perform well, and inconsistent prediction results can affect the overall findings; in clinical decision-making, the variability of prediction outcomes is a major consideration. To effectively utilise IoT data in healthcare, it is critical to have a firm grasp of the various machine learning techniques in use. This article highlights the classification and prediction algorithms that have been employed in the healthcare industry. Its purpose is to provide readers with an in-depth look at current machine learning algorithms and how they apply to IoT medical data.
Collapse
|
20
|
Muralidharan N, Gupta S, Prusty MR, Tripathy RK. Detection of COVID19 from X-ray images using multiscale Deep Convolutional Neural Network. Appl Soft Comput 2022; 119:108610. [PMID: 35185439 PMCID: PMC8842414 DOI: 10.1016/j.asoc.2022.108610] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 11/09/2021] [Accepted: 02/05/2022] [Indexed: 12/17/2022]
Abstract
The Coronavirus disease 2019 (COVID19) pandemic has led to a dramatic loss of human life worldwide and caused a tremendous challenge to public health. Immediate detection and diagnosis of COVID19 have lifesaving importance for both patients and doctors. Although the availability of COVID19 tests has increased significantly in many countries, laboratory test kits remain in limited supply. Additionally, the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) test for the diagnosis of COVID19 is costly and time-consuming. X-ray imaging is widely used for the diagnosis of COVID19, but detection based on the manual investigation of X-ray images is a tedious process. Therefore, computer-aided diagnosis (CAD) systems are needed for the automated detection of COVID19 disease. This paper proposes a novel approach for the automated detection of COVID19 using chest X-ray images. The Fixed Boundary-based Two-Dimensional Empirical Wavelet Transform (FB2DEWT) is used to extract modes from the X-ray images; in our study, a single X-ray image is decomposed into seven modes. The evaluated modes are used as input to a multiscale deep Convolutional Neural Network (CNN) to classify X-ray images into no-finding, pneumonia, and COVID19 classes. The proposed deep learning model is evaluated using X-ray images from two different publicly available databases, where database A consists of 1225 images and database B consists of 9000 images. The results show that the proposed approach obtained maximum accuracies of 96% and 100% for the multiclass and binary classification schemes using X-ray images from dataset A with a 5-fold cross-validation (CV) strategy. For dataset B, accuracies of 97.17% and 96.06% are achieved using the multiscale deep CNN for the multiclass and binary classification schemes with 5-fold CV.
The proposed multiscale deep learning model has demonstrated a higher classification performance than the existing approaches for detecting COVID19 using X-ray images.
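The 5-fold cross-validation protocol used to report these accuracies is worth making explicit: the dataset is split into five folds, each fold serves once as the test set while the rest train the model, and the five test accuracies are averaged. A minimal sketch, with a trivial majority-class "model" standing in for the paper's CNN (all names here are illustrative):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds (sizes differ by at most 1)."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds

def cross_val_accuracy(labels, predict, k=5):
    """Mean accuracy over k folds; `predict` maps (train_idx, test_idx)
    to predicted labels for test_idx."""
    folds = kfold_indices(len(labels), k)
    accs = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        preds = predict(train_idx, test_idx)
        correct = sum(1 for j, p in zip(test_idx, preds) if labels[j] == p)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

# Toy stand-in model: always predict the most common training label.
labels = [0] * 70 + [1] * 30
majority = lambda tr, te: [max(set(labels[j] for j in tr),
                               key=[labels[j] for j in tr].count)] * len(te)
acc = cross_val_accuracy(labels, majority, k=5)
```

In real evaluations the folds are usually shuffled and stratified by class so each fold preserves the label balance; the contiguous split above is kept only for readability.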
Collapse
|
21
|
EDNC: Ensemble Deep Neural Network for COVID-19 Recognition. Tomography 2022; 8:869-890. [PMID: 35314648 PMCID: PMC8938826 DOI: 10.3390/tomography8020071] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/15/2022] [Accepted: 03/16/2022] [Indexed: 12/24/2022] Open
Abstract
The automatic recognition of COVID-19 diseases is critical in the present pandemic since it relieves healthcare staff of the burden of screening for infection with COVID-19. Previous studies have proven that deep learning algorithms can be utilized to aid in the diagnosis of patients with potential COVID-19 infection. However, the accuracy of current COVID-19 recognition models is relatively low. Motivated by this fact, we propose three deep learning architectures, F-EDNC, FC-EDNC, and O-EDNC, to quickly and accurately detect COVID-19 infections from chest computed tomography (CT) images. Sixteen deep learning neural networks have been modified and trained to recognize COVID-19 patients using transfer learning and 2458 CT chest images. The proposed EDNC has then been developed using three of sixteen modified pre-trained models to improve the performance of COVID-19 recognition. The results suggested that the F-EDNC method significantly enhanced the recognition of COVID-19 infections with 97.75% accuracy, followed by FC-EDNC and O-EDNC (97.55% and 96.12%, respectively), which is superior to most of the current COVID-19 recognition models. Furthermore, a localhost web application has been built that enables users to easily upload their chest CT scans and obtain their COVID-19 results automatically. This accurate, fast, and automatic COVID-19 recognition system will relieve the stress of medical professionals for screening COVID-19 infections.
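The ensembling idea behind EDNC can be illustrated in its simplest form: run several member networks, average their softmax probabilities, and take the arg-max class. This is a generic sketch, not the paper's exact fusion architecture (its F-EDNC/FC-EDNC/O-EDNC variants combine members differently):

```python
import math

def softmax(z):
    """Convert raw logits to probabilities (max subtracted for stability)."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def ensemble_predict(logit_sets):
    """Average the softmax outputs of several member networks and return
    (arg-max class, averaged probability vector)."""
    probs = [softmax(z) for z in logit_sets]
    n_cls = len(probs[0])
    avg = [sum(p[c] for p in probs) / len(probs) for c in range(n_cls)]
    return max(range(n_cls), key=lambda c: avg[c]), avg

# Two members lean toward class 1 (COVID-19), one toward class 0 (non-COVID).
pred, avg = ensemble_predict([[0.2, 2.0], [0.1, 1.5], [1.0, 0.3]])
```

Averaging probabilities rather than hard votes lets a confident member outweigh two lukewarm ones, which is often why ensembles beat their best individual member.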
Collapse
|
22
|
Sun J, Yuan X. Application of Artificial Intelligence Nuclear Medicine Automated Images Based on Deep Learning in Tumor Diagnosis. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:7247549. [PMID: 35140903 PMCID: PMC8820925 DOI: 10.1155/2022/7247549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 01/10/2022] [Indexed: 11/17/2022]
Abstract
Accurate delineation of normal tissues, organs, and tumor lesions makes research on fully automatic, deep learning-based multimodal medical image segmentation especially meaningful. This article studies the application of deep learning-based artificial intelligence to automated nuclear medicine images for tumor diagnosis. It examines ways to improve segmentation accuracy from the perspectives of boundary recognition and adaptability to variable shapes, studies an active contour model based on boundary constraints, and proposes a superpixel boundary-aware convolutional network to realize an automatic CT segmentation algorithm, so that tumor images can be segmented more accurately. The experimental results show that the improved algorithm is more robust than the traditional CT algorithm, with accuracy and sensitivity higher by about 12% and the negative prediction rate slightly higher, by about 3%. In the comparison of segmentations of malignant tumors, the segmentation quality of the proposed algorithm is about 34% higher than that of the traditional algorithm.
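The metrics this entry reports (accuracy, sensitivity, negative prediction rate) all derive from the same pixel-level confusion counts. A small helper makes the definitions unambiguous (the counts below are invented for illustration, not taken from the study):

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion counts:
    tp/fp/tn/fn = true/false positives and negatives (e.g. tumor pixels)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),        # recall of the tumor class
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),                # negative predictive value
    }

# Hypothetical pixel counts for one segmented slice.
m = segmentation_metrics(tp=80, fp=10, tn=95, fn=15)
```

Reporting sensitivity and NPV alongside accuracy matters for tumor segmentation, where the positive (tumor) class is usually a small fraction of the image and plain accuracy can look deceptively high.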
Collapse
Affiliation(s)
- Jian Sun
- Health Management Center, Second Affiliated Hospital of Dalian Medical University, Dalian 116000, China
| | - Xin Yuan
- Nuclear Medicine Department, Second Affiliated Hospital of Dalian Medical University, Dalian 116000, China
| |
Collapse
|
23
|
Ashraf M, Robles WRQ, Kim M, Ko YS, Yi MY. A loss-based patch label denoising method for improving whole-slide image analysis using a convolutional neural network. Sci Rep 2022; 12:1392. [PMID: 35082315 PMCID: PMC8791954 DOI: 10.1038/s41598-022-05001-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Accepted: 01/05/2022] [Indexed: 12/24/2022] Open
Abstract
This paper proposes a deep learning-based patch label denoising method (LossDiff) for improving the classification of whole-slide images of cancer using a convolutional neural network (CNN). Automated whole-slide image classification is often challenging, requiring a large amount of labeled data. Pathologists annotate the region of interest by marking malignant areas, which pose a high risk of introducing patch-based label noise by involving benign regions that are typically small in size within the malignant annotations, resulting in low classification accuracy with many Type-II errors. To overcome this critical problem, this paper presents a simple yet effective method for noisy patch classification. The proposed method, validated using stomach cancer images, provides a significant improvement compared to other existing methods in patch-based cancer classification, with accuracies of 98.81%, 97.30% and 89.47% for binary, ternary, and quaternary classes, respectively. Moreover, we conduct several experiments at different noise levels using a publicly available dataset to further demonstrate the robustness of the proposed method. Given the high cost of producing explicit annotations for whole-slide images and the unavoidable error-prone nature of the human annotation of medical images, the proposed method has practical implications for whole-slide image annotation and automated cancer diagnosis.
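The core intuition of loss-based label denoising is that mislabeled patches tend to incur unusually high training loss: the model is confidently contradicted by the (wrong) annotation. The sketch below flags patches whose cross-entropy loss sits far above the batch mean; this is a generic heuristic in the spirit of LossDiff, not the paper's exact criterion (the threshold rule and names are assumptions):

```python
import math

def cross_entropy(p_true):
    """Loss of a patch given the model's probability for its annotated label."""
    return -math.log(max(p_true, 1e-12))

def filter_noisy_patches(probs_for_label, z=1.0):
    """Keep patches whose loss is within mean + z * std of the batch losses;
    patches beyond that are treated as likely label noise."""
    losses = [cross_entropy(p) for p in probs_for_label]
    mu = sum(losses) / len(losses)
    sd = (sum((l - mu) ** 2 for l in losses) / len(losses)) ** 0.5
    keep = [i for i, l in enumerate(losses) if l <= mu + z * sd]
    return keep, losses

# Patch 3 is confidently contradicted by the model: a likely label error.
probs = [0.9, 0.8, 0.85, 0.01, 0.88]
keep, losses = filter_noisy_patches(probs)
```

In practice such filtering is applied iteratively during training, since the model's loss estimates only become informative once it has partially fit the clean majority of the data.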
Collapse
|
24
|
Wang T, Chen Z, Shang Q, Ma C, Chen X, Xiao E. A Promising and Challenging Approach: Radiologists' Perspective on Deep Learning and Artificial Intelligence for Fighting COVID-19. Diagnostics (Basel) 2021; 11:diagnostics11101924. [PMID: 34679622 PMCID: PMC8534829 DOI: 10.3390/diagnostics11101924] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 10/10/2021] [Accepted: 10/14/2021] [Indexed: 12/23/2022] Open
Abstract
Chest X-rays (CXR) and computed tomography (CT) are the main medical imaging modalities used against the increased worldwide spread of the 2019 coronavirus disease (COVID-19) epidemic. Machine learning (ML) and artificial intelligence (AI) technologies, which fully extract and utilize the hidden information in massive medical imaging data, have been used in COVID-19 research for disease diagnosis and classification, treatment decision-making, efficacy evaluation, and prognosis prediction. This review article describes the extensive research on medical image-based ML and AI methods for preventing and controlling COVID-19, and summarizes their characteristics, differences, and significance in terms of application direction, image collection, and algorithm improvement, from the perspective of radiologists. The limitations and challenges faced by these systems and technologies, such as generalization and robustness, are discussed to indicate future research directions.
Collapse
Affiliation(s)
- Tianming Wang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
- Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, China
| | - Zhu Chen
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
| | - Quanliang Shang
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
| | - Cong Ma
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
| | - Xiangyu Chen
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
| | - Enhua Xiao
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China; (T.W.); (Z.C.); (Q.S.); (C.M.); (X.C.)
- Molecular Imaging Research Center, Central South University, Changsha 410008, China
- Correspondence:
| |
Collapse
|
25
|
Krygier MC, LaBonte T, Martinez C, Norris C, Sharma K, Collins LN, Mukherjee PP, Roberts SA. Quantifying the unknown impact of segmentation uncertainty on image-based simulations. Nat Commun 2021; 12:5414. [PMID: 34521853 PMCID: PMC8440761 DOI: 10.1038/s41467-021-25493-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 08/02/2021] [Indexed: 01/31/2023] Open
Abstract
Image-based simulation, the use of 3D images to calculate physical quantities, relies on image segmentation for geometry creation. However, this process introduces image segmentation uncertainty because different segmentation tools (both manual and machine-learning-based) will each produce a unique and valid segmentation. First, we demonstrate that these variations propagate into the physics simulations, compromising the resulting physics quantities. Second, we propose a general framework for rapidly quantifying segmentation uncertainty. Through the creation and sampling of segmentation uncertainty probability maps, we systematically and objectively create uncertainty distributions of the physics quantities. We show that physics quantity uncertainty distributions can follow a Normal distribution, but, in more complicated physics simulations, the resulting uncertainty distribution can be surprisingly nontrivial. We establish that bounding segmentation uncertainty can fail in these nontrivial situations. While our work does not eliminate segmentation uncertainty, it improves simulation credibility by making visible the previously unrecognized segmentation uncertainty plaguing image-based simulation.
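The sampling framework described above can be sketched as a small Monte-Carlo loop: draw many binary segmentations from a per-pixel probability map, evaluate the derived quantity on each, and study the resulting distribution. The quantity below (area fraction) is a deliberately simple stand-in for the paper's physics simulations:

```python
import random

def sample_quantity(prob_map, derived, n_samples=200, seed=0):
    """Draw binary segmentations from a per-pixel probability map and
    collect the distribution of a derived quantity over the samples."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_samples):
        mask = [[1 if rng.random() < p else 0 for p in row] for row in prob_map]
        out.append(derived(mask))
    return out

def area_fraction(mask):
    """Fraction of pixels segmented as foreground: a toy 'physics' quantity."""
    flat = [v for row in mask for v in row]
    return sum(flat) / len(flat)

# 2x2 probability map: two certain pixels, two maximally uncertain ones.
pmap = [[1.0, 0.0], [0.5, 0.5]]
samples = sample_quantity(pmap, area_fraction)
mean = sum(samples) / len(samples)
```

The paper's point is visible even in this toy: the spread of `samples`, not just its mean, is the quantity of interest, and for nonlinear downstream simulations that spread need not be normal.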
Collapse
Affiliation(s)
- Michael C Krygier
- Engineering Sciences Center, Sandia National Laboratories, Albuquerque, NM, USA
| | - Tyler LaBonte
- Applied Machine Intelligence and Application Engineering, Sandia National Laboratories, Albuquerque, NM, USA
- Machine Learning Center, Georgia Institute of Technology, Atlanta, GA, USA
| | - Carianne Martinez
- Applied Machine Intelligence and Application Engineering, Sandia National Laboratories, Albuquerque, NM, USA
| | - Chance Norris
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
| | - Krish Sharma
- Applied Machine Intelligence and Application Engineering, Sandia National Laboratories, Albuquerque, NM, USA
| | - Lincoln N Collins
- Engineering Sciences Center, Sandia National Laboratories, Albuquerque, NM, USA
| | - Partha P Mukherjee
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
| | - Scott A Roberts
- Engineering Sciences Center, Sandia National Laboratories, Albuquerque, NM, USA.
| |
Collapse
|
26
|
Shanthi S, Aruljyothi L, Balasundaram MB, Janakiraman A, Nirmaladevi K, Pyingkodi M. Artificial intelligence applications in different imaging modalities for corneal topography. Surv Ophthalmol 2021; 67:801-816. [PMID: 34450134 DOI: 10.1016/j.survophthal.2021.08.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 08/13/2021] [Accepted: 08/16/2021] [Indexed: 12/26/2022]
Abstract
Interpretation of topographical maps used to detect corneal ectasias requires a high level of expertise. Several artificial intelligence (AI) technologies have attempted to interpret topographic maps. The purpose of this study is to provide a review of AI algorithms in corneal topography from the perspectives of an eye care professional, a biomedical engineer, and a data scientist. A systematic literature review using Web of Science, PubMed, and Google Scholar was performed for 2010 to 2020 on themes regarding imaging modalities and their parameters, purpose, and conclusions, as well as the samples and performance of AI in corneal topography. We provide a comprehensive summary of advances in corneal imaging and its applications in AI. Combined metrics from the Dual Scheimpflug and Placido device could be a good starting point for trying AI models in corneal imaging systems. The area under the receiver operating characteristic curve for AI in keratoconus detection and classification ranged from 0.87 to 1, sensitivity from 0.89 to 1, and specificity from 0.82 to 1. A combination of different types of AI applications to corneal ectasia diagnosis is recommended.
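The headline metrics in this review (AUC, sensitivity, specificity) can be computed from first principles. Below is a minimal, self-contained sketch using the rank-based (Mann-Whitney) definition of AUC; the label and score arrays are invented for illustration, not data from any reviewed study:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney formulation: the probability that a
    random positive case scores higher than a random negative case."""
    labels = np.asarray(labels); scores = np.asarray(scores)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # pairwise comparisons
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    labels = np.asarray(labels); pred = np.asarray(scores) >= threshold
    sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
    spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
    return sens, spec

y = np.array([0, 0, 0, 1, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.65, 0.9])
print(roc_auc(y, s))          # 1.0: perfect separation in this toy case
print(sens_spec(y, s, 0.5))   # (1.0, 1.0)
```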
Affiliation(s)
- S Shanthi: Kongu Engineering College, Erode, Tamil Nadu, India
- M Pyingkodi: Kongu Engineering College, Erode, Tamil Nadu, India

27
Zhang W, Yin H, Huang Z, Zhao J, Zheng H, He D, Li M, Tan W, Tian S, Song B. Development and validation of MRI-based deep learning models for prediction of microsatellite instability in rectal cancer. Cancer Med 2021; 10:4164-4173. [PMID: 33963688 PMCID: PMC8209621 DOI: 10.1002/cam4.3957] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Received: 08/17/2020] [Revised: 04/14/2021] [Accepted: 04/15/2021] [Indexed: 02/05/2023] Open
Abstract
BACKGROUND Microsatellite instability (MSI) predetermines responses to adjuvant 5-fluorouracil and immunotherapy in rectal cancer and serves as a prognostic biomarker for clinical outcomes. Our objective was to develop and validate a deep learning model that could preoperatively predict the MSI status of rectal cancer based on magnetic resonance images. METHODS This single-center retrospective study included 491 rectal cancer patients with pathologically proven microsatellite status. Patients were randomly divided into the training/validation cohort (n = 395) and the testing cohort (n = 96). A clinical model using logistic regression was constructed to discriminate MSI status using only clinical factors. Based on a modified MobileNetV2 architecture, deep learning models were tested for the predictive ability of MSI status from magnetic resonance images, with or without integrating clinical factors. RESULTS The clinical model correctly classified 37.5% of MSI status in the testing cohort, with an AUC value of 0.573 (95% confidence interval [CI], 0.468 ~ 0.674). The pure imaging-based model and the combined model correctly classified 75.0% and 85.4% of MSI status in the testing cohort, with AUC values of 0.820 (95% CI, 0.718 ~ 0.884) and 0.868 (95% CI, 0.784 ~ 0.929), respectively. Both deep learning models performed better than the clinical model (p < 0.05). There was no statistically significant difference between the deep learning models with or without integrating clinical factors. CONCLUSIONS Deep learning based on high-resolution T2-weighted magnetic resonance images showed a good predictive performance for MSI status in rectal cancer patients. The proposed model may help to identify patients who would benefit from chemotherapy or immunotherapy and determine individualized therapeutic strategies for these patients.
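The "combined model" idea, fusing an image-derived score with clinical factors, can be sketched without any deep-learning framework. In this conceptual stand-in (not the authors' MobileNetV2 pipeline), the CNN output is faked as a noisy class signal and the only real computation is a logistic-regression fusion of the two feature streams:

```python
import numpy as np

def train_logreg(X, y, lr=0.5, n_iter=2000):
    """Plain logistic regression trained by gradient descent (bias included)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(1)
n = 400
y = rng.integers(0, 2, n)
img_score = y + 0.8 * rng.normal(size=n)   # stand-in for a CNN output
clinical = y + 1.5 * rng.normal(size=n)    # weaker "clinical factor" signal
fused = np.column_stack([img_score, clinical])   # late fusion by concatenation
w = train_logreg(fused, y)
acc = ((predict(fused, w) > 0.5) == y).mean()
print(acc)  # fused features separate the synthetic classes reasonably well
```

Concatenating the two feature streams before a final classifier is one common fusion design; whether clinical covariates add anything over imaging alone is exactly the comparison the abstract reports.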
Affiliation(s)
- Wei Zhang: Department of Radiology, West China Hospital, Sichuan University, Chengdu, China; Department of Radiology, Sichuan Provincial Corps Hospital, Chinese People's Armed Police Forces, Leshan, China
- Hongkun Yin: Institute of Advanced Research, InferVision, Beijing, China
- Zixing Huang: Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Jian Zhao: Department of Radiology, West China Hospital, Sichuan University, Chengdu, China; Department of Radiology, Sichuan Provincial Corps Hospital, Chinese People's Armed Police Forces, Leshan, China
- Haoyu Zheng: Department of Radiology, Sichuan Provincial Corps Hospital, Chinese People's Armed Police Forces, Leshan, China
- Du He: Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Mou Li: Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Weixiong Tan: Institute of Advanced Research, InferVision, Beijing, China
- Song Tian: Institute of Advanced Research, InferVision, Beijing, China
- Bin Song: Department of Radiology, West China Hospital, Sichuan University, Chengdu, China

28
Liao AH, Chen JR, Liu SH, Lu CH, Lin CW, Shieh JY, Weng WC, Tsui PH. Deep Learning of Ultrasound Imaging for Evaluating Ambulatory Function of Individuals with Duchenne Muscular Dystrophy. Diagnostics (Basel) 2021; 11:963. [PMID: 34071811 PMCID: PMC8228495 DOI: 10.3390/diagnostics11060963] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 05/04/2021] [Revised: 05/25/2021] [Accepted: 05/26/2021] [Indexed: 11/16/2022] Open
Abstract
Duchenne muscular dystrophy (DMD) results in loss of ambulation and premature death. Ultrasound provides real-time, safe, and cost-effective routine examinations. Deep learning allows the automatic generation of useful features for classification. This study utilized deep learning of ultrasound imaging for classifying patients with DMD based on their ambulatory function. A total of 85 individuals (including ambulatory and nonambulatory subjects) underwent ultrasound examinations of the gastrocnemius for deep learning of image data using LeNet, AlexNet, VGG-16, VGG-16TL, VGG-19, and VGG-19TL models (the notation TL indicates fine-tuned pretrained models). Gradient-weighted class activation mapping (Grad-CAM) was used to visualize features recognized by the models. The classification performance was evaluated using the confusion matrix and receiver operating characteristic (ROC) curve analysis. The results show that each deep learning model enables DMD evaluation from muscle ultrasound imaging. The Grad-CAMs indicated that boundary visibility, muscular texture clarity, and posterior shadowing are relevant sonographic features recognized by the models for evaluating ambulatory function. Of the proposed models, VGG-19 provided satisfactory classification performance (area under the ROC curve: 0.98; accuracy: 94.18%) and feature recognition in terms of physical characteristics. Deep learning of muscle ultrasound is a potential strategy for DMD characterization.
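Grad-CAM itself is a short computation once the activations and gradients of the last convolutional layer are in hand. The arrays below are synthetic stand-ins for those tensors; the function implements the published formula (gradient-weighted sum of feature maps, then ReLU):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM: weight each feature map by the global-average-pooled
    gradient of the class score, sum over channels, then apply ReLU.
    activations, gradients: arrays of shape (channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1]
    return cam

# Synthetic example: channel 0 activates top-left, channel 1 bottom-right;
# the "class gradient" flows mostly through channel 0.
A = np.zeros((2, 4, 4))
A[0, :2, :2] = 1.0
A[1, 2:, 2:] = 1.0
G = np.stack([np.full((4, 4), 0.9), np.full((4, 4), 0.1)])
cam = grad_cam(A, G)
print(cam)  # the heatmap highlights the top-left region
```

In the study above, the same kind of heatmap is what revealed that boundary visibility and posterior shadowing were driving the models' decisions.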
Affiliation(s)
- Ai-Ho Liao: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan; Department of Biomedical Engineering, National Defense Medical Center, Taipei 114201, Taiwan
- Jheng-Ru Chen: Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
- Shi-Hong Liu: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Chun-Hao Lu: Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
- Chia-Wei Lin: Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu 300195, Taiwan
- Jeng-Yi Shieh: Department of Physical Medicine and Rehabilitation, National Taiwan University Hospital, Taipei 100225, Taiwan
- Wen-Chin Weng (corresponding author): Department of Pediatrics, National Taiwan University Hospital, Taipei 100225, Taiwan; Department of Pediatric Neurology, National Taiwan University Children’s Hospital, Taipei 100226, Taiwan; Department of Pediatrics, College of Medicine, National Taiwan University, Taipei 100233, Taiwan
- Po-Hsiang Tsui (corresponding author): Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan; Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital, Linkou, Taoyuan 333323, Taiwan; Division of Pediatric Gastroenterology, Department of Pediatrics, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan

29
Abstract
Machine learning (ML) is a powerful tool that delivers insights hidden in Internet of Things (IoT) data. These hybrid technologies work together to improve decision making in areas such as education, security, business, and the healthcare industry. ML empowers the IoT to demystify hidden patterns in bulk data for optimal prediction and recommendation systems. Healthcare has embraced IoT and ML so that automated systems can maintain medical records, predict disease diagnoses, and, most importantly, conduct real-time monitoring of patients. Individual ML algorithms perform differently on different datasets, and because their predictions vary, the choice of algorithm can affect overall results. This variation in prediction results looms large in the clinical decision-making process. Therefore, it is essential to understand the different ML algorithms used to handle IoT data in the healthcare sector. This article highlights well-known ML algorithms for classification and prediction and demonstrates how they have been used in the healthcare sector. The aim of this paper is to present a comprehensive overview of existing ML approaches and their application to IoT medical data. In a thorough analysis, we observe that different ML prediction algorithms have various shortcomings; depending on the type of IoT dataset, an optimal method must be chosen to predict critical healthcare data. The paper also provides some examples of IoT and machine learning used to predict future healthcare system trends.
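The core observation, that different algorithms score differently on the same dataset, is easy to reproduce on synthetic "sensor" data. This sketch compares two simple classifiers implemented from scratch (nearest centroid vs. k-nearest neighbors); the data and class shift are invented for illustration:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """One mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, X):
    classes = list(model)
    cents = np.stack([model[c] for c in classes])
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return np.array(classes)[d.argmin(axis=1)]

def knn_predict(Xtr, ytr, X, k=3):
    """Majority vote over the k nearest training points."""
    d = ((X[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in ytr[idx]])

# Synthetic "sensor readings": two patient classes with shifted means.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 4)), rng.normal(1.5, 1, (100, 4))])
y = np.repeat([0, 1], 100)
perm = rng.permutation(200)
Xtr, ytr, Xte, yte = X[perm[:150]], y[perm[:150]], X[perm[150:]], y[perm[150:]]

acc_nc = (nearest_centroid_predict(nearest_centroid_fit(Xtr, ytr), Xte) == yte).mean()
acc_knn = (knn_predict(Xtr, ytr, Xte, k=3) == yte).mean()
print(acc_nc, acc_knn)  # held-out accuracies differ between the two algorithms
```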
30
Marcon Y, Bishop T, Avraam D, Escriba-Montagut X, Ryser-Welch P, Wheater S, Burton P, González JR. Orchestrating privacy-protected big data analyses of data from different resources with R and DataSHIELD. PLoS Comput Biol 2021; 17:e1008880. [PMID: 33784300 PMCID: PMC8034722 DOI: 10.1371/journal.pcbi.1008880] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Received: 07/23/2020] [Revised: 04/09/2021] [Accepted: 03/17/2021] [Indexed: 01/31/2023] Open
Abstract
Combined analysis of multiple, large datasets is a common objective in the health and biosciences. Existing methods tend to require researchers to physically bring data together in one place or to follow an analysis plan and share results. Developed over the last 10 years, the DataSHIELD platform is a collection of R packages that reduce the challenges of these methods, including the ethico-legal constraints that limit researchers' ability to physically bring data together and the analytical inflexibility associated with conventional approaches to sharing results. The key feature of DataSHIELD is that data from research studies stay on a server at each of the institutions responsible for the data. Each institution has control over who can access its data. The platform allows an analyst to pass commands to each server, and the analyst receives results that do not disclose the individual-level data of any study participants. DataSHIELD uses Opal, a data integration system used by epidemiological studies and developed by the OBiBa open-source bioinformatics project. Until now, however, the analysis of big data with DataSHIELD has been limited by the storage formats available in Opal and the analysis capabilities available in the DataSHIELD R packages. We present a new architecture ("resources") for DataSHIELD and Opal that allows large, complex datasets to be used at their original location, in their original format, and with external computing facilities. We provide some real big data analysis examples in genomics and geospatial projects. For genomic data analyses, we also illustrate how to extend the resources concept to address specific big data infrastructures such as GA4GH or EGA, and to make use of shell commands. Our new infrastructure will help researchers perform data analyses in a privacy-protected way on existing data sharing initiatives or projects. To help researchers use this framework, we describe selected packages and present an online book (https://isglobal-brge.github.io/resource_bookdown).
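The non-disclosure model behind DataSHIELD can be illustrated in miniature: each site returns only aggregate summaries (with a small-cell refusal rule), and the analyst combines them. This is a conceptual Python sketch of the federated idea, not DataSHIELD's actual R API; the `min_count` threshold is an invented example of a disclosure control:

```python
# Each "site" exposes only aggregate statistics, never row-level data.
def site_summary(values, min_count=5):
    """Return (n, sum, sum of squares); refuse disclosively small cells."""
    if len(values) < min_count:
        raise ValueError("cell too small to share safely")
    return len(values), sum(values), sum(v * v for v in values)

def pooled_mean_var(summaries):
    """Combine per-site summaries into a pooled mean and variance,
    exactly as if the rows had been pooled, without ever pooling them."""
    n = sum(s[0] for s in summaries)
    total = sum(s[1] for s in summaries)
    ss = sum(s[2] for s in summaries)
    mean = total / n
    var = ss / n - mean * mean
    return mean, var

site_a = [4.1, 5.0, 3.8, 4.6, 5.2, 4.9]   # stays at institution A
site_b = [6.0, 5.5, 6.2, 5.8, 6.1]        # stays at institution B
mean, var = pooled_mean_var([site_summary(site_a), site_summary(site_b)])
print(mean, var)
```

Sums and counts are sufficient statistics for the pooled mean and variance, which is why the analyst loses nothing by never seeing individual records; richer analyses need richer (but still non-disclosive) summaries, which is what the DataSHIELD packages provide.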
Affiliation(s)
- Tom Bishop: MRC Epidemiology Unit, University of Cambridge, Cambridge, United Kingdom
- Demetris Avraam: Population Health Sciences Institute, Newcastle University, Newcastle, United Kingdom
- Xavier Escriba-Montagut: Barcelona Institute for Global Health (ISGlobal), Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain
- Patricia Ryser-Welch: Population Health Sciences Institute, Newcastle University, Newcastle, United Kingdom
- Paul Burton: Population Health Sciences Institute, Newcastle University, Newcastle, United Kingdom
- Juan R. González: Barcelona Institute for Global Health (ISGlobal), Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain; Centro de Investigación Biomédica en Red en Epidemiología y Salud Pública (CIBERESP), Barcelona, Spain; Dept. of Mathematics, Universitat Autònoma de Barcelona (UAB), Bellaterra (Barcelona), Spain

31
Riaz H, Park J, Kim PH, Kim J. Retinal Healthcare Diagnosis Approaches with Deep Learning Techniques. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3309] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Indexed: 02/04/2023]
Abstract
The retina is an important organ of the human body, with a crucial function in the vision mechanism. A minor disturbance in the retina can cause various abnormalities in the eye, as well as complex retinal diseases such as diabetic retinopathy. To diagnose such diseases in early stages, many researchers are incorporating machine learning (ML) techniques. The combination of medical science with ML improves the healthcare diagnosis systems of hospitals, clinics, and other providers. Recently, AI-based healthcare diagnosis systems have helped clinicians handle more patients in less time while improving diagnostic accuracy. In this paper, we review cutting-edge AI-based retinal diagnosis technologies. This article also briefly describes the potential of the latest densely connected convolutional networks (DenseNets) to improve the performance of diagnosis systems. Moreover, this paper focuses on state-of-the-art results from comprehensive investigations in retinal diagnosis and the development of AI-based retinal healthcare diagnosis approaches with deep-learning models.
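The defining trait of the DenseNets mentioned above, each layer consuming the concatenation of all earlier feature maps, reduces to a simple channel-bookkeeping rule. In this toy sketch a random linear map stands in for each convolutional layer; the sizes are arbitrary:

```python
import numpy as np

def dense_block(x, n_layers, growth_rate, rng):
    """Toy dense block: each 'layer' sees the concatenation of the input
    and every previous layer's output, and appends growth_rate channels.
    A random linear map plus ReLU stands in for conv + nonlinearity."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=0)               # (C_total, H, W)
        w = rng.normal(size=(growth_rate, inp.shape[0]))
        out = np.maximum(np.tensordot(w, inp, axes=1), 0.0)  # ReLU
        features.append(out)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8, 8))                # 16 input channels
y = dense_block(x, n_layers=4, growth_rate=12, rng=rng)
print(y.shape)  # channels grow to 16 + 4 * 12 = 64
```

The channel count grows linearly with depth (input channels plus layers times growth rate), which is why real DenseNets interleave transition layers to keep the feature maps manageable.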
Affiliation(s)
- Hamza Riaz: Department of Health Science and Technology, Gachon Advanced Institute for Health Sciences & Technology, Incheon 21999, Korea
- Jisu Park: Department of Health Science and Technology, Gachon Advanced Institute for Health Sciences & Technology, Incheon 21999, Korea
- Peter H. Kim: School of Information, University of California, Berkeley, 102 South Hall #4600, CA 94720, USA
- Jungsuk Kim: Department of Biomedical Engineering, Gachon University, 534-2, Hambakmoe-ro, 21936, Incheon, Korea

32
Machine learning and augmented human intelligence use in histomorphology for haematolymphoid disorders. Pathology 2021; 53:400-407. [PMID: 33642096 DOI: 10.1016/j.pathol.2020.12.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 11/10/2020] [Accepted: 12/21/2020] [Indexed: 02/06/2023]
Abstract
Advances in digital pathology have opened a number of opportunities, including decision support using artificial intelligence (AI). The application of AI to digital pathology data shows promise as an aid for pathologists in the diagnosis of haematological disorders. AI-based applications have addressed benign haematology, the diagnosis of leukaemia and lymphoma, and ancillary testing modalities including flow cytometry. In this review, we highlight the progress made to date in machine learning applications in haematopathology, summarise important studies in this field, and highlight key limitations. We further present our outlook on the future direction and trends for AI to support diagnostic decisions in haematopathology.
33
Predicting Keratoconus Progression and Need for Corneal Crosslinking Using Deep Learning. J Clin Med 2021; 10:844. [PMID: 33670732 PMCID: PMC7923054 DOI: 10.3390/jcm10040844] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 01/17/2021] [Revised: 02/07/2021] [Accepted: 02/14/2021] [Indexed: 01/04/2023] Open
Abstract
We aimed to predict keratoconus progression and the need for corneal crosslinking (CXL) using deep learning (DL). Two hundred and seventy-four corneal tomography images taken by Pentacam HR® (Oculus, Wetzlar, Germany) of 158 keratoconus patients were examined. All patients were examined two times or more, and divided into two groups: the progression group and the non-progression group. An axial map of the frontal corneal plane, a pachymetry map, and a combination of these two maps at the initial examination were assessed together with the patients’ age. A convolutional neural network was trained on these learning data objects. Ninety eyes showed progression and 184 eyes showed no progression. The axial map, the pachymetry map, and their combination, each combined with patients’ age, showed mean AUC values of 0.783, 0.784, and 0.814 (95% confidence intervals 0.721–0.845, 0.722–0.846, and 0.755–0.872), with sensitivities of 87.8%, 77.8%, and 77.8% (79.2–93.7, 67.8–85.9, and 67.8–85.9) and specificities of 59.8%, 65.8%, and 69.6% (52.3–66.9, 58.4–72.6, and 62.4–76.1), respectively. Using the proposed DL neural network model, keratoconus progression can be predicted from corneal tomography maps combined with patients’ age.
34
LaLonde R, Xu Z, Irmakci I, Jain S, Bagci U. Capsules for biomedical image segmentation. Med Image Anal 2021; 68:101889. [PMID: 33246227 PMCID: PMC7944580 DOI: 10.1016/j.media.2020.101889] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Received: 03/05/2020] [Revised: 08/25/2020] [Accepted: 10/23/2020] [Indexed: 01/31/2023]
Abstract
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information in constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, where we conduct experiments across five extremely challenging datasets, containing both clinical and pre-clinical subjects, and nearly 2000 computed-tomography scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters in the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen handling of rotations/reflections on natural images.
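The two capsule-specific ingredients, the squash nonlinearity and routing-by-agreement, are compact enough to sketch directly. This is a generic, dense version of dynamic routing on synthetic prediction vectors, not SegCaps' locally-constrained variant:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule nonlinearity: shrinks short vectors toward 0 and long
    vectors toward unit length, preserving orientation."""
    sq_norm = (s ** 2).sum(axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def route(u_hat, n_iter=3):
    """Dynamic routing-by-agreement over prediction vectors
    u_hat of shape (n_in, n_out, dim)."""
    b = np.zeros(u_hat.shape[:2])                             # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # couplings
        s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted sums
        v = squash(s)                                         # output capsules
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement update
    return v

rng = np.random.default_rng(3)
u_hat = rng.normal(size=(8, 2, 4))  # 8 input capsules, 2 outputs, dim 4
v = route(u_hat)
print(np.linalg.norm(v, axis=-1))   # capsule lengths stay below 1
```

SegCaps' contribution is to constrain this routing to local windows and share transformation matrices, which is what makes the procedure affordable at segmentation-scale resolutions.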
Affiliation(s)
- Rodney LaLonde: Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL
- Sanjay Jain: Johns Hopkins University, Baltimore, MD, USA
- Ulas Bagci: Center for Research in Computer Vision (CRCV), University of Central Florida (UCF), Orlando, FL

35
Blaivas L, Blaivas M. Are Convolutional Neural Networks Trained on ImageNet Images Wearing Rose-Colored Glasses? A Quantitative Comparison of ImageNet, Computed Tomographic, Magnetic Resonance, Chest X-Ray, and Point-of-Care Ultrasound Images for Quality. J Ultrasound Med 2021; 40:377-383. [PMID: 32757235 DOI: 10.1002/jum.15413] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Received: 06/02/2020] [Revised: 06/16/2020] [Accepted: 06/22/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES Deep learning for medical imaging analysis uses convolutional neural networks pretrained on ImageNet (Stanford Vision Lab, Stanford, CA). Little is known about how such color- and scene-rich standard training images compare quantitatively to medical images. We sought to quantitatively compare ImageNet images to point-of-care ultrasound (POCUS), computed tomographic (CT), magnetic resonance (MR), and chest x-ray (CXR) images. METHODS Using a quantitative image quality assessment technique (Blind/Referenceless Image Spatial Quality Evaluator), we compared images based on pixel complexity, relationships, variation, and distinguishing features. We compared 5500 ImageNet images to 2700 CXR, 2300 CT, 1800 MR, and 18,000 POCUS images. Image quality results ranged from 0 to 100 (worst). A 1-way analysis of variance was performed, and the standardized mean-difference effect size value (d) was calculated. RESULTS ImageNet images showed the best image quality rating of 21.7 (95% confidence interval [CI], 0.41) except for CXR at 13.2 (95% CI, 0.28), followed by CT at 35.1 (95% CI, 0.79), MR at 31.6 (95% CI, 0.75), and POCUS at 56.6 (95% CI, 0.21). The differences between ImageNet and all of the medical images were statistically significant (P ≤ .000001). The greatest difference in image quality was between ImageNet and POCUS (d = 2.38). CONCLUSIONS Point-of-care ultrasound (US) quality is significantly different from that of ImageNet and other medical images. This brings considerable implications for convolutional neural network training with medical images for various applications, which may be even more significant in the case of US images. Ultrasound deep learning developers should consider pretraining networks from scratch on US images, as training techniques used for CT, CXR, and MR images may not apply to US.
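BRISQUE, the quality measure used above, scores images from the statistics of mean-subtracted contrast-normalized (MSCN) coefficients. Below is a simplified sketch of that normalization step only (a boxcar window instead of BRISQUE's Gaussian, and an arbitrarily chosen stabilizing constant), on synthetic pixel data:

```python
import numpy as np

def mscn(image, win=7, c=1e-3):
    """Mean-subtracted contrast-normalized (MSCN) coefficients: each
    pixel minus its local mean, divided by the local standard deviation.
    Simplification: boxcar local window instead of a Gaussian."""
    img = image.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    mu = np.zeros_like(img)
    mu2 = np.zeros_like(img)
    # accumulate local sums by shifting the padded image
    for i in range(win):
        for j in range(win):
            patch = padded[i:i + img.shape[0], j:j + img.shape[1]]
            mu += patch
            mu2 += patch ** 2
    mu /= win * win
    sigma = np.sqrt(np.maximum(mu2 / (win * win) - mu ** 2, 0.0))
    return (img - mu) / (sigma + c)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
coeffs = mscn(img)
# For natural-looking images the MSCN histogram is near zero-mean;
# BRISQUE scores deviations of these statistics from natural-image norms.
print(coeffs.mean(), coeffs.std())
```

The paper's finding that ultrasound scores so differently is, in these terms, a statement that the local statistics of speckle-dominated images deviate strongly from the natural-image statistics the measure was fitted to.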
Affiliation(s)
- Laura Blaivas: Michigan State University, East Lansing, Michigan, USA
- Michael Blaivas: Department of Emergency Medicine, University of South Carolina School of Medicine, Columbia, South Carolina, USA; St Francis Hospital, Columbus, Georgia, USA

36
Si K, Xue Y, Yu X, Zhu X, Li Q, Gong W, Liang T, Duan S. Fully end-to-end deep-learning-based diagnosis of pancreatic tumors. Am J Cancer Res 2021; 11:1982-1990. [PMID: 33408793 PMCID: PMC7778580 DOI: 10.7150/thno.52508] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Received: 08/27/2020] [Accepted: 11/17/2020] [Indexed: 12/15/2022] Open
Abstract
Artificial intelligence can facilitate clinical decision making by considering massive amounts of medical imaging data. Various algorithms have been implemented for different clinical applications. Accurate diagnosis and treatment require reliable and interpretable data. For pancreatic tumor diagnosis, only 58.5% of images from the First Affiliated Hospital and the Second Affiliated Hospital, Zhejiang University School of Medicine are directly used by the diagnostic model, so manually filtering out the unused images increases labor and time costs. Methods: This study used a training dataset of 143,945 dynamic contrast-enhanced CT images of the abdomen from 319 patients. The proposed model contained four stages: image screening, pancreas location, pancreas segmentation, and pancreatic tumor diagnosis. Results: We established a fully end-to-end deep-learning model for diagnosing pancreatic tumors and proposing treatment. The model considers original abdominal CT images without any manual preprocessing. Our artificial-intelligence-based system achieved an area under the curve of 0.871 and an F1 score of 88.5% using an independent testing dataset containing 107,036 clinical CT images from 347 patients. The average accuracy for all tumor types was 82.7%, and the independent accuracies of identifying intraductal papillary mucinous neoplasm and pancreatic ductal adenocarcinoma were 100% and 87.6%, respectively. The average test time per patient was 18.6 s, compared with at least 8 min for manual review. Furthermore, the model provided a transparent and interpretable diagnosis by producing saliency maps highlighting the regions relevant to its decision. Conclusions: The proposed model can potentially deliver efficient and accurate preoperative diagnoses that could aid the surgical management of pancreatic tumors.
37
Diagnosing of Diabetic Retinopathy with Image Dehazing and Capsule Network. Deep Learning for Medical Decision Support Systems 2021. [PMCID: PMC7298988 DOI: 10.1007/978-981-15-6325-6_9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/28/2022]
Abstract
As discussed earlier in Chap. 10.1007/978-981-15-6325-6_4, diabetic retinopathy (DR) can lead to severe outcomes such as blindness, and it has become a prominent medical problem in recent research. Retinal pathologies in particular underlie millions of blindness cases seen worldwide [1]. When all cases of blindness are examined in detail, around 2 million are attributed to diabetic retinopathy, so early diagnosis has gained the highest priority for eliminating, or at least slowing, the disease factors that cause blindness and thereby reducing blindness rates [2, 3].
38
Liang WH, Federico SM, London WB, Naranjo A, Irwin MS, Volchenboum SL, Cohn SL. Tailoring Therapy for Children With Neuroblastoma on the Basis of Risk Group Classification: Past, Present, and Future. JCO Clin Cancer Inform 2020; 4:895-905. [PMID: 33058692 PMCID: PMC7608590 DOI: 10.1200/cci.20.00074] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Accepted: 09/01/2020] [Indexed: 12/12/2022] Open
Abstract
For children with neuroblastoma, the likelihood of cure varies widely according to age at diagnosis, disease stage, and tumor biology. Treatments are tailored for children with this clinically heterogeneous malignancy on the basis of a combination of markers that are predictive of risk of relapse and death. Sequential risk-based, cooperative-group clinical trials conducted during the past 4 decades have led to improved outcome for children with neuroblastoma. Increasingly accurate risk classification and refinements in treatment stratification strategies have been achieved with the more recent discovery of robust genomic and molecular biomarkers. In this review, we discuss the history of neuroblastoma risk classification in North America and Europe and highlight efforts by the International Neuroblastoma Risk Group (INRG) Task Force to develop a consensus approach for pretreatment stratification using seven risk criteria including an image-based staging system-the INRG Staging System. We also update readers on the current Children's Oncology Group risk classifier and outline plans for the development of a revised 2021 Children's Oncology Group classifier that will incorporate INRG Staging System criteria to facilitate harmonization of risk-based frontline treatment strategies conducted around the globe. In addition, we discuss new approaches to establish increasingly robust, future risk classification algorithms that will further refine treatment stratification using machine learning tools and expanded data from electronic health records and the INRG Data Commons.
Affiliation(s)
- Wayne H. Liang: Department of Pediatrics and Informatics Institute, University of Alabama at Birmingham, Birmingham, AL
- Sara M. Federico: Department of Oncology, St Jude Children’s Research Hospital, Memphis, TN
- Wendy B. London: Dana-Farber/Boston Children’s Cancer and Blood Disorders Center, Harvard Medical School, Boston, MA
- Arlene Naranjo: Department of Biostatistics, Children’s Oncology Group Statistics and Data Center, University of Florida, Gainesville, FL
- Meredith S. Irwin: Department of Pediatrics, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada
- Samuel L. Volchenboum: Department of Pediatrics and Comer Children’s Hospital, University of Chicago, Chicago, IL
- Susan L. Cohn: Department of Pediatrics and Comer Children’s Hospital, University of Chicago, Chicago, IL

39
Chen JR, Chao YP, Tsai YW, Chan HJ, Wan YL, Tai DI, Tsui PH. Clinical Value of Information Entropy Compared with Deep Learning for Ultrasound Grading of Hepatic Steatosis. Entropy (Basel) 2020; 22:E1006. [PMID: 33286775 PMCID: PMC7597079 DOI: 10.3390/e22091006] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Received: 07/30/2020] [Revised: 08/31/2020] [Accepted: 09/07/2020] [Indexed: 02/07/2023]
Abstract
Entropy is a quantitative measure of signal uncertainty and has been widely applied to ultrasound tissue characterization. Ultrasound assessment of hepatic steatosis typically involves a backscattered statistical analysis of signals based on information entropy. Deep learning extracts features for classification without any physical assumptions or considerations in acoustics. In this study, we assessed clinical values of information entropy and deep learning in the grading of hepatic steatosis. A total of 205 participants underwent ultrasound examinations. The image raw data were used for Shannon entropy imaging and for training and testing by the pretrained VGG-16 model, which has been employed for medical data analysis. The entropy imaging and VGG-16 model predictions were compared with histological examinations. The diagnostic performances in grading hepatic steatosis were evaluated using receiver operating characteristic (ROC) curve analysis and the DeLong test. The areas under the ROC curves when using the VGG-16 model to grade mild, moderate, and severe hepatic steatosis were 0.71, 0.75, and 0.88, respectively; those for entropy imaging were 0.68, 0.85, and 0.9, respectively. Ultrasound entropy, which varies with fatty infiltration in the liver, outperformed VGG-16 in identifying participants with moderate or severe hepatic steatosis (p < 0.05). The results indicated that physics-based information entropy for backscattering statistics analysis can be recommended for ultrasound diagnosis of hepatic steatosis, providing not only improved performance in grading but also clinical interpretations of hepatic steatosis.
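Shannon entropy imaging of the kind compared against VGG-16 here amounts to sliding a window over the backscattered data and histogramming amplitudes. A minimal sketch on a synthetic image follows; the window size, step, and bin count are arbitrary illustrative choices, not the study's settings:

```python
import numpy as np

def shannon_entropy(window, bins=16):
    """Shannon entropy (bits) of the amplitude histogram in a window."""
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_image(img, win=8, step=8):
    """Map an image to local Shannon entropy, window by window."""
    h = np.zeros((img.shape[0] // step, img.shape[1] // step))
    for i in range(h.shape[0]):
        for j in range(h.shape[1]):
            h[i, j] = shannon_entropy(
                img[i * step:i * step + win, j * step:j * step + win])
    return h

rng = np.random.default_rng(0)
img = np.zeros((32, 32))
img[:, 16:] = rng.random((32, 16))   # right half: speckle-like variation
ent = entropy_image(img)
print(ent)  # low entropy on the uniform left, higher on the varied right
```

Because local signal uncertainty rises with fatty infiltration, the entropy map itself carries the physical interpretation that the abstract credits for outperforming the black-box classifier at the higher steatosis grades.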
Collapse
Affiliation(s)
- Jheng-Ru Chen
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
| | - Yi-Ping Chao
- Department of Computer Science and Information Engineering, College of Engineering, Chang Gung University, Taoyuan 333323, Taiwan
- Graduate Institute of Biomedical Engineering, Chang Gung University, College of Engineering, Taoyuan 333323, Taiwan
- Department of Neurology, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
| | - Yu-Wei Tsai
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
| | - Hsien-Jung Chan
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
| | - Yung-Liang Wan
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
| | - Dar-In Tai
- Department of Gastroenterology and Hepatology, Chang Gung Memorial Hospital at Linkou, Chang Gung University, Taoyuan 333423, Taiwan
| | - Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan 333323, Taiwan
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
- Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan 333423, Taiwan
| |
Collapse
|
40
|
Cao R, Yang F, Ma SC, Liu L, Zhao Y, Li Y, Wu DH, Wang T, Lu WJ, Cai WJ, Zhu HB, Guo XJ, Lu YW, Kuang JJ, Huan WJ, Tang WM, Huang K, Huang J, Yao J, Dong ZY. Development and interpretation of a pathomics-based model for the prediction of microsatellite instability in Colorectal Cancer. Theranostics 2020; 10:11080-11091. [PMID: 33042271 PMCID: PMC7532670 DOI: 10.7150/thno.49864] [Citation(s) in RCA: 116] [Impact Index Per Article: 23.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Accepted: 08/25/2020] [Indexed: 12/21/2022] Open
Abstract
Microsatellite instability (MSI) has been approved as a pan-cancer biomarker for immune checkpoint blockade (ICB) therapy. However, current MSI identification methods are not available for all patients. We proposed an ensemble multiple-instance deep learning model to predict microsatellite status from histopathology images, and interpreted the pathomics-based model through multi-omics correlation. Methods: Two cohorts of patients were collected, including 429 from The Cancer Genome Atlas (TCGA-COAD) and 785 from an Asian colorectal cancer (CRC) cohort (Asian-CRC). We established the pathomics model, named Ensembled Patch Likelihood Aggregation (EPLA), based on two consecutive stages: patch-level prediction and whole-slide image (WSI)-level prediction. The initial model was developed and validated in TCGA-COAD, and then generalized to Asian-CRC through transfer learning. The pathological signatures extracted from the model were analyzed against genomic and transcriptomic profiles for model interpretation. Results: The EPLA model achieved an area under the curve (AUC) of 0.8848 (95% CI: 0.8185-0.9512) in the TCGA-COAD test set and an AUC of 0.8504 (95% CI: 0.7591-0.9323) in the external validation set Asian-CRC after transfer learning. Notably, EPLA captured the relationship between the pathological phenotype of poor differentiation and MSI (P < 0.001). Furthermore, the five pathological imaging signatures identified from the EPLA model were associated with mutation burden and DNA damage repair-related genotypes in the genomic profiles, and with antitumor immunity-activated pathways in the transcriptomic profiles. Conclusions: Our pathomics-based deep learning model effectively predicts MSI from histopathology images and is transferable to a new patient cohort. The interpretability of our model through association with pathological, genomic, and transcriptomic phenotypes lays the foundation for prospective clinical trials applying this artificial intelligence (AI) platform in ICB therapy.
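The two-stage patch-to-slide design can be illustrated with a toy pooling rule. Top-k mean pooling below is an assumption chosen for illustration, not necessarily the aggregation the published EPLA model uses:

```python
import numpy as np

def wsi_likelihood(patch_probs: np.ndarray, top_k: int = 10) -> float:
    """Aggregate patch-level MSI probabilities into one slide-level score.

    Stage 1 (not shown) scores each tile with a CNN; stage 2, sketched here,
    pools those scores. Averaging the top-k most confident patches is one
    plausible multiple-instance pooling rule.
    """
    top = np.sort(patch_probs)[-top_k:]
    return float(top.mean())

patch_probs = np.array([0.1, 0.2, 0.9, 0.85, 0.95, 0.3])
score = wsi_likelihood(patch_probs, top_k=3)  # mean of 0.85, 0.9, 0.95
assert abs(score - 0.9) < 1e-9
```

Thresholding the slide-level score then yields the binary MSI/MSS call that the reported AUCs evaluate.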
Collapse
|
41
|
Cheng KS, Pan R, Pan H, Li B, Meena SS, Xing H, Ng YJ, Qin K, Liao X, Kosgei BK, Wang Z, Han RP. ALICE: a hybrid AI paradigm with enhanced connectivity and cybersecurity for a serendipitous encounter with circulating hybrid cells. Theranostics 2020; 10:11026-11048. [PMID: 33042268 PMCID: PMC7532685 DOI: 10.7150/thno.44053] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 05/11/2020] [Indexed: 12/12/2022] Open
Abstract
A fully automated and accurate assay of rare cell phenotypes in densely packed, fluorescently labeled liquid biopsy images remains elusive. Methods: Employing a hybrid artificial intelligence (AI) paradigm that combines traditional rule-based morphological manipulations with modern statistical machine learning, we deployed next-generation software, ALICE (Automated Liquid Biopsy Cell Enumerator), to identify and enumerate minute amounts of tumor cell phenotypes dispersed among massive populations of leukocytes. Designed with future use in mind, ALICE is equipped with internet of things (IoT) connectivity to support pedagogy and continuing education, as well as an advanced cybersecurity system to safeguard against malicious data tampering. Results: By combining robust principal component analysis, a random forest classifier, and a cubic support vector machine, ALICE detected synthetic, anomalous, and tampered input images with an average recall and precision of 0.840 and 0.752, respectively. In terms of phenotype enumeration, ALICE enumerated various circulating tumor cell (CTC) phenotypes with a reliability ranging from 0.725 (substantial agreement) to 0.961 (almost perfect agreement) compared with human analysts. Further, two subpopulations of circulating hybrid cells (CHCs) were serendipitously discovered in the peripheral blood of pancreatic cancer patients and labeled CHC-1 (DAPI+/CD45+/E-cadherin+/vimentin-) and CHC-2 (DAPI+/CD45+/E-cadherin+/vimentin+). CHC-1 correlated with nodal staging and classified lymph node metastasis with a sensitivity of 0.615 (95% CI: 0.374-0.898) and specificity of 1.000 (95% CI: 1.000-1.000). Conclusion: This study presented a machine-learning-augmented, rule-based hybrid AI algorithm with enhanced cybersecurity and connectivity for automatic, flexibly adapting enumeration of cellular liquid biopsies. ALICE has the potential to be used in clinical settings for accurate and reliable enumeration of CTC phenotypes.
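One way heterogeneous classifiers of the kind the abstract lists (rule-based morphology, a random forest, a cubic SVM) can be combined is simple majority voting. This is a sketch of the general ensembling idea, not ALICE's actual combination scheme:

```python
def majority_vote(predictions: list[list[int]]) -> list[int]:
    """Combine per-classifier label lists by majority vote.

    Each inner list holds one classifier's labels for the same cells
    (e.g. 1 = CTC candidate, 0 = leukocyte). Ties resolve to whichever
    label max() finds first, so an odd number of voters is preferable.
    """
    n = len(predictions[0])
    combined = []
    for i in range(n):
        votes = [p[i] for p in predictions]
        combined.append(max(set(votes), key=votes.count))
    return combined

rf_labels   = [1, 0, 1, 1]  # hypothetical random forest output
svm_labels  = [1, 0, 0, 1]  # hypothetical cubic SVM output
rule_labels = [0, 0, 1, 1]  # hypothetical morphology-rule output
assert majority_vote([rf_labels, svm_labels, rule_labels]) == [1, 0, 1, 1]
```

Voting across decorrelated classifiers is what lets a hybrid system tolerate any single model's failure mode, including on anomalous or tampered inputs.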
Collapse
|
42
|
Mu W, Liu C, Gao F, Qi Y, Lu H, Liu Z, Zhang X, Cai X, Ji RY, Hou Y, Tian J, Shi Y. Prediction of clinically relevant Pancreatico-enteric Anastomotic Fistulas after Pancreatoduodenectomy using deep learning of Preoperative Computed Tomography. Theranostics 2020; 10:9779-9788. [PMID: 32863959 PMCID: PMC7449906 DOI: 10.7150/thno.49671] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 07/22/2020] [Indexed: 12/17/2022] Open
Abstract
Rationale: Clinically relevant postoperative pancreatic fistula (CR-POPF) is among the most formidable complications after pancreatoduodenectomy (PD), heightening morbidity and mortality. The Fistula Risk Score (FRS) is a well-validated predictor, but it is computed intraoperatively and classifies >50% of patients as intermediate risk; an accurate and easy-to-use preoperative index is therefore desirable. Herein, we test the hypothesis that quantitative analysis of contrast-enhanced computed tomography (CE-CT) with deep learning can predict CR-POPF. Methods: A group of 513 patients who underwent pancreatico-enteric anastomosis after PD at three institutions between 2006 and 2019 was retrospectively collected and randomly split into training (70%) and validation (30%) datasets. A convolutional neural network was trained to generate a deep-learning score (DLS) identifying patients at higher risk of CR-POPF preoperatively from CE-CT images; the DLS was further tested externally in a prospective cohort collected from August 2018 to June 2019 at a fourth institution. The biological underpinnings of the DLS were assessed against histomorphological data by multivariate linear regression analysis. Results: CR-POPF developed in 95 patients (16.3%) overall. Compared with the FRS, the DLS offered significantly greater predictability in the training (AUC: 0.85 [95% CI, 0.80-0.90] vs. 0.78 [95% CI, 0.72-0.84]; P = 0.03), validation (0.81 [95% CI, 0.72-0.89] vs. 0.76 [95% CI, 0.66-0.84]; P = 0.05), and test (0.89 [95% CI, 0.79-0.96] vs. 0.73 [95% CI, 0.61-0.83]; P < 0.001) cohorts. In particular, among the challenging patients at intermediate risk (FRS 3-6), the DLS showed significantly higher accuracy (training: 79.9% vs. 61.5% [P = 0.005]; validation: 70.3% vs. 56.3% [P = 0.04]; test: 92.1% vs. 65.8% [P = 0.013]). Additionally, the DLS was independently associated with pancreatic fibrosis (coefficient: -0.167), main pancreatic duct (coefficient: -0.445), and remnant volume (coefficient: 0.138) in multivariate linear regression analysis (r2 = 0.512, P < 0.001). The user satisfaction score in the test cohort was 4 out of 5. Conclusions: A preoperative CT-based deep-learning model provides a promising method for predicting CR-POPF after PD, especially at the intermediate FRS risk level. It has the potential to be integrated into radiology reporting systems or surgical planning software to help surgeons optimize preoperative strategies, intraoperative decision-making, and postoperative care.
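The AUC comparisons above rest on the Mann-Whitney interpretation of the ROC area: the probability that a randomly chosen fistula case receives a higher model score than a randomly chosen non-fistula case. A minimal sketch with made-up scores:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs the model ranks correctly,
    counting ties as half-correct."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# illustrative scores, not data from the study
fistula_scores    = [0.9, 0.8, 0.7]         # CR-POPF cases
no_fistula_scores = [0.2, 0.4, 0.6, 0.75]   # uncomplicated cases
assert abs(auc(fistula_scores, no_fistula_scores) - 11 / 12) < 1e-9
```

Comparing two correlated AUCs on the same patients (DLS vs. FRS, as done here) additionally requires a paired test such as DeLong's, which accounts for the covariance between the two curves.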
Collapse
|
43
|
Cai WY, Dong ZN, Fu XT, Lin LY, Wang L, Ye GD, Luo QC, Chen YC. Identification of a Tumor Microenvironment-relevant Gene set-based Prognostic Signature and Related Therapy Targets in Gastric Cancer. Theranostics 2020; 10:8633-8647. [PMID: 32754268 PMCID: PMC7392024 DOI: 10.7150/thno.47938] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Accepted: 06/23/2020] [Indexed: 12/22/2022] Open
Abstract
Rationale: The prognosis of gastric cancer (GC) patients is poor, and therapeutic efficacy is limited by genetic heterogeneity and the difficulty of early-stage screening. Here, we developed and validated an individualized gene set-based prognostic signature for gastric cancer (GPSGC) and further explored survival-related regulatory mechanisms and therapeutic targets in GC. Methods: Using machine learning, a prognostic model was established from gastric cancer gene expression datasets of 1699 patients across five independent cohorts with full clinical annotations. Analysis of the tumor microenvironment, including stromal and immune subcomponents, cell types, panimmune gene sets, and immunomodulatory genes, was carried out in 834 GC patients from three independent cohorts to explore survival-related regulatory mechanisms and therapeutic targets linked to the GPSGC. To establish the stability and reliability of the GPSGC model and therapeutic targets, multiplex fluorescent immunohistochemistry was conducted on tissue microarrays representing 186 GC patients. Based on multivariate Cox analysis, a nomogram integrating the GPSGC and other clinical risk factors was constructed with two training cohorts and verified in two validation cohorts. Results: Through machine learning, we obtained an optimal risk assessment model, the GPSGC, which predicted survival more accurately than individual prognostic factors. The impact of the GPSGC score on poor survival of GC patients was probably correlated with remodeling of stromal components in the tumor microenvironment. Specifically, TGFβ and angiogenesis-related gene sets were significantly associated with the GPSGC risk score and poor outcome. Immunomodulatory gene analysis combined with experimental verification further revealed that TGFβ1 and VEGFB may be developed as potential therapeutic targets for GC patients with poor prognosis according to the GPSGC. Furthermore, we developed a nomogram based on the GPSGC and other clinical variables to predict 3-year and 5-year overall survival for GC patients, which showed better prognostic accuracy than clinical characteristics alone. Conclusion: As a tumor microenvironment-relevant gene set-based prognostic signature, the GPSGC model provides an effective approach to evaluating GC patient survival and may help prolong overall survival by enabling selection of individualized targeted therapy.
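A gene set-based risk score of this kind is typically a Cox-style linear predictor: a weighted sum of expression values whose exponential gives a relative hazard. The gene names and coefficients below are purely hypothetical, not the published GPSGC gene set:

```python
import math

def gene_set_risk(expr: dict, weights: dict) -> float:
    """Risk score as the Cox linear predictor sum(beta_i * x_i).

    `weights` maps gene -> fitted coefficient (beta); `expr` maps
    gene -> (normalized) expression in one patient. Positive betas
    mark hazard-increasing genes, negative betas protective ones.
    """
    return sum(weights[g] * expr[g] for g in weights)

# hypothetical coefficients; TGFB1/VEGFB chosen only to echo the abstract
weights = {"TGFB1": 0.8, "VEGFB": 0.5, "GENE_X": -0.3}
patient = {"TGFB1": 1.2, "VEGFB": 0.9, "GENE_X": 2.0}

score = gene_set_risk(patient, weights)
hazard_ratio = math.exp(score)  # relative hazard vs. a score-0 baseline
assert abs(score - 0.81) < 1e-9
```

A nomogram then maps this score, together with clinical covariates such as stage and age, onto calibrated 3-year and 5-year survival probabilities.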
Collapse
|
44
|
Sawai Y, Miyata M, Uji A, Ooto S, Tamura H, Ueda-Arakawa N, Muraoka Y, Miyake M, Takahashi A, Kawashima Y, Kadomoto S, Oritani Y, Kawai K, Yamashiro K, Tsujikawa A. Usefulness of Denoising Process to Depict Myopic Choroidal Neovascularisation Using a Single Optical Coherence Tomography Angiography Image. Sci Rep 2020; 10:6172. [PMID: 32277172 PMCID: PMC7148361 DOI: 10.1038/s41598-020-62607-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Accepted: 03/12/2020] [Indexed: 01/07/2023] Open
Abstract
The quality of single optical coherence tomography angiography (OCTA) images of myopic choroidal neovascularisation (mCNV) is poorer than that of averaged images, although acquiring averaged images is time-consuming. This study evaluated the clinical usefulness of a novel denoising process for depicting mCNV. The study included 20 eyes of 20 patients with mCNV. Ten en face images taken in a 3 × 3 mm macular cube were obtained from the outer-retina-to-choriocapillaris layer. Three image types were prepared for analysis: single images before and after a deep-learning-based denoising process (single and denoising groups, respectively), and averages of up to 10 images (averaging group). Pairwise comparisons showed that vessel density, vessel length density, and fractal dimension (FD) were higher, whereas the vessel density index (VDI) was lower, in the single group than in the denoising and averaging groups. Detectable CNV indices, contrast-to-noise ratio, and CNV diagnostic scores were higher in the denoising and averaging groups than in the single group. No significant differences were detected in VDI, FD, or CNV diagnostic scores between the denoising and averaging groups. The denoising process allows single OCTA images to yield results comparable to averaged OCTA images, which is clinically useful for shortening examination times while maintaining quality similar to averaging.
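The contrast-to-noise ratio used to compare the three groups can be sketched as the absolute mean difference between lesion and background regions divided by the pooled noise. The simulated intensities below are illustrative, not study data:

```python
import numpy as np

def contrast_to_noise(lesion: np.ndarray, background: np.ndarray) -> float:
    """CNR: |mean(lesion) - mean(background)| over the pooled standard
    deviation of the two regions."""
    noise = np.sqrt(lesion.var() + background.var())
    return float(abs(lesion.mean() - background.mean()) / noise)

rng = np.random.default_rng(1)
background   = rng.normal(0.2, 0.05, 1000)  # avascular background pixels
cnv_single   = rng.normal(0.5, 0.10, 1000)  # CNV pixels, noisy single frame
cnv_denoised = rng.normal(0.5, 0.03, 1000)  # same signal, less noise

# denoising raises CNR by shrinking the noise term, not the contrast
assert contrast_to_noise(cnv_denoised, background) > \
       contrast_to_noise(cnv_single, background)
```

Frame averaging improves CNR the same way (noise falls roughly with the square root of the frame count), which is why a single denoised frame can approach a 10-frame average at a fraction of the acquisition time.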
Collapse
Affiliation(s)
- Yuka Sawai
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Manabu Miyata
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Akihito Uji
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Sotaro Ooto
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Hiroshi Tamura
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Naoko Ueda-Arakawa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Yuki Muraoka
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Masahiro Miyake
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Ayako Takahashi
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Yu Kawashima
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Shin Kadomoto
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Yasuyuki Oritani
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Kentaro Kawai
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| | - Kenji Yamashiro
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Department of Ophthalmology, Red Cross Otsu Hospital, Otsu, Japan
| | - Akitaka Tsujikawa
- Department of Ophthalmology and Visual Sciences, Kyoto University Graduate School of Medicine, Kyoto, Japan
| |
Collapse
|
45
|
Gong P, Zhang C, Chen M. Editorial: Deep Learning for Toxicity and Disease Prediction. Front Genet 2020; 11:175. [PMID: 32174981 PMCID: PMC7055598 DOI: 10.3389/fgene.2020.00175] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Accepted: 02/13/2020] [Indexed: 01/01/2023] Open
Affiliation(s)
- Ping Gong
- Environmental Laboratory, U.S. Army Engineer Research and Development Center, Vicksburg, MS, United States
| | - Chaoyang Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, United States
| | - Minjun Chen
- Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
| |
Collapse
|
46
|
Affiliation(s)
- J E Morley
- John E. Morley, MB, BCh, Division of Geriatric Medicine, Saint Louis University School of Medicine, 1402 S. Grand Blvd., M238, St. Louis, MO 63104,
| |
Collapse
|