201. Loftus TJ, Shickel B, Ozrazgat-Baslanti T, Ren Y, Glicksberg BS, Cao J, Singh K, Chan L, Nadkarni GN, Bihorac A. Artificial intelligence-enabled decision support in nephrology. Nat Rev Nephrol 2022; 18:452-465. PMID: 35459850; PMCID: PMC9379375; DOI: 10.1038/s41581-022-00562-3.
Abstract
Kidney pathophysiology is often complex, nonlinear and heterogeneous, which limits the utility of hypothetico-deductive reasoning and linear, statistical approaches to diagnosis and treatment. Emerging evidence suggests that artificial intelligence (AI)-enabled decision support systems - which use algorithms based on learned examples - may have an important role in nephrology. Contemporary AI applications can accurately predict the onset of acute kidney injury before notable biochemical changes occur; can identify modifiable risk factors for chronic kidney disease onset and progression; can match or exceed human accuracy in recognizing renal tumours on imaging studies; and may augment prognostication and decision-making following renal transplantation. Future AI applications have the potential to make real-time, continuous recommendations for discrete actions that yield the greatest probability of achieving optimal kidney health outcomes. Realizing the clinical integration of AI applications will require cooperative, multidisciplinary commitment to ensure algorithm fairness, overcome barriers to clinical implementation, and build an AI-competent workforce. AI-enabled decision support should preserve the pre-eminence of wisdom and augment rather than replace human decision-making. By anchoring intuition with objective predictions and classifications, this approach should favour clinician intuition when it is honed by experience.
Affiliation(s)
- Tyler J Loftus: Department of Surgery, University of Florida Health, Gainesville, FL, USA
- Benjamin Shickel: Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Yuanfang Ren: Department of Medicine, University of Florida Health, Gainesville, FL, USA
- Benjamin S Glicksberg: Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Jie Cao: Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA
- Karandeep Singh: Department of Learning Health Sciences and Internal Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Lili Chan: The Mount Sinai Clinical Intelligence Center; Division of Nephrology; Charles Bronfman Institute of Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Girish N Nadkarni: The Mount Sinai Clinical Intelligence Center; The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Azra Bihorac: Department of Medicine, University of Florida Health, Gainesville, FL, USA
202. Neural Network Detection of Pacemakers for MRI Safety. J Digit Imaging 2022; 35:1673-1680. PMID: 35768751; DOI: 10.1007/s10278-022-00663-2.
Abstract
Flagging the presence of cardiac devices such as pacemakers before an MRI scan is essential to allow appropriate safety checks. We assess the accuracy with which a machine learning model can classify the presence or absence of a pacemaker on pre-existing chest radiographs. A total of 7973 chest radiographs were collected, 3996 with pacemakers visible and 3977 without. Images were identified from information available on the radiology information system (RIS) and correlated with report text. Manual review of images by two board-certified radiologists was performed to ensure correct labeling. The data set was divided into training, validation, and a hold-back test set. The data were used to retrain a pre-trained image classification neural network. Final model performance was assessed on the test set. Accuracy of 99.67% on the test set was achieved. Re-testing the final model on the full training and validation data revealed a few additional misclassified examples, which are further analyzed. Neural network image classification could be used to screen for the presence of cardiac devices, in addition to current safety processes, providing notification of device presence in advance of safety questionnaires. Computational power to run the model is low. Further work on misclassified examples could improve accuracy on edge cases. The focus of many healthcare applications of computer vision techniques has been on diagnosis and guiding management. This work illustrates an application of computer vision image classification to enhance current processes and improve patient safety.
203. A Neural Network Model Secret-Sharing Scheme with Multiple Weights for Progressive Recovery. Mathematics 2022. DOI: 10.3390/math10132231.
Abstract
With the widespread use of deep-learning models in production environments, the value of deep-learning models has become more prominent. The key issues are protecting the rights of model trainers and securing the specific scenarios in which the models are used. In the commercial domain, consumers pay different fees and gain access to different levels of service. It is therefore necessary to divide the model into several shadow models with multiple weights. When holders want to use the model, they can recover a model whose performance corresponds to the number and weights of the collected shadow models, so that access to the model can be controlled progressively; progressive recovery is therefore essential. This paper proposes a neural network model secret-sharing scheme (NNSS) with multiple weights for progressive recovery. The scheme uses Shamir's polynomial to control the sharing and embedding of model parameters, which in turn enables hierarchical performance control in the secret-model recovery phase. First, the important model parameters are extracted. Then, effective shadow parameters are assigned based on the holders' weights in the sharing phase, and t shadow models are generated. During the recovery phase, the holders can obtain a sufficient number of shadow parameters to recover the secret parameters with a certain probability. As the number of shadow models obtained increases, this probability grows, while the performance of the extracted models depends on the participants' weights. The probability is proportional to the number and weights of the shadow models obtained in the recovery phase, and the probability of successfully recovering the shadow parameters is 1 when all t shadow models are obtained, i.e., the performance of the reconstructed model can reach that of the secret model. A series of experiments conducted on VGG19 verify the effectiveness of the scheme.
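The (t, n)-threshold primitive at the heart of the scheme can be illustrated with plain Shamir sharing of a single quantized parameter. This is a minimal sketch: the prime field, the quantization step, and the example weight are assumptions of ours; the paper's full construction additionally weights the shares and embeds them into shadow models.

```python
import random

PRIME = 2**31 - 1  # a Mersenne prime; quantized weights live in this field

def share(secret, t, n):
    """Split an integer secret into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is den's modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Quantize one model weight to an integer, share it among 5 holders,
# and recover it from any 3 shares (t = 3).
weight = 0.4173
secret = int(weight * 10**6)
shares = share(secret, t=3, n=5)
assert recover(shares[:3]) == secret
assert recover(random.sample(shares, 3)) == secret
```

In the paper's setting every important parameter of the secret model would be shared this way, so reconstruction quality scales with how many shadow models (and how much weight) a holder collects.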
204. Security Evaluation of Financial and Insurance and Ruin Probability Analysis Integrating Deep Learning Models. Computational Intelligence and Neuroscience 2022; 2022:1857100. PMID: 35720881; PMCID: PMC9200529; DOI: 10.1155/2022/1857100.
Abstract
To ensure the safe development of the financial and insurance industry and promote the continuous growth of the social economy, the theory of deep learning and its role are first analyzed. Second, the security of the financial and insurance industry and its ruin probability are discussed. Finally, an analytical model of financial and insurance security and ruin probability is designed using a deep learning model, and the model is evaluated comprehensively. The results show that, first, the designed deep learning-based security evaluation and ruin probability analysis model for the financial and insurance industry not only has strong learning ability but can also effectively reduce its own calculation error through short-time learning. Second, comparison with other models shows that the designed model has a stronger ability to control various errors, and its overall error rate can be reduced to about 20%. Finally, data training indicates that the model designed with the deep learning method can accurately and effectively predict the basic situation of the financial and insurance industry; the minimum error can reach 0, and the highest is only about 3. The research provides a technical reference for the development of the financial and insurance industry and contributes to the prosperity of the social economy.
205. Ullah F, Moon J, Naeem H, Jabbar S. Explainable artificial intelligence approach in combating real-time surveillance of COVID19 pandemic from CT scan and X-ray images using ensemble model. The Journal of Supercomputing 2022; 78:19246-19271. PMID: 35754515; PMCID: PMC9206105; DOI: 10.1007/s11227-022-04631-z.
Abstract
Population size has made disease monitoring a major concern in the healthcare system, due to which automatic detection has become a top priority. Intelligent disease-detection frameworks enable doctors to recognize illnesses, provide stable and accurate results, and lower mortality rates. An acute and severe disease known as coronavirus disease (COVID-19) has suddenly become a global health crisis. The fastest way to avoid the spread of COVID-19 is to implement an automated detection approach. In this study, an explainable COVID-19 detection approach for CT scans and chest X-rays is established using a combination of deep learning and machine learning classification algorithms. A Convolutional Neural Network (CNN) extracts deep features from the collected images, and these features are then fed into a machine learning ensemble for COVID-19 assessment. To identify COVID-19 disease from images, an ensemble model is developed that includes Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), K-Nearest Neighbor (KNN), and Random Forest (RF) classifiers. The overall performance of the proposed method is interpreted using Gradient-weighted Class Activation Mapping (Grad-CAM) and t-distributed Stochastic Neighbor Embedding (t-SNE). The proposed method is evaluated on two datasets containing 1,646 and 2,481 CT scan images gathered from COVID-19 patients. Various performance comparisons with state-of-the-art approaches are also shown. The proposed approach beats existing models, with scores of 98.5% accuracy, 99% precision, and 99% recall. Further, t-SNE and explainable Artificial Intelligence (AI) experiments are conducted to validate the proposed approach.
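The final aggregation step of such an ensemble can be sketched as soft (probability-averaging) voting over the six classifiers; the probability values below are illustrative and not taken from the study.

```python
from statistics import mean

def soft_vote(prob_by_model):
    """Average class-probability vectors from several classifiers and
    return (arg-max class, averaged probabilities)."""
    n_classes = len(prob_by_model[0])
    avg = [mean(p[c] for p in prob_by_model) for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Per-model [P(non-COVID), P(COVID)] for one deep-feature vector
# (illustrative numbers only).
probs = [
    [0.30, 0.70],  # Gaussian Naive Bayes
    [0.20, 0.80],  # Support Vector Machine
    [0.60, 0.40],  # Decision Tree
    [0.25, 0.75],  # Logistic Regression
    [0.40, 0.60],  # K-Nearest Neighbor
    [0.15, 0.85],  # Random Forest
]
label, avg = soft_vote(probs)
print(label, [round(a, 3) for a in avg])  # → 1 [0.317, 0.683]
```

A hard-voting variant would instead take the majority of each model's arg-max label; soft voting keeps each model's confidence in play.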
Affiliation(s)
- Farhan Ullah: School of Software, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, People's Republic of China
- Jihoon Moon: Department of Industrial Security, Chung-Ang University, Seoul, 06974, Korea
- Hamad Naeem: School of Computer Science and Technology, Zhoukou Normal University, Zhoukou, 466000, Henan, People's Republic of China
- Sohail Jabbar: Department of Computational Sciences, The University of Faisalabad, Faisalabad, 38000, Pakistan
206. Kugener G, Zhu Y, Pangal DJ, Sinha A, Markarian N, Roshannai A, Chan J, Anandkumar A, Hung AJ, Wrobel BB, Zada G, Donoho DA. Deep Neural Networks Can Accurately Detect Blood Loss and Hemorrhage Control Task Success From Video. Neurosurgery 2022; 90:823-829. PMID: 35319539; DOI: 10.1227/neu.0000000000001906.
Abstract
BACKGROUND Deep neural networks (DNNs) have not been proven to detect blood loss (BL) or predict surgeon performance from video. OBJECTIVE To train a DNN using video from cadaveric training exercises of surgeons controlling simulated internal carotid hemorrhage to predict clinically relevant outcomes. METHODS Video was input as a series of images; deep learning networks were developed that predicted BL and task success from images alone (automated model) and from images plus human-labeled instrument annotations (semiautomated model). These models were compared against 2 reference models: one that used the average BL across all trials as its prediction (control 1) and a linear regression with time to hemostasis (a metric with a known association with BL) as input (control 2). The root-mean-square error (RMSE) and correlation coefficients were used to compare the models; lower RMSE indicates superior performance. RESULTS One hundred forty-three trials were used (123 for training and 20 for testing). Deep learning models outperformed controls (control 1: RMSE 489 mL; control 2: RMSE 431 mL, R² = 0.35) at BL prediction. The automated model predicted BL with an RMSE of 358 mL (R² = 0.4) and correctly classified outcome in 85% of trials. The RMSE and classification performance of the semiautomated model improved to 260 mL and 90%, respectively. CONCLUSION BL and task outcome classification are important components of an automated assessment of surgical performance. DNNs can predict BL and the outcome of hemorrhage control from video alone; their performance improves with surgical instrument presence data. The generalizability of DNNs trained on hemorrhage control tasks should be investigated.
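The mean-prediction baseline (control 1) and the RMSE comparison are easy to reproduce in a few lines; the blood-loss values below are invented for illustration, not the study's data.

```python
from math import sqrt
from statistics import mean

def rmse(pred, actual):
    """Root-mean-square error between predictions and ground truth."""
    return sqrt(mean((p - a) ** 2 for p, a in zip(pred, actual)))

# Illustrative blood-loss values in mL (not the study's data).
actual = [200.0, 650.0, 900.0, 300.0, 1200.0]
model_pred = [250.0, 600.0, 1000.0, 280.0, 1100.0]

# Control 1: predict the mean blood loss for every trial.
baseline = [mean(actual)] * len(actual)

print(round(rmse(baseline, actual), 1), round(rmse(model_pred, actual), 1))
# → 371.5 71.3
```

Any useful model must beat the mean-predictor's RMSE, which is why such a control is a sensible floor for regression tasks like BL estimation.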
Affiliation(s)
- Guillaume Kugener, Yichao Zhu, Dhiraj J Pangal, Aditya Sinha, Nicholas Markarian, Arman Roshannai, Justin Chan, and Gabriel Zada: Department of Neurosurgery, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Animashree Anandkumar: Computing + Mathematical Sciences, California Institute of Technology, Pasadena, California, USA
- Andrew J Hung: Center for Robotic Simulation and Education, USC Institute of Urology, Keck School of Medicine of the University of Southern California, Los Angeles, California, USA
- Bozena B Wrobel: Caruso Department of Otolaryngology-Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Daniel A Donoho: Division of Neurosurgery, Department of Surgery, Texas Children's Hospital, Baylor College of Medicine, Houston, Texas, USA; Division of Neurosurgery, Center for Neuroscience, Children's National Hospital, Washington, District of Columbia, USA
207. Ovalle-Magallanes E, Avina-Cervantes JG, Cruz-Aceves I, Ruiz-Pinales J. Improving convolutional neural network learning based on a hierarchical Bezier generative model for stenosis detection in X-ray images. Computer Methods and Programs in Biomedicine 2022; 219:106767. PMID: 35364481; DOI: 10.1016/j.cmpb.2022.106767.
Abstract
BACKGROUND AND OBJECTIVE Automatic detection of stenosis in X-ray Coronary Angiography (XCA) images may help diagnose early coronary artery disease. Stenosis is manifested by a buildup of plaque in the arteries that decreases blood flow to the heart and increases the risk of a heart attack. Convolutional Neural Networks (CNNs) have been successfully applied to identify pathological, regular, and featured tissues on rich and diverse medical image datasets. Nevertheless, CNNs face operational and performance limitations when working with small and poorly diversified databases. Transfer learning from large natural-image datasets (such as ImageNet) has become a de facto method for improving neural network performance in the medical image domain. METHODS This paper proposes a novel Hierarchical Bézier-based Generative Model (HBGM) to improve the CNN training process for stenosis detection. Artificial image patches are generated to enlarge the original database, speeding up network convergence. The artificial dataset consists of 10,000 images containing 50% stenosis and 50% non-stenosis cases. The Fréchet Inception Distance (FID) is used to evaluate the generated data quantitatively. Using the proposed framework, the network is pre-trained with the artificial dataset and subsequently fine-tuned on the real XCA training dataset. The real dataset consists of 250 XCA image patches, 125 for stenosis and the remainder for non-stenosis cases. Furthermore, a Convolutional Block Attention Module (CBAM) was included in the network architecture as a self-attention mechanism to improve the efficiency of the network. RESULTS The networks pre-trained with the proposed generative model outperformed training from scratch, achieving an accuracy, precision, sensitivity, and F1-score of 0.8934, 0.9031, 0.8746, 0.8880, 0.9111, respectively. The generated artificial dataset obtains a mean FID of 84.0886, with more realistic visual XCA images. CONCLUSIONS Different ResNet architectures for stenosis detection were evaluated, including attention modules in the network. Numerical results demonstrated that using the HBGM yields higher performance than training from scratch, even outperforming ImageNet pre-trained models.
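The curve primitive behind such a generative model can be sketched with de Casteljau evaluation of a cubic Bézier centreline; the control points below are arbitrary, whereas the HBGM samples them (along with vessel width, texture, and background) hierarchically, so this is only the geometric core.

```python
def bezier(ctrl, t):
    """Evaluate a Bézier curve at t in [0, 1] via de Casteljau's algorithm."""
    pts = list(ctrl)
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A cubic vessel-like centreline inside a 32x32 patch (illustrative points).
ctrl = [(2.0, 30.0), (10.0, 5.0), (22.0, 28.0), (30.0, 2.0)]
centreline = [bezier(ctrl, i / 20) for i in range(21)]
assert centreline[0] == ctrl[0] and centreline[-1] == ctrl[-1]
```

Rasterizing such centrelines with a sampled width profile onto a noisy background is one plausible way to mass-produce stenosis/non-stenosis patches for pre-training.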
Affiliation(s)
- Emmanuel Ovalle-Magallanes, Juan Gabriel Avina-Cervantes, and Jose Ruiz-Pinales: Telematics and Digital Signal Processing Research groups (CAs), Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carretera Salamanca-Valle de Santiago km 3.5 + 1.8 km, Comunidad de Palo Blanco, Salamanca, 36885 Guanajuato, Mexico
- Ivan Cruz-Aceves: CONACYT, Center for Research in Mathematics (CIMAT), A.C., Jalisco S/N, Col. Valenciana, Guanajuato, 36000 Guanajuato, Mexico
208. Soto JT, Weston Hughes J, Sanchez PA, Perez M, Ouyang D, Ashley EA. Multimodal deep learning enhances diagnostic precision in left ventricular hypertrophy. European Heart Journal - Digital Health 2022; 3:380-389. PMID: 36712167; PMCID: PMC9707995; DOI: 10.1093/ehjdh/ztac033.
Abstract
Aims Determining the aetiology of left ventricular hypertrophy (LVH) can be challenging due to the similarity in clinical presentation and cardiac morphological features of diverse causes of disease. In particular, distinguishing individuals with hypertrophic cardiomyopathy (HCM) from the much larger set of individuals with manifest or occult hypertension (HTN) is of major importance for family screening and the prevention of sudden death. We hypothesized that an artificial intelligence method based on joint interpretation of 12-lead electrocardiograms and echocardiogram videos could augment physician interpretation. Methods and results We chose not to train on proximate data labels such as physician over-reads of ECGs or echocardiograms but instead took advantage of electronic health record-derived clinical blood pressure measurements and diagnostic consensus (often including molecular testing) among physicians in an HCM centre of excellence. Using more than 18 000 combined instances of electrocardiograms and echocardiograms from 2728 patients, we developed LVH-fusion. On held-out test data, LVH-fusion achieved an F1-score of 0.71 in predicting HCM and 0.96 in predicting HTN. In a head-to-head comparison, LVH-fusion had higher sensitivity and specificity than its human counterparts. Finally, we used explainability techniques to investigate local and global features that positively and negatively impact LVH-fusion prediction estimates, providing confirmation from unsupervised analysis of the diagnostic power of lateral T-wave inversion on the ECG and proximal septal hypertrophy on the echocardiogram for HCM. Conclusion These results show that deep learning can provide effective physician augmentation in the face of a common diagnostic dilemma, with far-reaching implications for the prevention of sudden cardiac death.
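A minimal stand-in for the fusion stage - concatenating per-modality embeddings and scoring them with a learned linear head - can be sketched as follows; the dimensions, weights, and function name are toy assumptions, not the LVH-fusion architecture.

```python
import math

def late_fuse(ecg_emb, echo_emb, weights, bias):
    """Concatenate per-modality embeddings and score them with a linear
    head plus sigmoid - a minimal stand-in for a learned fusion layer."""
    x = ecg_emb + echo_emb                     # feature concatenation
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 / (1 + math.exp(-z))              # P(HCM | ECG, echo)

# Toy 3-dimensional embeddings per modality with hand-picked weights.
p = late_fuse([0.9, -0.2, 0.4], [1.1, 0.0, -0.5],
              weights=[0.8, 0.1, 0.3, 0.7, 0.2, -0.4], bias=-1.0)
assert 0.0 < p < 1.0
```

In a real model both embedding extractors and the fusion head are trained jointly; the point of late fusion is that each modality contributes evidence the other lacks.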
Affiliation(s)
- Pablo Amador Sanchez: Department of Medicine, Division of Cardiology, Stanford University, Stanford, California, USA
- Marco Perez: Department of Medicine, Division of Cardiology, Stanford University, Stanford, California, USA
- David Ouyang: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, USA; Division of Artificial Intelligence in Medicine, Department of Medicine, Cedars-Sinai Medical Center, USA
- Euan A Ashley: Corresponding author. Tel: 650 498-4900; Fax: 650 498-7452
209. Automated video analysis of emotion and dystonia in epileptic seizures. Epilepsy Res 2022; 184:106953. DOI: 10.1016/j.eplepsyres.2022.106953.
210. Wongvibulsin S, Frech TM, Chren MM, Tkaczyk ER. Expanding Personalized, Data-Driven Dermatology: Leveraging Digital Health Technology and Machine Learning to Improve Patient Outcomes. JID Innovations 2022; 2:100105. PMID: 35462957; PMCID: PMC9026581; DOI: 10.1016/j.xjidi.2022.100105.
Abstract
The current revolution of digital health technology and machine learning offers enormous potential to improve patient care. Nevertheless, it is essential to recognize that dermatology requires an approach different from those of other specialties. For many dermatological conditions, there is a lack of standardized methodology for quantitatively tracking disease progression and treatment response (clinimetrics). Furthermore, dermatological diseases impact patients in complex ways, some of which can be measured only through patient reports (psychometrics). New tools using digital health technology (e.g., smartphone applications, wearable devices) can aid in capturing both clinimetric and psychometric variables over time. With these data, machine learning can inform efforts to improve health care by, for example, the identification of high-risk patient groups, optimization of treatment strategies, and prediction of disease outcomes. We use the term personalized, data-driven dermatology to refer to the use of comprehensive data to inform individual patient care and improve patient outcomes. In this paper, we provide a framework that includes data from multiple sources, leverages digital health technology, and uses machine learning. Although this framework is applicable broadly to dermatological conditions, we use the example of a serious inflammatory skin condition, chronic cutaneous graft-versus-host disease, to illustrate personalized, data-driven dermatology.
Affiliation(s)
- Shannon Wongvibulsin: Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Department of Medicine, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
- Tracy M. Frech: Division of Rheumatology and Immunology, Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA; VA Tennessee Valley Healthcare System, U.S. Department of Veterans Affairs, Nashville, Tennessee, USA
- Mary-Margaret Chren: Department of Dermatology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Eric R. Tkaczyk: VA Tennessee Valley Healthcare System, U.S. Department of Veterans Affairs, Nashville, Tennessee, USA; Department of Dermatology, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Biomedical Engineering, School of Engineering, Vanderbilt University, Nashville, Tennessee, USA
211. Crowson MG, Moukheiber D, Arévalo AR, Lam BD, Mantena S, Rana A, Goss D, Bates DW, Celi LA. A systematic review of federated learning applications for biomedical data. PLOS Digital Health 2022; 1:e0000033. PMID: 36812504; PMCID: PMC9931322; DOI: 10.1371/journal.pdig.0000033.
Abstract
OBJECTIVES Federated learning (FL) allows multiple institutions to collaboratively develop a machine learning algorithm without sharing their data. Organizations instead share only model parameters, allowing them to benefit from a model built with a larger dataset while maintaining the privacy of their own data. We conducted a systematic review to evaluate the current state of FL in healthcare and discuss the limitations and promise of this technology. METHODS We conducted a literature search using PRISMA guidelines. At least two reviewers assessed each study for eligibility and extracted a predetermined set of data. The quality of each study was determined using the TRIPOD guideline and PROBAST tool. RESULTS 13 studies were included in the full systematic review. Most were in the field of oncology (6 of 13; 46.2%), followed by radiology (5 of 13; 38.5%). The majority evaluated imaging results, performed a binary classification prediction task via offline learning (n = 12; 92.3%), and used a centralized topology, aggregation-server workflow (n = 10; 76.9%). Most studies were compliant with the major reporting requirements of the TRIPOD guidelines. In all, 6 of 13 (46.2%) studies were judged at high risk of bias using the PROBAST tool, and only 5 studies used publicly available data. CONCLUSION Federated learning is a growing field in machine learning with many promising uses in healthcare. Few studies have been published to date. Our evaluation found that investigators can do more to address the risk of bias and increase transparency by adding steps for data homogeneity or sharing required metadata and code.
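The centralized aggregation-server workflow that most included studies used can be sketched as a dataset-size-weighted parameter average (FedAvg-style); the two-parameter client "models" below are toys, not any study's data.

```python
def fed_avg(client_weights, client_sizes):
    """Aggregate per-client model parameters on a central server as a
    dataset-size-weighted average (FedAvg-style)."""
    total = sum(client_sizes)
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0]))]

# Three institutions share only their (toy, two-parameter) model updates,
# never their patient records.
updates = [[0.2, -1.0], [0.4, -0.8], [0.6, -0.6]]
sizes = [100, 100, 200]
print([round(v, 2) for v in fed_avg(updates, sizes)])  # → [0.45, -0.75]
```

In training this average is broadcast back to the clients and the cycle repeats, so the raw data never leaves each institution.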
Affiliation(s)
- Matthew G. Crowson: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA; Department of Otolaryngology-Head & Neck Surgery, Harvard Medical School, Massachusetts, USA
- Dana Moukheiber: Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Aldo Robles Arévalo: IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal; Data & Analytics, NTT DATA Portugal, Lisbon, Portugal
- Barbara D. Lam: Department of Hematology & Oncology, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
- Sreekar Mantena: Harvard College, Boston, Massachusetts, USA
- Aakanksha Rana: Massachusetts Institute of Technology, Boston, Massachusetts, USA
- Deborah Goss: Department of Otolaryngology-Head & Neck Surgery, Massachusetts Eye & Ear, Boston, Massachusetts, USA
- David W. Bates: Division of General Internal Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA; Department of Health Policy and Management, Harvard T. H. Chan School of Public Health, Boston, Massachusetts, USA
- Leo Anthony Celi: Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA; Division of Pulmonary, Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts, USA
212. Voigt I, Boeckmann M, Bruder O, Wolf A, Schmitz T, Wieneke H. A deep neural network using audio files for detection of aortic stenosis. Clin Cardiol 2022; 45:657-663. PMID: 35438211; PMCID: PMC9175247; DOI: 10.1002/clc.23826.
Abstract
BACKGROUND Although aortic stenosis (AS) is the most common valvular heart disease in the western world, many affected patients remain undiagnosed. Auscultation is a readily available screening tool for AS. However, it requires a high level of professional expertise. HYPOTHESIS An AI algorithm can detect AS using audio files with the same accuracy as experienced cardiologists. METHODS A deep neural network (DNN) was trained by preprocessed audio files of 100 patients with AS and 100 controls. The DNN's performance was evaluated with a test data set of 40 patients. The primary outcome measures were sensitivity, specificity, and F1-score. Results of the DNN were compared with the performance of cardiologists, residents, and medical students. RESULTS Eighteen percent of patients without AS and 22% of patients with AS showed an additional moderate or severe mitral regurgitation. The DNN showed a sensitivity of 0.90 (0.81-0.99), a specificity of 1, and an F1-score of 0.95 (0.89-1.0) for the detection of AS. In comparison, we calculated an F1-score of 0.94 (0.86-1.0) for cardiologists, 0.88 (0.78-0.98) for residents, and 0.88 (0.78-0.98) for students. CONCLUSIONS The present study shows that deep learning-guided auscultation predicts significant AS with similar accuracy as cardiologists. The results of this pilot study suggest that AI-assisted auscultation may help general practitioners without special cardiology training in daily practice.
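Although the study's exact preprocessing is not reproduced here, heart-sound audio is commonly converted frame by frame into a frequency-domain (spectrogram-like) representation before being fed to a DNN; the naive DFT below illustrates that transform on a synthetic tone.

```python
import math

def dft_magnitude(frame):
    """Naive DFT magnitude spectrum of one audio frame - the kind of
    time-to-frequency transform used to build spectrogram-like DNN inputs."""
    n = len(frame)
    return [abs(sum(frame[t] * complex(math.cos(2 * math.pi * k * t / n),
                                       -math.sin(2 * math.pi * k * t / n))
                    for t in range(n)))
            for k in range(n // 2)]

# A synthetic 100 Hz tone sampled at 800 Hz: with a 64-sample frame the
# energy lands in bin 100 / (800 / 64) = 8.
frame = [math.sin(2 * math.pi * 100 * t / 800) for t in range(64)]
spectrum = dft_magnitude(frame)
assert max(range(len(spectrum)), key=spectrum.__getitem__) == 8
```

Stacking such per-frame spectra over time yields a 2-D spectrogram, which lets an image-style network look for the high-frequency systolic murmur characteristic of AS.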
Affiliation(s)
- Ingo Voigt, Marc Boeckmann, Oliver Bruder, Alexander Wolf, Thomas Schmitz, Heinrich Wieneke
  - Department of Cardiology and Angiology, Contilia Heart and Vascular Center, Elisabeth-Krankenhaus Essen, Essen, Germany
213
Artificial intelligence in gastrointestinal and hepatic imaging: past, present and future scopes. Clin Imaging 2022; 87:43-53. [DOI: 10.1016/j.clinimag.2022.04.007]
214
Tsiknakis N, Savvidaki E, Manikis GC, Gotsiou P, Remoundou I, Marias K, Alissandrakis E, Vidakis N. Pollen Grain Classification Based on Ensemble Transfer Learning on the Cretan Pollen Dataset. Plants 2022; 11:919. [PMID: 35406899] [PMCID: PMC9002917] [DOI: 10.3390/plants11070919]
Abstract
Pollen identification is an important task for the botanical certification of honey. It is performed via thorough microscopic examination of the pollen present in honey, a process called melissopalynology. However, manual examination of the images is laborious, time-consuming and subject to inter- and intra-observer variability. In this study, we investigated the applicability of deep learning models for the classification of pollen-grain images into 20 pollen types, based on the Cretan Pollen Dataset. In particular, we applied transfer and ensemble learning methods to achieve an accuracy of 97.5%, a sensitivity of 96.9%, a precision of 97%, an F1 score of 96.89% and an AUC of 0.9995. However, in a preliminary case study, when we applied the best-performing model to honey-based pollen-grain images, it performed poorly, scoring only 0.02 above random guessing (an AUC of 0.52, where 0.50 is chance). This indicates that the model should be further fine-tuned on honey-based pollen-grain images to increase its effectiveness on such data.
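The ensemble strategy described above can be illustrated as soft voting over the class probabilities of several transfer-learned networks. This is a generic sketch, not the paper's actual pipeline: the three-model setup, the probability values, and the restriction to 3 of the 20 classes are all made-up placeholders.

```python
import numpy as np

def soft_vote(prob_sets):
    """Ensemble prediction by averaging per-class probabilities.

    prob_sets: list of (n_samples, n_classes) probability arrays,
    one array per fine-tuned model.
    """
    avg = np.mean(prob_sets, axis=0)   # element-wise mean over models
    return avg.argmax(axis=1)          # ensemble class per sample

# Three hypothetical models scoring one pollen image over 3 classes:
p1 = np.array([[0.7, 0.2, 0.1]])
p2 = np.array([[0.6, 0.3, 0.1]])
p3 = np.array([[0.2, 0.5, 0.3]])
labels = soft_vote([p1, p2, p3])  # class 0 wins: mean probs 0.50, 0.33, 0.17
```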
Affiliation(s)
- Nikos Tsiknakis (corresponding author)
  - Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Elisavet Savvidaki
  - Department of Agriculture, Hellenic Mediterranean University, 71004 Heraklion, Greece
- Georgios C. Manikis
  - Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
- Panagiota Gotsiou
  - Department of Food Quality and Chemistry of Natural Products, Mediterranean Agronomic Institute of Chania (M.A.I.Ch./CIHEAM), 73100 Chania, Greece
- Ilektra Remoundou
  - Department of Food Quality and Chemistry of Natural Products, Mediterranean Agronomic Institute of Chania (M.A.I.Ch./CIHEAM), 73100 Chania, Greece
- Kostas Marias
  - Computational Biomedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece
  - Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71004 Heraklion, Greece
- Eleftherios Alissandrakis
  - Department of Agriculture, Hellenic Mediterranean University, 71004 Heraklion, Greece
- Nikolas Vidakis
  - Department of Electrical and Computer Engineering, Hellenic Mediterranean University, 71004 Heraklion, Greece
215
Plant Viral Disease Detection: From Molecular Diagnosis to Optical Sensing Technology—A Multidisciplinary Review. Remote Sens 2022; 14:1542. [DOI: 10.3390/rs14071542]
Abstract
Plant viral diseases cause productivity and economic losses in agriculture, necessitating accurate detection for effective control. Lab-based molecular testing is the gold standard for reliable and accurate diagnostics; however, these tests are expensive, time-consuming, and labour-intensive, especially at field scale with large numbers of samples. Recent advances in optical remote sensing offer tremendous potential for non-destructive diagnostics of plant viral diseases at large spatial scales. This review provides an overview of traditional diagnostic methods, followed by a comprehensive description of optical sensing technology, including camera systems, platforms, and spectral data analysis, for detecting plant viral diseases. The paper is organized into six multidisciplinary sections: (1) the impact of plant viral disease on plant physiology and the consequent phenotypic changes; (2) direct diagnostic methods; (3) traditional indirect detection methods; (4) optical sensing technologies; (5) data processing techniques and modelling for disease detection; and (6) a comparison of costs. Finally, current challenges and novel ideas in optical sensing for detecting plant viruses are discussed.
216
EDNC: Ensemble Deep Neural Network for COVID-19 Recognition. Tomography 2022; 8:869-890. [PMID: 35314648] [PMCID: PMC8938826] [DOI: 10.3390/tomography8020071]
Abstract
Automatic recognition of COVID-19 is critical during the present pandemic because it relieves healthcare staff of the burden of screening for infection. Previous studies have shown that deep learning algorithms can aid in the diagnosis of patients with potential COVID-19 infection. However, the accuracy of current COVID-19 recognition models is relatively low. Motivated by this fact, we propose three deep learning architectures, F-EDNC, FC-EDNC, and O-EDNC, to quickly and accurately detect COVID-19 infection from chest computed tomography (CT) images. Sixteen deep learning neural networks were modified and trained to recognize COVID-19 patients using transfer learning and 2458 chest CT images. The proposed EDNC was then developed from three of the sixteen modified pre-trained models to improve the performance of COVID-19 recognition. The results suggest that the F-EDNC method significantly enhanced the recognition of COVID-19 infection, with 97.75% accuracy, followed by FC-EDNC and O-EDNC (97.55% and 96.12%, respectively), which is superior to most current COVID-19 recognition models. Furthermore, a localhost web application was built that enables users to upload their chest CT scans and obtain their COVID-19 results automatically. This accurate, fast, and automatic COVID-19 recognition system will relieve medical professionals of the stress of screening for COVID-19 infection.
217
Jeong J, Moradzadeh A, Aluru NR. Extended DeepILST for Various Thermodynamic States and Applications in Coarse-Graining. J Phys Chem A 2022; 126:1562-1570. [PMID: 35201773] [DOI: 10.1021/acs.jpca.1c10865]
Abstract
Molecular dynamics (MD) simulations are widely used to obtain the microscopic properties of atomistic systems when the interatomic or coarse-grained potential is known. In many practical situations, however, the interatomic or coarse-grained potential must first be predicted, which is a tremendous challenge. Many approaches have been developed to predict potential parameters using various techniques, including the relative entropy method and integral equation theory, but these methods lack transferability and are limited to a specific range of thermodynamic states. Recently, data-driven and machine learning approaches have been developed to overcome such limitations. In this study, we expand the range of thermodynamic states used to train deep inverse liquid-state theory (DeepILST), a deep learning framework for solving the inverse problem of liquid-state theory. We also assess the performance of DeepILST in coarse-graining various multiatom molecules and identify the molecular characteristics that affect the coarse-graining performance of DeepILST.
Affiliation(s)
- J. Jeong, A. Moradzadeh
  - Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, United States
- N. R. Aluru
  - Walker Department of Mechanical Engineering, Oden Institute for Computational Engineering & Sciences, The University of Texas at Austin, Austin, Texas 78712, United States
218
Li S, Hickey GW, Lander MM, Kanwar MK. Artificial Intelligence and Mechanical Circulatory Support. Heart Fail Clin 2022; 18:301-309. [DOI: 10.1016/j.hfc.2021.11.005]
219
Broadbent A, Grote T. Can Robots Do Epidemiology? Machine Learning, Causal Inference, and Predicting the Outcomes of Public Health Interventions. Philos Technol 2022; 35:14. [PMID: 35251906] [PMCID: PMC8881939] [DOI: 10.1007/s13347-022-00509-3]
Abstract
This paper argues that machine learning (ML) and epidemiology are on collision course over causation. The discipline of epidemiology lays great emphasis on causation, while ML research does not. Some epidemiologists have proposed imposing what amounts to a causal constraint on ML in epidemiology, requiring it either to engage in causal inference or restrict itself to mere projection. We whittle down the issues to the question of whether causal knowledge is necessary for underwriting predictions about the outcomes of public health interventions. While there is great plausibility to the idea that it is, conviction that something is impossible does not by itself motivate a constraint to forbid trying. We disambiguate the possible motivations for such a constraint into definitional, metaphysical, epistemological, and pragmatic considerations and argue that “Proceed with caution” (rather than “Stop!”) is the outcome of each. We then argue that there are positive reasons to proceed, albeit cautiously. Causal inference enforces existing classification schema prior to the testing of associational claims (causal or otherwise), but associations and classification schema are more plausibly discovered (rather than tested or justified) in a back-and-forth process of gaining reflective equilibrium. ML instantiates this kind of process, we argue, and thus offers the welcome prospect of uncovering meaningful new concepts in epidemiology and public health—provided it is not causally constrained.
Affiliation(s)
- Alex Broadbent
  - Department of Philosophy, Durham University, Durham, England
  - Department of Philosophy, University of Johannesburg, Johannesburg, South Africa
- Thomas Grote
  - Cluster of Excellence: Machine Learning for Science, University of Tübingen, Tübingen, Germany
220
An Overview of Medical Electronic Hardware Security and Emerging Solutions. Electronics 2022; 11:610. [DOI: 10.3390/electronics11040610]
Abstract
Electronic healthcare technology is widespread around the world and creates massive potential to improve clinical outcomes and transform care delivery. However, there are increasing concerns about the cyber vulnerabilities of medical tools, malicious medical errors, and security attacks on healthcare data and devices. Increased connectivity to existing computer networks has exposed medical devices and systems, and the data they communicate, to new cybersecurity vulnerabilities. Adversaries leverage state-of-the-art technologies, in particular artificial intelligence and computer vision-based techniques, to launch stronger and more detrimental attacks on medical targets. The medical domain is an attractive area for cybercrime for two fundamental reasons: (a) it is a rich source of valuable and sensitive data; and (b) its protection and defensive mechanisms are weak and ineffective. Attacks aim to steal health information from patients, manipulate medical information and queries, and maliciously change medical diagnoses, decisions, and prescriptions. A successful attack in the medical domain can cause serious damage to the patient's health and even death. Cybersecurity is therefore critical to patient safety and every aspect of the medical domain, yet it has not been studied sufficiently. To tackle this problem, new human- and computer-based countermeasures are being researched and proposed for medical attacks, using the most effective software and hardware technologies, such as artificial intelligence and computer vision. This review provides insights into the novel and existing solutions in the literature that mitigate cyber risks, errors, damage, and threats in the medical domain. We performed a scoping review analyzing the four major elements in this area (in order from a medical perspective): (1) medical errors; (2) security weaknesses of medical devices at the software and hardware level; (3) artificial intelligence and/or computer vision in medical applications; and (4) cyber attacks and defenses in the medical domain. Artificial intelligence and computer vision are key topics in this review, and their usage in all four elements is discussed. The review outcome delivers solutions by building and evaluating the connections among these elements, in order to serve as a beneficial guideline for medical electronic hardware security.
221
Guérinot C, Marcon V, Godard C, Blanc T, Verdier H, Planchon G, Raimondi F, Boddaert N, Alonso M, Sailor K, Lledo PM, Hajj B, El Beheiry M, Masson JB. New Approach to Accelerated Image Annotation by Leveraging Virtual Reality and Cloud Computing. Front Bioinform 2022; 1:777101. [PMID: 36303792] [PMCID: PMC9580868] [DOI: 10.3389/fbinf.2021.777101]
Abstract
Three-dimensional imaging is at the core of medical imaging and is becoming a standard in biological research. As a result, there is an increasing need to visualize, analyze and interact with data in a natural three-dimensional context. By combining stereoscopy and motion tracking, commercial virtual reality (VR) headsets provide a solution to this critical visualization challenge by allowing users to view volumetric image stacks in a highly intuitive fashion. While optimizing the visualization and interaction process in VR remains an active topic, one of the most pressing issues is how to utilize VR for data annotation and analysis. Annotating data is often a required step for training machine learning algorithms, and enhancing the ability to annotate complex three-dimensional data is especially valuable in biological research, where newly acquired data may come in limited quantities. Similarly, medical data annotation is often time-consuming and requires expert knowledge to identify structures of interest correctly. Moreover, simultaneous data analysis and visualization in VR is computationally demanding. Here, we introduce a new procedure to visualize, interact with, annotate and analyze data by combining VR with cloud computing. VR is leveraged to provide natural interactions with volumetric representations of experimental imaging data. In parallel, cloud computing performs costly computations to accelerate data annotation with minimal input required from the user. We demonstrate multiple proof-of-concept applications of our approach on volumetric fluorescence microscopy images of mouse neurons and on tumor and organ annotations in medical images.
Affiliation(s)
- Corentin Guérinot
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
  - Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
  - Sorbonne Université, Collège Doctoral, Paris, France
- Valentin Marcon
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Charlotte Godard
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
  - École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Thomas Blanc
  - Sorbonne Université, Collège Doctoral, Paris, France
  - Laboratoire Physico-Chimie, Institut Curie, PSL Research University, CNRS UMR168, Paris, France
- Hippolyte Verdier
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
  - Histopathology and Bio-Imaging Group, Sanofi R&D, Vitry-Sur-Seine, France
  - Université de Paris, UFR de Physique, Paris, France
- Guillaume Planchon
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Francesca Raimondi
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
  - Unité Médicochirurgicale de Cardiologie Congénitale et Pédiatrique, Centre de Référence des Malformations Cardiaques Congénitales Complexes M3C, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
  - Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
  - UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Nathalie Boddaert
  - Pediatric Radiology Unit, Hôpital Universitaire Necker-Enfants Malades, Université de Paris, Paris, France
  - UMR-1163 Institut Imagine, Hôpital Universitaire Necker-Enfants Malades, AP-HP, Paris, France
- Mariana Alonso
  - Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Kurt Sailor
  - Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Pierre-Marie Lledo
  - Perception and Memory Unit, CNRS UMR3571, Institut Pasteur, Paris, France
- Bassam Hajj
  - Sorbonne Université, Collège Doctoral, Paris, France
  - École Doctorale Physique en Île-de-France, PSL University, Paris, France
- Mohamed El Beheiry
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
- Jean-Baptiste Masson
  - Decision and Bayesian Computation, USR 3756 (C3BI/DBC) & Neuroscience Department CNRS UMR 3751, Université de Paris, Institut Pasteur, Paris, France
222
Fat-based studies for computer-assisted screening of child obesity using thermal imaging based on deep learning techniques: a comparison with quantum machine learning approach. Soft Comput 2022. [DOI: 10.1007/s00500-021-06668-3]
223
Tronstad C, Amini M, Bach DR, Martinsen OG. Current trends and opportunities in the methodology of electrodermal activity measurement. Physiol Meas 2022; 43. [PMID: 35090148] [DOI: 10.1088/1361-6579/ac5007]
Abstract
Electrodermal activity (EDA) has been measured in the laboratory since the late 1800s. Although the influence of sudomotor nerve activity and the sympathetic nervous system on EDA is well established, the mechanisms underlying EDA signal generation are not completely understood. Owing to the simplicity of the instrumentation and to modern electronics, these measurements have recently seen a transfer from the laboratory to wearable devices, sparking numerous novel applications while bringing along both challenges and new opportunities. In addition to developments in electronics and miniaturization, current trends in material technology and manufacturing have sparked innovations in electrode technologies, and trends in data science such as machine learning and sensor fusion are expanding the ways that measurement data can be processed and utilized. Although challenges remain for the quality of wearable EDA measurement, ongoing research and development may shorten the quality gap between wearable EDA and standardized recordings in the laboratory. In this topical review, we provide an overview of the basics of EDA measurement, discuss the challenges and opportunities of wearable EDA, and review recent developments in instrumentation, material technology, signal processing, modeling and data science tools that may advance the field of EDA research and applications over the coming years.
Affiliation(s)
- Christian Tronstad
  - Department of Clinical and Biomedical Engineering, Oslo University Hospital, Sognsvannsveien 20, 0372 Oslo, Norway
- Maryam Amini
  - Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Sem Sælands vei 24, 0371 Oslo, Norway
- Dominik R. Bach
  - Wellcome Centre for Human Neuroimaging, University College London, 12 Queen Square, London WC1N 3AZ, United Kingdom
224
Uncharted Waters of Machine and Deep Learning for Surgical Phase Recognition in Neurosurgery. World Neurosurg 2022; 160:4-12. [PMID: 35026457] [DOI: 10.1016/j.wneu.2022.01.020]
Abstract
Recent years have witnessed artificial intelligence (AI) make meteoric leaps in both medicine and surgery, bridging the gap between the capabilities of humans and machines. The digitization of operating rooms and the creation of massive quantities of data have paved the way for machine learning and computer vision applications in surgery. Surgical phase recognition (SPR) is a newly emerging technology that uses data derived from operative videos to train machine and deep learning algorithms to identify the phases of surgery. Advancement of this technology will be key to establishing context-aware surgical systems in the future. By automatically recognizing and evaluating the current surgical scenario, these intelligent systems can provide intraoperative decision support, improve operating room efficiency, assess surgical skills, and aid in surgical training and education. Still in its infancy, SPR has mainly been studied in laparoscopic surgery; research within neurosurgery, by contrast, is starkly lacking. Given the high-tech and rapidly advancing nature of neurosurgery, we believe SPR has tremendous untapped potential in this field. Herein, we present an overview of SPR technology, its potential applications in neurosurgery, and the challenges that lie ahead.
225
Wang Q, Liu F, Zhao X, Tan Q. Session interest model for CTR prediction based on self-attention mechanism. Sci Rep 2022; 12:252. [PMID: 34996985] [PMCID: PMC8741903] [DOI: 10.1038/s41598-021-03871-y]
Abstract
Click-through rate (CTR) prediction, which aims to predict the probability of a user clicking on an item, is critical to online advertising. How to capture the user's evolving interests from the user behavior sequence is an important issue in CTR prediction. However, most existing models ignore the fact that the sequence is composed of sessions: user behavior can be divided into different sessions according to the time of occurrence, and behaviors are highly correlated within each session but largely unrelated across sessions. We propose an effective model for CTR prediction, named Session Interest Model via Self-Attention (SISA). First, we divide the user's sequential behavior into sessions, and a self-attention mechanism with bias coding is used to model each session. Since different session interests may be related to each other or follow a sequential pattern, we then utilize a gated recurrent unit (GRU) to capture the interaction and evolution of the user's historical session interests in the session interest extractor module. A local activation unit and a GRU are then used to aggregate the session interests with respect to the target ad, forming the final representation of the behavior sequence in the session interest interacting module. Experimental results show that the SISA model performs better than other models.
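The per-session modeling step can be illustrated with a minimal scaled dot-product self-attention sketch. This is a simplified assumption-laden stand-in for the paper's mechanism: it is single-head, omits the bias coding and learned query/key/value projections, and the shapes and random embeddings are made up.

```python
import numpy as np

def self_attention(x):
    """x: (seq_len, d) matrix of behavior embeddings for one session.

    Returns the attended representation: each row is a softmax-weighted
    combination of all behavior embeddings in the session.
    """
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)                      # pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return weights @ x

# One session of 5 behaviors with 8-dimensional embeddings:
session = np.random.default_rng(0).normal(size=(5, 8))
out = self_attention(session)                          # shape (5, 8)
```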
Affiliation(s)
- Qianqian Wang
  - Shandong Women's University, Jinan, China
  - Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Jinan, China
226
Keenan TDL, Chen Q, Agrón E, Tham YC, Lin Goh JH, Lei X, Ng YP, Liu Y, Xu X, Cheng CY, Bikbov MM, Jonas JB, Bhandari S, Broadhead GK, Colyer MH, Corsini J, Cousineau-Krieger C, Gensheimer W, Grasic D, Lamba T, Magone MT, Maiberger M, Oshinsky A, Purt B, Shin SY, Thavikulwat AT, Lu Z, Chew EY. Deep Learning Automated Diagnosis and Quantitative Classification of Cataract Type and Severity. Ophthalmology 2022; 129:571-584. [PMID: 34990643] [PMCID: PMC9038670] [DOI: 10.1016/j.ophtha.2021.12.017]
Abstract
PURPOSE To develop and evaluate deep learning models that perform automated diagnosis and quantitative classification of age-related cataract, including all three anatomical types, from anterior segment photographs. DESIGN Application of deep learning models to the Age-Related Eye Disease Study (AREDS) dataset. PARTICIPANTS 18,999 photographs (6,333 triplets) from longitudinal follow-up of 1,137 eyes (576 AREDS participants). METHODS Deep learning models were trained to detect and quantify nuclear cataract (NS; scale 0.9-7.1) from 45-degree slit-lamp photographs and cortical (CLO; scale 0-100%) and posterior subcapsular (PSC; scale 0-100%) cataract from retroillumination photographs. Model performance was compared with that of 14 ophthalmologists and 24 medical students. The ground truth labels came from reading center grading. MAIN OUTCOME MEASURES Mean squared error (MSE). RESULTS On the full test set, mean MSE values for the deep learning models were 0.23 (SD 0.01) for NS, 13.1 (SD 1.6) for CLO, and 16.6 (SD 2.4) for PSC. On a subset of the test set (substantially enriched for positive cases of CLO and PSC), the mean MSE for NS was 0.23 (SD 0.02) for the models, compared with 0.98 (SD 0.23; p=0.000001) for the ophthalmologists and 1.24 (SD 0.33; p=0.000005) for the medical students. For CLO, mean MSE values were 53.5 (SD 14.8), compared with 134.9 (SD 89.9; p=0.003) and 422.0 (SD 944.4; p=0.0007), respectively. For PSC, mean MSE values were 171.9 (SD 38.9), compared with 176.8 (SD 98.0; p=0.67) and 395.2 (SD 632.5; p=0.18), respectively. In external validation on the Singapore Malay Eye Study (sampled to reflect the distribution of cataract severity in AREDS), the MSE was 1.27 for NS and 25.5 for PSC. CONCLUSIONS A deep learning framework was able to perform automated and quantitative classification of cataract severity for all three types of age-related cataract. For the two most common types (NS and CLO), its accuracy was significantly superior to that of ophthalmologists; for the least common type (PSC), the accuracy was similar. The framework may have wide potential applications in both clinical and research domains and may increase the accessibility of cataract assessment globally. The code and models are publicly available at https://XXX.
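The main outcome measure above, mean squared error between predicted and reading-center severity grades, is straightforward to compute. The grade values below are made-up placeholders on the NS scale, not data from AREDS.

```python
def mse(pred, truth):
    """Mean squared error between predicted and ground-truth grades."""
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)

# Hypothetical nuclear-cataract grades (scale 0.9-7.1) for four eyes:
predicted_ns = [2.0, 3.1, 4.0, 5.5]   # model predictions
graded_ns = [2.3, 3.0, 4.5, 5.0]      # reading-center ground truth
error = mse(predicted_ns, graded_ns)  # (0.09 + 0.01 + 0.25 + 0.25) / 4 = 0.15
```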
Affiliation(s)
- Tiarnan D L Keenan
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA.
- Qingyu Chen
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
- Elvira Agrón
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Yih-Chung Tham
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore
- Xiaofeng Lei
- Institute of High Performance Computing, A*STAR, Singapore
- Yi Pin Ng
- Institute of High Performance Computing, A*STAR, Singapore
- Yong Liu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
- Xinxing Xu
- Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore
- Ching-Yu Cheng
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, Singapore; Institute of High Performance Computing, A*STAR, Singapore; Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Jost B Jonas
- Department of Ophthalmology, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany; Institute of Molecular and Clinical Ophthalmology Basel, Switzerland; Privatpraxis Prof Jonas und Dr Panda-Jonas, Heidelberg, Germany
- Sanjeeb Bhandari
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Geoffrey K Broadhead
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Marcus H Colyer
- Department of Ophthalmology, Madigan Army Medical Center, Tacoma, WA, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Jonathan Corsini
- Warfighter Eye Center, Malcolm Grow Medical Clinics and Surgery Center, Joint Base Andrews, MD, USA
- Chantal Cousineau-Krieger
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- William Gensheimer
- White River Junction Veterans Affairs Medical Center, White River Junction, VT, USA; Geisel School of Medicine, Dartmouth, NH, USA
- David Grasic
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tania Lamba
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- M Teresa Magone
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Arnold Oshinsky
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- Boonkit Purt
- Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA; Department of Ophthalmology, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Soo Y Shin
- Washington DC Veterans Affairs Medical Center, Washington DC, USA
- Alisa T Thavikulwat
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Zhiyong Lu
- National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA.
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, MD, USA.
227
Abstract
In recent decades, healthcare organizations around the world have increasingly appreciated the value of information technologies for a variety of applications. Three new technological advancements impacting smart health are the metaverse, artificial intelligence (AI), and data science. The metaverse sits at the intersection of three major technologies: AI, augmented reality (AR), and virtual reality (VR). It offers new possibilities and potential that are still emerging. The increased work efficiency enabled by artificial intelligence and data science in hospitals not only improves patient care but also cuts costs and workload for healthcare providers. Artificial intelligence, coupled with machine learning, is transforming the healthcare industry. The availability of big data enables data scientists to use the data for descriptive, predictive, and prescriptive analytics. This article reviews multiple case studies and the literature on AI and data science applications in hospital administration. The article also presents unresolved research questions and challenges in the applications of the metaverse, AI, and data science in the smart health context. For researchers, in addition to providing a good synopsis of the development and applications of the metaverse, AI, and data science in the healthcare area, this article identifies possible future research directions. For practitioners, this article provides both hospital decision-makers and healthcare workers with practical guidelines and a smart health management model.
Affiliation(s)
- Yin Yang
- West China Hospital, Sichuan University, China
- Wen Xie
- West China Hospital, Sichuan University, China
- Yan Sun
- Nanyang Technological University, Singapore
228
Gomez-Ramirez J, Quilis-Sancho J, Fernandez-Blazquez MA. A Comparative Analysis of MRI Automated Segmentation of Subcortical Brain Volumes in a Large Dataset of Elderly Subjects. Neuroinformatics 2022; 20:63-72. [PMID: 33783668 DOI: 10.1007/s12021-021-09520-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/11/2021] [Indexed: 01/06/2023]
Abstract
In this study, we perform a comparative analysis of automated image segmentation of subcortical structures in the elderly brain. Manual segmentation is very time-consuming, and automated methods are gaining importance as a clinical tool for diagnosis. The two most commonly used software libraries for brain segmentation (FreeSurfer and FSL) are put to work on a large dataset of 4,028 magnetic resonance imaging (MRI) scans collected for this study. We find a lack of linear correlation between the segmentation volume estimates obtained from FreeSurfer and FSL. Moreover, FreeSurfer volume estimates tend to be larger than FSL estimates for the putamen, thalamus, amygdala, caudate, pallidum, hippocampus, and accumbens. Characterizing the performance of brain segmentation algorithms in large datasets such as the one presented here is a necessary step towards a partially or fully automated end-to-end neuroimaging workflow in both clinical and research settings.
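As a schematic illustration of the comparison this abstract describes (not the authors' code), the sketch below correlates per-subject volume estimates from two pipelines and checks for a systematic offset. The volume distributions are invented stand-ins; real values would come from FreeSurfer and FSL segmentation outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-subject volume estimates (mm^3) of one
# subcortical structure from the two pipelines; only weakly related,
# with FreeSurfer estimates systematically larger, mirroring the
# qualitative findings reported above.
freesurfer_vol = rng.normal(4200.0, 400.0, size=500)
fsl_vol = 0.1 * freesurfer_vol + rng.normal(3300.0, 380.0, size=500)

# Linear (Pearson) correlation between the two methods' estimates.
r = np.corrcoef(freesurfer_vol, fsl_vol)[0, 1]

# Mean systematic offset between the pipelines.
mean_diff = float(np.mean(freesurfer_vol - fsl_vol))

print(f"Pearson r = {r:.2f}, mean FreeSurfer - FSL difference = {mean_diff:.0f} mm^3")
```

A low `r` with a positive `mean_diff` would correspond to the pattern reported in the abstract: little linear agreement, with FreeSurfer estimates tending to exceed FSL's.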
Affiliation(s)
- Jaime Gomez-Ramirez
- Instituto de Salud Carlos III, Centro de Alzheimer Fundación Reina Sofía, Madrid, Spain.
- Javier Quilis-Sancho
- Instituto de Salud Carlos III, Centro de Alzheimer Fundación Reina Sofía, Madrid, Spain
229
Tiu E, Talius E, Patel P, Langlotz CP, Ng AY, Rajpurkar P. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nat Biomed Eng 2022; 6:1399-1406. [PMID: 36109605 PMCID: PMC9792370 DOI: 10.1038/s41551-022-00936-9] [Citation(s) in RCA: 67] [Impact Index Per Article: 33.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 08/07/2022] [Indexed: 01/14/2023]
Abstract
In tasks involving the interpretation of medical images, suitably trained machine-learning models often exceed the performance of medical experts. Yet such a high level of performance typically requires that the models be trained with relevant datasets that have been painstakingly annotated by experts. Here we show that a self-supervised model trained on chest X-ray images that lack explicit annotations performs pathology-classification tasks with accuracies comparable to those of radiologists. On an external validation dataset of chest X-rays, the self-supervised model outperformed a fully supervised model in the detection of three pathologies (out of eight), and the performance generalized to pathologies that were not explicitly annotated for model training, to multiple image-interpretation tasks and to datasets from multiple institutions.
Affiliation(s)
- Ekin Tiu
- Stanford University Department of Computer Science, Stanford, CA, USA; Department of Biomedical Informatics, Harvard University, Boston, MA, USA
- Ellie Talius
- Stanford University Department of Computer Science, Stanford, CA, USA; Department of Biomedical Informatics, Harvard University, Boston, MA, USA
- Pujan Patel
- Stanford University Department of Computer Science, Stanford, CA, USA; Department of Biomedical Informatics, Harvard University, Boston, MA, USA
- Curtis P. Langlotz
- AIMI Center, Stanford University, Palo Alto, CA, USA
- Andrew Y. Ng
- Stanford University Department of Computer Science, Stanford, CA, USA
- Pranav Rajpurkar
- Department of Biomedical Informatics, Harvard University, Boston, MA, USA
230
Kader R, Baggaley RF, Hussein M, Ahmad OF, Patel N, Corbett G, Dolwani S, Stoyanov D, Lovat LB. Survey on the perceptions of UK gastroenterologists and endoscopists to artificial intelligence. Frontline Gastroenterol 2022; 13:423-429. [PMID: 36046492 PMCID: PMC9380773 DOI: 10.1136/flgastro-2021-101994] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Accepted: 12/21/2021] [Indexed: 02/05/2023] Open
Abstract
BACKGROUND AND AIMS With the potential integration of artificial intelligence (AI) into clinical practice, it is essential to understand end users' perception of this novel technology. The aim of this study, which was endorsed by the British Society of Gastroenterology (BSG), was to evaluate the UK gastroenterology and endoscopy communities' views on AI. METHODS An online survey was developed and disseminated to gastroenterologists and endoscopists across the UK. RESULTS One hundred four participants completed the survey. Quality improvement in endoscopy (97%) and better endoscopic diagnosis (92%) were perceived as the most beneficial applications of AI to clinical practice. The most significant challenges were accountability for incorrect diagnoses (85%) and potential bias of algorithms (82%). A lack of guidelines (92%) was identified as the greatest barrier to adopting AI in routine clinical practice. Participants identified real-time endoscopic image diagnosis (95%) as a research priority for AI, while the most perceived significant barriers to AI research were funding (82%) and the availability of annotated data (76%). Participants consider the priorities for the BSG AI Task Force to be identifying research priorities (96%), guidelines for adopting AI devices in clinical practice (93%) and supporting the delivery of multicentre clinical trials (91%). CONCLUSION This survey has identified views from the UK gastroenterology and endoscopy community regarding AI in clinical practice and research, and identified priorities for the newly formed BSG AI Task Force.
Affiliation(s)
- Rawen Kader
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Department of Gastroenterology, University College London Hospitals NHS Foundation Trust, London, UK
- Rebecca F Baggaley
- Department of Respiratory Infections, University of Leicester, Leicester, UK
- Mohamed Hussein
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Department of Gastroenterology, University College London Hospitals NHS Foundation Trust, London, UK
- Omer F Ahmad
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Department of Gastroenterology, University College London Hospitals NHS Foundation Trust, London, UK
- Nisha Patel
- Department of Gastroenterology, Imperial College Healthcare NHS Trust, London, UK
- Gareth Corbett
- Department of Gastroenterology, Addenbrooke's Hospital, Cambridge, UK
- Sunil Dolwani
- Division of Population Medicine, School of Medicine, Cardiff University, Cardiff, UK
- Danail Stoyanov
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Laurence B Lovat
- Division of Surgery and Interventional Sciences, University College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
- Department of Gastroenterology, University College London Hospitals NHS Foundation Trust, London, UK
231
Machine learning & deep learning in data-driven decision making of drug discovery and challenges in high-quality data acquisition in the pharmaceutical industry. Future Med Chem 2021; 14:245-270. [PMID: 34939433 DOI: 10.4155/fmc-2021-0243] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
Predicting novel small-molecule bioactivities for target deconvolution and hit-to-lead optimization in drug discovery research requires suitable molecular representations. Previous reports have demonstrated that machine learning (ML) and deep learning (DL) have substantial implications in virtual screening, peptide synthesis, drug ADMET screening and biomarker discovery. These strategies can increase positive outcomes in the drug discovery process while limiting false-positive rates, and, given high-quality data acquisition, can do so cost-effectively and quickly. This review discusses recent updates in AI tools as a cheminformatics application in medicinal chemistry for data-driven decision making in drug discovery, as well as the challenges of high-quality data acquisition in the pharmaceutical industry while improving small-molecule bioactivities and properties.
232
Jones CM, Danaher L, Milne MR, Tang C, Seah J, Oakden-Rayner L, Johnson A, Buchlak QD, Esmaili N. Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study. BMJ Open 2021; 11:e052902. [PMID: 34930738 PMCID: PMC8689166 DOI: 10.1136/bmjopen-2021-052902] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
OBJECTIVES Artificial intelligence (AI) algorithms have been developed to detect imaging features on chest X-rays (CXRs); a comprehensive AI model capable of detecting 124 CXR findings was recently developed. The aim of this study was to evaluate the real-world usefulness of the model as a diagnostic assistance device for radiologists. DESIGN In this prospective real-world multicentre study, a group of radiologists used the model in their daily reporting workflow to report consecutive CXRs, recording their level of agreement with the model findings and whether the model significantly affected their reporting. SETTING The study took place at radiology clinics and hospitals within a large radiology network in Australia between November and December 2020. PARTICIPANTS Eleven consultant diagnostic radiologists of varying levels of experience participated in this study. PRIMARY AND SECONDARY OUTCOME MEASURES Proportion of CXR cases where use of the AI model led to significant material changes to the radiologist report, to patient management, or to imaging recommendations. Additionally, level of agreement between radiologists and the model findings, and radiologist attitudes towards the model, were assessed. RESULTS Of 2972 cases reviewed with the model, 92 cases (3.1%) had significant report changes, 43 cases (1.4%) had changed patient management and 29 cases (1.0%) had further imaging recommendations. In terms of agreement with the model, 2569 cases (86.5%) showed complete agreement; 390 cases (13%) had one or more findings rejected by the radiologist. There were 16 findings across 13 cases (0.5%) deemed to be missed by the model. Nine out of 10 radiologists felt their accuracy was improved with the model and were more positive towards AI poststudy.
CONCLUSIONS Use of an AI model in a real-world reporting environment significantly improved radiologist reporting and showed good agreement with radiologists, highlighting the potential for AI diagnostic support to improve clinical practice.
Affiliation(s)
- Catherine M Jones
- Annalise-AI, Sydney, New South Wales, Australia
- I-Med Radiology Network, Sydney, New South Wales, Australia
- Luke Danaher
- I-Med Radiology Network, Sydney, New South Wales, Australia
- Michael R Milne
- Annalise-AI, Sydney, New South Wales, Australia
- I-Med Radiology Network, Sydney, New South Wales, Australia
- Cyril Tang
- Annalise-AI, Sydney, New South Wales, Australia
- Jarrel Seah
- Annalise-AI, Sydney, New South Wales, Australia
- Department of Radiology, Alfred Health, Melbourne, Victoria, Australia
- Luke Oakden-Rayner
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, South Australia, Australia
- Quinlan D Buchlak
- Annalise-AI, Sydney, New South Wales, Australia
- School of Medicine, The University of Notre Dame Australia School of Medicine Sydney Campus, Darlinghurst, New South Wales, Australia
- Nazanin Esmaili
- School of Medicine, The University of Notre Dame Australia School of Medicine Sydney Campus, Darlinghurst, New South Wales, Australia
- Faculty of Engineering and IT, University of Technology Sydney, Sydney, New South Wales, Australia
233
Allam A, Feuerriegel S, Rebhan M, Krauthammer M. Analyzing Patient Trajectories With Artificial Intelligence. J Med Internet Res 2021; 23:e29812. [PMID: 34870606 PMCID: PMC8686456 DOI: 10.2196/29812] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 07/26/2021] [Accepted: 10/29/2021] [Indexed: 01/16/2023] Open
Abstract
In digital medicine, patient data typically record health events over time (eg, through electronic health records, wearables, or other sensing technologies) and thus form unique patient trajectories. Patient trajectories are highly predictive of the future course of diseases and therefore facilitate effective care. However, digital medicine often uses only limited patient data, consisting of health events from only a single or small number of time points while ignoring additional information encoded in patient trajectories. To analyze such rich longitudinal data, new artificial intelligence (AI) solutions are needed. In this paper, we provide an overview of the recent efforts to develop trajectory-aware AI solutions and provide suggestions for future directions. Specifically, we examine the implications for developing disease models from patient trajectories along the typical workflow in AI: problem definition, data processing, modeling, evaluation, and interpretation. We conclude with a discussion of how such AI solutions will allow the field to build robust models for personalized risk scoring, subtyping, and disease pathway discovery.
Affiliation(s)
- Ahmed Allam
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Biomedical Informatics, University Hospital of Zurich, Zurich, Switzerland
- Stefan Feuerriegel
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- ETH Artificial Intelligence Center, ETH Zurich, Zurich, Switzerland
- Ludwig Maximilian University of Munich, Munich, Germany
- Michael Rebhan
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Michael Krauthammer
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Biomedical Informatics, University Hospital of Zurich, Zurich, Switzerland
- Yale Center for Medical Informatics, Yale University School of Medicine, New Haven, CT, United States
234
Hügle T, Kalweit M. [Artificial intelligence-supported treatment in rheumatology : Principles, current situation and perspectives]. Z Rheumatol 2021; 80:914-927. [PMID: 34618208 PMCID: PMC8651581 DOI: 10.1007/s00393-021-01096-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/21/2021] [Indexed: 11/02/2022]
Abstract
Computer-guided clinical decision support systems have been finding their way into practice for some time, mostly integrated into electronic medical records. The primary goals are to improve the quality of treatment, save time and avoid errors. These are mostly rule-based algorithms that recognize drug interactions or provide reminder functions. Through artificial intelligence (AI), clinical decision support systems can be disruptively further developed. New knowledge is constantly being created from data through machine learning in order to predict the individual course of a patient's disease, identify phenotypes or support treatment decisions. Such algorithms already exist for rheumatological diseases. Automated image recognition and disease prediction in rheumatoid arthritis are the most advanced; however, these have not yet been sufficiently tested or integrated into existing decision support systems. Rather than dictating the AI-assisted choice of treatment to the doctor, future clinical decision systems are seen as hybrid decision support, always involving both the expert and the patient. There is also a great need for safeguards, in the form of comprehensible and auditable algorithms, to guarantee the quality and transparency of AI-assisted treatment recommendations in the long term.
Affiliation(s)
- Thomas Hügle
- Department of Rheumatology, Lausanne University Hospital (CHUV) and University of Lausanne, Avenue Pierre-Decker 4, 1011 Lausanne, Switzerland
- Maria Kalweit
- Institute of Computer Science, Albert-Ludwigs-Universität Freiburg, Georges-Koehler-Allee 80, 79110 Freiburg im Breisgau, Germany
235
Dujon AM, Vittecoq M, Bramwell G, Thomas F, Ujvari B. Machine learning is a powerful tool to study the effect of cancer on species and ecosystems. Methods Ecol Evol 2021. [DOI: 10.1111/2041-210x.13703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Antoine M. Dujon
- Geelong School of Life and Environmental Sciences, Centre for Integrative Ecology, Deakin University, Waurn Ponds, Victoria, Australia
- CREEC, UMR IRD 224-CNRS 5290, Université de Montpellier, Montpellier, France
- CANECEV, Centre de Recherches Ecologiques et Evolutives sur le cancer (CREEC), Montpellier, France
- Marion Vittecoq
- CREEC, UMR IRD 224-CNRS 5290, Université de Montpellier, Montpellier, France
- MIVEGEC, University of Montpellier, CNRS, IRD, Montpellier, France
- Tour du Valat, Research Institute for the Conservation of Mediterranean Wetlands, Arles, France
- Georgina Bramwell
- Geelong School of Life and Environmental Sciences, Centre for Integrative Ecology, Deakin University, Waurn Ponds, Victoria, Australia
- CANECEV, Centre de Recherches Ecologiques et Evolutives sur le cancer (CREEC), Montpellier, France
- Frédéric Thomas
- CREEC, UMR IRD 224-CNRS 5290, Université de Montpellier, Montpellier, France
- CANECEV, Centre de Recherches Ecologiques et Evolutives sur le cancer (CREEC), Montpellier, France
- MIVEGEC, University of Montpellier, CNRS, IRD, Montpellier, France
- Beata Ujvari
- Geelong School of Life and Environmental Sciences, Centre for Integrative Ecology, Deakin University, Waurn Ponds, Victoria, Australia
- CANECEV, Centre de Recherches Ecologiques et Evolutives sur le cancer (CREEC), Montpellier, France
236
Vajen B, Hänselmann S, Lutterloh F, Käfer S, Espenkötter J, Beening A, Bogin J, Schlegelberger B, Göhring G. Classification of fluorescent R-Band metaphase chromosomes using a convolutional neural network is precise and fast in generating karyograms of hematologic neoplastic cells. Cancer Genet 2021; 260-261:23-29. [PMID: 34839233 DOI: 10.1016/j.cancergen.2021.11.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Revised: 11/15/2021] [Accepted: 11/18/2021] [Indexed: 11/02/2022]
Abstract
Karyotype analysis has a great impact on the diagnosis, treatment and prognosis in hematologic neoplasms. The identification and characterization of chromosomes is a challenging process and requires experienced personnel. Artificial intelligence provides novel support tools; however, their safe and reliable application in diagnostics needs to be evaluated. Here, we present a novel laboratory approach to identify chromosomes in cancer cells using a convolutional neural network (CNN). The CNN identified the correct chromosome class for 98.8% of chromosomes, which led to a time saving of 42% for the karyotyping workflow. These results demonstrate that the CNN has potential application value in chromosome classification of hematologic neoplasms. This study contributes to the development of an automatic karyotyping platform.
Affiliation(s)
- Beate Vajen
- Department of Human Genetics, Hannover Medical School, Hannover 30625, Germany.
- Siegfried Hänselmann
- MetaSystems Hard and Software GmbH, Robert-Bosch-Str. 6, Altlussheim 68804, Germany
- Simon Käfer
- Department of Human Genetics, Hannover Medical School, Hannover 30625, Germany
- Anna Beening
- Department of Human Genetics, Hannover Medical School, Hannover 30625, Germany
- Jochen Bogin
- MetaSystems Hard and Software GmbH, Robert-Bosch-Str. 6, Altlussheim 68804, Germany
- Gudrun Göhring
- Department of Human Genetics, Hannover Medical School, Hannover 30625, Germany
237
Bao S, Li K, Yan C, Zhang Z, Qu J, Zhou M. Deep learning-based advances and applications for single-cell RNA-sequencing data analysis. Brief Bioinform 2021; 23:6444320. [PMID: 34849562 DOI: 10.1093/bib/bbab473] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 09/24/2021] [Accepted: 10/15/2021] [Indexed: 11/14/2022] Open
Abstract
The rapid development of single-cell RNA-sequencing (scRNA-seq) technology has raised significant computational and analytical challenges. The application of deep learning to scRNA-seq data analysis is rapidly evolving and can overcome the unique challenges in upstream (quality control and normalization) and downstream (cell-, gene- and pathway-level) analysis of scRNA-seq data. In the present study, recent advances and applications of deep learning-based methods, together with specific tools for scRNA-seq data analysis, were summarized. Moreover, the future perspectives and challenges of deep-learning techniques regarding the appropriate analysis and interpretation of scRNA-seq data were investigated. The present study aimed to provide evidence supporting the biomedical application of deep learning-based tools and may aid biologists and bioinformaticians in navigating this exciting and fast-moving area.
Affiliation(s)
- Siqi Bao
- School of Information and Communication Engineering, Hainan University, Haikou 570228, P. R. China; School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China; Hainan Institute of Real World Data, Haikou 570228, P. R. China
- Ke Li
- School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China
- Congcong Yan
- School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China
- Zicheng Zhang
- School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China
- Jia Qu
- School of Information and Communication Engineering, Hainan University, Haikou 570228, P. R. China; School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China; Hainan Institute of Real World Data, Haikou 570228, P. R. China
- Meng Zhou
- School of Biomedical Engineering, School of Ophthalmology & Optometry and Eye Hospital, Wenzhou Medical University, Wenzhou 325027, P. R. China
238
Trovato G, Russo M. Artificial Intelligence (AI) and Lung Ultrasound in Infectious Pulmonary Disease. Front Med (Lausanne) 2021; 8:706794. [PMID: 34901048 PMCID: PMC8655241 DOI: 10.3389/fmed.2021.706794] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Accepted: 11/01/2021] [Indexed: 12/12/2022] Open
Affiliation(s)
- Matteo Russo
- The European Medical Association (EMA), Brussels, Belgium
239
Designing clinically translatable artificial intelligence systems for high-dimensional medical imaging. NAT MACH INTELL 2021. [DOI: 10.1038/s42256-021-00399-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
240
Dunnmon J. Separating Hope from Hype: Artificial Intelligence Pitfalls and Challenges in Radiology. Radiol Clin North Am 2021; 59:1063-1074. [PMID: 34689874 DOI: 10.1016/j.rcl.2021.07.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Although recent scientific studies suggest that artificial intelligence (AI) could provide value in many radiology applications, much of the hard engineering work required to consistently realize this value in practice remains to be done. In this article, we summarize the various ways in which AI can benefit radiology practice, identify key challenges that must be overcome for those benefits to be delivered, and discuss promising avenues by which these challenges can be addressed.
Affiliation(s)
- Jared Dunnmon
- Department of Biomedical Data Science, Stanford University, 1265 Welch Rd, Stanford, CA 94305, USA.
241
Nguyen-Vo TH, Trinh QH, Nguyen L, Nguyen-Hoang PU, Nguyen TN, Nguyen DT, Nguyen BP, Le L. iCYP-MFE: Identifying Human Cytochrome P450 Inhibitors Using Multitask Learning and Molecular Fingerprint-Embedded Encoding. J Chem Inf Model 2021; 62:5059-5068. [PMID: 34672553 DOI: 10.1021/acs.jcim.1c00628] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The human cytochrome P450 (CYP) superfamily is responsible for the metabolism of both endogenous and exogenous compounds such as drugs, cellular metabolites, and toxins. The inhibition exerted on CYP enzymes is closely associated with adverse drug reactions, encompassing metabolic failures and induced side effects. In modern drug discovery, identification of potential CYP inhibitors is therefore highly essential. Alongside experimental approaches, numerous computational models have been proposed to address this biochemical issue. In this study, we introduce iCYP-MFE, a computational framework for virtual screening of CYP inhibitors toward the 1A2, 2C9, 2C19, 2D6, and 3A4 isoforms. iCYP-MFE contains a set of five robust, stable, and effective prediction models developed using multitask learning incorporated with molecular fingerprint-embedded features. The results show that multitask learning can remarkably leverage useful information from related tasks to promote global performance. Comparative analysis indicates that, relative to state-of-the-art methods, iCYP-MFE performs better on three tasks, comparably on one, and less effectively on one. The area under the receiver operating characteristic curve (AUC-ROC) and the area under the precision-recall curve (AUC-PR) were the two decisive metrics used for model evaluation. The prediction task for CYP2D6 inhibition achieves the highest AUC-ROC value of 0.93, while the prediction task for CYP1A2 inhibition obtains the highest AUC-PR value of 0.92. Substructural analysis preliminarily explains the nature of the CYP-inhibitory activity of compounds. An online web server for iCYP-MFE with a user-friendly interface was also deployed to support scientific communities in identifying CYP inhibitors.
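The two evaluation metrics named in this abstract, AUC-ROC and AUC-PR, can be sketched in plain numpy. This is an illustration of the metrics only, not the iCYP-MFE pipeline; the labels and scores below are invented stand-ins for model output.

```python
import numpy as np

def auc_roc(y_true, y_score):
    # Mann-Whitney formulation: probability that a random positive
    # (inhibitor) is scored above a random negative, with ties given
    # half credit.
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / (len(pos) * len(neg))

def auc_pr(y_true, y_score):
    # Average precision: mean of precision@k taken at the rank of each
    # positive, a standard summary of the precision-recall curve.
    order = np.argsort(-y_score)
    hits = y_true[order]
    precision_at_k = np.cumsum(hits) / np.arange(1, len(hits) + 1)
    return precision_at_k[hits == 1].mean()

rng = np.random.default_rng(42)
# Hypothetical labels (1 = CYP2D6 inhibitor) and predicted scores for 200
# screening compounds; real scores would come from a trained model.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(0.7 * y_true + rng.normal(0.15, 0.2, size=200), 0.0, 1.0)

print(f"AUC-ROC = {auc_roc(y_true, y_score):.2f}, AUC-PR = {auc_pr(y_true, y_score):.2f}")
```

AUC-ROC summarizes ranking quality over all thresholds, while AUC-PR focuses on the positive class, which is why both are commonly reported for imbalanced screening tasks.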
Collapse
Affiliation(s)
- Thanh-Hoang Nguyen-Vo
- School of Mathematics and Statistics, Victoria University of Wellington, Kelburn Parade, Wellington 6140, New Zealand
| | - Quang H Trinh
- Computational Biology Center, International University-VNU HCMC, Ho Chi Minh City 700000, Vietnam
| | - Loc Nguyen
- Computational Biology Center, International University-VNU HCMC, Ho Chi Minh City 700000, Vietnam
| | - Phuong-Uyen Nguyen-Hoang
- Computational Biology Center, International University-VNU HCMC, Ho Chi Minh City 700000, Vietnam
| | - Thien-Ngan Nguyen
- Computational Biology Center, International University-VNU HCMC, Ho Chi Minh City 700000, Vietnam
| | - Dung T Nguyen
- School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi 100000, Vietnam
| | - Binh P Nguyen
- School of Mathematics and Statistics, Victoria University of Wellington, Kelburn Parade, Wellington 6140, New Zealand
| | - Ly Le
- Computational Biology Center, International University-VNU HCMC, Ho Chi Minh City 700000, Vietnam; Vingroup Big Data Institute, Ha Noi 100000, Vietnam
| |
Collapse
|
242
|
Balluet M, Sizaire F, El Habouz Y, Walter T, Pont J, Giroux B, Bouchareb O, Tramier M, Pecreaux J. Neural network fast-classifies biological images through features selecting to power automated microscopy. J Microsc 2021; 285:3-19. [PMID: 34623634 DOI: 10.1111/jmi.13062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Accepted: 09/28/2021] [Indexed: 11/26/2022]
Abstract
Artificial intelligence is nowadays used for cell detection and classification in optical microscopy during post-acquisition analysis. Microscopes are now fully automated and are next expected to be smart, making acquisition decisions based on the images, which calls for analysing them on the fly. Biology further imposes training on a reduced data set, owing to the cost and time of preparing the samples and having the data sets annotated by experts. We propose a real-time image processing pipeline compliant with these specifications, balancing accurate detection against execution performance. We characterised the images using a generic, high-dimensional feature extractor. We then classified the images using machine learning to understand each feature's contribution to the decision and to execution time. We found that the non-linear random-forest classifier outperformed Fisher's linear discriminant. More importantly, the most discriminant and time-consuming features could be excluded without significant accuracy loss, offering a substantial gain in execution time. This suggests a feature-group redundancy likely related to the biology of the observed cells. We offer a method to select fast and discriminant features. In our assay, a 79.6 ± 2.4% accurate classification of a cell took 68.7 ± 3.5 ms (mean ± SD, 5-fold cross-validation nested in 10 bootstrap repeats), corresponding to 14 cells per second, dispatched into eight phases of the cell cycle, using 12 feature groups and operating on a consumer-market ARM-based embedded system. A simple neural network offered similar performance, paving the way to faster training and classification using parallel execution on a general-purpose graphics processing unit. Finally, this strategy is also usable for deep neural networks, paving the way to optimizing these algorithms for smart microscopy.
Collapse
Affiliation(s)
- Maël Balluet
- CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; Inscoper SAS, Cesson-Sévigné, France
| | - Florian Sizaire
- CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; Present address: Biologics Research, Sanofi R&D, Vitry-sur-Seine, France
| | | | - Thomas Walter
- Centre for Computational Biology (CBIO), MINES ParisTech, PSL University, Paris, France; Institut Curie, Paris, France; INSERM, U900, Paris, France
| | | | | | | | - Marc Tramier
- CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; Univ Rennes, BIOSIT, UMS CNRS 3480, US INSERM 018, Rennes, France
| | | |
Collapse
|
243
|
Horry M, Chakraborty S, Pradhan B, Paul M, Gomes D, Ul-Haq A, Alamri A. Deep Mining Generation of Lung Cancer Malignancy Models from Chest X-ray Images. SENSORS 2021; 21:s21196655. [PMID: 34640976 PMCID: PMC8513105 DOI: 10.3390/s21196655] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 09/28/2021] [Accepted: 10/05/2021] [Indexed: 12/19/2022]
Abstract
Lung cancer is the leading cause of cancer death and morbidity worldwide. Many studies have shown machine learning models to be effective in detecting lung nodules from chest X-ray images. However, these techniques have yet to be embraced by the medical community due to several practical, ethical, and regulatory constraints stemming from the “black-box” nature of deep learning models. Additionally, most lung nodules visible on chest X-rays are benign; therefore, the narrow task of computer vision-based lung nodule detection cannot be equated to automated lung cancer detection. Addressing both concerns, this study introduces a novel hybrid deep learning and decision tree-based computer vision model, which presents lung cancer malignancy predictions as interpretable decision trees. The deep learning component of this process is trained using a large publicly available dataset of pathological biomarkers associated with lung cancer. These models are then used to infer biomarker scores for chest X-ray images from two independent datasets for which malignancy metadata is available. Next, multivariate predictive models were mined by fitting shallow decision trees to the malignancy-stratified datasets and interrogating a range of metrics to determine the best model. The best decision tree model achieved sensitivity and specificity of 86.7% and 80.0%, respectively, with a positive predictive value of 92.9%. Decision trees mined using this method may be considered a starting point for refinement into clinically useful multivariate lung cancer malignancy models, implemented as a workflow augmentation tool to improve the efficiency of human radiologists.
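The three screening metrics quoted above (sensitivity, specificity, positive predictive value) all derive from confusion-matrix counts; a minimal sketch with hypothetical toy counts, not the study's data:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of malignant cases caught
    specificity = tn / (tn + fp)   # fraction of benign cases cleared
    ppv = tp / (tp + fp)           # precision of a positive call
    return sensitivity, specificity, ppv

# Toy counts only: 13 true positives, 2 false negatives, 8 true
# negatives, 1 false positive.
print(diagnostic_metrics(tp=13, fn=2, tn=8, fp=1))
```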
Collapse
Affiliation(s)
- Michael Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia;
- IBM Australia Ltd., Sydney, NSW 2000, Australia
| | - Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia;
- Correspondence: (S.C.); (B.P.)
| | - Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia;
- Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia
- Correspondence: (S.C.); (B.P.)
| | - Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; (M.P.); (D.G.); (A.U.-H.)
| | - Douglas Gomes
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; (M.P.); (D.G.); (A.U.-H.)
| | - Anwaar Ul-Haq
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia; (M.P.); (D.G.); (A.U.-H.)
| | - Abdullah Alamri
- Department of Geology and Geophysics, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia;
| |
Collapse
|
244
|
Hallou A, Yevick HG, Dumitrascu B, Uhlmann V. Deep learning for bioimage analysis in developmental biology. Development 2021; 148:dev199616. [PMID: 34490888 PMCID: PMC8451066 DOI: 10.1242/dev.199616] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Deep learning has transformed the way large and complex image datasets can be processed, reshaping what is possible in bioimage analysis. As the complexity and size of bioimage data continue to grow, this new analysis paradigm is becoming increasingly ubiquitous. In this Review, we begin by introducing the concepts needed for beginners to understand deep learning. We then review how deep learning has impacted bioimage analysis and explore the open-source resources available to integrate it into a research project. Finally, we discuss the future of deep learning applied to cell and developmental biology. We analyze how state-of-the-art methodologies have the potential to transform our understanding of biological systems through new image-based analysis and modelling that integrate multimodal inputs in space and time.
Collapse
Affiliation(s)
- Adrien Hallou
- Cavendish Laboratory, Department of Physics, University of Cambridge, Cambridge, CB3 0HE, UK
- Wellcome Trust/Cancer Research UK Gurdon Institute, University of Cambridge, Cambridge, CB2 1QN, UK
- Wellcome Trust/Medical Research Council Stem Cell Institute, University of Cambridge, Cambridge, CB2 1QR, UK
| | - Hannah G. Yevick
- Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, 02142, USA
| | - Bianca Dumitrascu
- Computer Laboratory, University of Cambridge, Cambridge, CB3 0FD, UK
| | - Virginie Uhlmann
- European Bioinformatics Institute, European Molecular Biology Laboratory, Cambridge, CB10 1SD, UK
| |
Collapse
|
245
|
Krishnamurthy S, Srinivasan K, Qaisar SM, Vincent PMDR, Chang CY. Evaluating Deep Neural Network Architectures with Transfer Learning for Pneumonitis Diagnosis. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:8036304. [PMID: 34552660 PMCID: PMC8452401 DOI: 10.1155/2021/8036304] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 08/30/2021] [Indexed: 12/11/2022]
Abstract
Pneumonitis is an infectious disease that causes inflammation of the air sacs. It can be life-threatening to the very young and elderly. Detection of pneumonitis from X-ray images is a significant challenge, and early detection and assistance with diagnosis can be crucial. Recent developments in deep learning have significantly improved performance in medical image analysis. The superior predictive performance of deep learning methods makes them ideal for pneumonitis classification from chest X-ray images. However, training deep learning models can be cumbersome and resource-intensive. Reusing the knowledge representations of public models trained on large-scale datasets through transfer learning can help alleviate these challenges. In this paper, we compare various image classification models based on transfer learning with well-known deep learning architectures. The Kaggle chest X-ray dataset was used to evaluate and compare our models. We apply basic data augmentation and fine-tune our feed-forward classification head on models pretrained on the ImageNet dataset. We observed that the DenseNet201 model outperforms the other models, with an AUROC score of 0.966 and a recall score of 0.99. We also visualize the class activation maps from the DenseNet201 model to interpret the patterns the model recognizes for prediction.
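The transfer-learning recipe described here, freeze a pretrained feature extractor and train only a classification head, can be illustrated with a stdlib-only sketch. The fixed matrix below is a hypothetical stand-in for a frozen backbone (in the study, ImageNet-pretrained DenseNet201 plays this role), and the two-sample data is a toy placeholder:

```python
import math

# Hypothetical stand-in for a frozen pretrained backbone: fixed weights
# followed by a ReLU. These are never updated during training.
W_FROZEN = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1]]

def features(x):
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W_FROZEN]

# Trainable classification head: logistic regression over frozen features.
w_head, b_head = [0.0] * 4, 0.0

def predict(x):
    z = sum(w * f for w, f in zip(w_head, features(x))) + b_head
    return 1 / (1 + math.exp(-z))

# Toy two-class data; only the head's parameters are updated.
data = [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0)]
for _ in range(500):
    for x, y in data:
        p, f = predict(x), features(x)
        w_head = [w - 0.1 * (p - y) * fi for w, fi in zip(w_head, f)]
        b_head -= 0.1 * (p - y)

print([round(predict(x)) for x, _ in data])  # -> [1, 0]
```

Because the backbone stays frozen, only a handful of head parameters are fit, which is what makes transfer learning tractable on small medical datasets.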
Collapse
Affiliation(s)
| | - Kathiravan Srinivasan
- School of Computer Science and Engineering, Vellore Institute of Technology (VIT), Vellore, India
| | - Saeed Mian Qaisar
- Electrical and Computer Engineering Department, Effat University, Jeddah, Saudi Arabia
| | - P. M. Durai Raj Vincent
- School of Information Technology and Engineering, Vellore Institute of Technology (VIT), Vellore, India
| | - Chuan-Yu Chang
- Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
| |
Collapse
|
246
|
Geometric Regularization of Local Activations for Knowledge Transfer in Convolutional Neural Networks. INFORMATION 2021. [DOI: 10.3390/info12080333] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
In this work, we propose a mechanism for knowledge transfer between Convolutional Neural Networks via the geometric regularization of local features produced by the activations of convolutional layers. We formulate appropriate loss functions that drive a “student” model to adapt such that its local features exhibit geometrical characteristics similar to those of an “instructor” model at corresponding layers. The investigated functions, inspired by manifold-to-manifold distance measures, are designed to compare the neighboring information inside the feature space of the involved activations without any restrictions on the features’ dimensionality, thus enabling knowledge transfer between different architectures. Experimental evidence demonstrates that the proposed technique is effective in different settings, including knowledge transfer to smaller models, transfer between different deep architectures, and harnessing knowledge from external data, producing models with increased accuracy compared to typical training. Furthermore, results indicate that the presented method can work synergistically with methods such as knowledge distillation, further increasing the accuracy of the trained models. Finally, experiments on training with limited data show that a combined regularization scheme can achieve the same generalization as non-regularized training with 50% of the data on the CIFAR-10 classification task.
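One simple way to compare neighboring information across feature spaces of different dimensionality, in the spirit of the manifold-inspired losses above though not the paper's exact formulation, is to match normalized pairwise-distance matrices of student and teacher features:

```python
import math

def pairwise_dists(feats):
    """Normalized pairwise Euclidean distances between local features."""
    n = len(feats)
    d = [[math.dist(feats[i], feats[j]) for j in range(n)] for i in range(n)]
    m = max(max(row) for row in d) or 1.0
    return [[v / m for v in row] for row in d]

def geometry_loss(student, teacher):
    """Mismatch between the two feature spaces' neighbor structure.
    Works even when student and teacher dimensionalities differ."""
    ds, dt = pairwise_dists(student), pairwise_dists(teacher)
    return sum((a - b) ** 2 for rs, rt in zip(ds, dt) for a, b in zip(rs, rt))

teacher = [[0, 0, 0, 0], [1, 0, 0, 0], [10, 0, 0, 0]]   # 4-D features
student_good = [[0, 0], [0.1, 0], [1.0, 0]]             # same geometry, 2-D
student_bad = [[0, 0], [1, 0], [1.1, 0]]                # different geometry
print(geometry_loss(student_good, teacher), geometry_loss(student_bad, teacher))
```

The loss is (near) zero when the student's local features preserve the teacher's relative distances, and grows as the neighborhood structure diverges, regardless of feature dimensionality.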
Collapse
|
247
|
Barros B, Lacerda P, Albuquerque C, Conci A. Pulmonary COVID-19: Learning Spatiotemporal Features Combining CNN and LSTM Networks for Lung Ultrasound Video Classification. SENSORS (BASEL, SWITZERLAND) 2021; 21:5486. [PMID: 34450928 PMCID: PMC8401701 DOI: 10.3390/s21165486] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 08/04/2021] [Accepted: 08/05/2021] [Indexed: 12/18/2022]
Abstract
Deep Learning is a very active and important area for building Computer-Aided Diagnosis (CAD) applications. This work presents a hybrid model to classify lung ultrasound (LUS) videos captured by convex transducers to diagnose COVID-19. A Convolutional Neural Network (CNN) performed the extraction of spatial features, and the temporal dependence was learned using a Long Short-Term Memory (LSTM) network. Different types of convolutional architectures were used for feature extraction. The hybrid model (CNN-LSTM) hyperparameters were optimized using the Optuna framework. The best hybrid model was composed of an Xception network pre-trained on ImageNet and an LSTM containing 512 units, configured with a dropout rate of 0.4, two fully connected layers containing 1024 neurons each, and a sequence of 20 frames in the input layer (20×2018). The model presented an average accuracy of 93% and a sensitivity of 97% for COVID-19, outperforming models based purely on spatial approaches. Furthermore, feature extraction using transfer learning with models pre-trained on ImageNet provided results comparable to models pre-trained on LUS images. The results corroborate other studies showing that this model for LUS classification can be an important tool in the fight against COVID-19 and other lung diseases.
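The CNN-then-LSTM pipeline shape, per-frame spatial features fed sequentially into a recurrent cell, can be sketched at toy scale. The single-unit LSTM step below is the standard gated recurrence, while the mean-intensity "CNN" and the weights are deliberately trivial stand-ins, nothing like the paper's Xception-plus-512-unit LSTM:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM step on a scalar feature (single hidden unit)."""
    i = sigmoid(W["wi"] * x + W["ui"] * h + W["bi"])    # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h + W["bf"])    # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h + W["bo"])    # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h + W["bg"])  # candidate cell
    c = f * c + i * g
    return o * math.tanh(c), c

def cnn_feature(frame):
    """Toy per-frame 'CNN': mean pixel intensity (stand-in only)."""
    return sum(frame) / len(frame)

W = dict(wi=1.0, ui=0.5, bi=0.0, wf=1.0, uf=0.5, bf=1.0,
         wo=1.0, uo=0.5, bo=0.0, wg=1.0, ug=0.5, bg=0.0)
h = c = 0.0
video = [[0.1, 0.2], [0.8, 0.9], [0.7, 0.6]]  # 3 frames, 2 "pixels" each
for frame in video:            # temporal pass over the spatial features
    h, c = lstm_step(cnn_feature(frame), h, c, W)
print(0.0 < h < 1.0)  # final hidden state would feed a dense classifier head
```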
Collapse
Affiliation(s)
- Bruno Barros
- Institute of Computing, Campus Praia Vermelha, Fluminense Federal University, Niterói 24.210-346, Brazil; (P.L.); (C.A.); (A.C.)
| | | | | | | |
Collapse
|
248
|
A benchmark for neural network robustness in skin cancer classification. Eur J Cancer 2021; 155:191-199. [PMID: 34388516 DOI: 10.1016/j.ejca.2021.06.047] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 06/18/2021] [Accepted: 06/29/2021] [Indexed: 02/06/2023]
Abstract
BACKGROUND One prominent application for deep learning-based classifiers is skin cancer classification on dermoscopic images. However, classifier evaluation is often limited to holdout data, which can mask common shortcomings such as susceptibility to confounding factors. To increase clinical applicability, it is necessary to thoroughly evaluate such classifiers on out-of-distribution (OOD) data. OBJECTIVE The objective of the study was to establish a dermoscopic skin cancer benchmark in which classifier robustness to OOD data can be measured. METHODS Using a proprietary dermoscopic image database and a set of image transformations, we create an OOD robustness benchmark and evaluate the robustness of four different convolutional neural network (CNN) architectures on it. RESULTS The benchmark contains three data sets, Skin Archive Munich (SAM), SAM-corrupted (SAM-C) and SAM-perturbed (SAM-P), and is publicly available for download. To maintain the benchmark's OOD status, ground truth labels are not provided and test results should be sent to us for assessment. The SAM data set contains 319 unmodified and biopsy-verified dermoscopic melanoma (n = 194) and nevus (n = 125) images. SAM-C and SAM-P contain images from SAM which were artificially modified to test a classifier against low-quality inputs and to measure its prediction stability over small image changes, respectively. All four CNNs showed susceptibility to corruptions and perturbations. CONCLUSIONS This benchmark provides three data sets which allow for OOD testing of binary skin cancer classifiers. Our classifier performance confirms the shortcomings of CNNs and provides a frame of reference. Altogether, this benchmark should facilitate a more thorough evaluation process and thereby enable the development of more robust skin cancer classifiers.
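Prediction stability under small input changes, the property SAM-P probes, can be quantified as a flip rate: the fraction of perturbed copies of an input whose predicted class changes. A stdlib sketch with a hypothetical threshold classifier standing in for a CNN:

```python
import random

random.seed(1)

def classify(x):
    """Toy binary classifier standing in for a CNN (hypothetical)."""
    return 1 if sum(x) > 0 else 0

def flip_rate(inputs, noise, trials=100):
    """Fraction of perturbed copies whose prediction flips: a simple
    prediction-stability score in the spirit of the SAM-P evaluation."""
    flips = 0
    for x in inputs:
        base = classify(x)
        for _ in range(trials):
            noisy = [v + random.gauss(0, noise) for v in x]
            flips += classify(noisy) != base
    return flips / (len(inputs) * trials)

safe = [[5.0, 5.0], [-5.0, -5.0]]      # far from the decision boundary
fragile = [[0.01, 0.0], [-0.01, 0.0]]  # near the boundary: unstable
print(flip_rate(safe, 0.5), flip_rate(fragile, 0.5))
```

Inputs near the decision boundary flip almost half the time under the same noise that leaves confident predictions untouched, which is why perturbation benchmarks expose fragility that holdout accuracy hides.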
Collapse
|
249
|
Acar E, Şahin E, Yılmaz İ. Improving effectiveness of different deep learning-based models for detecting COVID-19 from computed tomography (CT) images. Neural Comput Appl 2021; 33:17589-17609. [PMID: 34345118 PMCID: PMC8321007 DOI: 10.1007/s00521-021-06344-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2020] [Accepted: 07/18/2021] [Indexed: 12/12/2022]
Abstract
COVID-19 has caused a pandemic crisis that threatens the world in many areas, especially in public health. Computed tomography plays a prognostic role in the early diagnosis of COVID-19, as it provides both rapid and accurate results. This is crucial to assist clinicians in making decisions for rapid isolation and appropriate patient treatment. Accordingly, many researchers have shown encouraging accuracy in detecting COVID-19 patients from chest CT images using various deep learning systems. Deep learning networks such as convolutional neural networks (CNNs) require substantial training data, and one of the biggest problems for researchers is accessing a sufficient amount of it. In this work, we combine methods such as segmentation, data augmentation and generative adversarial networks (GANs) to increase the effectiveness of deep learning models. We propose a method that generates synthetic chest CT images using the GAN method from a limited number of CT images. We test the performance of the experiments (with and without the GAN) on internal and external datasets. When the CNN is trained on real and synthetic images, a slight increase in accuracy and other results is observed on the internal dataset, but increases of between 3% and 9% on the external dataset. The performance results suggest that the proposed method will accelerate the detection of COVID-19 and lead to more robust systems.
Collapse
Affiliation(s)
- Erdi Acar
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| | - Engin Şahin
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| | - İhsan Yılmaz
- Department of Computer Engineering, Çanakkale Onsekiz Mart University, 17100 Çanakkale, Turkey
| |
Collapse
|
250
|
Ravi V, Narasimhan H, Chakraborty C, Pham TD. Deep learning-based meta-classifier approach for COVID-19 classification using CT scan and chest X-ray images. MULTIMEDIA SYSTEMS 2021; 28:1401-1415. [PMID: 34248292 PMCID: PMC8258271 DOI: 10.1007/s00530-021-00826-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 06/01/2021] [Indexed: 05/27/2023]
Abstract
A literature survey shows that convolutional neural network (CNN)-based pretrained models have been widely used for CoronaVirus Disease 2019 (COVID-19) classification using chest X-ray (CXR) and computed tomography (CT) datasets. However, most of these methods have used a small number of data samples from the CT and CXR datasets for training, validation, and testing. As a result, a model might show good performance during testing but may not be effective on unseen COVID-19 data samples. Generalization is an important consideration when designing a classifier that must perform well on completely unseen datasets. Here, this work proposes large-scale learning with a stacked ensemble meta-classifier and a deep learning-based feature fusion approach for COVID-19 classification. Features from the penultimate layer (global average pooling) of EfficientNet-based pretrained models were extracted, and the dimensionality of the extracted features was reduced using kernel principal component analysis (PCA). Next, a feature fusion approach was employed to merge the various extracted features. Finally, a stacked ensemble meta-classifier was used for classification in a two-stage approach: in the first stage, random forest and support vector machine (SVM) classifiers make predictions, which are then aggregated and fed into the second stage; the second stage is a logistic regression classifier that classifies each CT or CXR sample as either COVID-19 or non-COVID-19. The proposed model was tested using large, publicly available CT and CXR datasets, and its performance was compared with various existing CNN-based pretrained models. The proposed model outperformed the existing methods and can be used as a tool for point-of-care diagnosis by healthcare professionals.
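The two-stage stacking idea described here, base models whose outputs become features for a logistic-regression meta-classifier, can be shown at toy scale. The threshold functions below are hypothetical stand-ins for the random forest and SVM, and the four-sample data is a placeholder:

```python
import math

# Hypothetical stage-1 base models (stand-ins for RF and SVM scores).
def base_a(x):
    return 1.0 if x[0] > 0.5 else 0.0

def base_b(x):
    return 1.0 if x[1] > 0.5 else 0.0

def stack(x):
    """Aggregate stage-1 outputs into a meta-feature vector."""
    return [base_a(x), base_b(x)]

# Stage 2: logistic-regression meta-classifier over the stacked outputs.
w, b = [0.0, 0.0], 0.0
data = [([0.9, 0.9], 1), ([0.9, 0.1], 1), ([0.1, 0.9], 0), ([0.1, 0.1], 0)]
for _ in range(2000):
    for x, y in data:
        s = stack(x)
        p = 1 / (1 + math.exp(-(w[0] * s[0] + w[1] * s[1] + b)))
        w = [wi - 0.1 * (p - y) * si for wi, si in zip(w, s)]
        b -= 0.1 * (p - y)

preds = []
for x, _ in data:
    s = stack(x)
    preds.append(round(1 / (1 + math.exp(-(w[0] * s[0] + w[1] * s[1] + b)))))
print(preds)  # -> [1, 1, 0, 0]
```

Since the labels here track only the first base model, the meta-classifier learns to weight it heavily and discount the other, which is the point of stacking: letting a second-stage learner decide how much to trust each base model.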
Collapse
Affiliation(s)
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
| | - Harini Narasimhan
- Smart Materials Structures and Systems Lab, Indian Institute of Technology, Kanpur, India
| | - Chinmay Chakraborty
- Department of Electronics and Communication Engineering, Birla Institute of Technology, Ranchi, Jharkhand India
| | - Tuan D. Pham
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
| |
Collapse
|