1. Solak M, Tören M, Asan B, Kaba E, Beyazal M, Çeliker FB. Generative Adversarial Network Based Contrast Enhancement: Synthetic Contrast Brain Magnetic Resonance Imaging. Acad Radiol 2024:S1076-6332(24)00865-1. PMID: 39694785. DOI: 10.1016/j.acra.2024.11.021.
Abstract
RATIONALE AND OBJECTIVES: Magnetic resonance imaging (MRI) is a vital tool for diagnosing neurological disorders, frequently utilising gadolinium-based contrast agents (GBCAs) to enhance resolution and specificity. However, GBCAs present certain risks, including side effects, increased costs, and repeated exposure. This study proposes an approach using generative adversarial networks (GANs) for virtual contrast enhancement in brain MRI, with the aim of reducing or eliminating GBCAs, minimising associated risks, and enhancing imaging efficiency while preserving diagnostic quality.
MATERIALS AND METHODS: In this study, 10,235 images were acquired on a 3.0 Tesla MRI scanner from 81 participants (54 females, 27 males; mean age 35 years, range 19-68 years). T1-weighted and contrast-enhanced images were obtained following the administration of a standard dose of a GBCA. To generate synthetic contrast-enhanced T1-weighted images, a CycleGAN model, a variant of the GAN architecture, was trained on pre- and post-contrast images. The dataset was divided into three subsets: 80% for training, 10% for validation, and 10% for testing. TensorBoard was employed to monitor for image deterioration throughout the training phase, and the image processing and training procedures were optimised. The radiologists were presented with a non-contrast input image and asked to choose between the real contrast-enhanced image and the synthetic MR image generated by CycleGAN for that non-contrast image (visual Turing test).
RESULTS: The performance of the CycleGAN model was evaluated using a combination of quantitative and qualitative analyses. On the test set for the entire dataset, the mean square error (MSE) was 0.0038 and the structural similarity index (SSIM) was 0.58. Among the submodels, the most successful achieved an MSE of 0.0053 and an SSIM of 0.8. The qualitative evaluation was validated through a visual Turing test conducted by four radiologists with varying levels of clinical experience.
CONCLUSION: The findings of this study support the efficacy of the CycleGAN model in generating synthetic contrast-enhanced T1-weighted brain MR images. Both quantitative and qualitative evaluations demonstrated excellent performance, confirming the model's ability to produce realistic synthetic images. This method shows promise in potentially eliminating the need for intravenous contrast agents, thereby minimising the associated risks of their use.
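
As a rough illustration of the kind of model described in this entry, the sketch below shows the core generator update of a CycleGAN that maps non-contrast slices to synthetic contrast-enhanced slices (adversarial loss plus cycle-consistency loss). The toy layer sizes, loss weights and random tensors are assumptions for demonstration only, not the study's configuration.

```python
# Minimal CycleGAN generator update for non-contrast -> synthetic contrast MRI.
# Toy layer sizes, loss weights and random inputs; not the study's configuration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder; real implementations use ResNet or U-Net blocks."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh())
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic on the contrast-enhanced domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G_pre2post, G_post2pre, D_post = Generator(), Generator(), Discriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(
    list(G_pre2post.parameters()) + list(G_post2pre.parameters()), lr=2e-4)

def generator_step(pre, lambda_cyc=10.0):
    """One generator update: fool D_post while keeping the cycle consistent."""
    fake_post = G_pre2post(pre)              # synthetic contrast-enhanced slice
    rec_pre = G_post2pre(fake_post)          # map back to the non-contrast domain
    pred = D_post(fake_post)
    loss = adv_loss(pred, torch.ones_like(pred)) + lambda_cyc * cyc_loss(rec_pre, pre)
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return float(loss)

pre = torch.randn(4, 1, 128, 128)            # stand-in for a batch of T1-weighted slices
print(generator_step(pre))
```
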
Affiliation(s)
- Merve Solak: Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey
- Murat Tören: Recep Tayyip Erdogan University, Department of Electrical and Electronics Engineering, Rize, Turkey
- Berkutay Asan: Recep Tayyip Erdogan University, Department of Electrical and Electronics Engineering, Rize, Turkey
- Esat Kaba: Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey
- Mehmet Beyazal: Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey
- Fatma Beyazal Çeliker: Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey

2. Capponi S, Wang S. AI in cellular engineering and reprogramming. Biophys J 2024; 123:2658-2670. PMID: 38576162. PMCID: PMC11393708. DOI: 10.1016/j.bpj.2024.04.001.
Abstract
During the last decade, artificial intelligence (AI) has increasingly been applied in biophysics and related fields, including cellular engineering and reprogramming, offering novel approaches to understand, manipulate, and control cellular function. The potential of AI lies in its ability to analyze complex datasets and generate predictive models. AI algorithms can process large amounts of data from single-cell genomics and multiomic technologies, allowing researchers to gain mechanistic insights into the control of cell identity and function. By integrating and interpreting these complex datasets, AI can help identify key molecular events and regulatory pathways involved in cellular reprogramming. This knowledge can inform the design of precision engineering strategies, such as the development of new transcription factor and signaling molecule cocktails, to manipulate cell identity and drive authentic cell fate across lineage boundaries. Furthermore, when used in combination with computational methods, AI can accelerate and improve the analysis and understanding of the intricate relationships between genes, proteins, and cellular processes. In this review article, we explore the current state of AI applications in biophysics with a specific focus on cellular engineering and reprogramming. Then, we showcase a couple of recent applications where we combined machine learning with experimental and computational techniques. Finally, we briefly discuss the challenges and prospects of AI in cellular engineering and reprogramming, emphasizing the potential of these technologies to revolutionize our ability to engineer cells for a variety of applications, from disease modeling and drug discovery to regenerative medicine and biomanufacturing.
Affiliation(s)
- Sara Capponi: IBM Almaden Research Center, San Jose, California; Center for Cellular Construction, San Francisco, California
- Shangying Wang: Bay Area Institute of Science, Altos Labs, Redwood City, California

3. Rollan-Martinez-Herrera M, Díaz AA, Estépar RSJ, Sanchez-Ferrero GV, Ross JC, Estépar RSJ, Nardelli P. CNNs trained with adult data are useful in pediatrics. A pneumonia classification example. PLoS One 2024; 19:e0306703. PMID: 39052572. PMCID: PMC11271847. DOI: 10.1371/journal.pone.0306703.
Abstract
BACKGROUND AND OBJECTIVES: The scarcity of data for training deep learning models in pediatrics has prompted questions about the feasibility of employing CNNs trained with adult images for pediatric populations. In this work, a pneumonia classification CNN was used as an exploratory example to showcase the adaptability and efficacy of such models in pediatric healthcare settings despite the inherent data constraints.
METHODS: To develop a curated training dataset with reduced biases, 46,947 chest X-ray images from various adult datasets were meticulously selected. Two preprocessing approaches were tried to assess the impact of thoracic segmentation on model attention outside the thoracic area. Evaluation of our approach was carried out on a dataset containing 5,856 chest X-rays of children from 1 to 5 years old.
RESULTS: An analysis of attention maps indicated that networks trained with thorax segmentation placed less attention on regions outside the thorax, thus eliminating potential bias. The ensuing network exhibited impressive performance when evaluated on an adult dataset, achieving a pneumonia discrimination AUC of 0.95. When tested on a pediatric dataset, the pneumonia discrimination AUC reached 0.82.
CONCLUSIONS: The results of this study show that adult-trained CNNs can be effectively applied to pediatric populations. This could potentially shift the focus towards validating adult models on pediatric populations instead of training new CNNs with limited pediatric data. To ensure the generalizability of deep learning models, it is important to implement techniques aimed at minimizing biases, such as image segmentation or low-quality image exclusion.
Affiliation(s)
- Maria Rollan-Martinez-Herrera: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- Alejandro A. Díaz: Division of Pulmonary and Critical Care Medicine, Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- Rubén San José Estépar: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- Gonzalo Vegas Sanchez-Ferrero: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- James C. Ross: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- Raúl San José Estépar: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America
- Pietro Nardelli: Department of Radiology, Applied Chest Imaging Laboratory, Harvard Medical School, Brigham and Women’s Hospital, Boston, Massachusetts, United States of America

4. Tang Z, Wong HS, Yu Z. Privacy-Preserving Federated Learning With Domain Adaptation for Multi-Disease Ocular Disease Recognition. IEEE J Biomed Health Inform 2024; 28:3219-3227. PMID: 37590112. DOI: 10.1109/jbhi.2023.3305685.
Abstract
As one of the effective ways of ocular disease recognition, early fundus screening can help patients avoid irreversible blindness. Although deep learning is powerful for image-based ocular disease recognition, its performance depends largely on a large amount of labeled data. For ocular disease, data collection and annotation at a single site usually take a long time. If multi-site data are used, two main issues arise: 1) data privacy is easily leaked; 2) the domain gap among sites degrades recognition performance. Motivated by this, a Gaussian randomized mechanism is first adopted at the local sites, whose models are then aggregated into a global model, preserving the data privacy of local sites and models. Second, to bridge the domain gap among different sites, a two-step domain adaptation method is introduced, which consists of a domain confusion module and a multi-expert learning strategy. On this basis, a privacy-preserving federated learning framework with domain adaptation is constructed. In the experiments, a multi-disease early fundus screening dataset is used across four experimental settings, together with a detailed ablation study, to show the stepwise performance gains and verify the efficiency of the proposed framework.
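
The privacy mechanism summarised above perturbs locally trained model updates with Gaussian noise before they are aggregated into a global model. Below is a minimal sketch of that idea; the clipping bound, noise multiplier and plain averaging are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: Gaussian-mechanism-style noising of local model updates before
# federated averaging. Clipping bound and noise multiplier are illustrative.
import numpy as np

def noisy_local_update(local_weights, global_weights, clip=1.0, noise_mult=0.5, rng=None):
    """Return a privatized update: clip the delta, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    delta = local_weights - global_weights
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip / (norm + 1e-12))        # bound each site's influence
    return delta + rng.normal(0.0, noise_mult * clip, size=delta.shape)

def federated_round(global_weights, site_weights_list):
    """Aggregate noisy updates from all sites into the new global model."""
    updates = [noisy_local_update(w, global_weights) for w in site_weights_list]
    return global_weights + np.mean(updates, axis=0)

# Toy example: three sites with slightly different local models.
g = np.zeros(10)
sites = [g + np.random.randn(10) * 0.1 for _ in range(3)]
print(federated_round(g, sites))
```
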

5. Sadr S, Rokhshad R, Daghighi Y, Golkar M, Tolooie Kheybari F, Gorjinejad F, Mataji Kojori A, Rahimirad P, Shobeiri P, Mahdian M, Mohammad-Rahimi H. Deep learning for tooth identification and numbering on dental radiography: a systematic review and meta-analysis. Dentomaxillofac Radiol 2024; 53:5-21. PMID: 38183164. PMCID: PMC11003608. DOI: 10.1093/dmfr/twad001.
Abstract
OBJECTIVES: Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification.
METHODS: An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering of human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used to generate plots for the meta-analysis, and pooled diagnostic odds ratios (DORs) were calculated.
RESULTS: The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98% and F1-scores ranged from 87% to 98%. Sensitivity was 75.5%-98% and specificity was 79.9%-99%. Only 6 studies found the deep learning model to be less than 90% accurate. The average DOR of the pooled data set was 1612, the sensitivity was 89%, the specificity was 99%, and the area under the curve was 96%.
CONCLUSION: Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.
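
For readers unfamiliar with the pooled metric, the diagnostic odds ratio (DOR) reported in this meta-analysis combines sensitivity and specificity into a single figure; a standard definition (not taken from the paper itself) is:

```latex
% Diagnostic odds ratio from one study's 2x2 table (TP, FP, FN, TN are cell counts).
\[
\mathrm{DOR} = \frac{TP \cdot TN}{FP \cdot FN}
             = \frac{\mathrm{sensitivity}/(1-\mathrm{sensitivity})}{(1-\mathrm{specificity})/\mathrm{specificity}}
\]
```

Pooled DORs are usually obtained by combining per-study DORs (for example with a random-effects model), so the pooled value need not equal the DOR implied by the pooled sensitivity and specificity.
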
Affiliation(s)
- Soroush Sadr: Department of Endodontics, School of Dentistry, Hamadan University of Medical Sciences, Hamadan 6517838636, Iran
- Rata Rokhshad: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany; Section of Endocrinology, Nutrition, and Diabetes, Department of Medicine, Boston University Medical Center, Boston, MA 02118, United States
- Yasaman Daghighi: School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 1983963113, Iran
- Mohsen Golkar: Department of Oral and Maxillofacial Surgery, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 4188794755, Iran
- Fateme Tolooie Kheybari: Faculty of Dentistry, Tabriz Medical Sciences, Islamic Azad University, Tabriz 5166/15731, Iran
- Fatemeh Gorjinejad: Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Atousa Mataji Kojori: Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Parisa Rahimirad: Student Research Committee, School of Dentistry, Guilan University of Medical Sciences, Rasht 4188794755, Iran
- Parnian Shobeiri: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Mina Mahdian: Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, New York, NY 11794, United States
- Hossein Mohammad-Rahimi: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany

6. Cui Z, Wu Y, Zhang QH, Wang SG, He Y, Huang DS. MV-CVIB: a microbiome-based multi-view convolutional variational information bottleneck for predicting metastatic colorectal cancer. Front Microbiol 2023; 14:1238199. PMID: 37675425. PMCID: PMC10477591. DOI: 10.3389/fmicb.2023.1238199.
Abstract
Introduction: Imbalances in gut microbes have been implicated in many human diseases, including colorectal cancer (CRC), inflammatory bowel disease, type 2 diabetes, obesity, autism, and Alzheimer's disease. Compared with other human diseases, CRC is a gastrointestinal malignancy with high mortality and a high probability of metastasis. However, current studies mainly focus on the prediction of colorectal cancer while neglecting the more serious malignancy, metastatic colorectal cancer (mCRC). In addition, high dimensionality and small sample sizes make gut microbial data complex, which increases the difficulty for traditional machine learning models.
Methods: To address these challenges, we collected and processed 16S rRNA data and calculated abundance data from patients with non-metastatic colorectal cancer (non-mCRC) and mCRC. Departing from the traditional health-disease classification strategy, we adopted a novel disease-disease classification strategy and proposed a microbiome-based multi-view convolutional variational information bottleneck (MV-CVIB).
Results: The experimental results show that MV-CVIB can effectively predict mCRC. The model achieved AUC values above 0.9, outperforming other state-of-the-art models. MV-CVIB also achieved satisfactory predictive performance on multiple published CRC gut microbiome datasets.
Discussion: Finally, multiple gut microbiota analyses were used to elucidate communities and differences between mCRC and non-mCRC, and the metastatic properties of CRC were assessed by patient age and microbiota expression.
Affiliation(s)
- Zhen Cui: Institute of Machine Learning and Systems Biology, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Yan Wu: College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Qin-Hu Zhang: EIT Institute for Advanced Study, Ningbo, Zhejiang, China
- Si-Guo Wang: Institute of Machine Learning and Systems Biology, College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Ying He: Institute of Machine Learning and Systems Biology, College of Electronics and Information Engineering, Tongji University, Shanghai, China

7. Xie P, Yang C, Yang G, Jiang Y, He M, Jiang X, Chen Y, Deng L, Wang M, Armstrong DG, Ma Y, Deng W. Mortality prediction in patients with hyperglycaemic crisis using explainable machine learning: a prospective, multicentre study based on tertiary hospitals. Diabetol Metab Syndr 2023; 15:44. PMID: 36899433. PMCID: PMC10007769. DOI: 10.1186/s13098-023-01020-1.
Abstract
BACKGROUND: Experiencing a hyperglycaemic crisis is associated with a short- and long-term increased risk of mortality. We aimed to develop an explainable machine learning model for predicting 3-year mortality and providing individualized risk factor assessment of patients with hyperglycaemic crisis after admission.
METHODS: Based on five representative machine learning algorithms, we trained prediction models on data from patients with hyperglycaemic crisis admitted to two tertiary hospitals between 2016 and 2020. The models were internally validated by tenfold cross-validation and externally validated using previously unseen data from two other tertiary hospitals. A SHapley Additive exPlanations (SHAP) algorithm was used to interpret the predictions of the best performing model, and the relative importance of the features in the model was compared with the traditional statistical test results.
RESULTS: A total of 337 patients with hyperglycaemic crisis were enrolled in the study; 3-year mortality was 13.6% (46 patients). Of these, 257 patients were used to train the models, and 80 patients were used for model validation. The Light Gradient Boosting Machine model performed best across testing cohorts (area under the ROC curve 0.89 [95% CI 0.77-0.97]). Advanced age, higher blood glucose and higher blood urea nitrogen were the three most important predictors of increased mortality.
CONCLUSION: The developed explainable model can provide estimates of mortality and a visual breakdown of each feature's contribution to the prediction for an individual patient with hyperglycaemic crisis. Advanced age, metabolic disorders, and impaired renal and cardiac function were important factors that predicted non-survival.
TRIAL REGISTRATION NUMBER: ChiCTR1800015981, 2018/05/04.
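
The workflow summarised above pairs a gradient-boosting classifier with SHAP values to explain individual mortality predictions. A minimal sketch using the public lightgbm and shap packages follows; the synthetic data, feature names and hyperparameters are placeholders, not the study's cohort or settings.

```python
# Sketch: gradient boosting + SHAP explanations for individual risk predictions.
# Data and parameters are synthetic placeholders.
import numpy as np
import lightgbm as lgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                     # columns: age, glucose, blood urea nitrogen
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(size=300) > 1).astype(int)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

explainer = shap.TreeExplainer(model)             # exact SHAP values for tree ensembles
sv = explainer.shap_values(X)                     # per-patient, per-feature contributions
sv = sv[1] if isinstance(sv, list) else sv        # binary case: keep the positive class
print(np.abs(sv).mean(axis=0))                    # global importance as mean |SHAP| per feature
```
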
Affiliation(s)
- Puguang Xie: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Cheng Yang: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Gangyi Yang: Department of Endocrinology, The Second Affiliated Hospital, Chongqing Medical University, Chongqing 400010, China
- Youzhao Jiang: Department of Endocrinology, People's Hospital of Chongqing Banan District, Chongqing 401320, China
- Min He: General Practice Department, Chongqing Southwest Hospital, Chongqing 400038, China
- Xiaoyan Jiang: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Yan Chen: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Liling Deng: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Min Wang: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- David G Armstrong: Department of Surgery, Keck School of Medicine of University of Southern California, Los Angeles, CA 90033, USA
- Yu Ma: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China
- Wuquan Deng: Department of Endocrinology and Bioengineering College, Chongqing University Central Hospital, Chongqing Emergency Medical Centre, Chongqing University, No. 1 Jiankang Road, Yuzhong District, Chongqing 400014, China

8. Pasquini L, Napolitano A, Pignatelli M, Tagliente E, Parrillo C, Nasta F, Romano A, Bozzao A, Di Napoli A. Synthetic Post-Contrast Imaging through Artificial Intelligence: Clinical Applications of Virtual and Augmented Contrast Media. Pharmaceutics 2022; 14:2378. PMID: 36365197. PMCID: PMC9695136. DOI: 10.3390/pharmaceutics14112378.
Abstract
Contrast media are widely used in biomedical imaging, due to their relevance in the diagnosis of numerous disorders. However, the risk of adverse reactions, the concern of potential damage to sensitive organs, and the recently described brain deposition of gadolinium salts limit the use of contrast media in clinical practice. In recent years, the application of artificial intelligence (AI) techniques to biomedical imaging has led to the development of 'virtual' and 'augmented' contrasts. The idea behind these applications is to generate synthetic post-contrast images through AI computational modeling, starting from the information available in other images acquired during the same scan. In these AI models, non-contrast images (virtual contrast) or low-dose post-contrast images (augmented contrast) are used as input data to generate synthetic post-contrast images, which are often indistinguishable from the native ones. In this review, we discuss the most recent advances in AI applications to biomedical imaging relating to synthetic contrast media.
Affiliation(s)
- Luca Pasquini: Neuroradiology Unit, Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA; Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Antonio Napolitano (corresponding author): Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio 4, 00165 Rome, Italy
- Matteo Pignatelli: Radiology Department, Castelli Hospital, Via Nettunense Km 11.5, 00040 Ariccia, Italy
- Emanuela Tagliente: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio 4, 00165 Rome, Italy
- Chiara Parrillo: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio 4, 00165 Rome, Italy
- Francesco Nasta: Medical Physics Department, Bambino Gesù Children’s Hospital, IRCCS, Piazza di Sant’Onofrio 4, 00165 Rome, Italy
- Andrea Romano: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alessandro Bozzao: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy
- Alberto Di Napoli: Neuroradiology Unit, NESMOS Department, Sant’Andrea Hospital, La Sapienza University, Via di Grottarossa 1035, 00189 Rome, Italy; Neuroimaging Lab, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy

9. McCorry MC, Reardon KF, Black M, Williams C, Babakhanova G, Halpern JM, Sarkar S, Swami NS, Mirica KA, Boermeester S, Underhill A. Sensor technologies for quality control in engineered tissue manufacturing. Biofabrication 2022; 15. PMID: 36150372. PMCID: PMC10283157. DOI: 10.1088/1758-5090/ac94a1.
Abstract
The use of engineered cells, tissues, and organs has the opportunity to change the way injuries and diseases are treated. Commercialization of these groundbreaking technologies has been limited in part by the complex and costly nature of their manufacture. Process-related variability and even small changes in the manufacturing process of a living product will impact its quality. Without real-time integrated detection, the magnitude and mechanism of that impact are largely unknown. Real-time and non-destructive sensor technologies are key for in-process insight and ensuring a consistent product throughout commercial scale-up and/or scale-out. The application of a measurement technology into a manufacturing process requires cell and tissue developers to understand the best way to apply a sensor to their process, and for sensor manufacturers to understand the design requirements and end-user needs. Furthermore, sensors to monitor component cells' health and phenotype need to be compatible with novel integrated and automated manufacturing equipment. This review summarizes commercially relevant sensor technologies that can detect meaningful quality attributes during the manufacturing of regenerative medicine products, the gaps within each technology, and sensor considerations for manufacturing.
Affiliation(s)
- Mary Clare McCorry: Advanced Regenerative Manufacturing Institute, Manchester, NH 03101, United States of America
- Kenneth F Reardon: Chemical and Biological Engineering and Biomedical Engineering, Colorado State University, Fort Collins, CO 80521, United States of America
- Marcie Black: Advanced Silicon Group, Lowell, MA 01854, United States of America
- Chrysanthi Williams: Access Biomedical Solutions, Trinity, Florida 34655, United States of America
- Greta Babakhanova: National Institute of Standards and Technology, Gaithersburg, MD 20899, United States of America
- Jeffrey M Halpern: Department of Chemical Engineering, University of New Hampshire, Durham, NH 03824, United States of America; Materials Science and Engineering Program, University of New Hampshire, Durham, NH 03824, United States of America
- Sumona Sarkar: National Institute of Standards and Technology, Gaithersburg, MD 20899, United States of America
- Nathan S Swami: Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22904, United States of America
- Katherine A Mirica: Department of Chemistry, Dartmouth College, Hanover, NH 03755, United States of America
- Sarah Boermeester: Advanced Regenerative Manufacturing Institute, Manchester, NH 03101, United States of America
- Abbie Underhill: Scientific Bioprocessing Inc., Pittsburgh, PA 15238, United States of America

10. Dynamic Physical Activity Recommendation Delivered through a Mobile Fitness App: A Deep Learning Approach. Axioms 2022; 11:346. DOI: 10.3390/axioms11070346.
Abstract
Regular physical activity has a positive impact on our physical and mental health. Adhering to a fixed physical activity regimen is essential for good health and mental wellbeing. Today, fitness trackers and smartphone applications are used to promote physical activity. These applications use step counts recorded by accelerometers to estimate physical activity. In this research, we performed a two-level clustering on a dataset based on individuals’ physical and physiological features, as well as past daily activity patterns. The proposed model exploits user data with partial or complete features. To include users with partial features, we trained the proposed model with the data of users who possess exclusive features. Additionally, we classified the users into several clusters to produce more accurate results. This enables the proposed system to provide data-driven and personalized activity planning recommendations every day. A personalized physical activity plan is generated on the basis of hourly patterns for users according to their adherence and past recommended activity plans. Activity plans can be customized according to the user's historical activity habits and current activity objective, as well as the likelihood of sticking to the plan. The proposed physical activity recommendation system was evaluated in real time, and the results demonstrated improved performance over existing baselines.

11. Time-based self-supervised learning for Wireless Capsule Endoscopy. Comput Biol Med 2022; 146:105631. DOI: 10.1016/j.compbiomed.2022.105631.

12. Fan M, Yuan C, Huang G, Xu M, Wang S, Gao X, Li L. A framework for deep multitask learning with multiparametric magnetic resonance imaging for the joint prediction of histological characteristics in breast cancer. IEEE J Biomed Health Inform 2022; 26:3884-3895. PMID: 35635826. DOI: 10.1109/jbhi.2022.3179014.
Abstract
The clinical management and decision-making process related to breast cancer are based on multiple histological indicators. This study aims to jointly predict the Ki-67 expression level, luminal A subtype and histological grade molecular biomarkers using a new deep multitask learning method with multiparametric magnetic resonance imaging. A multitask learning network structure was proposed by introducing a common-task layer and task-specific layers to learn, respectively, the high-level features that are common to all tasks and those related to a specific task. A network pretrained on the ImageNet dataset was used and fine-tuned with MRI data. Information from multiparametric MR images was fused at the feature and decision levels. The area under the receiver operating characteristic curve (AUC) was used to measure model performance. For single-task learning using a single image series, the deep learning model generated AUCs of 0.752, 0.722, and 0.596 for the Ki-67, luminal A and histological grade prediction tasks, respectively. The performance was improved by freezing the first 5 convolutional layers, using 20% shared layers and fusing multiparametric series at the feature level, which achieved AUCs of 0.819, 0.799 and 0.747 for the Ki-67, luminal A and histological grade prediction tasks, respectively. Our study showed advantages in jointly predicting correlated clinical biomarkers using a deep multitask learning framework with an appropriate number of fine-tuned convolutional layers, taking full advantage of common and complementary imaging features. Multiparametric image series-based multitask learning could be a promising approach for the multiple clinical indicator-based management of breast cancer.
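
The shared "common-task" layers and task-specific heads described above can be sketched as follows; the layer sizes, head names and random inputs are illustrative assumptions, not the paper's architecture.

```python
# Sketch: shared trunk with task-specific heads for joint prediction of
# Ki-67, luminal A subtype and histological grade. Sizes are illustrative.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_features=512):
        super().__init__()
        self.shared = nn.Sequential(            # common-task layers
            nn.Linear(in_features, 128), nn.ReLU())
        self.heads = nn.ModuleDict({            # task-specific layers
            "ki67": nn.Linear(128, 1),
            "luminal_a": nn.Linear(128, 1),
            "grade": nn.Linear(128, 1)})
    def forward(self, x):
        h = self.shared(x)
        return {name: head(h) for name, head in self.heads.items()}

model = MultiTaskNet()
x = torch.randn(8, 512)                         # e.g. pooled CNN features from MRI
targets = {k: torch.randint(0, 2, (8, 1)).float() for k in ["ki67", "luminal_a", "grade"]}
loss = sum(nn.functional.binary_cross_entropy_with_logits(out, targets[k])
           for k, out in model(x).items())      # joint loss over all tasks
loss.backward()
print(float(loss))
```
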

13. Li C, Liu J, Chen J, Yuan Y, Yu J, Gou Q, Guo Y, Pu X. An Interpretable Convolutional Neural Network Framework for Analyzing Molecular Dynamics Trajectories: a Case Study on Functional States for G-Protein-Coupled Receptors. J Chem Inf Model 2022; 62:1399-1410. PMID: 35257580. DOI: 10.1021/acs.jcim.2c00085.
Abstract
Molecular dynamics (MD) simulations have made great contributions to revealing structural and functional mechanisms for many biomolecular systems. However, identifying functional states and important residues from the vast conformation space generated by MD remains challenging, so an intelligent means of navigating it is highly desirable. Despite the advantages deep learning has exhibited in analyzing MD trajectories, its black-box nature limits its application. To address this problem, we explore an interpretable convolutional neural network (CNN)-based deep learning framework, named ICNNMD, to automatically identify diverse active states from MD trajectories of G-protein-coupled receptors (GPCRs). To avoid information loss in representing the conformation structure, a pixel representation is introduced, and a CNN module is constructed to efficiently extract features, followed by a fully connected neural network to perform the classification task. More importantly, we design a local interpretable model-agnostic explanation interpreter for the classification result by local approximation with a linear model, through which important residues underlying distinct active states can be quickly identified. Our model showcases higher than 99% classification accuracy for three important GPCR systems with diverse active states. Notably, some important residues regulating different biased activities are successfully identified, which is beneficial to elucidating diverse activation mechanisms for GPCRs. Our model can also serve as a general tool to analyze MD trajectories for other biomolecular systems. All source codes are freely available at https://github.com/Jane-Liu97/ICNNMD for aiding MD studies.
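
The local linear-approximation idea behind the interpreter can be sketched without any specific library: perturb one frame's features, query the black-box classifier, and fit a proximity-weighted linear surrogate whose coefficients flag influential inputs. Everything below (the stand-in classifier, feature count and kernel width) is a placeholder, not the authors' implementation.

```python
# Sketch: LIME-style local explanation of a trajectory-frame classifier.
# Model and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_features = 50

def black_box_predict(X):
    """Stand-in for the trained CNN's probability of one active state."""
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 3] - 1.5 * X[:, 17])))

def lime_like_explanation(x0, n_samples=2000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around frame x0."""
    X = x0 + rng.normal(scale=0.3, size=(n_samples, n_features))   # perturbed neighbours
    y = black_box_predict(X)
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)                       # proximity weights
    surrogate = Ridge(alpha=1.0).fit(X, y, sample_weight=w)
    return surrogate.coef_                                          # per-feature importance

x0 = rng.normal(size=n_features)
coefs = lime_like_explanation(x0)
print(np.argsort(np.abs(coefs))[::-1][:5])   # indices of the most influential features
```
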
Affiliation(s)
- Chuan Li: College of Computer Science, Sichuan University, Chengdu 610064, China
- Jiangting Liu: College of Computer Science, Sichuan University, Chengdu 610064, China
- Jianfang Chen: College of Chemistry, Sichuan University, Chengdu 610064, China
- Yuan Yuan: College of Management, Southwest University for Nationalities, Chengdu 610041, China
- Jin Yu: Department of Physics and Astronomy, University of California, Irvine, California 92697, United States
- Qiaolin Gou: College of Chemistry, Sichuan University, Chengdu 610064, China
- Yanzhi Guo: College of Chemistry, Sichuan University, Chengdu 610064, China
- Xuemei Pu: College of Chemistry, Sichuan University, Chengdu 610064, China

14. Rabbi F, Dabbagh SR, Angin P, Yetisen AK, Tasoglu S. Deep Learning-Enabled Technologies for Bioimage Analysis. Micromachines 2022; 13:260. PMID: 35208385. PMCID: PMC8880650. DOI: 10.3390/mi13020260.
Abstract
Deep learning (DL) is a subfield of machine learning (ML), which has recently demonstrated its potency to significantly improve the quantification and classification workflows in biomedical and clinical applications. Among the end applications profoundly benefitting from DL, cellular morphology quantification is one of the pioneers. Here, we first briefly explain fundamental concepts in DL and then we review some of the emerging DL-enabled applications in cell morphology quantification in the fields of embryology, point-of-care ovulation testing, as a predictive tool for fetal heart pregnancy, cancer diagnostics via classification of cancer histology images, autosomal polycystic kidney disease, and chronic kidney diseases.
Affiliation(s)
- Fazle Rabbi: Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
- Sajjad Rahmani Dabbagh: Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey; Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
- Pelin Angin: Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey
- Ali Kemal Yetisen: Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK
- Savas Tasoglu (corresponding author): Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey; Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey; Institute of Biomedical Engineering, Boğaziçi University, Çengelköy, Istanbul 34684, Turkey; Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany

15. Danchin A. In vivo, in vitro and in silico: an open space for the development of microbe-based applications of synthetic biology. Microb Biotechnol 2022; 15:42-64. PMID: 34570957. PMCID: PMC8719824. DOI: 10.1111/1751-7915.13937.
Abstract
Living systems are studied using three complementary approaches: living cells, cell-free systems and computer-mediated modelling. Progress in understanding, allowing researchers to create novel chassis and industrial processes, rests on a cycle that combines in vivo, in vitro and in silico studies. This design-build-test-learn loop between experiments and analyses combines physiology, genetics, biochemistry and bioinformatics in a way that keeps moving forward. Because computer-aided approaches are not directly constrained by the material nature of the entities of interest, we illustrate here how this virtuous cycle allows researchers to explore chemistry which is foreign to that present in extant life, from whole chassis to novel metabolic cycles. Particular emphasis is placed on the importance of evolution.
Affiliation(s)
- Antoine Danchin: Kodikos Labs, Institut Cochin, 24 rue du Faubourg Saint-Jacques, 75014 Paris, France

16. Sunija A, Gopi VP, Palanisamy P. Redundancy reduced depthwise separable convolution for glaucoma classification using OCT images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103192.

17. Safayari A, Bolhasani H. Depression diagnosis by deep learning using EEG signals: A systematic review. Medicine in Novel Technology and Devices 2021. DOI: 10.1016/j.medntd.2021.100102.

18. Veltri P. Guest Editorial: Innovative Data Analysis Methods for Biomedicine. IEEE J Biomed Health Inform 2021. DOI: 10.1109/jbhi.2021.3116336.

19. Dagi TF, Barker FG, Glass J. Machine Learning and Artificial Intelligence in Neurosurgery: Status, Prospects, and Challenges. Neurosurgery 2021; 89:133-142. PMID: 34015816. DOI: 10.1093/neuros/nyab170.
Affiliation(s)
- T Forcht Dagi: Queen's University Belfast and The William J. Clinton Leadership Institute, Belfast, UK; Mayo Clinic College of Medicine and Science, Rochester, Minnesota, USA
- Fred G Barker: Department of Neurosurgery, Harvard Medical School, Boston, Massachusetts, USA; The Massachusetts General Hospital, Boston, Massachusetts, USA
- Jacob Glass: Center for Epigenetics Research, Memorial Sloan Kettering Cancer Center, New York, New York, USA

20. Lin T, Mai J, Yan M, Li Z, Quan X, Chen X. A Nomogram Based on CT Deep Learning Signature: A Potential Tool for the Prediction of Overall Survival in Resected Non-Small Cell Lung Cancer Patients. Cancer Manag Res 2021; 13:2897-2906. PMID: 33833572. PMCID: PMC8019610. DOI: 10.2147/cmar.s299020.
Abstract
PURPOSE: To develop and further validate a deep learning signature-based nomogram from computed tomography (CT) images for prediction of overall survival (OS) in resected non-small cell lung cancer (NSCLC) patients.
PATIENTS AND METHODS: A total of 1792 deep learning features were extracted from non-enhanced and venous-phase CT images for each NSCLC patient in the training cohort (n=231). A deep learning signature was then built with the least absolute shrinkage and selection operator (LASSO) Cox regression model for OS estimation. Finally, a nomogram was constructed with the signature and other independent clinical risk factors. The performance of the nomogram was assessed by discrimination, calibration and clinical usefulness. In addition, to quantify the improvement in performance added by the deep learning signature, the net reclassification improvement (NRI) was calculated. The results were validated in an external validation cohort (n=77).
RESULTS: A deep learning signature with 9 selected features was significantly associated with OS in both the training cohort (hazard ratio [HR]=5.455, 95% CI: 3.393-8.769, P<0.001) and the external validation cohort (HR=3.029, 95% CI: 1.673-5.485, P=0.004). The nomogram combining the deep learning signature with the clinical risk factors of TNM stage, lymphatic vessel invasion and differentiation grade showed favorable discriminative ability with a C-index of 0.800 as well as good calibration, which was validated in the external validation cohort (C-index=0.723). The additional value of the deep learning signature to the nomogram was statistically significant (NRI=0.093, P=0.027 for the training cohort; NRI=0.106, P=0.040 for the validation cohort). Decision curve analysis confirmed the clinical usefulness of this nomogram in predicting OS.
CONCLUSION: The deep learning signature-based nomogram is a robust tool for prognostic prediction in resected NSCLC patients.
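
The signature-building step described above fits an L1-penalized Cox model over CT-derived deep learning features so that only a few features keep non-zero coefficients. A minimal sketch using the public lifelines package is shown below; the synthetic data, penalty strength and feature count are placeholders, not the study's cohort or settings.

```python
# Sketch: LASSO-penalized Cox regression over deep-learning features, as a
# stand-in for the "deep learning signature" step. Data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.normal(size=(n, p))                                  # CNN-derived features per patient
risk = 0.8 * X[:, 0] - 0.6 * X[:, 1]
time = rng.exponential(scale=np.exp(-risk))                  # synthetic survival times
event = (rng.uniform(size=n) < 0.7).astype(int)              # 1 = death observed, 0 = censored

df = pd.DataFrame(X, columns=[f"feat{i}" for i in range(p)])
df["time"], df["event"] = time, event

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)               # pure L1 (LASSO) penalty
cph.fit(df, duration_col="time", event_col="event")

selected = cph.params_[cph.params_.abs() > 1e-3]             # features kept by the penalty
signature = df[selected.index] @ selected                    # linear "signature" per patient
print(selected.index.tolist(), "C-index:", round(cph.concordance_index_, 3))
```
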
Affiliation(s)
- Ting Lin: Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou 510282, People’s Republic of China
- Jinhai Mai: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, People’s Republic of China; Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, People’s Republic of China
- Meng Yan: Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou 510282, People’s Republic of China
- Zhenhui Li: Department of Radiology, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming 650118, People’s Republic of China
- Xianyue Quan: Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou 510282, People’s Republic of China
- Xin Chen: Department of Radiology, Guangdong Provincial People’s Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, People’s Republic of China; Department of Radiology, Guangzhou First People’s Hospital, School of Medicine, South China University of Technology, Guangzhou 510180, People’s Republic of China

21. Ganapathy N, Veeranki YR, Kumar H, Swaminathan R. Emotion Recognition Using Electrodermal Activity Signals and Multiscale Deep Convolutional Neural Network. J Med Syst 2021; 45:49. PMID: 33660087. DOI: 10.1007/s10916-020-01676-6.
Abstract
In this work, an attempt has been made to classify emotional states using electrodermal activity (EDA) signals and multiscale convolutional neural networks. EDA signals are taken from the publicly available "A Dataset for Emotion Analysis using Physiological Signals" (DEAP) database. These signals are decomposed into multiple scales using the coarse-grained method. The multiscale signals are fed to the Multiscale Convolutional Neural Network (MSCNN), which automatically learns robust features directly from the raw signals. Experiments are performed with the MSCNN approach to evaluate the hypotheses that (i) classification improves with electrodermal activity signals and (ii) multiscale learning captures robust complementary features at different scales. Results show that the proposed approach is able to differentiate various emotional states. The proposed approach yields a classification accuracy of 69.33% and 71.43% for valence and arousal states, respectively. It is observed that the number of layers and the signal length are the determinants of classifier performance. The proposed approach outperforms the single-layer convolutional neural network. The MSCNN approach provides end-to-end learning and classification of emotional states without additional signal processing. Thus, the proposed method could be a useful tool to assess differences in emotional states for automated decision making.
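
The coarse-graining step mentioned above simply averages non-overlapping windows of the raw signal at several window sizes, giving one input per scale for the CNN branches. A short sketch follows; the scales and the random trace are illustrative assumptions.

```python
# Sketch: coarse-grained multiscale decomposition of a 1-D EDA signal, the
# preprocessing step feeding a multiscale CNN. Window sizes are illustrative.
import numpy as np

def coarse_grain(signal, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return signal[: n * scale].reshape(n, scale).mean(axis=1)

eda = np.random.randn(1024)                       # placeholder electrodermal activity trace
multiscale = {s: coarse_grain(eda, s) for s in (1, 2, 4, 8)}
for s, x in multiscale.items():
    print(f"scale {s}: {len(x)} samples")          # each scale becomes one CNN input branch
```
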
Affiliation(s)
- Nagarajan Ganapathy: Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
- Yedukondala Rao Veeranki: Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
- Himanshu Kumar: Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
- Ramakrishnan Swaminathan: Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India

23. Pandey B, Kumar Pandey D, Pratap Mishra B, Rhmann W. A comprehensive survey of deep learning in the field of medical imaging and medical natural language processing: Challenges and research directions. Journal of King Saud University - Computer and Information Sciences 2021. DOI: 10.1016/j.jksuci.2021.01.007.

24. Dabbagh SR, Rabbi F, Doğan Z, Yetisen AK, Tasoglu S. Machine learning-enabled multiplexed microfluidic sensors. Biomicrofluidics 2020; 14:061506. PMID: 33343782. PMCID: PMC7733540. DOI: 10.1063/5.0025462.
Abstract
High-throughput, cost-effective, and portable devices can enhance the performance of point-of-care tests. Such devices are able to acquire images from samples at a high rate in combination with microfluidic chips in point-of-care applications. However, interpreting and analyzing the large amount of acquired data is not only a labor-intensive and time-consuming process, but also prone to the bias of the user and low accuracy. Integrating machine learning (ML) with the image acquisition capability of smartphones as well as increasing computing power could address the need for high-throughput, accurate, and automatized detection, data processing, and quantification of results. Here, ML-supported diagnostic technologies are presented. These technologies include quantification of colorimetric tests, classification of biological samples (cells and sperms), soft sensors, assay type detection, and recognition of the fluid properties. Challenges regarding the implementation of ML methods, including the required number of data points, image acquisition prerequisites, and execution of data-limited experiments are also discussed.
Affiliation(s)
- Fazle Rabbi: Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey
- Ali Kemal Yetisen: Department of Chemical Engineering, Imperial College London, London SW7 2AZ, United Kingdom

25. Mursch-Edlmayr AS, Ng WS, Diniz-Filho A, Sousa DC, Arnold L, Schlenker MB, Duenas-Angeles K, Keane PA, Crowston JG, Jayaram H. Artificial Intelligence Algorithms to Diagnose Glaucoma and Detect Glaucoma Progression: Translation to Clinical Practice. Transl Vis Sci Technol 2020; 9:55. PMID: 33117612. PMCID: PMC7571273. DOI: 10.1167/tvst.9.2.55.
Abstract
Purpose: This concise review aims to explore the potential for the clinical implementation of artificial intelligence (AI) strategies for detecting glaucoma and monitoring glaucoma progression.
Methods: Nonsystematic literature review using the search combinations "Artificial Intelligence," "Deep Learning," "Machine Learning," "Neural Networks," "Bayesian Networks," "Glaucoma Diagnosis," and "Glaucoma Progression." Information on sensitivity and specificity regarding glaucoma diagnosis and progression analysis, as well as methodological details, was extracted.
Results: Numerous AI strategies provide promising levels of specificity and sensitivity for structural (e.g. optical coherence tomography [OCT] imaging, fundus photography) and functional (visual field [VF] testing) test modalities used for the detection of glaucoma. Area under the receiver operating characteristic curve (AROC) values of >0.90 were achieved with every modality. Combining structural and functional inputs has been shown to further improve diagnostic ability. Regarding glaucoma progression, AI strategies can detect progression earlier than conventional methods, potentially from a single VF test.
Conclusions: AI algorithms applied to fundus photographs for screening purposes may provide good results using a simple and widely accessible test. However, for patients who are likely to have glaucoma, more sophisticated methods should be used, including data from OCT and perimetry. Outputs may serve as an adjunct to assist clinical decision making, while also enhancing the efficiency, productivity, and quality of the delivery of glaucoma care. Patients with diagnosed glaucoma may benefit from future algorithms to evaluate their risk of progression. Challenges are yet to be overcome, including the external validity of AI strategies, a move from a "black box" toward "explainable AI," and likely regulatory hurdles. However, it is clear that AI can enhance the role of specialist clinicians and will inevitably shape the future of the delivery of glaucoma care to the next generation.
Translational Relevance: The promising levels of diagnostic accuracy reported by AI strategies across the modalities used in clinical practice for glaucoma detection can pave the way for the development of reliable models appropriate for translation into clinical practice. Future incorporation of AI into healthcare models may help address the current limitations of access and timely management of patients with glaucoma across the world.
Affiliation(s)
- Wai Siene Ng: Cardiff Eye Unit, University Hospital of Wales, Cardiff, UK
- Alberto Diniz-Filho: Department of Ophthalmology and Otorhinolaryngology, Federal University of Minas Gerais, Belo Horizonte, Brazil
- David C Sousa: Department of Ophthalmology, Hospital de Santa Maria, Lisbon, Portugal
- Louis Arnold: Department of Ophthalmology, University Hospital, Dijon, France
- Matthew B Schlenker: Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Canada
- Karla Duenas-Angeles: Department of Ophthalmology, Universidad Nacional Autónoma de Mexico, Mexico City, Mexico
- Pearse A Keane: NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK
- Jonathan G Crowston: Centre for Vision Research, Duke-NUS Medical School, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore
- Hari Jayaram: NIHR Biomedical Research Centre for Ophthalmology, UCL Institute of Ophthalmology & Moorfields Eye Hospital, London, UK

26. Laiz P, Vitrià J, Wenzek H, Malagelada C, Azpiroz F, Seguí S. WCE polyp detection with triplet based embeddings. Comput Med Imaging Graph 2020; 86:101794. PMID: 33130417. DOI: 10.1016/j.compmedimag.2020.101794.
Abstract
Wireless capsule endoscopy is a medical procedure used to visualize the entire gastrointestinal tract and to diagnose intestinal conditions such as polyps or bleeding. Current analyses are performed by manually inspecting nearly every frame of the video, a tedious and error-prone task. Automatic image analysis methods can reduce the time physicians need to evaluate a capsule endoscopy video; however, these methods are still in a research phase. In this paper we focus on computer-aided polyp detection in capsule endoscopy images. This is a challenging problem because of the diversity of polyp appearance, the imbalanced dataset structure, and the scarcity of data. We have developed a new computer-aided polyp decision system that combines a deep convolutional neural network and metric learning. The key point of the method is the use of the Triplet Loss function to improve feature extraction from the images when the dataset is small. The Triplet Loss function makes it possible to train robust detectors by forcing images from the same category to be represented by similar embedding vectors while ensuring that images from different categories are represented by dissimilar vectors. Empirical results show a meaningful increase in AUC values compared with state-of-the-art methods. Good performance is not the only requirement for adopting this technology in clinical practice; trust and explainability of decisions are just as important. To this end, we also provide a method to generate visual explanations of the outcome of our polyp detector. These explanations can be used to build physicians' trust in the system and to convey information about the inner workings of the method to the designer for debugging purposes.
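The following is a minimal PyTorch sketch of triplet-based embedding learning in the spirit of the approach described in this abstract. The ResNet-18 backbone, margin value, and random tensors are illustrative assumptions, not the authors' exact architecture or data.

```python
# Minimal sketch of triplet-based embedding learning: a CNN maps frames to
# embeddings, and the triplet loss pulls same-class embeddings together while
# pushing different-class embeddings apart.
import torch
import torch.nn as nn
import torchvision.models as models

class EmbeddingNet(nn.Module):
    """CNN backbone that maps an endoscopy frame to an L2-normalized embedding."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)       # hypothetical backbone choice
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.backbone(x), dim=1)

model = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=0.2)        # margin is an assumed value

# anchor/positive share a class (e.g. polyp); negative comes from the other class.
anchor = torch.randn(8, 3, 224, 224)
positive = torch.randn(8, 3, 224, 224)
negative = torch.randn(8, 3, 224, 224)
loss = triplet_loss(model(anchor), model(positive), model(negative))
loss.backward()
```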
Collapse
Affiliation(s)
- Pablo Laiz
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain.
| | - Jordi Vitrià
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
| | | | - Carolina Malagelada
- Digestive System Research Unit, University Hospital Vall d'Hebron, Barcelona, Spain
| | - Fernando Azpiroz
- Digestive System Research Unit, University Hospital Vall d'Hebron, Barcelona, Spain
| | - Santi Seguí
- Department of Mathematics and Computer Science, Universitat de Barcelona, Barcelona, Spain
| |
Collapse
|
27
|
Automatic Prediction of MGMT Status in Glioblastoma via Deep Learning-Based MR Image Analysis. Biomed Res Int 2020; 2020:9258649. [PMID: 33029531 PMCID: PMC7530505 DOI: 10.1155/2020/9258649] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 08/27/2020] [Accepted: 09/03/2020] [Indexed: 11/17/2022]
Abstract
Methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is correlated with the effectiveness of the current standard of care in glioblastoma patients. In this study, a deep learning pipeline is designed for automatic prediction of MGMT status in 87 glioblastoma patients with contrast-enhanced T1W images and 66 with fluid-attenuated inversion recovery (FLAIR) images. The end-to-end pipeline performs both tumor segmentation and status classification. Tumor segmentation performance was better on FLAIR images (Dice score, 0.897 ± 0.007) than on contrast-enhanced T1WI (Dice score, 0.828 ± 0.108), and status prediction was also better on FLAIR images (accuracy, 0.827 ± 0.056; recall, 0.852 ± 0.080; precision, 0.821 ± 0.022; and F1 score, 0.836 ± 0.072). The proposed pipeline not only saves time in tumor annotation and avoids interrater variability in glioma segmentation but also achieves good prediction of MGMT methylation status. It could help identify molecular biomarkers from routine medical images and further facilitate treatment planning.
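A brief Python sketch of the two kinds of evaluation metrics quoted above (the Dice score for segmentation and standard classification metrics for MGMT status) is given below; the masks and labels are synthetic placeholders rather than data from the study.

```python
# Illustrative helpers: Dice score for binary segmentation masks and
# accuracy/recall/precision/F1 for binary MGMT status classification.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    return (2.0 * np.logical_and(pred, true).sum() + eps) / (pred.sum() + true.sum() + eps)

rng = np.random.default_rng(1)
pred_mask = rng.integers(0, 2, size=(128, 128))        # random masks, demonstration only
true_mask = rng.integers(0, 2, size=(128, 128))
print("Dice:", round(dice_score(pred_mask, true_mask), 3))

# Classification example: 1 = methylated MGMT promoter, 0 = unmethylated (synthetic).
y_true = rng.integers(0, 2, size=60)
y_pred = rng.integers(0, 2, size=60)
print("accuracy:", accuracy_score(y_true, y_pred),
      "recall:", recall_score(y_true, y_pred),
      "precision:", precision_score(y_true, y_pred, zero_division=0),
      "F1:", f1_score(y_true, y_pred))
```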
Collapse
|
28
|
Goodwin NL, Nilsson SRO, Golden SA. Rage Against the Machine: Advancing the study of aggression ethology via machine learning. Psychopharmacology (Berl) 2020; 237:2569-2588. [PMID: 32647898 PMCID: PMC7502501 DOI: 10.1007/s00213-020-05577-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Accepted: 06/01/2020] [Indexed: 12/24/2022]
Abstract
RATIONALE Aggression, comorbid with neuropsychiatric disorders, presents with diverse clinical manifestations and places a significant burden on patients, caregivers, and society. This diversity is observed because aggression is a complex behavior that can be ethologically demarcated as either appetitive (rewarding) or reactive (defensive), each with its own behavioral characteristics, functionality, and neural basis that may transition from adaptive to maladaptive depending on genetic and environmental factors. There has been a recent surge in the development of preclinical animal models for studying appetitive aggression-related behaviors and identifying the neural mechanisms guiding their progression and expression. However, adoption of these procedures is often impeded by the arduous task of manually scoring complex social interactions. Manual observations are generally susceptible to observer drift, long analysis times, and poor inter-rater reliability, and are further incompatible with the sampling frequencies required of modern neuroscience methods. OBJECTIVES In this review, we discuss recent advances in the preclinical study of appetitive aggression in mice, paired with our perspective on the potential for machine learning techniques in producing automated, robust scoring of aggressive social behavior. We discuss critical considerations for implementing valid computer classifications within behavioral pharmacological studies. KEY RESULTS Open-source automated classification platforms can match or exceed the performance of human observers while removing the confounds of observer drift, bias, and poor inter-rater reliability. Furthermore, unsupervised approaches can identify previously uncharacterized aggression-related behavioral repertoires in model species. DISCUSSION AND CONCLUSIONS Advances in open-source computational approaches hold promise for overcoming current manual annotation caveats while also introducing and generalizing computational neuroethology to the greater behavioral neuroscience community. We propose that currently available open-source approaches are sufficient for overcoming the main limitations preventing wide adoption of machine learning within the context of preclinical aggression behavioral research.
Collapse
Affiliation(s)
- Nastacia L Goodwin
- Department of Biological Structure, University of Washington, Seattle, WA, USA
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
| | - Simon R O Nilsson
- Department of Biological Structure, University of Washington, Seattle, WA, USA
| | - Sam A Golden
- Department of Biological Structure, University of Washington, Seattle, WA, USA.
- Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA.
- Center of Excellence in Neurobiology of Addiction, Pain, and Emotion (NAPE), University of Washington, Seattle, WA, USA.
| |
Collapse
|
29
|
Monshi MMA, Poon J, Chung V. Deep learning in generating radiology reports: A survey. Artif Intell Med 2020; 106:101878. [PMID: 32425358 PMCID: PMC7227610 DOI: 10.1016/j.artmed.2020.101878] [Citation(s) in RCA: 53] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 04/30/2020] [Accepted: 05/10/2020] [Indexed: 12/27/2022]
Abstract
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL), driven by the introduction of large medical text/image datasets. Generating coherent radiology paragraphs that go beyond traditional medical image annotation, or single-sentence description, has been the subject of recent academic attention. This presents a more practical and challenging application and moves towards bridging visual medical features and radiologist text. So far, the most common approach has been to utilize publicly available datasets and develop DL models that integrate convolutional neural networks (CNN) for image analysis alongside recurrent neural networks (RNN) for natural language processing (NLP) and natural language generation (NLG). This is an area of research that we anticipate will grow in the near future. We focus our investigation on the following critical challenges: understanding radiology text/image structures and datasets, applying DL algorithms (mainly CNN and RNN), generating radiology text, and improving existing DL-based models and evaluation metrics. Lastly, we include a critical discussion and future research recommendations. This survey will be useful for researchers interested in DL, particularly those interested in applying DL to radiology reporting.
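As a rough illustration of the CNN-encoder/RNN-decoder pattern the survey describes, the sketch below wires a convolutional image encoder to an LSTM decoder in PyTorch. The backbone, dimensions, and vocabulary size are assumptions for demonstration only, not a model from any surveyed paper.

```python
# Toy CNN-encoder / RNN-decoder skeleton: an image feature vector initialises
# the LSTM hidden state, and the decoder predicts report tokens step by step.
import torch
import torch.nn as nn
import torchvision.models as models

class ReportGenerator(nn.Module):
    def __init__(self, vocab_size: int = 5000, hidden_dim: int = 512, embed_dim: int = 256):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = nn.Identity()                       # keep the 512-d image features
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.img_to_h0 = nn.Linear(512, hidden_dim)  # image feature -> initial LSTM state
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        feats = self.encoder(images)                             # (B, 512)
        h0 = torch.tanh(self.img_to_h0(feats)).unsqueeze(0)      # (1, B, H)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(tokens), (h0, c0))      # (B, T, H)
        return self.to_vocab(out)                                # per-token vocabulary logits

model = ReportGenerator()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 5000, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 5000])
```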
Collapse
Affiliation(s)
- Maram Mahmoud A Monshi
- School of Computer Science, University of Sydney, Sydney, Australia; Department of Information Technology, Taif University, Taif, Saudi Arabia.
| | - Josiah Poon
- School of Computer Science, University of Sydney, Sydney, Australia
| | - Vera Chung
- School of Computer Science, University of Sydney, Sydney, Australia
| |
Collapse
|
30
|
Lin GM, Nagamine M, Yang SN, Tai YM, Lin C, Sato H. Machine Learning Based Suicide Ideation Prediction for Military Personnel. IEEE J Biomed Health Inform 2020; 24:1907-1916. [PMID: 32324581 DOI: 10.1109/jbhi.2020.2988393] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Military personnel experience greater psychological stress and are at higher risk of suicide attempts than the general population. High mental stress may cause suicide ideation, which is a crucial driver of suicide attempts. However, traditional statistical methods could find only a moderate degree of correlation between psychological stress and suicide ideation in non-psychiatric individuals. This article utilizes machine learning techniques, including logistic regression, decision tree, random forest, gradient boosting regression tree, support vector machine, and multilayer perceptron, to predict the presence of suicide ideation from six important psychological stress domains in military males and females. The accuracies of all six machine learning methods are over 98%. Among them, the multilayer perceptron and support vector machine provide the best predictions of suicide ideation, approaching 100%. Compared with the conventional criterion of a BSRS-5 score ≥7, the proposed algorithms improve accuracy, sensitivity, specificity, precision, the AUC of the ROC curve, and the AUC of the PR curve by up to 5.7%, 35.9%, 4.6%, 65.2%, 4.3%, and 53.2%, respectively, for the presence of suicide ideation (≥1); for more severely intense suicide ideation (≥2), the improvements are 6.1%, 26.2%, 5.8%, 83.5%, 2.8%, and 64.7%, respectively.
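The sketch below illustrates the kind of classifier comparison described in this abstract using scikit-learn, reporting accuracy, sensitivity, specificity, precision, ROC AUC, and PR AUC. The synthetic features stand in for the six psychological stress domains; they are not the study's data, and the models use default or lightly adjusted hyperparameters.

```python
# Compare several standard classifiers on a synthetic, imbalanced binary task
# and report the metrics mentioned in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import (accuracy_score, recall_score, precision_score,
                             roc_auc_score, average_precision_score, confusion_matrix)

X, y = make_classification(n_samples=1000, n_features=6, n_informative=6,
                           n_redundant=0, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(),
    "rf": RandomForestClassifier(),
    "gbrt": GradientBoostingClassifier(),
    "svm": SVC(probability=True),
    "mlp": MLPClassifier(max_iter=1000),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"sens={recall_score(y_te, pred):.3f} spec={tn / (tn + fp):.3f} "
          f"prec={precision_score(y_te, pred, zero_division=0):.3f} "
          f"roc_auc={roc_auc_score(y_te, proba):.3f} pr_auc={average_precision_score(y_te, proba):.3f}")
```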
Collapse
|
31
|
Fleetwood O, Kasimova MA, Westerlund AM, Delemotte L. Molecular Insights from Conformational Ensembles via Machine Learning. Biophys J 2020; 118:765-780. [PMID: 31952811 PMCID: PMC7002924 DOI: 10.1016/j.bpj.2019.12.016] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2019] [Revised: 11/21/2019] [Accepted: 12/16/2019] [Indexed: 01/04/2023] Open
Abstract
Biomolecular simulations are intrinsically high dimensional and generate noisy data sets of ever-increasing size. Extracting important features from the data is crucial for understanding the biophysical properties of molecular processes, but remains a big challenge. Machine learning (ML) provides powerful dimensionality reduction tools. However, such methods are often criticized as resembling black boxes with limited human-interpretable insight. We use methods from supervised and unsupervised ML to efficiently create interpretable maps of important features from molecular simulations. We benchmark the performance of several methods, including neural networks, random forests, and principal component analysis, using a toy model with properties reminiscent of macromolecular behavior. We then analyze three diverse biological processes: conformational changes within the soluble protein calmodulin, ligand binding to a G protein-coupled receptor, and activation of an ion channel voltage-sensor domain, unraveling features critical for signal transduction, ligand binding, and voltage sensing. This work demonstrates the usefulness of ML in understanding biomolecular states and demystifying complex simulations.
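One of the supervised strategies mentioned above can be sketched in a few lines: a random forest trained to separate two simulated states, with per-feature importances serving as an interpretable map of which input coordinates matter. The features below are random placeholders for simulation-derived descriptors such as inter-residue distances, not data from the benchmarked systems.

```python
# Supervised feature-importance sketch: label frames by state, fit a random
# forest, and read off which features discriminate the states.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_frames, n_features = 2000, 50
X = rng.normal(size=(n_frames, n_features))           # stand-ins for simulation features
states = rng.integers(0, 2, size=n_frames)            # e.g. active vs. inactive label
X[states == 1, 7] += 2.0                              # make feature 7 state-discriminating

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, states)
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative features:", top)              # feature 7 should rank first
```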
Collapse
Affiliation(s)
- Oliver Fleetwood
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Solna, Sweden
| | - Marina A Kasimova
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Solna, Sweden
| | - Annie M Westerlund
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Solna, Sweden
| | - Lucie Delemotte
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Solna, Sweden.
| |
Collapse
|
32
|
Fan M, Liu Z, Xie S, Xu M, Wang S, Gao X, Li L. Integration of dynamic contrast-enhanced magnetic resonance imaging and T2-weighted imaging radiomic features by a canonical correlation analysis-based feature fusion method to predict histological grade in ductal breast carcinoma. Phys Med Biol 2019; 64:215001. [PMID: 31470420 DOI: 10.1088/1361-6560/ab3fd3] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Tumour histological grade has prognostic implications in breast cancer. Tumour features in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and T2-weighted (T2W) imaging can provide related and complementary information for the analysis of breast lesions, improving MRI-based prediction of histological status in breast cancer. A dataset of 167 patients with invasive ductal carcinoma (IDC) was assembled, consisting of 72 low/intermediate-grade and 95 high-grade cases with preoperative DCE-MRI and T2W images. The cohort was separated into development (n = 111) and validation (n = 56) cohorts. Each tumour was segmented in the precontrast, intermediate, and last postcontrast DCE-MR images and was mapped to the tumour in the T2W images. Radiomic features, including texture, morphology, and histogram distribution features of the tumour image, were extracted from these image series. Features from the DCE-MR and T2W images were fused by a canonical correlation analysis (CCA)-based method. Support vector machine (SVM) classifiers were trained and tested on the development and validation cohorts, respectively. SVM-based recursive feature elimination (SVM-RFE) was adopted to identify the optimal features for prediction. The areas under the ROC curves (AUCs) for the T2W images and the DCE-MRI series of precontrast, intermediate, and last postcontrast images were 0.750 ± 0.047, 0.749 ± 0.047, and 0.788 ± 0.045, respectively, in the development cohort and 0.715 ± 0.068, 0.704 ± 0.073, and 0.744 ± 0.067, respectively, in the validation cohort. After the CCA-based fusion of features from the DCE-MRI series and T2W images, the AUCs increased to 0.751 ± 0.065, 0.803 ± 0.060, and 0.794 ± 0.060 in the validation cohort. Moreover, fusing features between DCE-MRI and T2W images using CCA achieved better performance than concatenation-based feature fusion or classifier fusion methods. Our results demonstrate that anatomical and functional MR images yield complementary information, and that fusing radiomic features by matrix transformation to optimize their correlations produced a classifier with improved performance for predicting the histological grade of IDC.
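A hedged scikit-learn sketch of the overall pipeline described here (CCA-based fusion of two radiomic feature blocks, SVM-RFE feature selection, and an SVM classifier) follows. The feature matrices and labels are random placeholders, not radiomics extracted from the study's DCE-MRI or T2W images, and the component counts are arbitrary choices.

```python
# CCA projects both feature blocks into a shared, maximally correlated space;
# the fused representation is then filtered with SVM-RFE and classified by an SVM.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients = 167
X_dce = rng.normal(size=(n_patients, 40))    # DCE-MRI radiomic features (placeholder)
X_t2w = rng.normal(size=(n_patients, 40))    # T2W radiomic features (placeholder)
y = rng.integers(0, 2, size=n_patients)      # 1 = high grade, 0 = low/intermediate

cca = CCA(n_components=10)
Z_dce, Z_t2w = cca.fit_transform(X_dce, X_t2w)
X_fused = np.hstack([Z_dce, Z_t2w])

X_tr, X_te, y_tr, y_te = train_test_split(X_fused, y, test_size=0.33, stratify=y, random_state=0)

# SVM-RFE keeps the most discriminative fused features, then a linear SVM classifies.
selector = RFE(SVC(kernel="linear"), n_features_to_select=8).fit(X_tr, y_tr)
clf = SVC(kernel="linear", probability=True).fit(selector.transform(X_tr), y_tr)
scores = clf.predict_proba(selector.transform(X_te))[:, 1]
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```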
Collapse
Affiliation(s)
- Ming Fan
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
| |
Collapse
|