1. Retamales G, Gavidia ME, Bausch B, Montanari AN, Husch A, Goncalves J. Towards automatic home-based sleep apnea estimation using deep learning. NPJ Digit Med 2024;7:144. PMID: 38824175; PMCID: PMC11144223; DOI: 10.1038/s41746-024-01139-z.
Abstract
Apnea and hypopnea are common sleep disorders characterized by the obstruction of the airways. Polysomnography (PSG) is a sleep study typically used to compute the Apnea-Hypopnea Index (AHI), the number of times a person has apnea or certain types of hypopnea per hour of sleep, and diagnose the severity of the sleep disorder. Early detection and treatment of apnea can significantly reduce morbidity and mortality. However, long-term PSG monitoring is unfeasible as it is costly and uncomfortable for patients. To address these issues, we propose a method, named DRIVEN, to estimate AHI at home from wearable devices and detect when apnea, hypopnea, and periods of wakefulness occur throughout the night. The method can therefore assist physicians in diagnosing the severity of apneas. Patients can wear a single sensor or a combination of sensors that can be easily measured at home: abdominal movement, thoracic movement, or pulse oximetry. For example, using only two sensors, DRIVEN correctly classifies 72.4% of all test patients into one of the four AHI classes, with 99.3% either correctly classified or placed one class away from the true one. This is a reasonable trade-off between the model's performance and the patient's comfort. We use publicly available data from three large sleep studies with a total of 14,370 recordings. DRIVEN consists of a combination of deep convolutional neural networks and a light-gradient-boost machine for classification. It can be implemented for automatic estimation of AHI in unsupervised long-term home monitoring systems, reducing costs to healthcare systems and improving patient care.
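The AHI at the heart of this paper is simple arithmetic: scored events divided by hours of sleep, then bucketed into a severity class. A minimal sketch in Python (the cut-offs of 5, 15, and 30 events/h are the standard AASM severity thresholds, assumed here because the abstract does not spell out the paper's exact four-class bins):

```python
def apnea_hypopnea_index(n_events: int, sleep_hours: float) -> float:
    """AHI = (apneas + scored hypopneas) per hour of sleep."""
    return n_events / sleep_hours

def ahi_class(ahi: float) -> str:
    # Standard AASM severity cut-offs (assumed; not stated in the abstract).
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

# 120 scored events over a 7.5 h sleep study:
ahi = apnea_hypopnea_index(120, 7.5)  # 16.0 events/h
print(ahi, ahi_class(ahi))
```

Being "one class away from the true one", as 99.3% of DRIVEN's test patients were at worst, corresponds to landing in an adjacent bucket of `ahi_class`.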
Affiliation(s)
- Gabriela Retamales: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Marino E Gavidia: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Ben Bausch: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Arthur N Montanari: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg; Department of Physics and Astronomy, Northwestern University, Evanston, IL, 60208, USA
- Andreas Husch: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg
- Jorge Goncalves: Luxembourg Centre for Systems Biomedicine, University of Luxembourg, L-4367, Belvaux, Luxembourg; Department of Plant Sciences, University of Cambridge, Cambridge, CB2 3EA, UK
2. Lombardi S, Partanen P, Francia P, Calamai I, Deodati R, Luchini M, Spina R, Bocchi L. Classifying sepsis from photoplethysmography. Health Inf Sci Syst 2022;10:30. PMID: 36330224; PMCID: PMC9622958; DOI: 10.1007/s13755-022-00199-3.
Abstract
Sepsis is a life-threatening organ dysfunction caused by a dysregulated immune response to infection and is one of the leading causes of death in the intensive care unit (ICU). Early detection and treatment of sepsis can increase patient survival. Devices such as the photoplethysmograph could allow early evaluation in addition to continuous monitoring of septic patients. The aim of this study was to verify whether sepsis can be detected from the photoplethysmographic (PPG) signal acquired via a pulse oximeter. We developed a deep learning-based model for sepsis identification. The model takes a single input, the PPG signal acquired by pulse oximeter, and performs a binary classification between septic and non-septic samples. To develop the method, we used the MIMIC-III database, which contains data from ICU patients; the selected dataset includes 85 septic subjects and 101 control subjects. The PPG signals acquired from these patients were segmented, processed, and used as input to the model. The proposed method achieved an accuracy of 76.37%, with a sensitivity of 70.95% and a specificity of 81.04%, on the test set, and the area under the ROC curve reached 0.842. These results indicate that the photoplethysmographic signal can serve as a warning sign for the early detection of sepsis, reducing the time to diagnosis and therapeutic intervention. Furthermore, the proposed method is suitable for integration into continuous patient monitoring.
Affiliation(s)
- Sara Lombardi: Department of Information Engineering, University of Florence, Via S. Marta, 3, 50139 Florence, Italy
- Petri Partanen: Faculty of Information Technology and Electrical Engineering, University of Oulu, Pentti Kaiteran katu 1, 90570 Oulu, Finland
- Piergiorgio Francia: Department of Information Engineering, University of Florence, Via S. Marta, 3, 50139 Florence, Italy
- Italo Calamai: S.O.C. Anestesia e Rianimazione, Ospedale S. Giuseppe, viale Giovanni Boccaccio, 16, 50053 Empoli, Italy
- Rossella Deodati: S.O.C. Anestesia e Rianimazione, Ospedale S. Giuseppe, viale Giovanni Boccaccio, 16, 50053 Empoli, Italy
- Marco Luchini: S.O.C. Anestesia e Rianimazione, Ospedale S. Giuseppe, viale Giovanni Boccaccio, 16, 50053 Empoli, Italy
- Rosario Spina: S.O.C. Anestesia e Rianimazione, Ospedale S. Giuseppe, viale Giovanni Boccaccio, 16, 50053 Empoli, Italy
- Leonardo Bocchi: Department of Information Engineering, University of Florence, Via S. Marta, 3, 50139 Florence, Italy
3. Pouromran F, Lin Y, Kamarthi S. Personalized Deep Bi-LSTM RNN Based Model for Pain Intensity Classification Using EDA Signal. Sensors (Basel) 2022;22:8087. PMID: 36365785; PMCID: PMC9654781; DOI: 10.3390/s22218087.
Abstract
Automatic pain intensity assessment from physiological signals is an appealing but still largely unexplored research topic. Most studies have used machine learning approaches built on carefully engineered features derived from domain knowledge about physiological time series. A deep learning framework, however, can automate the feature-engineering step, enabling the model to work directly on the raw input signals for real-time pain monitoring. We investigated a personalized bidirectional long short-term memory recurrent neural network (BiLSTM RNN) and an ensemble of a BiLSTM RNN with extreme gradient boosting decision trees (XGB) for four-category pain intensity classification. We recorded electrodermal activity (EDA) signals from 29 subjects during the cold pressor test, decomposed the EDA signals into tonic and phasic components, and appended these components to the original signals. The BiLSTM-XGB ensemble outperformed the BiLSTM alone, achieving an average F1-score of 0.81 and an area under the receiver operating characteristic curve (AUROC) of 0.93 over four pain states: no pain, low pain, medium pain, and high pain. We also explored concatenating the deep-learning feature representations with a set of fourteen knowledge-based features extracted from the EDA signals; an XGB model trained on this fused feature set performed better than one trained on either component feature set alone. This study showed that deep learning can go beyond expert knowledge and benefit from learned deep representations of physiological signals for pain assessment.
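The tonic/phasic decomposition the authors apply to EDA can be illustrated with a deliberately simplified moving-average baseline: the slow tonic level is a local mean, and the phasic component is what remains. This is a sketch of the idea only; the abstract does not state the paper's decomposition algorithm, and practical pipelines use dedicated methods such as cvxEDA.

```python
def decompose_eda(signal, win):
    """Split an EDA trace into a slow tonic baseline (centered moving
    average of half-width `win`) and a fast phasic residual.
    Illustrative simplification, not the study's method."""
    n = len(signal)
    tonic = []
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        seg = signal[lo:hi]
        tonic.append(sum(seg) / len(seg))
    # Phasic = original minus baseline, so tonic + phasic == signal.
    phasic = [s - t for s, t in zip(signal, tonic)]
    return tonic, phasic

sig = [1.0, 1.0, 3.0, 1.0, 1.0]   # a single skin-conductance response
tonic, phasic = decompose_eda(sig, 2)
```

Appending `tonic` and `phasic` alongside `sig` as extra input channels mirrors the augmentation described above.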
Affiliation(s)
- Fatemeh Pouromran: Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02115, USA
- Yingzi Lin: Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02115, USA
- Sagar Kamarthi: Department of Mechanical and Industrial Engineering, Northeastern University, Boston, MA 02115, USA
4. Choi JW, Kim DH, Koo DL, Park Y, Nam H, Lee JH, Kim HJ, Hong SN, Jang G, Lim S, Kim B. Automated Detection of Sleep Apnea-Hypopnea Events Based on 60 GHz Frequency-Modulated Continuous-Wave Radar Using Convolutional Recurrent Neural Networks: A Preliminary Report of a Prospective Cohort Study. Sensors (Basel) 2022;22:7177. PMID: 36236274; PMCID: PMC9570824; DOI: 10.3390/s22197177.
Abstract
Radar is a promising non-contact sensor for overnight polysomnography (PSG), the gold standard for diagnosing obstructive sleep apnea (OSA). This preliminary study aimed to demonstrate the feasibility of the automated detection of apnea-hypopnea events for OSA diagnosis based on 60 GHz frequency-modulated continuous-wave radar using convolutional recurrent neural networks. The dataset comprised 44 participants from an ongoing OSA cohort, recruited from July 2021 to April 2022, who underwent overnight PSG with a radar sensor. All PSG recordings, including sleep and wakefulness, were included in the dataset. Model development and evaluation were based on a five-fold cross-validation. The area under the receiver operating characteristic curve for the classification of 1-min segments ranged from 0.796 to 0.859. Depending on OSA severity, the sensitivities for apnea-hypopnea events were 49.0-67.6%, and the number of false-positive detections per participant was 23.4-52.8. The estimated apnea-hypopnea index showed strong correlations (Pearson correlation coefficient = 0.805-0.949) and good to excellent agreement (intraclass correlation coefficient = 0.776-0.929) with the ground truth. There was substantial agreement between the estimated and ground truth OSA severity (kappa statistics = 0.648-0.736). The results demonstrate the potential of radar as a standalone screening tool for OSA.
Affiliation(s)
- Jae Won Choi: Department of Radiology, Armed Forces Yangju Hospital, Yangju 11429, Korea
- Dong Hyun Kim: Department of Radiology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Dae Lim Koo: Department of Neurology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Yangmi Park: Department of Neurology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Hyunwoo Nam: Department of Neurology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Ji Hyun Lee: Department of Radiology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Hyo Jin Kim: Department of Radiology, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
- Seung-No Hong: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul Metropolitan Government—Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul 07061, Korea
5. Musa N, Gital AY, Aljojo N, Chiroma H, Adewole KS, Mojeed HA, Faruk N, Abdulkarim A, Emmanuel I, Folawiyo YY, Ogunmodede JA, Oloyede AA, Olawoyin LA, Sikiru IA, Katb I. A systematic review and meta-data analysis on the applications of deep learning in electrocardiogram. J Ambient Intell Humaniz Comput 2022;14:9677-9750. PMID: 35821879; PMCID: PMC9261902; DOI: 10.1007/s12652-022-03868-z.
Abstract
The success of deep learning over traditional machine learning in artificial intelligence tasks such as image processing, computer vision, object detection, speech recognition, and medical imaging has made deep learning the dominant approach in AI applications. Over the last decade, applications of deep learning to physiological signals such as the electrocardiogram (ECG) have attracted considerable research. However, previous surveys have not provided a systematic, comprehensive review of the applications of deep learning in ECG, including biometric ECG-based systems, organized by application domain. To address this gap, we conducted a systematic literature review of the applications of deep learning in ECG, including biometric ECG-based systems. The study systematically analyzed 150 primary studies with evidence of the application of deep learning to ECG and shows that deep learning has been applied to ECG across many domains. We present a new taxonomy of these application domains, discuss biometric ECG-based systems, and provide a meta-data analysis of the studies by domain, area, task, deep learning model, dataset source, and preprocessing method. Challenges and potential research opportunities are highlighted to enable novel research. We believe this study will be useful both to new researchers and to experts seeking to add to the existing body of knowledge on ECG signal processing with deep learning. Supplementary information: the online version contains supplementary material available at 10.1007/s12652-022-03868-z.
Affiliation(s)
- Nehemiah Musa: Department of Mathematical Sciences, Abubakar Tafawa Balewa University, Bauchi, Nigeria
- Abdulsalam Ya’u Gital: Department of Mathematical Sciences, Abubakar Tafawa Balewa University, Bauchi, Nigeria
- Haruna Chiroma: Computer Science and Engineering, University of Hafr Al-Batin, Hafr Al-Batin, Saudi Arabia
- Kayode S. Adewole: Department of Computer Science, University of Ilorin, Ilorin, Nigeria
- Hammed A. Mojeed: Department of Computer Science, University of Ilorin, Ilorin, Nigeria
- Nasir Faruk: Department of Physics, Sule Lamido University, Kafin Hausa, Nigeria
- Abubakar Abdulkarim: Department of Electrical Engineering, Ahmadu Bello University Zaria, Zaria, Nigeria
- Ifada Emmanuel: Department of Physics, Sule Lamido University, Kafin Hausa, Nigeria
- Ibrahim Katb: Computer Science and Engineering, University of Hafr Al-Batin, Hafr Al-Batin, Saudi Arabia
6. Wipperman MF, Pogoncheff G, Mateo KF, Wu X, Chen Y, Levy O, Avbersek A, Deterding RR, Hamon SC, Vu T, Alaj R, Harari O. A pilot study of the Earable device to measure facial muscle and eye movement tasks among healthy volunteers. PLOS Digit Health 2022;1:e0000061. PMID: 36812552; PMCID: PMC9931353; DOI: 10.1371/journal.pdig.0000061.
Abstract
The Earable device is a behind-the-ear wearable originally developed to measure cognitive function. Since Earable measures electroencephalography (EEG), electromyography (EMG), and electrooculography (EOG), it may also have the potential to objectively quantify the facial muscle and eye movement activities relevant to the assessment of neuromuscular disorders. As an initial step toward developing a digital assessment for neuromuscular disorders, a pilot study was conducted to determine whether the Earable device could objectively measure facial muscle and eye movements representative of Performance Outcome Assessments (PerfOs), using tasks designed to model clinical PerfOs, referred to as mock-PerfO activities. The specific aims of this study were: to determine whether the raw Earable EMG, EOG, and EEG signals could be processed to extract features describing these waveforms; to assess Earable feature data quality, test-retest reliability, and statistical properties; to determine whether features derived from Earable could distinguish between various facial muscle and eye movement activities; and to determine which features and feature types are important for mock-PerfO activity classification. A total of N = 10 healthy volunteers participated in the study. Each participant performed 16 mock-PerfO activities, including talking, chewing, swallowing, eye closure, gazing in different directions, puffing cheeks, chewing an apple, and making various facial expressions. Each activity was repeated four times in the morning and four times at night. A total of 161 summary features were extracted from the EEG, EMG, and EOG bio-sensor data. Feature vectors were used as input to machine learning models to classify the mock-PerfO activities, and model performance was evaluated on a held-out test set. Additionally, a convolutional neural network (CNN) was used to classify low-level representations of the raw bio-sensor data for each task, and its performance was evaluated and compared directly against the feature-based classification. Study results indicate that Earable can quantify different aspects of facial and eye movements and can differentiate mock-PerfO activities. Specifically, Earable distinguished talking, chewing, and swallowing tasks from other tasks with observed F1 scores >0.9. While EMG features contributed to classification accuracy for all tasks, EOG features were important for classifying gaze tasks. Finally, analysis with summary features outperformed the CNN for activity classification. We believe Earable may be used to measure cranial muscle activity relevant to neuromuscular disorder assessment. The classification performance of mock-PerfO activities with summary features enables a strategy for detecting disease-specific signals relative to controls, as well as for monitoring intra-subject treatment responses. Further testing is needed to evaluate the Earable device in clinical populations and clinical development settings.
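For reference, the F1 score behind the ">0.9" figures reported for the talking, chewing, and swallowing tasks is the harmonic mean of precision and recall. A minimal computation (illustrative only, not the study's code; the counts are made up):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 = harmonic mean of precision and recall,
    equivalently 2*TP / (2*TP + FP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 45 true positives, 3 false positives, 5 false negatives:
print(round(f1_score(45, 3, 5), 3))
```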
Affiliation(s)
- Matthew F. Wipperman: Precision Medicine, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America; Early Clinical Development & Experimental Sciences, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Katrina F. Mateo: Clinical Outcomes Assessment and Patient Innovation, Global Clinical Trial Services, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Xuefang Wu: Clinical Outcomes Assessment and Patient Innovation, Global Clinical Trial Services, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Yiziying Chen: Biostatistics and Data Management, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Oren Levy: Early Clinical Development & Experimental Sciences, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Andreja Avbersek: Early Clinical Development & Experimental Sciences, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Sara C. Hamon: Precision Medicine, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America; Early Clinical Development & Experimental Sciences, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Tam Vu: Earable Inc., Boulder, Colorado, United States of America
- Rinol Alaj: Clinical Outcomes Assessment and Patient Innovation, Global Clinical Trial Services, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
- Olivier Harari: Early Clinical Development & Experimental Sciences, Regeneron Pharmaceuticals Inc, Tarrytown, New York, United States of America
7. Wang S, Lafaye C, Saubade M, Besson C, Margarit-Taule JM, Gremeaux V, Liu SC. Predicting hydration status using machine learning models from physiological and sweat biomarkers during endurance exercise: a single case study. IEEE J Biomed Health Inform 2022;26:4725-4732. PMID: 35749337; DOI: 10.1109/jbhi.2022.3186150.
Abstract
Improper hydration routines can reduce athletic performance. Recent studies show that data from noninvasive biomarker recordings can help to evaluate the hydration status of subjects during endurance exercise. These studies are usually carried out on multiple subjects. In this work, we present the first study on predicting hydration status using machine learning models from single-subject experiments, which involve 32 exercise sessions of constant moderate intensity performed with and without fluid intake. During exercise, we measured four noninvasive physiological and sweat biomarkers including heart rate, core temperature, sweat sodium concentration, and whole-body sweat rate. Sweat sodium concentration was measured from six body regions using absorbent patches. We used three machine learning models to determine the percentage of body weight loss as an indicator of dehydration with these biomarkers and compared the prediction accuracy. The results on this single subject show that these models gave similar mean absolute errors, while in general the nonlinear models slightly outperformed the linear model in most of the experiments. The prediction accuracy of using the whole-body sweat rate or heart rate was higher than using core temperature or sweat sodium concentration. In addition, the model trained on the sweat sodium concentration collected from the arms gave slightly better accuracy than from the other five body regions. This exploratory work paves the way for the use of these machine learning models to develop personalized health monitoring together with emerging, noninvasive wearable sensor devices.
8. VitalDB, a high-fidelity multi-parameter vital signs database in surgical patients. Sci Data 2022;9:279. PMID: 35676300; PMCID: PMC9178032; DOI: 10.1038/s41597-022-01411-5.
Abstract
In modern anesthesia, multiple medical devices are used simultaneously to comprehensively monitor real-time vital signs, optimize patient care, and improve surgical outcomes. However, interpreting the dynamic changes of time-series biosignals and their correlations is a difficult task even for experienced anesthesiologists. Recent machine learning technologies have shown promising results in biosignal analysis, but research and development in this area is relatively slow due to the lack of biosignal datasets suitable for machine learning. VitalDB (Vital Signs DataBase) is an open dataset created specifically to facilitate machine learning studies on monitoring vital signs in surgical patients. It contains high-resolution multi-parameter data from 6,388 cases, including 486,451 waveform and numeric data tracks of 196 intraoperative monitoring parameters, 73 perioperative clinical parameters, and 34 time-series laboratory result parameters. All data are stored in the public cloud after anonymization, and the dataset can be freely accessed and analyzed via application programming interfaces and a Python library. The VitalDB public dataset is expected to be a valuable resource for biosignal research and development.
Measurement(s): vital signs of patients during surgery; perioperative patient information
Technology Type(s): vital signs measurement; electronic medical record
Factor Type(s): numeric and waveform vital signs data acquired from multiple patient monitors; perioperative patient information acquired from the electronic medical record system
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: hospital
Sample Characteristic - Location: South Korea
9. Egger J, Gsaxner C, Pepe A, Pomykala KL, Jonske F, Kurz M, Li J, Kleesiek J. Medical deep learning: a systematic meta-review. Comput Methods Programs Biomed 2022;221:106874. PMID: 35588660; DOI: 10.1016/j.cmpb.2022.106874.
Abstract
Deep learning has remarkably impacted several scientific disciplines over the last few years. In image processing and analysis, for example, deep learning algorithms have outperformed other state-of-the-art methods, and deep learning has delivered leading results in tasks like autonomous driving, outclassing previous attempts. There are even instances where deep learning has outperformed humans, for example in object recognition and gaming. Deep learning is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data are collected not only in clinical centers, like hospitals and private practices, but also by mobile healthcare apps and websites. The abundance of collected patient data and the recent growth of the deep learning field have led to a large increase in research efforts. In Q2/2020, the search engine PubMed already returned over 11,000 results for the search term 'deep learning', around 90% of them from the preceding three years. However, even though PubMed is the largest search engine in the medical field, it does not cover all medical publications; a complete overview of 'medical deep learning' is therefore almost impossible to obtain, and acquiring a full overview of even its sub-fields is increasingly difficult. Nevertheless, several review and survey articles about medical deep learning have been published within the last few years, focusing in general on specific medical scenarios, such as the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys.
Affiliation(s)
- Jan Egger: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Christina Gsaxner: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Antonio Pepe: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Kelsey L Pomykala: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Frederic Jonske: Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Manuel Kurz: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria
- Jianning Li: Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010 Graz, Styria, Austria; Computer Algorithms for Medicine Laboratory, Graz, Styria, Austria; Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany
- Jens Kleesiek: Institute for AI in Medicine (IKIM), University Medicine Essen, Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany; German Cancer Consortium (DKTK), Partner Site Essen, Hufelandstraße 55, 45147 Essen, Germany
10. Deep learning for predicting respiratory rate from biosignals. Comput Biol Med 2022;144:105338. DOI: 10.1016/j.compbiomed.2022.105338.
11. Miao Y, Liu F, Hou T, Liu Y. Virtifier: a deep learning-based identifier for viral sequences from metagenomes. Bioinformatics 2022;38:1216-1222. PMID: 34908121; DOI: 10.1093/bioinformatics/btab845.
Abstract
MOTIVATION Viruses, the most abundant biological entities on Earth, are important components of microbial communities, and, as major human pathogens, they are responsible for human mortality and morbidity. The identification of viral sequences from metagenomes is critical for viral analysis. Because next-generation sequencing generates massive quantities of short sequences, most methods encode nucleotide sequences with discrete, sparse one-hot vectors, which are usually ineffective for viral identification. RESULTS In this article, we propose Virtifier, a deep learning-based viral identifier for sequences from metagenomic data. It includes a meaningful nucleotide sequence encoding method named Seq2Vec and a viral sequence predictor built on an attention-based long short-term memory (LSTM) network. By utilizing a fully trained embedding matrix to encode codons, Seq2Vec can efficiently extract the relationships among the codons in a nucleotide sequence. Combined with an attention layer, the LSTM network can further analyze the codon relationships and select the parts that contribute to the final features. Experimental results on three datasets show that Virtifier can accurately identify short viral sequences (<500 bp) from metagenomes, surpassing three widely used methods: VirFinder, DeepVirFinder, and PPR-Meta. Meanwhile, Virtifier achieved comparable performance at longer lengths (>5000 bp). AVAILABILITY AND IMPLEMENTATION A Python implementation of Virtifier and the Python code developed for this study are available on GitHub at https://github.com/crazyinter/Seq2Vec. The RefSeq genomes used in this article are available in VirFinder at https://dx.doi.org/10.1186/s40168-017-0283-5. The CAMI Challenge Dataset 3 CAMI_high dataset is available in CAMI at https://data.cami-challenge.org/participate. The real human gut metagenomes are available at https://dx.doi.org/10.1101/gr.142315.112.
SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
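The Seq2Vec encoding described above starts from codon-level tokens rather than one-hot bases. As a rough illustration only (the function name, vocabulary handling, and stride are assumptions, not Virtifier's actual implementation), a codon tokenizer feeding an embedding layer could look like:

```python
from itertools import product

# All 64 codons over the unambiguous DNA alphabet; how Virtifier handles
# ambiguous bases (N, etc.) is not specified here, so they are simply skipped.
CODONS = ["".join(p) for p in product("ACGT", repeat=3)]
CODON_TO_ID = {c: i for i, c in enumerate(CODONS)}

def codon_ids(seq, stride=1):
    """Tokenize a DNA string into integer codon indices with the given stride.

    The resulting index sequence is what a trained embedding matrix would
    map to dense codon vectors before the attention-based LSTM.
    """
    seq = seq.upper()
    return [CODON_TO_ID[seq[i:i + 3]]
            for i in range(0, len(seq) - 2, stride)
            if seq[i:i + 3] in CODON_TO_ID]
```

With stride 3 this yields non-overlapping codons in a single reading frame; stride 1 keeps all three reading frames.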
Affiliation(s)
- Yan Miao
- College of Communication Engineering, Jilin University, Changchun 130022, China
- Fu Liu
- College of Communication Engineering, Jilin University, Changchun 130022, China
- Tao Hou
- College of Communication Engineering, Jilin University, Changchun 130022, China
- Yun Liu
- College of Communication Engineering, Jilin University, Changchun 130022, China
12. Dybowski R. Emergence of Deep Machine Learning in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_26] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/19/2022]
13. Lin YD, Tan YK, Tian B. A novel approach for decomposition of biomedical signals in different applications based on data-adaptive Gaussian average filtering. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/02/2022]
14. Uluer P, Kose H, Gumuslu E, Barkana DE. Experience with an Affective Robot Assistant for Children with Hearing Disabilities. Int J Soc Robot 2021; 15:643-660. [PMID: 34804256 PMCID: PMC8594648 DOI: 10.1007/s12369-021-00830-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Accepted: 09/10/2021] [Indexed: 01/10/2023]
Abstract
This study presents an assistive robotic system enhanced with emotion recognition capabilities for children with hearing disabilities. The system is designed and developed for the audiometry tests and rehabilitation of children in a clinical setting and includes a social humanoid robot (Pepper), an interactive interface, gamified audiometry tests, a sensory setup, and a machine/deep learning-based emotion recognition module. Three scenarios, involving a conventional setup, a tablet setup, and a robot+tablet setup, are evaluated with 16 children having a cochlear implant or hearing aid. Several machine learning techniques and deep learning models are used to classify the three test setups and the emotions (pleasant, neutral, unpleasant) of the children from physiological signals recorded by an E4 wristband. The results show that the signals collected during the tests can be separated successfully and that the positive and negative emotions of children can be better distinguished when they interact with the robot than in the other two setups. In addition, the children's objective and subjective evaluations, as well as their impressions of the robot and its emotional behaviors, are analyzed and discussed extensively.
Affiliation(s)
- Pinar Uluer
- Department of Computer Engineering, Galatasaray University, Istanbul, Turkey; Department of AI and Data Engineering, Istanbul Technical University, Istanbul, Turkey
- Hatice Kose
- Department of AI and Data Engineering, Istanbul Technical University, Istanbul, Turkey
- Elif Gumuslu
- Department of Electrical and Electronics Engineering, Yeditepe University, Istanbul, Turkey
- Duygun Erol Barkana
- Department of Electrical and Electronics Engineering, Yeditepe University, Istanbul, Turkey
15. Explaining clinical decision support systems in medical imaging using cycle-consistent activation maximization. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.081] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 12/19/2022]
16. Buongiorno D, Cascarano GD, De Feudis I, Brunetti A, Carnimeo L, Dimauro G, Bevilacqua V. Deep learning for processing electromyographic signals: A taxonomy-based survey. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.139] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Indexed: 11/28/2022]
17. Occlusion-Based Explanations in Deep Recurrent Models for Biomedical Signals. Entropy 2021; 23:e23081064. [PMID: 34441204 PMCID: PMC8394492 DOI: 10.3390/e23081064] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 06/30/2021] [Revised: 08/04/2021] [Accepted: 08/12/2021] [Indexed: 11/17/2022]
Abstract
The biomedical field is characterized by an ever-increasing production of sequential data, which often come in the form of biosignals capturing the time-evolution of physiological processes, such as blood pressure and brain activity. This has motivated a large body of research on developing machine learning techniques for the predictive analysis of such biosignals. Unfortunately, in high-stakes decision making, such as clinical diagnosis, the opacity of machine learning models becomes a crucial obstacle to be addressed in order to increase the trust in and adoption of AI technology. In this paper, we propose a model-agnostic explanation method, based on occlusion, that quantifies the input's influence on the model predictions. We specifically target problems involving the predictive analysis of time-series data and the models typically used for data of such nature, i.e., recurrent neural networks. Our approach can provide two different kinds of explanations: one suitable for technical experts, who need to verify the quality and correctness of machine learning models, and one suited to physicians, who need to understand the rationale underlying the prediction to make informed decisions. Extensive experiments on different physiological data demonstrate the effectiveness of our approach in both classification and regression tasks.
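The occlusion idea above can be sketched in a few lines: repeatedly mask a window of the input signal and record how much the model's output moves. This is a generic, minimal sketch (the function name, zero fill value, and scalar-output assumption are illustrative; the paper's actual method targets recurrent models and produces two kinds of explanations):

```python
def occlusion_importance(model, x, window=5, fill=0.0):
    """Score each position of a 1-D signal by the absolute change in the
    model's (scalar) prediction when a window starting there is occluded."""
    base = model(x)
    scores = []
    for start in range(len(x) - window + 1):
        occluded = list(x)
        occluded[start:start + window] = [fill] * window  # mask one time window
        scores.append(abs(model(occluded) - base))
    return scores
```

Windows whose occlusion changes the prediction most mark the most influential regions of the input.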
18. Zhang Z, Li G, Xu Y, Tang X. Application of Artificial Intelligence in the MRI Classification Task of Human Brain Neurological and Psychiatric Diseases: A Scoping Review. Diagnostics (Basel) 2021; 11:1402. [PMID: 34441336 PMCID: PMC8392727 DOI: 10.3390/diagnostics11081402] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 06/04/2021] [Revised: 07/21/2021] [Accepted: 07/21/2021] [Indexed: 12/12/2022]
Abstract
Artificial intelligence (AI) for medical imaging is a technology with great potential. An in-depth understanding of the principles and applications of magnetic resonance imaging (MRI), machine learning (ML), and deep learning (DL) is fundamental for developing AI-based algorithms that can meet the requirements of clinical diagnosis and have excellent quality and efficiency. Moreover, a more comprehensive understanding of applications and opportunities would help to implement AI-based methods in an ethical and sustainable manner. This review first summarizes recent research advances in ML and DL techniques for classifying human brain magnetic resonance images. Then, the application of ML and DL methods to six typical neurological and psychiatric diseases is summarized, including Alzheimer's disease (AD), Parkinson's disease (PD), major depressive disorder (MDD), schizophrenia (SCZ), attention-deficit/hyperactivity disorder (ADHD), and autism spectrum disorder (ASD). Finally, the limitations of the existing research are discussed, and possible future research directions are proposed.
Affiliation(s)
- Zhao Zhang
- 715-3 Teaching Building No.5, Department of Biomedical Engineering, School of Life Sciences, Beijing Institute of Technology, 5 South Zhongguancun Road, Haidian District, Beijing 100081, China
- Guangfei Li
- 715-3 Teaching Building No.5, Department of Biomedical Engineering, School of Life Sciences, Beijing Institute of Technology, 5 South Zhongguancun Road, Haidian District, Beijing 100081, China
- Yong Xu
- Department of Cardiology, Chinese PLA General Hospital, Beijing 100853, China;
- Xiaoying Tang
- 715-3 Teaching Building No.5, Department of Biomedical Engineering, School of Life Sciences, Beijing Institute of Technology, 5 South Zhongguancun Road, Haidian District, Beijing 100081, China
19. Pröll SM, Tappeiner E, Hofbauer S, Kolbitsch C, Schubert R, Fritscher KD. Heart rate estimation from ballistocardiographic signals using deep learning. Physiol Meas 2021; 42. [PMID: 34198282 DOI: 10.1088/1361-6579/ac10aa] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 01/30/2021] [Accepted: 06/24/2021] [Indexed: 11/11/2022]
Abstract
Objective. Ballistocardiography (BCG) is an unobtrusive approach for cost-effective and patient-friendly health monitoring. In this work, deep learning methods are used for heart rate estimation from BCG signals and are compared against five digital signal processing methods found in the literature. Approach. The models are evaluated on a dataset featuring BCG recordings from 42 patients, acquired with a pneumatic system. Several deep learning architectures, including convolutional, recurrent, and a combination of both, are investigated. Besides model performance, we are also concerned with model size and specifically investigate less complex, smaller networks. Main results. Deep learning models outperform traditional methods by a large margin. Across 14 patients in a held-out testing set, an architecture with stacked convolutional and recurrent layers achieves an average mean absolute error (MAE) of 2.07 beats min-1, whereas the best-performing traditional method reaches 4.24 beats min-1. Besides smaller errors, deep learning models show more consistent performance across different patients, indicating the ability to better deal with inter-patient variability, a prevalent issue in BCG analysis. In addition, we develop a smaller version of the best-performing architecture that features only 8283 parameters, yet still achieves an average MAE of 2.32 beats min-1 on the testing set. Significance. This is the first study that applies and compares different deep learning architectures for heart rate estimation from bed-based BCG signals. Compared to signal processing algorithms, deep learning models show dramatically smaller errors and more consistent results across individuals. The results show that using smaller models instead of excessively large ones can lead to sufficient performance for specific biosignal processing applications. Additionally, we investigate the use of fully convolutional networks for 1-D signal processing, which is rarely applied in the literature.
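Fully convolutional processing of 1-D biosignals, as investigated above, is built from the basic 1-D convolution. A toy, dependency-free sketch of that building block (not the paper's actual 8283-parameter network; kernel values and stride here are placeholders the reader supplies):

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (cross-correlation form, as in deep
    learning frameworks): slide the kernel over the signal and take dot
    products at each stride step."""
    k = len(kernel)
    return [sum(s * w for s, w in zip(signal[i:i + k], kernel))
            for i in range(0, len(signal) - k + 1, stride)]
```

Stacking such layers with nonlinearities, and adding recurrent layers on top, yields the convolutional-recurrent architectures compared in the study.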
Affiliation(s)
- Samuel M Pröll
- Institute for Biomedical Image Analysis, UMIT-Private University for Health Sciences, Medical Informatics and Technology, A-6060 Hall in Tirol, Austria
- Elias Tappeiner
- Institute for Biomedical Image Analysis, UMIT-Private University for Health Sciences, Medical Informatics and Technology, A-6060 Hall in Tirol, Austria
- Stefan Hofbauer
- Department of Anaesthesia and Intensive Care Medicine, Medical University Innsbruck (MUI), A-6020 Innsbruck, Austria
- Christian Kolbitsch
- Department of Anaesthesia and Intensive Care Medicine, Medical University Innsbruck (MUI), A-6020 Innsbruck, Austria
- Rainer Schubert
- Institute for Biomedical Image Analysis, UMIT-Private University for Health Sciences, Medical Informatics and Technology, A-6060 Hall in Tirol, Austria
- Karl D Fritscher
- Institute for Biomedical Image Analysis, UMIT-Private University for Health Sciences, Medical Informatics and Technology, A-6060 Hall in Tirol, Austria
20. Ganapathy N, Veeranki YR, Kumar H, Swaminathan R. Emotion Recognition Using Electrodermal Activity Signals and Multiscale Deep Convolutional Neural Network. J Med Syst 2021; 45:49. [PMID: 33660087 DOI: 10.1007/s10916-020-01676-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 03/22/2019] [Accepted: 11/10/2020] [Indexed: 11/30/2022]
Abstract
In this work, an attempt has been made to classify emotional states using electrodermal activity (EDA) signals and multiscale convolutional neural networks. EDA signals are taken from the publicly available "A Dataset for Emotion Analysis using Physiological Signals" (DEAP) database. These signals are decomposed into multiple scales using the coarse-grained method, and the multiscale signals are applied to the Multiscale Convolutional Neural Network (MSCNN) to automatically learn robust features directly from the raw signals. Experiments are performed with the MSCNN approach to evaluate the hypotheses that (i) electrodermal activity signals improve classification, and (ii) multiscale learning captures robust complementary features at different scales. Results show that the proposed approach is able to differentiate various emotional states, yielding classification accuracies of 69.33% and 71.43% for valence and arousal states, respectively. It is observed that the number of layers and the signal length are the determinants of classifier performance. The proposed approach outperforms a single-layer convolutional neural network and provides end-to-end learning and classification of emotional states without additional signal processing. Thus, the proposed method could be a useful tool for assessing differences in emotional states for automated decision making.
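The coarse-graining step mentioned above has a standard definition in multiscale analysis: at scale s, the signal is averaged over non-overlapping windows of s consecutive samples. A minimal sketch under that standard definition (the paper's exact preprocessing may differ in detail):

```python
def coarse_grain(signal, scale):
    """Average non-overlapping windows of `scale` consecutive samples;
    trailing samples that do not fill a window are discarded."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def multiscale(signal, max_scale):
    """Build the multiscale representation fed to the MSCNN: one
    coarse-grained series per scale from 1 (the raw signal) to max_scale."""
    return [coarse_grain(signal, s) for s in range(1, max_scale + 1)]
```

Each scale is then processed by its own convolutional branch so the network sees both fast and slow dynamics of the EDA signal.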
Affiliation(s)
- Nagarajan Ganapathy
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India.
- Yedukondala Rao Veeranki
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
- Himanshu Kumar
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
- Ramakrishnan Swaminathan
- Biomedical Engineering Group, Department of Applied Mechanics, Indian Institute of Technology Madras, Chennai, India
21. Arora R, Raman B, Nayyar K, Awasthi R. Automated skin lesion segmentation using attention-based deep convolutional neural network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102358] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Indexed: 02/01/2023]
22. Zamanian H, Mostaar A, Azadeh P, Ahmadi M. Implementation of Combinational Deep Learning Algorithm for Non-alcoholic Fatty Liver Classification in Ultrasound Images. J Biomed Phys Eng 2021; 11:73-84. [PMID: 33564642 PMCID: PMC7859380 DOI: 10.31661/jbpe.v0i0.2009-1180] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Received: 09/12/2020] [Accepted: 10/23/2020] [Indexed: 12/12/2022]
Abstract
Background Fatty liver is one of the most common liver diseases and is generally observed in obese patients. Results from a variety of examinations and imaging methods can help identify and evaluate affected patients. Objective The aim of this study is to present a combined neural-network-based algorithm for classifying ultrasound images of fatty liver. Material and Methods This experimental, diagnostic study focuses on classifying ultrasound images acquired from 55 patients with fatty liver. We used the pretrained convolutional neural networks Inception-ResNetv2, GoogleNet, AlexNet, and ResNet101 to extract features from the images; after combining the resulting features, we applied a support vector machine (SVM) to classify the liver images. The results were then compared with those obtained by running each algorithm independently. Results The area under the receiver operating characteristic curve (AUC) for the combined network was 0.9999, better than that of any of the individual algorithms. The accuracy of the proposed network was 0.9864, which appears acceptable for clinical application. Conclusion The proposed network can classify ultrasound images of the liver as normal or fatty with high accuracy. Besides its high AUC compared with other methods, the approach is independent of user or expert interference.
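The AUC figures quoted above can be computed rank-wise as the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one, counting ties as half. A generic sketch of that computation, not tied to the paper's data:

```python
def auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive sample receives the higher score (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.9999, as reported for the combined network, means nearly every fatty-liver image was scored above every normal image.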
Affiliation(s)
- H Zamanian
- MSc, Department of Medical Physics and Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- A Mostaar
- PhD, Department of Medical Physics and Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- PhD, Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran
- P Azadeh
- MD, Department of Radiation Oncology, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- M Ahmadi
- PhD, Department of Medical Physics and Biomedical Engineering, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
23. Biswas U, Goh CH, Ooi SY, Lim E, Redmond SJ, Lovell NH. Telemedicine systems to manage chronic disease. Digit Health 2021. [DOI: 10.1016/b978-0-12-818914-6.00020-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/16/2022]
24. Emergence of Deep Machine Learning in Medicine. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_26-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/26/2022]
25. Goh CH, Tan LK, Lovell NH, Ng SC, Tan MP, Lim E. Robust PPG motion artifact detection using a 1-D convolution neural network. Comput Methods Programs Biomed 2020; 196:105596. [PMID: 32580054 DOI: 10.1016/j.cmpb.2020.105596] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Received: 10/13/2019] [Accepted: 06/01/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVES Continuous monitoring of physiological parameters such as photoplethysmography (PPG) has attracted increased interest due to advances in wearable sensors. However, PPG recordings are susceptible to various artifacts, which reduce the reliability of PPG-derived parameters such as oxygen saturation, heart rate, blood pressure, and respiration. This paper proposes a one-dimensional convolutional neural network (1-D-CNN) to classify five-second PPG segments into clean or artifact-affected segments, avoiding data-dependent pulse segmentation techniques and heavy manual feature engineering. METHODS Continuous raw PPG waveforms were blindly divided into segments of equal length (5 s) without leveraging any pulse location information and were normalized with Z-score normalization. A 1-D-CNN was designed to automatically learn the intrinsic features of the PPG waveform and perform the required classification. Several training hyperparameters (initial learning rate and gradient threshold) were varied to investigate their effect on network performance. The proposed network was trained and validated with 30 subjects, and then tested with eight subjects, from our local dataset. Moreover, two independent datasets downloaded from the PhysioNet MIMIC II database were used to evaluate the robustness of the proposed network. RESULTS A 13-layer 1-D-CNN model was designed. On our local study dataset, the proposed network achieved a testing accuracy of 94.9%. Classification on the two independent datasets also achieved satisfactory accuracies of 93.8% and 86.7%, respectively. Our model achieved performance comparable to most reported works, with the potential for good generalization, as the proposed network was evaluated on multiple cohorts (overall accuracy of 94.5%). CONCLUSION This paper demonstrated the feasibility and effectiveness of applying blind signal processing and deep learning techniques to PPG motion artifact detection, whereby manual feature thresholding was avoided while a high generalization ability was achieved.
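The blind segmentation and Z-score normalization described in the METHODS section can be sketched as follows; the 100 Hz sampling rate is an assumed example value, not taken from the paper:

```python
import statistics

def segment_and_normalize(ppg, fs=100, seconds=5):
    """Blindly cut a raw PPG stream into fixed-length windows (no pulse
    detection) and Z-score normalize each window independently."""
    seg_len = fs * seconds
    segments = [ppg[i:i + seg_len]
                for i in range(0, len(ppg) - seg_len + 1, seg_len)]
    out = []
    for seg in segments:
        mu = statistics.fmean(seg)
        sd = statistics.pstdev(seg) or 1.0  # guard flat segments
        out.append([(v - mu) / sd for v in seg])
    return out
```

Each normalized window would then be fed to the 1-D-CNN for the clean-versus-artifact decision.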
Affiliation(s)
- Choon-Hian Goh
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia; Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW Sydney, New South Wales 2052, Australia; Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, 43000 Kajang, Selangor Darul Ehsan, Malaysia
- Li Kuo Tan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Nigel H Lovell
- Graduate School of Biomedical Engineering, Faculty of Engineering, UNSW Sydney, New South Wales 2052, Australia
- Siew-Cheok Ng
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia
- Maw Pin Tan
- Department of Medicine, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia; Department of Medical Sciences, Faculty of Healthcare and Medical Sciences, Sunway University, 47500 Bandar Sunway, Malaysia
- Einly Lim
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia.
26. Tan C, Šarlija M, Kasabov N. Spiking Neural Networks: Background, Recent Development and the NeuCube Architecture. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10322-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Indexed: 10/23/2022]
27. Wang J, Warnecke JM, Haghi M, Deserno TM. Unobtrusive Health Monitoring in Private Spaces: The Smart Vehicle. Sensors (Basel) 2020; 20:E2442. [PMID: 32344815 PMCID: PMC7249030 DOI: 10.3390/s20092442] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Received: 03/14/2020] [Revised: 04/22/2020] [Accepted: 04/23/2020] [Indexed: 11/18/2022]
Abstract
Unobtrusive in-vehicle health monitoring has the potential to use driving time to perform regular medical check-ups. This work intends to provide a guide to currently proposed sensor systems for in-vehicle monitoring and to answer, in particular, the questions: (1) Which sensors are suitable for in-vehicle data collection? (2) Where should the sensors be placed? (3) Which biosignals or vital signs can be monitored in the vehicle? (4) Which purposes can be supported with the health data? We systematically reviewed the literature and summarized up-to-date research on leveraging sensor technology for unobtrusive in-vehicle health monitoring. PubMed, IEEE Xplore, and Scopus delivered 959 articles. We first screened titles and abstracts for relevance and thereafter assessed the entire articles. Finally, 46 papers were included and analyzed. A guide is provided to the currently proposed sensor systems, through which potential sensor information can be derived from the biomedical data needed for respective purposes; the suggested locations for the corresponding sensors are also linked. Fifteen types of sensors were found. Driver-centered locations, such as the steering wheel, car seat, and windscreen, are frequently used for mounting unobtrusive sensors, through which typical biosignals like heart rate and respiration rate are measured. To date, most research focuses on sensor technology development, and most application-driven research aims at driving safety. Health-oriented research on the medical use of sensor-derived physiological parameters is still of interest.
Affiliation(s)
- Ju Wang
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, D-38106 Braunschweig, Lower Saxony, Germany
28. Ntalianis V, Fakotakis ND, Nousias S, Lalos AS, Birbas M, Zacharaki EI, Moustakas K. Deep CNN Sparse Coding for Real Time Inhaler Sounds Classification. Sensors (Basel) 2020; 20:2363. [PMID: 32326271 PMCID: PMC7219332 DOI: 10.3390/s20082363] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Received: 01/31/2020] [Revised: 04/14/2020] [Accepted: 04/16/2020] [Indexed: 12/20/2022]
Abstract
Effective management of chronic constrictive pulmonary conditions lies in proper and timely administration of medication. As a series of studies indicates, medication adherence can be effectively monitored by identifying the actions performed by patients during inhaler usage. This study focuses on the recognition of inhaler audio events during usage of pressurized metered-dose inhalers (pMDI). Aiming at real-time performance, we investigate deep sparse coding techniques, including convolutional filter pruning, scalar pruning, and vector quantization, for different convolutional neural network (CNN) architectures. The recognition performance was assessed on three healthy subjects following both within-subject and across-subject modeling strategies. The selected CNN architecture classified drug actuation, inhalation, and exhalation events with 100%, 92.6%, and 97.9% accuracy, respectively, when assessed in a leave-one-subject-out cross-validation setting. Moreover, sparse coding of the same architecture with an increasing compression rate from 1 to 7 resulted in only a small decrease in classification accuracy (from 95.7% to 94.5%), obtained by random (subject-agnostic) cross-validation. A more thorough assessment on a larger dataset, including recordings of subjects with multiple respiratory disease manifestations, is still required to better evaluate the method's generalization ability and robustness.
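Of the sparse-coding techniques listed above, magnitude-based filter pruning is the simplest to illustrate: filters with the smallest L1 norm are dropped. A toy sketch (the flat weight layout and keep ratio are illustrative assumptions; the paper's pruning operates on trained CNN layers):

```python
def prune_filters(filters, keep_ratio=0.5):
    """filters: list of per-filter weight lists. Keep the `keep_ratio`
    fraction of filters with the largest L1 norm; return kept filters
    and their original indices (in ascending order)."""
    norms = [sum(abs(w) for w in f) for f in filters]
    n_keep = max(1, int(len(filters) * keep_ratio))
    ranked = sorted(range(len(filters)), key=lambda i: -norms[i])
    keep = sorted(ranked[:n_keep])
    return [filters[i] for i in keep], keep
```

Raising the compression rate corresponds to shrinking `keep_ratio`, trading a little accuracy for real-time inference, as the reported 95.7% to 94.5% drop suggests.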
Affiliation(s)
- Vaggelis Ntalianis
- Department of Electrical & Computer Engineering, University of Patras, 26504 Patras, Greece
- Stavros Nousias
- Department of Electrical & Computer Engineering, University of Patras, 26504 Patras, Greece
- Industrial Systems Institute, Athena Research Center, 26504 Patras, Greece
- Aris S. Lalos
- Industrial Systems Institute, Athena Research Center, 26504 Patras, Greece
- Michael Birbas
- Department of Electrical & Computer Engineering, University of Patras, 26504 Patras, Greece
- Evangelia I. Zacharaki
- Department of Electrical & Computer Engineering, University of Patras, 26504 Patras, Greece
- Konstantinos Moustakas
- Department of Electrical & Computer Engineering, University of Patras, 26504 Patras, Greece
29. Rim B, Sung NJ, Min S, Hong M. Deep Learning in Physiological Signal Data: A Survey. Sensors (Basel) 2020; 20:E969. [PMID: 32054042 PMCID: PMC7071412 DOI: 10.3390/s20040969] [Citation(s) in RCA: 66] [Impact Index Per Article: 16.5] [Received: 01/07/2020] [Revised: 01/31/2020] [Accepted: 02/09/2020] [Indexed: 12/11/2022]
Abstract
Deep Learning (DL), a successful and promising approach for discriminative and generative tasks, has recently proved its high potential in 2D medical imaging analysis; however, physiological data in the form of 1D signals have yet to benefit from this novel approach for the desired medical tasks. Therefore, in this paper we survey the latest scientific research on deep learning in physiological signal data such as electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), and electrooculogram (EOG). We found 147 papers published between January 2018 and October 2019 inclusive from various journals and publishers. The objective of this paper is to conduct a detailed study to comprehend, categorize, and compare the key parameters of the deep-learning approaches that have been used in physiological signal analysis for various medical applications. The key parameters we review are the input data type, deep-learning task, deep-learning model, training architecture, and dataset sources; these are the main parameters that affect system performance. We taxonomize the research works using deep-learning methods in physiological signal analysis from: (1) a physiological-signal-data perspective, such as data modality and medical application; and (2) a deep-learning-concept perspective, such as training architecture and dataset sources.
Affiliation(s)
- Beanbonyka Rim, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Nak-Jun Sung, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Sedong Min, Department of Medical IT Engineering, Soonchunhyang University, Asan 31538, Korea
- Min Hong, Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea
30
Fernandez-Maloigne C, Guillevin R. L’intelligence artificielle au service de l’imagerie et de la santé des femmes [Artificial intelligence in the service of imaging and women’s health]. IMAGERIE DE LA FEMME 2019. [DOI: 10.1016/j.femme.2019.09.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
31
Exploring Deep Physiological Models for Nociceptive Pain Recognition. SENSORS 2019; 19:s19204503. [PMID: 31627305 PMCID: PMC6833075 DOI: 10.3390/s19204503] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Revised: 10/07/2019] [Accepted: 10/14/2019] [Indexed: 12/19/2022]
Abstract
Standard feature engineering involves manually designing measurable descriptors based on some expert knowledge in the domain of application, followed by the selection of the best performing set of designed features for the subsequent optimisation of an inference model. Several studies have shown that this whole manual process can be efficiently replaced by deep learning approaches which are characterised by the integration of feature engineering, feature selection and inference model optimisation into a single learning process. In the following work, deep learning architectures are designed for the assessment of measurable physiological channels in order to perform an accurate classification of different levels of artificially induced nociceptive pain. In contrast to previous works, which rely on carefully designed sets of hand-crafted features, the current work aims at building competitive pain intensity inference models through autonomous feature learning, based on deep neural networks. The assessment of the designed deep learning architectures is based on the BioVid Heat Pain Database (Part A) and experimental validation demonstrates that the proposed uni-modal architecture for the electrodermal activity (EDA) and the deep fusion approaches significantly outperform previous methods reported in the literature, with respective average performances of 84.57% and 84.40% for the binary classification experiment consisting of the discrimination between the baseline and the pain tolerance level (T0 vs. T4) in a Leave-One-Subject-Out (LOSO) cross-validation evaluation setting. Moreover, the experimental results clearly show the relevance of the proposed approaches, which also offer more flexibility in the case of transfer learning due to the modular nature of deep neural networks.
32
Deep Learning and Big Data in Healthcare: A Double Review for Critical Beginners. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9112331] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In the last few years, there has been a growing expectation created about the analysis of large amounts of data often available in organizations, which has been both scrutinized by the academic world and successfully exploited by industry. Nowadays, two of the most common terms heard in scientific circles are Big Data and Deep Learning. In this double review, we aim to shed some light on these different, yet somehow related branches of Data Science, in order to understand their current state and future evolution within the healthcare area. We start by giving a simple description of the technical elements of Big Data technologies, as well as an overview of the elements of Deep Learning techniques, according to their usual description in the scientific literature. Then, we pay attention to the application fields that can be said to have delivered relevant real-world success stories, with emphasis on examples from large technology companies and financial institutions, among others. The academic effort that has been put into bringing these technologies to the healthcare sector is then summarized and analyzed from a twofold view as follows: first, the landscape of application examples is globally scrutinized according to the varying nature of medical data, including the data forms in electronic health recordings, medical time signals, and medical images; second, a specific application field is given special attention, in particular electrocardiographic signal analysis, where a number of works have been published in the last two years. A set of toy application examples is provided with the publicly-available MIMIC dataset, aiming to help beginners start with some principled, basic, and structured material and available code. Critical discussion is provided for current and forthcoming challenges on the use of both sets of techniques in our future healthcare.
33
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. [PMID: 30553609 DOI: 10.1016/j.zemedi.2018.11.002] [Citation(s) in RCA: 698] [Impact Index Per Article: 116.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2018] [Revised: 11/19/2018] [Accepted: 11/21/2018] [Indexed: 02/06/2023]
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting with and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway.
- Arvid Lundervold, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway.
34
Abstract
Objective:
To summarize significant contributions to sensor, signal, and imaging informatics literature published in 2017.
Methods:
PubMed® and Web of Science® were searched to identify the scientific publications published in 2017 that addressed sensors, signals, and imaging in medical informatics. Fifteen papers were selected by consensus as candidate best papers. Each candidate article was reviewed by section editors and at least two other external reviewers. The final selection of the four best papers was conducted by the editorial board of the International Medical Informatics Association (IMIA) Yearbook.
Results:
The selected papers of 2017 demonstrate the important scientific advances in management and analysis of sensor, signal, and imaging information.
Conclusion:
The growth of signal and imaging data and the increasing power of machine learning techniques have engendered new opportunities for research in medical informatics. This synopsis highlights cutting-edge contributions to the science of Sensor, Signal, and Imaging Informatics.
Affiliation(s)
- William Hsu, University of California, Los Angeles, California, USA
- Thomas M Deserno, Technische Universität Braunschweig und Medizinische Hochschule Hannover, Braunschweig, Germany
- Charles E Kahn, University of Pennsylvania, Philadelphia, Pennsylvania, USA