1
Sheneman L, Stephanopoulos G, Vasdekis AE. Deep learning classification of lipid droplets in quantitative phase images. PLoS One 2021;16:e0249196. [PMID: 33819277 PMCID: PMC8021159 DOI: 10.1371/journal.pone.0249196]
Abstract
We report the application of supervised machine learning to the automated classification of lipid droplets in label-free, quantitative-phase images. By comparing various machine learning methods commonly used in biomedical imaging and remote sensing, we found convolutional neural networks to outperform the others, both quantitatively and qualitatively. We describe our imaging approach, all implemented machine learning methods, and their computational efficiency, training requirements, and relative performance across multiple metrics. Overall, our results indicate that quantitative-phase imaging coupled with machine learning enables accurate lipid droplet classification in single living cells. As such, the present paradigm offers an excellent alternative to the more common fluorescence and Raman imaging modalities by enabling label-free operation, ultra-low phototoxicity, and deeper insight into the thermodynamics of metabolism in single cells.
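As a rough illustration of the kind of model involved, the sketch below defines a small convolutional network that classifies fixed-size patches of a quantitative-phase image as lipid droplet versus background; the patch size, layer widths, and two-class setup are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch (illustrative architecture, not the authors' network): a small CNN
# that classifies fixed-size quantitative-phase image patches as droplet vs. background.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, patch_size: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: lipid droplet / not droplet
        )

    def forward(self, x):  # x: (batch, 1, patch, patch) phase values
        return self.classifier(self.features(x))

model = PatchCNN()
logits = model(torch.randn(8, 1, 32, 32))  # dummy batch of phase-image patches
print(logits.shape)                        # torch.Size([8, 2])
```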
Affiliation(s)
- Luke Sheneman
- Northwest Knowledge Network, University of Idaho, Moscow, Idaho, United States of America
- Gregory Stephanopoulos
- Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Andreas E. Vasdekis
- Department of Physics, University of Idaho, Moscow, Idaho, United States of America
2
Stankoski S, Jordan M, Gjoreski H, Luštrek M. Smartwatch-Based Eating Detection: Data Selection for Machine Learning from Imbalanced Data with Imperfect Labels. Sensors (Basel) 2021;21:1902. [PMID: 33803121 PMCID: PMC7963188 DOI: 10.3390/s21051902]
Abstract
Understanding people's eating habits plays a crucial role in interventions promoting a healthy lifestyle. This requires objective measurement of the time at which a meal takes place, the duration of the meal, and what the individual eats. Smartwatches and similar wrist-worn devices are an emerging technology that offers the possibility of practical, real-time eating monitoring in an unobtrusive, accessible, and affordable way. To this end, we present a novel approach for the detection of eating segments with a wrist-worn device and a fusion of deep and classical machine learning. It integrates a novel data selection method to create the training dataset and a method that incorporates knowledge from raw and virtual sensor modalities for training with highly imbalanced datasets. The proposed method was evaluated using data from 12 subjects recorded in the wild, without any restrictions on the type of meal consumed, the cutlery used, or the location where the meal took place. The recordings consist of accelerometer and gyroscope data. The experiments show that our method for the detection of eating segments achieves a precision of 0.85, a recall of 0.81, and an F1-score of 0.82 in a person-independent manner. The results obtained in this study indicate that reliable eating detection from data recorded in the wild is possible with wrist-worn wearable sensors.
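As a rough illustration of how such a person-independent evaluation can be set up, the sketch below runs leave-one-subject-out cross-validation and reports precision, recall, and F1 on synthetic, imbalanced windowed features; the feature dimensions, classifier, and class ratio are assumptions, not the authors' pipeline.

```python
# Minimal sketch (synthetic data; not the authors' method): person-independent evaluation
# with leave-one-subject-out cross-validation and precision/recall/F1, which remain
# informative when eating windows are a small minority of the data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 24))            # per-window accelerometer/gyroscope features (illustrative)
y = (rng.random(1200) < 0.1).astype(int)   # ~10% eating windows -> imbalanced labels
groups = rng.integers(0, 12, size=1200)    # subject id per window (12 subjects)

y_true, y_pred = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced", random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    y_true.extend(y[test_idx])
    y_pred.extend(clf.predict(X[test_idx]))

p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```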
Affiliation(s)
- Simon Stankoski
- Department of Intelligent Systems, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
- Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia
- Marko Jordan
- Department of Intelligent Systems, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
- Hristijan Gjoreski
- Faculty of Electrical Engineering and Information Technologies, Ss. Cyril and Methodius University, 1000 Skopje, North Macedonia
- Mitja Luštrek
- Department of Intelligent Systems, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
- Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia
3
Bell BM, Alam R, Alshurafa N, Thomaz E, Mondol AS, de la Haye K, Stankovic JA, Lach J, Spruijt-Metz D. Automatic, wearable-based, in-field eating detection approaches for public health research: a scoping review. NPJ Digit Med 2020;3:38. [PMID: 32195373 PMCID: PMC7069988 DOI: 10.1038/s41746-020-0246-2]
Abstract
Dietary intake, eating behaviors, and context are important in chronic disease development, yet our ability to accurately assess these in research settings can be limited by biased traditional self-reporting tools. Objective measurement tools, specifically wearable sensors, present the opportunity to minimize the major limitations of self-reported eating measures by generating supplementary sensor data that can improve the validity of self-report data in naturalistic settings. This scoping review summarizes the current use of wearable devices/sensors that automatically detect eating-related activity in naturalistic research settings. Five databases were searched in December 2019, and 618 records were retrieved from the literature search. This scoping review included N = 40 studies (from 33 articles) that reported on one or more wearable sensors used to automatically detect eating activity in the field. The majority of studies (N = 26, 65%) used multi-sensor systems (incorporating more than one wearable sensor), and accelerometers were the most commonly utilized sensor (N = 25, 62.5%). All studies (N = 40, 100%) used either self-report or objective ground-truth methods to validate the inferred eating activity detected by the sensor(s). The most frequently reported evaluation metrics were accuracy (N = 12) and F1-score (N = 10). This scoping review highlights the current ability of wearable sensors to improve upon traditional eating assessment methods by passively detecting eating activity in naturalistic settings, over long periods of time, and with minimal user interaction. A key challenge in this field, the wide variation in eating outcome measures and evaluation metrics, demonstrates the need for a standardized means of comparing sensors and multi-sensor systems, and for multidisciplinary collaboration.
Affiliation(s)
- Brooke M. Bell
- Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089 USA
- Ridwan Alam
- Department of Electrical and Computer Engineering, School of Engineering and Applied Science, University of Virginia, Charlottesville, VA 22904 USA
- Nabil Alshurafa
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611 USA
- Department of Computer Science, McCormick School of Engineering, Northwestern University, Chicago, IL 60611 USA
- Edison Thomaz
- Department of Electrical and Computer Engineering, Cockrell School of Engineering, The University of Texas at Austin, Austin, TX 78712 USA
- Abu S. Mondol
- Department of Computer Science, School of Engineering and Applied Science, University of Virginia, Charlottesville, VA 22904 USA
- Kayla de la Haye
- Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089 USA
- John A. Stankovic
- Department of Computer Science, School of Engineering and Applied Science, University of Virginia, Charlottesville, VA 22904 USA
- John Lach
- Department of Electrical and Computer Engineering, School of Engineering and Applied Science, The George Washington University, Washington, DC 20052 USA
- Donna Spruijt-Metz
- Department of Preventive Medicine, Keck School of Medicine, University of Southern California, Los Angeles, CA 90089 USA
- Center for Economic and Social Research, Dornsife College of Letters, Arts, and Sciences, University of Southern California, Los Angeles, CA 90089 USA
- Department of Psychology, Dornsife College of Letters, Arts, and Sciences, University of Southern California, Los Angeles, CA 90089 USA
4
Heydarian H, Adam M, Burrows T, Collins C, Rollo ME. Assessing Eating Behaviour Using Upper Limb Mounted Motion Sensors: A Systematic Review. Nutrients 2019;11:E1168. [PMID: 31137677 PMCID: PMC6566929 DOI: 10.3390/nu11051168]
Abstract
Wearable motion tracking sensors are now widely used to monitor physical activity and have recently gained more attention in dietary monitoring research. The aim of this review is to synthesise research to date that utilises upper limb motion tracking sensors, either individually or in combination with other technologies (e.g., cameras, microphones), to objectively assess eating behaviour. Eleven electronic databases were searched in January 2019, and 653 distinct records were obtained. Including 10 studies found in backward and forward searches, a total of 69 studies met the inclusion criteria, 28 of which were published since 2017. Fifty studies were conducted exclusively in laboratory settings, 13 exclusively in free-living settings, and three in both settings. The most commonly used motion sensor was an accelerometer (n = 64) worn on the wrist (n = 60) or lower arm (n = 5), and in most studies (n = 45) accelerometers were used in combination with gyroscopes. Twenty-six studies used commercial-grade smartwatches or fitness bands, 11 used professional-grade devices, and 32 used standalone sensor chipsets. The most commonly used machine learning approaches were Support Vector Machine (SVM, n = 21), Random Forest (n = 19), Decision Tree (n = 16), and Hidden Markov Model (HMM, n = 10) algorithms, with Deep Learning (n = 5) appearing from 2017 onwards. While direct comparisons of the detection models are not valid because different datasets were used, the models that consider the sequential context of data across time, such as HMM and Deep Learning, show promising results for eating activity detection. We discuss opportunities for future research and emerging applications in the context of dietary assessment and monitoring.
Affiliation(s)
- Hamid Heydarian
- School of Electrical Engineering and Computing, Faculty of Engineering and Built Environment, The University of Newcastle, Callaghan, NSW 2308, Australia.
- Marc Adam
- School of Electrical Engineering and Computing, Faculty of Engineering and Built Environment, The University of Newcastle, Callaghan, NSW 2308, Australia.
- Priority Research Centre for Physical Activity and Nutrition, The University of Newcastle, Callaghan, NSW 2308, Australia.
- Tracy Burrows
- Priority Research Centre for Physical Activity and Nutrition, The University of Newcastle, Callaghan, NSW 2308, Australia.
- School of Health Sciences, Faculty of Health and Medicine, The University of Newcastle, Callaghan, NSW 2308, Australia.
- Clare Collins
- Priority Research Centre for Physical Activity and Nutrition, The University of Newcastle, Callaghan, NSW 2308, Australia.
- School of Health Sciences, Faculty of Health and Medicine, The University of Newcastle, Callaghan, NSW 2308, Australia.
- Megan E Rollo
- Priority Research Centre for Physical Activity and Nutrition, The University of Newcastle, Callaghan, NSW 2308, Australia.
- School of Health Sciences, Faculty of Health and Medicine, The University of Newcastle, Callaghan, NSW 2308, Australia.
5
Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment. Nutrients 2019;11:609. [PMID: 30871173 PMCID: PMC6472006 DOI: 10.3390/nu11030609]
Abstract
Video observation has been widely used to provide ground truth for wearable systems monitoring food intake in controlled laboratory conditions; however, it requires participants to be confined to a defined space. The purpose of this analysis was to test an alternative approach for establishing activity types and food intake bouts in a relatively unconstrained environment. The accuracy of a wearable system for assessing food intake was compared with that of video observation, and the inter-rater reliability of annotation was also evaluated. Forty participants were enrolled. Multiple participants were simultaneously monitored in a four-bedroom apartment using six cameras for three days each. Participants could leave the apartment overnight and for short periods during the day, during which time monitoring did not take place. A wearable system (Automatic Ingestion Monitor, AIM) was used to detect and monitor participants' food intake at a resolution of 30 s using a neural network classifier. Two different food intake detection models were tested, one trained on data from an earlier study and the other on current study data using leave-one-out cross-validation. Three trained human raters annotated the videos for major activities of daily living, including eating, drinking, resting, walking, and talking, and further annotated individual bites and chewing bouts for each food intake bout. For activity annotation, the raters achieved an average (± standard deviation) kappa value of 0.74 (±0.02), and for food intake annotation an average kappa (Light's kappa) of 0.82 (±0.04). Validity results showed that AIM food intake detection matched human video-annotated food intake with kappa values of 0.77 (±0.10) for activity annotation and 0.78 (±0.12) for food intake bout annotation. A one-way ANOVA suggested no statistically significant differences between the average eating durations estimated from the raters' annotations and from the AIM predictions (p = 0.19). These results suggest that the AIM provides accuracy comparable to video observation and may be used to reliably detect food intake in multi-day observational studies.
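For readers unfamiliar with the agreement statistics used here, the sketch below computes Cohen's kappa between two raters' per-epoch labels and between a rater and sensor-derived labels; Light's kappa, reported in the abstract, is the mean of the pairwise Cohen's kappa values across the three raters. The labels are made up for the example and are not study data.

```python
# Minimal sketch (illustrative labels, not study data): inter-rater and rater-vs-sensor
# agreement on per-epoch activity annotations, measured with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# One label per 30-s epoch, e.g. "eat", "drink", "rest", "walk", "talk"
rater_1 = ["eat", "eat", "rest", "walk", "eat", "talk", "rest", "eat"]
rater_2 = ["eat", "eat", "rest", "walk", "rest", "talk", "rest", "eat"]
sensor  = ["eat", "eat", "rest", "rest", "eat", "talk", "rest", "eat"]

print("inter-rater kappa:", cohen_kappa_score(rater_1, rater_2))
print("rater-vs-sensor kappa:", cohen_kappa_score(rater_1, sensor))
```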
6
Spruijt-Metz D, Wen CKF, Bell BM, Intille S, Huang JS, Baranowski T. Advances and Controversies in Diet and Physical Activity Measurement in Youth. Am J Prev Med 2018;55:e81-e91. [PMID: 30135037 PMCID: PMC6151143 DOI: 10.1016/j.amepre.2018.06.012]
Abstract
Technological advancements in the past decades have improved dietary intake and physical activity measurement. This report reviews current developments in dietary intake and physical activity assessment in youth. Dietary intake assessment has relied predominantly on self-report or image-based methods to measure key aspects of dietary intake (e.g., food types, portion size, eating occasion), which are prone to notable methodologic (e.g., recall bias) and logistic (e.g., participant and researcher burden) challenges. Although there have been improvements in automatic eating detection, artificial intelligence, and sensor-based technologies, participant input is often needed to verify food categories and portions. Current physical activity assessment methods, including self-report, direct observation, and wearable devices, provide researchers with reliable estimates of energy expenditure and bodily movement. Recent developments in algorithms that incorporate signals from multiple sensors and in technology-augmented self-reporting methods have shown preliminary efficacy in measuring specific types of activity patterns and relevant contextual information. However, challenges in detecting resistance exercise (e.g., resistance training, weight lifting), in prolonged physical activity monitoring, and in algorithm (non)equivalence remain to be addressed. In summary, although dietary intake assessment methods have yet to achieve the same validity and reliability as physical activity measurement, recent developments in wearable technologies in both arenas have the potential to improve current assessment methods. THEME INFORMATION: This article is part of a theme issue entitled Innovative Tools for Assessing Diet and Physical Activity for Health Promotion, which is sponsored by the North American branch of the International Life Sciences Institute.
Affiliation(s)
- Donna Spruijt-Metz
- Center for Economic and Social Research, University of Southern California, Los Angeles, California; Department of Psychology, University of Southern California, Los Angeles, California; Department of Preventive Medicine, University of Southern California, Los Angeles, California.
- Cheng K Fred Wen
- Department of Preventive Medicine, University of Southern California, Los Angeles, California
- Brooke M Bell
- Department of Preventive Medicine, University of Southern California, Los Angeles, California
- Stephen Intille
- College of Computer and Information Science, Northeastern University, Boston, Massachusetts; Department of Health Sciences, Bouvé College of Health Sciences, Northeastern University, Boston, Massachusetts
- Jeannie S Huang
- Department of Pediatrics, School of Medicine, University of California at San Diego, San Diego, California; Rady Children's Hospital, San Diego, California
- Tom Baranowski
- Department of Pediatrics, Baylor College of Medicine, Houston, Texas
7
Kyritsis K, Tatli CL, Diou C, Delopoulos A. Automated analysis of in meal eating behavior using a commercial wristband IMU sensor. Annu Int Conf IEEE Eng Med Biol Soc 2018;2017:2843-2846. [PMID: 29060490 DOI: 10.1109/embc.2017.8037449]
Abstract
Automatic objective monitoring of eating behavior using inertial sensors is a research problem that has received a lot of attention recently, mainly due to the mass availability of IMUs and the evidence on the importance of quantifying and monitoring eating patterns. In this paper we propose a method for detecting food intake cycles during the course of a meal using a commercially available wristband. We first model micro-movements that are part of the intake cycle and then use HMMs to model the sequences of micro-movements leading to mouthfuls. Evaluation is carried out on an annotated dataset of 8 subjects where the proposed method achieves 0.78 precision and 0.77 recall. The evaluation dataset is publicly available at http://mug.ee.auth.gr/intake-cycle-detection/.
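As a rough sketch of the HMM idea, the example below fits a Gaussian HMM to a sequence of per-window wrist-movement feature vectors and reads off a hidden state per window; the synthetic features, the unsupervised fit, and the number of states are assumptions for illustration, not the authors' trained micro-movement models.

```python
# Minimal sketch (synthetic data; not the authors' model): treat a meal as a sequence of
# IMU feature vectors and fit a Gaussian HMM whose hidden-state runs can be inspected as
# candidate intake-cycle segments. Requires the hmmlearn package.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# One feature vector per short window of wristband data (e.g., accelerometer/gyroscope statistics)
meal_features = rng.normal(size=(600, 6))

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50, random_state=0)
model.fit(meal_features)               # unsupervised fit on one meal's sequence
states = model.predict(meal_features)  # most likely hidden state per window

# Contiguous runs of the state(s) associated with "food to mouth" movements would be
# read off as candidate intake cycles.
print(np.bincount(states))
```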
8
Abstract
Research suggests that there might be a relationship between chew count, as well as chewing rate, and energy intake. Chewing has been used in wearable sensor systems for the automatic detection of food intake, but little work has been reported on the automatic measurement of chew count or chewing rate. This work presents a method for the automatic quantification of chewing episodes captured by a piezoelectric sensor system. The proposed method was tested on 120 meals from 30 participants using two approaches. In a semi-automatic approach, histogram-based peak detection was used to count the number of chews in manually annotated chewing segments, resulting in a mean absolute error of 10.40% ± 7.03%. In a fully automatic approach, automatic food intake recognition preceded the application of the chew counting algorithm. The sensor signal was divided into 5-s non-overlapping epochs. Leave-one-out cross-validation was used to train an artificial neural network (ANN) to classify epochs as "food intake" or "no intake", with an average F1 score of 91.09%. Chews were counted in epochs classified as food intake with a mean absolute error of 15.01% ± 11.06%. The proposed methods were compared with manual chew counts using an analysis of variance (ANOVA), which showed no statistically significant difference between the methods. Results suggest that the proposed method can provide objective and automatic quantification of eating behavior in terms of chew counts and chewing rates.
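The sketch below illustrates the general idea of counting chews by peak detection within a chewing segment; it uses a simple prominence-based peak detector on a synthetic signal rather than the histogram-based detector described in the abstract, and the sampling rate and thresholds are assumptions.

```python
# Minimal sketch (synthetic signal, illustrative thresholds; not the authors' algorithm):
# counting chews in a chewing segment of a piezoelectric sensor signal via peak detection.
import numpy as np
from scipy.signal import find_peaks

fs = 100                                  # sampling rate in Hz (assumption)
t = np.arange(0, 5, 1 / fs)               # a 5-s chewing segment
chew_rate_hz = 1.5                        # about 1.5 chews per second (illustrative)
signal = np.sin(2 * np.pi * chew_rate_hz * t) \
    + 0.2 * np.random.default_rng(0).normal(size=t.size)

# Require peaks to be reasonably prominent and at least ~0.4 s apart
peaks, _ = find_peaks(signal, prominence=0.5, distance=int(0.4 * fs))
print("estimated chew count:", peaks.size)  # about 8 for this synthetic segment
```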
9
Farooq M, Sazonov E. Comparative testing of piezoelectric and printed strain sensors in characterization of chewing. Annu Int Conf IEEE Eng Med Biol Soc 2016;2015:7538-41. [PMID: 26738036 DOI: 10.1109/embc.2015.7320136]
Abstract
Results of recent research suggest that there may be a relationship between the eating rate and the total energy intake in a meal. The chewing rate is an indicator of the eating rate that may be measured by a sensor. A number of wearable solutions have been presented for the automatic detection of chewing, but little work has been done on counting chews automatically. With recent developments in printing technologies, it is possible to draw or print application-specific sensors. This paper provides a comparison between an off-the-shelf piezoelectric strain sensor and a plotter-drawn strain sensor for quantifying the number of chews for several food items. The piezoelectric strain sensor and the plotter-drawn strain sensor achieved mean absolute error rates of 8.09 ± 7.16% and 8.26 ± 7.51%, respectively, for estimating the number of chews. This shows that a plotter-drawn sensor can achieve similar performance while potentially providing an easily reconfigurable solution.
10
A Novel Wearable Device for Food Intake and Physical Activity Recognition. Sensors (Basel) 2016;16:1067. [PMID: 27409622 PMCID: PMC4970114 DOI: 10.3390/s16071067]
Abstract
The presence of speech and motion artifacts has been shown to impact the performance of wearable sensor systems used for the automatic detection of food intake. This work presents a novel wearable device that can detect food intake even when the user is physically active and/or talking. The device consists of a piezoelectric strain sensor placed on the temporalis muscle, an accelerometer, and a data acquisition module connected to the temple of eyeglasses. Data from 10 participants were collected while they performed activities including quiet sitting, talking, eating while sitting, eating while walking, and walking. Piezoelectric strain sensor and accelerometer signals were divided into non-overlapping epochs of 3 s, and four features were computed for each signal. To differentiate between eating and not eating, as well as between sedentary postures and physical activity, two multiclass classification approaches are presented. The first approach used a single classifier with sensor fusion and the second used two-stage classification. The best results were achieved when two separate linear support vector machine (SVM) classifiers were trained for food intake and activity detection, and their results were combined using a decision tree (two-stage classification) to determine the final class. This approach resulted in an average F1-score of 99.85% and an area under the curve (AUC) of 0.99 for multiclass classification. With its ability to differentiate between food intake and activity level, this device may potentially be used for tracking both energy intake and energy expenditure.
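The sketch below illustrates the two-stage idea on synthetic data: one linear SVM for food intake, one for physical activity, and a decision tree that combines their outputs into a final class. Feature dimensions, label encoding, and the train/test split are assumptions, not the authors' configuration.

```python
# Minimal sketch (synthetic features; not the authors' pipeline): two-stage classification
# with two linear SVMs (intake and activity) combined by a decision tree.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 8))             # e.g., 4 strain + 4 accelerometer features per 3-s epoch
y_intake = rng.integers(0, 2, size=800)   # eating vs. not eating
y_active = rng.integers(0, 2, size=800)   # physically active vs. sedentary
y_final = 2 * y_intake + y_active         # 4 combined classes (illustrative encoding)

svm_intake = LinearSVC().fit(X[:600], y_intake[:600])
svm_active = LinearSVC().fit(X[:600], y_active[:600])

# Stage 2: a decision tree over the two stage-1 decision scores
stage1_train = np.column_stack([svm_intake.decision_function(X[:600]),
                                svm_active.decision_function(X[:600])])
combiner = DecisionTreeClassifier(max_depth=3).fit(stage1_train, y_final[:600])

stage1_test = np.column_stack([svm_intake.decision_function(X[600:]),
                               svm_active.decision_function(X[600:])])
print(combiner.predict(stage1_test)[:10])  # final class per held-out epoch
```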
11
Fontana JM, Farooq M, Sazonov E. Automatic ingestion monitor: a novel wearable device for monitoring of ingestive behavior. IEEE Trans Biomed Eng 2015;61:1772-9. [PMID: 24845288 DOI: 10.1109/tbme.2014.2306773]
Abstract
Objective monitoring of food intake and ingestive behavior in a free-living environment remains an open problem that has significant implications for the study and treatment of obesity and eating disorders. In this paper, a novel wearable sensor system (automatic ingestion monitor, AIM) is presented for objective monitoring of ingestive behavior in free living. The proposed device integrates three sensor modalities that wirelessly interface to a smartphone: a jaw motion sensor, a hand gesture sensor, and an accelerometer. A novel sensor fusion and pattern recognition method was developed for subject-independent food intake recognition. The device and the methodology were validated with data collected from 12 subjects wearing AIM during the course of 24 h, in which neither the daily activities nor the food intake of the subjects was restricted in any way. Results showed that the system was able to detect food intake with an average accuracy of 89.8%, suggesting that AIM can potentially be used as an instrument to monitor ingestive behavior in free-living individuals.
12
Farooq M, Fontana JM, Sazonov E. A novel approach for food intake detection using electroglottography. Physiol Meas 2014;35:739-51. [PMID: 24671094 DOI: 10.1088/0967-3334/35/5/739]
Abstract
Many methods for monitoring diet and food intake rely on subjects self-reporting their daily intake. These methods are subjective, potentially inaccurate, and need to be replaced by more accurate and objective methods. This paper presents a novel approach that uses an electroglottograph (EGG) device for objective and automatic detection of food intake. Thirty subjects participated in a four-visit experiment involving the consumption of meals with self-selected content. Variations in the electrical impedance across the larynx caused by the passage of food during swallowing were captured by the EGG device. To compare the performance of the proposed method with a well-established acoustic method, a throat microphone was used to monitor swallowing sounds. Both signals were segmented into non-overlapping epochs of 30 s and processed to extract wavelet features. Subject-independent classifiers were trained, using artificial neural networks, to identify periods of food intake from the wavelet features. Results from leave-one-out cross-validation showed an average per-epoch classification accuracy of 90.1% for the EGG-based method and 83.1% for the acoustic-based method, demonstrating the feasibility of using an EGG for food intake detection.
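The sketch below illustrates the wavelet-feature idea: each 30-s epoch is decomposed with a discrete wavelet transform, sub-band log-energies serve as features, and a small neural network classifies epochs as intake or no intake. The synthetic signal, wavelet choice ('db4', 5 levels), and feature definition are assumptions, not the authors' exact processing.

```python
# Minimal sketch (synthetic EGG-like signal; not the authors' implementation): wavelet
# decomposition of 30-s epochs followed by a small neural-network classifier.
# Requires the PyWavelets and scikit-learn packages.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

fs = 250                                   # sampling rate in Hz (assumption)
rng = np.random.default_rng(0)

def wavelet_features(epoch: np.ndarray) -> np.ndarray:
    """Log-energy of each sub-band from a multilevel discrete wavelet transform."""
    coeffs = pywt.wavedec(epoch, "db4", level=5)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

# Toy dataset: one 30-s epoch per row, labeled intake (1) / no intake (0)
epochs = rng.normal(size=(200, 30 * fs))
labels = rng.integers(0, 2, size=200)
X = np.vstack([wavelet_features(e) for e in epochs])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```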
Affiliation(s)
- Muhammad Farooq
- Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA