1.
Choi JY, Jeon S, Kim H, Ha J, Jeon GS, Lee J, Cho SI. Health-Related Indicators Measured Using Earable Devices: Systematic Review. JMIR Mhealth Uhealth 2022;10:e36696. PMID: 36239201; PMCID: PMC9709679; DOI: 10.2196/36696.
Abstract
BACKGROUND Earable devices are novel, wearable Internet of Things devices that are user-friendly and have potential applications in mobile health care. The position of the ear is advantageous for assessing vital status and detecting diseases through reliable and comfortable sensing devices. OBJECTIVE Our study aimed to review the utility of health-related indicators derived from earable devices and propose an improved definition of disease prevention. We also proposed future directions for research on the health care applications of earable devices. METHODS A systematic review was conducted of the PubMed, Embase, and Web of Science databases. Keywords were used to identify studies on earable devices published between 2015 and 2020. The earable devices were described in terms of target health outcomes, biomarkers, sensor types and positions, and their utility for disease prevention. RESULTS A total of 51 articles met the inclusion criteria and were reviewed, and the frequency of 5 health-related characteristics of earable devices was described. The most frequent target health outcomes were diet-related outcomes (9/51, 18%), brain status (7/51, 14%), and cardiovascular disease (CVD) and central nervous system disease (5/51, 10% each). The most frequent biomarkers were electroencephalography (11/51, 22%), body movements (6/51, 12%), and body temperature (5/51, 10%). As for sensor types and sensor positions, electrical sensors (19/51, 37%) and the ear canal (26/51, 51%) were the most common, respectively. Moreover, the most frequent prevention stages were secondary prevention (35/51, 69%), primary prevention (12/51, 24%), and tertiary prevention (4/51, 8%). Combinations of ≥2 target health outcomes were the most frequent in secondary prevention (8/35, 23%), followed by brain status and CVD (5/35, 14% each) and by central nervous system disease and head injury (4/35, 11% each). CONCLUSIONS Earable devices can provide biomarkers for various health outcomes. Brain status, healthy diet status, and CVDs were the most frequently targeted outcomes among the studies. Earable devices were mostly used for secondary prevention via monitoring of health or disease status. The potential utility of earable devices for primary and tertiary prevention needs to be investigated further. Earable devices connected to smartphones or tablets through cloud servers will guarantee user access to personal health information and facilitate comfortable wearing.
Affiliation(s)
- Jin-Young Choi
- Department of Public Health Science, Graduate School of Public Health, Seoul National University, Seoul, Republic of Korea
- Seonghee Jeon
- Department of Public Health Science, Graduate School of Public Health, Seoul National University, Seoul, Republic of Korea
- Hana Kim
- Department of Public Health Science, Graduate School of Public Health, Seoul National University, Seoul, Republic of Korea
- Jaeyoung Ha
- Department of Public Health Science, Graduate School of Public Health, Seoul National University, Seoul, Republic of Korea
- Gyeong-Suk Jeon
- Department of Nursing, College of Natural Science, Mokpo National University, Mokpo, Republic of Korea
- Jeong Lee
- Department of Nursing, College of Health and Medical Science, Chodang University, Muan, Republic of Korea
- Sung-Il Cho
- Department of Public Health Science, Graduate School of Public Health, Seoul National University, Seoul, Republic of Korea
- Institute of Health and Environment, Seoul National University, Seoul, Republic of Korea
2.
Determination of Chewing Count from Video Recordings Using Discrete Wavelet Decomposition and Low Pass Filtration. Sensors 2021;21:6806. PMID: 34696019; PMCID: PMC8538316; DOI: 10.3390/s21206806.
Abstract
Several studies have shown the importance of proper chewing and the effect of chewing speed on human health in terms of caloric intake and even cognitive function. This study aims at designing algorithms for determining the chew count from video recordings of subjects consuming food items. A novel algorithm based on image and signal processing techniques was developed to continuously capture the area of interest from the video clips, determine facial landmarks, generate the chewing signal, and process the signal with two methods: a low-pass filter and discrete wavelet decomposition. Peak detection was then used to determine the chew count from the processed chewing signal. The system was tested using recordings from 100 subjects at three different chewing speeds (slow, normal, and fast) without any constraints on gender, skin color, facial hair, or ambience. The low-pass filter algorithm achieved the best mean absolute percentage errors of 6.48%, 7.76%, and 8.38% for the slow, normal, and fast chewing speeds, respectively. Performance was also evaluated using a Bland-Altman plot, which showed that most points lie within the limits of agreement. Although the algorithm needs improvement for faster chewing speeds, it surpasses the performance reported in the relevant literature. This research provides a reliable and accurate method for determining the chew count. The proposed methods facilitate the study of chewing behavior in natural settings without any cumbersome hardware that may affect the results, and can support research into chewing behavior while using smart devices.
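The filter-then-count stage of such a pipeline can be illustrated in a few lines. The following is a minimal, hypothetical sketch in pure Python (not the authors' implementation): a moving-average low-pass filter smooths a chewing signal, and counting local maxima above a threshold yields the chew count.

```python
import math

def moving_average(signal, window=5):
    """Low-pass filter: replace each sample with the mean of a centred window."""
    half = window // 2
    smoothed = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        smoothed.append(sum(signal[lo:hi]) / (hi - lo))
    return smoothed

def count_chews(signal, threshold=0.0):
    """Count local maxima above `threshold` in the smoothed chewing signal."""
    return sum(
        1
        for i in range(1, len(signal) - 1)
        if signal[i] > threshold
        and signal[i] > signal[i - 1]
        and signal[i] >= signal[i + 1]
    )

# Synthetic 10 s chewing signal: 1.5 chews per second sampled at 30 fps,
# so 15 chew cycles are expected.
chewing_signal = [math.sin(2 * math.pi * 1.5 * t / 30.0) for t in range(300)]
chews = count_chews(moving_average(chewing_signal, window=5))
```

On a real landmark trace the window length and threshold would have to be tuned to the chewing rate and noise level; the paper's discrete wavelet decomposition variant is a more selective alternative to this simple filter.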
3.
Moguel E, Berrocal J, García-Alonso J. Systematic Literature Review of Food-Intake Monitoring in an Aging Population. Sensors (Basel) 2019;19:E3265. PMID: 31344946; PMCID: PMC6695930; DOI: 10.3390/s19153265.
Abstract
The dietary habits of people directly impact their health. Especially in older populations (in 2017, 6.7% of the world's population was over 65 years of age), poor habits can lead to important nutrient losses that seriously affect cognitive and functional state. Recently, a great research effort has been devoted to using different technologies and proposing different techniques for monitoring food intake. These techniques are usually generic, making use of the most innovative technologies and methodologies to obtain the best possible monitoring results. However, a large percentage of elderly people live in depopulated rural areas (in Spain, 28.1% of the elderly population lives in this type of area) with a fragile cultural and socioeconomic context. The use of these techniques in such environments is crucial to improving this group's quality of life (and even reducing their healthcare expenses). At the same time, it is especially challenging, since these settings impose very specific and strict requirements on the use and application of technology. In this Systematic Literature Review (SLR), we analyze the most important proposed technologies and techniques in order to identify whether they can be applied in this context and whether they can improve the quality of life of this vulnerable group. We analyzed 326 papers; of these, 29 proposals were analyzed in full, taking into account the characteristics and requirements of this population.
Affiliation(s)
- Enrique Moguel
- Av. de la Universidad, s/n, University of Extremadura, 10004 Cáceres, Spain
- Javier Berrocal
- Av. de la Universidad, s/n, University of Extremadura, 10004 Cáceres, Spain
- José García-Alonso
- Av. de la Universidad, s/n, University of Extremadura, 10004 Cáceres, Spain
4.
van den Boer J, van der Lee A, Zhou L, Papapanagiotou V, Diou C, Delopoulos A, Mars M. The SPLENDID Eating Detection Sensor: Development and Feasibility Study. JMIR Mhealth Uhealth 2018;6:e170. PMID: 30181111; PMCID: PMC6231803; DOI: 10.2196/mhealth.9781.
Abstract
Background The available methods for monitoring food intake, which rely to a great extent on self-report, often provide biased and incomplete data, and no good technological solutions are currently available. Hence, the SPLENDID eating detection sensor (an ear-worn device with an air microphone and a photoplethysmogram [PPG] sensor) was developed to enable complete and objective measurement of eating events. The technical performance of this device has been described before. To date, the literature lacks a description of how such a device is perceived and experienced by potential users. Objective The objective of our study was to explore how potential users perceive and experience the SPLENDID eating detection sensor. Methods Potential users evaluated the eating detection sensor at different stages of its development: (1) At the start, 12 health professionals (eg, dieticians, personal trainers) were interviewed and a focus group was held with 5 potential end users to find out their thoughts on the concept of the eating detection sensor. (2) Then, preliminary prototypes of the eating detection sensor were tested in a laboratory setting where 23 young adults reported their experiences. (3) Next, the first wearable version of the eating detection sensor was tested in a semicontrolled study where 22 young, overweight adults used the sensor on 2 separate days (from lunch till dinner) and reported their experiences. (4) The final version of the sensor was tested in a 4-week feasibility study by 20 young, overweight adults who reported their experiences. Results Throughout all the development stages, most individuals were enthusiastic about the eating detection sensor. However, it was stressed multiple times that it was critical that the device be discreet and comfortable to wear for a longer period. In the final study, the eating detection sensor received an average grade of 3.7 for wearer comfort on a scale of 1 to 10. Moreover, experienced discomfort was the main reason for wearing the eating detection sensor <2 hours a day. The participants reported having used the eating detection sensor on 19/28 instructed days on average. Conclusions The SPLENDID eating detection sensor, which uses an air microphone and a PPG sensor, is a promising new device that can facilitate the collection of reliable food intake data, as shown by its technical performance. Potential users are enthusiastic, but to be successful, the wearer comfort and discreetness of the device need to be improved.
Affiliation(s)
- Janet van den Boer
- Sensory Science and Eating Behaviour Chair Group, Division of Human Nutrition, Wageningen University, Wageningen, Netherlands
- Annemiek van der Lee
- Sensory Science and Eating Behaviour Chair Group, Division of Human Nutrition, Wageningen University, Wageningen, Netherlands
- Lingchuan Zhou
- Electronics & Firmware, Systems Division, Centre Suisse d'Electronique et de Microtechnique, Neuchâtel, Switzerland
- Vasileios Papapanagiotou
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University, Thessaloniki, Greece
- Christos Diou
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University, Thessaloniki, Greece
- Anastasios Delopoulos
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University, Thessaloniki, Greece
- Monica Mars
- Sensory Science and Eating Behaviour Chair Group, Division of Human Nutrition, Wageningen University, Wageningen, Netherlands
5.
Papapanagiotou V, Diou C, Ioakimidis I, Sodersten P, Delopoulos A. Automatic Analysis of Food Intake and Meal Microstructure Based on Continuous Weight Measurements. IEEE J Biomed Health Inform 2018;23:893-902. PMID: 29993620; DOI: 10.1109/jbhi.2018.2812243.
Abstract
The structure of the cumulative food intake (CFI) curve has been associated with obesity and eating disorders. Scales that record the weight loss of a plate from which a subject eats are used to capture this curve; however, their measurements are contaminated by additive noise and distorted by certain types of artifacts. This paper presents an algorithm for automatically processing continuous in-meal weight measurements in order to extract the clean CFI curve and in-meal eating indicators, such as total food intake and food intake rate. The algorithm relies on representing the weight-time series as a string of symbols that correspond to events such as bites or food additions. A context-free grammar is then used to model a meal as a sequence of such events, and the most likely parse tree is selected to determine the predicted eating sequence. The algorithm is evaluated on a dataset of 113 meals collected using the Mandometer, a scale that continuously samples plate weight during eating. We evaluate the effectiveness for seven indicators and for bite-instance detection, compare our approach with three state-of-the-art algorithms, and achieve the lowest error rates for most indicators (24 g for total meal weight). The proposed algorithm extracts the parameters of the CFI curve automatically, eliminating the need for manual data processing and thus facilitating large-scale studies of eating behavior.
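The symbolization idea can be illustrated without the grammar machinery. Below is a deliberately simplified, hypothetical sketch (not the paper's parser, and with made-up thresholds and weights): successive plate-weight differences beyond a step threshold become bite or food-addition symbols, and the bite weights sum to total intake.

```python
def symbolize(weights, min_step=5.0):
    """Turn a plate-weight series (grams) into a string of events.

    A drop of at least `min_step` g between successive samples is a bite
    ('b'); a rise of at least `min_step` g is a food addition ('a').
    Smaller fluctuations are treated as measurement noise and skipped.
    """
    events = []
    for prev, cur in zip(weights, weights[1:]):
        delta = cur - prev
        if delta <= -min_step:
            events.append(("b", -delta))
        elif delta >= min_step:
            events.append(("a", delta))
    return events

def total_intake(events):
    """Total food intake: grams removed from the plate by bite events."""
    return sum(grams for kind, grams in events if kind == "b")

# Plate weight sampled once per stable plateau: three 15 g bites and one
# 50 g food addition partway through the meal.
plate = [400.0, 400.0, 385.0, 385.0, 370.0, 420.0, 420.0, 405.0]
events = symbolize(plate)
```

The paper's contribution is precisely what this sketch omits: a context-free grammar over the event string that disambiguates overlapping artifacts before the indicators are computed.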
6.
Papapanagiotou V, Diou C, van den Boer J, Mars M, Delopoulos A. The SPLENDID chewing detection challenge. Annu Int Conf IEEE Eng Med Biol Soc 2017;2017:817-820. PMID: 29059997; DOI: 10.1109/embc.2017.8036949.
Abstract
Monitoring of eating behavior using wearable technology is receiving increased attention, driven by recent advances in wearable devices and mobile phones. One particularly interesting aspect of eating behavior is the monitoring of chewing activity and eating occurrences. Several chewing sensor types and chewing detection algorithms have been proposed in the literature; however, no datasets are publicly available to facilitate evaluation and further research. In this paper, we present a multi-modal dataset of over 60 hours of recordings from 14 participants in semi-free-living conditions, collected in the context of the SPLENDID project. The dataset includes raw signals from a photoplethysmography (PPG) sensor and a 3D accelerometer, and a set of extracted features from audio recordings; detailed annotations and ground truth are also provided, both at eating-event level and at individual-chew level. We also provide a baseline evaluation method, and introduce the "challenge" of improving the baseline chewing detection algorithms. The dataset can be downloaded from http://dx.doi.org/10.17026/dans-zxw-v8gy, and supplementary code from https://github.com/mug-auth/chewing-detection-challenge.git.
7.
Papapanagiotou V, Diou C, Zhou L, van den Boer J, Mars M, Delopoulos A. A Novel Chewing Detection System Based on PPG, Audio, and Accelerometry. IEEE J Biomed Health Inform 2016;21:607-618. PMID: 27834659; DOI: 10.1109/jbhi.2016.2625271.
Abstract
In the context of dietary management, accurate monitoring of eating habits is receiving increased attention. Wearable sensors, combined with the connectivity and processing power of modern smartphones, can be used to robustly extract objective, real-time measurements of human behavior. In particular, for the task of chewing detection, several approaches based on an in-ear microphone can be found in the literature, while other sensor types, such as strain sensors, have also been reported. In this paper, developed in the context of the SPLENDID project, we propose combining an in-ear microphone with a photoplethysmography (PPG) sensor placed in the ear concha in a new high-accuracy, low-sampling-rate prototype chewing detection system. We propose a pipeline that initially processes each sensor signal separately and then fuses both to perform the final detection. Features are extracted from each modality, and support vector machine (SVM) classifiers are used separately to perform snacking detection. Finally, we combine the SVM scores from both signals in a late-fusion scheme, which leads to increased eating detection accuracy. We evaluate the proposed eating monitoring system on a challenging semi-free-living dataset of 14 subjects, which includes more than 60 h of audio and PPG signal recordings. Results show that fusing the audio and PPG signals significantly improves the effectiveness of eating event detection, achieving accuracy up to 0.938 and class-weighted accuracy up to 0.892.
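The late-fusion step reduces to a weighted combination of per-modality classifier scores. The sketch below is illustrative only: the fusion weight, threshold, and scores are invented, not the paper's trained SVMs; each window's audio and PPG decision scores are fused by a convex combination and thresholded into eating/non-eating labels.

```python
def late_fusion(audio_scores, ppg_scores, w_audio=0.6, threshold=0.0):
    """Fuse per-window decision scores from two modalities.

    Each window's fused score is a convex combination of the audio and PPG
    classifier scores; windows whose fused score exceeds `threshold` are
    labelled as eating (1), the rest as non-eating (0).
    """
    if len(audio_scores) != len(ppg_scores):
        raise ValueError("both modalities must score the same windows")
    fused = [
        w_audio * a + (1.0 - w_audio) * p
        for a, p in zip(audio_scores, ppg_scores)
    ]
    return [1 if s > threshold else 0 for s in fused]

# Three windows: both modalities agree on the first (eating) and second
# (non-eating); on the third they disagree and the fused score decides.
labels = late_fusion([1.2, -0.5, 0.3], [0.8, -1.0, -0.9])
```

A design note: fusing decision scores rather than hard labels lets a confident modality outvote an uncertain one, which is one reason late fusion can beat either single-modality detector.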
Affiliation(s)
- Vasileios Papapanagiotou
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Christos Diou
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Monica Mars
- Wageningen University, Wageningen, Netherlands
- Anastasios Delopoulos
- Multimedia Understanding Group, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece