1. Jia W, Li B, Zheng Y, Mao ZH, Sun M. Estimating Amount of Food in a Circular Dining Bowl from a Single Image. MADiMa '23: Proceedings of the 8th International Workshop on Multimedia Assisted Dietary Management; 2023:1-9. [PMID: 38288389; PMCID: PMC10823382; DOI: 10.1145/3607828.3617789]
Abstract
Unhealthy diet is a top risk factor causing obesity and numerous chronic diseases. To help the public adopt a healthy diet, nutrition scientists need user-friendly tools to conduct Dietary Assessment (DA). In recent years, new DA tools have been developed using a smartphone or a wearable device which acquires images during a meal. These images are then processed to estimate calories and nutrients of the consumed food. Although considerable progress has been made, 2D food images lack a scale reference and 3D volumetric information. In addition, food must be sufficiently observable from the image. This basic condition can be met when the food is stand-alone (no food container is used) or contained in a shallow plate. However, the condition cannot be met easily when a bowl is used: the food is often occluded by the bowl edge, and the shape of the bowl may not be fully determined from the image. Yet bowls are the most utilized food containers by billions of people in many parts of the world, especially in Asia and Africa. In this work, we propose to premeasure plates and bowls using a marked adhesive strip before a dietary study starts. This simple procedure eliminates the use of a scale reference throughout the DA study. In addition, we use mathematical models and image processing to reconstruct the bowl in 3D. Our key idea is to estimate how full the bowl is rather than how much food (in either volume or weight) is in the bowl. This idea reduces the effect of occlusion. The experimental data have shown satisfactory results of our methods, which enable accurate DA studies using both plates and bowls with reduced burden on research participants.
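As a rough illustration of the fullness idea described above, the sketch below assumes the bowl interior can be approximated by a spherical cap of known sphere radius and depth; this model and the numbers are illustrative assumptions, not the paper's actual bowl reconstruction.

```python
import math

def cap_volume(h: float, r: float) -> float:
    """Volume of a spherical cap of height h cut from a sphere of radius r."""
    return math.pi * h ** 2 * (3 * r - h) / 3

def bowl_fullness(food_height: float, bowl_depth: float, sphere_radius: float) -> float:
    """Fraction of the bowl interior occupied by food.

    Assumes the bowl interior is a spherical cap of total depth `bowl_depth`
    cut from a sphere of radius `sphere_radius` (an illustrative model only).
    """
    return cap_volume(food_height, sphere_radius) / cap_volume(bowl_depth, sphere_radius)

# Example: a bowl 6 cm deep carved from a 10 cm sphere, with food filled to 4 cm
print(round(bowl_fullness(4.0, 6.0, 10.0), 3))  # ≈ 0.481
```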
Affiliation(s)
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Boyang Li
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Yaguang Zheng
  - Rory Meyers College of Nursing, New York University, New York, NY, USA
- Zhi-Hong Mao
  - Departments of Electrical and Computer Engineering and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Mingui Sun
  - Departments of Neurosurgery, Electrical and Computer Engineering, and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
2. Jobarteh ML, McCrory MA, Lo B, Triantafyllidis KK, Qiu J, Griffin JP, Sazonov E, Sun M, Jia W, Baranowski T, Anderson AK, Maitland K, Frost G. Evaluation of Acceptability, Functionality, and Validity of a Passive Image-Based Dietary Intake Assessment Method in Adults and Children of Ghanaian and Kenyan Origin Living in London, UK. Nutrients 2023; 15:4075. [PMID: 37764857; PMCID: PMC10537234; DOI: 10.3390/nu15184075]
Abstract
BACKGROUND Accurate estimation of dietary intake is challenging. However, whilst some progress has been made in high-income countries, low- and middle-income countries (LMICs) remain behind, contributing to critical nutritional data gaps. This study aimed to validate an objective, passive image-based dietary intake assessment method against weighed food records in London, UK, for onward deployment to LMICs. METHODS Wearable camera devices were used to capture food intake on eating occasions in 18 adults and 17 children of Ghanaian and Kenyan origin living in London. Participants were provided pre-weighed meals of Ghanaian and Kenyan cuisine and camera devices to automatically capture images of the eating occasions. Food images were assessed for portion size, energy, nutrient intake, and the relative validity of the method compared to the weighed food records. RESULTS The Pearson and Intraclass correlation coefficients of estimates of intakes of food, energy, and 19 nutrients ranged from 0.60 to 0.95 and 0.67 to 0.90, respectively. Bland-Altman analysis showed good agreement between the image-based method and the weighed food record. Under-estimation of dietary intake by the image-based method ranged from 4 to 23%. CONCLUSIONS Passive food image capture and analysis provides an objective assessment of dietary intake comparable to weighed food records.
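The relative-validity statistics reported above (Pearson correlation, Bland-Altman agreement, and percentage under-estimation) can be computed as in the minimal sketch below; the per-meal energy values are fabricated for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical per-meal energy intake (kcal): weighed food record vs image-based estimate
weighed = np.array([520.0, 610.0, 480.0, 700.0, 560.0, 650.0])
imaged = np.array([500.0, 570.0, 470.0, 640.0, 530.0, 610.0])

# Pearson correlation between the two methods
r = np.corrcoef(weighed, imaged)[0, 1]

# Bland-Altman statistics: mean bias and 95% limits of agreement
diff = imaged - weighed
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Average under-estimation as a percentage of the weighed record
under_pct = 100.0 * (weighed - imaged).sum() / weighed.sum()

print(f"r = {r:.2f}, bias = {bias:.1f} kcal, LoA = ({loa[0]:.1f}, {loa[1]:.1f}), "
      f"under-estimation = {under_pct:.1f}%")
```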
Affiliation(s)
- Modou L. Jobarteh
  - Department of Population Health, London School of Hygiene and Tropical Medicine, London WC1E 7HT, UK
- Megan A. McCrory
  - Department of Health Sciences, Boston University, Boston, MA 02215, USA
- Benny Lo
  - Hamlyn Centre, Department of Surgery and Cancer, Imperial College London, London SW7 2AZ, UK
- Konstantinos K. Triantafyllidis
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London SW7 2BX, UK
- Jianing Qiu
  - Hamlyn Centre, Department of Surgery and Cancer, Imperial College London, London SW7 2AZ, UK
- Jennifer P. Griffin
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London SW7 2BX, UK
- Edward Sazonov
  - Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA
- Mingui Sun
  - Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Wenyan Jia
  - Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Tom Baranowski
  - USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX 77030, USA
- Alex K. Anderson
  - Department of Nutritional Sciences, University of Georgia, Athens, GA 30602, USA
- Gary Frost
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London SW7 2BX, UK
3. Sun M, Jia W, Chen G, Hou M, Chen J, Mao ZH. Improved Wearable Devices for Dietary Assessment Using a New Camera System. Sensors (Basel) 2022; 22:8006. [PMID: 36298356; PMCID: PMC9609969; DOI: 10.3390/s22208006]
Abstract
An unhealthy diet is strongly linked to obesity and numerous chronic diseases. Currently, over two-thirds of American adults are overweight or obese. Although dietary assessment helps people improve nutrition and lifestyle, traditional methods for dietary assessment depend on self-report, which is inaccurate and often biased. In recent years, as electronics, information, and artificial intelligence (AI) technologies have advanced rapidly, image-based objective dietary assessment using wearable electronic devices has become a powerful approach. However, research in this field has focused on the development of advanced algorithms to process image data. Few reports exist on the study of device hardware for the particular purpose of dietary assessment. In this work, we demonstrate that, with the current hardware design, there is a considerable risk of missing important dietary data owing to the common use of a rectangular image frame and a fixed camera orientation. We then present two designs of a new camera system that reduce data loss by generating circular images using rectangular image sensor chips. We also present a mechanical design that allows the camera orientation to be adjusted, adapting to differences among device wearers, such as gender, body height, and so on. Finally, we discuss the pros and cons of rectangular versus circular images with respect to information preservation and data processing using AI algorithms.
Affiliation(s)
- Mingui Sun
  - Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Guangzong Chen
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Mingke Hou
  - Department of Mechanical Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Jiacheng Chen
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Zhi-Hong Mao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
4. Li H, Yang G. Dietary Nutritional Information Autonomous Perception Method Based on Machine Vision in Smart Homes. Entropy (Basel) 2022; 24:868. [PMID: 35885091; PMCID: PMC9324181; DOI: 10.3390/e24070868]
Abstract
In order to automatically perceive a user's dietary nutritional information in the smart home environment, this paper proposes a dietary nutritional information autonomous perception method based on machine vision in smart homes. First, we propose a food-recognition algorithm based on YOLOv5 to monitor the user's dietary intake using a social robot. Second, in order to obtain the nutritional composition of the user's dietary intake, we calibrate the weight of food ingredients and design a method for calculating the food's nutritional composition; we then propose a dietary nutritional information autonomous perception method based on machine vision (DNPM) that supports the quantitative analysis of nutritional composition. Finally, the proposed algorithm was tested on the self-expanded dataset CFNet-34, built from the Chinese food dataset ChineseFoodNet. The test results show that the average recognition accuracy of the YOLOv5-based food-recognition algorithm is 89.7%, showing good accuracy and robustness. According to the performance test results of the dietary nutritional information autonomous perception system in smart homes, the average nutritional composition perception accuracy of the system was 90.1%, the response time was less than 6 ms, and the processing speed was higher than 18 fps, showing excellent robustness and nutritional composition perception performance.
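Once foods are recognized and ingredient weights are known, the nutritional composition is essentially a scaled table lookup. The sketch below illustrates that step with a hypothetical per-100 g composition table; the food names and values are placeholders, not the paper's dataset or nutrient database.

```python
# Per-100 g composition (illustrative values only)
FOOD_TABLE = {
    "steamed rice":   {"kcal": 130, "protein_g": 2.7,  "fat_g": 0.3,  "carb_g": 28.0},
    "stir-fried egg": {"kcal": 196, "protein_g": 13.0, "fat_g": 15.0, "carb_g": 1.0},
}

def meal_nutrition(detections):
    """Sum nutrient totals for a list of (food_name, weight_in_grams) detections."""
    totals = {"kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0, "carb_g": 0.0}
    for name, grams in detections:
        per100 = FOOD_TABLE[name]
        for key in totals:
            totals[key] += per100[key] * grams / 100.0
    return totals

# Example: two foods recognized by the detector, with calibrated weights in grams
print(meal_nutrition([("steamed rice", 180), ("stir-fried egg", 90)]))
```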
Affiliation(s)
- Hongyang Li
  - Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China
- Guanci Yang
  - Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China
  - Key Laboratory of “Internet+” Collaborative Intelligent Manufacturing in Guizhou Province, Guiyang 550025, China
  - State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
5. Jia W, Ren Y, Li B, Beatrice B, Que J, Cao S, Wu Z, Mao ZH, Lo B, Anderson AK, Frost G, McCrory MA, Sazonov E, Steiner-Asiedu M, Baranowski T, Burke LE, Sun M. A Novel Approach to Dining Bowl Reconstruction for Image-Based Food Volume Estimation. Sensors (Basel) 2022; 22:1493. [PMID: 35214399; PMCID: PMC8877095; DOI: 10.3390/s22041493]
Abstract
Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed from the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior using the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.
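For an axially symmetric bowl, once a radius-versus-depth profile of the interior has been recovered, the volume below any fill level follows from disc integration, V = ∫ π r(z)² dz. The sketch below shows that step with a made-up profile; it illustrates the geometry only and is not the authors' reconstruction algorithm.

```python
import numpy as np

def volume_from_profile(depths_cm, radii_cm, fill_depth_cm):
    """Volume (mL) below `fill_depth_cm` for an axially symmetric bowl interior.

    `depths_cm` are heights above the bowl bottom and `radii_cm` the interior
    radius at each height; the disc integral is evaluated with the trapezoid rule.
    """
    z = np.asarray(depths_cm, dtype=float)
    r = np.asarray(radii_cm, dtype=float)
    keep = z <= fill_depth_cm
    areas = np.pi * r[keep] ** 2
    return float(np.sum((areas[1:] + areas[:-1]) / 2.0 * np.diff(z[keep])))  # 1 cm^3 = 1 mL

# Hypothetical profile sampled every 0.5 cm from the bottom (radius grows toward the rim)
z = np.arange(0.0, 6.5, 0.5)
r = np.linspace(2.0, 7.5, len(z))
print(round(volume_from_profile(z, r, fill_depth_cm=4.0), 1), "mL")
```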
Affiliation(s)
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Yiqiu Ren
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Boyang Li
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Britney Beatrice
  - School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Jingda Que
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Shunxin Cao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Zekun Wu
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Zhi-Hong Mao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Benny Lo
  - Hamlyn Centre, Imperial College London, London SW7 2AZ, UK
- Alex K. Anderson
  - Department of Nutritional Sciences, University of Georgia, Athens, GA 30602, USA
- Gary Frost
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London SW7 2AZ, UK
- Megan A. McCrory
  - Department of Health Sciences, Boston University, Boston, MA 02210, USA
- Edward Sazonov
  - Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA
- Matilda Steiner-Asiedu
  - Department of Nutrition and Food Science, University of Ghana, Legon Boundary, Accra LG 1181, Ghana
- Tom Baranowski
  - USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX 77030, USA
- Lora E. Burke
  - School of Nursing, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Mingui Sun
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
6. Yang Z, Yu H, Cao S, Xu Q, Yuan D, Zhang H, Jia W, Mao ZH, Sun M. Human-Mimetic Estimation of Food Volume from a Single-View RGB Image Using an AI System. Electronics 2021; 10:1556. [PMID: 34552763; PMCID: PMC8455030; DOI: 10.3390/electronics10131556]
Abstract
It is well known that many chronic diseases are associated with unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to allow an assessment of dietary intake accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessments with a much simplified procedure compared with traditional questionnaires. However, a critical task is required to estimate the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in the two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel Artificial Intelligence (AI) system to mimic the thinking of dietitians who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate the portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented by an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets have shown accurate volume estimation results.
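The "intelligent guess" described above is an inner product between the network's probability vector and the vector of learned reference volumes. The sketch below shows that final step with illustrative reference volumes and probabilities (the numbers are not values from the paper).

```python
import numpy as np

# Illustrative internal reference volumes (mL), e.g., teaspoon, golf ball, half cup, cup, bowl
reference_volumes = np.array([5.0, 40.0, 120.0, 240.0, 500.0])

# Softmax output of the network for one food image (probabilities sum to 1)
probabilities = np.array([0.05, 0.15, 0.45, 0.30, 0.05])

# Expected volume = probability vector . reference volume vector
estimated_volume = float(probabilities @ reference_volumes)
print(f"estimated volume ≈ {estimated_volume:.1f} mL")  # ≈ 157 mL for these inputs
```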
Affiliation(s)
- Zhengeng Yang
  - College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
  - Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Hongshan Yu
  - College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Shunxin Cao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Qi Xu
  - School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Ding Yuan
  - Image Processing Center, Beihang University, Beijing 100191, China
- Hong Zhang
  - Image Processing Center, Beihang University, Beijing 100191, China
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Zhi-Hong Mao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Mingui Sun
  - Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
7. Qiu J, Lo FPW, Jiang S, Tsai YY, Sun Y, Lo B. Counting Bites and Recognizing Consumed Food from Videos for Passive Dietary Monitoring. IEEE J Biomed Health Inform 2021; 25:1471-1482. [PMID: 32897866; DOI: 10.1109/jbhi.2020.3022815]
Abstract
Assessment of dietary intake in epidemiological studies is predominantly based on self-reports, which are subjective, inefficient, and prone to error. Technological approaches are therefore emerging to provide objective dietary assessments. Using only egocentric dietary intake videos, this work aims to provide accurate estimation of individual dietary intake through recognizing consumed food items and counting the number of bites taken. This differs from previous studies that rely on inertial sensing to count bites, and from studies that only recognize visible food items but not consumed ones. As a subject may not consume all food items visible in a meal, recognizing those consumed food items is more valuable. A new dataset of 1,022 dietary intake video clips was constructed to validate our concept of bite counting and consumed food item recognition from egocentric videos. Twelve subjects participated and 52 meals were captured. A total of 66 unique food items, including food ingredients and drinks, were labelled in the dataset along with a total of 2,039 labelled bites. Deep neural networks were used to perform bite counting and food item recognition in an end-to-end manner. Experiments have shown that counting bites directly from video clips can reach 74.15% top-1 accuracy (classifying between 0-4 bites in 20-second clips), and an MSE value of 0.312 (when using regression). Our experiments on video-based food recognition also show that recognizing consumed food items is indeed harder than recognizing visible ones, with a drop of 25% in F1 score.
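The two reported figures, top-1 accuracy for the 0-4 bite classification and MSE for the regression output, are straightforward to compute from paired predictions. In the sketch below the clip-level numbers are fabricated purely to show the calculation.

```python
import numpy as np

# Hypothetical bite counts for ten 20-second clips: ground truth vs the two model heads
true_bites = np.array([0, 1, 2, 2, 3, 4, 1, 0, 2, 3])
pred_class = np.array([0, 1, 2, 3, 3, 4, 1, 1, 2, 3])                    # 0-4 classes
pred_reg = np.array([0.2, 1.1, 2.4, 2.8, 2.7, 3.9, 0.8, 0.4, 1.9, 3.2])  # regression output

top1_accuracy = float((pred_class == true_bites).mean())
mse = float(((pred_reg - true_bites) ** 2).mean())
print(f"top-1 accuracy = {top1_accuracy:.2%}, MSE = {mse:.3f}")
```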
8. Chen G, Jia W, Zhao Y, Mao ZH, Lo B, Anderson AK, Frost G, Jobarteh ML, McCrory MA, Sazonov E, Steiner-Asiedu M, Ansong RS, Baranowski T, Burke L, Sun M. Food/Non-Food Classification of Real-Life Egocentric Images in Low- and Middle-Income Countries Based on Image Tagging Features. Front Artif Intell 2021; 4:644712. [PMID: 33870184; PMCID: PMC8047062; DOI: 10.3389/frai.2021.644712]
Abstract
Malnutrition, including both undernutrition and obesity, is a significant problem in low- and middle-income countries (LMICs). In order to study malnutrition and develop effective intervention strategies, it is crucial to evaluate nutritional status in LMICs at the individual, household, and community levels. In a multinational research project supported by the Bill & Melinda Gates Foundation, we have been using a wearable technology to conduct objective dietary assessment in sub-Saharan Africa. Our assessment includes multiple diet-related activities in urban and rural families, including food sources (e.g., shopping, harvesting, and gathering), preservation/storage, preparation, cooking, and consumption (e.g., portion size and nutrition analysis). Our wearable device ("eButton" worn on the chest) acquires real-life images automatically during wake hours at preset time intervals. The recorded images, numbering tens of thousands per day, are post-processed to obtain the information of interest. Although we expect future Artificial Intelligence (AI) technology to extract the information automatically, at present we utilize AI to separate the acquired images into two classes: images with (Class 1) and without (Class 0) edible items. As a result, researchers need only to study Class-1 images, reducing their workload significantly. In this paper, we present a composite machine learning method to perform this classification, meeting the specific challenges of high complexity and diversity in the real-world LMIC data. Our method consists of a deep neural network (DNN) and a shallow learning network (SLN) connected by a novel probabilistic network interface layer. After presenting the details of our method, an image dataset acquired from Ghana is utilized to train and evaluate the machine learning system. Our comparative experiment indicates that the new composite method performs better than the conventional deep learning method assessed by integrated measures of sensitivity, specificity, and burden index, as indicated by the Receiver Operating Characteristic (ROC) curve.
Affiliation(s)
- Guangzong Chen
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA, United States
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA, United States
- Yifan Zhao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA, United States
- Zhi-Hong Mao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA, United States
- Benny Lo
  - Hamlyn Centre, Imperial College London, London, United Kingdom
- Alex K. Anderson
  - Department of Foods and Nutrition, University of Georgia, Athens, GA, United States
- Gary Frost
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, United Kingdom
- Modou L. Jobarteh
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, United Kingdom
- Megan A. McCrory
  - Department of Health Sciences, Boston University, Boston, MA, United States
- Edward Sazonov
  - Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, United States
- Richard S. Ansong
  - Department of Nutrition and Food Science, University of Ghana, Legon-Accra, Ghana
- Thomas Baranowski
  - USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX, United States
- Lora Burke
  - School of Nursing, University of Pittsburgh, Pittsburgh, PA, United States
- Mingui Sun
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA, United States
  - Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA, United States
9. Jia W, Wu Z, Ren Y, Cao S, Mao ZH, Sun M. Estimating Dining Plate Size From an Egocentric Image Sequence Without a Fiducial Marker. Front Nutr 2021; 7:519444. [PMID: 33521029; PMCID: PMC7840562; DOI: 10.3389/fnut.2020.519444]
Abstract
Despite the extreme importance of food intake in human health, it is currently difficult to conduct an objective dietary assessment without individuals' self-report. In recent years, a passive method utilizing a wearable electronic device has emerged. This device acquires food images automatically during the eating process. These images are then analyzed to estimate intakes of calories and nutrients, assisted by advanced computational algorithms. Although this passive method is highly desirable, it has been thwarted by the requirement of a fiducial marker, which must be present in the image as a scale reference. The importance of this scale reference is analogous to the importance of the scale bar in a map which determines distances or areas in any geographic region covered by the map. Likewise, the sizes or volumes of arbitrary foods on a dining table covered by an image cannot be determined without the scale reference. Currently, the fiducial marker (often a checkerboard card) serves as the scale reference which must be present on the table before taking pictures, requiring human efforts to carry, place and retrieve the fiducial marker manually. In this work, we demonstrate that the fiducial marker can be eliminated if an individual's dining location is fixed and a one-time calibration using a circular plate of known size is performed. When the individual uses another circular plate of an unknown size, our algorithm estimates its radius using the range of pre-calibrated distances between the camera and the plate, from which the desired scale reference is determined automatically. Our comparative experiment indicates that the mean absolute percentage error of the proposed estimation method is ~10.73%. Although this error is larger than that of the manual method of 6.68% using a fiducial marker on the table, the new method has a distinctive advantage of eliminating the manual procedure and automatically generating the scale reference.
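The scale reference reduces to the pinhole relation between the plate's physical radius, its image radius in pixels, the focal length in pixels, and the camera-to-plate distance. The sketch below uses that relation under the simplifying assumption of a fronto-parallel plate at a single calibrated distance; the paper works with a range of pre-calibrated distances, so this is only an illustration of the underlying geometry.

```python
def plate_radius_cm(radius_px: float, distance_cm: float, focal_px: float) -> float:
    """Physical plate radius from the pinhole model r_px = f * R / Z.

    Assumes the plate is roughly fronto-parallel to the image plane, a
    simplification of the geometry handled in the paper.
    """
    return radius_px * distance_cm / focal_px

# One-time calibration: a plate of known 12 cm radius imaged from 45 cm appears 320 px
# in radius, implying a focal length of 320 * 45 / 12 = 1200 px.
focal_px = 320 * 45 / 12

# A new plate photographed at a similar dining distance appears 290 px in radius:
print(round(plate_radius_cm(290, 45, focal_px), 1), "cm")  # 10.9 cm
```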
Affiliation(s)
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Zekun Wu
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Yiqiu Ren
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Shunxin Cao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
- Zhi-Hong Mao
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States
- Mingui Sun
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, United States
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, United States
  - Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA, United States
10. Lo FPW, Sun Y, Qiu J, Lo B. Image-Based Food Classification and Volume Estimation for Dietary Assessment: A Review. IEEE J Biomed Health Inform 2020; 24:1926-1939. [PMID: 32365038; DOI: 10.1109/jbhi.2020.2987943]
Abstract
A daily dietary assessment method named 24-hour dietary recall has commonly been used in nutritional epidemiology studies to capture detailed information about the food eaten by participants, to help understand their dietary behaviour. However, in this self-reporting technique, the food types and the portion sizes reported depend highly on users' subjective judgement, which may lead to biased and inaccurate dietary analysis results. As a result, a variety of visual-based dietary assessment approaches have been proposed recently. While these methods show promise in tackling issues in nutritional epidemiology studies, several challenges and forthcoming opportunities, as detailed in this study, still exist. This study provides an overview of computing algorithms, mathematical models and methodologies used in the field of image-based dietary assessment. It also provides a comprehensive comparison of state-of-the-art approaches in food recognition and volume/weight estimation in terms of their processing speed, model accuracy, efficiency and constraints. This is followed by a discussion of deep learning methods and their efficacy in dietary assessment. After a comprehensive exploration, we found that integrated dietary assessment systems combining different approaches could be a potential solution to the challenges of accurate dietary intake assessment.
11. Raber M, Baranowski T, Crawford K, Sharma SV, Schick V, Markham C, Jia W, Sun M, Steinman E, Chandra J. The Healthy Cooking Index: Nutrition Optimizing Home Food Preparation Practices across Multiple Data Collection Methods. J Acad Nutr Diet 2020; 120:1119-1132. [PMID: 32280056; DOI: 10.1016/j.jand.2020.01.008]
Abstract
BACKGROUND Food preparation interventions are an increasingly popular target for hands-on nutrition education for adults, children, and families, but assessment tools are lacking. Objective data on home cooking practices, and how they are interpreted through different data collection methods, are needed. OBJECTIVE The goal of this study was to explore the utility of the Healthy Cooking Index in coding multiple types of home food preparation data and elucidating healthy cooking behavior patterns. DESIGN Parent-child dyads were recruited between October 2017 and June 2018 in Houston and Austin, Texas for this observational study. Food preparation events were observed and video recorded. Participants also wore a body camera (eButton) and completed a questionnaire during the same event. PARTICIPANTS/SETTING Parents with a school-aged child were recruited as dyads (n=40). Data collection procedures took place in participant homes during evening meal preparation events. MAIN OUTCOME MEASURES Food preparation data were collected from parents through direct observation during preparation as well as eButton and paper questionnaires completed immediately after the event. STATISTICAL ANALYSES PERFORMED All data sets were analyzed using the Healthy Cooking Index coding system and compared for concordance. A paired sample t test was used to examine significant differences between the scores. Cronbach's α and principal components analysis were conducted on the observed Healthy Cooking Index items to examine patterns of cooking practices. RESULTS Two main components of cooking practices emerged from the principal components analysis: one focused on meat products and another on health and taste enhancing practices. The eButton was more accurate in collecting Healthy Cooking Index practices than the self-report questionnaire. Significant differences were found between participant reported and observed summative Healthy Cooking Index scores (P<0.001), with no significant differences between scores computed from eButton images and observations (P=0.187). CONCLUSIONS This is the first study to examine nutrition optimizing home cooking practices by observational, wearable camera and self-report data collection methods. By strengthening cooking behavior assessment tools, future research will be able to elucidate the transmission of cooking education through interventions and the relationships between cooking practices, disease prevention, and health.
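Of the statistics named above, Cronbach's alpha is the least familiar to compute by hand: alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scores). The sketch below evaluates it on a small fabricated matrix of Healthy Cooking Index item scores (rows are cooking events, columns are items); the data are invented for illustration only and are not from the study.

```python
import numpy as np

# Fabricated item scores: rows = observed cooking events, columns = index items
items = np.array([
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0],
])

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(items), 2))  # 0.75 for this toy matrix
```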
12. Jobarteh ML, McCrory MA, Lo B, Sun M, Sazonov E, Anderson AK, Jia W, Maitland K, Qiu J, Steiner-Asiedu M, Higgins JA, Baranowski T, Olupot-Olupot P, Frost G. Development and Validation of an Objective, Passive Dietary Assessment Method for Estimating Food and Nutrient Intake in Households in Low- and Middle-Income Countries: A Study Protocol. Curr Dev Nutr 2020; 4:nzaa020. [PMID: 32099953; PMCID: PMC7031207; DOI: 10.1093/cdn/nzaa020]
Abstract
Malnutrition is a major concern in low- and middle-income countries (LMIC), but the full extent of nutritional deficiencies remains unknown largely due to lack of accurate assessment methods. This study seeks to develop and validate an objective, passive method of estimating food and nutrient intake in households in Ghana and Uganda. Household members (including under-5s and adolescents) are assigned a wearable camera device to capture images of their food intake during waking hours. Using custom software, images captured are then used to estimate an individual's food and nutrient (i.e., protein, fat, carbohydrate, energy, and micronutrients) intake. Passive food image capture and assessment provides an objective measure of food and nutrient intake in real time, minimizing some of the limitations associated with self-reported dietary intake methods. Its use in LMIC could potentially increase the understanding of a population's nutritional status, and the contribution of household food intake to the malnutrition burden. This project is registered at clinicaltrials.gov (NCT03723460).
Affiliation(s)
- Modou L Jobarteh
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
- Megan A McCrory
  - Department of Health Sciences, Boston University, Boston, MA, USA
- Benny Lo
  - Hamlyn Centre, Imperial College London, London, UK
- Mingui Sun
  - Department of Neurological Surgery, University of Pittsburgh, PA, USA
- Edward Sazonov
  - Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL, USA
- Alex K Anderson
  - Department of Foods and Nutrition, University of Georgia, Athens, GA, USA
- Wenyan Jia
  - Department of Neurological Surgery, University of Pittsburgh, PA, USA
- Jianing Qiu
  - Hamlyn Centre, Imperial College London, London, UK
- Janine A Higgins
  - Department of Pediatrics, Section of Endocrinology, University of Colorado, Anschutz Medical Campus, Aurora, CO, USA
- Tom Baranowski
  - USDA/ARS Children's Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Peter Olupot-Olupot
  - Mbale Clinical Research Institute, Mbale Regional Referral and Teaching Hospital, Mbale, Uganda
- Gary Frost
  - Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
13. Foster E, Lee C, Imamura F, Hollidge SE, Westgate KL, Venables MC, Poliakov I, Rowland MK, Osadchiy T, Bradley JC, Simpson EL, Adamson AJ, Olivier P, Wareham N, Forouhi NG, Brage S. Validity and reliability of an online self-report 24-h dietary recall method (Intake24): a doubly labelled water study and repeated-measures analysis. J Nutr Sci 2019; 8:e29. [PMID: 31501691; PMCID: PMC6722486; DOI: 10.1017/jns.2019.20]
Abstract
Online self-reported 24-h dietary recall systems promise increased feasibility of dietary assessment. Comparison against interviewer-led recalls established their convergent validity; however, reliability and criterion-validity information is lacking. The validity of energy intakes (EI) reported using Intake24, an online 24-h recall system, was assessed against concurrent measurement of total energy expenditure (TEE) using doubly labelled water in ninety-eight UK adults (40-65 years). Accuracy and precision of EI were assessed using correlation and Bland-Altman analysis. Test-retest reliability of energy and nutrient intakes was assessed using data from three further UK studies where participants (11-88 years) completed Intake24 at least four times; reliability was assessed using intra-class correlations (ICC). Compared with TEE, participants under-reported EI by 25 % (95 % limits of agreement -73 % to +68 %) in the first recall, 22 % (-61 % to +41 %) for average of first two, and 25 % (-60 % to +28 %) for first three recalls. Correlations between EI and TEE were 0·31 (first), 0·47 (first two) and 0·39 (first three recalls), respectively. ICC for a single recall was 0·35 for EI and ranged from 0·31 for Fe to 0·43 for non-milk extrinsic sugars (NMES). Considering pairs of recalls (first two v. third and fourth recalls), ICC was 0·52 for EI and ranged from 0·37 for fat to 0·63 for NMES. EI reported with Intake24 was moderately correlated with objectively measured TEE and underestimated on average to the same extent as seen with interviewer-led 24-h recalls and estimated weight food diaries. Online 24-h recall systems may offer low-cost, low-burden alternatives for collecting dietary information.
Affiliation(s)
- Emma Foster
  - Human Nutrition Research Centre, Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
- Clement Lee
  - School of Mathematics, Statistics and Physics, Newcastle University, Newcastle upon Tyne, UK
  - Department of Mathematics and Statistics, Lancaster University, Lancaster, UK
- Fumiaki Imamura
  - MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
- Ivan Poliakov
  - Open Lab, School of Computing Science, Newcastle University, Newcastle upon Tyne, UK
- Maisie K. Rowland
  - Human Nutrition Research Centre, Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
- Timur Osadchiy
  - Open Lab, School of Computing Science, Newcastle University, Newcastle upon Tyne, UK
- Jennifer C. Bradley
  - Human Nutrition Research Centre, Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
- Emma L. Simpson
  - Open Lab, School of Computing Science, Newcastle University, Newcastle upon Tyne, UK
- Ashley J. Adamson
  - Human Nutrition Research Centre, Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
- Patrick Olivier
  - Faculty of Information Technology, Monash University, Clayton, VIC, Australia
- Nick Wareham
  - MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
- Nita G. Forouhi
  - MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
- Soren Brage
  - MRC Epidemiology Unit, University of Cambridge, Cambridge, UK
14. Yu H, Pan G, Pan M, Li C, Jia W, Zhang L, Sun M. A Hierarchical Deep Fusion Framework for Egocentric Activity Recognition Using a Wearable Hybrid Sensor System. Sensors (Basel) 2019; 19:E546. [PMID: 30696100; PMCID: PMC6386921; DOI: 10.3390/s19030546]
Abstract
Recently, egocentric activity recognition has attracted considerable attention in the pattern recognition and artificial intelligence communities because of its wide applicability in medical care, smart homes, and security monitoring. In this study, we developed and implemented a deep-learning-based hierarchical fusion framework for the recognition of egocentric activities of daily living (ADLs) in a wearable hybrid sensor system comprising motion sensors and cameras. Long short-term memory (LSTM) and a convolutional neural network are used to perform egocentric ADL recognition based on motion sensor data and photo streams in different layers, respectively. The motion sensor data are used solely for activity classification according to motion state, while the photo stream is used for further specific activity recognition within the motion state groups. Thus, both motion sensor data and photo stream work in their most suitable classification mode to significantly reduce the negative influence of sensor differences on the fusion results. Experimental results show that the proposed method not only is more accurate than the existing direct fusion method (by up to 6%) but also avoids the time-consuming computation of optical flow in the existing method, which makes the proposed algorithm less complex and more suitable for practical application.
Affiliation(s)
- Haibin Yu
  - College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
- Guoxiong Pan
  - College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
- Mian Pan
  - College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
- Chong Li
  - College of Electronics and Information, Hangzhou Dianzi University, Hangzhou 310018, China
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, PA 15261, USA
- Li Zhang
  - School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
- Mingui Sun
  - Department of Neurological Surgery, University of Pittsburgh, PA 15213, USA
15. Food Volume Estimation Based on Deep Learning View Synthesis from a Single Depth Map. Nutrients 2018; 10:nu10122005. [PMID: 30567362; PMCID: PMC6316017; DOI: 10.3390/nu10122005]
Abstract
An objective dietary assessment system can help users to understand their dietary behavior and enable targeted interventions to address underlying health problems. To accurately quantify dietary intake, measurement of the portion size or food volume is required. For volume estimation, previous research studies mostly focused on model-based or stereo-based approaches, which rely on manual intervention or require users to capture multiple frames from different viewing angles, a process that can be tedious. In this paper, a view synthesis approach based on deep learning is proposed to reconstruct 3D point clouds of food items and estimate the volume from a single depth image. A distinct neural network is designed to use a depth image from one viewing angle to predict another depth image captured from the corresponding opposite viewing angle. The whole 3D point cloud map is then reconstructed by fusing the initial data points with the synthesized points of the object items through the proposed point cloud completion and Iterative Closest Point (ICP) algorithms. Furthermore, a database with depth images of food object items captured from different viewing angles is constructed with image rendering and used to validate the proposed neural network. The methodology is then evaluated by comparing the volume estimated from the synthesized 3D point cloud with the ground truth volume of the object items.
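After the point cloud has been completed, the volume still has to be read off the 3D points. For roughly convex food items, one simple estimator is the volume of the convex hull of the cloud; the sketch below is a generic illustration of that idea (assuming SciPy is available), not necessarily the volume estimator used in the paper.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

# Stand-in "reconstructed" point cloud: points sampled inside a half-ellipsoid
# roughly the size of a scoop of food (semi-axes 4 cm x 3 cm x 3 cm).
pts = rng.uniform(-1.0, 1.0, size=(20000, 3))
pts = pts[(np.sum(pts ** 2, axis=1) <= 1.0) & (pts[:, 2] >= 0.0)]
pts = pts * np.array([4.0, 3.0, 3.0])  # scale the unit half-ball to the half-ellipsoid

hull = ConvexHull(pts)
analytic = 2.0 / 3.0 * np.pi * 4.0 * 3.0 * 3.0  # exact half-ellipsoid volume
print(f"hull volume ≈ {hull.volume:.1f} cm^3 (analytic: {analytic:.1f} cm^3)")
```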
16. Spruijt-Metz D, Wen CKF, Bell BM, Intille S, Huang JS, Baranowski T. Advances and Controversies in Diet and Physical Activity Measurement in Youth. Am J Prev Med 2018; 55:e81-e91. [PMID: 30135037; PMCID: PMC6151143; DOI: 10.1016/j.amepre.2018.06.012]
Abstract
Technological advancements in the past decades have improved dietary intake and physical activity measurements. This report reviews current developments in dietary intake and physical activity assessment in youth. Dietary intake assessment has relied predominantly on self-report or image-based methods to measure key aspects of dietary intake (e.g., food types, portion size, eating occasion), which are prone to notable methodologic (e.g., recall bias) and logistic (e.g., participant and researcher burden) challenges. Although there have been improvements in automatic eating detection, artificial intelligence, and sensor-based technologies, participant input is often needed to verify food categories and portions. Current physical activity assessment methods, including self-report, direct observation, and wearable devices, provide researchers with reliable estimations for energy expenditure and bodily movement. Recent developments in algorithms that incorporate signals from multiple sensors and technology-augmented self-reporting methods have shown preliminary efficacy in measuring specific types of activity patterns and relevant contextual information. However, challenges in detecting resistance (e.g., in resistance training, weight lifting), prolonged physical activity monitoring, and algorithm (non)equivalence remain to be addressed. In summary, although dietary intake assessment methods have yet to achieve the same validity and reliability as physical activity measurement, recent developments in wearable technologies in both arenas have the potential to improve current assessment methods. THEME INFORMATION This article is part of a theme issue entitled Innovative Tools for Assessing Diet and Physical Activity for Health Promotion, which is sponsored by the North American branch of the International Life Sciences Institute.
Affiliation(s)
- Donna Spruijt-Metz
  - Center for Economic and Social Research, University of Southern California, Los Angeles, California
  - Department of Psychology, University of Southern California, Los Angeles, California
  - Department of Preventive Medicine, University of Southern California, Los Angeles, California
- Cheng K Fred Wen
  - Department of Preventive Medicine, University of Southern California, Los Angeles, California
- Brooke M Bell
  - Department of Preventive Medicine, University of Southern California, Los Angeles, California
- Stephen Intille
  - College of Computer and Information Science, Northeastern University, Boston, Massachusetts
  - Department of Health Sciences, Bouvé College of Health Sciences, Northeastern University, Boston, Massachusetts
- Jeannie S Huang
  - Department of Pediatrics, School of Medicine, University of California at San Diego, San Diego, California
  - Rady Children's Hospital, San Diego, California
- Tom Baranowski
  - Department of Pediatrics, Baylor College of Medicine, Houston, Texas
17. Beltran A, Dadabhoy H, Ryan C, Dholakia R, Jia W, Baranowski J, Sun M, Baranowski T. Dietary Assessment with a Wearable Camera among Children: Feasibility and Intercoder Reliability. J Acad Nutr Diet 2018; 118:2144-2153. [PMID: 30115556; DOI: 10.1016/j.jand.2018.05.013]
Abstract
BACKGROUND The eButton, a multisensor device worn on the chest, uses a camera to passively capture images of everything in front of the child throughout the day. These images can be analyzed to provide a passive method of dietary intake assessment. OBJECTIVE This study assessed the eButton's feasibility and intercoder reliability for dietary intake assessment. DESIGN Children were recruited in the summer and fall of 2015, in Houston, TX, to wear the eButton for 2 full days of dietary images, and each child-parent dyad participated in a following-day interview to verify what dietitians recorded from the images. PARTICIPANTS/SETTING Thirty 9- to 13-year-old children participated during days convenient to them. MAIN OUTCOME MEASURES Two dietitians independently manually reviewed the images to identify eating events, foods in those events, and portion sizes. STATISTICAL ANALYSES PERFORMED Descriptive statistics of agreements and disagreements were calculated between dietitians and with children; t tests and Bland-Altman plots of differences in total kilocalories were calculated between dietitians and between initial dietitian estimates and those finalized after the verification interviews. RESULTS The dietitians agreed on the identity of 60.5% of the 1,026 foods but disagreed on 28.6% of the foods and on the names for 10.8% of the foods. After the verification interviews, the dietitians agreed with the child-parent dyads on the identity of 77.0% of the 921 foods; the child-parent dyad identified 12.4% of the day's foods when images were not available or not clear; the child-parent dyad clarified that 5.4% of the foods identified were not consumed by the child; and the child-parent dyad clarified the identity of 5.2% of the foods. A software-based approach (three-dimensional wire mesh) could be used to estimate portion size for 24% of the foods, and professional judgment was required for 67.8%. Mean caloric intakes per day were not statistically significantly different between dietitians but were different between dietitians and child-parent dyads in total and on day 2. CONCLUSIONS An early test of this all-day image method of dietary intake assessment obtained an intraclass correlation coefficient of 0.67 for agreement between the two dietitians processing the images. A following-day verification interview with the child and parent was necessary to ensure completeness of estimates. Several feasibility problems occurred, which may be remedied with additional participant and dietitian training and further technological development.
18. Image-based food portion size estimation using a smartphone without a fiducial marker. Public Health Nutr 2018; 22:1180-1192. [PMID: 29623867; DOI: 10.1017/s136898001800054x]
Abstract
OBJECTIVE Current approaches to food volume estimation require the person to carry a fiducial marker (e.g. a checkerboard card), to be placed next to the food before taking a picture. This procedure is inconvenient, and post-processing of the food picture is time-consuming and sometimes inaccurate. These problems keep people from using the smartphone for self-administered dietary assessment. The current bioengineering study presents a novel smartphone-based imaging approach to table-side estimation of food volume which overcomes current limitations. DESIGN We present a new method for food volume estimation without a fiducial marker. Our mathematical model indicates that, using a special picture-taking strategy, the smartphone-based imaging system can be calibrated adequately if the physical length of the smartphone and the output of the motion sensor within the device are known. We also present and test a new virtual reality method for food volume estimation using the International Food Unit™ and a training process for error control. RESULTS Our pilot study, with sixty-nine participants and fifteen foods, indicates that the fiducial-marker-free approach is valid and that the training improves estimation accuracy significantly (P < 0·05). CONCLUSIONS Elimination of a fiducial marker and application of virtual reality, the International Food Unit™ and automated training allowed quick food volume estimation and control of the estimation error. The estimated volume could be used to search a nutrient database and determine energy and nutrients in the diet.
19. Automatic food detection in egocentric images using artificial intelligence technology. Public Health Nutr 2018; 22:1168-1179. [PMID: 29576027; DOI: 10.1017/s1368980018000538]
Abstract
OBJECTIVE To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment. DESIGN To study human diet and lifestyle, large sets of egocentric images were acquired using a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, which formed eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from a research participant in a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at both home and restaurants, cooking, shopping, gardening, housekeeping chores, taking classes, gym exercise, etc. All images in these data sets were classified as food/non-food images based on their tags generated by a convolutional neural network. RESULTS A cross data-set test was conducted on eButton data set 1. The overall accuracy of food detection was 91·5 and 86·4 %, respectively, when one-half of data set 1 was used for training and the other half for testing. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained if both 'food' and 'drink' were considered as food images. Alternatively, if only 'food' items were considered, the sensitivity and specificity reached 85·0 and 85·8 %, respectively. CONCLUSIONS The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
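The sensitivity and specificity quoted above come directly from the food/non-food confusion counts. The sketch below shows the arithmetic; the counts are made up and merely chosen to echo percentages of the same order as those reported.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a week of egocentric images classified as food vs non-food
sens, spec = sensitivity_specificity(tp=1850, fn=650, tn=23000, fp=3400)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 74.0%, 87.1%
```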
Collapse
|
20
|
Raber M, Patterson M, Jia W, Sun M, Baranowski T. Utility of eButton images for identifying food preparation behaviors and meal-related tasks in adolescents. Nutr J 2018; 17:32. [PMID: 29477143 PMCID: PMC6389239 DOI: 10.1186/s12937-018-0341-2] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2017] [Accepted: 02/15/2018] [Indexed: 01/08/2023] Open
Abstract
BACKGROUND Food preparation skills may encourage healthy eating. Traditional assessment of child food preparation employs self- or parent proxy-reporting methods, which are prone to error. The eButton is a wearable all-day camera that shows promise as an objective, passive tool for measuring children's food preparation practices. PURPOSE This paper explores the feasibility of using the eButton to reliably capture home food preparation behaviors and practices in a sample of pre- and early adolescents (ages 9 to 13). METHODS This is a secondary analysis of two eButton pilot projects evaluating the dietary intake of pre- and early adolescents in or around Houston, Texas. Food preparation behaviors were coded into seven major categories: browsing, altering food/adding seasoning, food media, meal-related tasks, prep work, cooking and observing. Inter-coder reliability was measured using Cohen's kappa and percent agreement. RESULTS Analysis was completed on data for 31 participants. The most common activity was browsing in the pantry or fridge. Few participants demonstrated any food preparation work beyond unwrapping food packages and combining two or more ingredients; actual cutting or measuring of foods was rare. CONCLUSIONS Although previous research suggests children who "help" prepare meals may obtain some dietary benefit, accurate tools for assessing food preparation behavior are lacking. The eButton offers a feasible approach to measuring food preparation behavior among pre- and early adolescents. Follow-up research exploring the validity of this method in a larger sample, and comparing cooking behavior with dietary intake, is needed.
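A minimal sketch of the inter-coder reliability statistics named above (Cohen's kappa and percent agreement), assuming two coders each assign one of the seven behavior categories to every coded segment; the example labels are illustrative only.

```python
from collections import Counter

def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Fraction of items on which the two coders chose the same category."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Cohen's kappa: agreement between two coders corrected for chance."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["browsing", "cooking", "prep work", "browsing", "observing"]
b = ["browsing", "cooking", "observing", "browsing", "observing"]
print(cohens_kappa(a, b), percent_agreement(a, b))
```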
Collapse
Affiliation(s)
- Margaret Raber
- Department of Pediatrics Research, University of Texas MD Anderson Cancer Center, Houston, USA
| | - Monika Patterson
- USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine, Houston, USA
| | - Wenyan Jia
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
| | - Mingui Sun
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, USA
| | - Tom Baranowski
- USDA/ARS Children’s Nutrition Research Center, Baylor College of Medicine, Houston, USA
| |
Collapse
|
21
|
Bell W, Colaiezzi BA, Prata CS, Coates JC. Scaling up Dietary Data for Decision-Making in Low-Income Countries: New Technological Frontiers. Adv Nutr 2017; 8:916-932. [PMID: 29141974 PMCID: PMC5683006 DOI: 10.3945/an.116.014308] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Dietary surveys in low-income countries (LICs) are hindered by low investment in the necessary research infrastructure, including a lack of basic technology for data collection, links to food composition information, and data processing. The result has been a dearth of dietary data in many LICs because of the high cost and time burden associated with dietary surveys, which are typically carried out by interviewers using pencil and paper. This study reviewed innovative dietary assessment technologies and gauged their suitability for improving the quality of dietary data collected in LICs and reducing the time required to collect them. Predefined search terms were used to identify technologies from peer-reviewed and gray literature. A total of 78 technologies were identified and grouped into 6 categories: 1) computer- and tablet-based, 2) mobile-based, 3) camera-enabled, 4) scale-based, 5) wearable, and 6) handheld spectrometers. For each technology, information was extracted on a number of overarching factors, including the primary purpose, mode of administration, and data processing capabilities. Each technology was then assessed against predetermined criteria, including respondent literacy requirements, battery life, connectivity requirements, ability to measure macro- and micronutrients, and overall appropriateness for use in LICs. Few of the technologies reviewed met all the criteria; most exhibited practical constraints and lacked demonstrated feasibility for use in LICs, particularly in large-scale, population-based surveys. To increase collection of dietary data in LICs, development of a contextually adaptable, interviewer-administered dietary assessment platform is recommended. Additional investments in the research infrastructure are equally important to ensure time and cost savings for the user.
Collapse
Affiliation(s)
- Winnie Bell
- Friedman School of Nutrition Science and Policy, Tufts University, Boston, MA
| | - Brooke A Colaiezzi
- Friedman School of Nutrition Science and Policy, Tufts University, Boston, MA
| | - Cathleen S Prata
- Friedman School of Nutrition Science and Policy, Tufts University, Boston, MA
| | - Jennifer C Coates
- Friedman School of Nutrition Science and Policy, Tufts University, Boston, MA. Address correspondence to JCC.
| |
Collapse
|
22
|
Hierarchical Activity Recognition Using Smart Watches and RGB-Depth Cameras. SENSORS 2016; 16:s16101713. [PMID: 27754458 PMCID: PMC5087501 DOI: 10.3390/s16101713] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2016] [Revised: 09/25/2016] [Accepted: 10/07/2016] [Indexed: 11/25/2022]
Abstract
Human activity recognition is important for healthcare and lifestyle evaluation. In this paper, a novel method for activity recognition is presented that jointly considers motion sensor data recorded by wearable smart watches and image data captured by RGB-Depth (RGB-D) cameras. A normalized cross-correlation-based mapping method is implemented to associate the motion sensor data with the corresponding image data from the same person in multi-person situations. Furthermore, to improve recognition performance and accuracy, a hierarchical structure with an automatic group selection method is proposed. With this method, if the number of activities to be classified changes, the structure adapts accordingly without manual intervention. Comparative experiments against single-data-source and single-layer methods show that our method is more accurate and robust.
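A minimal sketch of the normalized cross-correlation association step described above, assuming each tracked person's image-derived motion signal has already been extracted and resampled to the same rate and length as the watch accelerometer magnitude; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def normalized_xcorr(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-lag normalized cross-correlation between two equal-length signals."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def match_watch_to_person(watch_motion: np.ndarray,
                          person_motions: dict[str, np.ndarray]) -> str:
    """Associate the watch with the tracked person whose image-derived motion
    signal correlates best with the watch signal."""
    return max(person_motions,
               key=lambda pid: normalized_xcorr(watch_motion, person_motions[pid]))

rng = np.random.default_rng(0)
watch = rng.normal(size=200)
people = {"person_1": watch + 0.3 * rng.normal(size=200),   # same wearer, noisy copy
          "person_2": rng.normal(size=200)}                 # unrelated person
print(match_watch_to_person(watch, people))                 # expected: person_1
```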
Collapse
|
23
|
Wen W, Zeng J, Hu G, Liu G. A System for the Comprehensive Quantification of Real-Time Heartbeat Activity. JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS 2016. [DOI: 10.20965/jaciii.2016.p0765] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Heartbeat activity reflects the dynamics of the cardiac control system and is a commonly used index in health monitoring, exercise load calculation and psycho-physiological arousal quantification. This paper fuses three heartbeat measures, namely the running mean, the range of local Hurst exponents and the relative fluctuation, to construct a system that automatically quantifies heartbeat activity in real time from both its static and its dynamic aspects. Experiments show that the system can reveal the difference in heartbeat arousal between a physically relaxed state and an exercise-loaded state. When affective heartbeat data from the literature are quantified by this system, the results also demonstrate its ability to characterize psycho-physiological arousal.
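A minimal sketch of the three measures named above: a running mean, a relative fluctuation (window standard deviation over window mean), and a rescaled-range (R/S) estimate of the Hurst exponent, from which a range of local exponents could be obtained by applying the estimator over sliding segments. Window and segment sizes are assumptions for illustration, not the parameters used in the paper.

```python
import numpy as np

def running_mean(x: np.ndarray, w: int) -> np.ndarray:
    """Sliding-window mean of a heartbeat series (e.g. RR intervals in seconds)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def relative_fluctuation(x: np.ndarray, w: int) -> np.ndarray:
    """Sliding-window standard deviation divided by the window mean."""
    means = running_mean(x, w)
    sq_means = running_mean(x * x, w)
    return np.sqrt(np.maximum(sq_means - means ** 2, 0.0)) / means

def hurst_rs(x: np.ndarray, sizes=(8, 16, 32, 64)) -> float:
    """Rescaled-range (R/S) estimate of the Hurst exponent for one segment."""
    log_rs, log_n = [], []
    for n in sizes:
        if len(x) < 2 * n:
            continue
        segs = x[: (len(x) // n) * n].reshape(-1, n)
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)   # range of cumulative deviation
        s = segs.std(axis=1)                    # per-segment standard deviation
        keep = s > 0
        if keep.any():
            log_rs.append(np.log((r[keep] / s[keep]).mean()))
            log_n.append(np.log(n))
    return float(np.polyfit(log_n, log_rs, 1)[0])  # slope of log(R/S) vs log(n)

rr = np.random.default_rng(0).normal(0.8, 0.05, 300)   # synthetic RR intervals (s)
print(running_mean(rr, 30)[:3], relative_fluctuation(rr, 30)[:3], hurst_rs(rr))
```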
Collapse
|
24
|
Beltran A, Dadabhoy H, Chen TA, Lin C, Jia W, Baranowski J, Yan G, Sun M, Baranowski T. Adapting the eButton to the Abilities of Children for Diet Assessment. PROCEEDINGS OF MEASURING BEHAVIOR 2016 : 10TH INTERNATIONAL CONFERENCE ON METHODS AND TECHNIQUES IN BEHAVIORAL RESEARCH. INTERNATIONAL CONFERENCE ON METHODS AND TECHNIQUES IN BEHAVIORAL RESEARCH (10TH : 2016 : DUBLIN, IRELAND) 2016; 2016:72-81. [PMID: 31742257 PMCID: PMC6859905] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Affiliation(s)
- A Beltran
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| | - H Dadabhoy
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| | - T A Chen
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| | - C Lin
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| | - W Jia
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
| | - J Baranowski
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| | - G Yan
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - M Sun
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
| | - T Baranowski
- Department of Pediatrics, USDA/ARS Children's Nutrition Research Center, Baylor College of Medicine, Houston, TX, USA.
| |
Collapse
|