1. Bianco R, Marinoni M, Coluccia S, Carioni G, Fiori F, Gnagnarella P, Edefonti V, Parpinel M. Tailoring the Nutritional Composition of Italian Foods to the US Nutrition5k Dataset for Food Image Recognition: Challenges and a Comparative Analysis. Nutrients 2024; 16:3339. [PMID: 39408306] [PMCID: PMC11479105] [DOI: 10.3390/nu16193339]
Abstract
BACKGROUND Training machine learning algorithms on dish images collected in other countries requires tackling possible sources of systematic discrepancy, including country-specific food composition databases (FCDBs). The US Nutrition5k project provides ~5000 dish images and related dish- and ingredient-level information on mass, energy, and macronutrients derived from the US FCDB. The aims of this study are to (1) identify challenges and solutions in linking the nutritional composition of Italian foods with food images from Nutrition5k and (2) assess potential differences in nutrient content estimated across the Italian and US FCDBs and their determinants. METHODS After food matching, expert data curation, and handling of missing values, dish-level ingredients from Nutrition5k were integrated with the Italian-FCDB-specific nutritional composition (86 components); dish-specific nutrient content was calculated by summing the corresponding ingredient-specific nutritional values. Measures of agreement/difference were calculated between the Italian- and US-FCDB-specific content of energy and macronutrients. Potential determinants of the identified differences were investigated with multiple robust regression models. RESULTS Dishes had a median mass of 145 g and a median of three ingredients. Energy, proteins, fats, and carbohydrates showed moderate-to-strong agreement between the Italian- and US-FCDB-specific content; carbohydrates showed the worst performance, with the Italian FCDB providing smaller median values (median raw difference between the Italian and US FCDBs: -2.10 g). Regression models on dishes suggested a role for mass, number of ingredients, and presence of recreated recipes, alone or jointly with differential use of raw/cooked ingredients across the two FCDBs. CONCLUSIONS In the era of machine learning approaches for food image recognition, manual data curation in the alignment of FCDBs is worth the effort.
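The dish-level calculation described in the Methods reduces to scaling each ingredient's per-100 g composition by its mass and summing across ingredients. A minimal Python sketch of that step follows; the ingredient names, masses, and composition values are hypothetical and are not taken from Nutrition5k or the Italian FCDB.

```python
# Sketch: dish-level nutrient content as the sum of ingredient-level contributions.
# Composition values are per 100 g of ingredient; masses are in grams.
from typing import Dict

# Hypothetical per-100 g composition table (kcal, protein g, fat g, carbohydrate g).
FCDB: Dict[str, Dict[str, float]] = {
    "pasta, cooked": {"energy_kcal": 158.0, "protein_g": 5.8, "fat_g": 0.9, "carb_g": 30.9},
    "tomato sauce":  {"energy_kcal": 29.0,  "protein_g": 1.4, "fat_g": 0.2, "carb_g": 5.6},
    "olive oil":     {"energy_kcal": 884.0, "protein_g": 0.0, "fat_g": 100.0, "carb_g": 0.0},
}

def dish_nutrients(ingredients: Dict[str, float]) -> Dict[str, float]:
    """Sum ingredient-specific nutrient values scaled by ingredient mass (g)."""
    totals = {"energy_kcal": 0.0, "protein_g": 0.0, "fat_g": 0.0, "carb_g": 0.0}
    for name, grams in ingredients.items():
        for nutrient, per100 in FCDB[name].items():
            totals[nutrient] += per100 * grams / 100.0
    return totals

print(dish_nutrients({"pasta, cooked": 120, "tomato sauce": 60, "olive oil": 10}))
```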
Affiliation(s)
- Rachele Bianco
  - Department of Medicine—DMED, Università degli Studi di Udine, 33100 Udine, Italy
- Michela Marinoni
  - Branch of Medical Statistics, Biometry and Epidemiology “G. A. Maccacaro”, Department of Clinical Sciences and Community Health, Dipartimento di Eccellenza 2023–2027, Università degli Studi di Milano, 20133 Milan, Italy
- Sergio Coluccia
  - Branch of Medical Statistics, Biometry and Epidemiology “G. A. Maccacaro”, Department of Clinical Sciences and Community Health, Dipartimento di Eccellenza 2023–2027, Università degli Studi di Milano, 20133 Milan, Italy
- Giulia Carioni
  - Department of Medicine—DMED, Università degli Studi di Udine, 33100 Udine, Italy
  - Division of Epidemiology and Biostatistics, European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Federica Fiori
  - Department of Medicine—DMED, Università degli Studi di Udine, 33100 Udine, Italy
- Patrizia Gnagnarella
  - Division of Epidemiology and Biostatistics, European Institute of Oncology, IRCCS, 20141 Milan, Italy
- Valeria Edefonti
  - Branch of Medical Statistics, Biometry and Epidemiology “G. A. Maccacaro”, Department of Clinical Sciences and Community Health, Dipartimento di Eccellenza 2023–2027, Università degli Studi di Milano, 20133 Milan, Italy
  - Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Maria Parpinel
  - Department of Medicine—DMED, Università degli Studi di Udine, 33100 Udine, Italy
2. Zhao Z, Wang R, Liu M, Bai L, Sun Y. Application of machine vision in food computing: A review. Food Chem 2024; 463:141238. [PMID: 39368204] [DOI: 10.1016/j.foodchem.2024.141238]
Abstract
As global intelligence advances and awareness of sustainable development grows, artificial intelligence technology is increasingly being applied to the food industry. This paper, grounded in practical application cases, reviews the current research status and prospects of machine vision-based image recognition technology in food computing. It explores the general workflow of image recognition and applications based on both traditional machine learning and deep learning methods. The paper covers areas including food safety detection, dietary nutrition analysis, process monitoring, and enterprise management model optimization. The aim is to provide a solid theoretical foundation and technical guidance for the integration and cross-fertilization of the food industry with artificial intelligence technology.
Affiliation(s)
- Zhiyao Zhao
  - School of Computer and Artificial Intelligence, School of Light Industry Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
- Rong Wang
  - School of Computer and Artificial Intelligence, School of Light Industry Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
- Minghao Liu
  - School of Computer and Artificial Intelligence, School of Light Industry Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
- Lin Bai
  - School of Computer and Artificial Intelligence, School of Light Industry Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
- Ying Sun
  - School of Computer and Artificial Intelligence, School of Light Industry Science and Engineering, Beijing Technology and Business University, Beijing 100048, China
3. Li B, Sun M, Mao ZH, Jia W. Dining Bowl Modeling and Optimization for Single-Image-Based Dietary Assessment. Sensors (Basel, Switzerland) 2024; 24:6058. [PMID: 39338803] [PMCID: PMC11435675] [DOI: 10.3390/s24186058]
Abstract
In dietary assessment using a single-view food image, an object of known size, such as a checkerboard, is often placed manually in the camera's view as a scale reference to estimate food volume. This traditional scale reference is inconvenient to use because of the manual placement requirement. Consequently, utensils, such as plates and bowls, have been suggested as alternative references. Although these references do not need a manual placement procedure, there is a unique challenge when a dining bowl is used as a reference. Unlike a dining plate, whose shallow shape does not usually block the view of the food, a dining bowl does obscure the food view, and its shape may not be fully observable from the single-view food image. As a result, significant errors may occur in food volume estimation due to the unknown shape of the bowl. To address this challenge, we present a novel method to premeasure both the size and shape of the empty bowl before it is used in a dietary assessment study. In our method, an image is taken with a labeled paper ruler adhered to the interior surface of the bowl, a mathematical model is developed to describe its shape and size, and then an optimization method is used to determine the bowl parameters based on the locations of the observed ruler markers in the bowl image. Experimental studies were performed using both simulated and actual bowls to assess the reliability and accuracy of our bowl measurement method.
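To illustrate the kind of parameter fitting described above, here is a minimal sketch that fits a simple parametric bowl profile (interior radius as a power-law function of height) to observed ruler-marker coordinates by nonlinear least squares. The profile model, parameter names, and data points are hypothetical simplifications, not the authors' actual bowl model.

```python
# Sketch: fit a simple bowl profile r(h) = R * (h / H) ** k to marker points
# measured on the bowl interior, using nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

# Hypothetical (height, radius) observations in cm, e.g., recovered from ruler markers.
h_obs = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])
r_obs = np.array([2.1, 4.0, 5.3, 6.3, 7.1, 7.8])

def residuals(params, h, r):
    R, H, k = params                      # rim radius, bowl depth, shape exponent
    return R * (h / H) ** k - r

fit = least_squares(residuals, x0=[8.0, 6.0, 0.5],
                    bounds=([1.0, 5.6, 0.1], [50.0, 50.0, 5.0]),
                    args=(h_obs, r_obs))
R, H, k = fit.x
print(f"rim radius {R:.2f} cm, depth {H:.2f} cm, exponent {k:.2f}")

# With the fitted profile, the bowl capacity follows from the solid of revolution:
# V = integral over h of pi * r(h)^2 dh, evaluated numerically here.
h_grid = np.linspace(0.0, H, 200)
volume_ml = np.trapz(np.pi * (R * (h_grid / H) ** k) ** 2, h_grid)
print(f"approximate capacity: {volume_ml:.0f} mL")
```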
Affiliation(s)
- Boyang Li
  - Department of Electrical & Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Mingui Sun
  - Department of Electrical & Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Zhi-Hong Mao
  - Department of Electrical & Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Wenyan Jia
  - Department of Electrical & Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
4. Baumgartner M, Kuhn C, Nakas CT, Herzig D, Bally L. Carbohydrate Estimation Accuracy of Two Commercially Available Smartphone Applications vs Estimation by Individuals With Type 1 Diabetes: A Comparative Study. J Diabetes Sci Technol 2024:19322968241264744. [PMID: 39058316] [DOI: 10.1177/19322968241264744]
Abstract
BACKGROUND Despite remarkable progress in diabetes technology, most systems still require estimating meal carbohydrate (CHO) content for meal-time insulin delivery. Emerging smartphone applications may obviate this need, but performance data in relation to patient estimates remain scarce. OBJECTIVE The objective is to assess the accuracy of two commercial CHO estimation applications, SNAQ and Calorie Mama, and compare their performance with the estimation accuracy of people with type 1 diabetes (T1D). METHODS Carbohydrate estimates of 53 individuals with T1D (aged ≥16 years) were compared with those of SNAQ (food recognition + quantification) and Calorie Mama (food recognition + adjustable standard portion size). Twenty-six cooked meals were prepared at the hospital kitchen. Each participant estimated the CHO content of two meals in three different sizes without assistance. Participants then used SNAQ for CHO quantification in one meal and Calorie Mama for the other (all three sizes). Accuracy was the estimate's deviation from ground-truth CHO content (weight multiplied by nutritional facts from recipe database). Furthermore, the applications were rated using the Mars-G questionnaire. RESULTS Participants' mean ± standard deviation (SD) absolute error was 21 ± 21.5 g (71 ± 72.7%). Calorie Mama had a mean absolute error of 24 ± 36.5 g (81.2 ± 123.4%). With a mean absolute error of 13.1 ± 11.3 g (44.3 ± 38.2%), SNAQ outperformed the estimation accuracy of patients and Calorie Mama (both P > .05). Error consistency (quantified by the within-participant SD) did not significantly differ between the methods. CONCLUSIONS SNAQ may provide effective CHO estimation support for people with T1D, particularly those with large or inconsistent CHO estimation errors. Its impact on glucose control remains to be evaluated.
Affiliation(s)
- Michelle Baumgartner
  - Department of Diabetes, Endocrinology, Nutritional Medicine and Metabolism, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
  - Department of Health Sciences and Technology, Eidgenössische Technische Hochschule Zurich, Zurich, Switzerland
- Christian Kuhn
  - Department of Diabetes, Endocrinology, Nutritional Medicine and Metabolism, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Christos T Nakas
  - School of Agricultural Sciences, Laboratory of Biometry, University of Thessaly, Volos, Greece
  - University Institute of Clinical Chemistry, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- David Herzig
  - Department of Diabetes, Endocrinology, Nutritional Medicine and Metabolism, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Lia Bally
  - Department of Diabetes, Endocrinology, Nutritional Medicine and Metabolism, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
5. Crystal AA, Valero M, Nino V, Ingram KH. Empowering Diabetics: Advancements in Smartphone-Based Food Classification, Volume Measurement, and Nutritional Estimation. Sensors (Basel, Switzerland) 2024; 24:4089. [PMID: 39000868] [PMCID: PMC11244259] [DOI: 10.3390/s24134089]
Abstract
Diabetes has emerged as a worldwide health crisis, affecting approximately 537 million adults. Maintaining healthy blood glucose levels requires careful attention to diet, physical activity, and adherence to medications where necessary. Diet monitoring has historically involved keeping food diaries; however, this process can be labor-intensive, and recollection of food items may introduce errors. Automated technologies such as food image recognition systems (FIRS) can make use of computer vision and mobile cameras to reduce the burden of keeping diaries and improve diet tracking. These tools provide various levels of diet analysis, and some offer further suggestions for improving the nutritional quality of meals. The current study is a systematic review of mobile computer vision-based approaches for food classification, volume estimation, and nutrient estimation. Relevant articles published over the last two decades are evaluated, and both future directions and issues related to FIRS are explored.
Affiliation(s)
- Afnan Ahmed Crystal
  - Department of Computer Science, Kennesaw State University, Kennesaw, GA 30060, USA
- Maria Valero
  - Department of Information Technology, Kennesaw State University, Kennesaw, GA 30060, USA
- Valentina Nino
  - Department of Industrial and Systems Engineering, Kennesaw State University, Kennesaw, GA 30060, USA
- Katherine H Ingram
  - Department of Exercise Science and Sport Management, Kennesaw State University, Kennesaw, GA 30060, USA
6. Chotwanvirat P, Prachansuwan A, Sridonpai P, Kriengsinyos W. Automated Artificial Intelligence-Based Thai Food Dietary Assessment System: Development and Validation. Curr Dev Nutr 2024; 8:102154. [PMID: 38774499] [PMCID: PMC11107195] [DOI: 10.1016/j.cdnut.2024.102154]
Abstract
Background Dietary assessment is a fundamental component of nutrition research and plays a pivotal role in managing chronic diseases. Traditional dietary assessment methods, particularly in the context of Thai cuisine, often require extensive training and may lead to estimation errors. Objectives To address these challenges, we developed Institute of Nutrition, Mahidol University (INMU) iFood, an innovative artificial intelligence-based Thai food dietary assessment system that estimates the nutritive values of dishes from food images. Methods INMU iFood leverages state-of-the-art technology and integrates a validated automated Thai food analysis system. Users can choose among three distinct input methods: food image recognition, manual input, and a convenient barcode scanner. This versatility simplifies the tracking of dietary intake while maximizing data quality at the individual level. The core improvement in INMU iFood can be attributed to two key factors, namely, the replacement of YOLOv4-tiny with YOLOv7 and the expansion of non-carbohydrate source foods in the training image dataset. Results This combination significantly enhances the system's ability to identify food items, especially in scenarios with closely packed food images, thus improving accuracy. Validation results showcase the superior performance of the YOLOv7-based INMU iFood system over its YOLOv4-based predecessor, with notable improvements in protein and fat estimation. Furthermore, INMU iFood addresses limitations by offering users the option to import additional food products via a barcode scanner, providing access to a vast database of nutritional information through Open Food Facts. This integration ensures users can track their dietary intake effectively, with over 3000 food items added to or updated in the Open Food Facts database, covering a wide variety of dietary choices. Conclusions INMU iFood is a promising tool for researchers, health care professionals, and individuals seeking to monitor their dietary intake within the context of Thai cuisine, ultimately promoting better health outcomes and facilitating nutrition-related research.
Affiliation(s)
- Phawinpon Chotwanvirat
  - Human Nutrition Unit, Food and Nutrition Academic and Research Cluster, Institute of Nutrition, Mahidol University, Nakhon Pathom, Thailand
  - Diabetes and Thyroid Center, Theptarin Hospital, Khlong Toei, Bangkok, Thailand
- Aree Prachansuwan
  - Human Nutrition Unit, Food and Nutrition Academic and Research Cluster, Institute of Nutrition, Mahidol University, Nakhon Pathom, Thailand
- Pimnapanut Sridonpai
  - Human Nutrition Unit, Food and Nutrition Academic and Research Cluster, Institute of Nutrition, Mahidol University, Nakhon Pathom, Thailand
- Wantanee Kriengsinyos
  - Human Nutrition Unit, Food and Nutrition Academic and Research Cluster, Institute of Nutrition, Mahidol University, Nakhon Pathom, Thailand
7. Konstantakopoulos FS, Georga EI, Fotiadis DI. A Review of Image-Based Food Recognition and Volume Estimation Artificial Intelligence Systems. IEEE Rev Biomed Eng 2024; 17:136-152. [PMID: 37276096] [DOI: 10.1109/rbme.2023.3283149]
Abstract
A healthy daily diet and a balanced intake of essential nutrients play an important role in the modern lifestyle. The estimation of a meal's nutrient content is an integral component of managing significant diseases, such as diabetes, obesity and cardiovascular disease. Lately, there has been increasing interest in the development and use of smartphone applications aimed at promoting healthy behaviours. The semi-automatic or automatic, precise and real-time estimation of the nutrients in daily consumed meals is approached in the relevant literature as a computer vision problem using food images taken with a user's smartphone. Herein, we present the state of the art in automatic food recognition and food volume estimation methods, starting from their basis, i.e., the food image databases. First, by methodically organizing the information extracted from the reviewed studies, this review enables a comprehensive and fair assessment of the methods and techniques applied for segmenting food images, classifying their food content and computing food volume, associating their results with the characteristics of the datasets used. Second, by reporting the strengths and limitations of these methods and proposing pragmatic solutions to the latter, this review can inspire future directions in the field of dietary assessment systems.
8. Larke JA, Chin EL, Bouzid YY, Nguyen T, Vainberg Y, Lee DH, Pirsiavash H, Smilowitz JT, Lemay DG. Surveying Nutrient Assessment with Photographs of Meals (SNAPMe): A Benchmark Dataset of Food Photos for Dietary Assessment. Nutrients 2023; 15:4972. [PMID: 38068830] [PMCID: PMC10708545] [DOI: 10.3390/nu15234972]
Abstract
Photo-based dietary assessment is becoming more feasible as artificial intelligence methods improve. However, advancement of these methods for dietary assessment in research settings has been hindered by the lack of an appropriate dataset against which to benchmark algorithm performance. We conducted the Surveying Nutrient Assessment with Photographs of Meals (SNAPMe) study (ClinicalTrials ID: NCT05008653) to pair meal photographs with traditional food records. Participants were recruited nationally, and 110 enrollment meetings were completed via web-based video conferencing. Participants uploaded and annotated their meal photos using a mobile phone app called Bitesnap and completed food records using the Automated Self-Administered 24-h Dietary Assessment Tool (ASA24®) version 2020. Participants included photos before and after eating non-packaged and multi-serving packaged meals, as well as photos of the front and ingredient labels for single-serving packaged foods. The SNAPMe Database (DB) contains 3311 unique food photos linked with 275 ASA24 food records from 95 participants who photographed all foods consumed and recorded food records in parallel for up to 3 study days each. The use of the SNAPMe DB to evaluate ingredient prediction demonstrated that the publicly available algorithms FB Inverse Cooking and Im2Recipe performed poorly, especially for single-ingredient foods and beverages. Correlations between nutrient estimates common to the Bitesnap and ASA24 dietary assessment tools indicated a range in predictive capacity across nutrients (cholesterol, adjusted R2 = 0.85, p < 0.0001; food folate, adjusted R2 = 0.21, p < 0.05). SNAPMe DB is a publicly available benchmark for photo-based dietary assessment in nutrition research. Its demonstrated utility suggested areas of needed improvement, especially the prediction of single-ingredient foods and beverages.
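The nutrient-level comparison reported above relies on the adjusted R² between estimates from two dietary assessment tools. A small sketch of that computation follows; the paired nutrient values are hypothetical and do not come from the SNAPMe database.

```python
# Sketch: adjusted R^2 between nutrient estimates from two dietary assessment tools.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical paired estimates (e.g., cholesterol in mg) from tool A and tool B.
tool_a = np.array([[120.0], [85.0], [300.0], [45.0], [210.0], [150.0], [95.0], [60.0]])
tool_b = np.array([110.0, 90.0, 280.0, 55.0, 200.0, 160.0, 100.0, 70.0])

model = LinearRegression().fit(tool_a, tool_b)
r2 = model.score(tool_a, tool_b)

n, p = tool_a.shape            # observations, predictors
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
```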
Affiliation(s)
- Jules A. Larke
  - United States Department of Agriculture, Agricultural Research Service, Western Human Nutrition Research Center, Davis, CA 95616, USA
- Elizabeth L. Chin
  - United States Department of Agriculture, Agricultural Research Service, Western Human Nutrition Research Center, Davis, CA 95616, USA
- Yasmine Y. Bouzid
  - Department of Nutrition, University of California Davis, Davis, CA 95616, USA
- Tu Nguyen
  - United States Department of Agriculture, Agricultural Research Service, Western Human Nutrition Research Center, Davis, CA 95616, USA
- Yael Vainberg
  - Department of Nutrition, University of California Davis, Davis, CA 95616, USA
- Dong Hee Lee
  - Department of Computer Science, University of California Davis, Davis, CA 95616, USA
- Hamed Pirsiavash
  - Department of Computer Science, University of California Davis, Davis, CA 95616, USA
- Danielle G. Lemay
  - United States Department of Agriculture, Agricultural Research Service, Western Human Nutrition Research Center, Davis, CA 95616, USA
  - Department of Nutrition, University of California Davis, Davis, CA 95616, USA
9. Konstantakopoulos FS, Georga EI, Fotiadis DI. A novel approach to estimate the weight of food items based on features extracted from an image using boosting algorithms. Sci Rep 2023; 13:21040. [PMID: 38030660] [PMCID: PMC10686975] [DOI: 10.1038/s41598-023-47885-0]
Abstract
Managing daily nutrition is a prominent concern among individuals in contemporary society. The advancement of dietary assessment systems and applications utilizing images has facilitated the effective management of individuals' nutritional information and dietary habits over time. The determination of food weight or volume is a vital part of these systems for assessing food quantities and nutritional information. This study presents a novel methodology for estimating the weight of food by using features extracted from images and training advanced boosting regression algorithms on them. A unique dataset of 23,052 annotated food images of Mediterranean cuisine, including 226 different dishes with a reference object placed next to the dish, was used to train the proposed pipeline. Then, using features extracted from the annotated images, such as food area, reference object area, food id, food category, and food weight, we built a dataframe with 24,996 records. Finally, we trained the weight estimation model by applying cross-validation, hyperparameter tuning, and boosting regression algorithms such as XGBoost, CatBoost, and LightGBM. Between the predicted and actual weight values for each food in the proposed dataset, the proposed model achieves a mean absolute weight error of 3.93 g, a mean absolute percentage error of 3.73% and a root mean square error of 6.05 g for the 226 food items of the Mediterranean Greek Food database (MedGRFood), setting new perspectives for food image-based weight and nutrition estimation models and systems.
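The pipeline above trains gradient-boosted regressors on tabular features derived from each annotated image. The following sketch illustrates that idea with scikit-learn's GradientBoostingRegressor on synthetic features; the paper itself uses XGBoost, CatBoost and LightGBM, and its real feature values are not reproduced here.

```python
# Sketch: gradient-boosted regression of food weight from image-derived features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical tabular features per annotated image.
food_area_px = rng.uniform(5_000, 80_000, n)          # segmented food area in pixels
ref_area_px = rng.uniform(8_000, 12_000, n)           # reference object area in pixels
food_category = rng.integers(0, 20, n)                # encoded food category id
X = np.column_stack([food_area_px, ref_area_px, food_category])

# Synthetic target: weight grows with the food-to-reference area ratio (plus noise).
y = 300 * food_area_px / ref_area_px + rng.normal(0, 15, n)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"cross-validated MAE: {-scores.mean():.1f} g")
```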
Affiliation(s)
- Fotios S Konstantakopoulos
  - Unit of Medical Technology and Intelligent Information Systems, Dept. of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
  - Foundation for Research and Technology-Hellas, Biomedical Research Institute, Ioannina, Greece
- Eleni I Georga
  - Unit of Medical Technology and Intelligent Information Systems, Dept. of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
  - Foundation for Research and Technology-Hellas, Biomedical Research Institute, Ioannina, Greece
- Dimitrios I Fotiadis
  - Unit of Medical Technology and Intelligent Information Systems, Dept. of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
  - Foundation for Research and Technology-Hellas, Biomedical Research Institute, Ioannina, Greece
10. Han Y, Cheng Q, Wu W, Huang Z. DPF-Nutrition: Food Nutrition Estimation via Depth Prediction and Fusion. Foods 2023; 12:4293. [PMID: 38231726] [DOI: 10.3390/foods12234293]
Abstract
A reasonable and balanced diet is essential for maintaining good health. With advancements in deep learning, an automated nutrition estimation method based on food images offers a promising solution for monitoring daily nutritional intake and promoting dietary health. While monocular image-based nutrition estimation is convenient, efficient and economical, the challenge of limited accuracy remains a significant concern. To tackle this issue, we proposed DPF-Nutrition, an end-to-end nutrition estimation method using monocular images. In DPF-Nutrition, we introduced a depth prediction module to generate depth maps, thereby improving the accuracy of food portion estimation. Additionally, we designed an RGB-D fusion module that combined monocular images with the predicted depth information, resulting in better performance for nutrition estimation. To the best of our knowledge, this was the pioneering effort that integrated depth prediction and RGB-D fusion techniques in food nutrition estimation. Comprehensive experiments performed on Nutrition5k evaluated the effectiveness and efficiency of DPF-Nutrition.
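DPF-Nutrition fuses the RGB image with a predicted depth map before regressing nutrient values. The sketch below shows one simple way such an RGB-D fusion could be wired up in PyTorch: two small convolutional encoders whose features are concatenated and pooled into a regression head. It is a toy stand-in for the paper's architecture; the layer sizes and the five-value nutrient output are assumptions.

```python
# Sketch: toy RGB-D fusion network for nutrient regression (not the DPF-Nutrition model).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class RGBDFusionNet(nn.Module):
    def __init__(self, n_outputs: int = 5):   # e.g., calories, mass, protein, fat, carbs
        super().__init__()
        self.rgb_encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_encoder = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, n_outputs))

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Concatenate RGB and depth feature maps along the channel dimension.
        fused = torch.cat([self.rgb_encoder(rgb), self.depth_encoder(depth)], dim=1)
        return self.head(fused)

model = RGBDFusionNet()
rgb = torch.randn(2, 3, 224, 224)      # batch of RGB food images
depth = torch.randn(2, 1, 224, 224)    # predicted depth maps of the same size
print(model(rgb, depth).shape)          # -> torch.Size([2, 5])
```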
Affiliation(s)
- Yuzhe Han
  - School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan 430074, China
- Qimin Cheng
  - School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan 430074, China
- Wenjin Wu
  - Institute of Agricultural Products Processing and Nuclear Agricultural Technology, Hubei Academy of Agricultural Science, Wuhan 430064, China
- Ziyang Huang
  - School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan 430074, China
11. James Stubbs R, Horgan G, Robinson E, Hopkins M, Dakin C, Finlayson G. Diet composition and energy intake in humans. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220449. [PMID: 37661746] [PMCID: PMC10475874] [DOI: 10.1098/rstb.2022.0449]
Abstract
Absolute energy from fats and carbohydrates and the proportion of carbohydrates in the food supply have increased over 50 years. Dietary energy density (ED) is primarily decreased by the water and increased by the fat content of foods. Protein, carbohydrates and fat exert different effects on satiety or energy intake (EI) in the order protein > carbohydrates > fat. When the ED of different foods is equalized the differences between fat and carbohydrates are modest. Covertly increasing dietary ED with fat, carbohydrate or mixed macronutrients elevates EI, producing weight gain and vice versa. In more naturalistic situations where learning cues are intact, there appears to be greater compensation for the different ED of foods. There is considerable individual variability in response. Macronutrient-specific negative feedback models of EI regulation have limited capacity to explain how availability of cheap, highly palatable, readily assimilated, energy-dense foods lead to obesity in modern environments. Neuropsychological constructs including food reward (liking, wanting and learning), reactive and reflective decision making, in the context of asymmetric energy balance regulation, give more comprehensive explanations of how environmental superabundance of foods containing mixtures of readily assimilated fats and carbohydrates and caloric beverages elevate EI through combined hedonic, affective, cognitive and physiological mechanisms. This article is part of a discussion meeting issue 'Causes of obesity: theories, conjectures and evidence (Part II)'.
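Energy density (ED) as discussed above is simply energy per unit mass, which is why water dilutes it and fat raises it. The short sketch below makes this arithmetic explicit using the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat); the example foods and their values are hypothetical.

```python
# Sketch: dietary energy density from macronutrient composition (Atwater factors).
ATWATER = {"protein_g": 4.0, "carb_g": 4.0, "fat_g": 9.0}  # kcal per gram

def energy_density(portion_g: float, protein_g: float, carb_g: float, fat_g: float) -> float:
    """Energy density in kcal per gram of food; water adds mass but no energy."""
    kcal = (protein_g * ATWATER["protein_g"]
            + carb_g * ATWATER["carb_g"]
            + fat_g * ATWATER["fat_g"])
    return kcal / portion_g

# 100 g of a watery vegetable soup vs. 100 g of a fatty snack (hypothetical values).
print(energy_density(100, protein_g=2, carb_g=6, fat_g=1))    # ~0.41 kcal/g
print(energy_density(100, protein_g=6, carb_g=55, fat_g=30))  # ~5.14 kcal/g
```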
Affiliation(s)
- Graham Horgan
  - Biomathematics and Statistics Scotland, Rowett Institute, University of Aberdeen, Foresterhill, Aberdeen, AB25 2ZD Scotland, UK
- Eric Robinson
  - School of Food Science and Nutrition, Faculty of Environment, University of Leeds, Leeds LS2 9JT, UK
- Mark Hopkins
  - Institute of Population Health, University of Liverpool, Liverpool L69 3GF, UK
- Clarissa Dakin
  - School of Psychology, Faculty of Medicine and Health and
12. Jia W, Li B, Zheng Y, Mao ZH, Sun M. Estimating Amount of Food in a Circular Dining Bowl from a Single Image. MADIMA '23: Proceedings of the 8th International Workshop on Multimedia Assisted Dietary Management 2023; 2023:1-9. [PMID: 38288389] [PMCID: PMC10823382] [DOI: 10.1145/3607828.3617789]
Abstract
Unhealthy diet is a top risk factor causing obesity and numerous chronic diseases. To help the public adopt healthy diet, nutrition scientists need user-friendly tools to conduct Dietary Assessment (DA). In recent years, new DA tools have been developed using a smartphone or a wearable device which acquires images during a meal. These images are then processed to estimate calories and nutrients of the consumed food. Although considerable progress has been made, 2D food images lack scale reference and 3D volumetric information. In addition, food must be sufficiently observable from the image. This basic condition can be met when the food is stand-alone (no food container is used) or it is contained in a shallow plate. However, the condition cannot be met easily when a bowl is used. The food is often occluded by the bowl edge, and the shape of the bowl may not be fully determined from the image. However, bowls are the most utilized food containers by billions of people in many parts of the world, especially in Asia and Africa. In this work, we propose to premeasure plates and bowls using a marked adhesive strip before a dietary study starts. This simple procedure eliminates the use of a scale reference throughout the DA study. In addition, we use mathematical models and image processing to reconstruct the bowl in 3D. Our key idea is to estimate how full the bowl is rather than how much food is (in either volume or weight) in the bowl. This idea reduces the effect of occlusion. The experimental data have shown satisfactory results of our methods which enable accurate DA studies using both plates and bowls with reduced burden on research participants.
Affiliation(s)
- Wenyan Jia
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Boyang Li
  - Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Yaguang Zheng
  - Rory Meyers College of Nursing, New York University, New York, NY, USA
- Zhi-Hong Mao
  - Departments of Electrical and Computer Engineering, and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Mingui Sun
  - Departments of Neurosurgery, Electrical and Computer Engineering, and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
13. Zhu Z, Dai Y. A New CNN-Based Single-Ingredient Classification Model and Its Application in Food Image Segmentation. J Imaging 2023; 9:205. [PMID: 37888312] [PMCID: PMC10607895] [DOI: 10.3390/jimaging9100205]
Abstract
It is important for food recognition to separate each ingredient within a food image at the pixel level. Most existing research has trained a segmentation network on datasets with pixel-level annotations to achieve food ingredient segmentation. However, preparing such datasets is exceedingly hard and time-consuming. In this paper, we propose a new framework for ingredient segmentation utilizing feature maps of the CNN-based Single-Ingredient Classification Model that is trained on the dataset with image-level annotation. To train this model, we first introduce a standardized biological-based hierarchical ingredient structure and construct a single-ingredient image dataset based on this structure. Then, we build a single-ingredient classification model on this dataset as the backbone of the proposed framework. In this framework, we extract feature maps from the single-ingredient classification model and propose two methods for processing these feature maps for segmenting ingredients in the food images. We introduce five evaluation metrics (IoU, Dice, Purity, Entirety, and Loss of GTs) to assess the performance of ingredient segmentation in terms of ingredient classification. Extensive experiments demonstrate the effectiveness of the proposed method, achieving a mIoU of 0.65, mDice of 0.77, mPurity of 0.83, mEntirety of 0.80, and mLoGTs of 0.06 for the optimal model on the FoodSeg103 dataset. We believe that our approach lays the foundation for subsequent ingredient recognition.
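The segmentation quality in the study above is summarized with IoU and Dice, both computable directly from binary masks. A minimal sketch of those two metrics follows; the masks here are toy arrays, not FoodSeg103 annotations.

```python
# Sketch: IoU and Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    return float(iou), float(dice)

gt = np.zeros((8, 8), dtype=int); gt[2:6, 2:6] = 1       # ground-truth ingredient region
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1   # predicted ingredient region
print(iou_and_dice(pred, gt))   # overlapping 3x3 block -> IoU ~0.39, Dice ~0.56
```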
Affiliation(s)
- Ying Dai
  - Faculty of Software and Information Science, Iwate Prefectural University, Takizawa, Iwate 020-0693, Japan
14. Hernández-Hernández DJ, Perez-Lizaur AB, Palacios-González B, Morales-Luna G. Machine learning accurately predicts food exchange list and the exchangeable portion. Front Nutr 2023; 10:1231873. [PMID: 37637952] [PMCID: PMC10449541] [DOI: 10.3389/fnut.2023.1231873]
Abstract
Introduction Food Exchange Lists (FELs) are a user-friendly tool developed to help individuals adopt healthy eating habits and follow a specific diet plan. Given the rapidly increasing number of new products and newly accessible foods, one of the biggest challenges for FELs is becoming outdated. Supervised machine learning algorithms could facilitate this process and allow FELs to be kept up to date. The present study aimed to generate an algorithm to predict food classification and calculate the equivalent portion. Methods Data mining techniques were used to generate the algorithm; this consists of processing and analyzing the information to find patterns, trends, or repetitive rules that explain the behavior of the data in a food database. The problem was approached through a vector formulation (across 9 nutrient dimensions), which led to proposals for classifiers such as Spherical K-Means (SKM); developing this idea further, the limits of the classifier were smoothed with the help of a Multilayer Perceptron (MLP). These were compared with two other machine learning algorithms, Random Forest and XGBoost. Results The algorithm proposed in this study could classify and calculate the equivalent portion of a single food or a list of foods. The algorithm allows the categorization of more than one thousand foods with a confidence level of 97% within the first three predicted places. Also, the algorithm indicates which foods exceed the established limits for sodium, sugar, and/or fat content and shows their equivalents. Discussion Accurate and robust FELs could improve implementation of and adherence to the recommended diet. Compared with manual categorization and calculation, machine learning approaches have several advantages. Machine learning reduces the time needed for manual food categorization and equivalent portion calculation across many food products. Since it is possible to access food composition databases of various populations, our algorithm could be adapted and applied to other databases, offering an even greater diversity of regional products and foods. In conclusion, machine learning is a promising method for automating the generation of FELs. This study provides evidence of a large-scale, accurate, real-time processing algorithm that can be useful for designing meal plans tailored to the foods consumed by the population. Our model allowed us not only to distinguish and classify foods within a group or subgroup but also to calculate an equivalent food. As a neural network, this model could be trained on other food databases and thus improve its predictive capacity. Although the performance of the SKM model was lower than that of other types of classifiers, our model allows selecting an equivalent food not from a group previously classified by machine learning but with a fully interpretable algorithm, such as cosine similarity, for comparing foods.
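The vector formulation described above represents each food as a point in a 9-dimensional nutrient space and compares foods by cosine similarity. The sketch below illustrates that comparison together with a naive equivalent-portion scaling; the nutrient vectors, dimension labels, and the 70 kcal reference are hypothetical examples, not the study's data.

```python
# Sketch: cosine similarity between foods in a nutrient-vector space, plus a
# naive equivalent-portion calculation that matches energy content.
import numpy as np

# Hypothetical per-100 g vectors: [energy_kcal, protein_g, fat_g, carb_g, fiber_g,
# sugar_g, sodium_mg, calcium_mg, iron_mg]  (9 nutrient dimensions).
foods = {
    "white rice": np.array([130.0, 2.7, 0.3, 28.2, 0.4, 0.1, 1.0, 10.0, 0.2]),
    "pasta":      np.array([158.0, 5.8, 0.9, 30.9, 1.8, 0.6, 1.0, 7.0, 0.5]),
    "avocado":    np.array([160.0, 2.0, 14.7, 8.5, 6.7, 0.7, 7.0, 12.0, 0.6]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = foods["white rice"]
best = max((name for name in foods if name != "white rice"),
           key=lambda name: cosine(query, foods[name]))
print("most similar food:", best)

# Equivalent portion: grams of the matched food providing a 70 kcal exchange.
grams = 70 / foods[best][0] * 100
print(f"equivalent portion of {best}: {grams:.0f} g")
```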
Affiliation(s)
- Ana Bertha Perez-Lizaur
  - Departamento de Salud, Universidad Iberoamericana Ciudad de México, Ciudad de México, Mexico
- Berenice Palacios-González
  - Laboratorio de Envejecimiento Saludable, Centro de Investigación Sobre Envejecimiento (CIE-CINVESTAV Sur), Instituto Nacional de Medicina Genómica, Ciudad de México, Mexico
- Gesuri Morales-Luna
  - Departamento de Física y Matemáticas, Universidad Iberoamericana Ciudad de México, Ciudad de México, Mexico
15. Guan V, Zhou C, Wan H, Zhou R, Zhang D, Zhang S, Yang W, Voutharoja BP, Wang L, Win KT, Wang P. A Novel Mobile App for Personalized Dietary Advice Leveraging Persuasive Technology, Computer Vision, and Cloud Computing: Development and Usability Study. JMIR Form Res 2023; 7:e46839. [PMID: 37549000] [PMCID: PMC10442736] [DOI: 10.2196/46839]
Abstract
BACKGROUND The Australian Dietary Guidelines (ADG) translate the best available evidence in nutrition into food choice recommendations. However, adherence to the ADG is poor in Australia. Given that following a healthy diet can be a potentially cost-effective strategy for lowering the risk of chronic diseases, there is an urgent need to develop novel technologies for individuals to improve their adherence to the ADG. OBJECTIVE This study describes the development process and design of a prototype mobile app for personalized dietary advice based on the ADG for adults in Australia, with the aim of exploring the usability of the prototype. The goal of the prototype was to provide personalized, evidence-based support for self-managing food choices in real time. METHODS The guidelines of the design science paradigm were applied to guide the design, development, and evaluation of a progressive web app using Amazon Web Services Elastic Compute Cloud services via iterations. The food layer of the Nutrition Care Process, the strategies of cognitive behavioral theory, and the ADG were translated into prototype features guided by the Persuasive Systems Design model. A gain-framed approach was adopted to promote positive behavior changes. A cross-modal image-to-recipe retrieval model under an Apache 2.0 license was deployed for dietary assessment. A survey using the Mobile Application Rating Scale and semistructured in-depth interviews were conducted to explore the usability of the prototype through convenience sampling (N=15). RESULTS The prominent features of the prototype included the use of image-based dietary assessment, food choice tracking with immediate feedback leveraging gamification principles, personal goal setting for food choices, and the provision of recipe ideas and information on the ADG. The overall prototype quality score was "acceptable," with a median of 3.46 (IQR 2.78-3.81) out of 5 points. The median score of the perceived impact of the prototype on healthy eating based on the ADG was 3.83 (IQR 2.75-4.08) out of 5 points. In-depth interviews identified the use of gamification for tracking food choices and innovation in the image-based dietary assessment as the main drivers of the positive user experience of using the prototype. CONCLUSIONS A novel evidence-based prototype mobile app was successfully developed by leveraging a cross-disciplinary collaboration. A detailed description of the development process and design of the prototype enhances its transparency and provides detailed insights into its creation. This study provides a valuable example of the development of a novel, evidence-based app for personalized dietary advice on food choices using recent advancements in computer vision. A revised version of this prototype is currently under development.
Affiliation(s)
- Vivienne Guan
  - School of Medical, Indigenous and Health Sciences, Faculty of Science, Medicine and Health, University of Wollongong, Wollongong, New South Wales, Australia
- Chenghuai Zhou
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Hengyi Wan
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Rengui Zhou
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Dongfa Zhang
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Sihan Zhang
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Wangli Yang
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Bhanu Prakash Voutharoja
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Lei Wang
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Khin Than Win
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
- Peng Wang
  - School of Computing and Information Technology, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, New South Wales, Australia
16. da Silva RAD, Szmuchrowski LA, Rosa JPP, Santos MAPD, de Mello MT, Savoi L, Porto YF, de Assis Dias Martins Júnior F, Drummond MDM. Intermittent Fasting Promotes Weight Loss without Decreasing Performance in Taekwondo. Nutrients 2023; 15:3131. [PMID: 37513549] [PMCID: PMC10384508] [DOI: 10.3390/nu15143131]
Abstract
Intermittent fasting (IF) is commonly used by combat sports athletes for weight loss. However, IF can decrease performance. This study aimed to investigate the effect of IF on total body mass (TBM) and Taekwondo performance. Nine athletes (seven male, two female; 18.4 ± 3.3 years) underwent 4 weeks of 12 h IF. TBM, countermovement jump (CMJ), mean kicks (MK), and total number of kicks (TNK) were compared weekly. Performance was measured in the fed state (FED) and fast state (FAST). Results showed decreased TBM in week 1 (62.20 ± 6.56 kg; p = 0.001) and week 2 (62.38 ± 6.83 kg; p = 0.022) compared to pre-intervention (63.58 ± 6.57 kg), stabilizing in week 3 (62.42 ± 6.12 kg), and no significant change in week 4 (63.36 ± 6.20 kg). CMJ performance in week 1 was lower in FED (35.26 ± 7.15 cm) than FAST (37.36 ± 6.77 cm; p = 0.003), but in week 3, FED (38.24 ± 6.45 cm) was higher than FAST (35.96 ± 5.05 cm; p = 0.047). No significant differences were found in MK and TNK in FSKTmult. RPE, KDI, and HR were similar between FED and FAST (p < 0.05). [LAC] was higher post-test compared to pre-test (p = 0.001), with higher concentrations in FED than FAST (p = 0.020). BG was higher in FED than FAST (p < 0.05) before physical tests. Therefore, IF promotes decreased TBM without decreasing performance.
Affiliation(s)
- Ronaldo Angelo Dias da Silva
  - Laboratório de Nutrição e Treinamento Esportivo, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
  - Laboratório de Avaliação da Carga, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
- Leszek Antoni Szmuchrowski
  - Laboratório de Avaliação da Carga, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
- João Paulo Pereira Rosa
  - Department of Physical Education, Institute of Biosciences, São Paulo State University (UNESP), Rio Claro 13506-900, SP, Brazil
- Marcos Antônio Pereira Dos Santos
  - Nucleus of Study in Physiology Applied to Performance and Health, Department of Biophysics and Physiology, Federal University of Piauí, Teresina 64049-550, PI, Brazil
- Marco Túlio de Mello
  - Centro de Estudos em Psicobiologia e Exercício, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
- Lucas Savoi
  - Laboratório de Nutrição e Treinamento Esportivo, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
- Yves Ferreira Porto
  - Laboratório de Nutrição e Treinamento Esportivo, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
- Marcos Daniel Motta Drummond
  - Laboratório de Nutrição e Treinamento Esportivo, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
  - Laboratório de Avaliação da Carga, Universidade Federal de Minas Gerais, Belo Horizonte 31270-901, MG, Brazil
17. Konstantakopoulos FS, Georga EI, Tachos NS, Fotiadis DI. Weight Estimation of Mediterranean Food Images using Random Forest Regression Algorithm. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2023; 2023:1-4. [PMID: 38082778] [DOI: 10.1109/embc40787.2023.10340040]
Abstract
Daily nutrition management is one of the most important issues concerning individuals in the modern lifestyle. Over the years, the development of dietary assessment systems and applications based on food images has assisted experts in managing people's nutritional information and eating habits. In these systems, food volume estimation is the most important task for calculating food quantity and nutritional information. In this study, we present a novel methodology for food weight estimation based on a food image, using the Random Forest regression algorithm. The weight estimation model was trained on a unique dataset of 5,177 annotated Mediterranean food images, consisting of 50 different foods with a reference card placed next to the plate. Then, we created a data frame of 6,425 records from the annotated food images with features such as food area, reference object area, food id, food category and food weight. Finally, using the Random Forest regression algorithm and applying nested cross-validation and hyperparameter tuning, we trained the weight estimation model. The proposed model achieves an average difference of 22.6 grams between predicted and real weight values for each food item record in the data frame and a 15.1% mean absolute percentage error for each food item, opening new perspectives in food image-based volume and nutrition estimation models and systems. Clinical Relevance: The proposed methodology is suitable for healthcare systems and applications that monitor an individual's malnutrition, offering the ability to estimate the energy and nutrients consumed using an image of the meal.
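As above, the weight model is a Random Forest regressor tuned with nested cross-validation. The sketch below shows that pattern in scikit-learn on synthetic features; the feature set and hyperparameter grid are illustrative assumptions, not the study's configuration.

```python
# Sketch: Random Forest weight regression with nested cross-validation
# (inner loop tunes hyperparameters, outer loop estimates generalization error).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, cross_val_score

rng = np.random.default_rng(1)
n = 400
food_area = rng.uniform(4_000, 60_000, n)     # pixels of segmented food
card_area = rng.uniform(9_000, 11_000, n)     # pixels of the reference card
food_id = rng.integers(0, 50, n)              # encoded food identity
X = np.column_stack([food_area, card_area, food_id])
y = 250 * food_area / card_area + rng.normal(0, 20, n)   # synthetic weight in grams

inner = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
    scoring="neg_mean_absolute_error",
)
outer_scores = cross_val_score(inner, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"nested-CV MAE: {-outer_scores.mean():.1f} g")
```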
18. Konstantakopoulos FS, Georga EI, Fotiadis DI. An Automated Image-Based Dietary Assessment System for Mediterranean Foods. IEEE Open Journal of Engineering in Medicine and Biology 2023; 4:45-54. [PMID: 37223053] [PMCID: PMC10202193] [DOI: 10.1109/ojemb.2023.3266135]
Abstract
Goal: The modern way of living has significantly influenced the daily diet. The ever-increasing number of people with obesity, diabetes and cardiovascular diseases stresses the need for tools that can help people manage their daily intake of the necessary nutrients. Methods: In this paper, we present an automated image-based dietary assessment system for Mediterranean food, based on: 1) an image dataset of Mediterranean foods, 2) a pre-trained Convolutional Neural Network (CNN) for food image classification, and 3) stereo vision techniques for the volume and nutrition estimation of the food. We use a CNN pre-trained on the Food-101 dataset to train a deep learning classification model on our Mediterranean Greek Food (MedGRFood) dataset. Based on the EfficientNet family of CNNs, we use EfficientNetB2 both for the pre-trained model and the evaluation of its weights, as well as for classifying food images in the MedGRFood dataset. Next, we estimate the volume of the food through 3D food reconstruction from two images taken by a smartphone camera. The proposed volume estimation subsystem uses stereo vision techniques and algorithms, and needs two food images as input to reconstruct the point cloud of the food and compute its quantity. Results: The classification accuracy where the true class matches the most probable class predicted by the model (Top-1 accuracy) is 83.8%, while the accuracy where the true class matches any one of the 5 most probable classes predicted by the model (Top-5 accuracy) is 97.6% for the food classification subsystem. The food volume estimation subsystem achieves an overall mean absolute percentage error of 10.5% for 148 different food dishes. Conclusions: The proposed automated image-based dietary assessment system provides the capability of continuous recording of health data in real time.
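The classification subsystem described above fine-tunes an EfficientNetB2 backbone on food images. A minimal transfer-learning sketch in Keras is shown below; the number of classes, image size, and training call are placeholders, not the MedGRFood setup.

```python
# Sketch: transfer learning with EfficientNetB2 for food image classification.
import tensorflow as tf

NUM_CLASSES = 101        # placeholder; MedGRFood uses its own class list
IMG_SIZE = (260, 260)    # EfficientNetB2's default input resolution

base = tf.keras.applications.EfficientNetB2(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False   # first train only the new classification head

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.efficientnet.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5)])
model.summary()
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)  # datasets not shown
```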
Affiliation(s)
- Fotios S. Konstantakopoulos
  - Unit of Medical Technology and Intelligent Information Systems, Materials Science and Engineering Department, University of Ioannina, GR45110 Ioannina, Greece
  - Biomedical Research Institute, FORTH, University of Ioannina, GR45110 Ioannina, Greece
- Eleni I. Georga
  - Unit of Medical Technology and Intelligent Information Systems, Materials Science and Engineering Department, University of Ioannina, GR45110 Ioannina, Greece
  - Biomedical Research Institute, FORTH, University of Ioannina, GR45110 Ioannina, Greece
- Dimitrios I. Fotiadis
  - Unit of Medical Technology and Intelligent Information Systems, Materials Science and Engineering Department, University of Ioannina, GR45110 Ioannina, Greece
  - Biomedical Research Institute, FORTH, University of Ioannina, GR45110 Ioannina, Greece
19. Montoye AHK, Vondrasek JD, Neph SE. Validation of the SmartPlate for detecting food weight and type. Int J Food Sci Nutr 2023; 74:22-32. [PMID: 36476219] [DOI: 10.1080/09637486.2022.2151987]
Abstract
This study determined accuracy (comparing to criterion), inter-plate reliability (comparing measures between two plates), and intra-plate reliability (comparing successive measures on one plate) of the SmartPlate for food weight and type. Food weight validation included comparing SmartPlate weights to criterion [reference] scale weights (1,980 measures) and weights of 188 foods (2,256 measures). Food type validation included assessing SmartPlate accuracy for 188 foods. For weight, mean absolute percent errors for accuracy, inter-plate reliability, and intra-plate reliability were 6.2, 7.4, and 4.9%, respectively. For food type, foods were correctly identified/listed or searchable 67.0 or 98.9% of the time, respectively, with 76.0% inter-plate reliability and 86.3% intra-plate reliability. The SmartPlate had acceptable accuracy and reliability for assessing food weight and type and may be appealing for monitoring dietary surveillance or intervention. Due to high intra-plate reliability, the SmartPlate may be especially useful for one-on-one interventions and assessing change over time.
Collapse
Affiliation(s)
- Alexander H K Montoye
- Department of Integrative Physiology and Health Science, Alma College, Alma, MI, USA
| | - Joseph D Vondrasek
- Department of Integrative Physiology and Health Science, Alma College, Alma, MI, USA.,Department of Health Sciences and Kinesiology, Georgia Southern University, Savannah, GA, USA
| | - Sylvia E Neph
- Department of Integrative Physiology and Health Science, Alma College, Alma, MI, USA
| |
|
20
|
Zheng X, Liu C, Gong Y, Yin Q, Jia W, Sun M. Food volume estimation by multi-layer superpixel. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:6294-6311. [PMID: 37161107 DOI: 10.3934/mbe.2023271] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Estimating the volume of food plays an important role in diet monitoring. However, it is difficult to perform this estimation automatically and accurately. A new method based on the multi-layer superpixel technique is proposed in this paper to avoid tedious human-computer interaction and improve estimation accuracy. Our method includes the following steps: 1) obtain a pair of food images along with the depth information using a stereo camera; 2) reconstruct the plate plane from the disparity map; 3) warp the input image and the disparity map to form a new direction of view parallel to the plate plane; 4) cut the warped image into a series of slices according to the depth information and estimate the occluded part of the food; and 5) rescale superpixels for each slice and estimate the food volume by accumulating all available slices in the segmented food region. Through a combination of image data and disparity map, the influences of noise and visual error in existing interactive food volume estimation methods are reduced, and the estimation accuracy is improved. Our experiments show that our method is effective, accurate and convenient, providing a new tool for promoting a balanced diet and maintaining health.
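Step 5, accumulating slices of the segmented food region, amounts to integrating the food's height field over the plate plane. The sketch below shows that accumulation for a depth map already warped so the plate plane is the zero level; the function name, slice thickness, and per-pixel area are illustrative assumptions, not the authors' implementation.

```python
# Sketch of volume accumulation over depth slices (not the paper's implementation).
# Assumes a depth map (metres above the plate plane) and a binary food mask,
# both already warped so that the plate plane is the zero level.
import numpy as np

def slice_volume(depth_m, food_mask, slice_mm=2.0, pixel_area_mm2=0.25):
    """Accumulate food volume (mL) by cutting the height field into slices."""
    heights_mm = depth_m * 1000.0 * food_mask          # height above plate, food pixels only
    total_mm3 = 0.0
    level = 0.0
    while level < heights_mm.max():
        in_slice = heights_mm > level                   # food columns reaching this level
        total_mm3 += in_slice.sum() * pixel_area_mm2 * slice_mm
        level += slice_mm
    return total_mm3 / 1000.0                           # 1 mL = 1000 mm^3

depth = np.random.rand(240, 320) * 0.03                 # toy 0-3 cm height field
mask = np.zeros((240, 320)); mask[60:180, 80:240] = 1   # toy food region
print(f"~{slice_volume(depth, mask):.0f} mL")
```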
Affiliation(s)
- Xin Zheng
- School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
| | - Chenhan Liu
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
| | - Yifei Gong
- Beijing Sankuai Online Technology Co., Ltd., Beijing 100190, China
| | - Qian Yin
- School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
| | - Wenyan Jia
- Department of Electrical and Computer Engineering, University of Pittsburgh, PA 15260, USA
| | - Mingui Sun
- Department of Neurosurgery, University of Pittsburgh, PA 15260, USA
- Department of Electrical and Computer Engineering, University of Pittsburgh, PA 15260, USA
| |
|
21
|
Amugongo LM, Kriebitz A, Boch A, Lütge C. Mobile Computer Vision-Based Applications for Food Recognition and Volume and Calorific Estimation: A Systematic Review. Healthcare (Basel) 2022; 11:healthcare11010059. [PMID: 36611519 PMCID: PMC9818870 DOI: 10.3390/healthcare11010059] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 12/20/2022] [Accepted: 12/21/2022] [Indexed: 12/28/2022] Open
Abstract
The growing awareness of the influence of "what we eat" on lifestyle and health has led to an increase in the use of embedded food analysis and recognition systems. These solutions aim to effectively monitor daily food consumption, and therefore provide dietary recommendations to enable and support lifestyle changes. Mobile applications, due to their high accessibility, are ideal for real-life food recognition, volume estimation and calorific estimation. In this study, we conducted a systematic review based on articles that proposed mobile computer vision-based solutions for food recognition, volume estimation and calorific estimation. In addition, we assessed the extent to which these applications provide explanations to aid the users to understand the related classification and/or predictions. Our results show that 90.9% of applications do not distinguish between food and non-food. Similarly, only one study that proposed a mobile computer vision-based application for dietary intake attempted to provide explanations of features that contribute towards classification. Mobile computer vision-based applications are attracting a lot of interest in healthcare. They have the potential to assist in the management of chronic illnesses such as diabetes, ensuring that patients eat healthily and reducing complications associated with unhealthy food. However, to improve trust, mobile computer vision-based applications in healthcare should provide explanations of how they derive their classifications or volume and calorific estimations.
|
22
|
König LM, Van Emmenis M, Nurmi J, Kassavou A, Sutton S. Characteristics of smartphone-based dietary assessment tools: a systematic review. Health Psychol Rev 2022; 16:526-550. [PMID: 34875978 DOI: 10.1080/17437199.2021.2016066] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
Smartphones have become popular in assessing eating behaviour in real life and in real time. This systematic review provides a comprehensive overview of smartphone-based dietary assessment tools, focusing on how dietary data is assessed and its completeness ensured. Seven databases from behavioural, social and computer science were searched in March 2020. All observational, experimental or intervention studies and study protocols using a smartphone-based assessment tool for dietary intake were included if they reported data collected by adults and were published in English. Out of 21,722 records initially screened, 117 publications using 129 tools were included. Five core assessment features were identified: photo-based assessment (48.8% of tools), assessment of serving/portion sizes (48.8%), free-text descriptions of food intake (42.6%), food databases (30.2%), and classification systems (27.9%). On average, a tool used two features. The majority of studies did not implement any features to improve the completeness of the records. This review provides a comprehensive overview and framework of smartphone-based dietary assessment tools to help researchers identify suitable assessment tools for their studies. Future research needs to address the potential impact of specific dietary assessment methods on data quality and participants' willingness to record their behaviour to ultimately improve the quality of smartphone-based dietary assessment for health research.
Affiliation(s)
- Laura M König
- Faculty of Life Sciences: Food, Nutrition and Health, University of Bayreuth, Kulmbach, Germany.,Behavioural Science Group, Primary Care Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK
| | - Miranda Van Emmenis
- Behavioural Science Group, Primary Care Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK
| | - Johanna Nurmi
- Behavioural Science Group, Primary Care Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK.,Faculty of Social Sciences, University of Helsinki, Helsinki, Finland
| | - Aikaterini Kassavou
- Behavioural Science Group, Primary Care Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK
| | - Stephen Sutton
- Behavioural Science Group, Primary Care Unit, Department of Public Health and Primary Care, University of Cambridge, Cambridge, UK
| |
|
23
|
Konstantakopoulos FS, Georga EI, Tzanettis KE, Kokkinopoulos KA, Raptis SK, Michaloglou KA, Fotiadis DI. GlucoseML Mobile Application for Automated Dietary Assessment of Mediterranean Food. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1432-1435. [PMID: 36085710 DOI: 10.1109/embc48229.2022.9871732] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Over the years and with the help of technology, the daily care of type 1 diabetes has improved significantly. The increased adoption of continuous glucose monitoring, continuous subcutaneous insulin infusion, and accurate behavioral-monitoring mHealth solutions has contributed to this improvement. In this study we present a mobile application for automated dietary assessment of Mediterranean food images as part of the GlucoseML system. GlucoseML is a type 1 diabetes self-management system based on short-term predictive analysis of the glucose trajectory. A computer vision approach forms the main part of the GlucoseML dietary assessment system, calculating food carbohydrates, fats and proteins and relying on: (i) a deep learning subsystem for food image classification, and (ii) a 3D food image reconstruction subsystem for the volume estimation of food. The deep learning subsystem achieves 82.4% and 97.5% top-1 and top-5 accuracy, respectively, for food image classification, while the subsystem for volume estimation of food achieves a mean absolute percentage error of 10.7% for the four main categories of the MedGRFood dataset.
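The top-1 and top-5 accuracies quoted for the classification subsystem are standard metrics that can be computed from a matrix of predicted class probabilities, as in the toy sketch below (random placeholder data, not GlucoseML outputs).

```python
# Sketch of top-1 / top-5 accuracy computation from predicted class probabilities.
# Toy data only; not output from the GlucoseML system.
import numpy as np

rng = np.random.default_rng(0)
probs = rng.random((8, 20))                    # 8 images, 20 food classes
probs /= probs.sum(axis=1, keepdims=True)
true_labels = rng.integers(0, 20, size=8)

top1 = (probs.argmax(axis=1) == true_labels).mean()
top5_sets = np.argsort(probs, axis=1)[:, -5:]  # indices of the 5 most probable classes
top5 = np.mean([t in row for t, row in zip(true_labels, top5_sets)])
print(f"top-1: {top1:.1%}, top-5: {top5:.1%}")
```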
|
24
|
Rantala E, Balatsas-Lekkas A, Sozer N, Pennanen K. Overview of objective measurement technologies for nutrition research, food-related consumer and marketing research. Trends Food Sci Technol 2022. [DOI: 10.1016/j.tifs.2022.05.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
25
|
Li H, Yang G. Dietary Nutritional Information Autonomous Perception Method Based on Machine Vision in Smart Homes. ENTROPY (BASEL, SWITZERLAND) 2022; 24:868. [PMID: 35885091 PMCID: PMC9324181 DOI: 10.3390/e24070868] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 06/19/2022] [Accepted: 06/20/2022] [Indexed: 02/04/2023]
Abstract
In order to automatically perceive the user's dietary nutritional information in the smart home environment, this paper proposes a dietary nutritional information autonomous perception method based on machine vision in smart homes. Firstly, we propose a food-recognition algorithm based on YOLOv5 to monitor the user's dietary intake using a social robot. Secondly, to obtain the nutritional composition of the user's dietary intake, we calibrate the weight of food ingredients and design a method for calculating the nutritional composition of food; we then propose a dietary nutritional information autonomous perception method based on machine vision (DNPM) that supports quantitative analysis of nutritional composition. Finally, the proposed algorithm was tested on CFNet-34, a self-expanded dataset based on the Chinese food dataset ChineseFoodNet. The test results show that the average recognition accuracy of the food-recognition algorithm based on YOLOv5 is 89.7%, demonstrating good accuracy and robustness. According to the performance test results of the dietary nutritional information autonomous perception system in smart homes, the average nutritional composition perception accuracy of the system was 90.1%, the response time was less than 6 ms, and the speed was higher than 18 fps, showing excellent robustness and nutritional composition perception performance.
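The nutritional-composition calculation described here reduces to a lookup-and-sum over recognized foods and their calibrated weights. The sketch below illustrates that step with an invented per-100 g nutrient table and an invented detection list; these are not the paper's CFNet-34 classes or calibration values.

```python
# Sketch of nutrient-composition calculation from recognized foods.
# The nutrient table (per 100 g) and the detections are invented examples,
# not the calibrated values or classes used in the paper.
NUTRIENTS_PER_100G = {
    "rice":    {"energy_kcal": 130, "protein_g": 2.7,  "fat_g": 0.3, "carb_g": 28.0},
    "chicken": {"energy_kcal": 165, "protein_g": 31.0, "fat_g": 3.6, "carb_g": 0.0},
}

# Hypothetical output of a food detector: (class_name, estimated_weight_g)
detections = [("rice", 180.0), ("chicken", 120.0)]

totals = {k: 0.0 for k in ("energy_kcal", "protein_g", "fat_g", "carb_g")}
for name, weight_g in detections:
    per100 = NUTRIENTS_PER_100G[name]
    for key in totals:
        totals[key] += per100[key] * weight_g / 100.0

print(totals)
```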
Affiliation(s)
- Hongyang Li
- Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China;
| | - Guanci Yang
- Key Laboratory of Advanced Manufacturing Technology of the Ministry of Education, Guizhou University, Guiyang 550025, China;
- Key Laboratory of “Internet+” Collaborative Intelligent Manufacturing in Guizhou Province, Guiyang 550025, China
- State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China
| |
|
26
|
Russo S, Bonassi S. Prospects and Pitfalls of Machine Learning in Nutritional Epidemiology. Nutrients 2022; 14:1705. [PMID: 35565673 PMCID: PMC9105182 DOI: 10.3390/nu14091705] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 04/13/2022] [Accepted: 04/14/2022] [Indexed: 02/06/2023] Open
Abstract
Nutritional epidemiology employs observational data to discover associations between diet and disease risk. However, existing analytic methods of dietary data are often sub-optimal, with limited incorporation and analysis of the correlations between the studied variables and nonlinear behaviours in the data. Machine learning (ML) is an area of artificial intelligence that has the potential to improve modelling of nonlinear associations and confounding which are found in nutritional data. These opportunities notwithstanding, the applications of ML in nutritional epidemiology must be approached cautiously to safeguard the scientific quality of the results and provide accurate interpretations. Given the complex scenario around ML, judicious application of such tools is necessary to offer nutritional epidemiology a novel analytical resource for dietary measurement and assessment and a tool to model the complexity of dietary intake and its relation to health. This work describes the applications of ML in nutritional epidemiology and provides guidelines to avoid common pitfalls encountered in applying predictive statistical models to nutritional data. Furthermore, it helps unfamiliar readers better assess the significance of their results and provides new possible future directions in the field of ML in nutritional epidemiology.
Affiliation(s)
- Stefania Russo
- EcoVision Lab, Photogrammetry and Remote Sensing Group, ETH Zürich, 8092 Zurich, Switzerland
| | - Stefano Bonassi
- Department of Human Sciences and Quality of Life Promotion, San Raffaele University, 00166 Rome, Italy;
- Unit of Clinical and Molecular Epidemiology, IRCCS San Raffaele Roma, 00163 Rome, Italy
| |
|
27
|
|
28
|
Jia W, Ren Y, Li B, Beatrice B, Que J, Cao S, Wu Z, Mao ZH, Lo B, Anderson AK, Frost G, McCrory MA, Sazonov E, Steiner-Asiedu M, Baranowski T, Burke LE, Sun M. A Novel Approach to Dining Bowl Reconstruction for Image-Based Food Volume Estimation. SENSORS (BASEL, SWITZERLAND) 2022; 22:1493. [PMID: 35214399 PMCID: PMC8877095 DOI: 10.3390/s22041493] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 02/08/2022] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
Knowing the amounts of energy and nutrients in an individual's diet is important for maintaining health and preventing chronic diseases. As electronic and AI technologies advance rapidly, dietary assessment can now be performed using food images obtained from a smartphone or a wearable device. One of the challenges in this approach is to computationally measure the volume of food in a bowl from an image. This problem has not been studied systematically despite the bowl being the most utilized food container in many parts of the world, especially in Asia and Africa. In this paper, we present a new method to measure the size and shape of a bowl by adhering a paper ruler centrally across the bottom and sides of the bowl and then taking an image. When observed from the image, the distortions in the width of the paper ruler and the spacings between ruler markers completely encode the size and shape of the bowl. A computational algorithm is developed to reconstruct the three-dimensional bowl interior using the observed distortions. Our experiments using nine bowls, colored liquids, and amorphous foods demonstrate high accuracy of our method for food volume estimation involving round bowls as containers. A total of 228 images of amorphous foods were also used in a comparative experiment between our algorithm and an independent human estimator. The results showed that our algorithm outperformed the human estimator, who utilized different types of reference information and two estimation methods, including direct volume estimation and indirect estimation through the fullness of the bowl.
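Once the bowl interior has been reconstructed as a rotationally symmetric surface, its capacity up to a given fill level follows from a solid-of-revolution integral, V = π ∫ r(z)^2 dz. The sketch below applies the trapezoidal rule to a made-up radius-versus-depth profile; it illustrates the volume step only and is not the paper's ruler-based reconstruction algorithm.

```python
# Sketch: capacity of a rotationally symmetric bowl from a radius-vs-depth profile,
# V = pi * integral of r(z)^2 dz, approximated with the trapezoidal rule.
# The profile below is invented, not a reconstruction from a paper-ruler image.
import numpy as np

depth_cm = np.linspace(0.0, 6.0, 25)        # 0 = bowl bottom, 6 cm = rim
radius_cm = 2.0 + 1.2 * depth_cm            # toy bowl: radius grows with height

r2 = radius_cm ** 2
volume_cm3 = np.pi * np.sum(0.5 * (r2[:-1] + r2[1:]) * np.diff(depth_cm))
print(f"bowl capacity ~ {volume_cm3:.0f} mL")   # 1 cm^3 = 1 mL
```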
Affiliation(s)
- Wenyan Jia
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Yiqiu Ren
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Boyang Li
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Britney Beatrice
- School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, USA;
| | - Jingda Que
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Shunxin Cao
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Zekun Wu
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
| | - Zhi-Hong Mao
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Benny Lo
- Hamlyn Centre, Imperial College London, London SW7 2AZ, UK;
| | - Alex K. Anderson
- Department of Nutritional Sciences, University of Georgia, Athens, GA 30602, USA;
| | - Gary Frost
- Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London SW7 2AZ, UK;
| | - Megan A. McCrory
- Department of Health Sciences, Boston University, Boston, MA 02210, USA;
| | - Edward Sazonov
- Department of Electrical and Computer Engineering, University of Alabama, Tuscaloosa, AL 35487, USA;
| | - Matilda Steiner-Asiedu
- Department of Nutrition and Food Science, University of Ghana, Legon Boundary, Accra LG 1181, Ghana;
| | - Tom Baranowski
- USDA/ARS Children’s Nutrition Research Center, Department of Pediatrics, Baylor College of Medicine, Houston, TX 77030, USA;
| | - Lora E. Burke
- School of Nursing, University of Pittsburgh, Pittsburgh, PA 15260, USA;
| | - Mingui Sun
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA; (W.J.); (Y.R.); (B.L.); (J.Q.); (S.C.); (Z.W.); (Z.-H.M.)
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
| |
|
29
|
Pfisterer KJ, Amelard R, Chung AG, Syrnyk B, MacLean A, Keller HH, Wong A. Automated food intake tracking requires depth-refined semantic segmentation to rectify visual-volume discordance in long-term care homes. Sci Rep 2022; 12:83. [PMID: 34997022 PMCID: PMC8742067 DOI: 10.1038/s41598-021-03972-8] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 12/13/2021] [Indexed: 12/26/2022] Open
Abstract
Malnutrition is a multidomain problem affecting 54% of older adults in long-term care (LTC). Monitoring nutritional intake in LTC is laborious and subjective, limiting clinical inference capabilities. Recent advances in automatic image-based food estimation have not yet been evaluated in LTC settings. Here, we describe a fully automatic imaging system for quantifying food intake. We propose a novel deep convolutional encoder-decoder food network with depth-refinement (EDFN-D) using an RGB-D camera for quantifying a plate's remaining food volume relative to reference portions in whole and modified-texture foods. We trained and validated the network on the pre-labelled UNIMIB2016 food dataset and tested on our two novel LTC-inspired plate datasets (689 plate images, 36 unique foods). EDFN-D performed comparably to depth-refined graph cut on IOU (0.879 vs. 0.887), with intake errors well below the typical 50% (mean percent intake error: [Formula: see text]%). We identify how standard segmentation metrics are insufficient due to visual-volume discordance, and include volume disparity analysis to facilitate system trust. This system provides improved transparency and approximates human assessors with enhanced objectivity, accuracy, and precision while avoiding the hefty time requirements of semi-automatic methods. This may help address shortcomings currently limiting the utility of automated early malnutrition detection in resource-constrained LTC and hospital settings.
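The two evaluation quantities reported here, segmentation IOU and percent intake error, can be computed as in the toy sketch below; the masks and volumes are invented, not drawn from the LTC plate datasets.

```python
# Sketch of the two evaluation quantities mentioned above: segmentation IoU and
# percent intake error from remaining vs. reference food volume (toy values only).
import numpy as np

pred_mask = np.zeros((100, 100), bool); pred_mask[20:70, 20:70] = True
true_mask = np.zeros((100, 100), bool); true_mask[25:75, 25:75] = True

iou = np.logical_and(pred_mask, true_mask).sum() / np.logical_or(pred_mask, true_mask).sum()

reference_ml, remaining_ml, true_intake_ml = 300.0, 110.0, 200.0
estimated_intake_ml = reference_ml - remaining_ml
pct_intake_error = 100.0 * (estimated_intake_ml - true_intake_ml) / true_intake_ml

print(f"IoU = {iou:.3f}, intake error = {pct_intake_error:+.1f}%")
```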
Affiliation(s)
- Kaylen J Pfisterer
- University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, ON, N2L 3G1, Canada.
- Waterloo AI Institute, Waterloo, ON, N2L 3G1, Canada.
- Schlegel-UW Research Institute for Aging, Waterloo, N2J 0E2, Canada.
| | - Robert Amelard
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, M5G 2A2, Canada
| | - Audrey G Chung
- University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, ON, N2L 3G1, Canada
- Waterloo AI Institute, Waterloo, ON, N2L 3G1, Canada
| | - Braeden Syrnyk
- University of Waterloo, Waterloo, Mechanical and Mechatronics Engineering, Waterloo, ON, N2L 3G1, Canada
| | - Alexander MacLean
- University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, ON, N2L 3G1, Canada
- Waterloo AI Institute, Waterloo, ON, N2L 3G1, Canada
| | - Heather H Keller
- Schlegel-UW Research Institute for Aging, Waterloo, N2J 0E2, Canada
- University of Waterloo, Waterloo, Kinesiology and Health Studies, Waterloo, ON, N2L 3G1, Canada
| | - Alexander Wong
- University of Waterloo, Waterloo, Systems Design Engineering, Waterloo, ON, N2L 3G1, Canada
- Waterloo AI Institute, Waterloo, ON, N2L 3G1, Canada
- Schlegel-UW Research Institute for Aging, Waterloo, N2J 0E2, Canada
| |
|
30
|
AIM in Eating Disorders. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
31
|
Food Image Recognition and Food Safety Detection Method Based on Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:1268453. [PMID: 34956342 PMCID: PMC8702345 DOI: 10.1155/2021/1268453] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 10/22/2021] [Accepted: 10/29/2021] [Indexed: 12/02/2022]
Abstract
With the development of machine learning, deep learning, as a branch of machine learning, has been applied in many fields such as image recognition, image segmentation, and video segmentation. In recent years, deep learning has also been gradually applied to food recognition. However, food recognition is a highly complex problem, and the accuracy and speed of recognition remain a concern. This paper tries to solve these problems and proposes a food image recognition method based on neural networks. Combining Tiny-YOLO and a twin (Siamese) network, this method proposes a two-stage learning mode, YOLO-SIMM, and designs two versions, YOLO-SiamV1 and YOLO-SiamV2. Experiments show that this method achieves only moderate recognition accuracy; however, it requires no manual marking and has good prospects for practical popularization and application. In addition, a method for foreign body detection and recognition in food is proposed. This method can effectively separate foreign bodies from food by threshold segmentation. Experimental results show that this method can effectively distinguish desiccant from foreign matter and achieve the desired effect.
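The threshold-segmentation step mentioned for foreign-body separation can be sketched generically with Otsu thresholding in OpenCV; the synthetic image, threshold pipeline, and area cutoff below are generic illustrations rather than the paper's detection method.

```python
# Generic Otsu-threshold segmentation sketch (an illustration of threshold
# segmentation, not the paper's detection pipeline). A synthetic grayscale image
# stands in for a real food image containing small bright foreign bodies.
import cv2
import numpy as np

img = np.full((200, 200), 60, np.uint8)                 # dark background (food region)
img[40:50, 40:50] = 200; img[120:128, 150:158] = 210    # two bright foreign-body blobs
img = cv2.GaussianBlur(img, (5, 5), 0)                  # mild smoothing before thresholding

_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
candidates = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 20]
print(f"{len(candidates)} candidate foreign-body regions")
```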
|
32
|
Côté M, Lamarche B. Artificial intelligence in nutrition research: perspectives on current and future applications. Appl Physiol Nutr Metab 2021; 47:1-8. [PMID: 34525321 DOI: 10.1139/apnm-2021-0448] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Artificial intelligence (AI) is a rapidly evolving area that offers unparalleled opportunities for progress and application in many healthcare fields. In this review, we provide an overview of the main and latest applications of AI in nutrition research and identify gaps that must be addressed to realize the potential of this emerging field. AI algorithms may help better understand and predict the complex and non-linear interactions between nutrition-related data and health outcomes, particularly when large amounts of data need to be structured and integrated, such as in metabolomics. AI-based approaches, including image recognition, may also improve dietary assessment by maximizing efficiency and addressing systematic and random errors associated with self-reported measurements of dietary intakes. Finally, AI applications can extract, structure and analyze large amounts of data from social media platforms to better understand dietary behaviours and perceptions among the population. In summary, AI-based approaches will likely improve and advance nutrition research as well as help explore new applications. However, further research is needed to identify areas where AI does deliver added value compared with traditional approaches, and other areas where AI is simply not likely to advance the field. Novelty: Artificial intelligence offers unparalleled opportunities for progress and application in nutrition. Gaps remain that must be addressed to realize the potential of this emerging field.
Affiliation(s)
- Mélina Côté
- Centre de recherche Nutrition, santé et société (NUTRISS), INAF, Université Laval, Québec, QC, Canada
- School of Nutrition, Université Laval, Québec, QC, Canada
| | - Benoît Lamarche
- Centre de recherche Nutrition, santé et société (NUTRISS), INAF, Université Laval, Québec, QC, Canada
- School of Nutrition, Université Laval, Québec, QC, Canada
| |
|
33
|
Lucassen DA, Lasschuijt MP, Camps G, Van Loo EJ, Fischer ARH, de Vries RAJ, Haarman JAM, Simons M, de Vet E, Bos-de Vos M, Pan S, Ren X, de Graaf K, Lu Y, Feskens EJM, Brouwer-Brolsma EM. Short and Long-Term Innovations on Dietary Behavior Assessment and Coaching: Present Efforts and Vision of the Pride and Prejudice Consortium. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:7877. [PMID: 34360170 PMCID: PMC8345591 DOI: 10.3390/ijerph18157877] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 07/22/2021] [Accepted: 07/23/2021] [Indexed: 01/10/2023]
Abstract
Overweight, obesity and cardiometabolic diseases are major global health concerns. Lifestyle factors, including diet, have been acknowledged to play a key role in the solution of these health risks. However, as shown by numerous studies, and in clinical practice, it is extremely challenging to quantify dietary behaviors as well as influencing them via dietary interventions. As shown by the limited success of 'one-size-fits-all' nutritional campaigns catered to an entire population or subpopulation, the need for more personalized coaching approaches is evident. New technology-based innovations provide opportunities to further improve the accuracy of dietary assessment and develop approaches to coach individuals towards healthier dietary behaviors. Pride & Prejudice (P&P) is a unique multi-disciplinary consortium consisting of researchers in life, nutrition, ICT, design, behavioral and social sciences from all four Dutch Universities of Technology. P&P focuses on the development and integration of innovative technological techniques such as artificial intelligence (AI), machine learning, conversational agents, behavior change theory and personalized coaching to improve current practices and establish lasting dietary behavior change.
Affiliation(s)
- Desiree A. Lucassen
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| | - Marlou P. Lasschuijt
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| | - Guido Camps
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| | - Ellen J. Van Loo
- Marketing and Consumer Behavior Group, Wageningen University & Research, Hollandseweg 1, 6706 KN Wageningen, The Netherlands; (E.J.V.L.); (A.R.H.F.)
| | - Arnout R. H. Fischer
- Marketing and Consumer Behavior Group, Wageningen University & Research, Hollandseweg 1, 6706 KN Wageningen, The Netherlands; (E.J.V.L.); (A.R.H.F.)
| | - Roelof A. J. de Vries
- Biomedical Signals and Systems, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands;
| | - Juliet A. M. Haarman
- Human Media Interaction, University of Twente, Drienerlolaan 5, 7522 NB Enschede, The Netherlands;
| | - Monique Simons
- Consumption and Healthy Lifestyles, Wageningen University & Research, Hollandseweg 1, 6706 KN Wageningen, The Netherlands; (M.S.); (E.d.V.)
| | - Emely de Vet
- Consumption and Healthy Lifestyles, Wageningen University & Research, Hollandseweg 1, 6706 KN Wageningen, The Netherlands; (M.S.); (E.d.V.)
| | - Marina Bos-de Vos
- Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628 CE Delft, The Netherlands;
| | - Sibo Pan
- Systemic Change Group, Department of Industrial Design, Eindhoven University of Technology, Atlas 7.106, 5612 AP Eindhoven, The Netherlands; (S.P.); (X.R.); (Y.L.)
| | - Xipei Ren
- Systemic Change Group, Department of Industrial Design, Eindhoven University of Technology, Atlas 7.106, 5612 AP Eindhoven, The Netherlands; (S.P.); (X.R.); (Y.L.)
- School of Design and Arts, Beijing Institute of Technology, 5 Zhongguancun St. Haidian District, Beijing 100081, China
| | - Kees de Graaf
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| | - Yuan Lu
- Systemic Change Group, Department of Industrial Design, Eindhoven University of Technology, Atlas 7.106, 5612 AP Eindhoven, The Netherlands; (S.P.); (X.R.); (Y.L.)
| | - Edith J. M. Feskens
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| | - Elske M. Brouwer-Brolsma
- Division of Human Nutrition and Health, Wageningen University & Research, Stippeneng 4, 6708 WE Wageningen, The Netherlands; (D.A.L.); (M.P.L.); (G.C.); (K.d.G.); (E.J.M.F.)
| |
|
34
|
Yang Z, Yu H, Cao S, Xu Q, Yuan D, Zhang H, Jia W, Mao ZH, Sun M. Human-Mimetic Estimation of Food Volume from a Single-View RGB Image Using an AI System. ELECTRONICS 2021; 10:1556. [PMID: 34552763 PMCID: PMC8455030 DOI: 10.3390/electronics10131556] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
It is well known that many chronic diseases are associated with an unhealthy diet. Although improving diet is critical, adopting a healthy diet is difficult despite its benefits being well understood. Technology is needed to allow dietary intake to be assessed accurately and easily in real-world settings so that effective interventions to manage overweight, obesity, and related chronic diseases can be developed. In recent years, new wearable imaging and computational technologies have emerged. These technologies are capable of performing objective and passive dietary assessments with a much simpler procedure than traditional questionnaires. However, a critical task is estimating the portion size (in this case, the food volume) from a digital image. Currently, this task is very challenging because the volumetric information in the two-dimensional images is incomplete, and the estimation involves a great deal of imagination, beyond the capacity of traditional image processing algorithms. In this work, we present a novel Artificial Intelligence (AI) system to mimic the thinking of dietitians who use a set of common objects as gauges (e.g., a teaspoon, a golf ball, a cup, and so on) to estimate the portion size. Specifically, our human-mimetic system "mentally" gauges the volume of food using a set of internal reference volumes that have been learned previously. At the output, our system produces a vector of probabilities of the food with respect to the internal reference volumes. The estimation is then completed by an "intelligent guess", implemented by an inner product between the probability vector and the reference volume vector. Our experiments using both virtual and real food datasets have shown accurate volume estimation results.
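The closing "intelligent guess" is simply an expectation over the learned reference volumes, i.e. an inner product between the predicted probability vector and the vector of reference volumes. A minimal numeric illustration follows; the reference volumes and probabilities are made-up numbers, not values learned by the paper's system.

```python
# Sketch of the final estimation step described above: an inner product between
# the network's probability vector and a vector of reference volumes.
# Reference volumes and probabilities are made-up illustrative numbers.
import numpy as np

reference_volumes_ml = np.array([5.0, 40.0, 240.0, 500.0])   # e.g. teaspoon, golf ball, cup, bowl
probabilities = np.array([0.05, 0.15, 0.70, 0.10])           # network output for one food image

estimated_volume_ml = probabilities @ reference_volumes_ml
print(f"estimated food volume ~ {estimated_volume_ml:.0f} mL")
```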
Affiliation(s)
- Zhengeng Yang
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Hongshan Yu
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
| | - Shunxin Cao
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Qi Xu
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Ding Yuan
- Image Processing Center, Beihang University, Beijing 100191, China
| | - Hong Zhang
- Image Processing Center, Beihang University, Beijing 100191, China
| | - Wenyan Jia
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Zhi-Hong Mao
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
| | - Mingui Sun
- Department of Neurosurgery, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15260, USA
| |
|
35
|
Morgenstern JD, Rosella LC, Costa AP, de Souza RJ, Anderson LN. Perspective: Big Data and Machine Learning Could Help Advance Nutritional Epidemiology. Adv Nutr 2021; 12:621-631. [PMID: 33606879 PMCID: PMC8166570 DOI: 10.1093/advances/nmaa183] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 11/04/2020] [Accepted: 12/29/2020] [Indexed: 01/09/2023] Open
Abstract
The field of nutritional epidemiology faces challenges posed by measurement error, diet as a complex exposure, and residual confounding. The objective of this perspective article is to highlight how developments in big data and machine learning can help address these challenges. New methods of collecting 24-h dietary recalls and recording diet could enable larger samples and more repeated measures to increase statistical power and measurement precision. In addition, use of machine learning to automatically classify pictures of food could become a useful complementary method to help improve precision and validity of dietary measurements. Diet is complex due to thousands of different foods that are consumed in varying proportions, fluctuating quantities over time, and differing combinations. Current dietary pattern methods may not integrate sufficient dietary variation, and most traditional modeling approaches have limited incorporation of interactions and nonlinearity. Machine learning could help better model diet as a complex exposure with nonadditive and nonlinear associations. Last, novel big data sources could help avoid unmeasured confounding by offering more covariates, including both omics and features derived from unstructured data with machine learning methods. These opportunities notwithstanding, application of big data and machine learning must be approached cautiously to ensure quality of dietary measurements, avoid overfitting, and confirm accurate interpretations. Greater use of machine learning and big data would also require substantial investments in training, collaborations, and computing infrastructure. Overall, we propose that judicious application of big data and machine learning in nutrition science could offer new means of dietary measurement, more tools to model the complexity of diet and its relations with diseases, and additional potential ways of addressing confounding.
Affiliation(s)
- Jason D Morgenstern
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
| | - Laura C Rosella
- Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Andrew P Costa
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
| | - Russell J de Souza
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
- Population Health Research Institute, Hamilton Health Sciences, Hamilton, Ontario, Canada
| | - Laura N Anderson
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
| |
|
36
|
Qiu J, Lo FPW, Jiang S, Tsai YY, Sun Y, Lo B. Counting Bites and Recognizing Consumed Food from Videos for Passive Dietary Monitoring. IEEE J Biomed Health Inform 2021; 25:1471-1482. [PMID: 32897866 DOI: 10.1109/jbhi.2020.3022815] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Dietary intake assessment in epidemiological studies is predominantly based on self-reports, which are subjective, inefficient, and prone to error. Technological approaches are therefore emerging to provide objective dietary assessments. Using only egocentric dietary intake videos, this work aims to provide accurate estimation of individual dietary intake through recognizing consumed food items and counting the number of bites taken. This is different from previous studies that rely on inertial sensing to count bites, and from previous studies that only recognize visible food items but not consumed ones. As a subject may not consume all food items visible in a meal, recognizing those consumed food items is more valuable. A new dataset of 1,022 dietary intake video clips was constructed to validate our concept of bite counting and consumed food item recognition from egocentric videos. 12 subjects participated and 52 meals were captured. A total of 66 unique food items, including food ingredients and drinks, were labelled in the dataset, along with a total of 2,039 labelled bites. Deep neural networks were used to perform bite counting and food item recognition in an end-to-end manner. Experiments have shown that counting bites directly from video clips can reach 74.15% top-1 accuracy (classifying between 0-4 bites in 20-second clips) and an MSE value of 0.312 (when using regression). Our experiments on video-based food recognition also show that recognizing consumed food items is indeed harder than recognizing visible ones, with a drop of 25% in F1 score.
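The reported figures correspond to standard metrics (top-1 accuracy over the 0-4 bite classes, MSE for the regression variant, and F1 for consumed-food recognition), computable as in the toy sketch below; the arrays are invented predictions, not results from the 1,022-clip dataset.

```python
# Sketch of the evaluation metrics named above (toy predictions, not study data).
import numpy as np
from sklearn.metrics import f1_score, mean_squared_error

true_bites = np.array([0, 2, 3, 1, 4, 2])
pred_bites_cls = np.array([0, 2, 2, 1, 4, 3])               # classifier output (0-4 bites)
pred_bites_reg = np.array([0.2, 2.1, 2.6, 1.3, 3.8, 2.4])   # regressor output

top1 = (pred_bites_cls == true_bites).mean()
mse = mean_squared_error(true_bites, pred_bites_reg)

true_food = ["rice", "soup", "rice", "tea"]                 # consumed-item labels
pred_food = ["rice", "soup", "bread", "tea"]
f1 = f1_score(true_food, pred_food, average="macro")

print(f"top-1 {top1:.2f}, MSE {mse:.3f}, macro-F1 {f1:.2f}")
```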
|
37
|
AIM in Eating Disorders. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_213-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|