1
Moore J, McMeekin P, Parkes T, Walker R, Morris R, Stuart S, Hetherington V, Godfrey A. Contextualizing remote fall risk: Video data capture and implementing ethical AI. NPJ Digit Med 2024; 7:61. [PMID: 38448611 PMCID: PMC10917734 DOI: 10.1038/s41746-024-01050-7]
Abstract
Wearable inertial measurement units (IMUs) are being used to quantify gait characteristics associated with increased fall risk, but a current limitation is the lack of contextual information that would clarify IMU data. Wearable video-based cameras would provide a comprehensive understanding of an individual's habitual fall risk, adding context to clarify abnormal IMU data. However, there is a general taboo around suggesting wearable cameras to capture real-world video, with clinical and patient apprehension due to ethical and privacy concerns. This perspective proposes that routine use of wearable cameras could be realized within digital medicine through AI-based computer vision models that obfuscate (blur) sensitive information while preserving helpful contextual information for a comprehensive patient assessment. Specifically, no person sees the raw video data; rather, AI interprets the raw video first and blurs sensitive objects to uphold privacy. That may be more routinely achievable than one imagines, as contemporary resources already exist. To showcase the potential, an exemplar model is presented that uses off-the-shelf methods to detect and blur sensitive objects (e.g., people) with an accuracy of 88%. The benefit of the proposed approach is a more comprehensive understanding of an individual's free-living fall risk (from free-living IMU-based gait) without compromising privacy. More generally, the video-and-AI approach could be used beyond fall risk to better inform habitual experiences and challenges across a range of clinical cohorts. Medicine is becoming more receptive to wearables as a helpful toolbox; camera-based devices should be plausible instruments within it.
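The detect-then-blur pipeline described above can be illustrated in a few lines: an off-the-shelf detector supplies person bounding boxes, and each box is obfuscated before any human views the frame. A minimal numpy-only sketch, assuming detections are already available (the detector itself, e.g. a pretrained person-detection model, is out of scope here; pixelation stands in for the paper's blurring step):

```python
import numpy as np

def pixelate_regions(frame, boxes, block=16):
    """Obfuscate sensitive regions of an H x W x 3 frame.

    boxes: list of (x0, y0, x1, y1) pixel coordinates, e.g. person
    detections from any off-the-shelf model. Each region is pixelated
    by averaging over block x block tiles, destroying identifying
    detail while preserving coarse context (a person is present).
    """
    out = frame.copy()
    for x0, y0, x1, y1 in boxes:
        roi = out[y0:y1, x0:x1]  # view into `out`, edited in place
        h, w = roi.shape[:2]
        for ty in range(0, h, block):
            for tx in range(0, w, block):
                tile = roi[ty:ty + block, tx:tx + block]
                tile[:] = tile.mean(axis=(0, 1), keepdims=True)
    return out

# Synthetic frame with one assumed "person" detection box.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
blurred = pixelate_regions(frame, [(40, 20, 100, 110)])
```

Pixels outside the detected boxes pass through untouched, which is what preserves the contextual information the perspective argues for.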
Affiliation(s)
- Jason Moore
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Peter McMeekin
- Nursing, Midwifery and Health, Northumbria University, Newcastle upon Tyne, UK
- Thomas Parkes
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Richard Walker
- Northumbria Healthcare NHS Foundation Trust, North Tyneside, Newcastle upon Tyne, UK
- Rosie Morris
- Northumbria Healthcare NHS Foundation Trust, North Tyneside, Newcastle upon Tyne, UK
- Department of Sport, Exercise and Rehabilitation, Northumbria University, Newcastle upon Tyne, UK
- Samuel Stuart
- Northumbria Healthcare NHS Foundation Trust, North Tyneside, Newcastle upon Tyne, UK
- Department of Sport, Exercise and Rehabilitation, Northumbria University, Newcastle upon Tyne, UK
- Department of Neurology, Oregon Health & Science University, Portland, OR, USA
- Victoria Hetherington
- Cumbria, Northumberland, Tyne and Wear NHS Foundation Trust, Wolfson Research Centre, Campus for Ageing and Vitality, Newcastle upon Tyne, UK
- Alan Godfrey
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
2
Rantala E, Balatsas-Lekkas A, Sozer N, Pennanen K. Overview of objective measurement technologies for nutrition research, food-related consumer and marketing research. Trends Food Sci Technol 2022. [DOI: 10.1016/j.tifs.2022.05.006]
3
Shahi S, Alharbi R, Gao Y, Sen S, Katsaggelos AK, Hester J, Alshurafa N. Impacts of Image Obfuscation on Fine-grained Activity Recognition in Egocentric Video. Proc IEEE Int Conf Pervasive Comput Commun Workshops (PerCom Workshops) 2022; 2022:341-346. [PMID: 36448973 PMCID: PMC9704364 DOI: 10.1109/percomworkshops53856.2022.9767447]
Abstract
Automated detection and validation of fine-grained human activities from egocentric vision has gained increased attention in recent years due to the rich information afforded by RGB images. However, it is not easy to discern how much of that rich information is necessary to reliably detect the activity of interest. Localization of hands and objects in the image has proven helpful in distinguishing between hand-related fine-grained activities. This paper describes the design of a hand-object-based mask obfuscation method (HOBM) and assesses its effect on automated recognition of fine-grained human activities. HOBM masks all pixels other than the hand and the object in hand, improving the protection of personal user information (PUI). We test a deep learning model trained with and without obfuscation on a public egocentric activity dataset with 86 class labels and achieve nearly identical classification accuracies (a 2% decrease with obfuscation). Our findings show that it is possible to protect PUI at a small cost to image utility (loss of accuracy).
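The masking idea can be sketched directly: given binary masks for the hand and the held object (in the paper these come from a localization model; here they are assumed inputs), every other pixel is blanked out before the frame is stored or classified. A minimal sketch, not the authors' implementation:

```python
import numpy as np

def hobm_mask(frame, hand_mask, object_mask, fill=0):
    """Keep only hand and in-hand-object pixels of an H x W x 3 frame.

    hand_mask / object_mask: H x W boolean arrays (in practice produced
    by a hand/object localization model). All other pixels are set to
    `fill`, removing personal user information while retaining the cues
    needed for fine-grained activity recognition.
    """
    keep = hand_mask | object_mask
    out = np.full_like(frame, fill)
    out[keep] = frame[keep]
    return out

# Toy frame with hypothetical hand and object regions.
frame = np.full((8, 8, 3), 7, dtype=np.uint8)
hand = np.zeros((8, 8), dtype=bool); hand[2:4, 2:4] = True
obj = np.zeros((8, 8), dtype=bool); obj[4:6, 4:6] = True
masked = hobm_mask(frame, hand, obj)
```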
Affiliation(s)
- Soroush Shahi
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Rawan Alharbi
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Yang Gao
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Sougata Sen
- Department of Computer Science and Information Systems, BITS Pilani, Goa, India
- Aggelos K Katsaggelos
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
- Josiah Hester
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Nabil Alshurafa
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Electrical and Computer Engineering, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
4
Alharbi R, Sen S, Ng A, Alshurafa N, Hester J. ActiSight: Wearer Foreground Extraction Using a Practical RGB-Thermal Wearable. Proc IEEE Int Conf Pervasive Comput Commun (PerCom) 2022; 2022:237-246. [PMID: 36447642 PMCID: PMC9704365 DOI: 10.1109/percom53586.2022.9762385]
Abstract
Wearable cameras provide an informative view of wearer activities, context, and interactions. Video obtained from wearable cameras is useful for life-logging, human activity recognition, visual confirmation, and other tasks widely utilized in mobile computing today. Extracting foreground information related to the wearer and separating irrelevant background pixels is the fundamental operation underlying these tasks. However, current wearer foreground extraction methods that depend on image data alone are slow, energy-inefficient, and in some cases inaccurate, making many tasks, like activity recognition, challenging to implement in the absence of significant computational resources. To fill this gap, we built ActiSight, a wearable RGB-thermal video camera that uses thermal information to make wearer segmentation practical for body-worn video. Using ActiSight, we collected a total of 59 hours of video from 6 participants, capturing a wide variety of activities in a natural setting. We show that wearer foreground extracted with ActiSight achieves a high Dice similarity score while significantly lowering execution time and energy cost compared with an RGB-only approach.
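Segmentation quality in this setting is scored with the Dice similarity coefficient between the predicted and ground-truth foreground masks. A small sketch of that metric (the ActiSight capture pipeline itself is hardware-dependent and not reproduced here):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two H x W boolean masks:
    2 * |A intersect B| / (|A| + |B|).

    Returns 1.0 for two empty masks by convention (perfect agreement
    that no foreground is present).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total
```

Identical masks score 1.0 and disjoint masks score 0.0, so "high Dice" means the thermal-assisted segmentation closely matches the reference wearer foreground.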
Affiliation(s)
- Rawan Alharbi
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Sougata Sen
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Department of Computer Science and Information Systems, BITS Pilani, Goa, India
- Ada Ng
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Nabil Alshurafa
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
- Josiah Hester
- Department of Computer Science, Northwestern University, Evanston, IL, USA
- Department of Preventive Medicine, Northwestern University, Chicago, IL, USA
5
Zhang S, Li Y, Zhang S, Shahabi F, Xia S, Deng Y, Alshurafa N. Deep Learning in Human Activity Recognition with Wearable Sensors: A Review on Advances. Sensors (Basel) 2022; 22:1476. [PMID: 35214377 PMCID: PMC8879042 DOI: 10.3390/s22041476]
Abstract
Mobile and wearable devices have enabled numerous applications, including activity tracking, wellness monitoring, and human-computer interaction, that measure and improve our daily lives. Many of these applications are made possible by leveraging the rich collection of low-power sensors found in many mobile and wearable devices to perform human activity recognition (HAR). Recently, deep learning has greatly pushed the boundaries of HAR on mobile and wearable devices. This paper systematically categorizes and summarizes existing work that introduces deep learning methods for wearables-based HAR and provides a comprehensive analysis of the current advancements, developing trends, and major challenges. We also present cutting-edge frontiers and future directions for deep learning-based HAR.
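A preprocessing step shared by nearly all deep HAR pipelines of the kind this review surveys is slicing the continuous sensor stream into fixed-length, overlapping windows that the network then classifies. A minimal numpy sketch of that step (the window length and overlap below are illustrative choices, not values taken from the review):

```python
import numpy as np

def sliding_windows(signal, win=128, step=64):
    """Segment a T x C multichannel sensor stream (e.g. a 3-axis
    accelerometer, C = 3) into overlapping windows of length `win`
    taken every `step` samples. Returns an N x win x C array ready
    to batch into a deep HAR model."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# 1000 samples of a hypothetical 3-axis accelerometer stream.
stream = np.zeros((1000, 3))
batch = sliding_windows(stream)
```

With 50% overlap (step = win / 2), adjacent windows share half their samples, a common trade-off between label resolution and redundancy in wearables-based HAR.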
Affiliation(s)
- Shibo Zhang
- Department of Computer Science, McCormick School of Engineering, Northwestern University, Mudd Hall, 2233 Tech Drive, Evanston, IL 60208, USA
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, 680 N. Lakeshore Dr., Suite 1400, Chicago, IL 60611, USA
- Yaxuan Li
- Electrical and Computer Engineering Department, McGill University, McConnell Engineering Building, 3480 Rue University, Montréal, QC H3A 0E9, Canada
- Shen Zhang
- School of Electrical and Computer Engineering, Georgia Institute of Technology, 777 Atlantic Drive, Atlanta, GA 30332, USA
- Farzad Shahabi
- Department of Computer Science, McCormick School of Engineering, Northwestern University, Mudd Hall, 2233 Tech Drive, Evanston, IL 60208, USA
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, 680 N. Lakeshore Dr., Suite 1400, Chicago, IL 60611, USA
- Stephen Xia
- Department of Electrical Engineering, Columbia University, Mudd 1310, 500 W. 120th Street, New York, NY 10027, USA
- Yu Deng
- Center for Health Information Partnerships, Feinberg School of Medicine, Northwestern University, 625 N Michigan Ave, Chicago, IL 60611, USA
- Nabil Alshurafa
- Department of Computer Science, McCormick School of Engineering, Northwestern University, Mudd Hall, 2233 Tech Drive, Evanston, IL 60208, USA
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, 680 N. Lakeshore Dr., Suite 1400, Chicago, IL 60611, USA
6
Zhang S, Zhao Y, Nguyen DT, Xu R, Sen S, Hester J, Alshurafa N. NeckSense: A Multi-Sensor Necklace for Detecting Eating Activities in Free-Living Conditions. Proc ACM Interact Mob Wearable Ubiquitous Technol 2021; 4. [PMID: 34222759 DOI: 10.1145/3397313]
Abstract
We present the design, implementation, and evaluation of a multi-sensor, low-power necklace, NeckSense, for automatically and unobtrusively capturing fine-grained information about an individual's eating activity and eating episodes, across an entire waking day in a naturalistic setting. NeckSense fuses and classifies the proximity of the necklace to the chin, the ambient light, the Lean Forward Angle, and the energy signals to determine chewing sequences, a building block of the eating activity. It then clusters the identified chewing sequences to determine eating episodes. We tested NeckSense on 11 participants with and 9 participants without obesity, across two studies, where we collected more than 470 hours of data in a naturalistic setting. Our results demonstrate that NeckSense enables reliable eating detection for individuals with diverse body mass index (BMI) profiles, across an entire waking day, even in free-living environments. Overall, our system achieves an F1-score of 81.6% in detecting eating episodes in an exploratory study. Moreover, our system can achieve an F1-score of 77.1% for episodes even in an all-day-long free-living setting. With more than 15.8 hours of battery life, NeckSense will allow researchers and dietitians to better understand natural chewing and eating behaviors. In the future, researchers and dietitians can use NeckSense to provide appropriate real-time interventions when an eating episode is detected or when problematic eating is identified.
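The clustering step described above, grouping detected chewing sequences into eating episodes, can be sketched with a simple temporal-gap rule (the 5-minute threshold below is an illustrative assumption, not the paper's parameter, and the paper's actual clustering method may differ):

```python
def cluster_episodes(chew_times, max_gap=300.0):
    """Group chewing-sequence timestamps (seconds, sorted ascending)
    into eating episodes: a new episode starts whenever the gap since
    the previous chewing sequence exceeds max_gap seconds.

    Returns a list of (start, end) tuples, one per episode.
    """
    episodes = []
    for t in chew_times:
        if episodes and t - episodes[-1][1] <= max_gap:
            episodes[-1][1] = t  # extend the current episode
        else:
            episodes.append([t, t])  # open a new episode
    return [tuple(e) for e in episodes]
```

For example, chewing sequences at 0, 60, and 120 seconds followed by more at 1000 and 1060 seconds would cluster into two episodes, since the 880-second gap exceeds the threshold.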
Affiliation(s)
- Yuqi Zhao
- Northwestern University, United States