1. Larsen AG, Sadolin LØ, Thomsen TR, Oliveira AS. Accurate detection of gait events using neural networks and IMU data mimicking real-world smartphone usage. Comput Methods Biomech Biomed Engin 2024:1-11. PMID: 39508167; DOI: 10.1080/10255842.2024.2423252.
Abstract
Wearable technologies such as inertial measurement units (IMUs) can be used to evaluate human gait and improve mobility, but sensor fixation remains a limitation that needs to be addressed. The aim of this study was therefore to create a machine learning algorithm that predicts gait events using a single IMU mimicking the carrying of a smartphone. Fifty-two healthy adults (35 males/17 females) walked on a treadmill at various speeds while carrying a surrogate smartphone in the right hand, front right trouser pocket, or right jacket pocket. Ground-truth gait events (heel strikes and toe-offs) were determined bilaterally using a gold-standard optical motion capture system. The tri-dimensional accelerometer and gyroscope data were segmented into 20-ms windows, each labelled according to whether or not it contained a gait event. A long short-term memory neural network (LSTM-NN) was used to classify the 20-ms windows as containing a heel strike or toe-off for the right or left leg, using 80% of the data for training and 20% for testing. The results demonstrated an overall accuracy of 92% across all phone positions and walking speeds, with slightly higher accuracy for right-side predictions (∼94%) than for the left side (∼91%). Moreover, we found a median time error <3% of the gait cycle duration across all speeds and positions (∼77 ms). Our results represent a promising first step towards using smartphones for remote gait analysis without requiring IMU fixation, but further research is needed to enhance generalizability and explore real-world deployment.
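As an illustration of the windowing scheme this abstract describes (not the authors' code; the function name, sampling rate, and labelling rule are our own assumptions), a minimal sketch in Python:

```python
import numpy as np

def make_windows(signal, fs, event_times, win_ms=20):
    """Split a 1-D sensor channel into non-overlapping windows of win_ms
    milliseconds and label each window 1 if it contains a gait event
    (event_times given in seconds), else 0. Illustrative sketch only."""
    win_len = max(1, int(round(fs * win_ms / 1000)))   # samples per window
    n_win = len(signal) // win_len
    windows = np.asarray(signal, float)[:n_win * win_len].reshape(n_win, win_len)
    labels = np.zeros(n_win, dtype=int)
    for t in event_times:
        w = int(t * fs) // win_len        # window index holding the event sample
        if 0 <= w < n_win:
            labels[w] = 1
    return windows, labels
```

In practice, one such window/label pair per axis (six IMU channels) would be stacked and fed to an LSTM classifier.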
Affiliation(s)
- Aske G Larsen
- Department of Chemistry and Bioscience, Aalborg University, Aalborg, Denmark
- Faculty of Behavioural and Movement Sciences, Biomechanics, Vrije Universiteit, Amsterdam, The Netherlands
- Line Ø Sadolin
- Department of Chemistry and Bioscience, Aalborg University, Aalborg, Denmark
- Trine R Thomsen
- Department of Chemistry and Bioscience, Aalborg University, Aalborg, Denmark
- Anderson S Oliveira
- Department of Materials and Production, Aalborg University, Aalborg, Denmark
2. Chen J, Liu G, Guo M. Data Fusion of Dual Foot-Mounted INS Based on Human Step Length Model. Sensors (Basel) 2024;24:1073. PMID: 38400230; PMCID: PMC10892232; DOI: 10.3390/s24041073.
Abstract
Pedestrian navigation methods based on inertial sensors are commonly used to solve navigation and positioning problems when satellite signals are unavailable. To address the accumulation of heading angle errors over time in pedestrian navigation systems that rely solely on the Zero Velocity Update (ZUPT) algorithm, the pedestrian's motion constraints can be used to bound the errors. First, a human step length model is built from kinematic data collected with a motion capture system. Second, we propose a bipedal constraint algorithm based on the established step length model. Real field experiments demonstrate that introducing the bipedal constraint algorithm reduces the mean radial errors of the two experiments by 68.16% and 50.61%, respectively. The results show that the proposed algorithm effectively reduces the radial error of the navigation solution and improves navigation accuracy.
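The bipedal constraint idea, bounding the estimated inter-foot distance by a plausible maximum step length, can be sketched as follows; this is our own simplified illustration, not the paper's algorithm:

```python
import numpy as np

def bipedal_constraint(p_left, p_right, max_step):
    """If the estimated distance between the two foot-mounted INS positions
    exceeds a plausible maximum step length, pull both estimates
    symmetrically back onto that bound. Illustrative sketch only."""
    p_left, p_right = np.asarray(p_left, float), np.asarray(p_right, float)
    d = p_right - p_left
    dist = np.linalg.norm(d)
    if dist <= max_step or dist == 0.0:
        return p_left, p_right            # constraint satisfied, no correction
    mid = (p_left + p_right) / 2
    half = d / dist * (max_step / 2)      # half-vector along the inter-foot axis
    return mid - half, mid + half
```

A full system would apply such a correction inside the ZUPT Kalman filter rather than as a hard projection.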
Affiliation(s)
- Jianqiang Chen
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
- Gang Liu
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Meifeng Guo
- Department of Precision Instrument, Tsinghua University, Beijing 100084, China
3. Bernaś M, Płaczek B, Lewandowski M. Ensemble of RNN Classifiers for Activity Detection Using a Smartphone and Supporting Nodes. Sensors (Basel) 2022;22:9451. PMID: 36502154; PMCID: PMC9739648; DOI: 10.3390/s22239451.
Abstract
Nowadays, sensor-equipped mobile devices allow us to detect basic daily activities accurately. However, the accuracy of the existing activity recognition methods decreases rapidly if the set of activities is extended and includes training routines, such as squats, jumps, or arm swings. Thus, this paper proposes a model of a personal area network with a smartphone (as a main node) and supporting sensor nodes that deliver additional data to increase activity-recognition accuracy. The introduced personal area sensor network takes advantage of the information from multiple sensor nodes attached to different parts of the human body. In this scheme, nodes process their sensor readings locally with the use of recurrent neural networks (RNNs) to categorize the activities. Then, the main node collects results from supporting sensor nodes and performs a final activity recognition run based on a weighted voting procedure. In order to save energy and extend the network's lifetime, sensor nodes report their local results only for specific types of recognized activity. The presented method was evaluated during experiments with sensor nodes attached to the waist, chest, leg, and arm. The results obtained for a set of eight activities show that the proposed approach achieves higher recognition accuracy when compared with the existing methods. Based on the experimental results, the optimal configuration of the sensor nodes was determined to maximize the activity-recognition accuracy and reduce the number of transmissions from supporting sensor nodes.
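The final fusion step described above, where the main node combines per-node classifier outputs with a weighted vote, might be sketched like this (function name and weighting scheme are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def weighted_vote(node_probs, node_weights):
    """Fuse per-node class-probability vectors with a weighted vote and
    return the winning class index. Illustrative sketch only."""
    node_probs = np.asarray(node_probs, float)   # shape (n_nodes, n_classes)
    w = np.asarray(node_weights, float)[:, None] # per-node trust weights
    fused = (node_probs * w).sum(axis=0)
    return int(np.argmax(fused))
```

Here each row would be the softmax output of one node's RNN; weights could reflect per-node validation accuracy.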
Affiliation(s)
- Marcin Bernaś
- Department of Computer Science and Automatics, University of Bielsko-Biała, Willowa 2, 43-309 Bielsko-Biała, Poland
- Bartłomiej Płaczek
- Institute of Computer Science, University of Silesia, Będzińska 39, 41-200 Sosnowiec, Poland
- Marcin Lewandowski
- Institute of Computer Science, University of Silesia, Będzińska 39, 41-200 Sosnowiec, Poland
4. Engelsman D, Sherif T, Meller S, Twele F, Klein I, Zamansky A, Volk HA. Measurement of Canine Ataxic Gait Patterns Using Body-Worn Smartphone Sensor Data. Front Vet Sci 2022;9:912253. PMID: 35990267; PMCID: PMC9386067; DOI: 10.3389/fvets.2022.912253.
Abstract
Ataxia is an impairment of the coordination of movement or the interaction of associated muscles, accompanied by a disturbance of the gait pattern. Diagnosis of this clinical sign, and evaluation of its severity is usually done using subjective scales during neurological examination. In this exploratory study we investigated if inertial sensors in a smart phone (3 axes of accelerometer and 3 axes of gyroscope) can be used to detect ataxia. The setting involved inertial sensor data collected by smartphone placed on the dog's back while walking in a straight line. A total of 770 walking sessions were evaluated comparing the gait of 55 healthy dogs to the one of 23 dogs with ataxia. Different machine learning techniques were used with the K-nearest neighbors technique reaching 95% accuracy in discriminating between a healthy control group and ataxic dogs, indicating potential use for smartphone apps for canine ataxia diagnosis and monitoring of treatment effect.
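A K-nearest-neighbors classifier such as the one that performed best here fits in a few lines; the feature vectors and labels below are placeholders, not the study's data:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority label among its k nearest
    training samples under Euclidean distance. Illustrative sketch only."""
    X_train = np.asarray(X_train, float)
    d = np.linalg.norm(X_train - np.asarray(x, float), axis=1)
    nearest = np.argsort(d)[:k]               # indices of the k closest samples
    votes = np.bincount(np.asarray(y_train)[nearest])
    return int(np.argmax(votes))
```

In the paper's setting, x would be a vector of gait features extracted from the smartphone IMU recording, with labels 0 (healthy) and 1 (ataxic).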
Affiliation(s)
- Daniel Engelsman
- The Hatter Department of Marine Technologies, University of Haifa, Haifa, Israel
- Tamara Sherif
- Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hanover, Hanover, Germany
- Sebastian Meller
- Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hanover, Hanover, Germany
- Friederike Twele
- Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hanover, Hanover, Germany
- Itzik Klein
- The Hatter Department of Marine Technologies, University of Haifa, Haifa, Israel
- Anna Zamansky
- Information Systems Department, University of Haifa, Haifa, Israel
- Correspondence: Anna Zamansky
- Holger A. Volk
- Department of Small Animal Medicine and Surgery, University of Veterinary Medicine Hanover, Hanover, Germany
- Center for Systems Neuroscience, Hanover, Germany
5. Wehbi M, Luge D, Hamann T, Barth J, Kaempf P, Zanca D, Eskofier BM. Surface-Free Multi-Stroke Trajectory Reconstruction and Word Recognition Using an IMU-Enhanced Digital Pen. Sensors (Basel) 2022;22:5347. PMID: 35891027; PMCID: PMC9318904; DOI: 10.3390/s22145347.
Abstract
Efficient handwriting trajectory reconstruction (TR) typically requires specific writing surfaces for detecting the movements of digital pens. Although several motion-based solutions have been developed to remove the need for writing surfaces, most are based on classical sensor fusion methods that, because sensor errors accumulate over time, are limited to tracing single strokes. In this work, we present an approach that maps the movements of an IMU-enhanced digital pen to relative displacement data. Training data are collected by means of a tablet. We propose several pre-processing and data-preparation methods to synchronize data between the pen and the tablet, which have different sampling rates, and train a convolutional neural network (CNN) to reconstruct multiple strokes without the need for stroke segmentation or post-processing correction of the predicted trajectory. The proposed system learns the relative displacement of the pen tip over time from the recorded raw sensor data, achieving a normalized error rate of 0.176 relative to the unit-scaled tablet ground-truth (GT) trajectory. To test the effectiveness of the approach, we train a neural network for character recognition from the reconstructed trajectories, which achieves a character error rate of 19.51%. Finally, a joint model that uses both the IMU data and the generated trajectories outperforms the sensor-only recognition approach by 0.75%.
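Once a network predicts per-step relative displacements of the pen tip, the absolute trajectory follows by cumulative summation; a minimal sketch of that reconstruction step (our own, not the paper's pipeline):

```python
import numpy as np

def reconstruct_trajectory(displacements, start=(0.0, 0.0)):
    """Accumulate per-step relative (dx, dy) displacements, as a network
    would predict them, into an absolute pen-tip trajectory that includes
    the starting point. Illustrative sketch only."""
    start = np.asarray(start, float)
    disp = np.asarray(displacements, float)
    return np.vstack([start, start + np.cumsum(disp, axis=0)])
```

This also makes the paper's core difficulty visible: any bias in the predicted displacements is integrated into a growing trajectory error.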
Affiliation(s)
- Mohamad Wehbi
- Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
- Daniel Luge
- Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
- Tim Hamann
- STABILO International GmbH, 90562 Heroldsberg, Germany
- Jens Barth
- STABILO International GmbH, 90562 Heroldsberg, Germany
- Peter Kaempf
- STABILO International GmbH, 90562 Heroldsberg, Germany
- Dario Zanca
- Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
- Bjoern M. Eskofier
- Machine Learning and Data Analytics Lab, Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
6. QuadNet: A Hybrid Framework for Quadrotor Dead Reckoning. Sensors (Basel) 2022;22:1426. PMID: 35214328; PMCID: PMC8878889; DOI: 10.3390/s22041426.
Abstract
Quadrotor usage is continuously increasing in both civilian and military applications such as surveillance, mapping, and deliveries. Commonly, quadrotors use an inertial navigation system combined with a global navigation satellite system (GNSS) receiver for outdoor applications and a camera for indoor/outdoor applications. For various reasons, such as poor lighting conditions or satellite signal blocking, the quadrotor's navigation solution may have to rely on the inertial navigation system alone. As a consequence, the navigation solution drifts over time due to errors and noise in the inertial sensor measurements. To handle such situations and bound the solution drift, the quadrotor dead reckoning (QDR) approach applies pedestrian dead reckoning principles: instead of flying the quadrotor in a straight-line trajectory, it is flown in a periodic motion in the vertical plane, enabling peak-to-peak (two local maximum points within a cycle) distance estimation. Although QDR improves on the pure inertial navigation solution, it has several shortcomings: it requires calibration before use, provides only peak-to-peak distance, and does not provide the quadrotor's altitude. To circumvent these issues, we propose QuadNet, a hybrid framework for quadrotor dead reckoning that estimates the quadrotor's three-dimensional position vector at any user-defined rate. As a hybrid approach, QuadNet uses both neural networks and model-based equations during its operation, and it requires only the inertial sensor readings to provide the position vector. Experimental results with DJI's Matrice 300 quadrotor show the benefits of the proposed approach.
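The peak-to-peak idea behind QDR can be illustrated by counting local maxima in a periodic signal and converting completed cycles into distance; a toy sketch under the assumption of a pre-calibrated distance per cycle (not the paper's method, which replaces this calibration with a learned model):

```python
import numpy as np

def peak_to_peak_distance(signal, dist_per_cycle):
    """Count local maxima (peaks) in a periodic signal and convert the
    number of peak-to-peak cycles into travelled distance, dead-reckoning
    style. Illustrative sketch only."""
    s = np.asarray(signal, float)
    # interior samples strictly greater than both neighbours are peaks
    peaks = np.flatnonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])) + 1
    cycles = max(0, len(peaks) - 1)
    return cycles * dist_per_cycle
```

A real implementation would add smoothing and a minimum peak distance to reject sensor noise.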
7. Yoshida K, Murao K. Load Position Estimation Method for Wearable Devices Based on Difference in Pulse Wave Arrival Time. Sensors (Basel) 2022;22:1090. PMID: 35161835; PMCID: PMC8840559; DOI: 10.3390/s22031090.
Abstract
With the increasing use of wearable devices equipped with various sensors, information on human activities, biometrics, and surrounding environments can be obtained from sensor data at any time and place. When such devices are attached to arbitrary body parts, and multiple devices are used to capture body-wide movements, it is important to estimate where each device is attached. In this study, we propose a method that estimates the load positions of wearable devices without requiring the user to perform specific actions. The proposed method estimates the time difference between a heartbeat obtained by an ECG sensor and a pulse wave obtained by a pulse sensor, and classifies the pulse sensor position from the estimated time difference. Data were collected from 12 body parts of four male subjects and one female subject, and the proposed method was evaluated in both user-dependent and user-independent settings. The average F-value was 1.0 when the number of target body parts ranged from two to five.
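The core classification rule, mapping a measured ECG-to-pulse arrival delay to the body part with the closest reference delay, might look like this; the reference delays in the test are made-up placeholders, not values from the study:

```python
def classify_position(measured_lag_s, reference_lags):
    """Return the body part whose reference ECG-to-pulse-wave delay
    (in seconds) is closest to the measured delay. reference_lags maps
    body-part name -> expected delay. Illustrative sketch only."""
    return min(reference_lags,
               key=lambda part: abs(reference_lags[part] - measured_lag_s))
```

The delay itself would be estimated upstream, e.g. as the time from an ECG R-peak to the corresponding pulse-wave peak.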
Affiliation(s)
- Kazuki Yoshida
- Graduate School of Information Science and Engineering, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu 525-8577, Shiga, Japan
- Kazuya Murao
- Graduate School of Information Science and Engineering, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu 525-8577, Shiga, Japan
- Strategic Creation Research Promotion Project (PRESTO), Japan Science and Technology Agency (JST), 4-1-8 Honmachi, Kawaguchi 332-0012, Saitama, Japan
8. Straczkiewicz M, James P, Onnela JP. A systematic review of smartphone-based human activity recognition methods for health research. NPJ Digit Med 2021;4:148. PMID: 34663863; PMCID: PMC8523707; DOI: 10.1038/s41746-021-00514-4.
Abstract
Smartphones are now nearly ubiquitous; their numerous built-in sensors enable continuous measurement of activities of daily living, making them especially well-suited for health research. Researchers have proposed various human activity recognition (HAR) systems aimed at translating measurements from smartphones into various types of physical activity. In this review, we summarized the existing approaches to smartphone-based HAR. For this purpose, we systematically searched Scopus, PubMed, and Web of Science for peer-reviewed articles published up to December 2020 on the use of smartphones for HAR. We extracted information on smartphone body location, sensors, and physical activity types studied and the data transformation techniques and classification schemes used for activity recognition. Consequently, we identified 108 articles and described the various approaches used for data acquisition, data preprocessing, feature extraction, and activity classification, identifying the most common practices, and their alternatives. We conclude that smartphones are well-suited for HAR research in the health sciences. For population-level impact, future studies should focus on improving the quality of collected data, address missing data, incorporate more diverse participants and activities, relax requirements about phone placement, provide more complete documentation on study participants, and share the source code of the implemented methods and algorithms.
Affiliation(s)
- Marcin Straczkiewicz
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA
- Peter James
- Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, 02215, USA
- Department of Environmental Health, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA
- Jukka-Pekka Onnela
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, 02115, USA
9. Daniel N, Goldberg F, Klein I. Smartphone Location Recognition with Unknown Modes in Deep Feature Space. Sensors (Basel) 2021;21:4807. PMID: 34300554; PMCID: PMC8309937; DOI: 10.3390/s21144807.
Abstract
Smartphone location recognition aims to identify the location of a smartphone on the user during specific actions such as talking or texting. This task is critical for accurate indoor navigation using pedestrian dead reckoning. Usually, a supervised network is trained on a set of defined user modes (smartphone locations) available during the training process. When the user then adopts an unknown mode, the classifier is forced to identify it as one of the original modes it was trained on, and such classification errors degrade the accuracy of the navigation solution. An existing solution for detecting unknown modes relies on a probability threshold over the known modes, yet it fails in this problem setup. Therefore, to identify unknown modes, two end-to-end machine-learning approaches are derived that use only the smartphone's accelerometer measurements. Results on six different datasets show that the proposed approaches classify unknown smartphone locations with an accuracy of 93.12%. The proposed approaches can easily be applied to other classification problems containing unknown modes.
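The probability-threshold baseline that this abstract says fails in its setup is easy to state; a sketch of that baseline (threshold value assumed) clarifies what the proposed approaches must improve on:

```python
import numpy as np

def detect_unknown(probs, threshold=0.9):
    """Baseline open-set rule: label a sample as an 'unknown' mode when the
    classifier's top softmax probability falls below a confidence
    threshold; otherwise return the predicted class index. Sketch only."""
    probs = np.asarray(probs, float)
    return "unknown" if probs.max() < threshold else int(np.argmax(probs))
```

The paper's point is that deep networks are often confidently wrong on unseen modes, so this rule alone is insufficient; hence the feature-space approaches it derives instead.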
Affiliation(s)
- Nati Daniel
- Technion-Israel Institute of Technology, 1st Efron st., Haifa 35254, Israel
- Felix Goldberg
- Department of Marine Technologies, University of Haifa, 199 Aba Khoushy Ave., Haifa 3498838, Israel
- Itzik Klein
- Department of Marine Technologies, University of Haifa, 199 Aba Khoushy Ave., Haifa 3498838, Israel
10. Daniel N, Klein I. INIM: Inertial Images Construction with Applications to Activity Recognition. Sensors (Basel) 2021;21:4787. PMID: 34300524; PMCID: PMC8309892; DOI: 10.3390/s21144787.
Abstract
Human activity recognition aims to classify user activity in various applications such as healthcare, gesture recognition, and indoor navigation. In the latter, smartphone location recognition is gaining attention because it enhances indoor positioning accuracy. Commonly, the smartphone's inertial sensor readings are used as input to a machine learning algorithm that performs the classification. There are several approaches to such a task: feature-based approaches, one-dimensional deep learning algorithms, and two-dimensional deep learning architectures. With deep learning approaches, feature engineering is redundant; in addition, two-dimensional deep learning approaches make it possible to borrow methods from the well-established computer vision domain. In this paper, a framework for smartphone location and human activity recognition, based on the smartphone's inertial sensors, is proposed. The contributions of this work are a novel time-series encoding approach, from inertial signals to inertial images, and transfer learning from the computer vision domain to the inertial sensor classification problem. Four different datasets are employed to show the benefits of the proposed approach. Because the proposed framework performs classification on inertial sensor readings, it can be applied to other classification tasks using inertial data, and it can be adapted to handle other types of sensory data collected for classification.
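The paper's specific signal-to-image encoding is its own contribution; as a hedged illustration of the general idea of turning a 1-D inertial signal into an image, one widely used encoding is the Gramian Angular Summation Field:

```python
import numpy as np

def gramian_angular_field(x):
    """Encode a 1-D signal as a Gramian Angular Summation Field image:
    rescale to [-1, 1], map each sample to an angle phi = arccos(value),
    and build the matrix cos(phi_i + phi_j). Illustrative sketch only;
    not the INIM encoding from the paper."""
    x = np.asarray(x, float)
    rng = x.max() - x.min()
    x = (2 * (x - x.min()) / rng - 1) if rng > 0 else np.zeros_like(x)
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])
```

The resulting n-by-n matrix can be fed to a standard image CNN, which is what makes transfer learning from computer vision possible.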
Affiliation(s)
- Nati Daniel
- Technion-Israel Institute of Technology, 1st Efron st., Haifa 3525433, Israel
- Correspondence:
- Itzik Klein
- Department of Marine Technologies, University of Haifa, 199 Aba Khoushy Ave., Haifa 3498838, Israel
11. Zempo K, Arai T, Aoki T, Okada Y. Sensing Framework for the Internet of Actors in the Value Co-Creation Process with a Beacon-Attachable Indoor Positioning System. Sensors (Basel) 2020;21:83. PMID: 33375596; PMCID: PMC7795509; DOI: 10.3390/s21010083.
Abstract
To evaluate and improve the value of a service, it is important to measure not only the outcomes but also the process of the service. Value co-creation (VCC) is not limited to outcomes, especially in interpersonal services based on interactions between actors. In this paper, a sensing framework for the VCC process in retail stores is proposed, built on an environment-recognition-based indoor positioning system with high positioning performance in a metal-shelf environment. Conventional indoor positioning systems use radio waves; errors are therefore caused by reflection, absorption, and interference from metal shelves. The proposed method improves positioning performance by using an IR (infrared) slit and IR light, which avoid such errors. The system was designed to recognize many unspecified people based on receivers installed in the service environment. In addition, sensor networking was achieved by adding a function that transmits payload and identification simultaneously from the beacons attached to the positioning targets. The effectiveness of the proposed method was verified first in an experimental environment under ideal conditions and subsequently in a real retail store. In our experimental setup, in a comparison with equal element numbers, positioning was possible within an error of 96.2 mm in a static environment, in contrast to an average positioning error of approximately 648 mm measured with the radio-wave-based method (Bluetooth low-energy fingerprinting). Moreover, when multiple beacons were used simultaneously within the measurement range of one receiver, appropriate settings of the pulse interval and jitter rate were determined by simulation. It was also confirmed that, in a real scenario, it is possible to measure changes in movement and in the positional relationships between people. This result shows the feasibility of measuring and evaluating the VCC process in retail stores, although measuring the interaction between actors remained difficult.
Affiliation(s)
- Keiichi Zempo
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba 305-8573, Ibaraki, Japan
- Correspondence:
- Taiga Arai
- Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Ibaraki, Japan
- Takuya Aoki
- Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba 305-8573, Ibaraki, Japan
- Yukihiko Okada
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba 305-8573, Ibaraki, Japan
12. A Mapping Review of Physical Activity Recordings Derived From Smartphone Accelerometers. J Phys Act Health 2020;17:1184-1192. PMID: 33027761; DOI: 10.1123/jpah.2020-0041.
Abstract
BACKGROUND: Smartphones with embedded sensors, such as accelerometers, are promising tools for assessing physical activity (PA), provided they can produce valid and reliable indices. The authors aimed to summarize studies on the PA measurement properties of smartphone accelerometers compared with research-grade PA monitors or other objective methods across the intensity spectrum, and to report the effects of different smartphone placements on the accuracy of measurements.
METHODS: A systematic search was conducted on July 1, 2019 in PubMed, Embase, SPORTDiscus, and Scopus, followed by screening.
RESULTS: Nine studies were included, showing moderate-to-good agreement between PA indices derived from smartphone accelerometers and research-grade PA monitors and/or indirect calorimetry. Three studies investigated measurement properties across smartphone placements, finding small differences. Large heterogeneity across studies hampered further comparisons.
CONCLUSIONS: Despite this moderate-to-good agreement, the validity of smartphone monitoring is currently challenged by poor intermonitor reliability between smartphone brands/versions, heterogeneity in validation protocols, the sparsity of studies, and the need to address the effects of smartphone placement.