1
Islam B, McElwain NL, Li J, Davila MI, Hu Y, Hu K, Bodway JM, Dhekne A, Roy Choudhury R, Hasegawa-Johnson M. Preliminary Technical Validation of LittleBeats™: A Multimodal Sensing Platform to Capture Cardiac Physiology, Motion, and Vocalizations. Sensors (Basel) 2024; 24:901. [PMID: 38339617] [PMCID: PMC10857055] [DOI: 10.3390/s24030901] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 08/07/2023] [Revised: 01/19/2024] [Accepted: 01/19/2024] [Indexed: 02/12/2024]
Abstract
Across five studies, we present the preliminary technical validation of an infant-wearable platform, LittleBeats™, that integrates electrocardiogram (ECG), inertial measurement unit (IMU), and audio sensors. Each sensor modality is validated against data from gold-standard equipment using established algorithms and laboratory tasks. Interbeat interval (IBI) data obtained from the LittleBeats™ ECG sensor indicate acceptable mean absolute percent error rates for both adults (Study 1, N = 16) and infants (Study 2, N = 5) across low- and high-challenge sessions, as well as expected patterns of change in respiratory sinus arrhythmia (RSA). For automated activity recognition (upright vs. walk vs. glide vs. squat) using accelerometer data from the LittleBeats™ IMU (Study 3, N = 12 adults), performance was good to excellent, with smartphone (industry-standard) data outperforming LittleBeats™ by less than 4 percentage points. Speech emotion recognition (Study 4, N = 8 adults) applied to LittleBeats™ versus smartphone audio data indicated comparable performance, with no significant difference in error rates. On an automatic speech recognition task (Study 5, N = 12 adults), the best-performing algorithm yielded relatively low word error rates, although the LittleBeats™ error rate (4.16%) was somewhat higher than the smartphone error rate (2.73%). Together, these validation studies indicate that the LittleBeats™ sensors yield data quality largely comparable to that obtained from gold-standard devices and established protocols used in prior research.
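Studies 1 and 2 summarise IBI agreement with the gold-standard ECG as mean absolute percent error. A minimal sketch of that metric, assuming the two IBI series have already been beat-aligned (function and variable names here are illustrative, not from the paper):

```python
import numpy as np

def mape(reference_ibi, device_ibi):
    """Mean absolute percent error between two beat-aligned IBI series (ms)."""
    ref = np.asarray(reference_ibi, dtype=float)
    dev = np.asarray(device_ibi, dtype=float)
    return float(np.mean(np.abs(dev - ref) / ref) * 100.0)

# Illustrative beat-aligned series: 5 ms errors on 500 ms beats -> 1% MAPE
ref = [500.0, 500.0, 500.0, 500.0]
dev = [505.0, 495.0, 505.0, 495.0]
print(round(mape(ref, dev), 2))  # 1.0
```

In practice the hard part is the beat alignment itself (dropped or spurious beats shift the two series relative to each other), which is why validation protocols pair the error metric with an established R-peak matching procedure.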
Affiliation(s)
- Bashima Islam
- Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, Worcester, MA 01609, USA
- Nancy L. McElwain
- Department of Human Development and Family Studies, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Jialu Li
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Maria I. Davila
- Research Triangle Institute, Research Triangle Park, NC 27709, USA
- Yannan Hu
- Department of Human Development and Family Studies, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Kexin Hu
- Department of Human Development and Family Studies, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Jordan M. Bodway
- Department of Human Development and Family Studies, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Ashutosh Dhekne
- School of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Romit Roy Choudhury
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Mark Hasegawa-Johnson
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
2
Geangu E, Smith WAP, Mason HT, Martinez-Cedillo AP, Hunter D, Knight MI, Liang H, del Carmen Garcia de Soria Bazan M, Tse ZTH, Rowland T, Corpuz D, Hunter J, Singh N, Vuong QC, Abdelgayed MRS, Mullineaux DR, Smith S, Muller BR. EgoActive: Integrated Wireless Wearable Sensors for Capturing Infant Egocentric Auditory-Visual Statistics and Autonomic Nervous System Function 'in the Wild'. Sensors (Basel) 2023; 23:7930. [PMID: 37765987] [PMCID: PMC10534696] [DOI: 10.3390/s23187930] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 07/28/2023] [Revised: 08/25/2023] [Accepted: 09/11/2023] [Indexed: 09/29/2023]
Abstract
There have been sustained efforts toward using naturalistic methods in developmental science to measure infant behaviors in the real world from an egocentric perspective, because statistical regularities in the environment can shape, and be shaped by, the developing infant. However, there is no user-friendly and unobtrusive technology to densely and reliably sample life in the wild. To address this gap, we present the design, implementation, and validation of the EgoActive platform, which overcomes the limitations of existing wearable technologies for developmental research. EgoActive records the active infant's egocentric perspective of the world via a miniature wireless head-mounted camera, concurrently with the infant's physiological responses to this input via a lightweight, wireless ECG/acceleration sensor. We also provide software tools to facilitate data analyses. Our validation studies showed that the cameras and body sensors performed well. Families also reported that the platform was comfortable, easy to use and operate, and did not interfere with daily activities. The synchronized multimodal data from the EgoActive platform can help tease apart complex processes that are important for child development, furthering our understanding of areas ranging from executive function to emotion processing and social learning.
Affiliation(s)
- Elena Geangu
- Psychology Department, University of York, York YO10 5DD, UK
- William A. P. Smith
- Department of Computer Science, University of York, York YO10 5DD, UK
- Harry T. Mason
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- A. P. Martinez-Cedillo
- Psychology Department, University of York, York YO10 5DD, UK
- David Hunter
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Marina I. Knight
- Department of Mathematics, University of York, York YO10 5DD, UK
- Haipeng Liang
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- M. del Carmen Garcia de Soria Bazan
- Psychology Department, University of York, York YO10 5DD, UK
- Zion Tsz Ho Tse
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK
- Thomas Rowland
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Dom Corpuz
- Protolabs, Halesfield 8, Telford TF7 4QN, UK
- Josh Hunter
- Department of Computer Science, University of York, York YO10 5DD, UK
- Nishant Singh
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Quoc C. Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Mona Ragab Sayed Abdelgayed
- Department of Computer Science, University of York, York YO10 5DD, UK
- David R. Mullineaux
- Department of Mathematics, University of York, York YO10 5DD, UK
- Stephen Smith
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK
- Bruce R. Muller
- Department of Computer Science, University of York, York YO10 5DD, UK
3
Grooby E, Sitaula C, Ahani S, Holsti L, Malhotra A, Dumont GA, Marzbanrad F. Neonatal Face and Facial Landmark Detection from Video Recordings. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 2023; 2023:1-5. [PMID: 38083549] [DOI: 10.1109/embc40787.2023.10340960] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Indexed: 12/18/2023]
Abstract
This paper explores automated face and facial landmark detection for neonates, an important first step in many video-based neonatal health applications, such as vital sign estimation, pain assessment, sleep-wake classification, and jaundice detection. Utilising three publicly available datasets of neonates in the clinical environment, 366 images (258 subjects) and 89 images (66 subjects) were annotated for training and testing, respectively. Transfer learning was applied to two YOLO-based models, with input training images augmented with random horizontal flipping, photometric colour distortion, translation, and scaling during each training epoch. Additionally, the re-orientation of input images and the fusion of trained deep learning models were explored. Our proposed model, based on YOLOv7Face, outperformed existing methods with a mean average precision of 84.8% for face detection and a normalised mean error of 0.072 for facial landmark detection. Overall, this will assist in the development of fully automated neonatal health assessment algorithms. Clinical relevance: Accurate face and facial landmark detection provides an automated, non-contact option to assist in video-based neonatal health applications.
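The landmark result above is reported as a normalised mean error (NME): the mean Euclidean distance between predicted and ground-truth landmarks, divided by a normalisation factor. A minimal sketch of the metric, noting that the normalisation factor varies across papers (interocular distance, face-box size, etc.) and the choice here is purely illustrative:

```python
import numpy as np

def normalized_mean_error(pred, gt, norm_factor):
    """Normalised mean error for facial landmarks.

    pred, gt: (N, 2) arrays of predicted / ground-truth landmark coordinates.
    norm_factor: normalisation distance, e.g. a face bounding-box size.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    per_point = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per landmark
    return float(per_point.mean() / norm_factor)

# Illustrative: 5 landmarks, each off by 7.2 px, normalised by a 100 px face box
gt = np.zeros((5, 2))
pred = gt + np.array([7.2, 0.0])
print(round(normalized_mean_error(pred, gt, 100.0), 3))  # 0.072
```

Because the error is normalised, NME is comparable across faces of different sizes in the frame, which matters for neonatal video where camera distance is uncontrolled.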
4
Chen W, Zhou Z, Bao J, Wang C, Chen H, Xu C, Xie G, Shen H, Wu H. Classifying Heart-Sound Signals Based on CNN Trained on MelSpectrum and Log-MelSpectrum Features. Bioengineering (Basel) 2023; 10:645. [PMID: 37370576] [DOI: 10.3390/bioengineering10060645] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/24/2023] [Revised: 05/14/2023] [Accepted: 05/22/2023] [Indexed: 06/29/2023]
Abstract
The intelligent classification of heart-sound signals can assist clinicians in the rapid diagnosis of cardiovascular diseases. Mel spectrograms (MelSpectrums) and log Mel spectrograms (Log-MelSpectrums) based on a short-time Fourier transform (STFT) can represent the temporal and spectral structure of the original heart-sound signals. Recently, various systems based on convolutional neural networks (CNNs), trained on the MelSpectrum and Log-MelSpectrum of segmented heart-sound frames, have been shown to classify heart-sound signals accurately and to outperform systems using handcrafted features. However, there is no a priori evidence for which input representation is best when classifying heart sounds with CNN models. Therefore, in this study, the MelSpectrum and Log-MelSpectrum features of heart-sound signals were analysed theoretically in combination with a mathematical model of cardiac-sound acquisition. Both the experimental results and the theoretical analysis demonstrated that Log-MelSpectrum features can reduce the classification difference between domains and improve the performance of CNNs for heart-sound classification.
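The feature pipeline described above (STFT, mel filterbank, then a log) can be sketched with numpy alone. This is a generic illustration of how MelSpectrum and Log-MelSpectrum features are typically derived, not the paper's implementation; the sample rate, frame size, and mel-band count below are illustrative:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular mel filters mapping an STFT power spectrum to n_mels bands."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        for k in range(l, c):                      # rising edge of triangle
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling edge of triangle
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrum(signal, sr=2000, n_fft=256, hop=128, n_mels=16):
    """MelSpectrum and Log-MelSpectrum of a 1-D heart-sound signal."""
    frames = [signal[s:s + n_fft] * np.hanning(n_fft)
              for s in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2    # STFT power spectrum
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T    # MelSpectrum
    log_mel = 10.0 * np.log10(mel + 1e-10)               # Log-MelSpectrum (dB)
    return mel, log_mel

# Illustrative 2 s signal at 2 kHz (heart sounds concentrate below ~200 Hz)
t = np.arange(4000) / 2000.0
mel, log_mel = log_mel_spectrum(np.sin(2 * np.pi * 60.0 * t))
print(mel.shape, log_mel.shape)  # (30, 16) (30, 16)
```

The log step is the crux of the paper's finding: taking the logarithm compresses multiplicative gain differences between recording devices into additive offsets, which is one intuition for why Log-MelSpectrum features reduce the domain gap for CNN classifiers.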
Affiliation(s)
- Wei Chen
- Medical School, Nantong University, Nantong 226001, China
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Zixuan Zhou
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Junze Bao
- Medical School, Nantong University, Nantong 226001, China
- Chengniu Wang
- Medical School, Nantong University, Nantong 226001, China
- Hanqing Chen
- Medical School, Nantong University, Nantong 226001, China
- Chen Xu
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Gangcai Xie
- Medical School, Nantong University, Nantong 226001, China
- Hongmin Shen
- School of Information Science and Technology, Nantong University, Nantong 226019, China
- Huiqun Wu
- Medical School, Nantong University, Nantong 226001, China