1
Banos O, Comas-González Z, Medina J, Polo-Rodríguez A, Gil D, Peral J, Amador S, Villalonga C. Sensing technologies and machine learning methods for emotion recognition in autism: Systematic review. Int J Med Inform 2024; 187:105469. [PMID: 38723429] [DOI: 10.1016/j.ijmedinf.2024.105469]
Abstract
BACKGROUND Human Emotion Recognition (HER) has been a popular field of study in recent years. Despite the great progress made so far, relatively little attention has been paid to the use of HER in autism. People with autism are known to face difficulties with daily social communication and with the prototypical interpretation of emotional responses, which are most frequently conveyed via facial expressions. This poses significant practical challenges to the application of regular HER systems, which are normally developed for and by neurotypical people. OBJECTIVE This study reviews the literature on the use of HER systems in autism, particularly with respect to sensing technologies and machine learning methods, in order to identify existing barriers and possible future directions. METHODS We conducted a systematic review of articles published between January 2011 and June 2023 according to the 2020 PRISMA guidelines. Manuscripts were identified by searching the Web of Science and Scopus databases. Manuscripts were included when they related to emotion recognition, used sensors and machine learning techniques, and involved children, young people, or adults with autism. RESULTS The search yielded 346 articles. A total of 65 publications met the eligibility criteria and were included in the review. CONCLUSIONS Studies predominantly used facial expression techniques as the emotion recognition method. Consequently, video cameras were the most widely used devices across studies, although a growing trend in the use of physiological sensors has been observed lately. Happiness, sadness, anger, fear, disgust, and surprise were the most frequently addressed emotions. Classical supervised machine learning techniques were primarily used, at the expense of unsupervised approaches or more recent deep learning models. Studies focused on autism in a broad sense, but limited efforts have been directed towards more specific disorders of the spectrum. Privacy and security issues were seldom addressed, and if so, at a rather insufficient level of detail.
Affiliation(s)
- Oresti Banos: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Zhoe Comas-González: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain; Department of Computer Science and Electronics, Universidad de la Costa, Barranquilla, Colombia
- Javier Medina: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
- Aurora Polo-Rodríguez: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain; Department of Computer Science, University of Jaén, Jaén, Spain
- David Gil: Department of Computer Technology and Computation, University of Alicante, Alicante, Spain
- Jesús Peral: Department of Software and Computing Systems, University of Alicante, Alicante, Spain
- Sandra Amador: Department of Computer Technology and Computation, University of Alicante, Alicante, Spain
- Claudia Villalonga: Department of Computer Engineering, Automation and Robotics, University of Granada, Granada, Spain
2
Li J, Washington P. A Comparison of Personalized and Generalized Approaches to Emotion Recognition Using Consumer Wearable Devices: Machine Learning Study. JMIR AI 2024; 3:e52171. [PMID: 38875573] [PMCID: PMC11127131] [DOI: 10.2196/52171]
Abstract
BACKGROUND Long-term negative emotions and chronic stress are associated with a wide range of potential adverse health effects, from headaches to cardiovascular disease. Because many indicators of stress are imperceptible to observers, the early detection of stress remains a pressing medical need, as it can enable early intervention. Physiological signals offer a noninvasive method for monitoring affective states and are recorded by a growing number of commercially available wearables. OBJECTIVE We aim to study the differences between personalized and generalized machine learning models for 3-class emotion classification (neutral, stress, and amusement) using wearable biosignal data. METHODS We developed a neural network for the 3-class emotion classification problem using data from the Wearable Stress and Affect Detection (WESAD) data set, a multimodal data set with physiological signals from 15 participants. We compared the results between a participant-exclusive generalized, a participant-inclusive generalized, and a personalized deep learning model. RESULTS For the 3-class classification problem, our personalized model achieved an average accuracy of 95.06% and an F1-score of 91.71%; our participant-inclusive generalized model achieved an average accuracy of 66.95% and an F1-score of 42.50%; and our participant-exclusive generalized model achieved an average accuracy of 67.65% and an F1-score of 43.05%. CONCLUSIONS Our results emphasize the need for increased research into personalized emotion recognition models, given that they outperform generalized models in certain contexts. We also demonstrate that personalized machine learning models for emotion classification are viable and can achieve high performance.
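The three evaluation schemes compared in this study can be illustrated with a small data-splitting sketch. This is not the study's code; the participant IDs, sample counts, and 70/30 within-participant split are illustrative assumptions.

```python
# Sketch of the three evaluation schemes, assuming a toy dataset keyed by
# participant ID. All names and numbers are illustrative only.

def split_schemes(data, test_participant, test_fraction=0.3):
    """Return (train, test) sample sets for each evaluation scheme."""
    # Participant-exclusive generalized: the test participant is held out entirely.
    exclusive_train = [s for p, samples in data.items() if p != test_participant
                       for s in samples]
    exclusive_test = list(data[test_participant])

    # Participant-inclusive generalized: part of the test participant's data
    # joins the training pool alongside everyone else's data.
    n_test = int(len(data[test_participant]) * test_fraction)
    inclusive_test = data[test_participant][-n_test:]
    inclusive_train = exclusive_train + data[test_participant][:-n_test]

    # Personalized: train and test on the same participant only.
    personal_train = data[test_participant][:-n_test]
    personal_test = inclusive_test

    return {
        "participant_exclusive": (exclusive_train, exclusive_test),
        "participant_inclusive": (inclusive_train, inclusive_test),
        "personalized": (personal_train, personal_test),
    }

data = {"P1": list(range(10)), "P2": list(range(10, 20)), "P3": list(range(20, 30))}
schemes = split_schemes(data, "P1")
```

The large gap the authors report between the personalized and both generalized schemes follows directly from which rows of the target participant's data are allowed into training.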
Affiliation(s)
- Joe Li: Information and Computer Sciences, University of Hawai`i at Mānoa, Honolulu, HI, United States
- Peter Washington: Information and Computer Sciences, University of Hawai`i at Mānoa, Honolulu, HI, United States
3
Wimmer L, Steininger TM, Schmid A, Wittwer J. Category learning in autistic individuals: A meta-analysis. Psychon Bull Rev 2024; 31:460-483. [PMID: 37673843] [PMCID: PMC11061057] [DOI: 10.3758/s13423-023-02365-4]
Abstract
Learning new categories is a fundamental human skill. In the present article, we report the first comprehensive meta-analysis of category learning in autism. Including studies comparing groups of autistic and nonautistic individuals, we investigated whether autistic individuals differ in category learning from nonautistic individuals. In addition, we examined moderator variables accounting for variability between studies. A multilevel meta-analysis of k = 50 studies examining n = 1,220 autistic and n = 1,445 nonautistic individuals, based on 112 effect sizes in terms of the standardized mean difference, revealed lower category learning skills for autistic compared with nonautistic individuals, g = -0.55, 95% CI = [-0.73, -0.38], p < .0001. According to moderator analyses, the significant amount of heterogeneity, Q(111) = 617.88, p < .0001, was explained by only one of the moderator variables under investigation, namely study language. For the remaining variables (age, year of publication, risk of bias, type of control group, IQ of the autistic group, percentage of male autistic participants, type of category, type of task, and type of dependent measure), there were no significant effects. Although hat values and Cook's distance statistics confirmed the robustness of the findings, the results of Egger's test and a funnel plot suggested the presence of publication bias, reflecting an overrepresentation of disadvantageous findings for autistic groups. Objectives for future work include identifying additional moderator variables, examining downstream effects of suboptimal category learning skills, and developing interventions.
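As background for how a pooled effect such as g = -0.55 arises, here is a minimal random-effects pooling sketch using the classic DerSimonian-Laird estimator. The paper itself fits a multilevel model, and every number below is invented for illustration.

```python
# DerSimonian-Laird random-effects pooling: inverse-variance weights with an
# added between-study variance component tau^2. Illustrative data only.

def dersimonian_laird(effects, variances):
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    pooled_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled_re, tau2, (pooled_re - 1.96 * se, pooled_re + 1.96 * se)

effects = [-0.8, -0.4, -0.6, -0.2, -0.7]   # per-study Hedges' g (made up)
variances = [0.05, 0.08, 0.04, 0.10, 0.06]
g, tau2, ci = dersimonian_laird(effects, variances)
```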
Affiliation(s)
- Lena Wimmer: Department of Education, University of Freiburg, Rempartstr. 11, D-79098, Freiburg im Breisgau, Germany
- Tim M Steininger: Department of Education, University of Freiburg, Rempartstr. 11, D-79098, Freiburg im Breisgau, Germany
- Annalena Schmid: Department of Education, University of Freiburg, Rempartstr. 11, D-79098, Freiburg im Breisgau, Germany; Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany
- Jörg Wittwer: Department of Education, University of Freiburg, Rempartstr. 11, D-79098, Freiburg im Breisgau, Germany
4
Jaiswal A, Kruiper R, Rasool A, Nandkeolyar A, Wall DP, Washington P. Digitally Diagnosing Multiple Developmental Delays Using Crowdsourcing Fused With Machine Learning: Protocol for a Human-in-the-Loop Machine Learning Study. JMIR Res Protoc 2024; 13:e52205. [PMID: 38329783] [PMCID: PMC10884895] [DOI: 10.2196/52205]
Abstract
BACKGROUND A considerable number of minors in the United States are diagnosed with developmental or psychiatric conditions, potentially influenced by underdiagnosis factors such as cost, distance, and clinician availability. Despite the potential of digital phenotyping tools with machine learning (ML) approaches to expedite diagnoses and enhance diagnostic services for pediatric psychiatric conditions, existing methods face limitations because they use a limited set of social features for prediction tasks and focus on a single binary prediction, resulting in uncertain accuracies. OBJECTIVE This study aims to propose the development of a gamified web system for data collection, followed by a fusion of novel crowdsourcing algorithms with ML behavioral feature extraction approaches to simultaneously predict diagnoses of autism spectrum disorder and attention-deficit/hyperactivity disorder in a precise and specific manner. METHODS The proposed pipeline will consist of (1) gamified web applications to curate videos of social interactions adaptively based on the needs of the diagnostic system, (2) behavioral feature extraction techniques consisting of automated ML methods and novel crowdsourcing algorithms, and (3) the development of ML models that classify several conditions simultaneously and that adaptively request additional information based on uncertainties about the data. RESULTS A preliminary version of the web interface has been implemented, and a prior feature selection method has highlighted a core set of behavioral features that can be targeted through the proposed gamified approach. CONCLUSIONS The prospect for high reward stems from the possibility of creating the first artificial intelligence-powered tool that can identify complex social behaviors well enough to distinguish conditions with nuanced differentiators such as autism spectrum disorder and attention-deficit/hyperactivity disorder. 
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/52205.
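The protocol's step of adaptively requesting additional information based on uncertainty can be sketched as an entropy check on predicted class probabilities. The threshold, class count, and probability vectors below are illustrative assumptions, not values from the protocol.

```python
# Sketch: flag predictions whose class-probability entropy is high, so the
# pipeline can request more videos or crowdsourced annotations for that child.
import math

def predictive_entropy(probs):
    """Shannon entropy (nats) of a discrete probability vector."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def needs_more_information(probs, threshold=0.9):
    """True when the prediction is too uncertain to act on."""
    return predictive_entropy(probs) > threshold

confident = [0.9, 0.05, 0.05]    # strong evidence for one condition
uncertain = [0.4, 0.35, 0.25]    # e.g., ASD vs ADHD not yet separable
```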
Affiliation(s)
- Aditi Jaiswal: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Ruben Kruiper: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Abdur Rasool: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Aayush Nandkeolyar: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Dennis P Wall: Department of Pediatrics (Systems Medicine), Stanford University School of Medicine, Stanford, CA, United States; Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, United States
- Peter Washington: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
5
Sun Y, Kargarandehkordi A, Slade C, Jaiswal A, Busch G, Guerrero A, Phillips KT, Washington P. Personalized Deep Learning for Substance Use in Hawaii: Protocol for a Passive Sensing and Ecological Momentary Assessment Study. JMIR Res Protoc 2024; 13:e46493. [PMID: 38324375] [PMCID: PMC10882478] [DOI: 10.2196/46493]
Abstract
BACKGROUND Artificial intelligence (AI)-powered digital therapies that detect methamphetamine cravings via consumer devices have the potential to reduce health care disparities by providing remote and accessible care to communities with limited care options, such as Native Hawaiian, Filipino, and Pacific Islander communities. However, Native Hawaiian, Filipino, and Pacific Islander communities are understudied with respect to digital therapeutics and AI health sensing despite using technology at the same rates as other racial groups. OBJECTIVE In this study, we aimed to understand the feasibility of continuous remote digital monitoring and ecological momentary assessments in Native Hawaiian, Filipino, and Pacific Islander communities in Hawaii by curating a novel data set of longitudinal Fitbit (Fitbit Inc) biosignals with the corresponding craving and substance use labels. We also aimed to develop personalized AI models that predict methamphetamine craving events in real time using wearable sensor data. METHODS We will develop personalized AI and machine learning models for methamphetamine use and craving prediction in 40 individuals from Native Hawaiian, Filipino, and Pacific Islander communities by curating a novel data set of real-time Fitbit biosensor readings and the corresponding participant annotations (ie, raw self-reported substance use data) of their methamphetamine use and cravings. In the process of collecting this data set, we will gain insights into cultural and other human factors that can challenge the proper acquisition of precise annotations. With the resulting data set, we will use self-supervised learning AI approaches, a new family of machine learning methods that allow a neural network to be trained without labels by being optimized to make predictions about the data itself. This paradigm is gaining increased attention in AI for health care. The inputs to the proposed AI models are Fitbit biosensor readings, and the outputs are predictions of methamphetamine use or craving. RESULTS To date, more than 40 individuals have expressed interest in participating in the study, and we have successfully recruited our first 5 participants with minimal logistical challenges and proper compliance. Several logistical challenges that the research team has encountered so far, and the related implications, are discussed. CONCLUSIONS We expect to develop models that significantly outperform traditional supervised methods by fine-tuning to the data of each participant. Such methods will enable AI solutions that work with the limited data available from Native Hawaiian, Filipino, and Pacific Islander populations and that are inherently unbiased owing to their personalized nature. Such models can support future AI-powered digital therapeutics for substance abuse. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) DERR1-10.2196/46493.
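The self-supervised idea, training on labels derived from the signal itself rather than from craving annotations, can be sketched with a simple pretext task (here, "are two windows temporally adjacent?"). The window width and synthetic signal are assumptions for illustration, not the study's pipeline.

```python
# Generate pretext-task labels from an unlabeled biosignal: pairs of windows
# labeled 1 if adjacent in time, 0 otherwise. A network trained on this task
# learns signal structure without any craving/use labels.
import random

def make_windows(signal, width):
    return [signal[i:i + width] for i in range(0, len(signal) - width + 1, width)]

def pretext_pairs(windows, n_pairs, seed=0):
    """Label 1 if the two windows are adjacent, else 0."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i = rng.randrange(len(windows) - 1)
        if rng.random() < 0.5:
            pairs.append(((windows[i], windows[i + 1]), 1))      # adjacent
        else:
            j = rng.choice([k for k in range(len(windows)) if abs(k - i) > 1])
            pairs.append(((windows[i], windows[j]), 0))          # distant
    return pairs

signal = [float(t % 60) for t in range(600)]   # stand-in for a heart-rate trace
windows = make_windows(signal, width=60)
pairs = pretext_pairs(windows, n_pairs=20)
```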
Affiliation(s)
- Yinan Sun: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Ali Kargarandehkordi: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Christopher Slade: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Aditi Jaiswal: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Gerald Busch: Department of Psychiatry, University of Hawaii at Manoa, Honolulu, HI, United States
- Anthony Guerrero: Department of Psychiatry, University of Hawaii at Manoa, Honolulu, HI, United States
- Kristina T Phillips: Center for Integrated Health Care Research, Kaiser Permanente Hawaii, Honolulu, HI, United States
- Peter Washington: Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
6
Washington P. Personalized Machine Learning using Passive Sensing and Ecological Momentary Assessments for Meth Users in Hawaii: A Research Protocol. medRxiv 2023:2023.08.24.23294587. [PMID: 37662253] [PMCID: PMC10473804] [DOI: 10.1101/2023.08.24.23294587]
Abstract
Background Artificial intelligence (AI)-powered digital therapies that detect meth cravings and are delivered on consumer devices have the potential to reduce health care disparities by providing remote and accessible care to Native Hawaiian, Filipino, and Pacific Islander (NHFPI) communities with limited care options. However, NHFPI communities are largely understudied with respect to digital therapeutics and AI health sensing despite using technology at the same rates as other racial groups. Objective We seek to fulfill two research aims: (1) understand the feasibility of continuous remote digital monitoring and ecological momentary assessments (EMAs) in NHFPI communities in Hawaii by curating a novel dataset of longitudinal Fitbit biosignals with corresponding craving and substance use labels; and (2) develop personalized AI models that predict meth craving events in real time using wearable sensor data. Methods We will develop personalized AI/ML (artificial intelligence/machine learning) models for meth use and craving prediction in 40 NHFPI individuals by curating a novel dataset of real-time Fitbit biosensor readings and corresponding participant annotations (i.e., raw self-reported substance use data) of their meth use and cravings. In the process of collecting this dataset, we will glean insights about cultural and other human factors that can challenge the proper acquisition of precise annotations. With the resulting dataset, we will employ self-supervised learning (SSL) AI approaches, a new family of ML methods that allow a neural network to be trained without labels by being optimized to make predictions about the data itself. The inputs to the proposed AI models are Fitbit biosensor readings and the outputs are predictions of meth use or craving. This paradigm is gaining increased attention in AI for healthcare. Conclusions We expect to develop models that significantly outperform traditional supervised methods by fine-tuning to an individual subject's data. Such methods will enable AI solutions that work with the limited data available from NHFPI populations and that are inherently unbiased due to their personalized nature. Such models can support future AI-powered digital therapeutics for substance abuse.
7
Washington P, Wall DP. A Review of and Roadmap for Data Science and Machine Learning for the Neuropsychiatric Phenotype of Autism. Annu Rev Biomed Data Sci 2023; 6:211-228. [PMID: 37137169] [PMCID: PMC11093217] [DOI: 10.1146/annurev-biodatasci-020722-125454]
Abstract
Autism spectrum disorder (autism) is a neurodevelopmental delay that affects at least 1 in 44 children. Like many neurological disorder phenotypes, the diagnostic features are observable, can be tracked over time, and can be managed or even eliminated through proper therapy and treatments. However, there are major bottlenecks in the diagnostic, therapeutic, and longitudinal tracking pipelines for autism and related neurodevelopmental delays, creating an opportunity for novel data science solutions to augment and transform existing workflows and provide increased access to services for affected families. Several efforts previously conducted by a multitude of research labs have spawned great progress toward improved digital diagnostics and digital therapies for children with autism. We review the literature on digital health methods for autism behavior quantification and beneficial therapies using data science. We describe both case-control studies and classification systems for digital phenotyping. We then discuss digital diagnostics and therapeutics that integrate machine learning models of autism-related behaviors, including the factors that must be addressed for translational use. Finally, we describe ongoing challenges and potential opportunities for the field of autism data science. Given the heterogeneous nature of autism and the complexities of the relevant behaviors, this review contains insights that are relevant to neurological behavior analysis and digital psychiatry more broadly.
Affiliation(s)
- Peter Washington: Department of Information and Computer Sciences, University of Hawai'i at Mānoa, Honolulu, Hawai'i, USA
- Dennis P Wall: Departments of Pediatrics (Systems Medicine), Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California, USA
8
Banerjee A, Mutlu OC, Kline A, Surabhi S, Washington P, Wall DP. Training and Profiling a Pediatric Facial Expression Classifier for Children on Mobile Devices: Machine Learning Study. JMIR Form Res 2023; 7:e39917. [PMID: 35962462] [PMCID: PMC10131663] [DOI: 10.2196/39917]
Abstract
BACKGROUND Implementing automated facial expression recognition on mobile devices could provide an accessible diagnostic and therapeutic tool for those who struggle to recognize facial expressions, including children with developmental behavioral conditions such as autism. Despite recent advances in facial expression classifiers for children, existing models are too computationally expensive for smartphone use. OBJECTIVE We explored several state-of-the-art facial expression classifiers designed for mobile devices, used posttraining optimization techniques for both classification performance and efficiency on a Motorola Moto G6 phone, evaluated the importance of training our classifiers on children versus adults, and evaluated the models' performance against different ethnic groups. METHODS We collected images from 12 public data sets and used video frames crowdsourced from the GuessWhat app to train our classifiers. All images were annotated for 7 expressions: neutral, fear, happiness, sadness, surprise, anger, and disgust. We tested 3 copies of each of 5 different convolutional neural network architectures: MobileNetV3-Small 1.0x, MobileNetV2 1.0x, EfficientNetB0, MobileNetV3-Large 1.0x, and NASNetMobile. We trained the first copy on images of children, the second copy on images of adults, and the third copy on all data sets. We evaluated each model against the entire Child Affective Facial Expression (CAFE) set and by ethnicity. We performed weight pruning, weight clustering, and quantization-aware training when possible and profiled each model's performance on the Moto G6. RESULTS Our best model, a MobileNetV3-Large network pretrained on ImageNet, achieved 65.78% accuracy and a 65.31% F1-score on the CAFE and a 90-millisecond inference latency on a Moto G6 phone when trained on all data. This accuracy is only 1.12% lower than the current state of the art for CAFE, a model with 13.91x more parameters that was unable to run on the Moto G6 due to its size, even when fully optimized. When trained solely on children, this model achieved 60.57% accuracy and a 60.29% F1-score. When trained only on adults, the model achieved 53.36% accuracy and a 53.10% F1-score. Although the MobileNetV3-Large trained on all data sets achieved nearly a 60% F1-score across all ethnicities, the data sets for South Asian and African American children yielded lower accuracy (by as much as 11.56%) and F1-scores (by as much as 11.25%) than those for other groups. CONCLUSIONS With specialized design and optimization techniques, facial expression classifiers can become lightweight enough to run on mobile devices and achieve state-of-the-art performance. There is potentially a "data shift" phenomenon between facial expressions of children compared with adults; our classifiers performed much better when trained on children. Certain underrepresented ethnic groups (e.g., South Asian and African American) also performed significantly worse than groups such as European Caucasian despite similar data quality. Our models can be integrated into mobile health therapies to help diagnose autism spectrum disorder and provide targeted therapeutic treatment to children.
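Of the optimization techniques mentioned, weight pruning is the simplest to illustrate: zero out the smallest-magnitude fraction of weights so the network compresses well. This is a pure-Python stand-in with made-up weights, not the framework API the authors used.

```python
# Magnitude-based weight pruning: remove the `sparsity` fraction of weights
# with the smallest absolute value by setting them to zero.

def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest |w|."""
    flat = sorted(abs(w) for w in weights)
    cutoff_index = int(len(flat) * sparsity)
    threshold = flat[cutoff_index - 1] if cutoff_index > 0 else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.01, -0.5, 0.03, 0.9, -0.02, 0.4, -0.05, 0.7]
pruned = prune_by_magnitude(weights, sparsity=0.5)
```

In practice the zeroed weights let standard compression (and sparse kernels, where supported) shrink the model for on-device deployment.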
Affiliation(s)
- Agnik Banerjee: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Onur Cezmi Mutlu: Department of Electrical Engineering, Stanford University, Stanford, CA, United States
- Aaron Kline: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Saimourya Surabhi: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States
- Peter Washington: Department of Information and Computer Sciences, University of Hawai`i at Mānoa, Honolulu, HI, United States
- Dennis Paul Wall: Department of Pediatrics (Systems Medicine), Stanford University, Stanford, CA, United States; Department of Biomedical Data Science, Stanford University, Stanford, CA, United States; Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
9
Washington P. Digitally Diagnosing Multiple Developmental Delays using Crowdsourcing fused with Machine Learning: A Research Protocol. medRxiv 2023:2023.03.05.23286817. [PMID: 36945467] [PMCID: PMC10029023] [DOI: 10.1101/2023.03.05.23286817]
Abstract
Background Roughly 17% of minors in the United States aged 3 through 17 years have a diagnosis of one or more developmental or psychiatric conditions, with the true prevalence likely being higher due to underdiagnosis in rural areas and for minority populations. Unfortunately, timely diagnostic services are inaccessible to a large portion of the United States and global population due to cost, distance, and clinician availability. Digital phenotyping tools have the potential to shorten the time-to-diagnosis and to bring diagnostic services to more people by enabling accessible evaluations. While automated machine learning (ML) approaches for the detection of pediatric psychiatric conditions have garnered increased research attention in recent years, existing approaches use a limited set of social features for the prediction task and focus on a single binary prediction. Objective I propose the development of a gamified web system for data collection followed by a fusion of novel crowdsourcing algorithms with machine learning behavioral feature extraction approaches to simultaneously predict diagnoses of Autism Spectrum Disorder (ASD) and Attention-Deficit/Hyperactivity Disorder (ADHD) in a precise and specific manner. Methods The proposed pipeline will consist of: (1) gamified web applications to curate videos of social interactions adaptively based on the needs of the diagnostic system, (2) behavioral feature extraction techniques consisting of automated ML methods and novel crowdsourcing algorithms, and (3) the development of ML models that classify several conditions simultaneously and that adaptively request additional information based on uncertainties about the data. Conclusions The prospect of high reward stems from the possibility of creating the first AI-powered tool that can identify complex social behaviors well enough to distinguish conditions with nuanced differentiators such as ASD and ADHD.
10
Washington PY, Puniwai N, Kamaka M, Gürsoy G, Tatonetti N, Brenner SE, Wall DP. Session Introduction: Towards Ethical Biomedical Informatics: Learning from Olelo Noeau, Hawaiian Proverbs. Pac Symp Biocomput 2023; 28:461-471. [PMID: 36541000] [PMCID: PMC11095408]
Abstract
Innovations in human-centered biomedical informatics are often developed with the eventual goal of real-world translation. While biomedical research questions are usually answered in terms of how a method performs in a particular context, we argue that it is equally important to consider and formally evaluate the ethical implications of informatics solutions. Several new research paradigms have arisen as a result of the consideration of ethical issues, including but not limited to privacy-preserving computation and fair machine learning. In the spirit of the Pacific Symposium on Biocomputing, we discuss broad and fundamental principles of ethical biomedical informatics in terms of Olelo Noeau, or Hawaiian proverbs and poetical sayings that capture Hawaiian values. While we emphasize issues related to privacy and fairness in particular, there are a multitude of facets to ethical biomedical informatics that can benefit from a critical analysis grounded in ethics.
Affiliation(s)
- Peter Y Washington: Department of Information & Computer Sciences, University of Hawaii at Manoa, Honolulu, HI 96822, USA
11
Deveau N, Washington P, Leblanc E, Husic A, Dunlap K, Penev Y, Kline A, Mutlu OC, Wall DP. Machine learning models using mobile game play accurately classify children with autism. Intelligence-Based Medicine 2022; 6:100057. [PMID: 36035501] [PMCID: PMC9398788] [DOI: 10.1016/j.ibmed.2022.100057]
Abstract
Digitally-delivered healthcare is well suited to address current inequities in the delivery of care caused by barriers of access to healthcare facilities. As the COVID-19 pandemic phases out, we have a unique opportunity to capitalize on the current familiarity with telemedicine approaches and continue to advocate for mainstream adoption of remote care delivery. In this paper, we specifically focus on the ability of GuessWhat?, a smartphone-based charades-style gamified therapeutic intervention for autism spectrum disorder (ASD), to generate a signal that distinguishes children with ASD from neurotypical (NT) children. We demonstrate the feasibility of using "in-the-wild", naturalistic gameplay data to distinguish between children with ASD and NT children by training a random forest classifier to discern the two classes (AU-ROC = 0.745, recall = 0.769). This performance demonstrates the potential for GuessWhat? to facilitate screening for ASD in historically difficult-to-reach communities. To further examine this potential, future work should expand the size of the training sample and interrogate differences in predictive ability across demographic groups.
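The reported AU-ROC of 0.745 can be read as the probability that a randomly chosen child with ASD receives a higher classifier score than a randomly chosen NT child. A minimal sketch of that computation, with invented labels and scores:

```python
# AU-ROC via its probabilistic interpretation: the fraction of
# positive/negative pairs in which the positive outscores the negative
# (ties count as half). Labels and scores below are illustrative only.

def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.1]
auc = auroc(labels, scores)
```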
Affiliation(s)
- Nicholas Deveau: Biomedical Data Science, Stanford University, Stanford, 94305, California, United States
- Peter Washington: Bioengineering, Stanford University
- Emilie Leblanc, Arman Husic, Kaitlyn Dunlap, Yordan Penev, Aaron Kline: Pediatrics, Stanford University
- Onur Cezmi Mutlu: Electrical Engineering, Stanford University
- Dennis P Wall: Biomedical Data Science and Pediatrics, Stanford University

12
Lakkapragada A, Kline A, Mutlu OC, Paskov K, Chrisman B, Stockham N, Washington P, Wall DP. The Classification of Abnormal Hand Movement to Aid in Autism Detection: Machine Learning Study. JMIR BIOMEDICAL ENGINEERING 2022. [DOI: 10.2196/33771] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open
Abstract
Background
A formal autism diagnosis can be an inefficient and lengthy process. Families may wait several months or longer before receiving a diagnosis for their child despite evidence that earlier intervention leads to better treatment outcomes. Digital technologies that detect the presence of behaviors related to autism can scale access to pediatric diagnoses. A strong indicator of the presence of autism is self-stimulatory behaviors such as hand flapping.
Objective
This study aims to demonstrate the feasibility of deep learning technologies for the detection of hand flapping from unstructured home videos as a first step toward validation of whether statistical models coupled with digital technologies can be leveraged to aid in the automatic behavioral analysis of autism. To support the widespread sharing of such home videos, we explored privacy-preserving modifications to the input space via conversion of each video to hand landmark coordinates and measured the performance of corresponding time series classifiers.
Methods
We used the Self-Stimulatory Behavior Dataset (SSBD) that contains 75 videos of hand flapping, head banging, and spinning exhibited by children. From this data set, we extracted 100 hand flapping videos and 100 control videos, each between 2 to 5 seconds in duration. We evaluated five separate feature representations: four privacy-preserved subsets of hand landmarks detected by MediaPipe and one feature representation obtained from the output of the penultimate layer of a MobileNetV2 model fine-tuned on the SSBD. We fed these feature vectors into a long short-term memory network that predicted the presence of hand flapping in each video clip.
Results
The highest-performing model used MobileNetV2 to extract features and achieved a test F1 score of 84 (SD 3.7; precision 89.6, SD 4.3 and recall 80.4, SD 6) using 5-fold cross-validation for 100 random seeds on the SSBD data (500 total distinct folds). Of the models we trained on privacy-preserved data, the model trained with all hand landmarks reached an F1 score of 66.6 (SD 3.35). Another such model trained with a select 6 landmarks reached an F1 score of 68.3 (SD 3.6). A privacy-preserved model trained using a single landmark at the base of the hands and a model trained with the average of the locations of all the hand landmarks reached an F1 score of 64.9 (SD 6.5) and 64.2 (SD 6.8), respectively.
Conclusions
We created five lightweight neural networks that can detect hand flapping from unstructured videos. Training a long short-term memory network with convolutional feature vectors outperformed training with feature vectors of hand coordinates and used almost 900,000 fewer model parameters. This study provides the first step toward developing precise deep learning methods for activity detection of autism-related behaviors.
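A minimal sketch of the privacy-preserved variant described above: a long short-term memory network consuming per-frame hand-landmark coordinates (21 landmarks per frame, as MediaPipe Hands emits). The hidden size, clip length, and head are illustrative choices, not the authors' exact configuration:

```python
# Sketch: LSTM over hand-landmark time series for hand-flapping detection.
import torch
import torch.nn as nn

class HandFlapLSTM(nn.Module):
    def __init__(self, n_landmarks=21, hidden=64):
        super().__init__()
        # Each frame is the flattened (x, y) coordinates of all landmarks.
        self.lstm = nn.LSTM(input_size=n_landmarks * 2,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # binary: flapping vs. control

    def forward(self, x):            # x: (batch, frames, landmarks * 2)
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the clip
        return self.head(h[-1])      # one logit per clip

model = HandFlapLSTM()
clip = torch.randn(4, 30, 42)        # 4 clips, 30 frames, 21 * 2 coordinates
logits = model(clip)
print(logits.shape)
```

The MobileNetV2 variant in the study swaps the landmark vectors for a 1280-dimensional convolutional feature vector per frame, which is why it carries far more parameters.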
13
Chi NA, Washington P, Kline A, Husic A, Hou C, He C, Dunlap K, Wall DP. Classifying Autism From Crowdsourced Semistructured Speech Recordings: Machine Learning Model Comparison Study. JMIR Pediatr Parent 2022; 5:e35406. [PMID: 35436234 PMCID: PMC9052034 DOI: 10.2196/35406] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 01/18/2022] [Accepted: 01/25/2022] [Indexed: 01/27/2023] Open
Abstract
BACKGROUND Autism spectrum disorder (ASD) is a neurodevelopmental disorder that results in altered behavior, social development, and communication patterns. In recent years, autism prevalence has tripled, with 1 in 44 children now affected. Given that traditional diagnosis is a lengthy, labor-intensive process that requires the work of trained physicians, significant attention has been given to developing systems that automatically detect autism. We work toward this goal by analyzing audio data, as prosody abnormalities are a signal of autism, with affected children displaying speech idiosyncrasies such as echolalia, monotonous intonation, atypical pitch, and irregular linguistic stress patterns. OBJECTIVE We aimed to test the ability of machine learning approaches to aid in the detection of autism in self-recorded speech audio captured from children with ASD and neurotypical (NT) children in their home environments. METHODS We considered three methods to detect autism in child speech: (1) random forests trained on extracted audio features (including Mel-frequency cepstral coefficients); (2) convolutional neural networks trained on spectrograms; and (3) fine-tuned wav2vec 2.0, a state-of-the-art transformer-based speech recognition model. We trained our classifiers on our novel data set of cellphone-recorded child speech audio curated from the Guess What? mobile game, an app designed to crowdsource videos of children with ASD and NT children in a natural home environment. RESULTS The random forest classifier achieved 70% accuracy, the fine-tuned wav2vec 2.0 model achieved 77% accuracy, and the convolutional neural network achieved 79% accuracy when classifying children's audio as either ASD or NT. We used 5-fold cross-validation to evaluate model performance. CONCLUSIONS Our models were able to predict autism status when trained on a varied selection of home audio clips with inconsistent recording qualities, which may be more representative of real-world conditions.
The results demonstrate that machine learning methods offer promise in detecting autism automatically from speech without specialized equipment.
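Method (1) above can be sketched as follows. A real pipeline would typically extract Mel-frequency cepstral coefficients with an audio library such as librosa; here a log-spectrogram summary computed on synthetic tones stands in so the example is self-contained:

```python
# Sketch: fixed-length audio features fed to a random forest, standing in
# for the MFCC + random forest method described in the abstract.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
sr = 16000  # sample rate in Hz

def features(audio):
    # Log-power spectrogram, averaged over time -> fixed-length vector.
    _, _, sxx = spectrogram(audio, fs=sr, nperseg=256)
    return np.log(sxx + 1e-10).mean(axis=1)

# Synthetic two-class data: each class has a different dominant pitch.
X, y = [], []
t = np.arange(sr) / sr
for label, freq in [(0, 220.0), (1, 440.0)]:
    for _ in range(40):
        tone = np.sin(2 * np.pi * freq * t) + 0.5 * rng.normal(size=sr)
        X.append(features(tone))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))
```

With real child-speech clips, the held-out evaluation would use cross-validation as the study does, rather than the training-set score printed here.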
Affiliation(s)
- Nathan A Chi, Aaron Kline, Arman Husic, Kaitlyn Dunlap: Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, United States
- Peter Washington: Department of Bioengineering, Stanford University, Stanford, CA, United States
- Cathy Hou: Department of Computer Science, Stanford University
- Chloe He: Department of Biomedical Data Science, Stanford University
- Dennis P Wall: Division of Systems Medicine, Department of Pediatrics; Department of Biomedical Data Science; Department of Psychiatry and Behavioral Sciences, Stanford University

14
Washington P, Kalantarian H, Kent J, Husic A, Kline A, Leblanc E, Hou C, Mutlu OC, Dunlap K, Penev Y, Varma M, Stockham NT, Chrisman B, Paskov K, Sun MW, Jung JY, Voss C, Haber N, Wall DP. Improved Digital Therapy for Developmental Pediatrics Using Domain-Specific Artificial Intelligence: Machine Learning Study. JMIR Pediatr Parent 2022; 5:e26760. [PMID: 35394438 PMCID: PMC9034430 DOI: 10.2196/26760] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 03/24/2021] [Accepted: 01/03/2022] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Automated emotion classification could aid those who struggle to recognize emotions, including children with developmental behavioral conditions such as autism. However, most computer vision emotion recognition models are trained on adult emotion and therefore underperform when applied to child faces. OBJECTIVE We designed a strategy to gamify the collection and labeling of child emotion-enriched images to boost the performance of automatic child emotion recognition models to a level closer to what will be needed for digital health care approaches. METHODS We leveraged our prototype therapeutic smartphone game, GuessWhat, which was designed in large part for children with developmental and behavioral conditions, to gamify the secure collection of video data of children expressing a variety of emotions prompted by the game. Independently, we created a secure web interface to gamify the human labeling effort, called HollywoodSquares, tailored for use by any qualified labeler. We gathered and labeled 2155 videos, 39,968 emotion frames, and 106,001 labels on all images. With this drastically expanded pediatric emotion-centric database (>30 times larger than existing public pediatric emotion data sets), we trained a convolutional neural network (CNN) computer vision classifier of happy, sad, surprised, fearful, angry, disgust, and neutral expressions evoked by children. RESULTS The classifier achieved a 66.9% balanced accuracy and 67.4% F1-score on the entirety of the Child Affective Facial Expression (CAFE) data set, as well as a 79.1% balanced accuracy and 78% F1-score on CAFE Subset A, a subset containing at least 60% human agreement on emotion labels. This performance is at least 10% higher than all previously developed classifiers evaluated against CAFE, the best of which reached a 56% balanced accuracy even when combining "anger" and "disgust" into a single class.
CONCLUSIONS This work validates that mobile games designed for pediatric therapies can generate high volumes of domain-relevant data sets to train state-of-the-art classifiers to perform tasks helpful to precision health efforts.
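The classifier described above can be sketched at the architectural level: a convolutional network producing one logit per emotion class. The study's actual model is a much larger pretrained CNN; the layer sizes below are placeholders for illustration:

```python
# Sketch: a small CNN with seven outputs (happy, sad, surprised, fearful,
# angry, disgust, neutral), standing in for the study's larger classifier.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 3, H, W)
        h = self.features(x).mean(dim=(2, 3))  # global average pooling
        return self.head(h)                    # one logit per emotion

model = EmotionCNN()
logits = model(torch.randn(2, 3, 64, 64))      # two dummy RGB face crops
print(logits.shape)
```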
Affiliation(s)
- Peter Washington, Haik Kalantarian, John Kent, Arman Husic, Aaron Kline, Emilie Leblanc, Cathy Hou, Onur Cezmi Mutlu, Kaitlyn Dunlap, Yordan Penev, Maya Varma, Nate Tyler Stockham, Brianna Chrisman, Kelley Paskov, Min Woo Sun, Jae-Yoon Jung, Catalin Voss, Nick Haber, Dennis Paul Wall: Departments of Pediatrics (Systems Medicine) and Biomedical Data Science, Stanford University, Stanford, CA, United States

15
Nahavandi D, Alizadehsani R, Khosravi A, Acharya UR. Application of artificial intelligence in wearable devices: Opportunities and challenges. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 213:106541. [PMID: 34837860 DOI: 10.1016/j.cmpb.2021.106541] [Citation(s) in RCA: 46] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 11/07/2021] [Accepted: 11/15/2021] [Indexed: 05/13/2023]
Abstract
BACKGROUND AND OBJECTIVES Wearable technologies have added completely new and fast-emerging tools to the popular field of personal gadgets. Aside from being fashionable and equipped with advanced hardware technologies such as communication modules and networking, wearable devices have the potential to fuel artificial intelligence (AI) methods with a wide range of valuable data. METHODS Various AI techniques such as supervised, unsupervised, semi-supervised, and reinforcement learning (RL) have already been used to carry out various tasks. This paper reviews the recent applications of wearables that have leveraged AI to achieve their objectives. RESULTS Particular example applications of supervised and unsupervised learning for medical diagnosis are reviewed. Moreover, examples combining the internet of things, wearables, and RL are reviewed. Application examples of wearables are also presented for specific domains such as medicine, industry, and sport. Medical applications include fitness, movement disorders, and mental health, among others. Industrial applications include employee performance improvement with the aid of wearables. Sport applications focus on providing a better user experience during workout sessions or professional gameplay. CONCLUSION The most important challenges regarding the design and development of wearable devices and the computational burden of using AI methods are presented. Finally, future challenges and opportunities for wearable devices are discussed.
Affiliation(s)
- Darius Nahavandi, Roohallah Alizadehsani, Abbas Khosravi: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Waurn Ponds, VIC 3216, Australia
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan

16
Crowd annotations can approximate clinical autism impressions from short home videos with privacy protections. INTELLIGENCE-BASED MEDICINE 2022; 6. [PMID: 35634270 PMCID: PMC9139408 DOI: 10.1016/j.ibmed.2022.100056] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Artificial Intelligence (A.I.) solutions are increasingly considered for telemedicine. For these methods to serve children and their families in home settings, it is crucial to ensure the privacy of the child and parent or caregiver. To address this challenge, we explore the potential for global image transformations to provide privacy while preserving the quality of behavioral annotations. Crowd workers have previously been shown to reliably annotate behavioral features in unstructured home videos, allowing machine learning classifiers to detect autism using the annotations as input. We evaluate this method with videos altered via pixelation, dense optical flow, and Gaussian blurring. On a balanced test set of 30 videos of children with autism and 30 neurotypical controls, we find that the visual privacy alterations do not drastically alter any individual behavioral annotation at the item level. The AUROC on the evaluation set was 90.0% ±7.5% for unaltered videos, 85.0% ±9.0% for pixelation, 85.0% ±9.0% for optical flow, and 83.3% ±9.3% for blurring, demonstrating that an aggregation of small changes across behavioral questions can collectively result in increased misdiagnosis rates. We also compare crowd answers against clinicians who provided the same annotations for the same videos as crowd workers, and we find that clinicians have higher sensitivity in their recognition of autism-related symptoms. We also find that there is a linear correlation (r = 0.75, p < 0.0001) between the mean Clinical Global Impression (CGI) score provided by professional clinicians and the corresponding score emitted by a previously validated autism classifier with crowd inputs, indicating that the classifier’s output probability is a reliable estimate of the clinical impression of autism. 
A significant correlation is maintained under the privacy alterations, indicating that crowd annotations can approximate clinician-provided autism impressions from home videos in a privacy-preserving manner.
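Two of the three global image transformations evaluated above, pixelation and Gaussian blurring, can be sketched directly on a frame held as a NumPy array (dense optical flow needs consecutive frames and is omitted here). The block size and sigma are illustrative choices, not the study's parameters:

```python
# Sketch: privacy-preserving frame transformations via block-averaging
# pixelation and Gaussian blurring, on a grayscale frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def pixelate(frame, block=8):
    h, w = frame.shape[:2]
    h2, w2 = h - h % block, w - w % block        # crop to a block multiple
    f = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    coarse = f.mean(axis=(1, 3))                 # average within each block
    return np.repeat(np.repeat(coarse, block, 0), block, 1)

def blur(frame, sigma=3.0):
    return gaussian_filter(frame.astype(float), sigma=sigma)

frame = np.random.default_rng(0).random((64, 64))
print(pixelate(frame).shape, blur(frame).shape)
```

Crowd workers would then annotate behaviors in the transformed videos, with the annotations feeding the downstream autism classifier.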
17
Penev Y, Dunlap K, Husic A, Hou C, Washington P, Leblanc E, Kline A, Kent J, Ng-Thow-Hing A, Liu B, Harjadi C, Tsou M, Desai M, Wall DP. A Mobile Game Platform for Improving Social Communication in Children with Autism: A Feasibility Study. Appl Clin Inform 2021; 12:1030-1040. [PMID: 34788890 PMCID: PMC8598393 DOI: 10.1055/s-0041-1736626] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023] Open
Abstract
Background
Many children with autism cannot receive timely in-person diagnosis and therapy, especially in situations where access is limited by geography, socioeconomics, or global health concerns such as the current COVID-19 pandemic. Mobile solutions that work outside of traditional clinical environments can safeguard against gaps in access to quality care.
Objective
The aim of the study is to examine the engagement level and therapeutic feasibility of a mobile game platform for children with autism.
Methods
We designed a mobile application, GuessWhat, which, in its current form, delivers game-based therapy to children aged 3 to 12 in home settings through a smartphone. The phone, held by a caregiver on their forehead, displays one of a range of appropriate and therapeutically relevant prompts (e.g., a surprised face) that the child must recognize and mimic sufficiently to allow the caregiver to guess what is being imitated and proceed to the next prompt. Each game runs for 90 seconds to create a robust social exchange between the child and the caregiver.
Results
We examined the therapeutic feasibility of GuessWhat in 72 children (75% male, average age 8 years 2 months) with autism who were asked to play the game for three 90-second sessions per day, 3 days per week, for a total of 4 weeks. The group showed significant improvements in Social Responsiveness Scale-2 (SRS-2) total (3.97, p<0.001) and Vineland Adaptive Behavior Scales-II (VABS-II) socialization standard (5.27, p=0.002) scores.
Conclusion
The results support that the GuessWhat mobile game is a viable approach for efficacious treatment of autism and further support the possibility that the game can be used in natural settings to increase access to treatment when barriers to care exist.
Affiliation(s)
- Yordan Penev, Kaitlyn Dunlap, Arman Husic, Cathy Hou, Emilie Leblanc, Aaron Kline, John Kent, Anthony Ng-Thow-Hing, Bennett Liu, Christopher Harjadi, Meagan Tsou, Dennis P Wall: Departments of Pediatrics (Systems Medicine), Psychiatry and Behavioral Sciences, and Biomedical Data Science, Stanford University, Stanford, California, United States
- Peter Washington: Department of Bioengineering, Stanford University
- Manisha Desai: Department of Biomedical Data Science, Stanford University

18
Li W, Zhou X, Yang Q. Designing medical artificial intelligence for in- and out-groups. COMPUTERS IN HUMAN BEHAVIOR 2021. [DOI: 10.1016/j.chb.2021.106929] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
19
Shih C, Pudipeddi R, Uthayakumar A, Washington P. A Local Community-Based Social Network for Mental Health and Well-being (Quokka): Exploratory Feasibility Study. JMIRX MED 2021; 2:e24972. [PMID: 37725541 PMCID: PMC10414255 DOI: 10.2196/24972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 03/30/2021] [Accepted: 07/25/2021] [Indexed: 09/21/2023]
Abstract
BACKGROUND Developing healthy habits and maintaining prolonged behavior changes are often difficult tasks. Mental health is one of the largest health concerns globally, including for college students. OBJECTIVE Our aim was to conduct an exploratory feasibility study of local community-based interventions by developing Quokka, a web platform promoting well-being activity on university campuses. We evaluated the intervention's potential for promotion of local, social, and unfamiliar activities pertaining to healthy habits. METHODS To evaluate this framework's potential for increased participation in healthy habits, we conducted a 6-to-8-week feasibility study via a "challenge" across 4 university campuses with a total of 277 participants. We chose a different well-being theme each week, and we conducted weekly surveys to (1) gauge factors that motivated users to complete or not complete the weekly challenge, (2) identify participation trends, and (3) evaluate the feasibility of the intervention to promote local, social, and novel well-being activities. We tested the hypotheses that Quokka participants would self-report participation in more local activities than remote activities for all challenges (Hypothesis H1), more social activities than individual activities (Hypothesis H2), and new rather than familiar activities (Hypothesis H3). RESULTS After Bonferroni correction using a Clopper-Pearson binomial proportion confidence interval for one test, we found that there was a strong preference for local activities for all challenge themes. Similarly, users significantly preferred group activities over individual activities (P<.001 for most challenge themes). For most challenge themes, there were not enough data to significantly distinguish a preference toward familiar or new activities (P<.001 for a subset of challenge themes in some schools). CONCLUSIONS We find that local community-based well-being interventions such as Quokka can facilitate positive behaviors. 
We discuss these findings and their implications for the research and design of location-based digital communities for well-being promotion.
Affiliation(s)
- Ruhi Pudipeddi: Department of Computer Science, University of California, Berkeley, Berkeley, CA, United States
- Arany Uthayakumar: Department of Cognitive Science, University of California, Berkeley
- Peter Washington: Department of Bioengineering, Stanford University, Stanford, CA, United States

20
Washington P, Kalantarian H, Kent J, Husic A, Kline A, Leblanc E, Hou C, Mutlu C, Dunlap K, Penev Y, Stockham N, Chrisman B, Paskov K, Jung JY, Voss C, Haber N, Wall DP. Training Affective Computer Vision Models by Crowdsourcing Soft-Target Labels. Cognit Comput 2021; 13:1363-1373. [PMID: 35669554 PMCID: PMC9165031 DOI: 10.1007/s12559-021-09936-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 09/12/2021] [Indexed: 01/12/2023]
Abstract
Background/Introduction Emotion detection classifiers traditionally predict discrete emotions. However, emotion expressions are often subjective, thus requiring a method to handle compound and ambiguous labels. We explore the feasibility of using crowdsourcing to acquire reliable soft-target labels and evaluate an emotion detection classifier trained with these labels. We hypothesize that training with labels that are representative of the diversity of human interpretation of an image will result in predictions that are similarly representative on a disjoint test set. We also hypothesize that crowdsourcing can generate distributions which mirror those generated in a lab setting. Methods We center our study on the Child Affective Facial Expression (CAFE) dataset, a gold standard collection of images depicting pediatric facial expressions along with 100 human labels per image. To test the feasibility of crowdsourcing to generate these labels, we used Microworkers to acquire labels for 207 CAFE images. We evaluate both unfiltered workers as well as workers selected through a short crowd filtration process. We then train two versions of a ResNet-152 neural network on soft-target CAFE labels using the original 100 annotations provided with the dataset: (1) a classifier trained with traditional one-hot encoded labels, and (2) a classifier trained with vector labels representing the distribution of CAFE annotator responses. We compare the resulting softmax output distributions of the two classifiers with a 2-sample independent t-test of L1 distances between the classifier's output probability distribution and the distribution of human labels. Results While agreement with CAFE is weak for unfiltered crowd workers, the filtered crowd agree with the CAFE labels 100% of the time for happy, neutral, sad and "fear + surprise", and 88.8% for "anger + disgust". While the F1-score for a one-hot encoded classifier is much higher (94.33% vs. 78.68%) with respect to the ground truth CAFE labels, the output probability vector of the crowd-trained classifier more closely resembles the distribution of human labels (t=3.2827, p=0.0014). Conclusions For many applications of affective computing, reporting an emotion probability distribution that accounts for the subjectivity of human interpretation can be more useful than an absolute label. Crowdsourcing, including a sufficient filtering mechanism for selecting reliable crowd workers, is a feasible solution for acquiring soft-target labels.
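The soft-target idea described above can be sketched as a loss computation: the distribution of annotator votes replaces the one-hot label inside the cross-entropy. The logits below are dummies standing in for the output of a model such as the ResNet-152 used in the study:

```python
# Sketch: soft-label vs. one-hot cross-entropy for emotion classification.
import torch
import torch.nn.functional as F

n_classes = 7  # happy, sad, surprised, fear, anger, disgust, neutral
logits = torch.randn(4, n_classes)  # dummy model outputs for 4 images

# Hypothetical per-image distribution of 100 annotator votes.
votes = torch.tensor([[60, 10, 10, 5, 5, 5, 5.]] * 4)
soft_targets = votes / votes.sum(dim=1, keepdim=True)  # rows sum to 1

# Soft training keeps the whole distribution; one-hot training keeps
# only the majority vote.
soft_loss = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
hard_loss = F.cross_entropy(logits, soft_targets.argmax(dim=1))
```

Minimizing the soft loss pushes the model's softmax output toward the annotator distribution, which is what the paper's L1-distance comparison measures.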
Affiliation(s)
- Haik Kalantarian, Department of Pediatrics (Systems Medicine), Stanford University
- Jack Kent, Department of Pediatrics (Systems Medicine), Stanford University
- Arman Husic, Department of Pediatrics (Systems Medicine), Stanford University
- Aaron Kline, Department of Pediatrics (Systems Medicine), Stanford University
- Emilie Leblanc, Department of Pediatrics (Systems Medicine), Stanford University
- Cathy Hou, Department of Computer Science, Stanford University
- Cezmi Mutlu, Department of Electrical Engineering, Stanford University
- Yordan Penev, Department of Pediatrics (Systems Medicine), Stanford University
- Kelley Paskov, Department of Biomedical Data Science, Stanford University
- Jae-Yoon Jung, Department of Pediatrics (Systems Medicine), Stanford University
- Catalin Voss, Department of Computer Science, Stanford University
- Nick Haber, Graduate School of Education, Stanford University
- Dennis P. Wall, Departments of Pediatrics (Systems Medicine), Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University

21
Cantin-Garside KD, Nussbaum MA, White SW, Kim S, Kim CD, Fortes DMG, Valdez RS. Understanding the experiences of self-injurious behavior in autism spectrum disorder: Implications for monitoring technology design. J Am Med Inform Assoc 2021; 28:303-310. [PMID: 32974678 DOI: 10.1093/jamia/ocaa169] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Revised: 05/21/2020] [Accepted: 07/14/2020] [Indexed: 12/27/2022] Open
Abstract
OBJECTIVE Monitoring technology may assist in managing self-injurious behavior (SIB), a pervasive concern in autism spectrum disorder (ASD). Affiliated stakeholder perspectives should be considered to design effective and accepted SIB monitoring methods. We examined caregiver experiences to generate design guidance for SIB monitoring technology. MATERIALS AND METHODS Twenty-three educators and 16 parents of individuals with ASD and SIB completed interviews or focus groups to discuss needs related to monitoring SIB and associated technology use. RESULTS Qualitative content analysis of participant responses revealed 7 main themes associated with SIB and technology: triggers, emotional responses, SIB characteristics, management approaches, caregiver impact, child/student impact, and sensory/technology preferences. DISCUSSION The derived themes indicated areas of emphasis for design at the intersection of monitoring and SIB. Systems design at this intersection should consider the range of manifestations of and management approaches for SIB. It should also attend to interactions among children with SIB, their caregivers, and the technology. Design should prioritize the transferability of physical technology and behavioral data as well as the safety, durability, and sensory implications of technology. CONCLUSIONS The collected stakeholder perspectives provide preliminary groundwork for an SIB monitoring system responsive to needs as articulated by caregivers. Technology design based on this groundwork should follow an iterative process that meaningfully engages caregivers and individuals with SIB in naturalistic settings.
Affiliation(s)
- Kristine D Cantin-Garside, Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
- Maury A Nussbaum, Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
- Susan W White, Department of Psychology, The University of Alabama, Tuscaloosa, Alabama, USA
- Sunwook Kim, Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, USA
- Chung Do Kim, Department of Public Health Sciences, University of Virginia, Charlottesville, Virginia, USA
- Diogo M G Fortes, Department of Public Health Sciences, University of Virginia, Charlottesville, Virginia, USA
- Rupa S Valdez, Department of Public Health Sciences, University of Virginia, Charlottesville, Virginia, USA

22
Washington P, Leblanc E, Dunlap K, Penev Y, Varma M, Jung JY, Chrisman B, Sun MW, Stockham N, Paskov KM, Kalantarian H, Voss C, Haber N, Wall DP. Selection of trustworthy crowd workers for telemedical diagnosis of pediatric autism spectrum disorder. PACIFIC SYMPOSIUM ON BIOCOMPUTING. PACIFIC SYMPOSIUM ON BIOCOMPUTING 2021; 26:14-25. [PMID: 33691000 PMCID: PMC7958981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Crowd-powered telemedicine has the potential to revolutionize healthcare, especially during times that require remote access to care. However, sharing private health data with strangers from around the world is not compatible with data privacy standards, requiring a stringent filtration process to recruit reliable and trustworthy workers who can go through the proper training and security steps. The key challenge, then, is to identify capable, trustworthy, and reliable workers through high-fidelity evaluation tasks without exposing any sensitive patient data during the evaluation process. We contribute a set of experimentally validated metrics for assessing the trustworthiness and reliability of crowd workers tasked with providing behavioral feature tags to unstructured videos of children with autism and matched neurotypical controls. The workers are blinded to diagnosis and blinded to the goal of using the features to diagnose autism. These behavioral labels are fed as input to a previously validated binary logistic regression classifier for detecting autism cases using categorical feature vectors. While the metrics do not incorporate any ground truth labels of child diagnosis, linear regression using the 3 correlative metrics as input can predict the mean probability of the correct class of each worker with a mean average error of 7.51% for performance on the same set of videos and 10.93% for performance on a distinct balanced video set with different children. These results indicate that crowd workers can be recruited for performance based largely on behavioral metrics on a crowdsourced task, enabling an affordable way to filter crowd workforces into a trustworthy and reliable diagnostic workforce.
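The filtration idea described above, regressing diagnosis-free worker metrics onto each worker's mean probability of the correct class and reporting the mean error, can be sketched with ordinary least squares. The three metrics and worker scores below are synthetic stand-ins, not the paper's data.

```python
# Hedged sketch of the evaluation idea: predict each crowd worker's
# accuracy proxy from 3 behavioral metrics via linear regression, then
# report in-sample mean absolute error. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_workers = 40
X = rng.uniform(0, 1, size=(n_workers, 3))   # 3 correlative metrics per worker

# Hypothetical ground truth: accuracy proxy linearly related to the metrics.
true_w = np.array([0.3, 0.2, 0.4])
y = 0.1 + X @ true_w + rng.normal(0, 0.02, n_workers)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n_workers), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Mean absolute error of the fitted predictions.
mae = np.abs(A @ coef - y).mean()
```

Evaluating the same fitted coefficients on a held-out set of workers (or, as in the paper, on a distinct balanced video set) gives the out-of-sample error that matters for recruitment decisions.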
Affiliation(s)
- Peter Washington, Department of Bioengineering, Stanford University, Palo Alto, CA, 94305, USA
- Emilie Leblanc, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Kaitlyn Dunlap, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Yordan Penev, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Maya Varma, Department of Computer Science, Stanford University, Palo Alto, CA, 94305, USA
- Jae-Yoon Jung, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Brianna Chrisman, Department of Bioengineering, Stanford University, Palo Alto, CA, 94305, USA
- Min Woo Sun, Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Nathaniel Stockham, Department of Neuroscience, Stanford University, Palo Alto, CA, 94305, USA
- Kelley Marie Paskov, Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Haik Kalantarian, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Catalin Voss, Department of Computer Science, Stanford University, Palo Alto, CA, 94305, USA
- Nick Haber, School of Education, Stanford University, Palo Alto, CA, 94305, USA
- Dennis P. Wall, Department of Pediatrics (Systems Medicine) and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA

23
Baig MM, GholamHosseini H, Gutierrez J, Ullah E, Lindén M. Early Detection of Prediabetes and T2DM Using Wearable Sensors and Internet-of-Things-Based Monitoring Applications. Appl Clin Inform 2021; 12:1-9. [PMID: 33406540 PMCID: PMC7787711 DOI: 10.1055/s-0040-1719043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Accepted: 09/25/2020] [Indexed: 01/16/2023] Open
Abstract
BACKGROUND Prediabetes and type 2 diabetes mellitus (T2DM) are among the major long-term health conditions affecting global healthcare delivery. One of the few effective approaches is to actively manage diabetes via a healthy and active lifestyle. OBJECTIVES This research focused on early detection of prediabetes and T2DM using wearable technology and Internet-of-Things-based monitoring applications. METHODS We developed an artificial intelligence model based on adaptive neuro-fuzzy inference to detect prediabetes and T2DM via individualized monitoring. The key contributing factors to the proposed model include heart rate, heart rate variability, breathing rate, breathing volume, and activity data (steps, cadence, and calories). The data were collected using an advanced wearable body vest and combined with manual recordings of blood glucose, height, weight, age, and sex. The model analyzed the data alongside a clinical knowledge base. Fuzzy rules were used to establish baseline values via existing interventions, clinical guidelines, and protocols. RESULTS The proposed model was tested and validated using Kappa analysis and achieved an overall agreement of 91%. CONCLUSION We also present a 2-year follow-up observation of the prediction results of the original model: the diabetic profile of a participant using mHealth applications and a wearable vest (smart shirt) improved compared with traditional/routine practice.
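The reported 91% overall agreement comes from Kappa analysis. A minimal sketch of Cohen's kappa, the standard chance-corrected agreement statistic, is below; the 3-class confusion counts are invented for illustration and are not the study's results.

```python
# Minimal sketch of Cohen's kappa: chance-corrected agreement between two
# raters (here, model label vs. clinical label). Counts are hypothetical.
import numpy as np

def cohens_kappa(confusion):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    p_observed = np.trace(confusion) / total
    # Chance agreement from the marginal label frequencies of each rater.
    p_chance = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    return (p_observed - p_chance) / (1 - p_chance)

# Rows: model label; columns: clinical label (healthy / prediabetes / T2DM).
table = [[40, 3, 1],
         [2, 30, 2],
         [1, 2, 19]]
kappa = cohens_kappa(table)   # chance-corrected, so lower than raw agreement
```

Note that kappa is always below the raw percent agreement on the diagonal, which is why an "overall agreement of 91%" and a kappa value are different numbers.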
Affiliation(s)
- Mirza Mansoor Baig, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Hamid GholamHosseini, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Jairo Gutierrez, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Ehsan Ullah, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
- Maria Lindén, School of Innovation Design and Engineering, Mälardalen University, Västerås, Sweden

24
Leblanc E, Washington P, Varma M, Dunlap K, Penev Y, Kline A, Wall DP. Feature replacement methods enable reliable home video analysis for machine learning detection of autism. Sci Rep 2020; 10:21245. [PMID: 33277527 PMCID: PMC7719177 DOI: 10.1038/s41598-020-76874-w] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 11/02/2020] [Indexed: 12/15/2022] Open
Abstract
Autism Spectrum Disorder is a neuropsychiatric condition affecting 53 million children worldwide, and early diagnosis is critical to the outcome of behavior therapies. Machine learning applied to features manually extracted from readily accessible videos (e.g., from smartphones) has the potential to scale this diagnostic process. However, nearly unavoidable variability in video quality can lead to missing features that degrade algorithm performance. To manage this uncertainty, we evaluated the impact of missing values and feature imputation methods on two previously published autism detection classifiers, trained on standard-of-care instrument scoresheets and tested on ratings of YouTube videos of 140 children. We compare the baseline method of listwise deletion to classic univariate and multivariate techniques. We also introduce a feature replacement method that, based on a score, selects a feature from an expanded dataset to fill in the missing value. The replacement feature selected can be identical for all records (general) or automatically adjusted to the record considered (dynamic). Our results show that general and dynamic feature replacement methods achieve higher performance than classic univariate and multivariate methods, supporting the hypothesis that algorithmic management can maintain the fidelity of video-based diagnostics in the face of missing values and variable video quality.
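A hedged sketch of the "general" feature replacement idea described above: choose, once for all records, the expanded-dataset feature that scores best against the feature that can go missing, then substitute its value wherever the original is absent. The scoring rule (absolute Pearson correlation) and all data below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of "general" feature replacement under synthetic data:
# one replacement column is chosen by a correlation score and reused for
# every record with a missing value.
import numpy as np

rng = np.random.default_rng(1)
n = 200
expanded = rng.normal(size=(n, 4))    # candidate replacement features

# A feature that may go missing, strongly related to expanded column 2.
target = 0.9 * expanded[:, 2] + 0.1 * rng.normal(size=n)

# Score each candidate by absolute Pearson correlation with the target,
# computed on records where the target is observed.
scores = [abs(np.corrcoef(expanded[:, j], target)[0, 1]) for j in range(4)]
best = int(np.argmax(scores))         # the "general" replacement column

# Fill a simulated missing entry from the chosen replacement feature.
missing_idx = 17
filled_value = expanded[missing_idx, best]
```

The "dynamic" variant would repeat the selection per record instead of fixing `best` once; listwise deletion, the baseline, would simply drop record 17.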
Affiliation(s)
- Emilie Leblanc, Department of Pediatrics, Stanford University, Palo Alto, CA, 94305, USA
- Peter Washington, Department of Bioengineering, Stanford University, Palo Alto, CA, 94305, USA
- Maya Varma, Department of Computer Science, Stanford University, Palo Alto, CA, 94305, USA
- Kaitlyn Dunlap, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Yordan Penev, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Aaron Kline, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Palo Alto, CA, 94305, USA
- Dennis P Wall, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences (by courtesy), Stanford University, Palo Alto, CA, 94305, USA

25
de Belen RAJ, Bednarz T, Sowmya A, Del Favero D. Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019. Transl Psychiatry 2020; 10:333. [PMID: 32999273 PMCID: PMC7528087 DOI: 10.1038/s41398-020-01015-w] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 09/04/2020] [Accepted: 09/09/2020] [Indexed: 11/29/2022] Open
Abstract
The current state of computer vision methods applied to autism spectrum disorder (ASD) research has not been well established. Increasing evidence suggests that computer vision techniques have a strong impact on autism research. The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy and autism research in general. A systematic review of publications indexed on PubMed, IEEE Xplore and ACM Digital Library was conducted from 2009 to 2019. Search terms included ['autis*' AND ('computer vision' OR 'behavio* imaging' OR 'behavio* analysis' OR 'affective computing')]. Results are reported according to PRISMA statement. A total of 94 studies are included in the analysis. Eligible papers are categorised based on the potential biological/behavioural markers quantified in each study. Then, different computer vision approaches that were employed in the included papers are described. Different publicly available datasets are also reviewed in order to rapidly familiarise researchers with datasets applicable to their field and to accelerate both new behavioural and technological work on autism research. Finally, future research directions are outlined. The findings in this review suggest that computer vision analysis is useful for the quantification of behavioural/biological markers which can further lead to a more objective analysis in autism research.
Affiliation(s)
- Tomasz Bednarz, School of Art & Design, University of New South Wales, Sydney, NSW, Australia
- Arcot Sowmya, School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Dennis Del Favero, School of Art & Design, University of New South Wales, Sydney, NSW, Australia

26
Black MH, Milbourn B, Chen NTM, McGarry S, Wali F, Ho ASV, Lee M, Bölte S, Falkmer T, Girdler S. The use of wearable technology to measure and support abilities, disabilities and functional skills in autistic youth: a scoping review. Scand J Child Adolesc Psychiatr Psychol 2020; 8:48-69. [PMID: 33520778 PMCID: PMC7685500 DOI: 10.21307/sjcapp-2020-006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
Background: Wearable technology (WT) to measure and support social and non-social functioning in Autism Spectrum Disorder (ASD) has been a growing interest of researchers over the past decade. There is, however, limited understanding of the WTs currently available for autistic individuals and of how they measure functioning in this population. Objective: This scoping review explored the use of WTs for measuring and supporting abilities, disabilities and functional skills in autistic youth. Method: Four electronic databases were searched to identify literature investigating the use of WT in autistic youth, resulting in a total of 33 studies being reviewed. Descriptive and content analyses were conducted, with studies subsequently mapped to the ASD International Classification of Functioning, Disability and Health Core-sets and the ICF Child and Youth Version (ICF-CY). Results: Studies were predominantly pilot studies for novel devices. WTs measured a range of physiological and behavioural functions to objectively quantify stereotypical motor movements, social function, communication, and emotion regulation in autistic youth across a range of environments and activities. Conclusions: While this review raises promising prospects for the use of WTs for autistic youth, the current evidence is limited and requires further investigation.
Affiliation(s)
- Melissa H Black, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia
- Benjamin Milbourn, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia
- Nigel T M Chen, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia
- Sarah McGarry, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia
- Fatema Wali, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia
- Armilda S V Ho, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia
- Mika Lee, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia
- Sven Bölte, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia; Center of Neurodevelopmental Disorders (KIND), Centre for Psychiatry Research, Department of Women's and Children's Health, Karolinska Institutet & Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden; Child and Adolescent Psychiatry, Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden
- Torbjorn Falkmer, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia; Pain and Rehabilitation Centre, Department of Medical and Health Sciences, Linkoping University, Linkoping, Sweden
- Sonya Girdler, School of Occupational Therapy, Social Work and Speech Pathology, Curtin University, Perth, Western Australia; Curtin Autism Research Group, Curtin University, Perth, Western Australia

27
Eysenbach G, Haber N, Voss C, Tamura S, Daniels J, Ma J, Chiang B, Ramachandran S, Schwartz J, Winograd T, Feinstein C, Wall DP. Toward Continuous Social Phenotyping: Analyzing Gaze Patterns in an Emotion Recognition Task for Children With Autism Through Wearable Smart Glasses. J Med Internet Res 2020; 22:e13810. [PMID: 32319961 PMCID: PMC7203617 DOI: 10.2196/13810] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Revised: 06/28/2019] [Accepted: 02/09/2020] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Several studies have shown that facial attention differs in children with autism. Measuring eye gaze and emotion recognition in children with autism is challenging, as standard clinical assessments must be delivered in clinical settings by a trained clinician. Wearable technologies may be able to bring eye gaze and emotion recognition into natural social interactions and settings. OBJECTIVE This study aimed to test (1) the feasibility of tracking gaze using wearable smart glasses during a facial expression recognition task and (2) the ability of these gaze-tracking data, together with facial expression recognition responses, to distinguish children with autism from neurotypical controls (NCs). METHODS We compared the eye gaze and emotion recognition patterns of 16 children with autism spectrum disorder (ASD) and 17 children without ASD via wearable smart glasses fitted with a custom eye tracker. Children identified static facial expressions of images presented on a computer screen along with nonsocial distractors while wearing Google Glass and the eye tracker. Faces were presented in three trials, during one of which children received feedback in the form of the correct classification. We employed hybrid human-labeling and computer vision-enabled methods for pupil tracking and world-gaze translation calibration. We analyzed the impact of gaze and emotion recognition features in a prediction task aiming to distinguish children with ASD from NC participants. RESULTS Gaze and emotion recognition patterns enabled the training of a classifier that distinguished ASD and NC groups. However, it was unable to significantly outperform other classifiers that used only age and gender features, suggesting that further work is necessary to disentangle these effects. CONCLUSIONS Although wearable smart glasses show promise in identifying subtle differences in gaze tracking and emotion recognition patterns in children with and without ASD, the present form factor and data do not allow for these differences to be reliably exploited by machine learning systems. Resolving these challenges will be an important step toward continuous tracking of the ASD phenotype.
Affiliation(s)
- Nick Haber, Graduate School of Education, Stanford University, Stanford, CA, United States
- Catalin Voss, Department of Computer Science, Stanford University, Stanford, CA, United States
- Jeffrey Ma, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Bryan Chiang, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Shasta Ramachandran, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Jessey Schwartz, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Terry Winograd, Department of Computer Science, Stanford University, Stanford, CA, United States
- Carl Feinstein, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Dennis P Wall, Departments of Pediatrics, Biomedical Data Science, and Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States

28
Kalantarian H, Jedoui K, Dunlap K, Schwartz J, Washington P, Husic A, Tariq Q, Ning M, Kline A, Wall DP. The Performance of Emotion Classifiers for Children With Parent-Reported Autism: Quantitative Feasibility Study. JMIR Ment Health 2020; 7:e13174. [PMID: 32234701 PMCID: PMC7160704 DOI: 10.2196/13174] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 07/03/2019] [Accepted: 02/23/2020] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Autism spectrum disorder (ASD) is a developmental disorder characterized by deficits in social communication and interaction, and restricted and repetitive behaviors and interests. The incidence of ASD has increased in recent years; it is now estimated that approximately 1 in 40 children in the United States is affected. Due in part to increasing prevalence, access to treatment has become constrained. Hope lies in mobile solutions that provide therapy through artificial intelligence (AI) approaches, including facial and emotion detection AI models developed by mainstream cloud providers and available directly to consumers. However, these solutions may not be sufficiently trained for use in pediatric populations. OBJECTIVE If emotion classifiers available off-the-shelf to the general public through Microsoft, Amazon, Google, and Sighthound are well-suited to the pediatric population, they could be used for developing mobile therapies targeting aspects of social communication and interaction, perhaps accelerating innovation in this space. This study aimed to test these classifiers directly with image data from children with parent-reported ASD recruited through crowdsourcing. METHODS We used a mobile game called Guess What? that challenges a child to act out a series of prompts displayed on the screen of a smartphone held on the forehead of his or her care provider. The game is intended to be a fun and engaging way for the child and parent to interact socially; for example, the parent attempts to guess what emotion the child is acting out (eg, surprised, scared, or disgusted). During a 90-second game session, as many as 50 prompts are shown while the child acts, and the video records the actions and expressions of the child. Due in part to the fun nature of the game, it is a viable way to remotely engage pediatric populations, including the autism population, through crowdsourcing. We recruited 21 children with ASD to play the game and gathered 2602 emotive frames following their game sessions. These data were used to evaluate the accuracy and performance of four state-of-the-art facial emotion classifiers to develop an understanding of the feasibility of these platforms for pediatric research. RESULTS All classifiers performed poorly for every evaluated emotion except happy. None of the classifiers correctly labeled more than 60.18% (1566/2602) of the evaluated frames. Moreover, none of the classifiers correctly identified more than 11% (6/51) of the angry frames or 14% (10/69) of the disgust frames. CONCLUSIONS The findings suggest that commercial emotion classifiers may be insufficiently trained for use in digital approaches to autism treatment and treatment tracking. Secure, privacy-preserving methods to increase labeled training data are needed to boost the models' performance before they can be used in AI-enabled approaches to social therapy of the kind that is common in autism treatments.
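The per-emotion results reported here (e.g., under 11% of angry frames correctly identified) reduce to per-class recall over labeled frames. A minimal sketch, with invented frame labels rather than the study's data:

```python
# Hedged sketch: per-emotion recall of a classifier on labeled frames.
# The true/predicted labels below are invented for illustration only.
from collections import Counter

true_labels = ["happy"] * 8 + ["angry"] * 6 + ["disgust"] * 6
pred_labels = (["happy"] * 8            # happy frames mostly recognized
               + ["happy"] * 5 + ["angry"]   # angry frames mislabeled happy
               + ["happy"] * 5 + ["disgust"])  # disgust frames likewise

correct = Counter()
total = Counter()
for t, p in zip(true_labels, pred_labels):
    total[t] += 1
    correct[t] += int(t == p)

# Recall per emotion: fraction of that emotion's frames labeled correctly.
recall = {emo: correct[emo] / total[emo] for emo in total}
```

A pattern like this, near-perfect recall for happy and very low recall for angry and disgust, is exactly the failure mode the abstract describes for off-the-shelf classifiers on pediatric faces.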
Affiliation(s)
- Haik Kalantarian, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Khaled Jedoui, Department of Mathematics, Stanford University, Stanford, CA, United States
- Kaitlyn Dunlap, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Jessey Schwartz, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Peter Washington, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Arman Husic, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qandeel Tariq, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Michael Ning, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Aaron Kline, Department of Pediatrics and Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Dennis Paul Wall, Department of Pediatrics, Department of Biomedical Data Science, and Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States

29
Washington P, Paskov KM, Kalantarian H, Stockham N, Voss C, Kline A, Patnaik R, Chrisman B, Varma M, Tariq Q, Dunlap K, Schwartz J, Haber N, Wall DP. Feature Selection and Dimension Reduction of Social Autism Data. PACIFIC SYMPOSIUM ON BIOCOMPUTING. PACIFIC SYMPOSIUM ON BIOCOMPUTING 2020; 25:707-718. [PMID: 31797640 PMCID: PMC6927820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
Autism Spectrum Disorder (ASD) is a complex neuropsychiatric condition with a highly heterogeneous phenotype. Following the work of Duda et al., which uses a reduced feature set from the Social Responsiveness Scale, Second Edition (SRS) to distinguish ASD from ADHD, we performed item-level question selection on answers to the SRS to determine whether ASD can be distinguished from non-ASD using a similarly small subset of questions. To explore feature redundancies between the SRS questions, we performed filter, wrapper, and embedded feature selection analyses. To explore the linearity of the SRS-related ASD phenotype, we then compressed the 65-question SRS into low-dimension representations using PCA, t-SNE, and a denoising autoencoder. We measured the performance of a multilayer perceptron (MLP) classifier with the top-ranking questions as input. Classification using only the top-rated question resulted in an AUC of over 92% for SRS-derived diagnoses and an AUC of over 83% for dataset-specific diagnoses. This high redundancy of features has implications for replacing the social behaviors targeted in behavioral diagnostics and interventions, where digital quantification of certain features may be obfuscated due to privacy concerns. We similarly evaluated the performance of an MLP classifier trained on the low-dimension representations of the SRS, finding that the denoising autoencoder achieved slightly higher performance than the PCA and t-SNE representations.
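One of the compression routes described above, PCA, can be sketched with a truncated SVD: center the 65-item response matrix, take the top right-singular vectors, and project each subject onto them. The response matrix below is synthetic, not the SRS data.

```python
# Minimal PCA sketch via truncated SVD on a synthetic 300-subject x 65-item
# response matrix (standing in for item-level SRS answers).
import numpy as np

rng = np.random.default_rng(2)
X = rng.integers(0, 4, size=(300, 65)).astype(float)  # ordinal item responses

Xc = X - X.mean(axis=0)            # center each item before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                              # target dimension of the representation
Z = Xc @ Vt[:k].T                  # k-dimensional representation per subject

# Fraction of total variance captured by the first k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
```

`Z` plays the role of the low-dimension input fed to the MLP; the t-SNE and denoising-autoencoder variants would replace this projection step with their own (nonlinear) embeddings.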
Affiliation(s)
- Peter Washington
- Department of Bioengineering, Stanford University, Palo Alto, CA, USA
30
Washington P, Kalantarian H, Tariq Q, Schwartz J, Dunlap K, Chrisman B, Varma M, Ning M, Kline A, Stockham N, Paskov K, Voss C, Haber N, Wall DP. Validity of Online Screening for Autism: Crowdsourcing Study Comparing Paid and Unpaid Diagnostic Tasks. J Med Internet Res 2019; 21:e13668. [PMID: 31124463 PMCID: PMC6552453 DOI: 10.2196/13668] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2019] [Revised: 04/15/2019] [Accepted: 04/16/2019] [Indexed: 12/05/2022] Open
Abstract
BACKGROUND Obtaining a diagnosis of neuropsychiatric disorders such as autism requires long waiting times that can exceed a year and can be prohibitively expensive. Crowdsourcing approaches may provide a scalable alternative that can accelerate general access to care and permit underserved populations to obtain an accurate diagnosis. OBJECTIVE We aimed to perform a series of studies to explore whether paid crowd workers on Amazon Mechanical Turk (AMT) and citizen crowd workers on a public website shared on social media can provide accurate online detection of autism, conducted via crowdsourced ratings of short home video clips. METHODS Three online studies were performed: (1) a paid crowdsourcing task on AMT (N=54) where crowd workers were asked to classify 10 short video clips of children as "Autism" or "Not autism," (2) a more complex paid crowdsourcing task (N=27) with only those raters who correctly rated ≥8 of the 10 videos during the first study, and (3) a public unpaid study (N=115) identical to the first study. RESULTS For Study 1, the mean score of the participants who completed all questions was 7.50/10 (SD 1.46). When only analyzing the workers who scored ≥8/10 (n=27/54), there was a weak negative correlation between the time spent rating the videos and the sensitivity (ρ=-0.44, P=.02). For Study 2, the mean score of the participants rating new videos was 6.76/10 (SD 0.59). The average deviation between the crowdsourced answers and gold standard ratings provided by two expert clinical research coordinators was 0.56, with an SD of 0.51 (maximum possible SD is 3). All paid crowd workers who scored 8/10 in Study 1 either expressed enjoyment in performing the task in Study 2 or provided no negative comments. For Study 3, the mean score of the participants who completed all questions was 6.67/10 (SD 1.61). 
There were weak correlations between age and score (r=0.22, P=.014), age and sensitivity (r=-0.19, P=.04), number of family members with autism and sensitivity (r=-0.195, P=.04), and number of family members with autism and precision (r=-0.203, P=.03). A two-tailed t test between the scores of the paid workers in Study 1 and the unpaid workers in Study 3 showed a significant difference (P<.001). CONCLUSIONS Many paid crowd workers on AMT enjoyed answering screening questions from videos, suggesting higher intrinsic motivation to make quality assessments. Paid crowdsourcing provides promising screening assessments of pediatric autism with an average deviation <20% from professional gold standard raters, which is potentially a clinically informative estimate for parents. Parents of children with autism likely overfit their intuition to their own affected child. This work provides preliminary demographic data on raters who may have higher ability to recognize and measure features of autism across its wide range of phenotypic manifestations.
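The statistics reported in this abstract (Spearman rank correlations between rater attributes and performance, and a two-tailed independent-samples t test between paid and unpaid scores) can be reproduced in form with scipy. The snippet below runs on synthetic rater data whose means and spreads loosely echo the reported figures; all values are illustrative, not the study's data.

```python
# Hypothetical sketch of the study's statistics on synthetic rater data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
paid_scores = rng.normal(7.50, 1.46, size=54)     # Study 1 paid workers
unpaid_scores = rng.normal(6.67, 1.61, size=115)  # Study 3 unpaid workers

# Spearman rank correlation, e.g. time spent rating vs. sensitivity
# (the rho statistic quoted in the text)
time_spent = rng.uniform(5, 30, size=27)
sensitivity = 1.0 - 0.02 * time_spent + rng.normal(0, 0.1, size=27)
rho, p_rho = stats.spearmanr(time_spent, sensitivity)

# Two-tailed independent-samples t test, paid vs. unpaid scores
t, p_t = stats.ttest_ind(paid_scores, unpaid_scores)
print(f"rho={rho:.2f} (p={p_rho:.3f}), t={t:.2f} (p={p_t:.4f})")
```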
Affiliation(s)
- Peter Washington
- Department of Bioengineering, Stanford University, Stanford, CA, United States
- Haik Kalantarian
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qandeel Tariq
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Jessey Schwartz
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Kaitlyn Dunlap
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Brianna Chrisman
- Department of Bioengineering, Stanford University, Stanford, CA, United States
- Maya Varma
- Department of Computer Science, Stanford University, Stanford, CA, United States
- Michael Ning
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Aaron Kline
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Nathaniel Stockham
- Department of Neuroscience, Stanford University, Stanford, CA, United States
- Kelley Paskov
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Catalin Voss
- Department of Computer Science, Stanford University, Stanford, CA, United States
- Nick Haber
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Department of Pediatrics, Stanford University, Stanford, CA, United States
- Department of Psychology, Stanford University, Stanford, CA, United States
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Dennis Paul Wall
- Department of Pediatrics, Stanford University, Stanford, CA, United States
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
- Division of Systems Medicine, Department of Biomedical Data Science, Stanford University, Palo Alto, CA, United States
31
Voss C, Schwartz J, Daniels J, Kline A, Haber N, Washington P, Tariq Q, Robinson TN, Desai M, Phillips JM, Feinstein C, Winograd T, Wall DP. Effect of Wearable Digital Intervention for Improving Socialization in Children With Autism Spectrum Disorder: A Randomized Clinical Trial. JAMA Pediatr 2019; 173:446-454. [PMID: 30907929 PMCID: PMC6503634 DOI: 10.1001/jamapediatrics.2019.0285] [Citation(s) in RCA: 75] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/25/2018] [Accepted: 01/17/2019] [Indexed: 11/14/2022]
Abstract
Importance Autism behavioral therapy is effective but expensive and difficult to access. While mobile technology-based therapy can alleviate wait-lists and scale for increasing demand, few clinical trials exist to support its use for autism spectrum disorder (ASD) care. Objective To evaluate the efficacy of Superpower Glass, an artificial intelligence-driven wearable behavioral intervention for improving social outcomes of children with ASD. Design, Setting, and Participants A randomized clinical trial in which participants received the Superpower Glass intervention plus standard of care applied behavioral analysis therapy and control participants received only applied behavioral analysis therapy. Assessments were completed at the Stanford University Medical School, and enrolled participants used the Superpower Glass intervention in their homes. Children aged 6 to 12 years with a formal ASD diagnosis who were currently receiving applied behavioral analysis therapy were included. Families were recruited between June 2016 and December 2017. The first participant was enrolled on November 1, 2016, and the last appointment was completed on April 11, 2018. Data analysis was conducted between April and October 2018. Interventions The Superpower Glass intervention, deployed via Google Glass (worn by the child) and a smartphone app, promotes facial engagement and emotion recognition by detecting facial expressions and providing reinforcing social cues. Families were asked to conduct 20-minute sessions at home 4 times per week for 6 weeks. Main Outcomes and Measures Four socialization measures were assessed using an intention-to-treat analysis with a Bonferroni test correction. Results Overall, 71 children (63 boys [89%]; mean [SD] age, 8.38 [2.46] years) diagnosed with ASD were enrolled (40 [56.3%] were randomized to treatment, and 31 [43.7%] were randomized to control).
Children receiving the intervention showed significant improvements on the Vineland Adaptive Behaviors Scale socialization subscale compared with treatment as usual controls (mean [SD] treatment impact, 4.58 [1.62]; P = .005). Positive mean treatment effects were also found for the other 3 primary measures but not to a significance threshold of P = .0125. Conclusions and Relevance The observed 4.58-point average gain on the Vineland Adaptive Behaviors Scale socialization subscale is comparable with gains observed with standard of care therapy. To our knowledge, this is the first randomized clinical trial to demonstrate efficacy of a wearable digital intervention to improve social behavior of children with ASD. The intervention reinforces facial engagement and emotion recognition, suggesting either or both could be a mechanism of action driving the observed improvement. This study underscores the potential of digital home therapy to augment the standard of care. Trial Registration ClinicalTrials.gov identifier: NCT03569176.
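The Bonferroni correction mentioned in this abstract explains the P = .0125 threshold: with four primary outcome measures and a family-wise alpha of .05, each test is judged against .05 / 4. The sketch below illustrates that arithmetic; only the VABS socialization p value (.005) comes from the abstract, and the other three p values are hypothetical placeholders.

```python
# Bonferroni correction across k outcome measures: alpha / k per test.
k, alpha = 4, 0.05
threshold = alpha / k  # 0.0125, the threshold quoted in the abstract

p_values = {
    "VABS socialization": 0.005,  # reported treatment effect
    "measure 2": 0.03,            # hypothetical placeholders for the
    "measure 3": 0.06,            # other three primary measures
    "measure 4": 0.02,
}
for name, p in p_values.items():
    verdict = "significant" if p < threshold else "not significant"
    print(f"{name}: p={p} -> {verdict} at threshold {threshold}")
```

Under this correction only the socialization subscale result (.005 < .0125) clears the threshold, which matches the abstract's statement that the other three measures showed positive effects without reaching significance.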
Affiliation(s)
- Catalin Voss
- Department of Computer Science, Stanford University, Stanford, California
- Jessey Schwartz
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Jena Daniels
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Aaron Kline
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Nick Haber
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Peter Washington
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Qandeel Tariq
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Thomas N. Robinson
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Departments of Pediatrics (Stanford Solutions Science Lab) and Medicine, Stanford University, Stanford, California
- Manisha Desai
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Jennifer M. Phillips
- Departments of Pediatrics (Stanford Solutions Science Lab) and Medicine, Stanford University, Stanford, California
- Carl Feinstein
- Departments of Pediatrics (Stanford Solutions Science Lab) and Medicine, Stanford University, Stanford, California
- Terry Winograd
- Department of Computer Science, Stanford University, Stanford, California
- Dennis P. Wall
- Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
- Department of Biomedical Data Science, Stanford University, Stanford, California
- Department of Psychiatry and Behavioral Sciences (by courtesy), Stanford University, Stanford, California
32
Daniels J, Schwartz JN, Voss C, Haber N, Fazel A, Kline A, Washington P, Feinstein C, Winograd T, Wall DP. Exploratory study examining the at-home feasibility of a wearable tool for social-affective learning in children with autism. NPJ Digit Med 2018; 1:32. [PMID: 31304314 PMCID: PMC6550272 DOI: 10.1038/s41746-018-0035-3] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 05/18/2018] [Accepted: 05/25/2018] [Indexed: 12/30/2022] Open
Abstract
Although standard behavioral interventions for autism spectrum disorder (ASD) are effective therapies for social deficits, they face criticism for being time-intensive and overdependent on specialists. Earlier starting age of therapy is a strong predictor of later success, but waitlists for therapies can be 18 months long. To address these complications, we developed Superpower Glass, a machine-learning-assisted software system that runs on Google Glass and an Android smartphone, designed for use during social interactions. This pilot exploratory study examines our prototype tool’s potential for social-affective learning for children with autism. We sent our tool home with 14 families and assessed changes from intake to conclusion through the Social Responsiveness Scale (SRS-2), a facial affect recognition task (EGG), and qualitative parent reports. A repeated-measures one-way ANOVA demonstrated a decrease in SRS-2 total scores by an average of 7.14 points (F(1,13) = 33.20, p < .001; higher scores indicate higher ASD severity). EGG scores also increased by an average of 9.55 correct responses (F(1,10) = 11.89, p < .01). Parents reported increased eye contact and greater social acuity. This feasibility study supports using mobile technologies for potential therapeutic purposes.
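With only two time points (intake and conclusion), a repeated-measures one-way ANOVA such as the F(1,13) test reported here is mathematically equivalent to a paired t test, with F = t². The sketch below demonstrates that equivalence on synthetic data; the sample size and mean score drop echo the abstract, but the data themselves are fabricated for illustration.

```python
# Illustrative sketch: for two repeated measurements, a one-way
# repeated-measures ANOVA reduces to a paired t test with F = t**2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
intake = rng.normal(75, 10, size=14)                # SRS-2 totals, 14 families
conclusion = intake - 7.14 + rng.normal(0, 3, 14)   # mean drop as reported

t, p = stats.ttest_rel(intake, conclusion)          # paired (repeated) t test
F = t ** 2                                          # F(1, n-1) for two levels
print(f"t={t:.2f}, F(1,13)={F:.2f}, p={p:.4f}")
```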
Affiliation(s)
- Jena Daniels
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Jessey N Schwartz
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Catalin Voss
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Nick Haber
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Azar Fazel
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Aaron Kline
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Peter Washington
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Carl Feinstein
- Department of Psychiatry and Behavioral Sciences, Stanford University, Palo Alto, CA, USA
- Terry Winograd
- Department of Computer Science, Stanford University, Palo Alto, CA, USA
- Dennis P Wall
- Division of Systems Medicine, Department of Pediatrics, Stanford University, Palo Alto, CA, USA
- Department of Psychiatry and Behavioral Sciences, Stanford University, Palo Alto, CA, USA
- Department of Biomedical Data Science, Stanford University, Palo Alto, CA, USA