1
Harada Y, Ohyama J, Sano M, Ishii N, Maida K, Wada M, Wada M. Temporal characteristics of facial ensemble in individuals with autism spectrum disorder: examination from arousal and attentional allocation. Front Psychiatry 2024; 15:1328708. PMID: 38439795; PMCID: PMC10910007; DOI: 10.3389/fpsyt.2024.1328708. Received 2023-10-27; accepted 2024-02-02.
Abstract
Introduction Individuals with Autism Spectrum Disorder (ASD) show atypical recognition of facial emotions, which has been suggested to stem from arousal and attention allocation. Recent studies have focused on the ability to perceive an average expression from multiple spatially different expressions. This study investigated the effect of autistic traits on temporal ensemble, that is, the perception of the average expression from multiple changing expressions. Methods We conducted a simplified temporal-ensemble task and analyzed behavioral responses, pupil size, and viewing times for the eyes of a face. Participants with and without a diagnosis of ASD viewed serial presentations of facial expressions that randomly switched between emotional and neutral. The temporal ratio of the emotional expressions was manipulated, and the participants estimated the intensity of the facial emotions across the overall presentation. Results We obtained three major results: (a) many participants with ASD were less susceptible to the ratio of anger expressions in temporal ensembles, (b) they showed significantly larger pupil size for angry expressions (within-participants comparison) and smaller pupil size for sad expressions (between-groups comparison), and (c) pupil size and viewing time to the eyes were not correlated with the temporal ensemble. Discussion These results suggest atypical temporal integration of anger expressions and atypical arousal characteristics in individuals with ASD; however, the atypical integration is not fully explained by arousal or attentional allocation.
Affiliation(s)
- Yuki Harada
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Faculty of Humanities, Kyoto University of Advanced Science, Kyoto, Japan
- Junji Ohyama
- Human Augmentation Research Center, National Institute of Advanced Industrial Science and Technology, Kashiwa, Chiba, Japan
- Misako Sano
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Graduate School of Medicine, Nagoya University, Nagoya, Aichi, Japan
- Naomi Ishii
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Keiko Maida
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Megumi Wada
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
- Graduate School of Contemporary Psychology, Rikkyo University, Niiza, Saitama, Japan
- Makoto Wada
- Developmental Disorders Section, Department of Rehabilitation for Brain Functions, Research Institute of National Rehabilitation Center for Persons with Disabilities, Tokorozawa, Saitama, Japan
2
Pandya S, Jain S, Verma J. A comprehensive analysis towards exploring the promises of AI-related approaches in autism research. Comput Biol Med 2024; 168:107801. PMID: 38064848; DOI: 10.1016/j.compbiomed.2023.107801. Received 2023-08-26; revised 2023-11-09; accepted 2023-11-29.
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition that presents challenges in communication, social interaction, repetitive behaviour, and limited interests. Detecting ASD at an early stage is crucial for timely intervention and an improved quality of life. In recent years, Artificial Intelligence (AI) has been increasingly used in ASD research, driven by the growing number of ASD cases and the recognition that early detection leads to better symptom management. This study explores the potential of AI in identifying early indicators of autism, aligning with the United Nations Sustainable Development Goals (SDGs) of Good Health and Well-being (Goal 3) and Peace, Justice, and Strong Institutions (Goal 16). The paper aims to provide a comprehensive overview of the current state of the art in AI-based autism classification by reviewing publications from the last decade. It covers modalities such as eye gaze, facial expression, motor skills, MRI/fMRI, and EEG, as well as multi-modal approaches, grouped primarily into behavioural and biological markers. The paper presents a timeline spanning from the history of ASD to recent developments in the field of AI, provides a category-wise analysis of AI-based applications in ASD with diagrammatic summaries of the different modalities, and reports on the successes and challenges of applying AI to ASD detection, along with publicly available datasets. Finally, it outlines future directions, offering a complete and systematic overview for researchers in the field of ASD.
Affiliation(s)
- Shivani Pandya
- Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
- Swati Jain
- Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
- Jaiprakash Verma
- Department of Computer Science and Engineering, Nirma University, Ahmedabad, Gujarat 382481, India
3
Koehler JC, Falter-Wagner CM. Digitally assisted diagnostics of autism spectrum disorder. Front Psychiatry 2023; 14:1066284. PMID: 36816410; PMCID: PMC9928948; DOI: 10.3389/fpsyt.2023.1066284. Received 2022-10-10; accepted 2023-01-11.
Abstract
Digital technologies have the potential to support psychiatric diagnostics, and in particular differential diagnostics of autism spectrum disorder, in the near future, making clinical decisions more objective, reliable, and evidence-based while reducing demands on clinical resources. Multimodal automated measurement of symptoms at the cognitive, behavioral, and neuronal levels, combined with artificial intelligence applications, offers promising strides toward personalized prognostics and treatment strategies. In addition, these new technologies could enable systematic and continuous assessment of longitudinal symptom development beyond the usual scope of clinical practice. Early recognition of exacerbation, as well as both simplified and detailed progression control, would become possible. Ultimately, digitally assisted diagnostics will advance early recognition. Nonetheless, digital technologies cannot and should not substitute for clinical decision making, which must take into account the comprehensive complexity of the individual longitudinal and cross-sectional presentation of autism spectrum disorder. Yet they might aid the clinician by objectifying decision processes and providing welcome relief to resources in the clinical setting.
Affiliation(s)
- Jana Christina Koehler
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Munich, Munich, Germany
4
Quinde-Zlibut J, Munshi A, Biswas G, Cascio CJ. Identifying and describing subtypes of spontaneous empathic facial expression production in autistic adults. J Neurodev Disord 2022; 14:43. PMID: 35915404; PMCID: PMC9342940; DOI: 10.1186/s11689-022-09451-z. Received 2021-10-01; accepted 2022-07-08.
Abstract
BACKGROUND It is unclear whether atypical patterns of facial expression production metrics in autism reflect the dynamic and nuanced nature of facial expressions across people or a true diagnostic difference. Furthermore, the heterogeneity observed across autism symptomatology suggests a need for more adaptive and personalized social skills programs. Toward this goal, it would be useful to have a more concrete and empirical understanding of the different expressiveness profiles within the autistic population and how they differ from neurotypicals. METHODS We used automated facial coding and an unsupervised clustering approach to limit the inter-individual variability in facial expression production that may have obscured group differences in previous studies, allowing an "apples-to-apples" comparison between autistic and neurotypical adults. Specifically, we applied k-means clustering to identify subtypes of facial expressiveness in an autism group (N = 27) and a neurotypical control group (N = 57) separately. The two most stable clusters from these analyses were then further characterized and compared based on their expressiveness and emotive congruence to emotionally charged stimuli. RESULTS Our main finding was that a subset of autistic adults in our sample show heightened spontaneous facial expressions irrespective of image valence. We did not find evidence for greater incongruous (i.e., inappropriate) facial expressions in autism. Finally, we found a negative trend between expressiveness and emotion recognition within the autism group. CONCLUSION The results from our previous study on self-reported empathy, together with the current expressivity findings, point to a higher degree of facial expression recruited for emotional resonance in autism that may not always be adaptive (e.g., experiencing similar emotional resonance regardless of valence). These findings also build on previous work indicating that facial expression intensity is not diminished in autism, and they suggest that intervention programs should focus on emotion recognition and social skills in the context of both negative and positive emotions.
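The clustering step this abstract describes can be sketched in a few lines. The following is a minimal illustration of k-means subtyping on synthetic "expressiveness" features, not the authors' pipeline: the two-blob data, the deterministic initialization, and k = 2 are assumptions made purely for the demo.

```python
import numpy as np

def kmeans(X, k, n_iter=100):
    """Plain k-means with deterministic initialization (first and last rows)."""
    centers = X[[0, len(X) - 1]][:k].astype(float)
    for _ in range(n_iter):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its assigned samples.
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Synthetic expressiveness features: two well-separated subgroups of 20 people.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),   # lower-expressiveness subgroup
               rng.normal(4.0, 0.5, (20, 2))])  # heightened-expressiveness subgroup
labels, centers = kmeans(X, k=2)
```

In practice one would, as the paper describes, cluster each group (autistic and neurotypical) separately, repeat over random restarts, and retain only the clusters that remain stable across restarts.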
Affiliation(s)
- Jennifer Quinde-Zlibut
- Graduate Program in Neuroscience, Vanderbilt University, Nashville, USA
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, USA
- Anabil Munshi
- Institute for Software Integrated Systems, Vanderbilt University, Nashville, USA
- Gautam Biswas
- Institute for Software Integrated Systems, Vanderbilt University, Nashville, USA
- Carissa J. Cascio
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, USA
5
Sleiman E, Mutlu OC, Surabhi S, Husic A, Kline A, Washington P, Wall DP. Deep Learning-Based Autism Spectrum Disorder Detection Using Emotion Features From Video Recordings (Preprint). JMIR Biomed Eng 2022. DOI: 10.2196/39982.
6
Liu J, Wang Z, Xu K, Ji B, Zhang G, Wang Y, Deng J, Xu Q, Xu X, Liu H. Early Screening of Autism in Toddlers via Response-To-Instructions Protocol. IEEE Trans Cybern 2022; 52:3914-3924. PMID: 32966227; DOI: 10.1109/tcyb.2020.3017866.
Abstract
Early screening for autism spectrum disorder (ASD) is crucial, since early intervention demonstrably improves functional social behavior in toddlers. This article attempts to bootstrap the response-to-instructions (RTI) protocol with vision-based solutions in order to assist professional clinicians with automatic autism diagnosis. The correlation between detected objects and the toddler's emotional features, such as gaze, is constructed to analyze autistic symptoms. Twenty toddlers between 16 and 32 months of age, 15 of whom were diagnosed with ASD, participated in this study. The RTI method is validated against human codings, and group differences between ASD and typically developing (TD) toddlers are analyzed. The results suggest that the agreement between clinical diagnosis and the RTI method reaches 95% across all 20 subjects, which indicates that vision-based solutions are highly feasible for automatic autism diagnosis.
7
Dong H, Chen D, Zhang L, Ke H, Li X. Subject sensitive EEG discrimination with fast reconstructable CNN driven by reinforcement learning: A case study of ASD evaluation. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.04.009.
8
Lecciso F, Levante A, Fabio RA, Caprì T, Leo M, Carcagnì P, Distante C, Mazzeo PL, Spagnolo P, Petrocchi S. Emotional Expression in Children With ASD: A Pre-Study on a Two-Group Pre-Post-Test Design Comparing Robot-Based and Computer-Based Training. Front Psychol 2021; 12:678052. PMID: 34366997; PMCID: PMC8334177; DOI: 10.3389/fpsyg.2021.678052. Received 2021-03-08; accepted 2021-06-17.
Abstract
Several studies have found a delay in the development of facial emotion recognition and expression in children with an autism spectrum condition (ASC). Several interventions have been designed to help children fill this gap. Most of them adopt technological devices (i.e., robots, computers, and avatars) as social mediators and have reported evidence of improvement. Few interventions have aimed at promoting both emotion recognition and expression abilities and, among these, most have focused on emotion recognition. Moreover, a crucial point is the generalization of the abilities acquired during treatment to naturalistic interactions. This study aimed to evaluate the effectiveness of two technology-based interventions focused on the expression of basic emotions, comparing a robot-based type of training with a "hybrid" computer-based one. Furthermore, we explored the engagement of the hybrid technological device introduced in the study as an intermediate step to facilitate the generalization of the acquired competencies in naturalistic settings. A two-group pre-post-test design was applied to a sample of 12 children (M = 9.33; ds = 2.19) with autism. The children were included in one of two groups: group 1 received robot-based training (n = 6), and group 2 received computer-based training (n = 6). Pre- and post-intervention evaluations (i.e., time) of facial expression recognition and production of four basic emotions (happiness, sadness, fear, and anger) were performed. Non-parametric ANOVAs found significant time effects between pre- and post-intervention on the ability to recognize sadness [t (1) = 7.35, p = 0.006; pre: M (ds) = 4.58 (0.51); post: M (ds) = 5], and to express happiness [t (1) = 5.72, p = 0.016; pre: M (ds) = 3.25 (1.81); post: M (ds) = 4.25 (1.76)] and sadness [t (1) = 10.89, p < 0; pre: M (ds) = 1.5 (1.32); post: M (ds) = 3.42 (1.78)]. The group*time interactions were significant for fear [t (1) = 1.019, p = 0.03] and anger expression [t (1) = 1.039, p = 0.03]. However, Mann-Whitney comparisons did not show significant differences between robot-based and computer-based training. Finally, no difference was found in the levels of engagement between the two groups in terms of the number of voice prompts given during the interventions. Although the results are preliminary and should be interpreted with caution, this study suggests that the two types of technology-based training, one mediated via a humanoid robot and the other via a pre-recorded video of a peer, perform similarly in promoting facial recognition and expression of basic emotions in children with an ASC. The findings represent a first step toward generalizing the abilities acquired in a laboratory-trained situation to naturalistic interactions.
Affiliation(s)
- Flavia Lecciso
- Department of History, Society and Human Studies, University of Salento, Lecce, Italy
- Laboratory of Applied Psychology and Intervention, University of Salento, Lecce, Italy
- Annalisa Levante
- Department of History, Society and Human Studies, University of Salento, Lecce, Italy
- Laboratory of Applied Psychology and Intervention, University of Salento, Lecce, Italy
- Rosa Angela Fabio
- Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Tindara Caprì
- Department of Clinical and Experimental Medicine, University of Messina, Messina, Italy
- Marco Leo
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Lecce, Italy
- Pierluigi Carcagnì
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Lecce, Italy
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Lecce, Italy
- Pier Luigi Mazzeo
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Lecce, Italy
- Paolo Spagnolo
- Institute of Applied Sciences and Intelligent Systems, National Research Council, Lecce, Italy
- Serena Petrocchi
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
9
Levante A, Petrocchi S, Bianco F, Castelli I, Colombi C, Keller R, Narzisi A, Masi G, Lecciso F. Psychological Impact of COVID-19 Outbreak on Families of Children with Autism Spectrum Disorder and Typically Developing Peers: An Online Survey. Brain Sci 2021; 11:808. PMID: 34207173; PMCID: PMC8235600; DOI: 10.3390/brainsci11060808. Received 2021-05-21; revised 2021-06-11; accepted 2021-06-15.
Abstract
BACKGROUND When COVID-19 was declared a pandemic, many countries imposed severe lockdowns that changed families' routines and negatively impacted parents' and children's mental health. Several studies on families with children with autism spectrum disorder (ASD) revealed that lockdown increased the difficulties faced by individuals with ASD, as well as parental distress. No studies, however, have analyzed the interplay between parental distress, children's emotional responses, and adaptive behaviors in children with ASD during the mandatory lockdown. This study examined that interplay and also compared families with children on the spectrum and families with typically developing (TD) children in terms of parental distress, children's emotional responses, and behavioral adaptation. METHODS 120 parents of children aged 5-10 years (53 with ASD) participated in the study. RESULTS In the four tested models, children's positive and negative emotional responses mediated the impact of parental distress on children's playing activities. In the ASD group, parents reported that their children expressed more positive emotions, but engaged in fewer playing activities, than TD children. Families with children on the spectrum reported greater behavioral problems during the lockdown and more parental distress. CONCLUSIONS Our findings can inform interventions designed to reduce parental distress and to develop coping strategies for better managing the caregiver-child relationship.
Affiliation(s)
- Annalisa Levante
- Department of History, Society, and Human Studies, University of Salento, 73100 Lecce, Italy
- Laboratory of Applied Psychology, Department of History, Society, and Human Studies, University of Salento, 73100 Lecce, Italy
- Serena Petrocchi
- Laboratory of Applied Psychology, Department of History, Society, and Human Studies, University of Salento, 73100 Lecce, Italy
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Via Buffi 13, 6900 Lugano, Switzerland
- Federica Bianco
- Department of Human and Social Sciences, University of Bergamo, 23129 Bergamo, Italy
- Ilaria Castelli
- Department of Human and Social Sciences, University of Bergamo, 23129 Bergamo, Italy
- Costanza Colombi
- IRCCS Stella Maris Foundation, 56018 Pisa, Italy
- Department of Psychiatry, University of Michigan, Ann Arbor, MI 48109, USA
- Roberto Keller
- Adult Autism Center, Mental Health Department, Local Health Unit ASL Città di Torino, 10138 Turin, Italy
- Antonio Narzisi
- IRCCS Stella Maris Foundation, 56018 Pisa, Italy
- Gabriele Masi
- IRCCS Stella Maris Foundation, 56018 Pisa, Italy
- Flavia Lecciso
- Department of History, Society, and Human Studies, University of Salento, 73100 Lecce, Italy
- Laboratory of Applied Psychology, Department of History, Society, and Human Studies, University of Salento, 73100 Lecce, Italy
10
Minaee S, Minaei M, Abdolrashidi A. Deep-Emotion: Facial Expression Recognition Using Attentional Convolutional Network. Sensors (Basel) 2021; 21:3046. PMID: 33925371; PMCID: PMC8123912; DOI: 10.3390/s21093046. Received 2021-02-15; revised 2021-04-20; accepted 2021-04-23.
Abstract
Facial expression recognition has been an active area of research over the past few decades, and it remains challenging due to high intra-class variation. Traditional approaches to this problem rely on hand-crafted features such as SIFT, HOG, and LBP, followed by a classifier trained on a database of images or videos. Most of these works perform reasonably well on datasets of images captured under controlled conditions but fail to perform as well on more challenging datasets with greater image variation and partial faces. In recent years, several works have proposed end-to-end frameworks for facial expression recognition using deep learning models. Despite the better performance of these works, there is still much room for improvement. In this work, we propose a deep learning approach based on an attentional convolutional network that is able to focus on important parts of the face and achieves significant improvement over previous models on multiple datasets, including FER-2013, CK+, FERG, and JAFFE. We also use a visualization technique to find the important facial regions for detecting different emotions, based on the classifier's output. Through experimental results, we show that different emotions are sensitive to different parts of the face.
Affiliation(s)
- Mehdi Minaei
- CS Department, Sama Technical College, Azad University, Tonekabon 46817, Iran
11
Abrami A, Gunzler S, Kilbane C, Ostrand R, Ho B, Cecchi G. Automated Computer Vision Assessment of Hypomimia in Parkinson Disease: Proof-of-Principle Pilot Study. J Med Internet Res 2021; 23:e21037. PMID: 33616535; PMCID: PMC7939934; DOI: 10.2196/21037. Received 2020-06-04; revised 2020-07-30; accepted 2020-12-18.
Abstract
BACKGROUND Facial expressions require the complex coordination of 43 different facial muscles. Parkinson disease (PD) affects facial musculature, leading to "hypomimia" or "masked facies." OBJECTIVE We aimed to determine whether modern computer vision techniques can be applied to detect masked facies and quantify drug states in PD. METHODS We trained a convolutional neural network on images extracted from videos of 107 self-identified people with PD, along with 1595 videos of controls, in order to detect PD hypomimia cues. This trained model was applied to clinical interviews of 35 PD patients in their on and off drug motor states, and to seven journalist interviews of the actor Alan Alda obtained before and after he was diagnosed with PD. RESULTS The algorithm achieved a test-set area under the receiver operating characteristic curve of 0.71 on 54 subjects for detecting PD hypomimia, compared to a value of 0.75 for trained neurologists using the Unified Parkinson Disease Rating Scale-III Facial Expression score. Additionally, the model's accuracy in classifying the on and off drug states in the clinical samples was 63% (22/35), in contrast to an accuracy of 46% (16/35) when using clinical rater scores. Finally, each of Alan Alda's seven interviews was successfully classified as occurring before (versus after) his diagnosis, with 100% accuracy (7/7). CONCLUSIONS This proof-of-principle pilot study demonstrates that computer vision holds promise as a valuable tool for detecting PD hypomimia and for monitoring a patient's motor state in an objective and noninvasive way, particularly given the increasing importance of telemedicine.
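The headline metric in this abstract is the area under the ROC curve. As a refresher (this is not the study's code), AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case, which the pairwise-comparison sketch below makes concrete; the example scores and labels are made up.

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC via the pairwise-ranking (Mann-Whitney) formulation:
    the fraction of (positive, negative) pairs in which the positive
    case receives the higher score; ties count as half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    correct = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (correct + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical classifier scores: label 1 = hypomimia present, 0 = absent.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

Under this reading, the reported AUC of 0.71 means the model ranks a PD face above a control face about 71% of the time.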
Affiliation(s)
- Avner Abrami
- IBM Research - Computational Biology Center, Yorktown Heights, NY, United States
- Steven Gunzler
- Parkinson's and Movement Disorders Center, Neurological Institute, University Hospitals Cleveland Medical Center, Cleveland, OH, United States
- Camilla Kilbane
- Parkinson's and Movement Disorders Center, Neurological Institute, University Hospitals Cleveland Medical Center, Cleveland, OH, United States
- Rachel Ostrand
- IBM Research - Computational Biology Center, Yorktown Heights, NY, United States
- Bryan Ho
- Department of Neurology, Tufts Medical Center, Boston, MA, United States
- Guillermo Cecchi
- IBM Research - Computational Biology Center, Yorktown Heights, NY, United States
12
Kowallik AE, Pohl M, Schweinberger SR. Facial Imitation Improves Emotion Recognition in Adults with Different Levels of Sub-Clinical Autistic Traits. J Intell 2021; 9(1):4. PMID: 33450891; PMCID: PMC7838766; DOI: 10.3390/jintelligence9010004. Received 2020-08-31; revised 2020-11-27; accepted 2020-12-23.
Abstract
We used computer-based automatic expression analysis to investigate the impact of imitation on facial emotion recognition with a baseline-intervention-retest design. The participants, 55 young adults with varying degrees of autistic traits, completed an emotion recognition task with images of faces displaying one of six basic emotional expressions. The task was then repeated with instructions to imitate the expressions. During the experiment, a camera captured the participants' faces for an automatic evaluation of their imitation performance. The instruction to imitate enhanced imitation performance as well as emotion recognition. Of relevance, emotion recognition improvements in the imitation block were larger in people with higher levels of autistic traits, whereas imitation enhancements were independent of autistic traits. The finding that an imitation instruction improves emotion recognition, and that imitation is a positive within-participant predictor of recognition accuracy in the imitation block, supports the idea of a link between motor expression and perception in the processing of emotions, which might be mediated by the mirror neuron system. However, because there was no evidence that people with higher autistic traits differ in their imitative behavior per se, their disproportional emotion recognition benefits could have arisen from indirect effects of the imitation instructions.
Affiliation(s)
- Andrea E. Kowallik
- Early Support and Counselling Center Jena, Herbert Feuchte Stiftungsverbund, 07743 Jena, Germany
- Social Potential in Autism Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
- Maike Pohl
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
- Stefan R. Schweinberger
- Early Support and Counselling Center Jena, Herbert Feuchte Stiftungsverbund, 07743 Jena, Germany
- Social Potential in Autism Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
- Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University, 07743 Jena, Germany
- Swiss Center for Affective Science, University of Geneva, 1202 Geneva, Switzerland
13
An Artificial Intelligence Based Approach Towards Inclusive Healthcare Provisioning in Society 5.0: A Perspective on Brain Disorder. Brain Inform 2021. DOI: 10.1007/978-3-030-86993-9_15.
14
The Criterion Validity of the First Year Inventory and the Quantitative-CHecklist for Autism in Toddlers: A Longitudinal Study. Brain Sci 2020; 10(10):729. PMID: 33066155; PMCID: PMC7601960; DOI: 10.3390/brainsci10100729. Received 2020-08-29; revised 2020-09-22; accepted 2020-10-12.
Abstract
Pediatric surveillance through screening procedures is needed to detect warning signs of risk for Autism Spectrum Disorder under 24 months of age and to promote early diagnosis and treatment. The main purpose of this study is to extend the literature regarding the psychometric properties of two screening tools, the First Year Inventory (FYI) and the Quantitative-CHecklist for Autism in Toddler (Q-CHAT), testing their criterion validity. They were administered during a three-wave approach involving the general population. At T1, 657 children were tested with the FYI and 36 of them were found to be at risk. At T2, 545 were tested with the Q-CHAT and 29 of them were found to be at risk. At T3, 12 out of the 36 children with a high score on the FYI and 11 out of the 29 children with a high score on the Q-CHAT were compared to 15 typically developing children. The criterion validity was tested considering the severity of the autistic symptoms, emotional/behavioral problems, and limited global functioning as criteria. Accuracy parameters were also calculated. Furthermore, we investigated which dimension of each questionnaire better predicted the aforementioned criterion. The results corroborated the hypotheses and confirmed the criterion validity of FYI and Q-CHAT.
15. Nakamura K, Ohta A, Uesaki S, Maeda M, Kawabata H. Geometric morphometric analysis of Japanese female facial shape in relation to psychological impression space. Heliyon 2020; 6:e05148. [PMID: 33072915 PMCID: PMC7549058 DOI: 10.1016/j.heliyon.2020.e05148]
Abstract
Facial appearance has important consequences in various social interactions. Previous studies have shown that although people perceive a variety of impressions from a face, these impressions may arise from a relatively small number of core dimensions in the psychological impression space (e.g., valence and dominance). However, few studies have examined which facial shape features contribute to perceptions of the core trait impression dimensions for Asian female faces. This study aimed to identify the commonalities among various facial impressions of Japanese female faces and to determine the facial shape components associated with such impressions by applying geometric morphometric (GMM) analysis. In Experiment 1 (modeling study), Japanese female faces were evaluated on 18 trait adjectives that are frequently used to describe facial appearance in daily life. We found that Japanese female facial appearance is indeed evaluated mainly on the valence and dominance dimensions. In Experiment 2 (validation study), we confirmed that all the trait impressions could be quantitatively manipulated by transforming the facial shape features associated with valence and dominance. Our results provide evidence that various facial impressions derived from these two underlying dimensions can be quantitatively manipulated by transforming facial shape using GMM techniques.
Affiliation(s)
- Koyo Nakamura
  - Faculty of Science and Engineering, Waseda University, Japan
  - Japan Society for the Promotion of Science, Japan
  - Keio Advanced Research Centers, Japan
- Anri Ohta
  - R&D, Sunstar Inc., Takatsuki, Osaka, Japan
- Hideaki Kawabata
  - Department of Psychology, Faculty of Letters, Keio University, Japan
16. de Belen RAJ, Bednarz T, Sowmya A, Del Favero D. Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019. Transl Psychiatry 2020; 10:333. [PMID: 32999273 PMCID: PMC7528087 DOI: 10.1038/s41398-020-01015-w]
Abstract
The current state of computer vision methods applied to autism spectrum disorder (ASD) research has not been well established. Increasing evidence suggests that computer vision techniques have a strong impact on autism research. The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy, and autism research in general. A systematic review of publications indexed in PubMed, IEEE Xplore and the ACM Digital Library was conducted for 2009 to 2019. Search terms included ['autis*' AND ('computer vision' OR 'behavio* imaging' OR 'behavio* analysis' OR 'affective computing')]. Results are reported according to the PRISMA statement. A total of 94 studies are included in the analysis. Eligible papers are categorised based on the potential biological/behavioural markers quantified in each study, and the computer vision approaches employed in the included papers are then described. Publicly available datasets are also reviewed in order to quickly familiarise researchers with datasets applicable to their field and to accelerate both new behavioural and technological work on autism research. Finally, future research directions are outlined. The findings of this review suggest that computer vision analysis is useful for the quantification of behavioural/biological markers, which can lead to more objective analyses in autism research.
Affiliation(s)
- Tomasz Bednarz
  - School of Art & Design, University of New South Wales, Sydney, NSW, Australia
- Arcot Sowmya
  - School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
- Dennis Del Favero
  - School of Art & Design, University of New South Wales, Sydney, NSW, Australia
17. Washington P, Park N, Srivastava P, Voss C, Kline A, Varma M, Tariq Q, Kalantarian H, Schwartz J, Patnaik R, Chrisman B, Stockham N, Paskov K, Haber N, Wall DP. Data-Driven Diagnostics and the Potential of Mobile Artificial Intelligence for Digital Therapeutic Phenotyping in Computational Psychiatry. Biol Psychiatry Cogn Neurosci Neuroimaging 2020; 5:759-769. [PMID: 32085921 PMCID: PMC7292741 DOI: 10.1016/j.bpsc.2019.11.015]
Abstract
Data science and digital technologies have the potential to transform diagnostic classification. Digital technologies enable the collection of big data, and advances in machine learning and artificial intelligence enable scalable, rapid, and automated classification of medical conditions. In this review, we summarize and categorize various data-driven methods for diagnostic classification. In particular, we focus on autism as an example of a challenging disorder due to its highly heterogeneous nature. We begin by describing the frontier of data science methods for the neuropsychiatry of autism. We discuss early signs of autism as defined by existing pen-and-paper-based diagnostic instruments and describe data-driven feature selection techniques for determining the behaviors that are most salient for distinguishing children with autism from neurologically typical children. We then describe data-driven detection techniques, particularly computer vision and eye tracking, that provide a means of quantifying behavioral differences between cases and controls. We also describe methods of preserving the privacy of collected videos and prior efforts of incorporating humans in the diagnostic loop. Finally, we summarize existing digital therapeutic interventions that allow for data capture and longitudinal outcome tracking as the diagnosis moves along a positive trajectory. Digital phenotyping of autism is paving the way for quantitative psychiatry more broadly and will set the stage for more scalable, accessible, and precise diagnostic techniques in the field.
Affiliation(s)
- Peter Washington
  - Department of Bioengineering, Stanford University, Stanford, California
- Natalie Park
  - Department of Biological Sciences, Columbia University, New York, New York
- Parishkrita Srivastava
  - Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, California
- Catalin Voss
  - Department of Computer Science, Stanford University, Stanford, California
- Aaron Kline
  - Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
  - Department of Biomedical Data Science, Stanford University, Stanford, California
- Maya Varma
  - Department of Computer Science, Stanford University, Stanford, California
- Qandeel Tariq
  - Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
  - Department of Biomedical Data Science, Stanford University, Stanford, California
- Haik Kalantarian
  - Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
  - Department of Biomedical Data Science, Stanford University, Stanford, California
- Jessey Schwartz
  - Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
  - Department of Biomedical Data Science, Stanford University, Stanford, California
- Ritik Patnaik
  - Department of Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Brianna Chrisman
  - Department of Bioengineering, Stanford University, Stanford, California
- Kelley Paskov
  - Department of Biomedical Data Science, Stanford University, Stanford, California
- Nick Haber
  - School of Education, Stanford University, Stanford, California
- Dennis P Wall
  - Department of Pediatrics (Systems Medicine), Stanford University, Stanford, California
  - Department of Biomedical Data Science, Stanford University, Stanford, California
  - Department of Psychiatry and Behavioral Sciences (by courtesy), Stanford University, Stanford, California
18. Kalantarian H, Jedoui K, Dunlap K, Schwartz J, Washington P, Husic A, Tariq Q, Ning M, Kline A, Wall DP. The Performance of Emotion Classifiers for Children With Parent-Reported Autism: Quantitative Feasibility Study. JMIR Ment Health 2020; 7:e13174. [PMID: 32234701 PMCID: PMC7160704 DOI: 10.2196/13174]
Abstract
BACKGROUND Autism spectrum disorder (ASD) is a developmental disorder characterized by deficits in social communication and interaction and by restricted and repetitive behaviors and interests. The incidence of ASD has increased in recent years; it is now estimated that approximately 1 in 40 children in the United States is affected. Due in part to this increasing prevalence, access to treatment has become constrained. Hope lies in mobile solutions that provide therapy through artificial intelligence (AI) approaches, including facial and emotion detection AI models developed by mainstream cloud providers and available directly to consumers. However, these solutions may not be sufficiently trained for use in pediatric populations. OBJECTIVE If the emotion classifiers available off-the-shelf to the general public through Microsoft, Amazon, Google, and Sighthound are well suited to the pediatric population, they could be used for developing mobile therapies targeting aspects of social communication and interaction, perhaps accelerating innovation in this space. This study aimed to test these classifiers directly with image data from children with parent-reported ASD recruited through crowdsourcing. METHODS We used a mobile game called Guess What? that challenges a child to act out a series of prompts displayed on the screen of a smartphone held on the forehead of his or her care provider. The game is intended to be a fun and engaging way for the child and parent to interact socially; for example, the parent attempts to guess what emotion the child is acting out (eg, surprised, scared, or disgusted). During a 90-second game session, as many as 50 prompts are shown while the child acts, and the video records the actions and expressions of the child. Due in part to the fun nature of the game, it is a viable way to remotely engage pediatric populations, including the autism population, through crowdsourcing. We recruited 21 children with ASD to play the game and gathered 2602 emotive frames from their game sessions. These data were used to evaluate the accuracy and performance of four state-of-the-art facial emotion classifiers and to develop an understanding of the feasibility of these platforms for pediatric research. RESULTS All classifiers performed poorly for every evaluated emotion except happy. None of the classifiers correctly labeled more than 60.18% (1566/2602) of the evaluated frames. Moreover, none of the classifiers correctly identified more than 11% (6/51) of the angry frames or 14% (10/69) of the disgust frames. CONCLUSIONS The findings suggest that commercial emotion classifiers may be insufficiently trained for use in digital approaches to autism treatment and treatment tracking. Secure, privacy-preserving methods to increase labeled training data are needed to boost the models' performance before they can be used in AI-enabled approaches to social therapy of the kind that is common in autism treatments.
Affiliation(s)
- Haik Kalantarian
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Khaled Jedoui
  - Department of Mathematics, Stanford University, Stanford, CA, United States
- Kaitlyn Dunlap
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Jessey Schwartz
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Peter Washington
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Arman Husic
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qandeel Tariq
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Michael Ning
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Aaron Kline
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Dennis Paul Wall
  - Department of Pediatrics, Stanford University, Stanford, CA, United States
  - Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
  - Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, United States
19. Petrocchi S, Levante A, Lecciso F. Systematic Review of Level 1 and Level 2 Screening Tools for Autism Spectrum Disorders in Toddlers. Brain Sci 2020; 10:180. [PMID: 32204563 PMCID: PMC7139816 DOI: 10.3390/brainsci10030180]
Abstract
The present study provides a systematic review of level 1 and level 2 screening tools for the early detection of autism in children under 24 months of age, together with an evaluation of the psychometric and measurement properties of the corresponding studies. Methods: Seven databases (e.g., Scopus, EBSCOhost Research Database) were screened and experts in the autism spectrum disorder (ASD) field were consulted; the Preferred Reporting Items for Systematic review and Meta-Analysis (PRISMA) guidelines and the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist were applied. Results: The study included 52 papers and 16 measures; most of them were questionnaires, and the Modified CHecklist for Autism in Toddlers (M-CHAT) was the most extensively tested. The measures' strengths (analytical evaluation of methodological quality according to COSMIN) and limitations (in terms of negative predictive value, positive predictive value, sensitivity, and specificity) are described. The quality of the studies, assessed by applying the COSMIN checklist, highlighted the need for further validation studies for all the measures. According to the COSMIN results, the M-CHAT, First Year Inventory (FYI), and Quantitative-CHecklist for Autism in Toddlers (Q-CHAT) appear to be promising measures that health professionals may apply systematically in the future.
Affiliation(s)
- Serena Petrocchi
  - Institute of Communication and Health, Università della Svizzera Italiana, Via Buffi 13, 6900 Lugano, Switzerland
  - Lab of Applied Psychology and Intervention, Department of History, Society and Human Studies, University of Salento, Via di Valesio, 73100 Lecce, Italy
  - Applied Research Division for Cognitive and Psychological Science, IRCCS European Institute of Oncology, Via Ripamonti 435, 20141 Milano, Italy
- Annalisa Levante
  - Lab of Applied Psychology and Intervention, Department of History, Society and Human Studies, University of Salento, Via di Valesio, 73100 Lecce, Italy
  - Department of History, Society and Human Studies, University of Salento, Via di Valesio, 73100 Lecce, Italy
- Flavia Lecciso
  - Lab of Applied Psychology and Intervention, Department of History, Society and Human Studies, University of Salento, Via di Valesio, 73100 Lecce, Italy
  - Department of History, Society and Human Studies, University of Salento, Via di Valesio, 73100 Lecce, Italy
20. Analysis of Facial Information for Healthcare Applications: A Survey on Computer Vision-Based Approaches. Information 2020. [DOI: 10.3390/info11030128]
Abstract
This paper gives an overview of cutting-edge approaches to facial cue analysis in the healthcare area. The survey is not limited to global face analysis; it also covers methods related to local cues (e.g., the eyes). A research taxonomy is introduced by dividing the face into its main features: eyes, mouth, muscles, skin, and shape. For each facial feature, the computer vision-based tasks that analyze it and the related healthcare goals that could be pursued are detailed.
21. Real-Time Facial Affective Computing on Mobile Devices. Sensors 2020; 20:870. [PMID: 32041323 PMCID: PMC7039298 DOI: 10.3390/s20030870]
Abstract
Convolutional Neural Networks (CNNs) have become one of the state-of-the-art methods for various computer vision and pattern recognition tasks, including facial affective computing. Although impressive results have been obtained in facial affective computing using CNNs, their computational complexity has also increased significantly, meaning that high-performance hardware is typically indispensable. Most existing CNNs are thus not suitable for mobile devices, where storage, memory, and computational power are limited. In this paper, we focus on the design and implementation of CNNs on mobile devices for real-time facial affective computing tasks. We propose a lightweight CNN architecture that balances performance against computational complexity. The experimental results show that the proposed architecture achieves high performance while retaining low computational complexity compared with state-of-the-art methods. We demonstrate the feasibility of the architecture in terms of speed, memory, and storage consumption by implementing a real-time facial affective computing application on an actual mobile device.
22. Grossard C, Dapogny A, Cohen D, Bernheim S, Juillet E, Hamel F, Hun S, Bourgeois J, Pellerin H, Serret S, Bailly K, Chaby L. Children with autism spectrum disorder produce more ambiguous and less socially meaningful facial expressions: an experimental study using random forest classifiers. Mol Autism 2020; 11:5. [PMID: 31956394 PMCID: PMC6958757 DOI: 10.1186/s13229-020-0312-2]
Abstract
Background Computer vision combined with human annotation could offer a novel method for exploring facial expression (FE) dynamics in children with autism spectrum disorder (ASD). Methods We recruited 157 children with typical development (TD) and 36 children with ASD in Paris and Nice to perform two experimental tasks producing FEs with emotional valence. FEs were explored both through judges' ratings and with random forest (RF) classifiers. To do so, we located a set of 49 facial landmarks in the task videos, generated a set of geometric and appearance features, and used RF classifiers to explore how children with ASD differed from TD children when producing FEs. Results Using multivariate models that included other factors known to predict FEs (age, gender, intellectual quotient, emotion subtype, cultural background), ratings from expert raters showed that children with ASD had more difficulty producing FEs than TD children. In addition, when we explored how the RF classifiers performed, we found that the classification tasks, except for sadness, were highly accurate and that the RF classifiers needed more facial landmarks to achieve the best classification for children with ASD. Confusion matrices showed that when the RF classifiers were tested on children with ASD, anger was often confused with happiness. Limitations The sample size of the group of children with ASD was smaller than that of the group of TD children. By using several control calculations, we tried to compensate for this limitation. Conclusion Children with ASD have more difficulty producing socially meaningful FEs. The computer vision methods we used to explore FE dynamics also highlight that the production of FEs by children with ASD carries more ambiguity.
Affiliation(s)
- Charline Grossard
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Arnaud Dapogny
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- David Cohen
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Sacha Bernheim
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Estelle Juillet
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Fanny Hamel
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Hugues Pellerin
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
- Kevin Bailly
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
- Laurence Chaby
  - Service de Psychiatrie de l'Enfant et de l'Adolescent, GH Pitié-Salpêtrière Charles Foix, APHP.6, Paris, France
  - Institut des Systèmes Intelligents et de Robotique, Sorbonne Université, ISIR CNRS UMR 7222, Paris, France
  - Institut de Psychologie, Université de Paris, 92100 Boulogne-Billancourt, France
23. Kowallik AE, Schweinberger SR. Sensor-Based Technology for Social Information Processing in Autism: A Review. Sensors 2019; 19:4787. [PMID: 31689906 PMCID: PMC6864871 DOI: 10.3390/s19214787]
Abstract
The prevalence of autism spectrum disorders (ASD) has increased strongly over the past decades, and so has the demand for adequate behavioral assessment and support for persons affected by ASD. Here we provide a review of original research that used sensor technology for an objective assessment of social behavior, either to assist the assessment of autism or to support intervention for people with autism. Considering rapid technological progress, we focus (1) on studies published within the last 10 years (2009–2019), (2) on contact- and irritation-free sensor technology that does not constrain natural movement and interaction, and (3) on sensory input from the face, the voice, or body movements. We conclude that sensor technology has already demonstrated great potential for improving both behavioral assessment and interventions in autism spectrum disorders. We also discuss selected examples of recent theoretical questions related to the understanding of psychological changes and potentials in autism. Beyond its applied potential, we argue that sensor technology, when implemented by appropriate interdisciplinary teams, may even contribute to such theoretical issues in understanding autism.
Affiliation(s)
- Andrea E Kowallik
  - Early Support and Counselling Center Jena, Herbert Feuchte Stiftungsverbund, 07743 Jena, Germany
  - Social Potential in Autism Research Unit, Friedrich Schiller University, 07743 Jena, Germany
  - Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
- Stefan R Schweinberger
  - Early Support and Counselling Center Jena, Herbert Feuchte Stiftungsverbund, 07743 Jena, Germany
  - Social Potential in Autism Research Unit, Friedrich Schiller University, 07743 Jena, Germany
  - Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany
  - Michael Stifel Center Jena for Data-Driven and Simulation Science, Friedrich Schiller University, 07743 Jena, Germany
  - Swiss Center for Affective Science, University of Geneva, 1202 Geneva, Switzerland
24. Computational Analysis of Deep Visual Data for Quantifying Facial Expression Production. Applied Sciences 2019. [DOI: 10.3390/app9214542]
Abstract
The computational analysis of facial expressions is an emerging research topic that could overcome the limitations of human perception and provide quick, objective outcomes in the assessment of neurodevelopmental disorders (e.g., Autism Spectrum Disorders, ASD). Unfortunately, there have been only a few attempts to quantify facial expression production; most of the scientific literature aims at the easier task of recognizing whether a facial expression is present or not. Some attempts to address this challenging task exist, but they do not provide a comprehensive study based on the comparison between human and automatic outcomes in quantifying children's ability to produce basic emotions. Furthermore, these works do not exploit the latest solutions in computer vision and machine learning, and they generally focus only on a group of individuals that is homogeneous in terms of cognitive capabilities. To fill this gap, in this paper some advanced computer vision and machine learning strategies are integrated into a framework that computationally analyzes how both ASD and typically developing children produce facial expressions. The framework locates and tracks a number of landmarks (virtual electromyography sensors) with the aim of monitoring the facial muscle movements involved in facial expression production. The output of these virtual sensors is then fused to model each individual's ability to produce facial expressions. The gathered computational outcomes have been correlated with the evaluations provided by psychologists, and evidence is given showing how the proposed framework could be effectively exploited to analyze in depth the emotional competence of children with ASD in producing facial expressions.
25.
Abstract
The proposed method has 30 streams, i.e., 15 spatial streams and 15 temporal streams, with each spatial stream corresponding to a temporal stream; this work therefore relates to the concept of symmetry. Classifying video-based facial expressions is a difficult task owing to the gap between visual descriptors and emotions. In order to bridge this gap, a new video descriptor for facial expression recognition is presented that aggregates spatial and temporal convolutional features across the entire extent of a video. The designed framework integrates the 30 streams with a trainable spatial-temporal feature aggregation layer and is end-to-end trainable for video-based facial expression recognition. Thus, the framework can effectively avoid overfitting to the limited emotional video datasets, and the trainable strategy can learn to better represent an entire video. Different schemes for pooling spatial-temporal features are investigated, and the spatial and temporal streams are best aggregated by the proposed method. Extensive experiments on two public databases, BAUM-1s and eNTERFACE05, show that the framework achieves promising performance and outperforms state-of-the-art strategies.