1. Important Preliminary Insights for Designing Successful Communication between a Robotic Learning Assistant and Children with Autism Spectrum Disorder in Germany. Robotics. 2022. doi:10.3390/robotics11060141
Abstract
Early therapeutic intervention programs help children diagnosed with Autism Spectrum Disorder (ASD) to improve their socio-emotional and functional skills. To relieve the children’s caregivers while ensuring that the children are adequately supported in their training exercises, new technologies may offer suitable solutions. This study investigates the potential of a robotic learning assistant that is planned to monitor the children’s state of engagement and to intervene with appropriate motivational nudges when necessary. To analyze stakeholder requirements, interviews were conducted with parents as well as therapists of children with ASD. Besides a generally positive attitude towards the use of new technologies, we received some important insights for the design of the robot and its interaction with the children. One strongly accentuated aspect was the robot’s adequate and context-specific communication behavior, which we plan to address via an AI-based engagement detection system. Further aspects include customizability, adaptability, and variability of the robot’s behavior, which should also not be too distracting while remaining highly predictable.
2. Zhang K, Yuan Y, Chen J, Wang G, Chen Q, Luo M. Eye Tracking Research on the Influence of Spatial Frequency and Inversion Effect on Facial Expression Processing in Children with Autism Spectrum Disorder. Brain Sci. 2022;12(2):283. PMID: 35204046; PMCID: PMC8870542. doi:10.3390/brainsci12020283
Abstract
Facial expression processing depends mainly on whether the facial features related to expressions can be fully acquired, and on whether appropriate processing strategies can be adopted under different conditions. Children with autism spectrum disorder (ASD) have difficulty accurately recognizing facial expressions and responding appropriately, which is regarded as an important cause of their social difficulties. This study used eye tracking technology to explore the internal processing mechanism of facial expressions in children with ASD under the influence of spatial frequency and inversion effects, with a view to improving their social difficulties. The facial expression recognition rate and the eye tracking characteristics of children with ASD and typically developing (TD) children on facial areas of interest were recorded and analyzed. The multi-factor mixed experiment showed that the facial expression recognition rate of children with ASD under all conditions was significantly lower than that of TD children. TD children paid more visual attention to the eyes area, whereas children with ASD preferred the features of the mouth area and lacked visual attention to, and processing of, the eyes area. When the face was inverted, TD children showed the inversion effect under all three spatial frequency conditions, manifested as a significant decrease in expression recognition rate. Children with ASD, however, showed the inversion effect only under the low-spatial-frequency (LSF) condition, indicating that they mainly used a featural processing method and retained some capacity for configural processing under the LSF condition. The eye tracking results showed that when the face was inverted or facial feature information was weakened, both children with ASD and TD children adjusted their facial expression processing strategies accordingly, increasing visual attention to, and information processing of, their preferred areas: the fixation counts and fixation durations of TD children on the eyes area increased significantly, while the fixation durations of children with ASD on the mouth area increased significantly. The results of this study provide theoretical and practical support for facial expression interventions in children with ASD.
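The fixation metrics reported above (fixation counts and total fixation duration per area of interest, AOI) are simple aggregates over detected fixations. A minimal Python sketch, assuming hypothetical fixation records that an eye tracker's fixation-detection stage has already labeled with an AOI:

```python
from collections import defaultdict

def aoi_metrics(fixations):
    """Aggregate fixation count and total duration (ms) per area of interest.

    `fixations` is a list of (aoi_label, duration_ms) tuples, one per
    detected fixation.
    """
    counts = defaultdict(int)
    durations = defaultdict(float)
    for aoi, dur in fixations:
        counts[aoi] += 1
        durations[aoi] += dur
    return dict(counts), dict(durations)

# Hypothetical example: a viewer fixating mostly on the mouth region.
fixations = [("mouth", 320.0), ("eyes", 180.0), ("mouth", 410.0), ("nose", 90.0)]
counts, durations = aoi_metrics(fixations)
# counts -> {"mouth": 2, "eyes": 1, "nose": 1}
```

Group comparisons such as those in the study would then run statistics over these per-participant, per-AOI aggregates.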
Affiliation(s)
- Kun Zhang, Yishuang Yuan, Jingying Chen (correspondence), Qian Chen, Meijuan Luo: National Engineering Research Center for E-Learning and National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Guangshuai Wang: School of Computer Science, Wuhan University, Wuhan 430072, China
3. Zhang Q, Wu R, Zhu S, Le J, Chen Y, Lan C, Yao S, Zhao W, Kendrick KM. Facial emotion training as an intervention in autism spectrum disorder: A meta-analysis of randomized controlled trials. Autism Res. 2021;14:2169-2182. PMID: 34286900. doi:10.1002/aur.2565
Abstract
A large number of computer-based training programs have been developed to help individuals with autism spectrum disorder (ASD) improve their facial emotion recognition ability, as well as their social skills. However, it is unclear to what extent these facial emotion training programs produce beneficial, long-lasting, and generalizable results. Using standard meta-analytic techniques, we investigated the effects of facial emotion training, including generalization and maintenance, across randomized controlled trials comprising a total of 595 individuals with ASD. Our findings revealed that the intervention produced a robust improvement in emotion recognition for individuals receiving training compared with controls. While there was also some evidence for generalization of training effects, the small number of studies that conducted follow-ups and assessed social skills reported that improvements were not maintained, and there was no evidence for general improvement in social skills. Overall, the analysis revealed a medium effect size for training improvement, indicating that facial emotion training may be an effective method for enhancing emotion recognition in ASD, although more studies are required to assess maintenance of effects and possible general improvements in social skills.

Lay summary: Facial emotion training may be a way to help improve emotion recognition in autism spectrum disorder (ASD); however, robust empirical support for its efficacy has not been sufficiently established. Here, we conducted a meta-analysis of previous studies to summarize the effects of facial emotion training in ASD. Our results show that the training produces a robust improvement in subsequent emotion recognition, while maintenance and generalization effects still need further investigation. To date, no experimentally verified improvements in social skills have been reported.
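The pooled effect size behind a meta-analysis like this one is, in its simplest fixed-effect form, an inverse-variance weighted mean of the per-study effects. A sketch under that assumption (the per-study Hedges' g values and variances below are illustrative, not the paper's data):

```python
def pooled_effect(effects, variances):
    """Fixed-effect pooled estimate: inverse-variance weighted mean.

    Each study contributes weight 1/variance; the pooled standard error
    is sqrt(1 / sum of weights).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return pooled, se

# Illustrative per-study effect sizes and sampling variances.
g, se = pooled_effect([0.4, 0.7, 0.5], [0.05, 0.08, 0.04])
```

Random-effects models (as typically used when trials are heterogeneous) add a between-study variance component to each weight, but the weighted-mean structure is the same.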
Affiliation(s)
- Qianqian Zhang, Renjing Wu, Siyu Zhu, Jiao Le, Yuanshu Chen, Chunmei Lan, Shuxia Yao, Weihua Zhao, Keith M Kendrick: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for NeuroInformation, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, China
4. Webster PJ, Wang S, Li X. Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front Psychol. 2021;12:653112. PMID: 34305720; PMCID: PMC8300960. doi:10.3389/fpsyg.2021.653112
Abstract
Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER, namely facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE has been studied less, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions, and how others perceive facial expressions of emotion in those with ASD has remained an under-researched area. Finally, we propose a method for teaching FER [the FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate that autism interventionists use FER stimuli developed primarily for research purposes, to facilitate the incorporation of well-controlled stimuli into FER teaching and to bridge the gap between intervention and research in this area.
Affiliation(s)
- Paula J Webster, Shuo Wang: Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
- Xin Li: Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
5. Caine JA, Klein B, Edwards SL. The Impact of a Novel Mimicry Task for Increasing Emotion Recognition in Adults with Autism Spectrum Disorder and Alexithymia: Protocol for a Randomized Controlled Trial. JMIR Res Protoc. 2021;10:e24543. PMID: 34170257; PMCID: PMC8386358. doi:10.2196/24543
Abstract
Background: Impaired facial emotion expression recognition (FEER) has typically been considered a correlate of autism spectrum disorder (ASD). The alexithymia hypothesis now suggests that this emotion-processing problem is instead related to alexithymia, which frequently co-occurs with ASD. Combining predictive coding theories of ASD with simulation theories of emotion recognition suggests that facial mimicry may improve the training of FEER in ASD and alexithymia.

Objective: This study aims to evaluate a novel mimicry task to improve FEER in adults with and without ASD and alexithymia. Additionally, it aims to determine the contributions of alexithymia and ASD to FEER ability and to assess which of these two populations benefit from the training task.

Methods: Recruitment will take place primarily through an ASD community group, with an emphasis on snowball recruiting. Sixty-four consenting adults will be included, equally divided between participants with and without ASD. Participants will be screened online using the Kessler Psychological Distress Scale (K-10; cut-off score of 22), the Autism Spectrum Quotient (AQ-10), and the Toronto Alexithymia Scale (TAS-20), followed by a clinical interview with a provisional psychologist at the Federation University psychology clinic. The clinical interview will include assessment of ability, anxiety, and depression, as well as discussion of past ASD diagnosis and confirmatory administration of the Autism Mental Status Exam (AMSE). Following the clinical interview, participants will complete the Bermond-Vorst Alexithymia Questionnaire (BVAQ) and then undertake a baseline assessment of FEER. Consenting participants will then be assigned, using a permuted block randomization method, to either the control task condition or the mimicry task condition. A brief measure of satisfaction with the task and a debriefing session will conclude the study.

Results: The study has Federation University Human Research Ethics Committee approval and is registered with the Australian New Zealand Clinical Trials Registry. Participant recruitment is predicted to begin in the third quarter of 2021.

Conclusions: This study will be the first to evaluate the use of a novel facial mimicry task to increase FEER in adults with ASD and alexithymia. If efficacious, this task could prove useful as a cost-effective adjunct intervention that could be used at home, removing barriers to entry. The study will also explore the effectiveness of this task in people without ASD, with ASD, and with alexithymia.

Trial Registration: Australian New Zealand Clinical Trials Registry ACTRN12619000705189p; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377455. International Registered Report Identifier (IRRID): PRR1-10.2196/24543
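The permuted block randomization named in the Methods keeps the two arms balanced after every block while hiding the order within each block. A sketch of the idea; the block size, arm labels, and seed here are illustrative assumptions, not taken from the protocol:

```python
import random

def permuted_block_assignment(n_participants, block_size=4,
                              arms=("control", "mimicry"), seed=7):
    """Assign participants to arms in shuffled blocks.

    Each block contains every arm equally often, so arm sizes can never
    drift apart by more than half a block.
    """
    assert block_size % len(arms) == 0, "block size must be a multiple of arm count"
    rng = random.Random(seed)
    assignment = []
    while len(assignment) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # permute within the block
        assignment.extend(block)
    return assignment[:n_participants]

# 64 participants, as in the protocol's planned sample.
assignment = permuted_block_assignment(64)
```

With 64 participants and a block size that divides the sample evenly, each arm receives exactly 32 participants.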
Affiliation(s)
- Joshua A Caine: School of Science, Psychology and Sport; Deputy Vice-Chancellor of Research & Innovation Portfolio; Health Innovation and Transformation Centre; Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
- Britt Klein: Deputy Vice-Chancellor of Research & Innovation Portfolio; Health Innovation and Transformation Centre; Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
- Stephen L Edwards: School of Science, Psychology and Sport; Health Innovation and Transformation Centre; Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
6. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol. 2021;22:365-386. PMID: 34014416; PMCID: PMC8329114. doi:10.1007/s10162-021-00789-0
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that arise from this combination of information and shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the current state of understanding of this topic. Following a general introduction, the review is divided into five sections. The first section reviews the psychophysical evidence in humans regarding vision's influence on audition, distinguishing vision's ability to enhance versus alter auditory performance and perception. Three examples then highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models built on the available psychophysical data that seek to provide greater mechanistic insight into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches to understand how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures, and speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
7. Briot K, Pizano A, Bouvard M, Amestoy A. New Technologies as Promising Tools for Assessing Facial Emotion Expressions Impairments in ASD: A Systematic Review. Front Psychiatry. 2021;12:634756. PMID: 34025469; PMCID: PMC8131507. doi:10.3389/fpsyt.2021.634756
Abstract
The abilities to recognize and express emotions from facial expressions are essential for successful social interaction. Facial Emotion Recognition (FER) and Facial Emotion Expression (FEE), both of which appear to be impaired in Autism Spectrum Disorder (ASD) and contribute to socio-communicative difficulties, figure in the diagnostic criteria for ASD. Only a few studies have focused on FEE processing, and the rare behavioral studies of FEE in ASD have yielded mixed results. Here, we review studies comparing the production of FEEs between participants with ASD and non-ASD control subjects, with a particular focus on the use of automatic facial expression analysis software. A systematic literature search in accordance with the PRISMA statement identified 20 reports published up to August 2020 concerning the use of new technologies to evaluate both spontaneous and voluntary FEEs in participants with ASD. Overall, the results highlight the importance of considering socio-demographic factors and psychiatric co-morbidities, which may explain the previously inconsistent findings, particularly regarding quantitative data on spontaneous facial expressions. There is also reported evidence that FEEs in individuals with ASD are inadequate relative to the expected emotion, with lower quality and coordination of facial muscular movements. Spatial and kinematic approaches to characterizing the synchrony, symmetry, and complexity of facial muscle movements thus offer clues to identifying and exploring promising new diagnostic targets. These findings support the hypothesis that there may be mismatches between mental representations and the production of FEEs themselves in ASD. Such considerations are in line with a Facial Feedback Hypothesis deficit in ASD as part of the Broken Mirror Theory, with the results suggesting impairments of the neural sensory-motor systems involved in processing emotional information and ensuring embodied representations of emotions, which are the basis of human empathy. In conclusion, new technologies are promising tools for evaluating the production of FEEs in individuals with ASD; controlled studies involving larger samples of patients, in which possible confounding factors are considered, should be conducted in order to better understand and counter the difficulties in global emotional processing in ASD.
Affiliation(s)
- Kellen Briot, Adrien Pizano, Manuel Bouvard, Anouck Amestoy: Medical Sciences Department, University of Bordeaux; Pôle Universitaire de Psychiatrie de l'Enfant et de l'Adolescent, Centre Hospitalier Charles-Perrens; Aquitaine Institute for Cognitive and Integrative Neuroscience (INCIA), UMR 5287, CNRS, Bordeaux, France
8. de Belen RAJ, Bednarz T, Sowmya A, Del Favero D. Computer vision in autism spectrum disorder research: a systematic review of published studies from 2009 to 2019. Transl Psychiatry. 2020;10:333. PMID: 32999273; PMCID: PMC7528087. doi:10.1038/s41398-020-01015-w
Abstract
The current state of computer vision methods applied to autism spectrum disorder (ASD) research has not been well established. Increasing evidence suggests that computer vision techniques have a strong impact on autism research. The primary objective of this systematic review is to examine how computer vision analysis has been useful in ASD diagnosis, therapy, and autism research in general. A systematic review of publications from 2009 to 2019 indexed on PubMed, IEEE Xplore, and the ACM Digital Library was conducted. Search terms included ['autis*' AND ('computer vision' OR 'behavio* imaging' OR 'behavio* analysis' OR 'affective computing')]. Results are reported according to the PRISMA statement. A total of 94 studies are included in the analysis. Eligible papers are categorised based on the potential biological/behavioural markers quantified in each study, and the different computer vision approaches employed in the included papers are then described. Publicly available datasets are also reviewed, to rapidly familiarise researchers with datasets applicable to their field and to accelerate both new behavioural and new technological work on autism research. Finally, future research directions are outlined. The findings of this review suggest that computer vision analysis is useful for quantifying behavioural/biological markers, which can in turn support more objective analysis in autism research.
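The boolean search string quoted above can be applied to candidate records by mapping the database wildcard `*` onto a regex. A small illustrative sketch of that screening logic (this is an assumption about how such a query behaves, not the authors' actual screening tool):

```python
import re

# The review's query: autis* AND (computer vision OR behavio* imaging
# OR behavio* analysis OR affective computing), with '*' mapped to \w*.
AUTIS = re.compile(r"autis\w*", re.IGNORECASE)
TOPIC = re.compile(
    r"computer vision|behavio\w* imaging|behavio\w* analysis|affective computing",
    re.IGNORECASE,
)

def matches_query(text):
    """True if a record's text satisfies the boolean query."""
    return bool(AUTIS.search(text) and TOPIC.search(text))

matches_query("Computer vision for autism screening")  # True
```

Real databases apply such queries to structured fields (title, abstract, keywords) with their own stemming rules, so hit counts will differ from this naive full-text version.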
Affiliation(s)
- Tomasz Bednarz, Dennis Del Favero: School of Art & Design, University of New South Wales, Sydney, NSW, Australia
- Arcot Sowmya: School of Computer Science and Engineering, University of New South Wales, Sydney, NSW, Australia
9. Wieckowski AT, White SW. Attention Modification to Attenuate Facial Emotion Recognition Deficits in Children with Autism: A Pilot Study. J Autism Dev Disord. 2020;50:30-41. PMID: 31520245; PMCID: PMC11034769. doi:10.1007/s10803-019-04223-6
Abstract
Diminished attending to faces may contribute to the impairments in emotion recognition and expression in autism spectrum disorder (ASD). The current study evaluated the acceptability, feasibility, and preliminary efficacy of an attention modification intervention designed to attenuate deficits in facial emotion recognition (FER). During the 10-session experimental treatment, children (n = 8) with ASD watched dynamic videos of people expressing different emotions with the facial features highlighted to guide children's attention. Children and their parents generally rated the treatment as acceptable and helpful. Although FER improvement was not apparent on task-based measures, parents reported slight improvements and decreased socioemotional problems following treatment. Results suggest that further research on visual attention retraining for ASD, within an experimental therapeutic program, may be promising.
Affiliation(s)
- Susan W White: Department of Psychology, Virginia Tech, Blacksburg, VA 24061, USA; Center for Youth Development and Intervention, The University of Alabama, 200 Hackberry Lane, 101 McMillan Bldg., Box 870348, Tuscaloosa, AL 35487, USA
10. Capriola-Hall NN, Wieckowski AT, Swain D, Aly S, Youssef A, Abbott AL, White SW. Group Differences in Facial Emotion Expression in Autism: Evidence for the Utility of Machine Classification. Behav Ther. 2019;50:828-838. PMID: 31208691. doi:10.1016/j.beth.2018.12.004
Abstract
Effective social communication relies, in part, on accurate nonverbal expression of emotion. To evaluate the nature of facial emotion expression (FEE) deficits in children with autism spectrum disorder (ASD), we compared 20 youths with ASD to a sample of typically developing (TD) youth (n = 20) using a machine-based classifier of FEE. Results indicate group differences in FEE for overall accuracy across emotions. In particular, a significant group difference in accuracy of FEE was observed when participants were prompted by a video of a human expressing an emotion, F(2, 36) = 4.99, p = .032, η2 = .12. Specifically, youth with ASD made significantly more errors in FEE relative to TD youth. Findings support continued refinement of machine-based approaches to assess and potentially remediate FEE impairment in youth with ASD.
11. Leo M, Carcagnì P, Distante C, Spagnolo P, Mazzeo PL, Rosato AC, Petrocchi S, Pellegrino C, Levante A, De Lumè F, Lecciso F. Computational Assessment of Facial Expression Production in ASD Children. Sensors (Basel). 2018;18:3993. PMID: 30453518; PMCID: PMC6263710. doi:10.3390/s18113993
Abstract
In this paper, a computational approach is proposed and put into practice to assess the capability of children diagnosed with Autism Spectrum Disorder (ASD) to produce facial expressions. The proposed approach is based on computer vision components working on sequences of images acquired by an off-the-shelf camera under unconstrained conditions. Action unit intensities are estimated by analyzing local appearance, and the resulting estimates are then regularized by exploiting temporal and geometrical relationships learned by Convolutional Neural Networks. To cope with stereotyped movements and to highlight even subtle voluntary movements of facial muscles, a personalized, contextual statistical model of the non-emotional face is formulated and used as a reference. Experimental results demonstrate how the proposed pipeline improves the analysis of facial expressions produced by children with ASD. A comparison of the system's outputs with evaluations performed by psychologists on the same group of children shows how this quantitative analysis of the children's abilities goes beyond traditional qualitative ASD assessment/diagnosis protocols, whose outcomes are limited by human capacity to observe and interpret multi-cue behaviors such as facial expressions.
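The personalized neutral-face baseline described in this abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' pipeline: it assumes per-frame action-unit (AU) intensities are already available (the paper estimates them with CNNs from local appearance), and the function names, smoothing window, and z-score formulation are hypothetical simplifications.

```python
from statistics import mean, pstdev

def personalized_baseline(neutral_frames):
    """Per-AU mean and std over frames of the child's non-emotional (neutral) face."""
    n_aus = len(neutral_frames[0])
    mu = [mean(f[i] for f in neutral_frames) for i in range(n_aus)]
    # Floor the std so a perfectly still AU does not cause division by zero.
    sigma = [max(pstdev([f[i] for f in neutral_frames]), 1e-6) for i in range(n_aus)]
    return mu, sigma

def smooth(track, k=3):
    """Moving-average temporal regularization of one AU's per-frame intensities."""
    half = k // 2
    return [mean(track[max(0, t - half):t + half + 1]) for t in range(len(track))]

def deviation_scores(frames, mu, sigma):
    """Z-scored deviation from the personal neutral model, per frame and per AU;
    large values flag movements that stand out against the child's own baseline."""
    n_aus = len(mu)
    tracks = [smooth([f[i] for f in frames]) for i in range(n_aus)]
    return [[(tracks[i][t] - mu[i]) / sigma[i] for i in range(n_aus)]
            for t in range(len(frames))]
```

Normalizing against the child's own neutral face, rather than a population norm, is what lets the deviation scores separate subtle voluntary expression from stereotyped idiosyncratic movement.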
Affiliation(s)
- Marco Leo, Pierluigi Carcagnì, Cosimo Distante, Paolo Spagnolo, Pier Luigi Mazzeo
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, via Monteroni, 73100 Lecce, Italy.
- Serena Petrocchi
- Institute of Communication and Health, USI, Via Buffi 6, 6900 Lugano, Switzerland.
- Annalisa Levante, Filomena De Lumè, Flavia Lecciso
- Dipartimento di Storia, Società e Studi sull'Uomo, University of Salento, Studium 2000-Edificio 5-Via di Valesio, 73100 Lecce, Italy.
|
12
|
McKay D. Introduction to the Special Issue: Integration of Technological Advances in Cognitive-Behavior Therapy. Behav Ther 2018; 49:851-852. [PMID: 30316484] [DOI: 10.1016/j.beth.2018.08.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 08/02/2018] [Revised: 08/03/2018] [Accepted: 08/03/2018] [Indexed: 11/19/2022]
Abstract
A wide range of technological approaches has been adopted for assessment and intervention in cognitive-behavior therapy (CBT). The articles that comprise this special issue cover a diversity of areas, including motion-tracking devices, ecological momentary assessment, facial recognition software providing rapid feedback, audio- and tablet-administered CBT procedures, a web-based acceptance program for stress reduction, and videoconferencing for the delivery of anxiety treatment in youth. Continued technological progress is expected to yield further advances in CBT delivery.
|