1. Benabderrahmane B, Gharzouli M, Benlecheb A. A novel multi-modal model to assist the diagnosis of autism spectrum disorder using eye-tracking data. Health Inf Sci Syst 2024; 12:40. [PMID: 39105163 PMCID: PMC11297859 DOI: 10.1007/s13755-024-00299-2]
Abstract
Background and objective Timely and accurate detection of Autism Spectrum Disorder (ASD) is essential for early intervention and improved patient outcomes. This study aims to harness the power of machine learning (ML) techniques to improve ASD detection by incorporating temporal eye-tracking data. We developed a novel ML model to leverage eye scan paths, sequences of eye-movement distances, and sequences of fixation durations, enhancing the temporal aspect of the analysis for more effective ASD identification. Methods We utilized a dataset of eye-tracking data without augmentation to train our ML model, which consists of a CNN-GRU-ANN architecture. The model was trained using gaze maps, the sequences of distances between eye fixations, and durations of fixations and saccades. Additionally, we employed a validation dataset to assess the model's performance and compare it with other works. Results Our ML model demonstrated superior performance in ASD detection compared to the VGG-16 model. By incorporating temporal information from eye-tracking data, our model achieved higher accuracy, precision, and recall. The novel addition of sequence-based features allowed our model to effectively distinguish between ASD and typically developing individuals, achieving an impressive precision value of 93.10% on the validation dataset. Conclusion This study presents an ML-based approach to ASD detection that incorporates temporal eye-tracking data. Our findings highlight the potential of temporal analysis for improved ASD detection and provide a promising direction for further advancements in the field of eye-tracking-based diagnosis and intervention for neurodevelopmental disorders.
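For orientation, a minimal sketch of a CNN-GRU fusion network of the kind described in this abstract is shown below; the input shapes, layer widths, and training settings are illustrative assumptions and do not reproduce the authors' published architecture.

```python
# Minimal sketch of a CNN-GRU-ANN style fusion model for eye-tracking data.
# Input shapes and layer widths are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fusion_model(map_shape=(64, 64, 1), seq_len=100, seq_features=2):
    # CNN branch: spatial gaze map (e.g., a fixation heat map).
    map_in = layers.Input(shape=map_shape, name="gaze_map")
    x = layers.Conv2D(16, 3, activation="relu")(map_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # GRU branch: temporal sequence of fixation durations and saccade distances.
    seq_in = layers.Input(shape=(seq_len, seq_features), name="fixation_sequence")
    s = layers.GRU(32)(seq_in)

    # ANN head: fuse both branches and classify ASD vs. typically developing.
    h = layers.Concatenate()([x, s])
    h = layers.Dense(32, activation="relu")(h)
    out = layers.Dense(1, activation="sigmoid", name="asd_probability")(h)

    model = Model(inputs=[map_in, seq_in], outputs=out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model
```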
Affiliation(s)
- Brahim Benabderrahmane
- MISC Laboratory, University of Abdelhamid Mehri Constantine 2, 25000 Constantine, Algeria
- Mohamed Gharzouli
- MISC Laboratory, University of Abdelhamid Mehri Constantine 2, 25000 Constantine, Algeria
- Amira Benlecheb
- MISC Laboratory, University of Abdelhamid Mehri Constantine 2, 25000 Constantine, Algeria
2. Thorsson M, Galazka MA, Åsberg Johnels J, Hadjikhani N. A novel end-to-end dual-camera system for eye gaze synchrony assessment in face-to-face interaction. Atten Percept Psychophys 2024; 86:2221-2230. [PMID: 37099200 PMCID: PMC11480169 DOI: 10.3758/s13414-023-02679-4]
Abstract
Quantification of face-to-face interaction can provide highly relevant information in cognitive and psychological science research. Current commercial glint-dependent solutions suffer from several disadvantages and limitations when applied in face-to-face interaction, including data loss, parallax errors, the inconvenience and distracting effect of wearables, and/or the need for several cameras to capture each person. Here we present a novel eye-tracking solution, consisting of a dual-camera system used in conjunction with an individually optimized deep learning approach that aims to overcome some of these limitations. Our data show that this system can accurately classify gaze location within different areas of the face of two interlocutors, and capture subtle differences in interpersonal gaze synchrony between two individuals during a (semi-)naturalistic face-to-face interaction.
Affiliation(s)
- Max Thorsson
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
- Martyna A Galazka
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
3. Sá RODS, Michelassi GDC, Butrico DDS, Franco FDO, Sumiya FM, Portolese J, Brentani H, Nunes FLS, Machado-Lima A. Enhancing ensemble classifiers utilizing gaze tracking data for autism spectrum disorder diagnosis. Comput Biol Med 2024; 182:109184. [PMID: 39353297 DOI: 10.1016/j.compbiomed.2024.109184]
Abstract
PROBLEM Diagnosing Autism Spectrum Disorder (ASD) remains a significant challenge, especially in regions where access to specialists is limited. Computer-based approaches offer a promising solution to make diagnosis more accessible. Eye tracking has emerged as a valuable technique in aiding the diagnosis of ASD. Typically, individuals' gaze patterns are monitored while they view videos designed according to established paradigms. In a previous study, we developed a method to classify individuals as having ASD or Typical Development (TD) by processing eye-tracking data using Random Forest ensembles, with a focus on a paradigm known as joint attention. AIM This article aims to enhance our previous work by evaluating alternative algorithms and ensemble strategies, with a particular emphasis on the role of anticipation features in diagnosis. METHODS Utilizing stimuli based on joint attention and the concept of "floating regions of interest" from our earlier research, we identified features that indicate gaze anticipation or delay. We then tested seven class balancing strategies, applied seven dimensionality reduction algorithms, and combined them with five different classifier induction algorithms. Finally, we employed the stacking technique to construct an ensemble model. RESULTS Our findings showed a significant improvement, achieving an F1-score of 95.5%, compared to the 82% F1-score from our previous work, through the use of a heterogeneous stacking meta-classifier composed of diverse induction algorithms. CONCLUSION While there remains an opportunity to explore new algorithms and features, the approach proposed in this article has the potential to be applied in clinical practice, contributing to increased accessibility to ASD diagnosis.
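A rough illustration of a heterogeneous stacking ensemble of the kind described above, built with scikit-learn. The chosen base learners, the PCA step, and the use of class weights in place of the authors' balancing strategies are assumptions for the sketch, not the published pipeline.

```python
# Sketch of a heterogeneous stacking ensemble over per-participant gaze features.
# Estimator choices and preprocessing are illustrative assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def build_stacking_model():
    base_learners = [
        ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                      random_state=0)),
        ("svm", make_pipeline(StandardScaler(), PCA(n_components=0.95),
                              SVC(probability=True, class_weight="balanced",
                                  random_state=0))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ]
    # A logistic-regression meta-classifier combines the base-learner predictions.
    return StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

# Usage, given a gaze-feature matrix X and ASD/TD labels y:
# from sklearn.model_selection import cross_val_score
# print(cross_val_score(build_stacking_model(), X, y, cv=5, scoring="f1").mean())
```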
Affiliation(s)
- Rafaela Oliveira da Silva Sá
- School of Arts, Sciences and Humanities (EACH) of the University of Sao Paulo (USP), Rua Arlindo Béttio, 1000 - Ermelino Matarazzo, São Paulo, 03828-000, São Paulo, Brazil.
- Gabriel de Castro Michelassi
- School of Arts, Sciences and Humanities (EACH) of the University of Sao Paulo (USP), Rua Arlindo Béttio, 1000 - Ermelino Matarazzo, São Paulo, 03828-000, São Paulo, Brazil.
- Diego Dos Santos Butrico
- Department of Psychiatry, University of Sao Paulo's School of Medicine (FMUSP), Rua Doutor Ovídio Pires de Campos, 785 - Cerqueira César, São Paulo, 05403-010, São Paulo, Brazil.
- Felipe de Oliveira Franco
- Department of Psychiatry, University of Sao Paulo's School of Medicine (FMUSP), Rua Doutor Ovídio Pires de Campos, 785 - Cerqueira César, São Paulo, 05403-010, São Paulo, Brazil.
- Fernando Mitsuo Sumiya
- Department of Psychiatry, University of Sao Paulo's School of Medicine (FMUSP), Rua Doutor Ovídio Pires de Campos, 785 - Cerqueira César, São Paulo, 05403-010, São Paulo, Brazil.
- Joana Portolese
- Department of Psychiatry, University of Sao Paulo's School of Medicine (FMUSP), Rua Doutor Ovídio Pires de Campos, 785 - Cerqueira César, São Paulo, 05403-010, São Paulo, Brazil.
- Helena Brentani
- Department of Psychiatry, University of Sao Paulo's School of Medicine (FMUSP), Rua Doutor Ovídio Pires de Campos, 785 - Cerqueira César, São Paulo, 05403-010, São Paulo, Brazil.
- Fátima L S Nunes
- School of Arts, Sciences and Humanities (EACH) of the University of Sao Paulo (USP), Rua Arlindo Béttio, 1000 - Ermelino Matarazzo, São Paulo, 03828-000, São Paulo, Brazil.
- Ariane Machado-Lima
- School of Arts, Sciences and Humanities (EACH) of the University of Sao Paulo (USP), Rua Arlindo Béttio, 1000 - Ermelino Matarazzo, São Paulo, 03828-000, São Paulo, Brazil.
4. Jia SJ, Jing JQ, Yang CJ. A Review on Autism Spectrum Disorder Screening by Artificial Intelligence Methods. J Autism Dev Disord 2024:10.1007/s10803-024-06429-9. [PMID: 38842671 DOI: 10.1007/s10803-024-06429-9]
Abstract
PURPOSE With the increasing prevalence of autism spectrum disorder (ASD), the importance of early screening and diagnosis has been subject to considerable discussion. Given the subtle differences between children with ASD and typically developing children during the early stages of development, it is imperative to investigate automatic recognition methods powered by artificial intelligence. We aim to summarize the research on this topic and identify the markers that can be used for recognition. METHODS We searched papers published in the Web of Science, PubMed, Scopus, Medline, SpringerLink, Wiley Online Library, and EBSCO databases from 1 January 2013 to 13 November 2023; 43 articles were included. RESULTS These articles mainly divided recognition markers into five categories: gaze behaviors, facial expressions, motor movements, voice features, and task performance. Based on these markers, the accuracy of artificial intelligence screening ranged from 62.13% to 100%, sensitivity from 69.67% to 100%, and specificity from 54% to 100%. CONCLUSION Artificial intelligence recognition therefore holds promise as a tool for identifying children with ASD. However, screening models still need continual refinement and improved accuracy through multimodal screening, thereby facilitating timely intervention and treatment.
Affiliation(s)
- Si-Jia Jia
- Faculty of Education, East China Normal University, Shanghai, China
- Jia-Qi Jing
- Faculty of Education, East China Normal University, Shanghai, China
- Chang-Jiang Yang
- Faculty of Education, East China Normal University, Shanghai, China.
- China Research Institute of Care and Education of Infants and Young, Shanghai, China.
5. Thorsson M, Galazka MA, Åsberg Johnels J, Hadjikhani N. Influence of autistic traits and communication role on eye contact behavior during face-to-face interaction. Sci Rep 2024; 14:8162. [PMID: 38589489 PMCID: PMC11001951 DOI: 10.1038/s41598-024-58701-8]
Abstract
Eye contact is a central component in face-to-face interactions. It is important in structuring communicative exchanges and offers critical insights into others' interests and intentions. To better understand eye contact in face-to-face interactions, we applied a novel, non-intrusive deep-learning-based dual-camera system and investigated associations between eye contact and autistic traits as well as self-reported eye contact discomfort during a referential communication task, where participants and the experimenter had to guess, in turn, a word known by the other individual. Corroborating previous research, we found that participants' eye gaze and mutual eye contact were inversely related to autistic traits. In addition, our findings revealed different behaviors depending on the role in the dyad: listening and guessing were associated with increased eye contact compared with describing words. In the listening and guessing condition, only a subgroup who reported eye contact discomfort had a lower amount of eye gaze and eye contact. When describing words, higher autistic traits were associated with reduced eye gaze and eye contact. Our data indicate that eye contact is inversely associated with autistic traits when describing words, and that eye gaze is modulated by the communicative role in a conversation.
Affiliation(s)
- Max Thorsson
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Martyna A Galazka
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Division of Cognition and Communication, Department of Applied Information Technology, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
6. Minissi ME, Altozano A, Marín-Morales J, Chicchi Giglioli IA, Mantovani F, Alcañiz M. Biosignal comparison for autism assessment using machine learning models and virtual reality. Comput Biol Med 2024; 171:108194. [PMID: 38428095 DOI: 10.1016/j.compbiomed.2024.108194]
Abstract
Clinical assessment procedures encounter challenges in terms of objectivity because they rely on subjective data. Computational psychiatry proposes overcoming this limitation by introducing biosignal-based assessments able to detect clinical biomarkers, while virtual reality (VR) can offer ecological settings for measurement. Autism spectrum disorder (ASD) is a neurodevelopmental disorder where many biosignals have been tested to improve assessment procedures. However, in ASD research there is a lack of studies systematically comparing biosignals for the automatic classification of ASD when recorded simultaneously in ecological settings, and comparisons among previous studies are challenging due to methodological inconsistencies. In this study, we examined a VR screening tool consisting of four virtual scenes, and we compared machine learning models based on implicit (motor skills and eye movements) and explicit (behavioral responses) biosignals. Machine learning models were developed for each biosignal within the virtual scenes and then combined into a final model per biosignal. A linear support vector classifier with recursive feature elimination was used and tested using nested cross-validation. The final model based on motor skills exhibited the highest robustness in identifying ASD, achieving an AUC of 0.89 (SD = 0.08). The best behavioral model showed an AUC of 0.80, while further research is needed for the eye-movement models due to limitations with the eye-tracking glasses. These findings highlight the potential of motor skills in enhancing objectivity and reliability in the early assessment of ASD compared to other biosignals.
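A compact sketch of the modelling strategy described above (a linear support vector classifier with recursive feature elimination evaluated by nested cross-validation); the feature counts, hyperparameter grid, and fold settings are assumptions rather than the study's exact configuration.

```python
# Sketch of a linear SVC with recursive feature elimination, scored by nested CV.
# Grids, fold counts, and the feature matrix are illustrative assumptions.
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

def nested_cv_auc(X, y):
    """Estimate out-of-sample AUC for ASD (1) vs. TD (0) from biosignal features."""
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("rfe", RFE(LinearSVC(dual=False, max_iter=5000))),
        ("clf", LinearSVC(dual=False, max_iter=5000)),
    ])
    # Inner loop tunes the number of retained features and C; outer loop scores.
    grid = {"rfe__n_features_to_select": [5, 10, 20],
            "clf__C": [0.01, 0.1, 1.0]}
    inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
    search = GridSearchCV(pipe, grid, cv=inner, scoring="roc_auc")
    return cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
```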
Affiliation(s)
- Maria Eleonora Minissi
- Instituto Universitario de Investigación en Tecnología Centrada en El Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain.
- Alberto Altozano
- Instituto Universitario de Investigación en Tecnología Centrada en El Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Javier Marín-Morales
- Instituto Universitario de Investigación en Tecnología Centrada en El Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Irene Alice Chicchi Giglioli
- Instituto Universitario de Investigación en Tecnología Centrada en El Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Fabrizia Mantovani
- Centre for Studies in Communication Sciences "Luigi Anolli" (CESCOM), Department of Human Sciences for Education "Riccardo Massa", University of Milano - Bicocca, Building U16, Via Tomas Mann, 20162, Milan, Italy
- Mariano Alcañiz
- Instituto Universitario de Investigación en Tecnología Centrada en El Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
7. Cheng R, Zhao Z, Hou W, Zhou G, Liao H, Zhang X, Li J. [Machine learning algorithms for identifying autism spectrum disorder through eye-tracking in different intention videos]. Zhongguo Dang Dai Er Ke Za Zhi (Chinese Journal of Contemporary Pediatrics) 2024; 26:151-157. [PMID: 38436312 PMCID: PMC10921872 DOI: 10.7499/j.issn.1008-8830.2309073]
Abstract
OBJECTIVES To investigate the differences in visual perception between children with autism spectrum disorder (ASD) and typically developing (TD) children when watching different intention videos, and to explore the feasibility of machine learning algorithms in objectively distinguishing between ASD children and TD children. METHODS A total of 58 children with ASD and 50 TD children were enrolled and were asked to watch the videos containing joint intention and non-joint intention, and the gaze duration and frequency in different areas of interest were used as original indicators to construct classifier-based models. The models were evaluated in terms of the indicators such as accuracy, sensitivity, and specificity. RESULTS When using eight common classifiers, including support vector machine, linear discriminant analysis, decision tree, random forest, and K-nearest neighbors (with K values of 1, 3, 5, and 7), based on the original feature indicators, the highest classification accuracy achieved was 81.90%. A feature reconstruction approach with a decision tree classifier was used to further improve the accuracy of classification, and then the model showed the accuracy of 91.43%, the specificity of 89.80%, and the sensitivity of 92.86%, with an area under the receiver operating characteristic curve of 0.909 (P<0.001). CONCLUSIONS The machine learning model based on eye-tracking data can accurately distinguish ASD children from TD children, which provides a scientific basis for developing rapid and objective ASD screening tools.
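The classifier comparison described above can be illustrated with scikit-learn as below; the train/test split, hyperparameters, and feature matrix are illustrative assumptions rather than the study's protocol.

```python
# Sketch of benchmarking the classifier families mentioned above on
# gaze-duration/frequency features, reporting accuracy, sensitivity, specificity.
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    **{f"{k}-NN": KNeighborsClassifier(n_neighbors=k) for k in (1, 3, 5, 7)},
}

def evaluate(X, y):
    """X: gaze features per child; y: 1 = ASD, 0 = TD."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    for name, clf in classifiers.items():
        y_pred = clf.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
        sensitivity = tp / (tp + fn)   # ASD correctly identified
        specificity = tn / (tn + fp)   # TD correctly identified
        print(f"{name}: acc={accuracy_score(y_te, y_pred):.3f} "
              f"sens={sensitivity:.3f} spec={specificity:.3f}")
```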
Affiliation(s)
- Rong Cheng
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518010, Guangdong, China
8. Jaiswal A, Washington P. Using #ActuallyAutistic on Twitter for Precision Diagnosis of Autism Spectrum Disorder: Machine Learning Study. JMIR Form Res 2024; 8:e52660. [PMID: 38354045 PMCID: PMC10902768 DOI: 10.2196/52660]
Abstract
BACKGROUND The increasing use of social media platforms has given rise to an unprecedented surge in user-generated content, with millions of individuals publicly sharing their thoughts, experiences, and health-related information. Social media can serve as a useful means to study and understand public health. Twitter (subsequently rebranded as "X") is one such social media platform that has proven to be a valuable source of rich information for both the general public and health officials. We conducted the first study applying Twitter data mining to autism screening. OBJECTIVE This study used Twitter as the primary source of data to study the behavioral characteristics and real-time emotional projections of individuals identifying with autism spectrum disorder (ASD). We aimed to improve the rigor of ASD analytics research by using the digital footprint of an individual to study the linguistic patterns of individuals with ASD. METHODS We developed a machine learning model to distinguish individuals with autism from their neurotypical peers based on the textual patterns from their public communications on Twitter. We collected 6,515,470 tweets from users self-identifying with autism via "#ActuallyAutistic" and from a separate control group to identify linguistic markers associated with ASD traits. To construct the data set, we targeted English-language tweets using the search query "#ActuallyAutistic" posted from January 1, 2014, to December 31, 2022. From these tweets, we identified unique users who used keywords such as "autism" OR "autistic" OR "neurodiverse" in their profile description and collected all the tweets from their timeline. To build the control group data set, we formulated a search query excluding the hashtag, "-#ActuallyAutistic," and collected 1000 tweets per day during the same time period. We trained a word2vec model and an attention-based, bidirectional long short-term memory model to validate the performance of per-tweet and per-profile classification models. We also illustrate the utility of the data set through common natural language processing tasks such as sentiment analysis and topic modeling. RESULTS Our tweet classifier reached a 73% accuracy, a 0.728 area under the receiver operating characteristic curve score, and a 0.71 F1-score using word2vec representations fed into a logistic regression model, while the user profile classifier achieved a 0.78 area under the receiver operating characteristic curve score and an F1-score of 0.805 using an attention-based, bidirectional long short-term memory model. This is a promising start, demonstrating the potential for effective digital phenotyping studies and large-scale intervention using text data mined from social media. CONCLUSIONS Textual differences in social media communications can help researchers and clinicians conduct symptomatology studies in natural settings.
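A minimal sketch of the per-tweet pipeline described above (word2vec embeddings averaged per tweet and fed to a logistic regression); the tokenization, embedding dimensions, and split settings are assumptions, not the authors' exact configuration.

```python
# Sketch of a per-tweet classifier: train word2vec, average vectors per tweet,
# fit logistic regression. Parameters are illustrative assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, roc_auc_score

def tweet_vectors(token_lists, w2v):
    """Average the word vectors of each tweet; zero vector if no known tokens."""
    dim = w2v.wv.vector_size
    rows = []
    for tokens in token_lists:
        vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
        rows.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(rows)

def train_tweet_classifier(token_lists, labels):
    # token_lists: tokenized tweets; labels: 1 = #ActuallyAutistic cohort, 0 = control.
    w2v = Word2Vec(sentences=token_lists, vector_size=100, window=5,
                   min_count=2, workers=4, epochs=10)
    X = tweet_vectors(token_lists, w2v)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                              stratify=labels, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    print("AUC:", roc_auc_score(y_te, proba),
          "F1:", f1_score(y_te, (proba > 0.5).astype(int)))
    return w2v, clf
```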
Affiliation(s)
- Aditi Jaiswal
- Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
- Peter Washington
- Department of Information and Computer Sciences, University of Hawaii at Manoa, Honolulu, HI, United States
9. Ochi K, Kojima M, Ono N, Kuroda M, Owada K, Sagayama S, Yamasue H. Objective assessment of autism spectrum disorder based on performance in structured interpersonal acting-out tasks with prosodic stability and variability. Autism Res 2024; 17:395-409. [PMID: 38151701 DOI: 10.1002/aur.3080]
Abstract
In this study, we sought to objectively and quantitatively characterize the prosodic features of autism spectrum disorder (ASD) in a newly developed structured speech experiment. Male adults with high-functioning ASD and age/intelligence-matched men with typical development (TD) were asked to read 29 brief scripts aloud in response to preceding auditory stimuli. To investigate whether (1) highly structured acting-out tasks can uncover prosodic differences between those with ASD and those with TD, and (2) prosodic stability and flexibility can be used for objective automatic assessment of ASD, we compared prosodic features such as fundamental frequency, intensity, and mora duration. The results indicate that individuals with ASD exhibit stable pitch registers or volume levels in some affective vocal-expression scenarios, such as those involving anger or sadness, compared with those with TD. However, unstable prosody was observed in some timing control or emphasis tasks in the participants with ASD. Automatic classification of the ASD and TD groups using a support vector machine (SVM) with speech features exhibited an accuracy of 90.4%. A machine learning-based assessment of the degree of ASD core symptoms using support vector regression (SVR) also performed well. These results may inform the development of a new easy-to-use assessment tool for ASD core symptoms using recorded audio signals.
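A brief sketch of the two modelling steps described above, assuming a generic prosodic feature matrix; the kernels, hyperparameters, and cross-validation scheme are illustrative, not the authors' exact setup.

```python
# Sketch of SVM classification of ASD vs. TD from prosodic features and
# SVR estimation of symptom scores. Settings are illustrative assumptions.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR
from sklearn.model_selection import cross_val_score, LeaveOneOut

def classify_asd(X_prosody, y_group):
    # X_prosody: per-speaker features such as F0 range, intensity variability,
    # and mora-duration statistics; y_group: ASD (1) vs. TD (0).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(clf, X_prosody, y_group, cv=LeaveOneOut()).mean()

def estimate_severity(X_prosody, y_score):
    # y_score: continuous symptom-severity ratings for each speaker.
    reg = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.1))
    return cross_val_score(reg, X_prosody, y_score, cv=5, scoring="r2")
```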
Affiliation(s)
- Keiko Ochi
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Masaki Kojima
- Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Nobutaka Ono
- Graduate School of Systems Design, Tokyo Metropolitan University, Tokyo, Japan
- Miho Kuroda
- Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Keiho Owada
- Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidenori Yamasue
- Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Department of Psychiatry, Hamamatsu University School of Medicine, Hamamatsu City, Japan
10. Nedungadi P, Shah SM, Stokes MA, Kumar Nair V, Moorkoth A, Raman R. Mapping autism's research landscape: trends in autism screening and its alignment with sustainable development goals. Front Psychiatry 2024; 14:1294254. [PMID: 38361829 PMCID: PMC10868528 DOI: 10.3389/fpsyt.2023.1294254]
Abstract
Introduction Autism Spectrum Disorder is a complex neurodevelopmental syndrome that profoundly affects social interactions, communication, and sensory perception. The research traced the evolution of autism research from 2011-2022, specifically focusing on the screening and diagnosis of children and students. Methods Through an analysis of 12,262 publications using the PRISMA framework, bibliographic coupling, science mapping, and citation analysis, this study illuminates the growth trajectory of ASD research and significant disparities in diagnosis and services. Results The study indicates an increasing trend in autism research, with a strong representation of female authorship. Open Access journals show a higher average citation impact compared to their closed counterparts. A keyword co-occurrence analysis revealed four central research themes: Child Development and Support Systems, Early Identification and Intervention, Prevalence and Etiology, and Mental Health. The pandemic's onset has prioritized research areas like mental health, telehealth, and service accessibility. Discussion Recommendations on a global level stress the importance of developing timely biological markers for ASD, amplifying Disability Inclusion research, and personalizing mental health services to bridge these critical service gaps. These strategies, underpinned by interdisciplinary collaboration and telehealth innovation, particularly in low-resource settings, can offer a roadmap for inclusive, context-sensitive interventions at local levels that directly support SDG3's aim for health and well-being for all.
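The keyword co-occurrence analysis mentioned above boils down to counting how often pairs of author keywords appear in the same publication; a minimal sketch with invented example keywords follows.

```python
# Sketch of a keyword co-occurrence count, the core step behind a
# bibliometric co-occurrence analysis. Example keywords are invented.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each pair of author keywords appears in the same paper."""
    pairs = Counter()
    for keywords in keyword_lists:
        unique = sorted(set(k.lower() for k in keywords))
        pairs.update(combinations(unique, 2))
    return pairs

papers = [
    ["autism", "screening", "machine learning"],
    ["autism", "early intervention", "screening"],
    ["telehealth", "autism", "mental health"],
]
for (a, b), n in cooccurrence_counts(papers).most_common(5):
    print(f"{a} <-> {b}: {n}")
```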
Affiliation(s)
- Prema Nedungadi
- Amrita School of Computing, Amrita Vishwa Vidyapeetham, Kollam, India
- Ajit Moorkoth
- Seed Special Education Center, Dubai, United Arab Emirates
- Raghu Raman
- Amrita School of Business Amritapuri, Amrita Vishwa Vidyapeetham University, Coimbatore, Tamil Nadu, India
11. Mukherjee D, Bhavnani S, Lockwood Estrin G, Rao V, Dasgupta J, Irfan H, Chakrabarti B, Patel V, Belmonte MK. Digital tools for direct assessment of autism risk during early childhood: A systematic review. Autism: The International Journal of Research and Practice 2024; 28:6-31. [PMID: 36336996 PMCID: PMC10771029 DOI: 10.1177/13623613221133176]
Abstract
LAY ABSTRACT The challenge of finding autistic children, and finding them early enough to make a difference for them and their families, becomes all the greater in parts of the world where human and material resources are in short supply. Poverty of resources delays interventions, translating into a poverty of outcomes. Digital tools carry potential to lessen this delay because they can be administered by non-specialists in children's homes, schools or other everyday environments, they can measure a wide range of autistic behaviours objectively and they can automate analysis without requiring an expert in computers or statistics. This literature review aimed to identify and describe digital tools for screening children who may be at risk for autism. These tools are predominantly at the 'proof-of-concept' stage. Both portable (laptops, mobile phones, smart toys) and fixed (desktop computers, virtual-reality platforms) technologies are used to present computerised games, or to record children's behaviours or speech. Computerised analysis of children's interactions with these technologies differentiates children with and without autism, with promising results. Tasks assessing social responses and hand and body movements are the most reliable in distinguishing autistic from typically developing children. Such digital tools hold immense potential for early identification of autism spectrum disorder risk at a large scale. Next steps should be to further validate these tools and to evaluate their applicability in a variety of settings. Crucially, stakeholders from underserved communities globally must be involved in this research, lest it fail to capture the issues that these stakeholders are facing.
Affiliation(s)
- Debarati Mukherjee
- Indian Institute of Public Health - Bengaluru, Public Health Foundation of India, India
- Vaisnavi Rao
- Institute for Democracy and Economic Affairs (IDEAS), Malaysia
- Vikram Patel
- Child Development Group, Sangath, India
- Harvard Medical School, USA
- Harvard T.H. Chan School of Public Health, USA
12. Soltiyeva A, Oliveira W, Madina A, Adilkhan S, Urmanov M, Hamari J. My Lovely Granny's Farm: An immersive virtual reality training system for children with autism spectrum disorder. Education and Information Technologies 2023:1-21. [PMID: 37361850 PMCID: PMC10199436 DOI: 10.1007/s10639-023-11862-x]
Abstract
One of the biggest difficulties faced by children with Autism Spectrum Disorder during their learning process and general life is communication and social interaction. In recent years, researchers and practitioners have invested in different approaches to improving aspects of their communication and learning. However, there is still no consolidated approach, and the community is still looking for new approaches that can meet this need. Addressing this challenge, in this article we propose a novel approach (i.e., an Adaptive Immersive Virtual Reality Training System) aiming to enrich the social interaction and communication skills of children with Autism Spectrum Disorder. In this adaptive system (called My Lovely Granny's Farm), the behavior of the virtual trainer changes depending on the mood and actions of the users (i.e., patients/learners). Additionally, we conducted an initial observational study by monitoring the behavior of children with autism in a virtual environment. In this study, the system was offered to users with a high degree of interactivity so that they might practice various social situations in a safe and controlled environment. The results demonstrate that the use of the system can allow patients who need treatment to receive therapy without leaving home. Our approach is the first experience of treating children with autism in Kazakhstan and can contribute to improving the communication and social interaction of children with Autism Spectrum Disorder. We contribute to the educational technology and mental health communities by providing a system that can improve communication among children with autism and by providing insights on how to design this kind of system.
Affiliation(s)
- Aiganym Soltiyeva
- Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Kaskelen, Kazakhstan
- Wilk Oliveira
- Gamification Group, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Alimanova Madina
- Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Kaskelen, Kazakhstan
- Shyngys Adilkhan
- Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Kaskelen, Kazakhstan
- Marat Urmanov
- Faculty of Engineering and Natural Sciences, Suleyman Demirel University, Kaskelen, Kazakhstan
- Juho Hamari
- Gamification Group, Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
13. Minissi ME, Gómez-Zaragozá L, Marín-Morales J, Mantovani F, Sirera M, Abad L, Cervera-Torres S, Gómez-García S, Chicchi Giglioli IA, Alcañiz M. The whole-body motor skills of children with autism spectrum disorder taking goal-directed actions in virtual reality. Front Psychol 2023; 14:1140731. [PMID: 37089733 PMCID: PMC10117537 DOI: 10.3389/fpsyg.2023.1140731]
Abstract
Many symptoms of autism spectrum disorder (ASD) are evident in early infancy, but ASD is usually diagnosed much later by procedures lacking objective measurements. Earlier identification of ASD requires improving the objectivity of the procedure and the use of ecological settings. In this context, atypical motor skills are reaching consensus as a promising ASD biomarker, regardless of the level of symptom severity. This study aimed to assess differences in whole-body motor skills between 20 children with ASD and 20 children with typical development during the execution of three tasks resembling regular activities presented in virtual reality. The virtual tasks required precise, goal-directed actions with different limbs, varying in their degree of freedom of movement. Parametric and non-parametric statistical methods were applied to analyze differences in children's motor skills. The findings endorsed the hypothesis that when particular goal-directed movements are required, the type of action can modulate the presence of motor abnormalities in ASD. In particular, ASD motor abnormalities emerged in the task requiring goal-directed upper-limb actions with a low degree of freedom. The motor abnormalities involved (1) the body part mainly involved in the action, and (2) further body parts not directly involved in the movement. Findings were discussed against the background of atypical prospective control of movements and visuomotor discoordination in ASD. These findings contribute to advancing the understanding of motor skills in ASD while deepening ecological and objective assessment procedures based on VR.
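A minimal sketch of the parametric/non-parametric comparison described above, assuming one motor feature per group; the normality check, alpha level, feature name, and simulated data are illustrative assumptions.

```python
# Sketch of a two-group comparison of a motor feature: test normality, then
# apply a t-test or Mann-Whitney U test accordingly. Data are simulated.
import numpy as np
from scipy import stats

def compare_groups(asd_values, td_values, alpha=0.05):
    """Return the test used and its p-value for one motor feature."""
    normal = (stats.shapiro(asd_values).pvalue > alpha and
              stats.shapiro(td_values).pvalue > alpha)
    if normal:
        test, res = "t-test", stats.ttest_ind(asd_values, td_values)
    else:
        test, res = "Mann-Whitney U", stats.mannwhitneyu(asd_values, td_values)
    return test, res.pvalue

rng = np.random.default_rng(0)
asd_wrist_speed = rng.normal(0.9, 0.2, 20)   # simulated per-child means
td_wrist_speed = rng.normal(1.1, 0.2, 20)
print(compare_groups(asd_wrist_speed, td_wrist_speed))
```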
Affiliation(s)
- Maria Eleonora Minissi
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Lucía Gómez-Zaragozá
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Javier Marín-Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Fabrizia Mantovani
- Centre for Studies in Communication Sciences "Luigi Anolli" (CESCOM), Department of Human Sciences for Education "Riccardo Massa", University of Milano - Bicocca, Milan, Italy
- Marian Sirera
- Red Cenit, Centros de Desarrollo Cognitivo, Valencia, Spain
- Luis Abad
- Red Cenit, Centros de Desarrollo Cognitivo, Valencia, Spain
- Sergio Cervera-Torres
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Soledad Gómez-García
- Facultad de Magisterio y Ciencias de la Educación, Universidad Católica de Valencia, Valencia, Spain
- Irene Alice Chicchi Giglioli
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
- Mariano Alcañiz
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politécnica de Valencia, Valencia, Spain
14. From virtual to prosocial reality: The effects of prosocial virtual reality games on preschool children's prosocial tendencies in real life environments. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2022.107546]
15. Previously Marzena Szkodo MOR, Micai M, Caruso A, Fulceri F, Fazio M, Scattoni ML. Technologies to support the diagnosis and/or treatment of neurodevelopmental disorders: A systematic review. Neurosci Biobehav Rev 2023; 145:105021. [PMID: 36581169 DOI: 10.1016/j.neubiorev.2022.105021]
Abstract
In recent years, there has been a great interest in utilizing technology in mental health research. The rapid technological development has encouraged researchers to apply technology as a part of a diagnostic process or treatment of Neurodevelopmental Disorders (NDDs). With the large number of studies being published comes an urgent need to inform clinicians and researchers about the latest advances in this field. Here, we methodically explore and summarize findings from studies published between August 2019 and February 2022. A search strategy led to the identification of 4108 records from PubMed and APA PsycInfo databases. 221 quantitative studies were included, covering a wide range of technologies used for diagnosis and/or treatment of NDDs, with the biggest focus on Autism Spectrum Disorder (ASD). The most popular technologies included machine learning, functional magnetic resonance imaging, electroencephalogram, magnetic resonance imaging, and neurofeedback. The results of the review indicate that technology-based diagnosis and intervention for NDD population is promising. However, given a high risk of bias of many studies, more high-quality research is needed.
Affiliation(s)
- Martina Micai
- Research Coordination and Support Service, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy.
- Angela Caruso
- Research Coordination and Support Service, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy.
- Francesca Fulceri
- Research Coordination and Support Service, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy.
- Maria Fazio
- Department of Mathematics, Computer Science, Physics and Earth Sciences (MIFT), University of Messina, Viale F. Stagno d'Alcontres, 31, 98166 Messina, Italy.
- Maria Luisa Scattoni
- Research Coordination and Support Service, Istituto Superiore di Sanità, Viale Regina Elena 299, 00161 Rome, Italy.
16. Hibbard PB. Virtual Reality for Vision Science. Curr Top Behav Neurosci 2023; 65:131-159. [PMID: 36723780 DOI: 10.1007/7854_2023_416]
Abstract
Virtual reality (VR) allows us to create visual stimuli that are both immersive and reactive. VR provides many new opportunities in vision science. In particular, it allows us to present wide field-of-view, immersive visual stimuli; for observers to actively explore the environments that we create; and for us to understand how visual information is used in the control of behaviour. In contrast with traditional psychophysical experiments, VR provides much greater flexibility in creating environments and tasks that are more closely aligned with our everyday experience. These benefits of VR are of particular value in developing our theories of the behavioural goals of the visual system and explaining how visual information is processed to achieve these goals. The use of VR in vision science presents a number of technical challenges, relating to how the available software and hardware limit our ability to accurately specify the visual information that defines our virtual environments and the interpretation of data gathered in experiments with a freely moving observer in a responsive environment.
Affiliation(s)
- Paul B Hibbard
- Department of Psychology, University of Essex, Colchester, UK.
17. Shirwaikar RD, Sarwari I, Najam M, M SH. Has Machine Learning Enhanced the Diagnosis of Autism Spectrum Disorder? Crit Rev Biomed Eng 2023; 51:1-14. [PMID: 37522537 DOI: 10.1615/critrevbiomedeng.v51.i1.10]
Abstract
Autism spectrum disorder (ASD) is a complex neurological condition that limits an individual's capacity for communication and learning throughout their life. Although symptoms of autism can be diagnosed in individuals of different ages, it is labeled a developmental disorder because symptoms typically appear in the first two years of childhood. Autism has no single known cause; multiple factors contribute to its etiology in children. Because symptoms and severity of ASD vary in every individual, there could be many causes. Detection of ASD in the early stages is crucial for providing a path for rehabilitation that enhances quality of life and integrates the person with ASD into the social, family, and professional spheres. Assessment of ASD relies on experienced observers in neutral environments, which introduces constraints and biases, limits credibility, and fails to accurately reflect performance in real-world scenarios. To get around these limitations, this review offers a thorough analysis of the impact of ASD on the individual and those living around them, together with the most recent research on how new techniques are implemented in the diagnosis of ASD. As a result of improvements in technology, assessments now include processing unconventional data beyond what can be collected from measurements of laboratory chemistry or of electrophysiological origin. Examples of these technologies include virtual reality and sensors such as eye-tracking imaging. Studies have been conducted on emotion recognition and brain networks to identify functional connectivity and discriminate between people with ASD and people who are thought to be typically developing. Diagnosis of autism has recently made substantial use of long short-term memory (LSTM), convolutional neural network (CNN) and its variants, random forest (RF), and naive Bayes (NB) machine learning techniques. It is hoped that researchers will develop methodologies that increase the probability of identifying ASD in its varied forms and contribute towards an improved lifestyle for patients with ASD and those affected by the pathology.
Affiliation(s)
- Rudresh Deepak Shirwaikar
- Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa, India, 403507
- Iram Sarwari
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), Bangalore, Karnataka, India 560064
- Mehwish Najam
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), Bangalore, Karnataka, India 560064
- Shama H M
- BMS Institute of Technology and Management (BMSIT), Bangalore, Karnataka, India 560064
18. Wei Q, Cao H, Shi Y, Xu X, Li T. Machine learning based on eye-tracking data to identify Autism Spectrum Disorder: A systematic review and meta-analysis. J Biomed Inform 2023; 137:104254. [PMID: 36509416 DOI: 10.1016/j.jbi.2022.104254]
Abstract
BACKGROUND Machine learning has been widely used to identify Autism Spectrum Disorder (ASD) based on eye-tracking, but its accuracy is uncertain. We aimed to summarize the available evidence on the performances of machine learning algorithms in classifying ASD and typically developing (TD) individuals based on eye-tracking data. METHODS We searched Medline, Embase, Web of Science, Scopus, Cochrane Library, IEEE Xplore Digital Library, Wan Fang Database, China National Knowledge Infrastructure, Chinese BioMedical Literature Database, VIP Database for Chinese Technical Periodicals, from database inception to December 24, 2021. Studies using machine learning methods to classify ASD and TD individuals based on eye-tracking technologies were included. We extracted the data on study population, model performances, algorithms of machine learning, and paradigms of eye-tracking. This study is registered with PROSPERO, CRD42022296037. RESULTS 261 articles were identified, of which 24 studies with sample sizes ranging from 28 to 141 were included (n = 1396 individuals). Machine learning based on eye-tracking yielded a pooled classification accuracy of 81% (I² = 73%), specificity of 79% (I² = 61%), and sensitivity of 84% (I² = 61%) in classifying ASD and TD individuals. In subgroup analysis, the accuracy was 88% (95% CI: 85-91%), 79% (95% CI: 72-84%), and 71% (95% CI: 59-91%) for the preschool-aged, school-aged, and adolescent-adult groups, respectively. Eye-tracking stimuli and machine learning algorithms varied widely across studies, with social, static, and active stimuli and Support Vector Machine and Random Forest most commonly reported. Regarding the model performance evaluation, 15 studies reported their final results on validation datasets, four based on testing datasets, and five did not report whether they used validation datasets. Most studies failed to report the information on eye-tracking hardware and the implementation process. CONCLUSION Using eye-tracking data, machine learning has shown potential in identifying ASD individuals with high accuracy, especially in preschool-aged children. However, the heterogeneity between studies, the absence of test set-based performance evaluations, the small sample sizes, and the non-standardized implementation of eye-tracking might undermine the reliability of the results. Further well-designed and well-executed studies with comprehensive and transparent reporting are needed to determine the optimal eye-tracking paradigms and machine learning algorithms.
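The pooled estimates above come from random-effects meta-analysis; the sketch below implements DerSimonian-Laird pooling of logit-transformed proportions with an I² heterogeneity estimate, using invented per-study counts, and may differ from the review's actual software and model settings.

```python
# Sketch of DerSimonian-Laird random-effects pooling of study-level proportions
# (e.g., per-study classification accuracy) on the logit scale, with I^2.
# The example counts are invented.
import numpy as np

def pool_proportions(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    y = np.log(events / (totals - events))          # logit of each proportion
    v = 1 / events + 1 / (totals - events)          # approximate variances
    w = 1 / v                                       # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance
    w_re = 1 / (v + tau2)
    y_re = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    pooled = 1 / (1 + np.exp(-y_re))                # back-transform to proportion
    return pooled, i2

# Example: correctly classified counts vs. sample sizes across four studies.
print(pool_proportions(events=[45, 80, 30, 110], totals=[56, 95, 40, 130]))
```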
Affiliation(s)
- Qiuhong Wei
- Children Nutrition Research Center, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, Chongqing Key Laboratory of Childhood Nutrition and Health, Chongqing, China
- Huiling Cao
- Department of Neonatology, Children's Hospital of Chongqing Medical University, Chongqing, China
- Yuan Shi
- Department of Neonatology, Children's Hospital of Chongqing Medical University, Chongqing, China
- Ximing Xu
- Big Data Center for Children's Medical Care, Children's Hospital of Chongqing Medical University, Chongqing, China.
- Tingyu Li
- Children Nutrition Research Center, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, China International Science and Technology Cooperation Base of Child Development and Critical Disorders, Chongqing Key Laboratory of Childhood Nutrition and Health, Chongqing, China.
19. Assessment of the validity and feasibility of a novel virtual reality test of emotion regulation in patients with bipolar disorder and their unaffected relatives. J Affect Disord 2022; 318:217-223. [PMID: 36089075 DOI: 10.1016/j.jad.2022.09.004]
Abstract
BACKGROUND Emotion dysregulation has been suggested as an endophenotype of bipolar disorder (BD). Neuroimaging studies show aberrant neural activity during emotion regulation in remitted patients with BD and their unaffected first-degree relatives (UR) compared to healthy controls (HC). However, behavioural studies produce conflicting - generally negative findings - possibly due to limited sensitivity and ecological validity of current behavioural paradigms. METHODS This study aimed to explore emotion regulation in BD (n = 30) and UR (n = 26) relative to HC (n = 47) by using a novel emotion regulation task in virtual reality (VR). Participants were instructed to either react naturally to, or dampen, their emotional response to highly positive or highly negative scenarios presented in first-person 360-degree spherically camera-recorded VR environments. Participants also completed a more traditional computerised task of emotion regulation for comparison purposes. RESULTS Patients with BD exhibited difficulties with down-regulating their negative emotions in the VR paradigm compared to HC and UR (ps ≤ .04), whereas UR did not differ from HC (p = .97). There was no emotion regulation difference between groups in the more traditional computerised task (ps ≥ .40). LIMITATIONS The small sample size limits generalisability. CONCLUSIONS The results suggest trait-related reduced ability to down-regulate negative emotions in BD patients compared to HC in the VR paradigm, but not in the more traditional task of emotion regulation. This may indicate that VR provides a more sensitive measure relative to traditional paradigms. The findings provided no support for aberrant emotional regulation as an endophenotype of BD given the normal emotion regulation performance in UR.
20. Wiebe A, Kannen K, Selaskowski B, Mehren A, Thöne AK, Pramme L, Blumenthal N, Li M, Asché L, Jonas S, Bey K, Schulze M, Steffens M, Pensel MC, Guth M, Rohlfsen F, Ekhlas M, Lügering H, Fileccia H, Pakos J, Lux S, Philipsen A, Braun N. Virtual reality in the diagnostic and therapy for mental disorders: A systematic review. Clin Psychol Rev 2022; 98:102213. [PMID: 36356351 DOI: 10.1016/j.cpr.2022.102213]
Abstract
BACKGROUND Virtual reality (VR) technologies are playing an increasingly important role in the diagnostics and treatment of mental disorders. OBJECTIVE To systematically review the current evidence regarding the use of VR in the diagnostics and treatment of mental disorders. DATA SOURCE Systematic literature searches via PubMed (last literature update: 9th of May 2022) were conducted for the following areas of psychopathology: Specific phobias, panic disorder and agoraphobia, social anxiety disorder, generalized anxiety disorder, posttraumatic stress disorder (PTSD), obsessive-compulsive disorder, eating disorders, dementia disorders, attention-deficit/hyperactivity disorder, depression, autism spectrum disorder, schizophrenia spectrum disorders, and addiction disorders. ELIGIBILITY CRITERIA To be eligible, studies had to be published in English, to be peer-reviewed, to report original research data, to be VR-related, and to deal with one of the above-mentioned areas of psychopathology. STUDY EVALUATION For each study included, various study characteristics (including interventions and conditions, comparators, major outcomes and study designs) were retrieved and a risk of bias score was calculated based on predefined study quality criteria. RESULTS Across all areas of psychopathology, k = 9315 studies were inspected, of which k = 721 studies met the eligibility criteria. From these studies, 43.97% were considered assessment-related, 55.48% therapy-related, and 0.55% were mixed. The highest research activity was found for VR exposure therapy in anxiety disorders, PTSD and addiction disorders, where the most convincing evidence was found, as well as for cognitive trainings in dementia and social skill trainings in autism spectrum disorder. CONCLUSION While VR exposure therapy will likely find its way successively into regular patient care, there are also many other promising approaches, but most are not yet mature enough for clinical application. REVIEW REGISTRATION PROSPERO register CRD42020188436. FUNDING The review was funded by budgets from the University of Bonn. No third party funding was involved.
Affiliation(s)
- Annika Wiebe
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Kyra Kannen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Benjamin Selaskowski
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Aylin Mehren
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Ann-Kathrin Thöne
- School of Child and Adolescent Cognitive Behavior Therapy (AKiP), Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Lisa Pramme
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Nike Blumenthal
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Mengtong Li
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Laura Asché
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Stephan Jonas
- Institute for Digital Medicine, University Hospital Bonn, Bonn, Germany
- Katharina Bey
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Marcel Schulze
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Maria Steffens
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Max Christian Pensel
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Matthias Guth
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Felicia Rohlfsen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Mogda Ekhlas
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Helena Lügering
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Helena Fileccia
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Julian Pakos
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Silke Lux
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Alexandra Philipsen
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany
- Niclas Braun
- Department of Psychiatry and Psychotherapy, University Hospital Bonn, Bonn, Germany.
21
|
Pons P, Navas-Medrano S, Soler-Dominguez JL. Extended reality for mental health: Current trends and future challenges. Frontiers in Computer Science 2022. [DOI: 10.3389/fcomp.2022.1034307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Virtual and augmented reality have been used to diagnose and treat several mental health disorders for decades. Technological advances in these fields have facilitated the availability of commercial solutions for end customers and practitioners. However, some barriers and limitations still prevent these technologies from being widely used by professionals on a daily basis. In addition, the COVID-19 pandemic has exposed a variety of new scenarios in which these technologies could play an essential role, such as providing remote treatment. Disorders that traditionally received less attention, such as depression or obsessive-compulsive disorder, are also coming into the spotlight. Improvements in equipment and hardware, such as mixed reality head-mounted displays, could help open new opportunities in the mental health field. Extended reality (XR) is an umbrella term comprising virtual reality (VR), mixed reality (MR), and augmented reality (AR). While XR applications are eminently visual, other senses are being explored in the literature on multisensory interaction, such as auditory, olfactory, or haptic feedback. Applying such stimuli within XR experiences for mental disorders is still under-explored and could greatly enrich the therapeutic experience. This manuscript reviews recent research on the use of XR in mental health scenarios, highlighting trends and potential applications as well as areas for improvement. It also discusses future challenges and research areas in upcoming topics such as the use of wearables and multisensory and multimodal interaction. The main goal of this paper is to unpack how these technologies could be applied to XR scenarios for mental health to exploit their full potential and follow the path of other health technologies by promoting personalized medicine.
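As an illustration of the taxonomy described above, the sketch below encodes the XR umbrella (VR, AR, MR) and the multisensory channels mentioned in the review as plain Python types. It is illustrative only; the XRIntervention record and its fields are hypothetical, introduced here solely to show how such a classification might be represented.

```python
# Minimal, illustrative sketch of the XR umbrella term and multisensory channels.
from dataclasses import dataclass, field
from enum import Enum, auto

class XRModality(Enum):
    """Extended reality (XR) as an umbrella over VR, AR and MR."""
    VR = auto()   # virtual reality
    AR = auto()   # augmented reality
    MR = auto()   # mixed reality

class SensoryChannel(Enum):
    VISUAL = auto()
    AUDITORY = auto()
    OLFACTORY = auto()
    HAPTIC = auto()

@dataclass
class XRIntervention:
    """Hypothetical record describing an XR-based mental health intervention."""
    disorder: str
    modality: XRModality
    channels: list = field(default_factory=lambda: [SensoryChannel.VISUAL])
    remote_delivery: bool = False   # e.g., remote treatment scenarios post-COVID

# Example: a multisensory VR scenario for OCD delivered remotely
session = XRIntervention(
    disorder="obsessive-compulsive disorder",
    modality=XRModality.VR,
    channels=[SensoryChannel.VISUAL, SensoryChannel.AUDITORY, SensoryChannel.HAPTIC],
    remote_delivery=True,
)
print(session)
```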
Collapse
|
22
|
Gaze Fixation and Visual Searching Behaviors during an Immersive Virtual Reality Social Skills Training Experience for Children and Youth with Autism Spectrum Disorder: A Pilot Study. Brain Sci 2022; 12(11):1568. [DOI: 10.3390/brainsci12111568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 11/04/2022] [Accepted: 11/16/2022] [Indexed: 11/19/2022] Open
Abstract
Children and youth with Autism Spectrum Disorder (ASD) display difficulties recognizing and interacting with behavioral expressions of emotion, a deficit that makes social interaction problematic. Social skills training is foundational to the treatment of ASD, yet this intervention is costly and time-consuming, lacks objectivity, and is difficult to deliver in real-world settings. This pilot project investigated the use of an immersive virtual reality (IVR) headset to simulate real-world social interactions for children and youth with ASD. The primary objective was to describe gaze fixation and visual search behaviors during the simulated activity. Ten participants were enrolled and each completed one social skills training session in IVR. The results demonstrate differential patterns between participants with mild, moderate, and severe ASD in the location and duration of gaze fixation, as well as in visual search behavior. Although the results are preliminary, these differences may shed light on phenotypes within the continuum of ASD. Additionally, there may be value in quantifying gaze and visual search behaviors as an objective metric of interventional effectiveness for social skills training therapy.
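To make the idea of quantifying gaze behavior concrete, here is a minimal Python sketch, not the study's pipeline: it assumes a hypothetical stream of time-stamped gaze samples already labelled with an area of interest (AOI) and derives two simple metrics of the kind the pilot describes, dwell time per AOI and the number of between-AOI shifts as a crude visual-search index.

```python
# Minimal sketch (not the study's pipeline): derive simple gaze metrics from a
# hypothetical stream of (timestamp_s, area_of_interest) samples sorted by time.
from collections import defaultdict

def summarize_gaze(samples):
    """Return (dwell time per AOI in seconds, number of between-AOI shifts)."""
    dwell = defaultdict(float)   # total time spent looking at each AOI
    transitions = 0              # AOI-to-AOI shifts, a crude visual-search index
    for (t0, aoi0), (t1, aoi1) in zip(samples, samples[1:]):
        dwell[aoi0] += t1 - t0   # attribute the interval to the current AOI
        if aoi1 != aoi0:
            transitions += 1
    return {aoi: round(sec, 3) for aoi, sec in dwell.items()}, transitions

# Hypothetical excerpt from an IVR social-skills scene (seconds, labelled AOI)
samples = [
    (0.00, "avatar_face"), (0.40, "avatar_face"), (0.80, "background"),
    (1.10, "avatar_face"), (1.60, "avatar_body"), (2.00, "avatar_face"),
]
dwell, transitions = summarize_gaze(samples)
print(dwell)        # {'avatar_face': 1.3, 'background': 0.3, 'avatar_body': 0.4}
print(transitions)  # 4
```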
Collapse
|
23
|
Savickaite S, Husselman TA, Taylor R, Millington E, Hayashibara E, Arthur T. Applications of virtual reality (VR) in autism research: current trends and taxonomy of definitions. Journal of Enabling Technologies 2022. [DOI: 10.1108/jet-05-2022-0038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose: Recent work could further improve the use of VR technology by advocating the use of psychological theories in task design and by highlighting certain properties of VR configurations and human-VR interactions. The variety of VR technology used in the trials prevents us from establishing a systematic relationship between the technology type and its effectiveness; more research is needed to study this link, and this piece is an attempt to shed a spotlight on the issue. Design/methodology/approach: To explore recent developments in the field, the authors followed the scoping-review procedures of Savickaite et al. (2022) and included publications from 2021 to 2022. Findings: In this updated analysis, the research themes emerging over the last two years were similar to those identified previously. Social training and intervention work still dominates the research area, in spite of recent calls from the autism community to broaden the scientific understanding of neurodivergent experiences and daily living behaviours. Although autism is often characterised by difficulties with social interaction, these are just one part of the presentation. Sensory differences, motor difficulties and repetitive behaviours are also important facets of the condition, as are various wider aspects of health, wellbeing and quality of life. However, many of these topics appear to be understudied in research on VR applications for autism. Originality/value: VR stands out from other representational technologies because of its immersion, presence and interactivity, and it has grown into its own niche. The question of what constitutes a truly immersive experience has resurfaced, and VR has clearly established itself in autism research. As the number of studies continues to grow, it is a perfect time to reconsider and update definitions of immersion and their reliance on hardware.
Collapse
|
24
|
Abstract
Whereas traditional teaching environments encourage lively, engaged interaction and reward extrovert qualities, introverts and others whose symptoms make social engagement difficult, such as those with autism spectrum disorder (ASD), are often disadvantaged. This population tends to be more engaged in quieter, low-key learning environments and often does not speak up and answer questions in traditional lecture-style classes. These individuals are often passed over in school and later in their careers for not speaking up and are assumed not to be as competent as their gregarious and outgoing colleagues. With the rise of the metaverse and the democratization of virtual reality (VR) technology, post-secondary education is especially poised to capitalize on the immersive learning environments that social VR provides and to prepare students for the future of work, where virtual collaboration will be key. This study seeks to reconsider the role of VR and the metaverse for introverts and those with ASD. The metaverse has the potential to continue the social and workplace changes already accelerated by the pandemic and to open new avenues for communication and collaboration for a more inclusive audience, today and tomorrow.
Collapse
|