1
Michelot B, Corneyllie A, Thevenet M, Duffner S, Perrin F. A modular machine learning tool for holistic and fine-grained behavioral analysis. Behav Res Methods 2024; 57:24. [PMID: 39702505 DOI: 10.3758/s13428-024-02511-3]
Abstract
Artificial intelligence techniques offer promising avenues for exploring human body features from videos, yet no freely accessible tool has reliably provided holistic and fine-grained behavioral analyses to date. To address this, we developed a machine learning tool based on a two-level approach: a first, lower level that uses computer vision to extract fine-grained and comprehensive behavioral features such as skeleton and facial points, gaze, and action units; and a second level of machine learning classification, coupled with explainability for modularity, to determine which behavioral features are triggered by specific environments. To validate our tool, we filmed 16 participants across six conditions that varied according to the presence of a person ("Pers"), a sound ("Snd"), or silence ("Rest"), and according to emotional level using self-referential ("Self") and control ("Ctrl") stimuli. We demonstrated the effectiveness of our approach by extracting and correcting behavior from videos using two computer vision software packages (OpenPose and OpenFace) and by training two algorithms (XGBoost and long short-term memory [LSTM]) to differentiate between experimental conditions. High classification rates were achieved for "Pers" conditions versus "Snd" or "Rest" (AUC = 0.8-0.9), with explainability revealing action units and gaze as key features. Additionally, moderate classification rates were attained for "Snd" versus "Rest" (AUC = 0.7), attributed to action units and limb and head points, as well as for "Self" versus "Ctrl" (AUC = 0.7-0.8), due to facial points. These findings were consistent with a more conventional hypothesis-driven approach. Overall, our study suggests that our tool is well suited for holistic and fine-grained behavioral analysis and offers modularity for extension into more complex naturalistic environments.
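To make the second, classification level of such a pipeline concrete, here is a minimal sketch assuming per-frame OpenFace/OpenPose features have already been exported to CSV; the file names, column layout, aggregation scheme, and use of built-in feature importances for explainability are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch: aggregate per-frame behavioral features (e.g., action units,
# gaze, skeleton points) to one row per trial, classify conditions with
# XGBoost, and inspect feature importances as a simple form of explainability.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

frames = pd.read_csv("behavioral_features.csv")   # hypothetical per-frame export
# Assumes all feature columns are numeric; summarize each trial by mean and std.
trials = frames.groupby("trial_id").agg(["mean", "std"])
trials.columns = ["_".join(col) for col in trials.columns]

# Hypothetical label file mapping trial_id to condition ("Pers", "Snd", "Rest", ...).
labels = pd.read_csv("trial_labels.csv").set_index("trial_id")["condition"]
X = trials.loc[labels.index].to_numpy()
y = (labels == "Pers").astype(int).to_numpy()   # e.g., "Pers" vs. everything else

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")

clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("most informative features:", [trials.columns[i] for i in top])
```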
Affiliation(s)
- Bruno Michelot
- CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675 Bron, France
- Alexandra Corneyllie
- CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675 Bron, France
- Marc Thevenet
- CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675 Bron, France
- Stefan Duffner
- IMAGINE Team, Laboratoire d'InfoRmatique en Image et Systèmes d'information - UMR 5205 CNRS - INSA Lyon, Université Claude Bernard Lyon 1 - Université Lumière Lyon 2 - École Centrale de Lyon, Lyon, France
- Fabien Perrin
- CAP Team, Centre de Recherche en Neurosciences de Lyon - INSERM U1028 - CNRS UMR 5292 - UCBL - UJM, 95 Boulevard Pinel, 69675 Bron, France
2
Thompson A, Ruch D, Bridge JA, Fontanella C, Beauchaine TP. Self-injury and suicidal behaviors in high-risk adolescents: Distal predictors, proximal correlates, and interactive effects of impulsivity and emotion dysregulation. Dev Psychopathol 2024:1-14. [PMID: 39494962 DOI: 10.1017/s0954579424001342]
Abstract
Suicide rates are rising among U.S. youth, yet our understanding of developmental mechanisms associated with increased suicide risk is limited. One high-risk pathway involves an interaction between heritable trait impulsivity and emotion dysregulation (ED). Together, these confer increased vulnerability to nonsuicidal self-injury (NSSI), suicidal ideation (SI), and suicide attempts (SAs). Previous work, however, has been limited to homogeneous samples. We extend the Impulsivity × ED hypothesis to a more diverse sample of adolescents (N = 344, ages 12-15 at baseline; 107 males and 237 females) who were treated for major depression and assessed four times over two years. In multilevel models, the Impulsivity × ED interaction was associated with higher levels and worse trajectories of NSSI, SI, and SAs. As expected, stressful life events were also associated with poorer trajectories for all outcomes, and NSSI was associated with future and concurrent SI and SAs. These findings extend one developmental pathway of risk for self-harming and suicidal behaviors to more diverse adolescents, with potential implications for prevention.
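For readers unfamiliar with this modeling setup, the sketch below shows one way to fit such a multilevel growth model (random intercepts and slopes over time, with the key Impulsivity × ED term) in statsmodels; the long-format file and variable names are hypothetical, not the authors' data.

```python
# Hedged sketch: repeated assessments nested within adolescents, with an
# impulsivity-by-emotion-dysregulation interaction predicting NSSI trajectories.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nssi_long.csv")  # hypothetical: one row per participant x wave

model = smf.mixedlm(
    "nssi ~ time * (impulsivity + ed) + impulsivity:ed + stress",
    data=df,
    groups=df["participant_id"],  # observations clustered within adolescents
    re_formula="~time",           # random intercept and random slope over time
)
print(model.fit().summary())
```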
Affiliation(s)
- Amanda Thompson
- The Center for Suicide Prevention and Research, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Donna Ruch
- The Center for Suicide Prevention and Research, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Departments of Pediatrics and Psychiatry & Behavioral Health, The Ohio State University College of Medicine, Columbus, OH, USA
- Jeffrey A Bridge
- The Center for Suicide Prevention and Research, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Departments of Pediatrics and Psychiatry & Behavioral Health, The Ohio State University College of Medicine, Columbus, OH, USA
- Cynthia Fontanella
- The Center for Suicide Prevention and Research, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Departments of Pediatrics and Psychiatry & Behavioral Health, The Ohio State University College of Medicine, Columbus, OH, USA
3
Liu Y, Song Y, Li H, Leng Z, Li M, Chen H. Impaired facial emotion recognition in individuals with bipolar disorder. Asian J Psychiatr 2024; 102:104250. [PMID: 39321753 DOI: 10.1016/j.ajp.2024.104250]
Abstract
BACKGROUND Individuals with bipolar disorder (BD) often struggle with emotional regulation and social interactions, partly due to difficulties in accurately recognizing facial emotions. METHODS From September 2021 to February 2023, 69 individuals with BD (23 in a manic/hypomanic episode [BME], 23 in a depressive episode [BDE], and 23 euthymic [EUT]) and 23 healthy controls (HCs) were enrolled. Diagnoses followed DSM-IV criteria using the M.I.N.I. 5.0, alongside assessments with the 17-item Hamilton Depression Scale and the Young Mania Rating Scale. Recognition tasks involved 84 facial expression images across six categories. The Wilcoxon rank-sum test was used to compare two groups, and the Kruskal-Wallis test to compare multiple groups, with subsequent adjusted pairwise comparisons. RESULTS The overall correct recognition rate of facial expressions in the BD group (79%) was significantly lower than that of the HC group (83%) (P=0.004). Primary differences were noted for neutral (93% vs. 100%, P=0.012) and fear (79% vs. 86%, P=0.023) expressions. Within the BD group, correct recognition rates were 71% for BME, 80% for BDE, and 80% for EUT, all lower than in the HC group. Significant differences in correct recognition rates of neutral, fear, and joy expressions were observed among the four groups (P<0.05), with the BME group exhibiting the lowest rates. Misidentification of facial expressions was more frequent in the BD group than in the HC group, particularly among negative expressions. CONCLUSION Patients with BD demonstrate lower correct recognition and higher misidentification rates of facial expressions, with those experiencing manic episodes showing impaired recognition of neutral, joy, and fear expressions.
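As a brief illustration of the two reported tests, the sketch below applies the Wilcoxon rank-sum (Mann-Whitney U) and Kruskal-Wallis tests to simulated per-participant recognition rates; group sizes follow the abstract, but the values are synthetic, not study data.

```python
# Hedged sketch: nonparametric comparisons of recognition rates,
# BD vs. HC and then across the four groups (BME, BDE, EUT, HC).
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(0)
bd = rng.normal(0.79, 0.08, 69)   # simulated per-participant recognition rates
hc = rng.normal(0.83, 0.06, 23)

u, p = mannwhitneyu(bd, hc, alternative="two-sided")
print(f"BD vs. HC: U = {u:.1f}, p = {p:.3f}")

bme, bde, eut = bd[:23], bd[23:46], bd[46:]   # illustrative subgrouping
h, p = kruskal(bme, bde, eut, hc)
print(f"four-group Kruskal-Wallis: H = {h:.2f}, p = {p:.3f}")
```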
Affiliation(s)
- Yiyang Liu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
- Yuqing Song
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
- Hui Li
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
- Zhiwei Leng
- Health Policy and Economic Research Platform, the State Key Infrastructure for Translational Medicine, Institute of Clinical Medicine, Peking Union Medical College Hospital, Beijing 100730, China
- Mengqian Li
- Department of Psychosomatic Medicine, The First Affiliated Hospital of Nanchang University, Nanchang 330006, China
- Hongguang Chen
- Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing 100191, China
4
Maxwell JW, Sanchez DN, Ruthruff E. Infrequent facial expressions of emotion do not bias attention. Psychol Res 2023; 87:2449-2459. [PMID: 37258662 DOI: 10.1007/s00426-023-01844-6]
Abstract
Despite the obvious importance of facial expressions of emotion, most studies have found that they do not bias attention. A critical limitation, however, is that these studies generally present face distractors on all trials of the experiment. For other kinds of emotional stimuli, such as emotional scenes, infrequently presented stimuli elicit greater attentional bias than frequently presented stimuli, perhaps due to suppression or habituation. The goal of the current study, then, was to test whether such modulation of attentional bias by distractor frequency generalizes to facial expressions of emotion. In Experiment 1, both angry and happy faces failed to bias attention, despite being infrequently presented. Even when the location of these face cues was less predictable (presented in one of two possible locations), no attentional bias was observed (Experiment 2). Moreover, there was no bottom-up influence of angry and happy faces shown under high or low perceptual load (Experiment 3). We conclude that task-irrelevant posed facial expressions of emotion cannot bias attention even when presented infrequently.
Affiliation(s)
- Joshua W Maxwell
- Department of Psychology, University of New Mexico, Albuquerque, NM, 87131, USA
- Danielle N Sanchez
- Department of Psychology, University of New Mexico, Albuquerque, NM, 87131, USA
- Eric Ruthruff
- Department of Psychology, University of New Mexico, Albuquerque, NM, 87131, USA
5
Burgess R, Culpin I, Costantini I, Bould H, Nabney I, Pearson RM. Quantifying the efficacy of an automated facial coding software using videos of parents. Front Psychol 2023; 14:1223806. [PMID: 37583610 PMCID: PMC10425266 DOI: 10.3389/fpsyg.2023.1223806]
Abstract
Introduction This work explores the use of automated facial coding software (FaceReader) as an alternative and/or complementary method to manual coding. Methods We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (namely Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. Using binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of parent gender on the models' predictive accuracy. Results We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and we discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whilst Negative expressions performed poorly. Mothers' faces were more important for predicting Positive and Neutral expressions, whilst fathers' faces were more important in predicting Negative and Surprise expressions. Discussion We discuss the implications of our findings in the context of future automated facial coding studies, and we emphasise the need to consider gender-specific influences in automated facial coding research.
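A minimal sketch of this analysis pattern is shown below, assuming manual and FaceReader-style outputs have been merged into one frame-level table; all column names are assumptions rather than FaceReader's actual export format.

```python
# Hedged sketch: cross-tabulate manual vs. automated labels, then fit a
# logistic regression predicting a manual "Positive" code from an automated
# positivity score, with a parent-gender interaction term.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("coded_frames.csv")  # hypothetical: one row per coded frame

# Agreement structure between the two coding methods.
print(pd.crosstab(df["manual_code"], df["auto_code"]))

df["manual_positive"] = (df["manual_code"] == "Positive").astype(int)
logit = smf.logit("manual_positive ~ auto_positive * gender", data=df).fit()
print(logit.summary())  # the interaction term captures gender-specific accuracy
```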
Affiliation(s)
- R. Burgess
- The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
- I. Culpin
- The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
- Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King's College London, London, United Kingdom
- I. Costantini
- The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
- H. Bould
- The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
- The Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom
- The Gloucestershire Health and Care NHS Foundation Trust, Gloucester, United Kingdom
- I. Nabney
- The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
- R. M. Pearson
- The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
- The Department of Psychology, Manchester Metropolitan University, Manchester, United Kingdom
6
Coding infant engagement in the Face-to-Face Still-Face paradigm using deep neural networks. Infant Behav Dev 2023; 71:101827. [PMID: 36806017 DOI: 10.1016/j.infbeh.2023.101827]
Abstract
BACKGROUND The Face-to-Face Still-Face (FFSF) task is a validated and commonly used observational measure of mother-infant socio-emotional interaction. With the ascendance of deep learning-based facial emotion recognition, complex tasks such as the coding of FFSF videos may now be performed with a high degree of accuracy by deep neural networks (DNNs). The primary objective of this study was to test the accuracy of four DNN image classification models against the coding of infant engagement conducted by two trained independent manual raters. METHODS 68 mother-infant dyads completed the FFSF task at three timepoints. Two trained independent raters undertook second-by-second manual coding of infant engagement into one of four classes: 1) positive affect, 2) neutral affect, 3) object/environment engagement, and 4) negative affect. RESULTS Training four different DNN models on 40,000 images, we achieved a maximum accuracy of 99.5% on image classification of infant frames taken from recordings of the FFSF task, with a maximum inter-rater reliability (Cohen's κ) of 0.993. LIMITATIONS This study inherits all sampling and experimental limitations of the original study from which the data were taken, namely a relatively small and primarily White sample. CONCLUSIONS Given the extremely high classification accuracy, these findings suggest that DNNs could be used to code infant engagement in FFSF recordings. DNN image classification models may also have the potential to improve the efficiency of coding all observational tasks, with applications across multiple fields of human behavior research.
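The paper's exact models are not reproduced here, but the general recipe (fine-tuning a pretrained image classifier on labeled frames) can be sketched as follows; the directory layout, backbone, and hyperparameters are assumptions, not the authors' setup.

```python
# Hedged sketch: fine-tune a pretrained CNN to classify infant frames into
# the four engagement classes used by the manual raters.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Hypothetical layout: frames/<class_name>/*.png, one folder per engagement class.
data = datasets.ImageFolder("frames", transform=tfm)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)  # four engagement classes

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, targets in loader:   # one epoch shown; real training iterates
    opt.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    opt.step()
```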
7
James KM, Balderrama-Durbin C, Kobezak HM, Recchia N, Foster CE, Gibb BE. Dynamics of Affective Reactivity during Mother-Daughter Interactions: The Impact of Adolescent Non-Suicidal Self-Injury. Res Child Adolesc Psychopathol 2023; 51:597-611. [PMID: 36607473 DOI: 10.1007/s10802-022-01011-2]
Abstract
Non-suicidal self-injury (NSSI) is an alarming public health concern that is particularly widespread among adolescents. The current study examined affective responses during mother-daughter interactions in adolescent girls with and without a history of NSSI. Participants were 60 girls aged 13-17, with (n = 27) and without (n = 33) a history of NSSI, and their mothers. Adolescents and their mothers completed two interaction tasks, one positive and one negative. During these interactions, facial affect was assessed via electromyography (EMG). Results of Actor-Partner Interdependence Modeling (APIM) revealed several intra- and interpersonal disruptions in affect during both tasks among dyads in which the adolescent had an NSSI history. Findings suggest deficits in both self- and co-regulation of facial affect during mother-daughter interactions in dyads in which the adolescent reports NSSI. Ultimately, if replicated and extended in longitudinal research, these disruptions may prove to be promising targets of intervention to reduce risk for future NSSI in adolescent girls.
Affiliation(s)
- Kiera M James
- Department of Psychology, Binghamton University (SUNY), Binghamton, NY, 13902, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Holly M Kobezak
- Department of Psychology, Binghamton University (SUNY), Binghamton, NY, 13902, USA
- Nicolette Recchia
- Department of Psychology, Binghamton University (SUNY), Binghamton, NY, 13902, USA
- Claire E Foster
- Department of Psychology, Binghamton University (SUNY), Binghamton, NY, 13902, USA
- Brandon E Gibb
- Department of Psychology, Binghamton University (SUNY), Binghamton, NY, 13902, USA
8
Carpenter KLH, Hashemi J, Campbell K, Lippmann SJ, Baker JP, Egger HL, Espinosa S, Vermeer S, Sapiro G, Dawson G. Digital Behavioral Phenotyping Detects Atypical Pattern of Facial Expression in Toddlers with Autism. Autism Res 2021; 14:488-499. [PMID: 32924332 PMCID: PMC7920907 DOI: 10.1002/aur.2391]
Abstract
Commonly used screening tools for autism spectrum disorder (ASD) generally rely on subjective caregiver questionnaires. While behavioral observation is more objective, it is also expensive, time-consuming, and requires significant expertise to perform. As such, there remains a critical need to develop feasible, scalable, and reliable tools that can characterize ASD risk behaviors. This study assessed the utility of a tablet-based behavioral assessment for eliciting and detecting one type of risk behavior, namely patterns of facial expression, in 104 toddlers (ASD N = 22) and evaluated whether such patterns differentiated toddlers with and without ASD. The assessment consisted of the child sitting on his/her caregiver's lap and watching brief movies shown on a smart tablet while the embedded camera recorded the child's facial expressions. Computer vision analysis (CVA) automatically detected and tracked facial landmarks, which were used to estimate head position and facial expressions (Positive, Neutral, All Other). Using CVA, specific points throughout the movies were identified that reliably differentiated between children with and without ASD based on their patterns of facial movement and expression (areas under the curve for individual movies ranged from 0.62 to 0.73). During these instances, children with ASD more frequently displayed Neutral expressions compared to children without ASD, who had more All Other expressions. The frequency of All Other expressions was driven by non-ASD children more often displaying raised eyebrows and an open mouth, characteristic of engagement/interest. Preliminary results suggest that computational coding of facial movements and expressions via a tablet-based assessment can detect differences in affective expression, one of the early, core features of ASD. LAY SUMMARY This study tested the use of a tablet in the behavioral assessment of young children with autism. Children watched a series of developmentally appropriate movies and their facial expressions were recorded using the camera embedded in the tablet. Results suggest that computational assessments of facial expressions may be useful in early detection of symptoms of autism.
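As a small illustration of the discrimination step, the sketch below computes a per-movie AUC from one plausible summary feature (each child's proportion of Neutral frames); the input file and column names are hypothetical, not the authors' data format.

```python
# Hedged sketch: ROC AUC for ASD vs. non-ASD from a per-child expression
# summary, mirroring the reported per-movie AUCs of 0.62-0.73.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("movie_features.csv")  # hypothetical: child_id, neutral_prop, asd (0/1)
print(f"AUC for this movie: {roc_auc_score(df['asd'], df['neutral_prop']):.2f}")
```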
Affiliation(s)
- Kimberly L H Carpenter
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Jordan Hashemi
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Kathleen Campbell
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Department of Pediatrics, University of Utah, Salt Lake City, Utah, USA
- Steven J Lippmann
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Jeffrey P Baker
- Department of Pediatrics, Duke University School of Medicine, Durham, North Carolina, USA
- Helen L Egger
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- NYU Langone Child Study Center, New York University, New York, New York, USA
- Steven Espinosa
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Saritha Vermeer
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Guillermo Sapiro
- Departments of Biomedical Engineering, Computer Science, and Mathematics, Duke University, Durham, North Carolina, USA
- Geraldine Dawson
- Duke Center for Autism and Brain Development, Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, USA
9
van Heerden A, Leppanen J, Rotheram-Borus MJ, Worthman CM, Kohrt BA, Skeen S, Giese S, Hughes R, Bohmer L, Tomlinson M. Emerging Opportunities Provided by Technology to Advance Research in Child Health Globally. Glob Pediatr Health 2020; 7:2333794X20917570. [PMID: 32523976 PMCID: PMC7235657 DOI: 10.1177/2333794X20917570]
Abstract
Current approaches to longitudinal assessment of children's developmental and psychological well-being, as mandated in the United Nations Sustainable Development Goals, are expensive and time-consuming. Substantive understanding of global progress toward these goals will require a suite of new robust, cost-effective research tools designed to assess key developmental processes in diverse settings. While first steps have been taken toward this end through efforts such as the National Institutes of Health's Toolbox, experience-near approaches, including naturalistic observation, have remained too costly and time-consuming to scale to the population level. This perspective presents four emerging technologies with high potential for advancing the field of child health and development research, namely (1) affective computing, (2) ubiquitous computing, (3) eye tracking, and (4) machine learning. By drawing the attention of scientists, policy makers, investors/funders, and the media to the applications and potential risks of these emerging opportunities, we hope to inspire a fresh wave of innovation and new solutions to the global challenges faced by children and their families.
Affiliation(s)
- Alastair van Heerden
- Human Sciences Research Council, Pietermaritzburg, South Africa
- University of the Witwatersrand, Johannesburg, South Africa
- Sarah Skeen
- Stellenbosch University, Stellenbosch, Western Cape, South Africa
- Rob Hughes
- The Children's Investment Fund Foundation, London, UK
- Lisa Bohmer
- Conrad N. Hilton Foundation, Westlake Village, CA, USA
- Mark Tomlinson
- Stellenbosch University, Stellenbosch, Western Cape, South Africa
10
Radovic A, Badawy SM. Technology Use for Adolescent Health and Wellness. Pediatrics 2020; 145(Suppl 2):S186-S194.
Abstract
As avid users of technology, adolescents are a key demographic to engage when designing and developing technology applications for health. There are multiple opportunities for improving adolescent health, from promoting preventive behaviors to providing guidance for adolescents with chronic illness in supporting treatment adherence and transition to adult health care systems. This article provides a brief overview of current technologies and then highlights new technologies being used specifically for adolescent health, such as artificial intelligence, virtual and augmented reality, and machine learning. Because there is a paucity of evidence in this field, we make recommendations for future research.
Affiliation(s)
- Ana Radovic
- Department of Pediatrics, School of Medicine, University of Pittsburgh and University of Pittsburgh Medical Center Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania
- Sherif M Badawy
- Department of Pediatrics, Feinberg School of Medicine, Northwestern University, Chicago, Illinois
- Division of Hematology, Oncology, Neurooncology, and Stem Cell Transplantation, Ann & Robert H. Lurie Children's Hospital of Chicago, Chicago, Illinois