1
Baugerud GA, Johnson MS, Dianiska R, Røed RK, Powell MB, Lamb ME, Hassan SZ, Sabet SS, Hicks S, Salehi P, Riegler MA, Halvorsen P, Quas J. Using an AI-based avatar for interviewer training at Children's Advocacy Centers: Proof of Concept. Child Maltreatment 2024:10775595241263017. [PMID: 38889731] [DOI: 10.1177/10775595241263017]
Abstract
This proof-of-concept study focused on interviewers' behaviors and perceptions when interacting with a dynamic AI child avatar alleging abuse. Professionals (N = 68) took part in a virtual reality (VR) study in which they questioned an avatar presented as a child victim of sexual or physical abuse. Of interest was how interviewers questioned the avatar, how productive the child avatar was in response, and how interviewers perceived the VR interaction. Findings suggested alignment between interviewers' virtual questioning approaches and their typical questioning behavior in real-world investigative interviews, with a diverse range of questions used to elicit disclosures from the child avatar. The avatar responded to most question types as children typically do, though more nuanced programming of the avatar's productivity in response to complex question types is needed. Participants rated the avatar positively and felt comfortable with the VR experience. Results underscored the potential of AI-based interview training as a scalable, standardized alternative to traditional methods.
Affiliation(s)
- Saeed S Sabet
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Steven Hicks
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Pegah Salehi
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Michael A Riegler
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Pål Halvorsen
- Simula Metropolitan Center for Digital Engineering AS, Lysaker, Norway
- Jodi Quas
- University of California Irvine, Irvine, CA, USA
2
Krause N, Gewehr E, Barbe H, Merschhemke M, Mensing F, Siegel B, Müller JL, Volbert R, Fromberger P, Tamm A, Pülschen S. How to prepare for conversations with children about suspicions of sexual abuse? Evaluation of an interactive virtual reality training for student teachers. Child Abuse & Neglect 2024; 149:106677. [PMID: 38335563] [DOI: 10.1016/j.chiabu.2024.106677]
Abstract
BACKGROUND Training in child interviewing in cases of suspected (sexual) abuse must include ongoing practice, expert feedback, and performance evaluation. Computer-based interview simulations that include these components have shown efficacy in promoting open-ended questioning skills. OBJECTIVE We evaluated ViContact, a training program for childcare professionals on conversations with children in cases of suspected abuse. PARTICIPANTS AND SETTING 110 student teachers were divided into four groups and took part either in a two-hour virtual reality training involving verbal interaction with virtual children followed by automated, personalized feedback (VR), two days of online seminar training on conversation skills, related knowledge, and action strategies (ST), a combination of both (ST + VR), or no training (control group, CG). METHODS We conducted a pre-registered, randomized controlled evaluation study. Pre-post changes in three behavioral outcomes in the VR conversations and two questionnaire scores (self-efficacy and, undesirably, naïve confidence in one's own judgment of an abuse suspicion) were analyzed via mixed-ANOVA interaction effects. RESULTS Combined training vs. CG led to improvements in the proportion of recommended questions (ηp2 = 0.75), supportive utterances (ηp2 = 0.36), and self-efficacy (ηp2 = 0.77; all ps < .001). Both interventions alone improved the proportion of recommended questions (VR: ηp2 = 0.67, ST: ηp2 = 0.68, ps < .001) and self-efficacy (VR: ηp2 = 0.24, ST: ηp2 = 0.65, ps < .001), but not supportive utterances (VR: ηp2 = 0.10, ST: ηp2 = 0.13, both n.s.). CONCLUSIONS The combination of VR and ST proved most beneficial. Thus, VR exercises should not replace, but rather complement, classical training approaches.
Affiliation(s)
- Elsa Gewehr
- Psychologische Hochschule Berlin, Germany; Universität Kassel, Germany
- Hermann Barbe
- Klinik für Psychiatrie und Psychotherapie, Forensische Psychiatrie Universitätsmedizin Göttingen, Germany
- Bruno Siegel
- Klinik für Psychiatrie und Psychotherapie, Forensische Psychiatrie Universitätsmedizin Göttingen, Germany
- Jürgen L Müller
- Klinik für Psychiatrie und Psychotherapie, Forensische Psychiatrie Universitätsmedizin Göttingen, Germany
- Peter Fromberger
- Klinik für Psychiatrie und Psychotherapie, Forensische Psychiatrie Universitätsmedizin Göttingen, Germany
- Anett Tamm
- Psychologische Hochschule Berlin, Germany
3
Hsu CW, Gross J, Colombo M, Hayne H. Look into my eyes: a "faceless" avatar interviewer lowers reporting threshold for adult eyewitnesses. Mem Cognit 2023; 51:1761-1773. [PMID: 37072575] [PMCID: PMC10638134] [DOI: 10.3758/s13421-023-01424-4]
Abstract
Evidential interviewing is often used to gather important information, which can determine the outcome of a criminal case. An interviewer's facial features, however, may impact reporting during this task. Here, we investigated adults' interview performance using a novel tool, a faceless avatar interviewer, designed to minimize the impact of an interviewer's visual communication signals and potentially enhance memory performance. Adults were interviewed about the details of a video by (1) a human-appearing avatar or a human interviewer (Experiment 1; N = 105) or (2) a human-appearing avatar or a faceless avatar interviewer (Experiment 2; N = 109). Participants assigned to the avatar interviewer condition were (1) asked whether they thought the interviewer was computer or human operated (Experiment 1) or (2) explicitly told that the interviewer was either computer or human operated (Experiment 2). Adults' memory performance was statistically equivalent whether they were interviewed by a human-appearing avatar or a human interviewer, but, relative to the human-appearing avatar, adults who were interviewed by a faceless avatar reported more correct (but also more incorrect) details in response to free-recall questions. Participants who indicated that the avatar interviewer was computer operated, as opposed to human operated, provided more accurate memory reports, but specifically telling participants that the avatar was computer or human operated had no influence on their memory reports. The present study introduced a novel interviewing tool and highlighted the possible cognitive and social influences of an interviewer's facial features on adults' reports of a witnessed event.
Affiliation(s)
- Che-Wei Hsu
- Department of Psychology, University of Otago, Dunedin, New Zealand.
- Department of Psychological Medicine, University of Otago, PO Box 54, Dunedin, 9054, New Zealand.
- Julien Gross
- Department of Psychology, University of Otago, Dunedin, New Zealand
- Marea Colombo
- Department of Psychology, University of Otago, Dunedin, New Zealand
- Harlene Hayne
- Department of Psychology, University of Otago, Dunedin, New Zealand
- School of Population Health, Curtin University, Perth, Australia
4
Røed RK, Powell MB, Riegler MA, Baugerud GA. A field assessment of child abuse investigators' engagement with a child-avatar to develop interviewing skills. Child Abuse & Neglect 2023; 143:106324. [PMID: 37390589] [DOI: 10.1016/j.chiabu.2023.106324]
Abstract
BACKGROUND Child investigative interviewing is a complex skill requiring specialised training. A critical training element is practice. Simulations with digital avatars are cost-effective options for delivering training. This study of real-world data provides novel insights by evaluating a large number of trainees' engagement with LiveSimulation (LiveSim), an online child-avatar in which a trainee selects a question from an option-tree and the avatar responds with the level of detail appropriate for the question type. While LiveSim has been shown to facilitate learning of open-ended questions, its utility from a user-engagement perspective remains to be examined. OBJECTIVE We evaluated trainees' engagement with LiveSim, focusing on patterns of interaction (e.g., amount), appropriateness of the prompt structure, and the programme's technical compatibility. PARTICIPANTS AND SETTING Professionals (N = 606, mainly child protection workers and police) were offered the avatar as part of an intensive course on how to interview a child conducted between 2009 and 2018. METHODS For descriptive analysis, Visual Basic for Applications coding in Excel was applied to evaluate engagement and internal attributes of LiveSim. A compatibility study of the programme was run testing different hardware, focusing on access and function. RESULTS The trainees demonstrated good engagement with the programme across a variety of measures, including the number and timing of activity completions. Overall, given the known utility of avatars, our results provide strong support for the notion that a technically simple avatar like LiveSim can awaken user engagement. This is important knowledge for the further development of learning simulations using next-generation technology.
Affiliation(s)
- Ragnhild Klingenberg Røed
- Department of Social Work, Child Welfare and Social Policy, Faculty of Social Sciences, Oslo Metropolitan University, Oslo, Norway.
- Martine B Powell
- Centre for Investigative Interviewing, Griffith Criminology Institute, Griffith University, Brisbane, Australia
- Gunn Astrid Baugerud
- Department of Social Work, Child Welfare and Social Policy, Faculty of Social Sciences, Oslo Metropolitan University, Oslo, Norway
5
Røed RK, Baugerud GA, Hassan SZ, Sabet SS, Salehi P, Powell MB, Riegler MA, Halvorsen P, Johnson MS. Enhancing questioning skills through child avatar chatbot training with feedback. Front Psychol 2023; 14:1198235. [PMID: 37519386] [PMCID: PMC10374201] [DOI: 10.3389/fpsyg.2023.1198235]
Abstract
Training child investigative interviewing skills is a specialized task. Those being trained need opportunities to practice their skills in realistic settings and to receive immediate feedback. A key step in ensuring the availability of such opportunities is to develop a dynamic, conversational avatar, using artificial intelligence (AI) technology, that can provide implicit and explicit feedback to trainees. In this iterative process, use of a chatbot avatar to test the language and conversation model is crucial. The model is fine-tuned with interview data and realistic scenarios. This study used a pre-post training design to assess learning effects on questioning skills across four child interview sessions with such a chatbot. Thirty university students from the areas of child welfare, social work, and psychology were divided into two groups; one group received direct feedback (n = 12), whereas the other received no feedback (n = 18). An automatic coding function in the language model identified the question types; information on question types was provided as feedback in the direct-feedback group only. The scenario involved a 6-year-old girl being interviewed about alleged physical abuse. After the first interview session (baseline), all participants watched a video lecture on memory, witness psychology, and questioning before conducting two additional interview sessions and completing a post-experience survey. One week later, they conducted a fourth interview and completed another post-experience survey. All chatbot transcripts were coded for interview quality. The language model's automatic feedback function was highly reliable in classifying question types, reflecting substantial agreement among the raters [Cohen's kappa (κ) = 0.80] in coding open-ended, cued recall, and closed questions. Participants who received direct feedback showed significantly greater improvement in open-ended questioning than those in the non-feedback group, with a significant increase in the number of open-ended questions used between the baseline and each of the other three chat sessions. This study demonstrates that child avatar chatbot training improves interview quality with regard to recommended questioning, especially when combined with direct feedback on questioning.
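The reliability statistic reported above, Cohen's kappa, corrects raw rater agreement for the agreement expected by chance. A minimal sketch of the computation over question-type labels (the labels below are illustrative, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters labelling the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement if each rater assigned labels independently,
    # according to their own label frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes for six questions (open-ended / cued recall / closed):
human = ["open", "open", "cued", "closed", "open", "cued"]
model = ["open", "cued", "cued", "closed", "open", "cued"]
print(f"kappa = {cohens_kappa(human, model):.2f}")  # kappa = 0.74
```

By common convention, values around 0.80, as reported here, indicate substantial to almost-perfect agreement.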
Affiliation(s)
- Ragnhild Klingenberg Røed
- Department of Social Work, Child Welfare and Social Policy, Faculty of Social Science, Oslo Metropolitan University, Oslo, Norway
- Gunn Astrid Baugerud
- Department of Social Work, Child Welfare and Social Policy, Faculty of Social Science, Oslo Metropolitan University, Oslo, Norway
- Syed Zohaib Hassan
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, Oslo, Norway
- Saeed S. Sabet
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, Oslo, Norway
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- Pegah Salehi
- Department of Computer Science, Faculty of Technology, Art and Design, Oslo Metropolitan University, Oslo, Norway
- Martine B. Powell
- Center for Investigative Interviewing, Griffith Criminology Institute, Griffith University, Brisbane, QLD, Australia
- Pål Halvorsen
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- Miriam S. Johnson
- Department of Behavioral Science, Faculty of Health Science, Oslo Metropolitan University, Oslo, Norway
6
Haginoya S, Ibe T, Yamamoto S, Yoshimoto N, Mizushi H, Santtila P. AI avatar tells you what happened: The first test of using AI-operated children in simulated interviews to train investigative interviewers. Front Psychol 2023; 14:1133621. [PMID: 36910814] [PMCID: PMC9995382] [DOI: 10.3389/fpsyg.2023.1133621]
Abstract
Previous research has shown that simulated child sexual abuse (CSA) interview training using avatars, paired with feedback and modeling, improves interview quality. However, to make this approach scalable, the classification of interviewer questions needs to be automated. We tested an automated question classification system for these avatar interviews while also providing automated interventions (feedback and modeling) to improve interview quality. Forty-two professionals conducted two simulated CSA interviews online and were randomly provided with no intervention, feedback, or modeling after the first interview. Feedback consisted of the outcome of the alleged case and comments on the quality of the interviewer's questions. Modeling consisted of learning points and videos illustrating good and bad questioning methods. The total percentage of agreement in question coding between human operators and the automated classification was 72% for the main categories (recommended vs. not recommended) and 52% when 11 subcategories were considered. The intervention groups improved from the first to the second interview, whereas the no-intervention group did not (intervention × time: p = 0.007, ηp2 = 0.28). Automated question classification worked well for classifying the interviewers' questions, allowing interventions to improve interview quality.
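The gap between main-category agreement (72%) and subcategory agreement (52%) is expected: collapsing fine-grained labels into coarser categories can only preserve or increase agreement, since two coders who disagree on the subcategory may still agree on the main category. A minimal sketch with a hypothetical label scheme (the study's actual 11 subcategories may differ):

```python
# Hypothetical mapping from question subcategories to the two main
# categories used in the study (recommended vs. not recommended).
MAIN_CATEGORY = {
    "invitation": "recommended",
    "cued_invitation": "recommended",
    "directive": "not_recommended",
    "option_posing": "not_recommended",
    "suggestive": "not_recommended",
}

def percent_agreement(a, b):
    """Share (in %) of items on which two label sequences match."""
    return 100 * sum(x == y for x, y in zip(a, b)) / len(a)

human = ["invitation", "directive", "suggestive", "cued_invitation"]
auto = ["cued_invitation", "directive", "option_posing", "cued_invitation"]

print(percent_agreement(human, auto))  # 50.0 on subcategories
print(percent_agreement([MAIN_CATEGORY[q] for q in human],
                        [MAIN_CATEGORY[q] for q in auto]))  # 100.0 after collapsing
```

In this toy example, both subcategory disagreements fall within the same main category, so agreement rises from 50% to 100% once the labels are collapsed.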
Affiliation(s)
- Shota Yamamoto
- Forensic Science Laboratory, Hokkaido Prefectural Police Headquarters, Sapporo, Hokkaido, Japan
- Naruyo Yoshimoto
- Graduate School of Medicine, University of the Ryukyus, Nishihara, Okinawa, Japan
- Hazuki Mizushi
- Graduate School of Humanities and Human Sciences, Hiroshima Shudo University, Hiroshima, Japan
- Pekka Santtila
- NYU Shanghai and NYU-ECNU Institute for Social Development, Shanghai, China