1
Hsu WC, Lee MH. Semantic Technology and Anthropomorphism. Journal of Global Information Management 2023. DOI: 10.4018/jgim.318661
Abstract
A long-standing debate exists over whether robots need personality. Using voice assistants (VAs) such as Google Assistant and Apple's Siri as the research context, this study employed the stimulus-organism-response (SOR) model and the theory of reasoned action to investigate how the personalities that VAs display (i.e., humanlike traits and behavior traits) influence perceived risk, perceived enjoyment, trust, attitude toward use, and continued-usage intention. The results show that when VAs have more humanlike linguistic traits, such as tone and phrasing, and more positive behavior traits, such as politeness and helpfulness, users enjoy using VAs more, trust them more, and are more willing to continue using them. Unlike past studies that focus on technical aspects, the results give decision-makers a new perspective: more humanlike designs and unique VA personalities can build user trust and increase willingness to use VAs.
Affiliation(s)
- Wen-Chin Hsu, Department of Information Management, National Central University, Taiwan
- Mu-Heng Lee, Department of Information Management, National Central University, Taiwan
2
Interacting with a Chatbot-Based Advising System: Understanding the Effect of Chatbot Personality and User Gender on Behavior. Informatics 2022. DOI: 10.3390/informatics9040081
Abstract
Chatbots with personality have been shown to affect engagement and users' subjective satisfaction. Yet the design of most chatbots focuses on functionality and accuracy rather than on an interpersonal communication style. Existing studies on personality-imbued chatbots have mostly assessed the effect of chatbot personality on user preference and satisfaction; their influence on behavioral qualities, such as users' trust, engagement, and the perceived authenticity of the chatbots, is largely unexplored. To bridge this gap, this study contributes (1) a detailed design of a personality-imbued chatbot used in academic advising and (2) empirical findings from an experiment with students who interacted with three versions of the chatbot. Each version, vetted by psychology experts, represents one of three dominant traits: agreeableness, conscientiousness, and extraversion. The experiment focused on the effect of chatbot personality on trust, authenticity, engagement, and intention to use the chatbot. We also assessed whether gender plays a role in students' perception of the personality-imbued chatbots. Our findings show a positive impact of chatbot personality on perceived chatbot authenticity and intended engagement, while student gender does not play a significant role in students' perception of chatbots.
3
Koivunen S, Ala-Luopa S, Olsson T, Haapakorpi A. The March of Chatbots into Recruitment: Recruiters’ Experiences, Expectations, and Design Opportunities. Comput Support Coop Work 2022. DOI: 10.1007/s10606-022-09429-4
Abstract
Organizations’ hiring processes are increasingly shaped by various digital tools and e-recruitment systems. However, there is little understanding of recruiters’ needs for, and expectations toward, new systems. This paper investigates recruitment chatbots as an emergent form of e-recruitment, offering a low-threshold channel for recruiter-applicant interaction. The rapid spread of chatbots and the casual nature of their user interfaces raise questions about their perceived benefits, risks, and suitable roles in this sensitive application area. To this end, we conducted 13 semi-structured interviews: 11 with people who use recruitment chatbots and two with people from companies that develop them. The findings provide a qualitative account of their expectations and motivations, early experiences, and perceived opportunities regarding the current and future use of chatbots in recruitment. While chatbots answer the need to attract new candidates, they have also introduced new challenges and work tasks for recruiters. The paper offers considerations that can help redesign recruitment bots from the recruiter’s viewpoint.
4
Razavi SZ, Schubert LK, van Orden K, Ali MR, Kane B, Hoque E. Discourse Behavior of Older Adults Interacting With a Dialogue Agent Competent in Multiple Topics. ACM Trans Interact Intell Syst 2022. DOI: 10.1145/3484510
Abstract
We present a conversational agent designed to provide realistic conversational practice to older adults at risk of isolation or social anxiety, and show the results of a content analysis on a corpus of data collected from experiments with elderly patients interacting with our system. The conversational agent, represented by a virtual avatar, is designed to hold multiple sessions of casual conversation with older adults. Throughout each interaction, the system analyzes the prosodic and nonverbal behavior of users and provides feedback to the user in the form of periodic comments and suggestions on how to improve. Our avatar is unique in its ability to hold natural dialogues on a wide range of everyday topics – 27 topics in three groups, developed in collaboration with a team of gerontologists. The three groups vary in “degrees of intimacy”, and as such in degrees of cognitive difficulty for the user. After collecting data from 9 participants who interacted with the avatar for 7-9 sessions over a period of 3-4 weeks, we present results concerning dialogue behavior and inferred sentiment of the users. Analysis of the dialogues reveals correlations such as greater elaborateness for more difficult topics, increasing elaborateness with successive sessions, stronger sentiments in topics concerned with life goals rather than routine activities, and stronger self-disclosure for more intimate topics. In addition to their intrinsic interest, these results also reflect positively on the sophistication and practical applicability of our dialogue system.
5
Velentza AM, Fachantidis N, Pliasa S. Which One? Choosing Favorite Robot After Different Styles of Storytelling and Robots' Conversation. Front Robot AI 2021; 8:700005. PMID: 34568435; PMCID: PMC8458710; DOI: 10.3389/frobt.2021.700005
Abstract
The influence of human-care service robots in human–robot interaction is becoming increasingly important because of the roles that robots are taking in today’s and future society. We therefore need to identify how humans can interact with, collaborate with, and learn from social robots more efficiently, and to determine which robot modalities increase perceived likability and knowledge acquisition and enhance human–robot collaboration. The present study aims to identify the social service robot modalities that best enhance the human learning process and the level of enjoyment from the interaction, and that even attract humans’ attention when choosing a robot to collaborate with. Our target group was college students, specifically pre-service teachers. For this purpose, we designed two experiments, each split into two parts. Both experiments were between-groups, and participants watched the Nao robot perform a storytelling exercise about the history of robots in a museum-educational activity via video annotations. The robot’s modalities were manipulated through its body movements (expressive arm and head gestures) while performing the storytelling, its friendly attitude in expression and storytelling, and its personality traits. After the robot’s storytelling, participants completed a knowledge-acquisition questionnaire and a self-reported enjoyment-level questionnaire. In the second part, participants witnessed a conversation between robots with the different modalities and were asked to choose the robot with which they wanted to collaborate in a similar activity. Results indicated that participants prefer to collaborate with robots that have a cheerful personality and expressive body movements. In particular, when asked to choose between two robots that were both cheerful and expressive, they preferred the one that had originally told them the story. Moreover, participants did not prefer to collaborate with a robot with an extremely friendly attitude and storytelling style.
Affiliation(s)
- Anna-Maria Velentza, School of Educational and Social Policies, University of Macedonia, Thessaloniki, Greece; Laboratory of Informatics and Robotics in Education and Society (LIRES) Robotics Lab, University of Macedonia, Thessaloniki, Greece
- Nikolaos Fachantidis, School of Educational and Social Policies, University of Macedonia, Thessaloniki, Greece; Laboratory of Informatics and Robotics in Education and Society (LIRES) Robotics Lab, University of Macedonia, Thessaloniki, Greece
- Sofia Pliasa, School of Educational and Social Policies, University of Macedonia, Thessaloniki, Greece; Laboratory of Informatics and Robotics in Education and Society (LIRES) Robotics Lab, University of Macedonia, Thessaloniki, Greece
6
Abstract
This article attempts to bridge the gap between widely discussed ethical principles of Human-centered AI (HCAI) and practical steps for effective governance. Since HCAI systems are developed and implemented in multiple organizational structures, I propose 15 recommendations at three levels of governance: team, organization, and industry. The recommendations are intended to increase the reliability, safety, and trustworthiness of HCAI systems: (1) reliable systems based on sound software engineering practices, (2) safety culture through business management strategies, and (3) trustworthy certification by independent oversight. Software engineering practices within teams include audit trails to enable analysis of failures, software engineering workflows, verification and validation testing, bias testing to enhance fairness, and explainable user interfaces. The safety culture within organizations comes from management strategies that include leadership commitment to safety, hiring and training oriented to safety, extensive reporting of failures and near misses, internal review boards for problems and future plans, and alignment with industry standard practices. The trustworthiness certification comes from industry-wide efforts that include government interventions and regulation, accounting firms conducting external audits, insurance companies compensating for failures, non-governmental and civil society organizations advancing design principles, and professional organizations and research institutes developing standards, policies, and novel ideas. The larger goal of effective governance is to limit the dangers and increase the benefits of HCAI to individuals, organizations, and society.
7
Chattopadhyay D, Ma T, Sharifi H, Martyn-Nemeth P. Computer-Controlled Virtual Humans in Patient-Facing Systems: Systematic Review and Meta-Analysis. J Med Internet Res 2020; 22:e18839. PMID: 32729837; PMCID: PMC7426801; DOI: 10.2196/18839
Abstract
BACKGROUND Virtual humans (VH) are computer-generated characters that appear humanlike and simulate face-to-face conversation using verbal and nonverbal cues. Unlike formless conversational agents, such as smart speakers or chatbots, VH combine the capabilities of a conversational agent and an interactive avatar (a computer-represented digital character). Although their use in patient-facing systems has garnered substantial interest, it is unknown to what extent VH are effective in health applications. OBJECTIVE The purpose of this review was to examine the effectiveness of VH in patient-facing systems; the design and implementation characteristics of these systems were also examined. METHODS Electronic bibliographic databases were searched for peer-reviewed articles with relevant key terms. Studies were included in the systematic review if they designed or evaluated VH in patient-facing systems. Of the included studies, those that used a randomized controlled trial to evaluate VH were included in the meta-analysis and summarized using the PICOTS framework (population, intervention, comparison group, outcomes, time frame, setting). Summary effect sizes were calculated using random-effects models, and risk of bias was assessed. RESULTS Among the 8,125 unique records identified, 53 articles describing 33 unique systems were systematically reviewed. Two distinct design categories emerged: simple VH, and VH augmented with health sensors and trackers. Of the 53 articles, 16 (26 studies) with 44 primary and 22 secondary outcomes were included in the meta-analysis. Meta-analysis of the 44 primary outcome measures revealed a significant difference between intervention and control conditions favoring the VH intervention (SMD = .166, 95% CI .039-.292, P=.012), but with evidence of some heterogeneity, I2=49.3%. There were more cross-sectional (k=15) than longitudinal studies (k=11). The intervention was delivered using a personal computer in most studies (k=18), followed by a tablet (k=4), mobile kiosk (k=2), head-mounted display (k=1), and a desktop computer in a community center (k=1). CONCLUSIONS We offer evidence for the efficacy of VH in patient-facing systems. Because the included studies spanned different population and outcome types, more focused analysis is needed in the future. Future studies also need to identify which features of virtual-human interventions contribute to their effectiveness.
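As a hypothetical illustration of the random-effects pooling this abstract describes (not the authors' actual analysis code), the commonly used DerSimonian-Laird estimator for pooling standardized mean differences, with a 95% CI and the I² heterogeneity statistic, can be sketched as follows; the example inputs are invented, not data from the review:

```python
import math

def random_effects_smd(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return {"pooled": pooled,
            "ci": (pooled - 1.96 * se, pooled + 1.96 * se),
            "tau2": tau2, "I2": i2}

# Hypothetical studies: three SMDs with their sampling variances
result = random_effects_smd([0.1, 0.5, 0.9], [0.02, 0.02, 0.02])
```

When I² is substantial (as with the 49.3% reported above), the between-study variance tau² widens the pooled CI relative to a fixed-effect model, which is why random-effects models are the conservative default for heterogeneous study pools.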
Affiliation(s)
- Debaleena Chattopadhyay, Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States
- Tengteng Ma, Department of Information and Decision Sciences, University of Illinois at Chicago, Chicago, IL, United States
- Hasti Sharifi, Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States
- Pamela Martyn-Nemeth, Department of Biobehavioral Health Science, University of Illinois at Chicago, Chicago, IL, United States
8
Multimodal Interaction: Correlates of Learners’ Metacognitive Skill Training Negotiation Experience. Information 2020. DOI: 10.3390/info11080381
Abstract
Metacognitive training reflects knowledge, consideration, and control over decision-making and task performance in any social and learning context. Interest in understanding the best account of effective (win-win) negotiation emerges in different social and cultural interactions worldwide. The research presented in this paper explores an extended study of a metacognitive training system for negotiation using an embodied conversational agent. It elaborates on findings from a usability evaluation employing 40 adult learners pre- and post-interaction with the system, reporting on usability and on metacognitive, individual-level, and community-level attributes. Empirical evidence indicates (a) higher levels of self-efficacy, individual readiness to change, and civic action after user-system experience; (b) significant, positive direct associations between self-efficacy, self-regulation, interpersonal and problem-solving skills, individual readiness to change, mastery goal orientation, and civic action pre- and post-interaction; and (c) gender differences in perceptions of system usability according to country of origin. Theoretical and practical implications, together with future research avenues, are discussed in light of embodied-conversational-agent metacognitive training in negotiation.
9
Wenskovitch J, Zhou M, Collins C, Chang R, Dowling M, Endert A, Xu K, Rhyne TM. Putting the "I" in Interaction: Interactive Interfaces Personalized to Individuals. IEEE Computer Graphics and Applications 2020; 40:73-82. PMID: 32356729; DOI: 10.1109/mcg.2020.2982465
Abstract
Interactive data exploration and analysis is an inherently personal process. One's background, experience, interests, cognitive style, personality, and other sociotechnical factors often shape such a process, as well as the provenance of exploring, analyzing, and interpreting data. This Viewpoint posits both what personal information and how such personal information could be taken into account to design more effective visual analytic systems, a valuable and under-explored direction.