1
Glickman M, Sharot T. AI-induced hyper-learning in humans. Curr Opin Psychol 2024; 60:101900. [PMID: 39348730] [DOI: 10.1016/j.copsyc.2024.101900]
Abstract
Humans evolved to learn from one another. Today, however, learning opportunities often emerge from interactions with AI systems. Here, we argue that learning from AI systems resembles learning from other humans, but may be faster and more efficient. Such 'hyper-learning' can occur because AI: (i) provides a high signal-to-noise ratio that facilitates learning, (ii) has greater data processing ability, enabling it to generate persuasive arguments, and (iii) is perceived (in some domains) to have superior knowledge compared to humans. As a result, humans more quickly adopt biases from AI, are often more easily persuaded by AI, and exhibit novel problem-solving strategies after interacting with AI. Greater awareness of AI's influence is needed to mitigate the potential negative outcomes of human-AI interactions.
Affiliation(s)
- Moshe Glickman
- Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK.
- Tali Sharot
- Affective Brain Lab, Department of Experimental Psychology, University College London, London, UK; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
2
Rubin M, Arnon H, Huppert JD, Perry A. Considering the Role of Human Empathy in AI-Driven Therapy. JMIR Ment Health 2024; 11:e56529. [PMID: 38861302] [PMCID: PMC11200042] [DOI: 10.2196/56529]
Abstract
Recent breakthroughs in artificial intelligence (AI) language models have elevated the vision of using conversational AI support for mental health, with a growing body of literature indicating varying degrees of efficacy. In this paper, we ask in which therapeutic contexts it will be easier to replace humans and, conversely, in which instances human connection will still be more valued. We suggest that empathy lies at the heart of the answer to this question. First, we define different aspects of empathy and outline the potential empathic capabilities of humans versus AI. Next, we consider what determines when these aspects are needed most in therapy, both from the perspective of therapeutic methodology and from the perspective of patient objectives. Ultimately, our goal is to prompt further investigation and dialogue, urging both practitioners and scholars engaged in AI-mediated therapy to keep these questions and considerations in mind when investigating AI implementation in mental health.
Affiliation(s)
- Matan Rubin
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
- Hadar Arnon
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
- Jonathan D Huppert
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
- Anat Perry
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel
3
Knafo D. Artificial Intelligence on The Couch. Staying Human Post-AI. Am J Psychoanal 2024; 84:155-180. [PMID: 38937609] [DOI: 10.1057/s11231-024-09449-7]
Abstract
This paper examines the human relationship to technology, and to AI in particular, including the proposition that algorithms are the new unconscious. A key question is how much of human ability will be duplicated and transcended by general machine intelligence. More and more people are seeking connection via social media and interaction with artificial beings. The paper examines what it means to be human and which of these traits are already, or will soon be, replicated by AI. Therapy bots already exist, and it is easier to envision AI therapy guided by CBT manuals than by psychoanalytic techniques. Yet the paper also demonstrates how AI can already perform dream analysis that reaches beyond a dream's manifest content. The reader is left to consider whether these findings demand a new role for psychoanalysis in supporting, sustaining, and reframing our humanity as we create technology that transcends our abilities.
Affiliation(s)
- Danielle Knafo
- 10 Grace Avenue, Suite #7, Great Neck, NY, 11021, USA.
4
Tagesson A, Stenseke J. Do you feel like (A)I feel? Front Psychol 2024; 15:1347890. [PMID: 38873497] [PMCID: PMC11169700] [DOI: 10.3389/fpsyg.2024.1347890]
5
Varghese C, Harrison EM, O'Grady G, Topol EJ. Artificial intelligence in surgery. Nat Med 2024; 30:1257-1268. [PMID: 38740998] [DOI: 10.1038/s41591-024-02970-3]
Abstract
Artificial intelligence (AI) is rapidly emerging in healthcare, yet applications in surgery remain relatively nascent. Here we review the integration of AI in the field of surgery, centering our discussion on multifaceted improvements in surgical care in the preoperative, intraoperative and postoperative space. The emergence of foundation model architectures, wearable technologies and improving surgical data infrastructures is enabling rapid advances in AI interventions and utility. We discuss how maturing AI methods hold the potential to improve patient outcomes, facilitate surgical education and optimize surgical care. We review the current applications of deep learning approaches and outline a vision for future advances through multimodal foundation models.
Affiliation(s)
- Chris Varghese
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Ewen M Harrison
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Greg O'Grady
- Department of Surgery, University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Eric J Topol
- Scripps Research Translational Institute, La Jolla, CA, USA.
6
Yin Y, Jia N, Wakslak CJ. AI can help people feel heard, but an AI label diminishes this impact. Proc Natl Acad Sci U S A 2024; 121:e2319112121. [PMID: 38551835] [PMCID: PMC10998586] [DOI: 10.1073/pnas.2319112121]
Abstract
People want to "feel heard": to perceive that they are understood, validated, and valued. Can AI serve the deeply human function of making others feel heard? Our research addresses two fundamental issues: Can AI generate responses that make human recipients feel heard, and how do human recipients react when they believe a response comes from AI? We conducted an experiment and a follow-up study to disentangle the effects of a message's actual source from those of its presumed source. We found that AI-generated messages made recipients feel more heard than human-generated messages and that AI was better at detecting emotions. However, recipients felt less heard when they realized that a message came from AI (vs. a human). Finally, in a follow-up study in which the responses were rated by third-party raters, we found that, compared with humans, AI demonstrated superior discipline in offering emotional support, a crucial element in making individuals feel heard, while avoiding excessive practical suggestions, which may be less effective in achieving this goal. Our research underscores the potential and limitations of AI in meeting human psychological needs. These findings suggest that while AI demonstrates enhanced capabilities to provide emotional support, the devaluation of AI responses poses a key challenge for effectively leveraging AI's capabilities.
Affiliation(s)
- Yidan Yin
- Lloyd Greif Center for Entrepreneurial Studies, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Nan Jia
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
- Cheryl J. Wakslak
- Department of Management and Organization, Marshall School of Business, University of Southern California, Los Angeles, CA 90089
7
Inzlicht M, Cameron CD, D'Cruz J, Bloom P. In praise of empathic AI. Trends Cogn Sci 2024; 28:89-91. [PMID: 38160068] [DOI: 10.1016/j.tics.2023.12.003]
Abstract
In this article we investigate the societal implications of empathic artificial intelligence (AI), asking how its seemingly empathic expressions make people feel. We highlight AI's unique ability to simulate empathy without the same biases that afflict humans. While acknowledging serious pitfalls, we propose that AI expressions of empathy could improve human welfare.
Affiliation(s)
- Michael Inzlicht
- Department of Psychology, University of Toronto, Toronto, Ontario M1C 1A4, Canada; Rotman School of Management, University of Toronto, Toronto, Ontario M5S 3E6, Canada.
- C Daryl Cameron
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802, USA; The Rock Ethics Institute, The Pennsylvania State University, University Park, PA 16802, USA
- Jason D'Cruz
- Department of Philosophy, University at Albany SUNY, Albany, NY 12222, USA
- Paul Bloom
- Department of Psychology, University of Toronto, Toronto, Ontario M1C 1A4, Canada; Department of Psychology, Yale University, New Haven, CT 06520-8047, USA
8
Brinkmann L, Baumann F, Bonnefon JF, Derex M, Müller TF, Nussberger AM, Czaplicka A, Acerbi A, Griffiths TL, Henrich J, Leibo JZ, McElreath R, Oudeyer PY, Stray J, Rahwan I. Machine culture. Nat Hum Behav 2023; 7:1855-1868. [PMID: 37985914] [DOI: 10.1038/s41562-023-01742-2]
Abstract
The ability of humans to create and disseminate culture is often credited as the single most important factor of our success as a species. In this Perspective, we explore the notion of 'machine culture', culture mediated or generated by machines. We argue that intelligent machines simultaneously transform the cultural evolutionary processes of variation, transmission and selection. Recommender algorithms are altering social learning dynamics. Chatbots are forming a new mode of cultural transmission, serving as cultural models. Furthermore, intelligent machines are evolving as contributors in generating cultural traits, from game strategies and visual art to scientific results. We provide a conceptual framework for studying the present and anticipated future impact of machines on cultural evolution, and present a research agenda for the study of machine culture.
Affiliation(s)
- Levin Brinkmann
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.
- Fabian Baumann
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Maxime Derex
- Toulouse School of Economics, Toulouse, France
- Institute for Advanced Study in Toulouse, Toulouse, France
- Thomas F Müller
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Anne-Marie Nussberger
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Agnieszka Czaplicka
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
- Alberto Acerbi
- Department of Sociology and Social Research, University of Trento, Trento, Italy
- Thomas L Griffiths
- Department of Psychology and Department of Computer Science, Princeton University, Princeton, NJ, USA
- Joseph Henrich
- Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA
- Richard McElreath
- Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Jonathan Stray
- Center for Human-Compatible Artificial Intelligence, University of California, Berkeley, Berkeley, CA, USA
- Iyad Rahwan
- Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany.
9
McDonald IR, Blocker ES, Weyman EA, Smith N, Dwyer AA. What Are the Best Practices for Co-Creating Patient-Facing Educational Materials? A Scoping Review of the Literature. Healthcare (Basel) 2023; 11:2615. [PMID: 37830651] [PMCID: PMC10572900] [DOI: 10.3390/healthcare11192615]
Abstract
Co-creating patient-facing educational materials (PEMs) can enhance person-centered care by responding to patient priorities and unmet needs. Little data exist on 'best practices' for co-creation. We followed the Arksey and O'Malley framework to conduct a systematic literature search of nine databases (MEDLINE, PubMed, EMBASE, CINAHL, PsycINFO, Web of Science, Cochrane Library, Joanna Briggs Institute, TRIP; April 2022) to identify empirical studies published in English on PEM co-creation and to distill 'best practices'. Following an independent dual review of articles, data were collated into tables, and thematic analysis was employed to synthesize 'best practices', which were validated by a patient experienced in co-creating PEMs. Bias was not assessed, given the study heterogeneity. Of 6998 retrieved articles, 44 were included for data extraction/synthesis. Studies utilized heterogeneous methods spanning a range of health conditions/populations. Only 5/45 (11%) studies defined co-creation, 14 (32%) used a guiding framework, and 18 (41%) used validated evaluation tools. Six 'best practices' were identified: (1) begin with a review of the literature, (2) utilize a framework to inform the process, (3) involve clinical and patient experts from the beginning, (4) engage diverse perspectives, (5) ensure patients have the final decision, and (6) employ validated evaluation tools. This scoping review highlights the need for clear definitions and validated evaluation measures to guide and assess the co-creation process. The identified 'best practices' are relevant for use with diverse patient populations and health issues to enhance person-centered care.
Affiliation(s)
- Isabella R. McDonald
- William F. Connell School of Nursing, Boston College, Chestnut Hill, MA 02467, USA; (I.R.M.); (E.S.B.); (E.A.W.)
- Elizabeth S. Blocker
- William F. Connell School of Nursing, Boston College, Chestnut Hill, MA 02467, USA; (I.R.M.); (E.S.B.); (E.A.W.)
- Elizabeth A. Weyman
- William F. Connell School of Nursing, Boston College, Chestnut Hill, MA 02467, USA; (I.R.M.); (E.S.B.); (E.A.W.)
- Neil Smith
- “I Am HH” Patient Organization, Dallas, TX 75238, USA
- Andrew A. Dwyer
- William F. Connell School of Nursing, Boston College, Chestnut Hill, MA 02467, USA; (I.R.M.); (E.S.B.); (E.A.W.)
- Massachusetts General Hospital—Harvard Center for Reproductive Medicine, Boston, MA 02114, USA