1. Momen A, Hugenberg K, Wiese E. Social perception of robots is shaped by beliefs about their minds. Sci Rep 2024; 14:5459. [PMID: 38443378; PMCID: PMC10914716; DOI: 10.1038/s41598-024-53187-w]
Abstract
Roboticists often imbue robots with human-like physical features to increase the likelihood that they are afforded benefits known to be associated with anthropomorphism. Similarly, deepfakes often employ computer-generated human faces to attempt to create convincing simulacra of actual humans. In the present work, we investigate whether perceivers' higher-order beliefs about faces (i.e., whether they represent actual people or android robots) modulate the extent to which perceivers deploy face-typical processing for social stimuli. Past work has shown that perceivers' recognition performance is more impacted by the inversion of faces than objects, thus highlighting that faces are processed holistically (i.e., as Gestalt), whereas objects engage feature-based processing. Here, we use an inversion task to examine whether face-typical processing is attenuated when actual human faces are labeled as non-human (i.e., android robot). This allows us to employ a task shown to be differentially sensitive to social (i.e., faces) and non-social (i.e., objects) stimuli while also randomly assigning face stimuli to seem real or fake. The results show smaller inversion effects when face stimuli were believed to represent android robots compared to when they were believed to represent humans. This suggests that robots strongly resembling humans may still fail to be perceived as "social" due to pre-existing beliefs about their mechanistic nature. Theoretical and practical implications of this research are discussed.
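As a rough illustration of the inversion-effect logic described in this abstract, the sketch below (not taken from the paper; the data file and column names are hypothetical) computes each participant's inversion effect (upright minus inverted recognition accuracy) separately for faces labeled as humans versus androids and compares the two conditions with a paired t-test.

```python
# Illustrative sketch only (not the authors' analysis code): computing the face
# inversion effect per belief condition. The CSV file and column names are
# hypothetical assumptions.
import pandas as pd
from scipy import stats

trials = pd.read_csv("inversion_task_trials.csv")  # hypothetical trial-level data

# Mean recognition accuracy per participant x belief label x orientation
acc = (trials
       .groupby(["participant", "label", "orientation"])["correct"]
       .mean()
       .unstack("orientation"))

# Inversion effect = upright minus inverted accuracy (larger = more face-typical processing)
acc["inversion_effect"] = acc["upright"] - acc["inverted"]
effects = acc["inversion_effect"].unstack("label")  # columns: 'human', 'android'

# Paired t-test: is the inversion effect smaller when the same faces are labeled as androids?
t_stat, p_val = stats.ttest_rel(effects["human"], effects["android"])
print(effects.mean(), t_stat, p_val)
```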
Affiliation(s)
- Ali Momen: United States Air Force Academy, Colorado Springs, CO, USA; George Mason University, Fairfax, VA, USA.
- Eva Wiese: George Mason University, Fairfax, VA, USA; Berlin Institute of Technology, Berlin, Germany.
2. Abubshait A, Weis PP, Momen A, Wiese E. Perceptual discrimination in the face perception of robots is attenuated compared to humans. Sci Rep 2023; 13:16708. [PMID: 37794045; PMCID: PMC10550918; DOI: 10.1038/s41598-023-42510-6]
Abstract
When interacting with groups of robots, we tend to perceive them as a homogenous group where all group members have similar capabilities. This overgeneralization of capabilities is potentially due to a lack of perceptual experience with robots or a lack of motivation to see them as individuals (i.e., individuation). This can undermine trust and performance in human-robot teams. One way to overcome this issue is by designing robots that can be individuated such that each team member can be provided tasks based on its actual skills. In two experiments, we examine if humans can effectively individuate robots: Experiment 1 (n = 225) investigates how individuation performance of robot stimuli compares to that of human stimuli that either belong to a social ingroup or outgroup. Experiment 2 (n = 177) examines to what extent robots' physical human-likeness (high versus low) affects individuation performance. Results show that although humans are able to individuate robots, they seem to individuate them to a lesser extent than both ingroup and outgroup human stimuli (Experiment 1). Furthermore, robots that are physically more humanlike are initially individuated better compared to robots that are physically less humanlike; this effect, however, diminishes over the course of the experiment, suggesting that the individuation of robots can be learned quite quickly (Experiment 2). Whether differences in individuation performance with robot versus human stimuli are primarily due to a reduced perceptual experience with robot stimuli or to motivational aspects (i.e., robots as a potential social outgroup) should be examined in future studies.
Affiliation(s)
- Abdulaziz Abubshait: Italian Institute of Technology, Genoa, Italy; George Mason University, Fairfax, VA, USA.
- Patrick P Weis: George Mason University, Fairfax, VA, USA; Julius Maximilians University, Würzburg, Germany.
- Ali Momen: George Mason University, Fairfax, VA, USA; Air Force Academy, Colorado Springs, CO, USA.
- Eva Wiese: George Mason University, Fairfax, VA, USA; Berlin Institute of Technology, Berlin, Germany.
3. Hortensius R, Wiese E. A neurocognitive view on the depiction of social robots. Behav Brain Sci 2023; 46:e38. [PMID: 37017057; DOI: 10.1017/s0140525x22001637]
Abstract
While we applaud the careful breakdown by Clark and Fischer of the representation of social robots held by the human user, we emphasise that a neurocognitive perspective is crucial to fully capture how people perceive and construe social robots at the behavioural and brain levels.
Affiliation(s)
- Ruud Hortensius: Department of Psychology, Utrecht University, 3584 CS Utrecht, The Netherlands.
- Eva Wiese: Cognitive Psychology & Ergonomics, Institute of Psychology and Ergonomics, School of Mechanical Engineering and Transport Systems, Berlin Institute of Technology, D-10587 Berlin, Germany (https://sites.google.com/view/gmuscilab).
4. Momen A, Hugenberg K, Wiese E. Robots engage face-processing less strongly than humans. Front Neuroergon 2022; 3:959578. [PMID: 38235446; PMCID: PMC10790943; DOI: 10.3389/fnrgo.2022.959578]
Abstract
Robot faces often differ from human faces in terms of their facial features (e.g., lack of eyebrows) and spatial relationships between these features (e.g., disproportionately large eyes), which can influence the degree to which social brain areas [i.e., Fusiform Face Area (FFA), Superior Temporal Sulcus (STS); Haxby et al., 2000] process them as social individuals that can be discriminated from other agents in terms of their perceptual features and person attributes. Of interest in this work is whether robot stimuli are processed in a less social manner than human stimuli. If true, this could undermine human-robot interactions (HRIs) because human partners could potentially fail to perceive robots as individual agents with unique features and capabilities, a phenomenon known as outgroup homogeneity, potentially leading to miscalibration of trust and errors in allocation of task responsibilities. In this experiment, we use the face inversion paradigm (as a proxy for neural activation in social brain areas) to examine whether face processing differs between human and robot face stimuli: if robot faces are perceived as less face-like than human faces, the difference in recognition performance for faces presented upright compared to upside down (i.e., inversion effect) should be less pronounced for robot faces than human faces. The results demonstrate a reduced face inversion effect with robot vs. human faces, supporting the hypothesis that robot faces are processed in a less face-like manner. This suggests that roboticists should attend carefully to the design of robot faces and evaluate them based on their ability to engage face-typical processes. Specific design recommendations on how to accomplish this goal are provided in the discussion.
Affiliation(s)
- Ali Momen: Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States; Department of Psychology, George Mason University, Fairfax, VA, United States.
- Kurt Hugenberg: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States.
- Eva Wiese: Department of Psychology, George Mason University, Fairfax, VA, United States; Institute for Psychology and Ergonomics, Technical University of Berlin, Berlin, Germany.
5.
Abstract
OBJECTIVE: Human problem solvers possess the ability to outsource parts of their mental processing onto cognitive "helpers" (cognitive offloading). However, suboptimal decisions regarding which helper to recruit for which task occur frequently. Here, we investigate if understanding and adjusting a specific subcomponent of mental models (beliefs about task-specific expertise) regarding these helpers could provide a comparatively easy way to improve offloading decisions.
BACKGROUND: Mental models afford the storage of beliefs about a helper that can be retrieved when needed.
METHODS: Arithmetic and social problems were solved by 192 participants. Participants could, in addition to solving a task on their own, offload cognitive processing onto a human, a robot, or one of two smartphone apps. These helpers were introduced with either task-specific (e.g., stating that an app would use machine learning to "recognize faces" and "read emotions") or task-unspecific (e.g., stating that an app was built for solving "complex cognitive tasks") descriptions of their expertise.
RESULTS: Providing task-specific expertise information heavily altered offloading behavior for apps but much less so for humans or robots. This suggests (1) strong preexisting mental models of human and robot helpers and (2) a strong impact of mental model adjustment for novel helpers like unfamiliar smartphone apps.
CONCLUSION: Creating and refining mental models is an easy approach to adjust offloading preferences and thus improve interactions with cognitive environments.
APPLICATION: To efficiently work in environments in which problem-solving includes consulting other people or cognitive tools ("helpers"), accurate mental models, especially regarding task-relevant expertise, are a crucial prerequisite.
Affiliation(s)
- Eva Wiese: George Mason University, Virginia, USA.
6. Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716; PMCID: PMC8562383; DOI: 10.3389/fpsyg.2021.604977]
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, providing a list of available TiA measurement methods along with the model-derived constructs that they capture including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser: Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States.
- Eva Wiese: George Mason University, Fairfax, VA, United States.
- Yi-Ching Lee: George Mason University, Fairfax, VA, United States.
- Tyler H Shaw: George Mason University, Fairfax, VA, United States.
7. Abubshait A, Beatty PJ, McDonald CG, Hassall CD, Krigolson OE, Wiese E. Correction to: A win-win situation: Does familiarity with a social robot modulate feedback monitoring and learning? Cogn Affect Behav Neurosci 2021; 21:678. [PMID: 33942275; DOI: 10.3758/s13415-021-00902-z]
Affiliation(s)
- Abdulaziz Abubshait: George Mason University, Fairfax, VA, USA; Italian Institute of Technology, Genova, Italy.
- Olav E Krigolson: Department of Psychology, University of Victoria, Victoria, BC, Canada.
- Eva Wiese: George Mason University, Fairfax, VA, USA; Institute of Psychology and Ergonomics, Berlin Institute of Technology, Berlin, Germany.
8. Krueger F, Wiese E. Specialty Grand Challenge Article - Social Neuroergonomics. Front Neuroergon 2021; 2:654597. [PMID: 38235251; PMCID: PMC10790868; DOI: 10.3389/fnrgo.2021.654597]
Affiliation(s)
- Frank Krueger: School of Systems Biology, George Mason University, Fairfax, VA, United States.
- Eva Wiese: Institute of Psychology and Ergonomics, Berlin, Germany.
9. Wiese E, Weis PP, Bigman Y, Kapsaskis K, Gray K. It's a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00771-z]
Abstract
Robots are becoming more available for workplace collaboration, but many questions remain. Are people actually willing to assign collaborative tasks to robots? And if so, exactly which tasks will they assign to what kinds of robots? Here we leverage psychological theories on person-job fit and mind perception to investigate task assignment in human–robot collaborative work. We propose that people will assign robots to jobs based on their "perceived mind," and also that people will show predictable social biases in their collaboration decisions. In this study, participants performed an arithmetic (i.e., calculating differences) and a social (i.e., judging emotional states) task, either alone or by collaborating with one of two robots: an emotionally capable robot or an emotionally incapable robot. Collaboration rates (i.e., decisions to assign the robots to generate the answer) were high across all trials, especially for tasks that participants found challenging (i.e., the arithmetic task). Collaboration was predicted by perceived robot-task fit, such that the emotional robot was assigned the social task. Interestingly, the arithmetic task was assigned more to the emotionally incapable robot, despite the emotionally capable robot being equally capable of computation. This is consistent with social biases (e.g., gender bias) in mind perception and person-job fit. The theoretical and practical implications of this work for HRI are discussed.
10. Tulk Jesso S, Kennedy WG, Wiese E. Behavioral Cues of Humanness in Complex Environments: How People Engage With Human and Artificially Intelligent Agents in a Multiplayer Videogame. Front Robot AI 2020; 7:531805. [PMID: 33501306; PMCID: PMC7806100; DOI: 10.3389/frobt.2020.531805]
Abstract
The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people are no longer able to detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to conduct research into how sensitive humans are to behaviors of humans compared to those produced by AI. This paper presents results from a behavioral Turing Test, in which participants interacted with a human, a simple AI, or a "social" AI within a complex videogame environment. Participants (66 total) played an open-world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min, after which time they would indicate their beliefs about the agent, including three Likert measures of how much they trusted and liked the co-player and the extent to which they perceived them as a "real person," followed by an interview about the overall perception and the cues they used to determine humanness. T-tests, analysis of variance, and Tukey's HSD were used to analyze quantitative data, and Cohen's Kappa and χ2 were used to analyze interview data. Our results suggest that it was difficult for participants to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data, and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.
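The analysis pipeline named in this abstract (t-tests and ANOVA with Tukey's HSD for the Likert ratings; Cohen's kappa and chi-square for the coded interviews) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the file name, column names, and coding scheme are assumptions.

```python
# Illustrative reconstruction of the analyses named above (t-tests/ANOVA with
# Tukey's HSD for ratings; Cohen's kappa and chi-square for coded interviews).
# Not the authors' code; file, columns, and coding scheme are assumptions.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.metrics import cohen_kappa_score

df = pd.read_csv("turing_game_sessions.csv")  # one row per participant

# One-way ANOVA: perceived "realness" ratings across co-player types (human, simple AI, social AI)
groups = [g["realness"].to_numpy() for _, g in df.groupby("coplayer")]
f_stat, p_anova = stats.f_oneway(*groups)

# Tukey's HSD for pairwise follow-ups between co-player types
tukey = pairwise_tukeyhsd(endog=df["realness"], groups=df["coplayer"], alpha=0.05)

# Cohen's kappa: agreement between two coders of the interview responses
kappa = cohen_kappa_score(df["coder1_humanness"], df["coder2_humanness"])

# Chi-square: does the "judged as human" response depend on the actual co-player type?
contingency = pd.crosstab(df["coplayer"], df["judged_human"])
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)

print(f_stat, p_anova, kappa, chi2, p_chi)
print(tukey.summary())
```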
Affiliation(s)
- Stephanie Tulk Jesso: Social and Cognitive Interactions Lab, Department of Psychology, George Mason University, Fairfax, VA, United States.
- William G. Kennedy: Center for Social Complexity, Department of Computational Data Science, College of Science, George Mason University, Fairfax, VA, United States.
- Eva Wiese: Social and Cognitive Interactions Lab, Department of Psychology, George Mason University, Fairfax, VA, United States.
11. Abubshait A, Momen A, Wiese E. Pre-exposure to Ambiguous Faces Modulates Top-Down Control of Attentional Orienting to Counterpredictive Gaze Cues. Front Psychol 2020; 11:2234. [PMID: 33013584; PMCID: PMC7509110; DOI: 10.3389/fpsyg.2020.02234]
Abstract
Understanding and reacting to others' nonverbal social signals, such as changes in gaze direction (i.e., gaze cues), is essential for social interactions, as these signals are important for processes such as joint attention and mentalizing. Although attentional orienting in response to gaze cues has a strong reflexive component, accumulating evidence shows that it can be top-down controlled by context information regarding the signals' social relevance. For example, when a gazer is believed to be an entity "with a mind" (i.e., mind perception), people exert more top-down control on attention orienting. Although increasing an agent's physical human-likeness can enhance mind perception, it could have negative consequences on top-down control of social attention when a gazer's physical appearance is categorically ambiguous (i.e., difficult to categorize as human or nonhuman), as resolving this ambiguity would require using cognitive resources that otherwise could be used to top-down control attention orienting. To examine this question, we used mouse-tracking to explore if categorically ambiguous agents are associated with increased processing costs (Experiment 1), whether categorically ambiguous stimuli negatively impact top-down control of social attention (Experiment 2), and if resolving the conflict related to the agent's categorical ambiguity (using exposure) would restore top-down control to orient attention (Experiment 3). The findings suggest that categorically ambiguous stimuli are associated with cognitive conflict, which negatively impacts the ability to exert top-down control on attentional orienting in a counterpredictive gaze-cueing paradigm; this negative impact, however, is attenuated when participants are pre-exposed to the stimuli prior to the gaze-cueing task. Taken together, these findings suggest that manipulating physical human-likeness is a powerful way to affect mind perception in human-robot interaction (HRI) but has a diminishing-returns effect on social attention when the agent is categorically ambiguous, due to drainage of cognitive resources and impairment of top-down control.
Affiliation(s)
- Ali Momen: Department of Psychology, George Mason University, Fairfax, VA, United States.
- Eva Wiese: Department of Psychology, George Mason University, Fairfax, VA, United States.
12. Weis PP, Wiese E. Problem Solvers Adjust Cognitive Offloading Based on Performance Goals. Cogn Sci 2020; 43:e12802. [PMID: 31858630; DOI: 10.1111/cogs.12802]
Abstract
When incorporating the environment into mental processing (cf., cognitive offloading), one creates novel cognitive strategies that have the potential to improve task performance. Improved performance can, for example, mean faster problem solving, more accurate solutions, or even higher grades at university. Although cognitive offloading has frequently been associated with improved performance, it is yet unclear how flexible problem solvers are at matching their offloading habits with their current performance goals (can people improve goal-related instead of generic performance, e.g., when being in a hurry and aiming for a "quick and dirty" solution?). Here, we asked participants to solve a cognitive task, provided them with different goals-maximizing speed (SPD) or accuracy (ACC), respectively-and measured how frequently (Experiment 1) and how proficiently (Experiment 2) they made use of a novel external resource to support their cognitive processing. Experiment 1 showed that offloading behavior varied with goals: Participants offloaded less in the SPD than in the ACC condition. Experiment 2 showed that this differential offloading behavior was associated with high goal-related performance: fast answers in the SPD, accurate answers in the ACC condition. Simultaneously, goal-unrelated performance was sacrificed: inaccurate answers in the SPD, slow answers in the ACC condition. The findings support the notion of humans as canny offloaders who are able to successfully incorporate their environment in pursuit of their current cognitive goals. Future efforts should be focused on the finding's generalizability, for example, to settings without feedback or with high mental workload.
Affiliation(s)
- Eva Wiese: Department of Psychology, George Mason University.
13. Wiese E, Abubshait A, Azarian B, Blumberg EJ. Brain stimulation to left prefrontal cortex modulates attentional orienting to gaze cues. Philos Trans R Soc Lond B Biol Sci 2020; 374:20180430. [PMID: 30852996; DOI: 10.1098/rstb.2018.0430]
Abstract
In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention, neither in response to the human nor in response to the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
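A minimal sketch of how the attentional-orienting measure described here is typically quantified: the gaze-cueing validity effect (mean response time on invalid minus valid trials) per agent and stimulation condition. This is illustrative only; the trial file and column names are hypothetical and not from the published analysis.

```python
# Illustrative sketch only: the gaze-cueing validity effect (invalid minus valid
# response times) per agent and stimulation site. File and column names are
# hypothetical, not the published analysis.
import pandas as pd

trials = pd.read_csv("tdcs_gaze_cueing_trials.csv")

# Mean correct-trial response time per participant x agent x stimulation site x cue validity
rt = (trials[trials["correct"] == 1]
      .groupby(["participant", "agent", "stimulation", "validity"])["rt"]
      .mean()
      .unstack("validity"))

# Validity effect: larger values indicate stronger attentional orienting to the gaze cue
rt["validity_effect"] = rt["invalid"] - rt["valid"]

# Condition means, e.g. prefrontal vs. temporo-parietal stimulation for human vs. robot gazers
print(rt.groupby(["agent", "stimulation"])["validity_effect"].mean())
```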
Affiliation(s)
- Eva Wiese: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA.
- Abdulaziz Abubshait: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA.
- Bobby Azarian: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA.
- Eric J Blumberg: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA.
14.
Abstract
Humans frequently use external (environment-based) strategies to supplement their internal (brain-based) thought. In the memory domain, whether to solve a problem using external or internal retrieval depends on the accessibility of external information, judgment of mnemonic ability, and on the problem's visual features. It likely also depends on the accessibility of internal information. Here, we asked whether internal accessibility contributes to strategy choice even when visual features bear no information on internal accessibility. Specifically, 114 participants were to validate alphanumerical equations (e.g., A + 2 = C) whose visual appearance (Addends 2, 3, or 4) signified different difficulty levels. First, some equations were presented more frequently than others, allowing participants to establish efficient internal access to the correct solution via memory retrieval rather than counting up the alphabet. Second, participants viewed the equations again but could access the correct solution externally using a computer mouse. We hypothesized that external strategy use should selectively decrease for frequently learned equations and irrespectively of the task's visual features. Results mostly confirm our hypothesis. Exploratory analyses further suggest that participants partially used a sequential "try-internal-retrieval-first" mechanism to establish the adaptive behavior. Implications for intervention methods aimed at improving interactive cognition are discussed.
15.
Abstract
OBJECTIVE: A distributed cognitive system is a system in which cognitive processes are distributed between brain-based internal and environment-based external resources. In the current experiment, we examined the influence of metacognitive processes on external resource use (i.e., cognitive offloading) in such systems.
BACKGROUND: High-tech working environments oftentimes represent distributed cognitive systems. Because cognitive offloading can both support and harm performance, depending on the specific circumstances, it is essential to understand when and why people offload their cognition.
METHOD: We used an extension of the mental rotation paradigm. It allowed participants to rotate stimuli either internally as in the original paradigm or with a rotation knob that afforded rotating stimuli externally on a computer screen. Two parameters were manipulated: the knob's actual reliability (AR) and an instruction altering participants' beliefs about the knob's reliability (believed reliability; BR). We measured cognitive offloading proportion and perceived knob utility.
RESULTS: Participants were able to quickly and dynamically adjust their cognitive offloading proportion and subjective utility assessments in response to AR, suggesting a high level of offloading proficiency. However, when BR instructions were presented that falsely described the knob's reliability to be lower than it actually was, participants reduced cognitive offloading substantially.
CONCLUSION: The extent to which people offload their cognition is not based solely on utility maximization; it is additionally affected by possibly erroneous preexisting beliefs.
APPLICATION: To support users in efficiently operating in a distributed cognitive system, an external resource's utility should be made transparent, and preexisting beliefs should be adjusted prior to interaction.
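For illustration, the offloading measure described above can be computed from trial-level data roughly as follows; the file and column names are hypothetical assumptions, not the authors' materials.

```python
# Illustrative sketch only: proportion of trials on which the rotation was
# offloaded to the external knob, per actual-reliability (AR) and
# believed-reliability (BR) condition. File and column names are assumptions.
import pandas as pd

trials = pd.read_csv("mental_rotation_offloading.csv")

# 'used_knob' is 1 when the stimulus was rotated externally, 0 when rotated internally
offload = (trials
           .groupby(["participant", "actual_reliability", "believed_reliability"])["used_knob"]
           .mean()
           .rename("offloading_proportion")
           .reset_index())

# Does a false low-reliability instruction suppress offloading even when AR is high?
print(offload.pivot_table(index="actual_reliability",
                          columns="believed_reliability",
                          values="offloading_proportion"))
```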
Affiliation(s)
- Eva Wiese: George Mason University, Fairfax, VA, USA.
16.
Abstract
As nonhuman agents are integrated into the workforce, the question becomes to what extent advice seeking in technology-infused environments depends on the perceived fit between agent and task and whether humans are willing to consider advice from nonhuman agents. In this experiment, participants sought advice from human, robot, or computer agents when performing a social or analytical task, with the task being either known or unknown when selecting an agent. In the agent-first condition, participants first chose an adviser and then got their task assignment; in the task-first condition, participants first received the task assignment and then chose an adviser. In the agent-first condition, we expected participants to prefer human to nonhuman advisers and to subsequently comply more with their advice when they were assigned the social as opposed to the analytical task. In the task-first condition, we expected advice seeking and compliance to be guided by stereotypical assumptions regarding an agent's task expertise. The findings indicate that the human was chosen more often than were the nonhuman agents in the agent-first condition, whereas adviser choices were calibrated based on perceived agent-task fit in the task-first condition. Compliance rates were not generally calibrated based on agent-task fit.
Affiliation(s)
- Eva Wiese: Psychology Department, George Mason University.
17.
Abstract
OBJECTIVE: The authors investigate whether nonhuman agents, such as computers or robots, produce a social conformity effect in human operators and examine to what extent potential conformist behavior varies as a function of the human-likeness of the group members and the type of task that has to be performed.
BACKGROUND: People conform due to normative and/or informational motivations in human-human interactions, and conformist behavior is modulated by factors related to the individual as well as factors associated with the group, context, and culture. Studies have yet to examine whether nonhuman agents also induce social conformity.
METHOD: Participants were assigned to a computer, robot, or human group and completed both a social and analytical task with the respective group.
RESULTS: Conformity measures (percentage of times participants answered in line with agents on critical trials) subjected to a 3 × 2 mixed ANOVA showed significantly higher conformity rates for the analytical versus the social task as well as a modulation of conformity depending on the perceived agent-task fit.
CONCLUSION: Findings indicate that nonhuman agents were able to exert a social conformity effect, which was modulated further by the perceived match between agent and task type. Participants conformed to comparable degrees with agents during the analytical task but conformed significantly more strongly on the social task as the group's human-likeness increased.
APPLICATION: Results suggest that users may react differently to the influence of nonhuman agent groups, with the potential for variability in conformity depending on the domain of the task.
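A minimal sketch of the 3 × 2 mixed ANOVA described in the Results, with agent group as the between-subjects factor and task type as the within-subjects factor. It assumes the pingouin package and hypothetical column names; it is not the authors' analysis code.

```python
# Illustrative sketch of the 3 x 2 mixed ANOVA described above (between-subjects
# factor: agent group; within-subjects factor: task type). Assumes the pingouin
# package and hypothetical column names; not the authors' analysis code.
import pandas as pd
import pingouin as pg

trials = pd.read_csv("conformity_critical_trials.csv")

# Conformity = percentage of critical trials answered in line with the agents
conformity = (trials
              .groupby(["participant", "agent_group", "task"])["conformed"]
              .mean()
              .mul(100)
              .rename("conformity_pct")
              .reset_index())

aov = pg.mixed_anova(data=conformity, dv="conformity_pct", within="task",
                     subject="participant", between="agent_group")
print(aov.round(3))
```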
Affiliation(s)
- Eva Wiese: George Mason University, Fairfax, Virginia.
18. Wiese E, Mandell A, Shaw T, Smith M. Implicit mind perception alters vigilance performance because of cognitive conflict processing. J Exp Psychol Appl 2018; 25:25-40. [PMID: 30265050; DOI: 10.1037/xap0000186]
Abstract
Knowing the internal states of others is essential to predicting behavior in social interactions and requires that the general characteristic of "having a mind" is granted to our interaction partners. Mind perception is a highly automatic process and can potentially cause a cognitive conflict when interacting with agents whose mind status is ambiguous, such as artificial agents. We investigate whether mind perception negatively impacts performance on tasks involving artificial agents because of cognitive conflict processing caused by a potentially increased difficulty to categorize them as human versus nonhuman. Experiment 1 shows that an ambiguous humanoid stimulus negatively impacts performance on a vigilance task that is known to be sensitive to the drainage of cognitive resources. This negative effect on performance vanishes when participants are preexposed to the stimulus before the vigilance task (Experiment 2 and 3). The effect of preexposure on performance recovery is independent of whether participants explicitly resolve the cognitive conflict by answering mind-related questions (Experiment 2) or implicitly by judging the stimuli on a set of physical features (Experiment 3). Together, the findings suggest that mind perception is so automatic that it cannot be suppressed even if it has negative effects on cognitive performance.
19. Momen A, Wiese E. Mind perception modulates social attention in real-time human-robot interaction. Front Hum Neurosci 2018. [DOI: 10.3389/conf.fnhum.2018.227.00079]
20. Momen A, Wiese E. Perceived robot personality affects social attention in real-time human-robot interaction. Front Hum Neurosci 2018. [DOI: 10.3389/conf.fnhum.2018.227.00108]
21. Abubshait A, Weis P, Wiese E. Effects of embodiment on social attention mechanisms in human-robot interaction. Front Hum Neurosci 2018. [DOI: 10.3389/conf.fnhum.2018.227.00080]
22. Tulk S, Wiese E. Reasoning About Information Provided by Bots. Front Hum Neurosci 2018. [DOI: 10.3389/conf.fnhum.2018.227.00123]
23. Wiese E, Metta G, Wykowska A. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social. Front Psychol 2017; 8:1663. [PMID: 29046651; PMCID: PMC5632653; DOI: 10.3389/fpsyg.2017.01663]
Abstract
Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.
Affiliation(s)
- Eva Wiese: Department of Psychology, George Mason University, Fairfax, VA, United States.
24.
Abstract
In social robotics, the term Uncanny Valley describes the phenomenon that linear increases in the human-likeness of an agent do not entail an equally linear increase in favorable reactions towards that agent. Instead, a pronounced dip or 'valley' at around 70% human-likeness emerges. One currently popular view to explain this drop in favorable reactions is offered by the Categorical Perception Hypothesis. It is suggested that categorization of agents with mixed human and non-human features is associated with additional cognitive costs and that these costs are the cause of the Uncanny Valley. However, the nature of the cognitive costs is still a matter of debate. The current study explores whether the cognitive costs associated with stimulus categorization around the Uncanny Valley could be due to cognitive conflict as evoked by simultaneous activation of two categories. Using the mouse tracking technique, we show that cognitive conflict indeed peaks around the Uncanny Valley region of human-likeness. Our findings lay the foundation for investigating the effects of cognitive conflict on positive affect towards agents of around 70% human-likeness, possibly leading to the unraveling of the origins of the Uncanny Valley.
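A common way to quantify the cognitive conflict picked up by mouse tracking is the maximum absolute deviation (MAD) of the cursor trajectory from the straight line between its start and end points; larger deviations indicate stronger attraction toward the competing response category. The sketch below is illustrative only and assumes trajectories stored as x/y sample arrays; it is not the study's code.

```python
# Illustrative sketch only: maximum absolute deviation (MAD) of a mouse
# trajectory from the straight start-to-end line, a standard mouse-tracking
# index of cognitive conflict. Trajectory format (x/y sample arrays) is assumed.
import numpy as np

def max_abs_deviation(x, y):
    """Maximum perpendicular distance of the trajectory from the start-end line."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    start, end = np.array([x[0], y[0]]), np.array([x[-1], y[-1]])
    line = end - start
    pts = np.stack([x, y], axis=1) - start
    # Perpendicular distance via the 2D cross product, normalized by line length
    cross = line[0] * pts[:, 1] - line[1] * pts[:, 0]
    return np.max(np.abs(cross)) / np.linalg.norm(line)

# Toy comparison: a direct path versus a path first attracted toward the
# competing response category (the pattern expected near the Uncanny Valley)
direct = max_abs_deviation([0, 0.1, 0.3, 0.6, 1.0], [0, 0.2, 0.5, 0.8, 1.0])
conflicted = max_abs_deviation([0, -0.3, -0.2, 0.4, 1.0], [0, 0.3, 0.6, 0.8, 1.0])
print(direct, conflicted)  # larger MAD = more conflict
```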
25.
Abstract
When we interact with others, we make inferences about their internal states (i.e., intentions, emotions) and use this information to understand and predict their behavior. Reasoning about the internal states of others is referred to as mentalizing, and presupposes that our social partners are believed to have a mind. Seeing mind in others increases trust, prosocial behaviors and feelings of social connection, and leads to improved joint performance. However, while human agents trigger mind perception by default, artificial agents are not automatically treated as intentional entities but need to be designed to do so. The panel addresses this issue by discussing how mind attribution to robots and other automated agents can be elicited by design, what the effects of mind perception are on attitudes and performance in human-robot and human-machine interaction and what behavioral and neuroscientific paradigms can be used to investigate these questions. Application areas covered include social robotics, automation, driver-vehicle interfaces, and others.
Affiliation(s)
- Eva Wiese: Department of Psychology, George Mason University.
- Tyler Shaw: Department of Psychology, George Mason University.
- Daniel Lofaro: Department of Electrical and Computer Engineering, George Mason University.
26.
Abstract
As interactions with non-human agents increase, it is important to understand and predict the consequences of human interactions with them. Social facilitation has a longstanding history within the realm of social psychology and is characterized by the presence of other humans having a beneficial effect on performance on easy tasks and inhibiting performance on difficult tasks. While social facilitation has been shown across task types and experimental conditions with human agents, very little research has examined whether this effect can also be induced by non-human agents and, if so, to what degree the level of humanness and embodiment of those agents influences that effect. In the current experiment, we apply a common social facilitation task (i.e., numerical distance judgments) to investigate to what extent the presence of agents of varying degrees of humanness benefits task performance. Results show a significant difference in performance between easy and difficult task conditions, but show no significant improvement in task performance in the social presence conditions compared to performing the task alone. This suggests that the presence of others did not have a positive effect on performance, at least not when social presence was manipulated via still images. Implications of this finding for future studies, as well as for human-robot interaction are discussed.
27. Abubshait A, Wiese E. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction. Front Psychol 2017; 8:1393. [PMID: 28878703; PMCID: PMC5572356; DOI: 10.3389/fpsyg.2017.01393]
Abstract
Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human–robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human–robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human–robot interaction. The results show that both appearance and behavior affect human–robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human–robot interaction are discussed.
Affiliation(s)
- Eva Wiese: Department of Psychology, George Mason University, Fairfax, VA, United States.
28. Perez-Osorio J, Müller HJ, Wiese E, Wykowska A. Correction: Gaze Following Is Modulated by Expectations Regarding Others' Action Goals. PLoS One 2017; 12:e0170852. [PMID: 28107537; PMCID: PMC5249129; DOI: 10.1371/journal.pone.0170852]
29.
Abstract
Trust in automation is an important topic in the field of human factors and has a substantial impact on both attitudes towards and performance with automated systems. One variable that has been shown to influence trust is the degree of human-likeness that is displayed by the automated system, with the main finding being that increased human-like appearance leads to increased ratings of trust. In the current study, we investigate whether humanness uniformly leads to higher trust or whether the degree to which an agent is trusted depends on context variables (i.e., task type). For that purpose, we created a task with a social (i.e., judging emotional states from the eye region) and an analytical component (i.e., mathematical task) and measured how strongly participants complied with human, avatar, or computer agents when performing the social versus the analytical version with them. We hypothesized that human-like agents are trusted more on social tasks, while machine-like agents are trusted more on analytical tasks. In line with our hypothesis, the results show that human agents are in general not trusted more than automated agents but that the degree to which an agent is trusted depends on the anticipated expertise of an agent for a given task. The findings suggest that when designing automated systems that are supposed to interact with humans, the degree of humanness of the agent needs to match the degree to which a task requires social skills.
30.
Abstract
Previous research has demonstrated reliable effects of social pressure on conformity and social decision-making in human-human interaction. The current study investigates whether non-human agents are also capable of inducing similar social pressure effects; in particular, we examined whether the degree of physical human-likeness of an agent (i.e., appearance) modulates conformity and whether potential effects of agent type on conformity are modulated further by task ambiguity. To answer these questions, participants performed a line judgment task together with agents of different degrees of humanness (human, robot, computer) in either a high or low ambiguity version of the task. We expected an increase in conformity rates for agents with increasing levels of physical humanness, as well as for increasing levels of task ambiguity. Results showed low-level conformity with all agents, with a significant difference in conformity between the high and low ambiguity version of the task (i.e., stronger compliance for the high- versus the low-ambiguity version); the degree of humanness, however, did not have an influence on conformity rates (neither alone nor in combination with task type). The results suggest that when performing a task together with others, participants always conform to some degree with the social interaction partner independent of its level of humanness; the level of conformity, however, depends on task ambiguity, with stronger compliance across agents for more ambiguous tasks.
31. Özdem C, Wiese E, Wykowska A, Müller H, Brass M, Van Overwalle F. Believing androids - fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. Soc Neurosci 2016; 12:582-593. [PMID: 27391213; DOI: 10.1080/17470919.2016.1207702]
Abstract
Attributing mind to interaction partners has been shown to increase the social relevance we ascribe to others' actions and to modulate the amount of attention dedicated to them. However, it remains unclear how the relationship between higher-order mind attribution and lower-level attention processes is established in the brain. In this neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same (valid cue) or opposite location (invalid cue). Independently, participants' beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as under human control or preprogrammed. As expected, we observed a validity effect behaviorally and neurologically (increased response times and activation in the invalid vs. valid condition). More importantly, we observed that this effect was more pronounced for the condition in which the robot's behavior was believed to be controlled by a human, as opposed to being preprogrammed. This interaction effect between cue validity and belief was, however, only found at the neural level and was manifested as a significant increase of activation in the bilateral anterior temporoparietal junction.
Affiliation(s)
- Ceylan Özdem: Department of Psychology, Vrije Universiteit Brussel, Brussels, Belgium.
- Eva Wiese: Department of Psychology, George Mason University, Fairfax, VA, USA; Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
- Agnieszka Wykowska: Engineering Psychology, Division of Human Work Sciences, Luleå University of Technology, Luleå, Sweden; Chair for Cognitive Systems, Technische Universität München, Munich, Germany.
- Hermann Müller: Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
- Marcel Brass: Ghent Institute for Functional and Metabolic Imaging, University of Ghent, Ghent, Belgium.
- Frank Van Overwalle: Department of Psychology, Vrije Universiteit Brussel, Brussels, Belgium.
32. Martini MC, Gonzalez CA, Wiese E. Correction: Seeing Minds in Others - Can Agents with Robotic Appearance Have Human-Like Preferences? PLoS One 2016; 11:e0149766. [PMID: 26872149; PMCID: PMC4752504; DOI: 10.1371/journal.pone.0149766]
33.
Abstract
Performance in many work environments depends on appropriate levels of trust being established between humans and their colleagues, as well as with various automated agents. Although there is a large literature on trust, it is diversified according to academic discipline, with little contact between or integration across disciplines. Empirical studies directly comparing and characterizing the similarities and differences between human-human and human-automation trust are relatively rare. Additionally, the neural mechanisms of trust have only been studied in the context of interpersonal trust and not human-automation trust. This panel represents an initial attempt to bridge these gaps. The panelists will discuss recent research aimed at characterizing human-human and human-agent trust in relation to one another and with respect to the underlying neural mechanisms. Finally, the panelists will discuss the design and training implications of these recent research findings.
34. Wiese E. [Nevirapine - the first non-nucleoside inhibitor of reverse transcriptase in the battle against AIDS]. Pharm Unserer Zeit 1997; 26:99-100. [PMID: 9289739]
35. Markowska J, Wiese E, Markowski M. [Side effects of drug treatment for ovarian cancer after administration of antiemetic drugs]. Ginekol Pol 1993; 64:438-42. [PMID: 8144054]
Abstract
The frequency of the 14 most common side effects of chemotherapy and anti-nausea drugs was examined. The study included 29 women with ovarian cancer treated with a total of 125 chemotherapy courses (PAC and Acy schedules) who additionally received anti-nausea drugs (Zofran, Solu-Medrol, Droperidol, Metoclopramide + Fenactil, Torecan) to eliminate nausea caused by the chemotherapy. Zofran caused the fewest side effects. Solu-Medrol inhibited nausea and vomiting significantly; however, it caused many side effects such as facial flushing, restlessness, agitation, and headaches. Torecan did not prevent patients from vomiting. The greatest number of side effects was observed after treatment with Droperidol and Metoclopramide + Fenactil.
36. Czub M, Markowska J, Tomczak P, Wiese E, Markowski M. [Survival, complications and quality of life in patients with breast cancer after ovariectomy and hormonal therapy]. Ginekol Pol 1993; 64:149-53. [PMID: 8359743]
Abstract
Seventy-one women with breast cancer in clinical stage IIIa treated by chemotherapy and radical operation were evaluated on the basis of five features: survival, relapses, metastases, quality of life, and post-therapy complications. The two treatment methods (ovariectomy versus hormonal therapy) were compared, and the dependence between survival and the time elapsed between breast surgery and ovariectomy was evaluated. Women treated by ovariectomy suffered from menopause symptoms, osteoporosis, and blood coagulation disorders more often than women treated by hormonotherapy. Tamoxifen therapy increased the rate of breast cancer relapses and is probably the cause of breast cancer metastases to the liver. Women who underwent hormonal castration were professionally active more often than women treated by ovariectomy. The time elapsed between breast surgery and ovariectomy did not affect survival in stage IIIa; in stage IIIb, however, performing ovariectomy later prolonged survival.
Affiliation(s)
- M Czub: Katedra Onkologii AM, Poznań.
37. Koropatnick J, Winning R, Wiese E, Heschl M, Gedamu L, Duerksen J. Acute treatment of mice with cadmium salts results in amplification of the metallothionein-1 gene in liver. Nucleic Acids Res 1985; 13:5423-39. [PMID: 2412205; PMCID: PMC321881; DOI: 10.1093/nar/13.15.5423]
Abstract
A variety of genes have been shown to change copy number during development, including rRNA genes in amphibians and chorion proteins in insects. Dihydrofolate reductase and metallothionein-1 (MT-1) genes are present in high copy number in cultured mammalian cells subjected to low levels of agents that will select for cells with amplified copies of specific genes. Recent studies have shown that the metallothionein-1 gene in mouse liver is regulated at the transcriptional level by treatment with heavy metals. We report here that, at cadmium concentrations 5 to 10-fold higher than that required to induce maximal transcription of the MT-1 gene, there is a 2 to 3-fold increase in MT-1 gene concentration in liver nuclear DNA by 6 hours after induction, and extra copies persist up to 3 weeks in the absence of further heavy metal treatment. The extra MT-1 gene copies that appear 6 hours after cadmium treatment are in a conformation that renders them relatively nuclease insensitive.