1. Perello-March J, Burns CG, Woodman R, Birrell S, Elliott MT. How Do Drivers Perceive Risks During Automated Driving Scenarios? An fNIRS Neuroimaging Study. Hum Factors 2024;66:2244-2263. PMID: 37357740; PMCID: PMC11344369; DOI: 10.1177/00187208231185705.
Abstract
OBJECTIVE: To use brain haemodynamic responses to measure perceived risk from traffic complexity during automated driving.
BACKGROUND: Although well established for manual driving, the effects of driver risk perception during automated driving remain unknown. This paper uses fNIRS to assess drivers' states, positing that it could become a novel method for measuring risk perception.
METHODS: Twenty-three volunteers participated in an empirical driving simulator experiment with automated driving capability. Driving conditions involved suburban and urban scenarios with varying levels of traffic complexity, culminating in an unexpected hazardous event. Perceived risk was measured via prefrontal cortical haemoglobin oxygenation (fNIRS) and from self-reports.
RESULTS: Prefrontal cortical haemoglobin oxygenation levels increased significantly with self-reported perceived risk and traffic complexity, particularly during the hazardous scenario.
CONCLUSION: This paper demonstrates that fNIRS is a valuable research tool for measuring variations in perceived risk from traffic complexity during highly automated driving. Even though responsibility for the driving task is delegated to the automated system and dispositional trust is high, drivers perceive moderate risk when traffic complexity builds up gradually, reflected in a corresponding significant increase in blood oxygenation levels; both the subjective (self-report) and objective (fNIRS) measures increased further during the hazardous scenario.
APPLICATION: Little is known about the effects of drivers' risk perception in automated driving. Building upon our experimental findings, future work can use fNIRS to investigate the mental processes underlying risk assessment and the effects of perceived risk on driving behaviours, to promote the safe adoption of automated driving technology.
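The comparison this abstract describes, prefrontal HbO levels rising with traffic complexity and tracking self-reported risk, can be illustrated with a minimal analysis sketch. The data layout, condition labels, statistical tests, and all values below are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Minimal sketch (assumed data layout, not the authors' pipeline):
# mean prefrontal HbO per participant and condition, plus self-reported risk.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 23  # participants, as in the study

# Hypothetical per-participant mean HbO change per condition.
hbo = {
    "suburban_low": rng.normal(0.05, 0.02, n),
    "urban_high": rng.normal(0.09, 0.02, n),
    "hazard": rng.normal(0.14, 0.03, n),
}

# Within-subject comparison across traffic-complexity levels.
f_stat, p_val = stats.friedmanchisquare(*hbo.values())
print(f"Friedman chi2={f_stat:.2f}, p={p_val:.4f}")

# Relate HbO in the hazard scenario to self-reported perceived risk (1-10).
risk = rng.integers(5, 10, n).astype(float)
r, p = stats.spearmanr(hbo["hazard"], risk)
print(f"Spearman rho={r:.2f}, p={p:.3f}")
```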
Affiliation(s)
- Jaume Perello-March
- National Transport Design Centre, Centre for Future Transport and Cities, Coventry University, Coventry, UK
- Christopher G Burns
- School of Aerospace, Transport and Manufacturing (SATM), Cranfield University, Cranfield, UK
- Stewart Birrell
- National Transport Design Centre, Centre for Future Transport and Cities, Coventry University, Coventry, UK
2. Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024;15:1382693. PMID: 38694439; PMCID: PMC11061529; DOI: 10.3389/fpsyg.2024.1382693.
Abstract
The rapid advancement of artificial intelligence (AI) has affected many aspects of society. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point to the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan
- CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
3. Chu Y, Liu P. Automation complacency on the road. Ergonomics 2023;66:1730-1749. PMID: 37139680; DOI: 10.1080/00140139.2023.2210793.
Abstract
Given that automation complacency, a hitherto controversial concept, is already used to blame and punish human drivers in current accident investigations and courts, it is essential to map complacency research in driving automation and determine whether current research can support its legitimate usage in these practical fields. Here, we reviewed its status in the domain and conducted a thematic analysis. We then discussed five fundamental challenges that might undermine its scientific legitimacy: conceptual confusion exists over whether it is an individual versus a systems problem; uncertainties exist in current evidence of complacency; valid measures specific to complacency are lacking; short-term laboratory experiments cannot address the long-term nature of complacency and thus their findings may lack external validity; and no effective interventions directly target complacency prevention. The Human Factors/Ergonomics community has a responsibility to minimise its usage and defend human drivers who rely on automation that is far from perfect. Practitioner summary: Human drivers are accused of complacency and overreliance on driving automation in accident investigations and courts. Our review work shows that current academic research in the driving automation domain cannot support its legitimate usage in these practical fields. Its misuse will create a new form of consumer harm.
Affiliation(s)
- Yueying Chu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, PR China
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, PR China
4. Towards detecting the level of trust in the skills of a virtual assistant from the user's speech. Comput Speech Lang 2023. DOI: 10.1016/j.csl.2023.101487.
5. Saßmannshausen T, Burggräf P, Hassenzahl M, Wagner J. Human trust in otherware - a systematic literature review bringing all antecedents together. Ergonomics 2022:1-23. PMID: 36062352; DOI: 10.1080/00140139.2022.2120634.
Abstract
Technological systems are becoming increasingly smart, which causes a shift in the way they are seen: from tools used to execute specific tasks to social counterparts with whom to cooperate. To ensure that these interactions are successful, trust has proven to be the most important driver. We conducted an extensive and structured review with the goal of revealing all previously researched antecedents influencing human trust in technology-based counterparts. In doing so, we synthesised 179 papers and uncovered 479 trust antecedents. We assigned these antecedents to four main groups. Three of them have been explored before: environment, trustee, and trustor. Within this paper, we argue for a fourth group, the interaction. This quadripartition allows the inclusion of antecedents that were not considered previously. Moreover, we critically question the practice of uncovering more and more trust antecedents, which has already produced an opaque plethora of factors and is becoming increasingly complex for practitioners to navigate. Practitioner summary: Future designers of intelligent and interactive technology will have to consider trust to a greater extent. We emphasise that there are far more trust antecedents - and interdependencies - to consider than the ethically motivated discussions about "Trustworthy AI" suggest. For this purpose, we derived a trust map as a sound basis.
Affiliation(s)
- Till Saßmannshausen
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
- Peter Burggräf
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
- Marc Hassenzahl
- Chair of Ubiquitous Design, University of Siegen, Siegen, Germany
- Johannes Wagner
- Chair of International Production Engineering and Management, University of Siegen, Siegen, Germany
6. Cai J, Sun Q, Mu Z, Sun X. Psychometric properties of the Chinese version of the Trust between People and Automation Scale (TPAS) in Chinese adults. Psicologia: Reflexão e Crítica 2022;35:15. PMID: 35644898; PMCID: PMC9148861; DOI: 10.1186/s41155-022-00219-x.
Abstract
Trust in automation plays a leading role in human-automation interaction. As scales measuring trust in automation are lacking in China, the purpose of this study was to adapt the Trust between People and Automation Scale (TPAS) into Chinese and to demonstrate its psychometric properties among Chinese adults. A total of 310 Chinese adults were randomly selected as sample 1, and 508 Chinese adults as sample 2. Results of the item analysis revealed that each item was of good quality, and the exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) suggested that the two-factor model with 12 items was the best-fitting model. In addition, the TPAS was positively correlated with the Interpersonal Trust Scale (ITS), providing evidence of validity based on relations to other variables. In sum, the study suggested that the Chinese version of the TPAS could be used as an effective tool to assess trust in automation in the Chinese context.
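A minimal sketch of the kind of factor analysis this abstract mentions, using the Python factor_analyzer package. The item layout, file name, factor count, and rotation are assumptions for illustration; the original study may have used different software and settings.

```python
# Sketch of an exploratory factor analysis for a 12-item scale (assumed
# two-factor structure); not the authors' actual analysis.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# df: respondents x 12 Likert items, e.g. columns "item1" ... "item12".
df = pd.read_csv("tpas_responses.csv")  # hypothetical file

fa = FactorAnalyzer(n_factors=2, rotation="oblimin")
fa.fit(df)

# Inspect rotated loadings and variance explained by each factor.
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))
print("Proportional variance:", fa.get_factor_variance()[1].round(2))
```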
7. Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021;12:604977. PMID: 34737716; PMCID: PMC8562383; DOI: 10.3389/fpsyg.2021.604977.
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct, which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, listing available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese
- George Mason University, Fairfax, VA, United States
- Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
- Tyler H Shaw
- George Mason University, Fairfax, VA, United States
8. Saßmannshausen T, Burggräf P, Wagner J, Hassenzahl M, Heupel T, Steinberg F. Trust in artificial intelligence within production management - an exploration of antecedents. Ergonomics 2021;64:1333-1350. PMID: 33939596; DOI: 10.1080/00140139.2021.1909755.
Abstract
Industry 4.0, big data, predictive analytics, and robotics are leading to a paradigm shift on the shop floor of industrial production. However, complex cognitive tasks are also subject to change due to the development of artificial intelligence (AI). Smart assistants are finding their way into the world of knowledge work and require cooperation with humans. Here, trust is an essential factor that determines the success of human-AI cooperation. Within this article, an analysis in production management identifies possible antecedents of trust in AI and evaluates them in interaction scenarios with AI. The results of this research are five antecedents of human trust in AI within production management. From these results, preliminary design guidelines are derived for socially sustainable human-AI interaction in future production management systems. Practitioner summary: In the future, artificial intelligence will assist with cognitive tasks in production management. In order to make good decisions, humans' trust in AI has to be well calibrated. For trustful human-AI interactions, it is beneficial that humans subjectively perceive the AI as capable and comprehensible and that they themselves are digitally competent.
Affiliation(s)
- Till Saßmannshausen
- International Production Engineering and Management, University of Siegen, Siegen, Germany
- Peter Burggräf
- International Production Engineering and Management, University of Siegen, Siegen, Germany
- Johannes Wagner
- International Production Engineering and Management, University of Siegen, Siegen, Germany
- Fabian Steinberg
- International Production Engineering and Management, University of Siegen, Siegen, Germany
9. Hopko SK, Mehta RK. Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains. Frontiers in Neuroergonomics 2021;2:731327. PMID: 38235218; PMCID: PMC10790920; DOI: 10.3389/fnrgo.2021.731327.
Abstract
Investigations into physiological or neurological correlates of trust have increased in popularity due to the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, measurements of trustworthiness or pain points of technology, and human-in-the-loop cyber intrusion detection. Understanding the limitations and generalizability of physiological responses between technology domains is important, as the usefulness and relevance of results are affected by fundamental characteristics of the technology domains, corresponding use cases, and socially acceptable behaviors of the technologies. Although investigations into the neural correlates of trust in automation have grown in popularity, understanding of these correlates remains limited, and the vast majority of current investigations concern cyber or decision-aid technologies. Thus, the relevance of these correlates as a deployable measure for other domains and the robustness of the measures to varying use cases are unknown. As such, this manuscript discusses the current state of knowledge on trust perceptions, factors that influence trust, and the extent to which the corresponding neural correlates of trust generalize between domains.
Affiliation(s)
- Sarah K. Hopko
- Neuroergonomics Lab, Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, United States
10. Wang LR, Malcolm J, Arnaout A, Humphrey-Murto S, LaDonna KA. Real-World Patient Experience of Long-Term Hybrid Closed-Loop Insulin Pump Use. Can J Diabetes 2021;45:750-756.e3. PMID: 33958309; DOI: 10.1016/j.jcjd.2021.02.006.
Abstract
OBJECTIVES: Understanding of patient experiences and adaptations to hybrid closed-loop (HCL) pumps beyond the confines of short-term clinical trials is needed to inform best practices surrounding this emerging technology. We investigated long-term, real-world patient experiences with HCL technology.
METHODS: In semistructured interviews, 21 adults with type 1 diabetes at a single Canadian tertiary diabetes centre discussed their transition to use of Medtronic MiniMed 670G auto-mode. Interviews were audio-recorded, transcribed and analyzed iteratively to identify emerging themes.
RESULTS: Participants' mean age was 50±13 years, 12 of the 21 participants were female, baseline glycated hemoglobin (A1C) was 7.9±1.0% and auto-mode duration was 9.3±4.6 months. Three had discontinued auto-mode. Most participants praised auto-mode for reducing hypoglycemia, stabilizing glucose overnight and improving A1C, while also reporting frustration with frequency of alarms and user input, sensor quality and inadequate response to hyperglycemia. Participants with the highest baseline A1Cs (8.8% to 9.8%) tended to report immense satisfaction and trust in auto-mode, meeting their primary expectations of improved glycemic control. In contrast, participants with controlled diabetes (A1C <7.5%) had hoped to offload active management, but experienced significant cognitive and emotional labour associated with relinquishing control during suboptimal auto-mode performance. Participants were commonly aware of workarounds to "trick" the pump, and almost all participants with A1C <7.5% tried at least 1 workaround.
CONCLUSIONS: In the real-world setting, patients' goals and satisfaction with auto-mode appeared to vary considerably with their baseline diabetes control. Patients with the most suboptimal glycemic control described the greatest benefits and easiest adaptation process, challenging commonly held assumptions for patient selection for pump therapy.
Affiliation(s)
- Linda R Wang
- Division of Endocrinology and Metabolism, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Janine Malcolm
- Division of Endocrinology and Metabolism, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Amel Arnaout
- Division of Endocrinology and Metabolism, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Susan Humphrey-Murto
- Division of Rheumatology, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada
- Kori A LaDonna
- Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada; Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.
11. Chen Y, Zahedi FM, Abbasi A, Dobolyi D. Trust calibration of automated security IT artifacts: A multi-domain study of phishing-website detection tools. Information & Management 2021. DOI: 10.1016/j.im.2020.103394.
12. Okamura K, Yamada S. Adaptive trust calibration for human-AI collaboration. PLoS One 2020;15:e0229132. PMID: 32084201; PMCID: PMC7034851; DOI: 10.1371/journal.pone.0229132.
Abstract
Safety and efficiency of human-AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting the autonomous system sometimes causes serious safety issues. Although many studies have focused on the importance of system transparency in maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains very limited. To fill these gaps, we propose a method of adaptive trust calibration consisting of a framework for detecting inappropriate calibration status by monitoring the user’s reliance behavior, and cognitive cues, called “trust calibration cues”, that prompt the user to reinitiate trust calibration. We evaluated our framework and four types of trust calibration cues in an online experiment using a drone simulator. A total of 116 participants performed pothole inspection tasks by using the drone’s automatic inspection, the reliability of which could fluctuate depending upon the weather conditions. The participants needed to decide whether to rely on automatic inspection or to do the inspection manually. The results showed that adaptively presenting simple cues could significantly promote trust calibration during over-trust.
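The detection idea described here, flagging over- or under-trust by comparing observed reliance on the automation against its current reliability and then presenting a calibration cue, can be sketched roughly as follows. The window size, threshold, and cue wording are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of reliance-based trust-calibration monitoring
# (illustrative thresholds; not the authors' implementation).
from collections import deque

class TrustCalibrationMonitor:
    def __init__(self, window=10, margin=0.2):
        self.recent_reliance = deque(maxlen=window)  # 1 = relied on automation
        self.margin = margin

    def update(self, relied_on_automation: bool, system_reliability: float):
        """Record one decision and return a calibration cue if needed."""
        self.recent_reliance.append(1.0 if relied_on_automation else 0.0)
        if len(self.recent_reliance) < self.recent_reliance.maxlen:
            return None  # not enough evidence yet
        reliance_rate = sum(self.recent_reliance) / len(self.recent_reliance)
        if reliance_rate > system_reliability + self.margin:
            return "over-trust cue: consider checking the result manually"
        if reliance_rate < system_reliability - self.margin:
            return "under-trust cue: the automation has been reliable lately"
        return None

# Example: reliability has dropped (e.g., bad weather), but the user keeps relying.
monitor = TrustCalibrationMonitor()
for _ in range(12):
    cue = monitor.update(relied_on_automation=True, system_reliability=0.6)
print(cue)
```

In the authors' framework the cues are presented adaptively only when miscalibration is detected; the sketch mirrors that by returning a cue string only when the reliance rate drifts outside the assumed margin.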
Affiliation(s)
- Kazuo Okamura
- Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan
- Seiji Yamada
- Department of Informatics, School of Multidisciplinary Sciences, The Graduate University for Advanced Studies (SOKENDAI), Tokyo, Japan
- Digital Content and Media Sciences Research Division, National Institute of Informatics, Tokyo, Japan
13. de Visser EJ, Beatty PJ, Estepp JR, Kohn S, Abubshait A, Fedota JR, McDonald CG. Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents. Front Hum Neurosci 2018;12:309. PMID: 30147648; PMCID: PMC6095965; DOI: 10.3389/fnhum.2018.00309.
Abstract
With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience a mismatch in expectations. Consequently, they will often place either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need for self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's degree of credibility and reliability was manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high and low reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
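The reported link between the oPe and subjective trust can be illustrated with a small sketch that averages an assumed post-event window of an observation-locked ERP and correlates it with trust ratings. The time window, data layout, and random data are illustrative assumptions, not the authors' exact analysis.

```python
# Sketch: correlate a mean ERP amplitude (assumed oPe-like window) with
# subjective trust ratings; data layout and window are illustrative only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_participants, n_samples, srate = 21, 500, 500  # 1 s epochs at 500 Hz
times = np.arange(n_samples) / srate

# erp[i]: participant i's average observation-locked waveform (hypothetical).
erp = rng.normal(0, 1, (n_participants, n_samples))
trust_ratings = rng.uniform(1, 7, n_participants)  # e.g., a 7-point scale

# Mean amplitude in an assumed 250-450 ms post-event window.
win = (times >= 0.25) & (times <= 0.45)
ope_amplitude = erp[:, win].mean(axis=1)

r, p = pearsonr(ope_amplitude, trust_ratings)
print(f"r={r:.2f}, p={p:.3f}")
```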
Affiliation(s)
- Ewart J. de Visser
- Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States
- Warfighter Effectiveness Research Center, Department of Behavioral Sciences and Leadership, United States Air Force Academy, Colorado Springs, CO, United States
- Paul J. Beatty
- Cognitive and Behavioral Neuroscience, Department of Psychology, George Mason University, Fairfax, VA, United States
- Justin R. Estepp
- 711 Human Performance Wing/RHCPA, Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
- Spencer Kohn
- Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States
- Abdulaziz Abubshait
- Human Factors and Applied Cognition, Department of Psychology, George Mason University, Fairfax, VA, United States
- John R. Fedota
- Intramural Research Program, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD, United States
- Craig G. McDonald
- Cognitive and Behavioral Neuroscience, Department of Psychology, George Mason University, Fairfax, VA, United States