1
Boyce V, Hawkins RD, Goodman ND, Frank MC. Interaction structure constrains the emergence of conventions in group communication. Proc Natl Acad Sci U S A 2024; 121:e2403888121. PMID: 38968102; PMCID: PMC11252989; DOI: 10.1073/pnas.2403888121.
Abstract
Real-world communication frequently requires language producers to address more than one comprehender at once, yet most psycholinguistic research focuses on one-on-one communication. As the audience size grows, interlocutors face new challenges that do not arise in dyads. They must consider multiple perspectives and weigh multiple sources of feedback to build shared understanding. Here, we ask which properties of the group's interaction structure facilitate successful communication. We used a repeated reference game paradigm in which directors instructed between one and five matchers to choose specific targets out of a set of abstract figures. Across 313 games (N = 1,319 participants), we manipulated several key constraints on the group's interaction, including the amount of feedback that matchers could give to directors and the availability of peer interaction between matchers. Across groups of different sizes and interaction constraints, describers produced increasingly efficient utterances and matchers made increasingly accurate selections. Critically, however, we found that smaller groups and groups with less-constrained interaction structures ("thick channels") showed stronger convergence to group-specific conventions than large groups with constrained interaction structures ("thin channels"), which struggled with convention formation. Overall, these results shed light on the core structural factors that enable communication to thrive in larger groups.
Affiliation(s)
- Veronica Boyce
- Psychology Department, Stanford University, Stanford, CA 94305
- Robert D. Hawkins
- Psychology Department, University of Wisconsin–Madison, Madison, WI 53715
- Noah D. Goodman
- Psychology Department, Stanford University, Stanford, CA 94305
- Computer Science Department, Stanford University, Stanford, CA 94305
2
Aguilar L, Gath-Morad M, Grübel J, Ermatinger J, Zhao H, Wehrli S, Sumner RW, Zhang C, Helbing D, Hölscher C. Experiments as Code and its application to VR studies in human-building interaction. Sci Rep 2024; 14:9883. PMID: 38688980; PMCID: PMC11061313; DOI: 10.1038/s41598-024-60791-3.
Abstract
Experiments as Code (ExaC) is a concept for reproducible, auditable, debuggable, reusable, and scalable experiments. Experiments are a crucial tool to understand Human-Building Interactions (HBI) and build a coherent theory around them. However, a common concern for experiments is their auditability and reproducibility. Experiments are usually designed, provisioned, managed, and analyzed by diverse teams of specialists (e.g., researchers, technicians, engineers) and may require many resources (e.g., cloud infrastructure, specialized equipment). Although researchers strive to document experiments accurately, this process is often lacking. Consequently, it is difficult to reproduce these experiments. Moreover, when it is necessary to create a similar experiment, the "wheel is very often reinvented". It appears easier to start from scratch than to try to reuse existing work. Thus, valuable embedded best practices and previous experiences are lost. In behavioral studies, such as in HBI, this has contributed to the reproducibility crisis. To tackle these challenges, we propose the ExaC paradigm, which not only documents the whole experiment, but additionally provides the automation code to provision, deploy, manage, and analyze the experiment. To this end, we define the ExaC concept, provide a taxonomy for the components of a practical implementation, and provide a proof of concept with an HBI desktop VR experiment that demonstrates the benefits of its "as code" representation, that is, reproducibility, auditability, debuggability, reusability, and scalability.
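The "as code" idea in this abstract is easiest to picture in miniature. The sketch below is not the authors' ExaC toolchain; it is a hypothetical Python illustration (all names, fields, and resource types are invented) of how a single experiment definition can serve both as documentation and as the automation that provisions, runs, and analyzes the study.

```python
"""Hypothetical 'experiment as code' sketch (not the authors' ExaC implementation)."""
from dataclasses import dataclass, asdict, field
import json
import statistics

@dataclass
class ExperimentSpec:
    """Single source of truth: documents the study and drives the automation."""
    name: str
    n_participants: int
    conditions: list[str]
    vr_scene: str                      # e.g., an identifier for a VR scene/asset bundle
    analysis_metric: str = "completion_time"
    resources: dict = field(default_factory=lambda: {"vr_stations": 2})

def provision(spec: ExperimentSpec) -> dict:
    """Stand-in for infrastructure-as-code provisioning (cloud, lab PCs, headsets)."""
    return {"allocated": spec.resources, "scene_deployed": spec.vr_scene}

def run(spec: ExperimentSpec, infra: dict) -> list[dict]:
    """Stand-in for managed data collection; here we only fabricate placeholder trials."""
    return [{"participant": i,
             "condition": spec.conditions[i % len(spec.conditions)],
             spec.analysis_metric: 30.0 + i}
            for i in range(spec.n_participants)]

def analyze(spec: ExperimentSpec, trials: list[dict]) -> dict:
    """Scripted, re-runnable analysis: per-condition mean of the chosen metric."""
    return {cond: statistics.mean(t[spec.analysis_metric]
                                  for t in trials if t["condition"] == cond)
            for cond in spec.conditions}

if __name__ == "__main__":
    spec = ExperimentSpec(name="wayfinding-pilot", n_participants=6,
                          conditions=["open_plan", "corridor"], vr_scene="building_A")
    infra = provision(spec)
    results = analyze(spec, run(spec, infra))
    # The spec itself is the audit trail: archive it next to the results.
    print(json.dumps({"spec": asdict(spec), "infra": infra, "results": results}, indent=2))
```

Because everything hangs off one declarative spec, re-running or auditing the experiment reduces to re-running and archiving this one file, which is the reproducibility and auditability property the abstract emphasizes.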
Affiliation(s)
- Leonel Aguilar
- Chair of Cognitive Science, ETH Zürich, Zurich, Switzerland
- Data Science, Systems and Services Group, ETH Zürich, Zurich, Switzerland
- Michal Gath-Morad
- Chair of Cognitive Science, ETH Zürich, Zurich, Switzerland
- Cambridge Cognitive Architecture, University of Cambridge, Cambridge, UK
- Jascha Grübel
- Chair of Cognitive Science, ETH Zürich, Zurich, Switzerland
- Geo-information Science and Remote Sensing Laboratory, Wageningen University, Wageningen, The Netherlands
- Game Technology Center, ETH Zürich, Zurich, Switzerland
- Visual Computing Group, Harvard University, Cambridge, USA
- Center for Sustainable Future Mobility, ETH Zürich, Zurich, Switzerland
- Geoinformation Engineering Group, ETH Zürich, Zurich, Switzerland
- Hantao Zhao
- School of Cyber Science and Engineering, Southeast University, Nanjing, China
- Purple Mountain Laboratories, Nanjing, China
- Stefan Wehrli
- Decision Science Laboratory, ETH Zürich, Zurich, Switzerland
- Robert W Sumner
- Geo-information Science and Remote Sensing Laboratory, Wageningen University, Wageningen, The Netherlands
- Ce Zhang
- Data Science, Systems and Services Group, ETH Zürich, Zurich, Switzerland
- Dirk Helbing
- Decision Science Laboratory, ETH Zürich, Zurich, Switzerland
- Chair of Computational Social Science, ETH Zürich, Zurich, Switzerland
- Christoph Hölscher
- Chair of Cognitive Science, ETH Zürich, Zurich, Switzerland
- Decision Science Laboratory, ETH Zürich, Zurich, Switzerland
3
Almaatouq A, Alsobay M, Yin M, Watts DJ. The Effects of Group Composition and Dynamics on Collective Performance. Top Cogn Sci 2024; 16:302-321. PMID: 37925669; DOI: 10.1111/tops.12706.
Abstract
As organizations gravitate to group-based structures, the problem of improving performance through judicious selection of group members has preoccupied scientists and managers alike. However, which individual attributes best predict group performance remains poorly understood. Here, we describe a preregistered experiment in which we simultaneously manipulated four widely studied attributes of group composition: skill level, skill diversity, social perceptiveness, and cognitive style diversity. We find that while the average skill level of group members, skill diversity, and social perceptiveness are significant predictors of group performance, skill level dominates all other factors combined. Additionally, we explore the relationship between patterns of collaborative behavior and performance outcomes and find that any potential gains in solution quality from additional communication between the group members are outweighed by the overhead time cost, leading to lower overall efficiency. However, groups exhibiting more "turn-taking" behavior are considerably faster and thus more efficient. Finally, contrary to our expectation, we find that group compositional factors (i.e., skill level and social perceptiveness) are associated neither with the amount of communication between group members nor with turn-taking dynamics.
Affiliation(s)
- Mohammed Alsobay
- Sloan School of Management, Massachusetts Institute of Technology
- Ming Yin
- Department of Computer Science, Purdue University
- Duncan J Watts
- Department of Computer and Information Science, University of Pennsylvania
- The Annenberg School of Communication, University of Pennsylvania
- Operations, Information, and Decisions Department, University of Pennsylvania
4
Jahani E, Gallagher N, Merhout F, Cavalli N, Guilbeault D, Leng Y, Bail CA. An online experiment during the 2020 US-Iran crisis shows that exposure to common enemies can increase political polarization. Sci Rep 2022; 12:19304. PMID: 36369344; PMCID: PMC9652360; DOI: 10.1038/s41598-022-23673-0.
Abstract
A longstanding theory indicates that the threat of a common enemy can mitigate conflict between members of rival groups. We tested this hypothesis in a pre-registered experiment where 1,670 Republicans and Democrats in the United States were asked to complete an online social learning task with a bot that was labeled as a member of the opposing party. Prior to this task, we exposed respondents to primes about (a) a common enemy (involving Iran and Russia); (b) a patriotic event; or (c) a neutral, apolitical prime. Though we observed no significant differences in the behavior of Democrats as a result of priming, we found that Republicans, and particularly those with very strong conservative views, were significantly less likely to learn from Democrats when primed about a common enemy. Because our study was in the field during the 2020 Iran Crisis, we were able to further evaluate this finding via a natural experiment: Republicans who participated in our study after the crisis were even less influenced by the beliefs of Democrats than those Republicans who participated before this event. These findings indicate that common enemies may not reduce inter-group conflict in highly polarized societies, and contribute to a growing number of studies that find evidence of asymmetric political polarization in the United States. We conclude by discussing the implications of these findings for research in social psychology, political conflict, and the rapidly expanding field of computational social science.
Affiliation(s)
- Eaman Jahani
- Department of Statistics, University of California, Berkeley, 367 Evans Hall, Berkeley, CA 94720-3860, USA
- Natalie Gallagher
- Department of Psychology, Princeton University, South Dr, Princeton, NJ 08540, USA
- Friedolin Merhout
- Department of Sociology, University of Copenhagen, 1353 Copenhagen K, Denmark
- Center for Social Data Science, University of Copenhagen, Øster Farimagsgade 5, 1353 Copenhagen, Denmark
- Nicolo Cavalli
- Carlo F. Dondena Centre, Bocconi University, 1 Via Guglielmo Röntgen, 20136 Milan, Italy
- Nuffield College and Department of Sociology, Oxford University, 1 New Road, Oxford, OX1 1NF, UK
- Douglas Guilbeault
- Haas School of Business, University of California, Berkeley, 2220 Piedmont Ave, Berkeley, CA 94720, USA
- Yan Leng
- McCombs School of Business, University of Texas at Austin, 300 MLK Jr., Austin, TX 78712, USA
- Christopher A. Bail
- Department of Sociology, Duke University, 254 Soc. Psych Hall, Durham, NC 27708, USA
- Sanford School of Public Policy, Duke University, Durham, NC 27708, USA
5
A variational-autoencoder approach to solve the hidden profile task in hybrid human-machine teams. PLoS One 2022; 17:e0272168. PMID: 35917306; PMCID: PMC9345362; DOI: 10.1371/journal.pone.0272168.
Abstract
Algorithmic agents, popularly known as bots, have been accused of spreading misinformation online and supporting fringe views. Collectives are vulnerable to hidden-profile environments, where task-relevant information is unevenly distributed across individuals. To do well in this task, information aggregation must equally weigh minority and majority views against simple but inefficient majority-based decisions. In an experimental design, human volunteers working in teams of 10 were asked to solve a hidden-profile prediction task. We trained a variational auto-encoder (VAE) to learn people’s hidden information distribution by observing how people’s judgments correlated over time. A bot was designed to sample responses from the VAE latent embedding to selectively support opinions proportionally to their under-representation in the team. We show that the presence of a single bot (representing 10% of team members) can significantly increase the polarization between minority and majority opinions by making minority opinions less prone to social influence. Although the effects on hybrid team performance were small, the bot presence significantly influenced opinion dynamics and individual accuracy. These findings show that self-supervised machine learning techniques can be used to design algorithms that can sway opinion dynamics and group outcomes.
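The pipeline this abstract describes can be pictured with a small toy model. The following PyTorch sketch is illustrative only, not the authors' architecture or training setup: the layer sizes, the synthetic judgment data, and the bot's decision rule are all invented. It shows the two ingredients in miniature: a variational auto-encoder fit to vectors of team judgments, and a bot that samples from the learned latent model and preferentially backs the under-represented opinion.

```python
"""Toy sketch of a VAE over team judgment vectors plus a minority-supporting bot."""
import torch
import torch.nn as nn
import torch.nn.functional as F

N_HUMANS = 9   # the bot is the 10th "member"
LATENT = 2

class JudgmentVAE(nn.Module):
    def __init__(self, n_members: int = N_HUMANS, latent: int = LATENT):
        super().__init__()
        self.enc = nn.Linear(n_members, 16)
        self.mu, self.logvar = nn.Linear(16, latent), nn.Linear(16, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 16), nn.ReLU(),
                                 nn.Linear(16, n_members))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def loss_fn(logits, x, mu, logvar):
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Synthetic training data: each row is one question's yes/no judgments from 9 humans,
# with correlated answers standing in for the hidden-profile information structure.
torch.manual_seed(0)
base = torch.randint(0, 2, (200, 1)).float()
x = (base.repeat(1, N_HUMANS) + (torch.rand(200, N_HUMANS) < 0.25).float()) % 2

vae = JudgmentVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-2)
for _ in range(300):
    logits, mu, logvar = vae(x)
    opt.zero_grad()
    loss_fn(logits, x, mu, logvar).backward()
    opt.step()

def bot_response(observed: torch.Tensor, n_samples: int = 64) -> int:
    """Sample plausible judgment patterns from the latent prior, then vote for the
    currently under-represented opinion with probability that grows with how far the
    humans' split departs from the model's estimate."""
    with torch.no_grad():
        z = torch.randn(n_samples, LATENT)
        model_yes_rate = torch.sigmoid(vae.dec(z)).mean().item()  # decoded 'yes' prevalence
    human_yes_rate = observed.mean().item()
    minority = 1 if human_yes_rate < 0.5 else 0
    p_support = min(1.0, 0.5 + abs(model_yes_rate - human_yes_rate))
    return minority if torch.rand(1).item() < p_support else 1 - minority

print("bot votes:", bot_response(x[0]))
```

In the actual study the latent model was fit to how people's judgments correlated over time; the synthetic correlated data here merely stands in for that structure.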
6
Brinkmann L, Gezerli D, Kleist KV, Müller TF, Rahwan I, Pescetelli N. Hybrid social learning in human-algorithm cultural transmission. Philos Trans A Math Phys Eng Sci 2022; 380:20200426. PMID: 35599570; PMCID: PMC9126184; DOI: 10.1098/rsta.2020.0426.
Abstract
Humans are impressive social learners. Researchers of cultural evolution have studied the many biases shaping cultural transmission by selecting who we copy from and what we copy. One hypothesis is that, with the advent of superhuman algorithms, a hybrid type of cultural transmission, namely from algorithms to humans, may have long-lasting effects on human culture. We suggest that algorithms might show (either by learning or by design) different behaviours, biases and problem-solving abilities than their human counterparts. In turn, algorithmic-human hybrid problem solving could foster better decisions in environments where diversity in problem-solving strategies is beneficial. This study asks whether algorithms with complementary biases to humans can boost performance in a carefully controlled planning task, and whether humans further transmit algorithmic behaviours to other humans. We conducted a large behavioural study and an agent-based simulation to test the performance of transmission chains with human and algorithmic players. We show that the algorithm boosts the performance of immediately following participants, but this gain is quickly lost for participants further down the chain. Our findings suggest that algorithms can improve performance, but human bias may hinder algorithmic solutions from being preserved. This article is part of the theme issue 'Emergent phenomena in complex physical and socio-technical systems: from cells to societies'.
Affiliation(s)
- L. Brinkmann
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- D. Gezerli
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- K. V. Kleist
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- T. F. Müller
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- I. Rahwan
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- N. Pescetelli
- Center for Humans and Machines, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- Department of Humanities and Social Sciences, New Jersey Institute of Technology, Newark, NJ, USA
7
Simulating behavior to help researchers build experiments. Behav Res Methods 2022. PMID: 35768741; DOI: 10.3758/s13428-022-01899-0.
Abstract
Testing that an experiment works as intended is critical for identifying design problems and catching technical errors that could invalidate the results. Testing is also time-consuming because of the need to manually run the experiment. This makes testing the experiment costly for researchers, and therefore testing is less comprehensive than in other kinds of software development where tools to automate and speed up the testing process are widely used. In this paper, we describe an approach that substantially reduces the time required to test behavioral experiments: automated simulation of participant behavior. We describe how software that is used to build experiments can use information contained in the experiment's code to automatically generate plausible participant behavior. We demonstrate this through an implementation using jsPsych. We then describe four potential scenarios where automated simulation of participant behavior can improve the way researchers build experiments. Each scenario includes a demo and accompanying code. The full set of examples can be found at https://jspsych.github.io/simulation-examples/.
8
Task complexity moderates group synergy. Proc Natl Acad Sci U S A 2021; 118:e2101062118. PMID: 34479999; PMCID: PMC8433503; DOI: 10.1073/pnas.2101062118.
Abstract
Scientists and managers alike have been preoccupied with the question of whether and, if so, under what conditions groups of interacting problem solvers outperform autonomous individuals. Here we describe an experiment in which individuals and groups were evaluated on a series of tasks of varying complexity. We find that groups are as fast as the fastest individual and more efficient than the most efficient individual when the task is complex but not when the task is simple. We then precisely quantify synergistic gains and process losses associated with interacting groups, finding that the balance between the two depends on complexity. Our study has the potential to reconcile conflicting findings about group synergy in previous work.

Complexity—defined in terms of the number of components and the nature of the interdependencies between them—is clearly a relevant feature of all tasks that groups perform. Yet the role that task complexity plays in determining group performance remains poorly understood, in part because no clear language exists to express complexity in a way that allows for straightforward comparisons across tasks. Here we avoid this analytical difficulty by identifying a class of tasks for which complexity can be varied systematically while keeping all other elements of the task unchanged. We then test the effects of task complexity in a preregistered two-phase experiment in which 1,200 individuals were evaluated on a series of tasks of varying complexity (phase 1) and then randomly assigned to solve similar tasks either in interacting groups or as independent individuals (phase 2). We find that interacting groups are as fast as the fastest individual and more efficient than the most efficient individual for complex tasks but not for simpler ones. Leveraging our highly granular digital data, we define and precisely measure group process losses and synergistic gains and show that the balance between the two switches signs at intermediate values of task complexity. Finally, we find that interacting groups generate more solutions more rapidly and explore the solution space more broadly than independent problem solvers, finding higher-quality solutions than all but the highest-scoring individuals.
9
Noriega A, Meizner D, Camacho D, Enciso J, Quiroz-Mercado H, Morales-Canton V, Almaatouq A, Pentland A. Screening Diabetic Retinopathy Using an Automated Retinal Image Analysis System in Independent and Assistive Use Cases in Mexico: Randomized Controlled Trial. JMIR Form Res 2021; 5:e25290. PMID: 34435963; PMCID: PMC8430849; DOI: 10.2196/25290.
Abstract
BACKGROUND: The automated screening of patients at risk of developing diabetic retinopathy represents an opportunity to improve their midterm outcome and lower the public expenditure associated with direct and indirect costs of common sight-threatening complications of diabetes.
OBJECTIVE: This study aimed to develop and evaluate the performance of an automated deep learning-based system to classify retinal fundus images as referable and nonreferable diabetic retinopathy cases, from international and Mexican patients. In particular, we aimed to evaluate the performance of the automated retina image analysis (ARIA) system under an independent scheme (ie, only ARIA screening) and 2 assistive schemes (ie, hybrid ARIA plus ophthalmologist screening), using a web-based platform for remote image analysis to determine and compare the sensitivity and specificity of the 3 schemes.
METHODS: A randomized controlled experiment was performed where 17 ophthalmologists were asked to classify a series of retinal fundus images under 3 different conditions. The conditions were to (1) screen the fundus image by themselves (solo); (2) screen the fundus image after exposure to the retina image classification of the ARIA system (ARIA answer); and (3) screen the fundus image after exposure to the classification of the ARIA system, as well as its level of confidence and an attention map highlighting the most important areas of interest in the image according to the ARIA system (ARIA explanation). The ophthalmologists' classification in each condition and the result from the ARIA system were compared against a gold standard generated by consulting and aggregating the opinion of 3 retina specialists for each fundus image.
RESULTS: The ARIA system was able to classify referable vs nonreferable cases with an area under the receiver operating characteristic curve of 98%, a sensitivity of 95.1%, and a specificity of 91.5% for international patient cases. There was an area under the receiver operating characteristic curve of 98.3%, a sensitivity of 95.2%, and a specificity of 90% for Mexican patient cases. The ARIA system performance was more successful than the average performance of the 17 ophthalmologists enrolled in the study. Additionally, the results suggest that the ARIA system can be useful as an assistive tool, as sensitivity was significantly higher in the experimental condition where ophthalmologists were exposed to the ARIA system's answer prior to their own classification (93.3%), compared with the sensitivity of the condition where participants assessed the images independently (87.3%; P=.05).
CONCLUSIONS: These results demonstrate that both independent and assistive use cases of the ARIA system present, for Latin American countries such as Mexico, a substantial opportunity toward expanding the monitoring capacity for the early detection of diabetes-related blindness.
Affiliation(s)
- Alejandro Noriega
- MIT Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
- Prosperia Salud, Mexico City, Mexico
- Daniela Meizner
- Retina Department, Asociación para Evitar la Ceguera en México, Mexico City, Mexico
- Dalia Camacho
- Prosperia Salud, Mexico City, Mexico
- Engineering Academic Division, Instituto Tecnológico Autónomo de México, Mexico City, Mexico
- Jennifer Enciso
- Prosperia Salud, Mexico City, Mexico
- Posgrado de Ciencias Bioquímicas, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Hugo Quiroz-Mercado
- Retina Department, Asociación para Evitar la Ceguera en México, Mexico City, Mexico
- Abdullah Almaatouq
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA, United States
- Alex Pentland
- MIT Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA, United States
10
HuGoS: a virtual environment for studying collective human behavior from a swarm intelligence perspective. Swarm Intell 2021. DOI: 10.1007/s11721-021-00199-1.
Abstract
Swarm intelligence studies self-organized collective behavior resulting from interactions between individuals, typically in animals and artificial agents. Some studies from cognitive science have also demonstrated self-organization mechanisms in humans, often in pairs. Further research into the topic of human swarm intelligence could provide a better understanding of new behaviors and larger human collectives. This requires studies with multiple human participants in controlled experiments in a wide variety of scenarios, where a rich scope of possible interactions can be isolated and captured. In this paper, we present HuGoS—‘Humans Go Swarming’—a multi-user virtual environment implemented using the Unity game development platform, as a comprehensive tool for experimentation in human swarm intelligence. We demonstrate the functionality of HuGoS with naïve participants in a browser-based implementation, in a coordination task involving collective decision-making, messaging and signaling, and stigmergy. By making HuGoS available as open-source software, we hope to facilitate further research in the field of human swarm intelligence.
11
Cedeno-Mieles V, Hu Z, Ren Y, Deng X, Contractor N, Ekanayake S, Epstein JM, Goode BJ, Korkmaz G, Kuhlman CJ, Machi D, Macy M, Marathe MV, Ramakrishnan N, Saraf P, Self N. Data analysis and modeling pipelines for controlled networked social science experiments. PLoS One 2020; 15:e0242453. PMID: 33232347; PMCID: PMC7685486; DOI: 10.1371/journal.pone.0242453.
Abstract
There is considerable interest in networked social science experiments for understanding human behavior at scale. Significant effort is required to perform data analytics on experimental outputs and for computational modeling of custom experiments. Moreover, experiments and modeling are often performed in a cycle, enabling iterative experimental refinement and data modeling to uncover interesting insights and to generate/refute hypotheses about social behaviors. The current practice for social analysts is to develop tailor-made computer programs and analytical scripts for experiments and modeling. This often leads to inefficiencies and duplication of effort. In this work, we propose a pipeline framework to take a significant step towards overcoming these challenges. Our contribution is to describe the design and implementation of a software system to automate many of the steps involved in analyzing social science experimental data, building models to capture the behavior of human subjects, and providing data to test hypotheses. The proposed pipeline framework consists of formal models, formal algorithms, and theoretical models as the basis for the design and implementation. We propose a formal data model, such that if an experiment can be described in terms of this model, then our pipeline software can be used to analyze data efficiently. The merits of the proposed pipeline framework are demonstrated through several case studies of networked social science experiments.
Affiliation(s)
- Vanessa Cedeno-Mieles
- Department of Computer Science, Virginia Tech, Blacksburg, VA, United States of America
- Escuela Superior Politécnica del Litoral, ESPOL, Guayaquil, Ecuador
- Zhihao Hu
- Department of Statistics, Virginia Tech, Blacksburg, VA, United States of America
- Yihui Ren
- Computational Science Initiative, Brookhaven National Laboratory, Upton, NY, United States of America
- Xinwei Deng
- Department of Statistics, Virginia Tech, Blacksburg, VA, United States of America
- Noshir Contractor
- Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, IL, United States of America
- Saliya Ekanayake
- Lawrence Berkeley National Laboratory, Berkeley, CA, United States of America
- Joshua M. Epstein
- Department of Epidemiology, New York University, New York, NY, United States of America
- Brian J. Goode
- Biocomplexity Institute, Virginia Tech, Blacksburg, VA, United States of America
- Gizem Korkmaz
- Biocomplexity Institute & Initiative, University of Virginia, Charlottesville, VA, United States of America
- Chris J. Kuhlman
- Biocomplexity Institute & Initiative, University of Virginia, Charlottesville, VA, United States of America
- Dustin Machi
- Biocomplexity Institute & Initiative, University of Virginia, Charlottesville, VA, United States of America
- Michael Macy
- Department of Sociology, Cornell University, Ithaca, NY, United States of America
- Madhav V. Marathe
- Biocomplexity Institute & Initiative, University of Virginia, Charlottesville, VA, United States of America
- Department of Computer Science, University of Virginia, Charlottesville, VA, United States of America
- Naren Ramakrishnan
- Department of Computer Science, Virginia Tech, Blacksburg, VA, United States of America
- Discovery Analytics Center, Virginia Tech, Blacksburg, VA, United States of America
- Parang Saraf
- Discovery Analytics Center, Virginia Tech, Blacksburg, VA, United States of America
- Nathan Self
- Discovery Analytics Center, Virginia Tech, Blacksburg, VA, United States of America