1. Harris LT. The Neuroscience of Human and Artificial Intelligence Presence. Annu Rev Psychol 2024; 75:433-466. [PMID: 37906951] [DOI: 10.1146/annurev-psych-013123-123421]
Abstract
Two decades of social neuroscience and neuroeconomics research illustrate the brain mechanisms that are engaged when people consider human beings, often in comparison to considering artificial intelligence (AI) as a nonhuman control. AI as an experimental control preserves agency and facilitates social interactions but lacks a human presence, providing insight into brain mechanisms that are engaged by human presence and the presence of AI. Here, I review this literature to determine how the brain instantiates human and AI presence across social perception and decision-making paradigms commonly used to realize a social context. People behave toward humans differently than they do toward AI. Moreover, brain regions more engaged by humans compared to AI extend beyond the social cognition brain network to all parts of the brain, and the brain sometimes is engaged more by AI than by humans. Finally, I discuss gaps in the literature, limitations in current neuroscience approaches, and how an understanding of the brain correlates of human and AI presence can inform social science in the wild.
Affiliations:
- Lasana T Harris: Department of Experimental Psychology, University College London, London, United Kingdom; Alan Turing Institute, London, United Kingdom
2. Schultz J, Frith CD. Animacy and the prediction of behaviour. Neurosci Biobehav Rev 2022; 140:104766. [DOI: 10.1016/j.neubiorev.2022.104766]
3. Thellman S, de Graaf M, Ziemke T. Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3526112]
Abstract
The topic of mental state attribution to robots has been approached by researchers from a variety of disciplines, including psychology, neuroscience, computer science, and philosophy. As a consequence, the empirical studies that have been conducted so far exhibit considerable diversity in terms of how the phenomenon is described and how it is approached from a theoretical and methodological standpoint. This literature review addresses the need for a shared scientific understanding of mental state attribution to robots by systematically and comprehensively collating conceptions, methods, and findings from 155 empirical studies across multiple disciplines. The findings of the review include that: (1) the terminology used to describe mental state attribution to robots is diverse but largely homogeneous in usage; (2) the tendency to attribute mental states to robots is determined by factors such as the age and motivation of the human as well as the behavior, appearance, and identity of the robot; (3) there is a computer < robot < human pattern in the tendency to attribute mental states that appears to be moderated by the presence of socially interactive behavior; (4) there are conflicting findings in the empirical literature that stem from different sources of evidence, including self-report and non-verbal behavioral or neurological data. The review contributes toward more cumulative research on the topic and opens up a transdisciplinary discussion about the nature of the phenomenon and what types of research methods are appropriate for investigation.
4. Jiang Q, Wang Q, Li H. The neural and cognitive time course of intention reasoning: Electrophysiological evidence from ERPs. Q J Exp Psychol (Hove) 2020; 74:733-745. [PMID: 33124938] [DOI: 10.1177/1747021820974213]
Abstract
Intention is a typical mental state in the theory of mind. However, to date, there have been theoretical debates on the conceptual structure of intention. The neural and cognitive time course of intention reasoning remains unclear. The present event-related potential (ERP) study had two purposes: first, to investigate the neural correlates of intention reasoning based on a differentiated conceptual structure distinguishing desire and intention; second, to investigate the neural basis of intention reasoning for different agents. Thus, we compared the neural activity elicited by intention reasoning for self and for others when the intention matched or mismatched the desire of the agent. The results revealed that three ERP components distinguished among different types of intention reasoning. A negative-going ERP deflection with right frontal distribution between 400 and 500 ms might reflect the cognitive conflict involved in intention reasoning, a right frontal late positive component might be associated with the categorisation of agents, and a centro-parietal late slow wave might indicate the conceptual mental operations associated with decoupling mechanisms in intention processing. These findings implied the neural and cognitive time course of intention reasoning and provided neural evidence for the differentiated conception of intention.
Affiliations:
- Qin Jiang: Research Centre of Psychology and Education, School of Marxism, Guangxi University, Nanning, China
- Qi Wang: Department of Psychology, Sun Yat-Sen University, Guangzhou, China
- Hong Li: College of Psychology and Sociology, Shenzhen University, Shenzhen, China
5. Xiong X, Yu Z, Ma T, Luo N, Wang H, Lu X, Fan H. Weighted Brain Network Metrics for Decoding Action Intention Understanding Based on EEG. Front Hum Neurosci 2020; 14:232. [PMID: 32714168] [PMCID: PMC7343772] [DOI: 10.3389/fnhum.2020.00232]
Abstract
Background: Understanding the action intentions of others is important for social and human-robot interactions. Recently, many state-of-the-art approaches have been proposed for decoding action intention understanding. Although these methods have some advantages, it is still necessary to design other tools that can more efficiently classify the action intention understanding signals. New Method: Based on EEG, we first applied phase lag index (PLI) and weighted phase lag index (WPLI) to construct functional connectivity matrices in five frequency bands and 63 micro-time windows, then calculated nine graph metrics from these matrices and subsequently used the network metrics as features to classify different brain signals related to action intention understanding. Results: Compared with the single methods (PLI or WPLI), the combined method (PLI+WPLI) performs markedly better. Most of the average classification accuracies exceed 70%, and some of them approach 80%. In statistical tests of the brain networks, many significantly different edges appear in the frontal, occipital, parietal, and temporal regions. Conclusions: Weighted brain networks can effectively retain data information. The integrated method proposed in this study is extremely effective for investigating action intention understanding. Both the mirror neuron and mentalizing systems participate as collaborators in the process of action intention understanding.
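For readers who want to see how the connectivity features above are typically derived, here is a minimal sketch, not the authors' code, of PLI and WPLI for one pair of band-filtered EEG channels; the function name and the use of SciPy's Hilbert transform are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def pli_wpli(x, y):
    """Phase lag index (PLI) and weighted phase lag index (WPLI) for two
    1-D, band-filtered signals of equal length."""
    sx, sy = hilbert(x), hilbert(y)          # analytic signals
    imag = np.imag(sx * np.conj(sy))         # imaginary part of the cross-spectrum, per sample
    pli = np.abs(np.mean(np.sign(imag)))     # consistency of the sign of the phase difference
    wpli = np.abs(np.mean(imag)) / np.mean(np.abs(imag))  # magnitude-weighted variant
    return pli, wpli
```

Applying such a function to every channel pair within each frequency band and time window yields the weighted connectivity matrices from which graph metrics can then be extracted.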
Affiliations:
- Xingliang Xiong: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, China
- Zhenhua Yu: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an, China
- Tian Ma: College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an, China
- Ning Luo: Institute of Software, Chinese Academy of Sciences, Beijing, China
- Haixian Wang: Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science & Medical Engineering, Southeast University, Nanjing, China
- Xuesong Lu: Department of Rehabilitation, Zhongda Hospital, Southeast University, Nanjing, China
- Hui Fan: Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Shandong Technology and Business University, Yantai, China
6. Zheltyakova M, Kireev M, Korotkov A, Medvedev S. Neural mechanisms of deception in a social context: an fMRI replication study. Sci Rep 2020; 10:10713. [PMID: 32612101] [PMCID: PMC7329834] [DOI: 10.1038/s41598-020-67721-z]
Abstract
Deception is a form of manipulation aimed at misleading another person by conveying false or truthful messages. Manipulative truthful statements could be considered as sophisticated deception and elicit an increased cognitive load. However, only one fMRI study reported its neural correlates. To provide independent evidence for sophisticated deception, we carried out an fMRI study replicating the experimental paradigm and Bayesian statistical approach utilized in that study. During the experiment, participants played a game against an opponent by sending deliberate deceptive or honest messages. Compared to truth-telling, deceptive intentions, regardless of how they were fulfilled, were associated with increased BOLD signals in the bilateral temporoparietal junction (TPJ), left precuneus, and right superior temporal sulcus (STS). The right TPJ participates in the attribution of mental states, acting in a social context, and moral behaviour. Moreover, the other revealed brain areas have been considered nodes in the theory of mind brain neural system. Therefore, the obtained results reflect an increased demand for socio‑cognitive processes associated with deceptive intentions. We replicated the original study showing the involvement of the right TPJ and expanded upon it by revealing the involvement of the left TPJ, left precuneus and right STS in actions with deceptive intentions.
Affiliations:
- Maya Zheltyakova: N.P. Bechtereva Institute of the Human Brain, Russian Academy of Sciences, St. Petersburg, Russia
- Maxim Kireev: N.P. Bechtereva Institute of the Human Brain, Russian Academy of Sciences, St. Petersburg, Russia
- Alexander Korotkov: N.P. Bechtereva Institute of the Human Brain, Russian Academy of Sciences, St. Petersburg, Russia
- Svyatoslav Medvedev: N.P. Bechtereva Institute of the Human Brain, Russian Academy of Sciences, St. Petersburg, Russia
7. Xiong X, Yu Z, Ma T, Wang H, Lu X, Fan H. Classifying action intention understanding EEG signals based on weighted brain network metric features. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101893]
8. Wiese E, Abubshait A, Azarian B, Blumberg EJ. Brain stimulation to left prefrontal cortex modulates attentional orienting to gaze cues. Philos Trans R Soc Lond B Biol Sci 2020; 374:20180430. [PMID: 30852996] [DOI: 10.1098/rstb.2018.0430]
Abstract
In social interactions, we rely on non-verbal cues like gaze direction to understand the behaviour of others. How we react to these cues is determined by the degree to which we believe that they originate from an entity with a mind capable of having internal states and showing intentional behaviour, a process called mind perception. While prior work has established a set of neural regions linked to mind perception, research has just begun to examine how mind perception affects social-cognitive mechanisms like gaze processing on a neuronal level. In the current experiment, participants performed a social attention task (i.e. attentional orienting to gaze cues) with either a human or a robot agent (i.e. manipulation of mind perception) while transcranial direct current stimulation (tDCS) was applied to prefrontal and temporo-parietal brain areas. The results show that temporo-parietal stimulation did not modulate mechanisms of social attention, neither in response to the human nor in response to the robot agent, whereas prefrontal stimulation enhanced attentional orienting in response to human gaze cues and attenuated attentional orienting in response to robot gaze cues. The findings suggest that mind perception modulates low-level mechanisms of social cognition via prefrontal structures, and that a certain degree of mind perception is essential in order for prefrontal stimulation to affect mechanisms of social attention. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliations:
- Eva Wiese: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Abdulaziz Abubshait: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Bobby Azarian: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
- Eric J Blumberg: Department of Psychology, Social and Cognitive Interactions Lab, George Mason University, Fairfax, VA, USA
9. Proverbio AM, Ornaghi L, Gabaro V. How face blurring affects body language processing of static gestures in women and men. Soc Cogn Affect Neurosci 2019; 13:590-603. [PMID: 29767792] [PMCID: PMC6022678] [DOI: 10.1093/scan/nsy033]
Abstract
The role of facial coding in body language comprehension was investigated by event-related potential recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic and emblematic), which could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Quicker response times were observed in women than in men to congruent stimuli, and a cost for incongruent vs congruent stimuli was found only in men. Face obscuration did not affect accuracy in women as reflected by omission percentages, nor did it reduce their cognitive potentials, thus suggesting a better comprehension of face-deprived pantomimes. The N170 response (modulated by congruity and face presence) peaked later in men than in women. Late positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. Face presence specifically activated the right superior temporal and fusiform gyri, cingulate cortex and insula, according to source reconstruction. These regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information or better at understanding body language from face-deprived social information.
Affiliations:
- Alice Mado Proverbio: Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
- Laura Ornaghi: Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
- Veronica Gabaro: Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
10. Spatiotemporal Phase Synchronization in Adaptive Reconfiguration from Action Observation Network to Mentalizing Network for Understanding Other's Action Intention. Brain Topogr 2017; 31:447-467. [PMID: 29264681] [DOI: 10.1007/s10548-017-0614-7]
Abstract
In action intention understanding, the mirror system is involved in perception-action matching process and the mentalizing system underlies higher-level intention inference. By analyzing the dynamic functional connectivity in α (8-12 Hz) and β (12-30 Hz) frequency bands over a "hand-cup interaction" observation task, this study investigates the topological transition from the action observation network (AON) to the mentalizing network (MZN), and estimates their functional relevance for intention identification from other's different action kinematics. Sequential brain microstates were extracted based on event-related potentials (ERPs), in which significantly differing neuronal responses were found in N170-P200 related to perceptually matching kinematic profiles and P400-700 involved in goal inference. Inter-electrode weighted phase lag index analysis on the ERP microstates revealed a shift of hub centrality salient in α frequency band, from the AON dominated by left-lateral frontal-premotor-temporal and temporal-parietooccipital synchronizations to the MZN consisting of more bilateral frontal-parietal and temporal-parietal synchronizations. As compared with usual actions, intention identification of unintelligible actions induces weaker synchronizations in the AON but dramatically increased connectivity in right frontal-temporal-parietal regions of the MZN, indicating a spatiotemporally complementary effect between the functional network configurations involved in mirror and mentalizing processes. Perceptual processing in observing usual/unintelligible actions decreases/increases requirements for intention inference, which would induce less/greater functional network reorganization on the way to mentalization. From the comparison, our study suggests that the adaptive topological changes from the AON to the MZN indicate implicit causal association between the mirror and mentalizing systems for decoding others' intentionality.
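As a concrete illustration of the "hub centrality" shifts described above, the sketch below (an assumption-based example, not the study's analysis code) computes two common hub measures, node strength and eigenvector centrality, from a symmetric weighted connectivity matrix such as a channel-by-channel WPLI matrix.

```python
import numpy as np

def hub_metrics(W, n_iter=200):
    """Node strength and eigenvector centrality for a symmetric, non-negative
    weighted connectivity matrix W (e.g., channel-by-channel WPLI values)."""
    strength = W.sum(axis=1)                          # weighted degree of each node
    v = np.ones(W.shape[0]) / np.sqrt(W.shape[0])     # start vector for power iteration
    for _ in range(n_iter):                           # converges to the leading eigenvector
        v = W @ v
        v = v / np.linalg.norm(v)
    return strength, v                                # larger values indicate stronger hubs
```

Comparing such metrics between task conditions and time windows is one simple way to quantify the reported shift of hubs from mirror-related to mentalizing-related regions.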
11. Kupferberg A, Iacoboni M, Flanagin V, Huber M, Kasparbauer A, Baumgartner T, Hasler G, Schmidt F, Borst C, Glasauer S. Fronto-parietal coding of goal-directed actions performed by artificial agents. Hum Brain Mapp 2017; 39:1145-1162. [PMID: 29205671] [DOI: 10.1002/hbm.23905]
Abstract
With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates the naturalness and easiness of interaction. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it is unknown whether this activity is modulated by observing grasping (self-related action) and pointing actions (other-related action) performed by an artificial agent depending on the action goal. Therefore, we designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by both agents, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network not only for observing a human but also a robotic agent for both self-related and social actions. The debriefing after the experiment has shown that actions of human-like artificial agents can be perceived as being goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment.
Affiliations:
- Aleksandra Kupferberg: Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Marco Iacoboni: David Geffen School of Medicine at UCLA, Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, Brain Research Institute, Los Angeles, California
- Virginia Flanagin: German Center for Vertigo and Balance Disorders DSGZ, Ludwig-Maximilian University Munich, München, Germany; Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Markus Huber: Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
- Thomas Baumgartner: Department of Social Psychology and Social Neuroscience, University of Bern, Bern, Switzerland
- Gregor Hasler: Division of Molecular Psychiatry, Translational Research Center, University Hospital of Psychiatry, University of Bern, Bern, Switzerland
- Florian Schmidt: Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Christoph Borst: Department of Robotics, DLR, Oberpfaffenhofen, Bavaria, Germany
- Stefan Glasauer: German Center for Vertigo and Balance Disorders DSGZ, Ludwig-Maximilian University Munich, München, Germany; Center for Sensorimotor Research, Department of Neurology, Ludwig-Maximilian University, München, Germany
12. Neural coding of prior expectations in hierarchical intention inference. Sci Rep 2017; 7:1278. [PMID: 28455527] [PMCID: PMC5430911] [DOI: 10.1038/s41598-017-01414-y]
Abstract
The ability to infer other people’s intentions is crucial for successful human social interactions. Such inference relies on an adaptive interplay of sensory evidence and prior expectations. Crucially, this interplay would also depend on the type of intention inferred, i.e., on how abstract the intention is. However, what neural mechanisms adjust the interplay of prior and sensory evidence to the abstractness of the intention remains conjecture. We addressed this question in two separate fMRI experiments, which exploited action scenes depicting different types of intentions (Superordinate vs. Basic; Social vs. Non-social), and manipulated both prior and sensory evidence. We found that participants increasingly relied on priors as sensory evidence became scarcer. Activity in the medial prefrontal cortex (mPFC) reflected this interplay between the two sources of information. Moreover, the more abstract the intention to infer (Superordinate > Basic, Social > Non-Social), the greater the modulation of backward connectivity between the mPFC and the temporo-parietal junction (TPJ), resulting in an increased influence of priors over the intention inference. These results suggest a critical role for the fronto-parietal network in adjusting the relative weight of prior and sensory evidence during hierarchical intention inference.
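The reported shift toward priors when sensory evidence becomes scarce follows the precision-weighting logic of Bayesian cue combination; the toy Gaussian example below is purely illustrative and uses made-up numbers, not values from the study.

```python
def posterior_mean(mu_prior, sigma_prior, mu_sensory, sigma_sensory):
    """Posterior mean for two Gaussian information sources, weighted by precision."""
    w_prior = 1.0 / sigma_prior ** 2
    w_sensory = 1.0 / sigma_sensory ** 2
    return (w_prior * mu_prior + w_sensory * mu_sensory) / (w_prior + w_sensory)

# As the sensory evidence gets noisier, the estimate is pulled toward the prior mean (0.0).
for sigma_sensory in (0.5, 1.0, 2.0, 4.0):
    print(sigma_sensory, round(posterior_mean(0.0, 1.0, 1.0, sigma_sensory), 3))
```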
13. Jaywant A, Shiffrar M, Roy S, Cronin-Golomb A. Impaired perception of biological motion in Parkinson's disease. Neuropsychology 2016; 30:720-30. [PMID: 26949927] [DOI: 10.1037/neu0000276]
Abstract
OBJECTIVE We examined biological motion perception in Parkinson's disease (PD). Biological motion perception is related to one's own motor function and depends on the integrity of brain areas affected in PD, including posterior superior temporal sulcus. If deficits in biological motion perception exist, they may be specific to perceiving natural/fast walking patterns that individuals with PD can no longer perform, and may correlate with disease-related motor dysfunction. METHOD Twenty-six nondemented individuals with PD and 24 control participants viewed videos of point-light walkers and scrambled versions that served as foils, and indicated whether each video depicted a human walking. Point-light walkers varied by gait type (natural, parkinsonian) and speed (0.5, 1.0, 1.5 m/s). Participants also completed control tasks (object motion, coherent motion perception), a contrast sensitivity assessment, and a walking assessment. RESULTS The PD group demonstrated significantly less sensitivity to biological motion than the control group (p < .001, Cohen's d = 1.22), regardless of stimulus gait type or speed, with a less substantial deficit in object motion perception (p = .02, Cohen's d = .68). There was no group difference in coherent motion perception. Although individuals with PD had slower walking speed and shorter stride length than control participants, gait parameters did not correlate with biological motion perception. Contrast sensitivity and coherent motion perception also did not correlate with biological motion perception. CONCLUSION PD leads to a deficit in perceiving biological motion, which is independent of gait dysfunction and low-level vision changes, and may therefore arise from difficulty perceptually integrating form and motion cues in posterior superior temporal sulcus.
Affiliations:
- Abhishek Jaywant: Department of Psychological and Brain Sciences, Boston University
- Maggie Shiffrar: Office of Research & Graduate Studies, California State University Northridge
14. Hofree G, Urgen BA, Winkielman P, Saygin AP. Observation and imitation of actions performed by humans, androids, and robots: an EMG study. Front Hum Neurosci 2015; 9:364. [PMID: 26150782] [PMCID: PMC4473002] [DOI: 10.3389/fnhum.2015.00364]
Abstract
Understanding others' actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by using well-controlled artificial stimuli, such as robots. One broad question this approach can address is what aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right arm muscle activity increased when participants imitated all agents. Increased muscle activation was found also in the stationary arm both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations.
Affiliations:
- Galit Hofree: Department of Psychology, University of California, San Diego, San Diego, CA, USA
- Burcu A. Urgen: Department of Cognitive Science, University of California, San Diego, San Diego, CA, USA
- Piotr Winkielman: Department of Psychology, University of California, San Diego, San Diego, CA, USA; Behavioural Science Group, Warwick Business School, University of Warwick, Coventry, UK; Department of Psychology, University of Social Sciences and Humanities, Warsaw, Poland
- Ayse P. Saygin: Department of Cognitive Science, University of California, San Diego, San Diego, CA, USA
15. Wang Y, Quadflieg S. In our own image? Emotional and neural processing differences when observing human-human vs human-robot interactions. Soc Cogn Affect Neurosci 2015; 10:1515-24. [PMID: 25911418] [PMCID: PMC4631149] [DOI: 10.1093/scan/nsv043]
Abstract
Notwithstanding the significant role that human-robot interactions (HRI) will play in the near future, limited research has explored the neural correlates of feeling eerie in response to social robots. To address this empirical lacuna, the current investigation examined brain activity using functional magnetic resonance imaging while a group of participants (n = 26) viewed a series of human-human interactions (HHI) and HRI. Although brain sites constituting the mentalizing network were found to respond to both types of interactions, systematic neural variation across sites signaled diverging social-cognitive strategies during HHI and HRI processing. Specifically, HHI elicited increased activity in the left temporal-parietal junction indicative of situation-specific mental state attributions, whereas HRI recruited the precuneus and the ventromedial prefrontal cortex (VMPFC) suggestive of script-based social reasoning. Activity in the VMPFC also tracked feelings of eeriness towards HRI in a parametric manner, revealing a potential neural correlate for a phenomenon known as the uncanny valley. By demonstrating how understanding social interactions depends on the kind of agents involved, this study highlights pivotal sub-routes of impression formation and identifies prominent challenges in the use of humanoid robots.
Affiliations:
- Yin Wang: Division of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
- Susanne Quadflieg: Division of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
16. Solomonova E. First-person experience and yoga research: studying neural correlates of an intentional practice. Front Hum Neurosci 2015; 9:85. [PMID: 25762918] [PMCID: PMC4340189] [DOI: 10.3389/fnhum.2015.00085]
17. Mason RA, Just MA. Physics instruction induces changes in neural knowledge representation during successive stages of learning. Neuroimage 2015; 111:36-48. [PMID: 25665967] [DOI: 10.1016/j.neuroimage.2014.12.086]
Abstract
Incremental instruction on the workings of a set of mechanical systems induced a progression of changes in the neural representations of the systems. The neural representations of four mechanical systems were assessed before, during, and after three phases of incremental instruction (which first provided information about the system components, then provided partial causal information, and finally provided full functional information). In 14 participants, the neural representations of four systems (a bathroom scale, a fire extinguisher, an automobile braking system, and a trumpet) were assessed using three recently developed techniques: (1) machine learning and classification of multi-voxel patterns; (2) localization of consistently responding voxels; and (3) representational similarity analysis (RSA). The neural representations of the systems progressed through four stages, or states, involving spatially and temporally distinct multi-voxel patterns: (1) initially, the representation was primarily visual (occipital cortex); (2) it subsequently included a large parietal component; (3) it eventually became cortically diverse (frontal, parietal, temporal, and medial frontal regions); and (4) at the end, it demonstrated a strong frontal cortex weighting (frontal and motor regions). At each stage of knowledge, it was possible for a classifier to identify which one of four mechanical systems a participant was thinking about, based on their brain activation patterns. The progression of representational states was suggestive of progressive stages of learning: (1) encoding information from the display; (2) mental animation, possibly involving imagining the components moving; (3) generating causal hypotheses associated with mental animation; and finally (4) determining how a person (probably oneself) would interact with the system. This interpretation yields an initial, cortically-grounded, theory of learning of physical systems that potentially can be related to cognitive learning theories by suggesting links between cortical representations, stages of learning, and the understanding of simple systems.
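For orientation, the following is a minimal sketch of representational similarity analysis (RSA) as named above; it is an illustrative assumption about the general technique, not the authors' pipeline, and shows how a dissimilarity matrix is built from condition-by-voxel patterns and compared across learning stages.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson correlation
    between condition patterns; `patterns` has shape (n_conditions, n_voxels)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_agreement(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (e.g., from two learning stages)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```

Tracking how such RDMs change between instruction phases is one way to quantify the progression of representational states the abstract describes.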
Affiliations:
- Robert A Mason: Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Marcel Adam Just: Center for Cognitive Brain Imaging, Psychology Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA
18. Anterior insula activity reflects the effects of intentionality on the anticipation of aversive stimulation. J Neurosci 2014; 34:11339-48. [PMID: 25143614] [DOI: 10.1523/jneurosci.1126-14.2014]
Abstract
If someone causes you harm, your affective reaction to that person might be profoundly influenced by your inferences about the intentionality of their actions. In the present study, we aimed to understand how affective responses to a biologically salient aversive outcome administered by others are modulated by the extent to which a given individual is judged to have deliberately or inadvertently delivered the outcome. Using fMRI, we examined how neural responses to anticipation and receipt of an aversive stimulus are modulated by this fundamental social judgment. We found that affective evaluations about an individual whose actions led to either noxious or neutral consequences for the subject did indeed depend on the perceived intentions of that individual. At the neural level, activity in the anterior insula correlated with the interaction between perceived intentionality and anticipated outcome valence, suggesting that this region reflects the influence of mental state attribution on aversive expectations.
19.
20. Raos V, Kilintari M, Savaki HE. Viewing a forelimb induces widespread cortical activations. Neuroimage 2014; 89:122-42. [DOI: 10.1016/j.neuroimage.2013.12.010]
21. Koster-Hale J, Saxe R. Theory of mind: a neural prediction problem. Neuron 2013.
Abstract
Predictive coding posits that neural systems make forward-looking predictions about incoming information. Neural signals contain information not about the currently perceived stimulus, but about the difference between the observed and the predicted stimulus. We propose to extend the predictive coding framework from high-level sensory processing to the more abstract domain of theory of mind; that is, to inferences about others' goals, thoughts, and personalities. We review evidence that, across brain regions, neural responses to depictions of human behavior, from biological motion to trait descriptions, exhibit a key signature of predictive coding: reduced activity to predictable stimuli. We discuss how future experiments could distinguish predictive coding from alternative explanations of this response profile. This framework may provide an important new window on the neural computations underlying theory of mind.
Affiliations:
- Jorie Koster-Hale: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
22. Van Overwalle F, Baetens K, Mariën P, Vandekerckhove M. Social cognition and the cerebellum: a meta-analysis of over 350 fMRI studies. Neuroimage 2013; 86:554-72. [PMID: 24076206] [DOI: 10.1016/j.neuroimage.2013.09.033]
Abstract
This meta-analysis explores the role of the cerebellum in social cognition. Recent meta-analyses of neuroimaging studies since 2008 demonstrate that the cerebellum is only marginally involved in social cognition and emotionality, with a few meta-analyses pointing to an involvement of at most 54% of the individual studies. In this study, novel meta-analyses of over 350 fMRI studies, dividing up the domain of social cognition in homogeneous subdomains, confirmed this low involvement of the cerebellum in conditions that trigger the mirror network (e.g., when familiar movements of body parts are observed) and the mentalizing network (when no moving body parts or unfamiliar movements are present). There is, however, one set of mentalizing conditions that strongly involve the cerebellum in 50-100% of the individual studies. In particular, when the level of abstraction is high, such as when behaviors are described in terms of traits or permanent characteristics, in terms of groups rather than individuals, in terms of the past (episodic autobiographic memory) or the future rather than the present, or in terms of hypothetical events that may happen. An activation likelihood estimation (ALE) meta-analysis conducted in this study reveals that the cerebellum is critically implicated in social cognition and that the areas of the cerebellum which are consistently involved in social cognitive processes show extensive overlap with the areas involved in sensorimotor (during mirror and self-judgments tasks) as well as in executive functioning (across all tasks). We discuss the role of the cerebellum in social cognition in general and in higher abstraction mentalizing in particular. We also point out a number of methodological limitations of some available studies on the social brain that hamper the detection of cerebellar activity.
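To make the activation likelihood estimation (ALE) idea concrete, here is a deliberately simplified sketch (fixed kernel width, each focus treated as its own experiment, no kernel normalisation), not the implementation used in the study, showing how modelled-activation maps are combined as a voxel-wise probabilistic union.

```python
import numpy as np

def ale_map(foci_mm, grid_mm, fwhm=12.0):
    """Simplified ALE-style map: each focus contributes a Gaussian modelled-activation
    value at every grid point, and contributions are combined as 1 - prod(1 - MA)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # convert FWHM to standard deviation
    grid = np.asarray(grid_mm, dtype=float)              # (n_voxels, 3) coordinates in mm
    keep = np.ones(len(grid))
    for focus in np.asarray(foci_mm, dtype=float):
        d2 = np.sum((grid - focus) ** 2, axis=1)
        ma = np.exp(-d2 / (2.0 * sigma ** 2))            # unnormalised modelled activation
        keep *= 1.0 - ma
    return 1.0 - keep
```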
Affiliations:
- Frank Van Overwalle: Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
- Kris Baetens: Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
- Peter Mariën: Faculty of Arts, Department of Clinical and Experimental Neurolinguistics, CLIN, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium; Department of Neurology and Memory Clinic, ZNA Middelheim Hospital, Lindendreef 1, B-2020 Antwerp, Belgium
- Marie Vandekerckhove: Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
23. Juan E, Frum C, Bianchi-Demicheli F, Wang YW, Lewis JW, Cacioppo S. Beyond human intentions and emotions. Front Hum Neurosci 2013; 7:99. [PMID: 23543838] [PMCID: PMC3608908] [DOI: 10.3389/fnhum.2013.00099]
Abstract
Although significant advances have been made in our understanding of the neural basis of action observation and intention understanding in the last few decades by studies demonstrating the involvement of a specific brain network (action observation network; AON), these have been largely based on experimental studies in which people have been considered as strictly isolated entities. However, we, as a social species, spend much more of our time performing actions while interacting with others. Research shows that a person's position along the continuum of perceived social isolation/bonding to others is associated with a variety of physical and mental health effects. Thus, there is a crucial need to better understand the neural basis of intention understanding performed in interpersonal and emotional contexts. To address this issue, we performed a meta-analysis of functional magnetic resonance imaging (fMRI) studies over the past decade that examined brain and cortical network processing associated with understanding the intention of others' actions vs. those associated with passionate love for others. Both overlapping and distinct cortical and subcortical regions were identified for intention and love, respectively. These findings provide scientists and clinicians with a set of brain regions that can be targeted for future neuroscientific studies on intention understanding, and help develop neurocognitive models of pair-bonding.
Affiliations:
- Elsa Juan: Psychology Department, University of Geneva, Geneva, Switzerland
24. Tylén K, Allen M, Hunter BK, Roepstorff A. Interaction vs. observation: distinctive modes of social cognition in human brain and behavior? A combined fMRI and eye-tracking study. Front Hum Neurosci 2012; 6:331. [PMID: 23267322] [PMCID: PMC3525956] [DOI: 10.3389/fnhum.2012.00331]
Abstract
Human cognition has usually been approached on the level of individual minds and brains, but social interaction is a challenging case. Is it best thought of as a self-contained individual cognitive process aiming at an "understanding of the other," or should it rather be approached as a collective, inter-personal process where individual cognitive components interact on a moment-to-moment basis to form coupled dynamics? In a combined fMRI and eye-tracking study we directly contrasted these models of social cognition. We found that the perception of situations affording social contingent responsiveness (e.g., someone offering or showing you an object) elicited activations in regions of the right posterior temporal sulcus and yielded greater pupil dilation corresponding to a model of coupled dynamics (joint action). In contrast, the social-cognitive perception of someone "privately" manipulating an object elicited activation in medial prefrontal cortex, the right inferior frontal gyrus and right inferior parietal lobule, regions normally associated with Theory of Mind and with the mirror neuron system. Our findings support a distinction in social cognition between social observation and social interaction, and demonstrate that simple ostensive cues may shift participants' experience, behavior, and brain activity between these modes. The identification of a distinct, interactive mode has implications for research on social cognition, both in everyday life and in clinical conditions.
Affiliations:
- Kristian Tylén: The Interacting Minds Group, Center for Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Department for Aesthetics and Communication, Center for Semiotics, Aarhus University, Aarhus, Denmark
25. Carter EJ, Williams DL, Minshew NJ, Lehman JF. Is he being bad? Social and language brain networks during social judgment in children with autism. PLoS One 2012; 7:e47241. [PMID: 23082151] [PMCID: PMC3474836] [DOI: 10.1371/journal.pone.0047241]
Abstract
Individuals with autism often violate social rules and have lower accuracy in identifying and explaining inappropriate social behavior. Twelve children with autism (AD) and thirteen children with typical development (TD) participated in this fMRI study of the neurofunctional basis of social judgment. Participants indicated in which of two pictures a boy was being bad (Social condition) or which of two pictures was outdoors (Physical condition). In the within-group Social-Physical comparison, TD children used components of mentalizing and language networks [bilateral inferior frontal gyrus (IFG), bilateral medial prefrontal cortex (mPFC), and bilateral posterior superior temporal sulcus (pSTS)], whereas AD children used a network that was primarily right IFG and bilateral pSTS, suggesting reduced use of social and language networks during this social judgment task. A direct group comparison on the Social-Physical contrast showed that the TD group had greater mPFC, bilateral IFG, and left superior temporal pole activity than the AD group. No regions were more active in the AD group than in the group with TD in this comparison. Both groups successfully performed the task, which required minimal language. The groups also performed similarly on eyetracking measures, indicating that the activation results probably reflect the use of a more basic strategy by the autism group rather than performance disparities. Even though language was unnecessary, the children with TD recruited language areas during the social task, suggesting automatic encoding of their knowledge into language; however, this was not the case for the children with autism. These findings support behavioral research indicating that, whereas children with autism may recognize socially inappropriate behavior, they have difficulty using spoken language to explain why it is inappropriate. The fMRI results indicate that AD children may not automatically use language to encode their social understanding, making expression and generalization of this knowledge more difficult.
Affiliations:
- Elizabeth J Carter: Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA
26. Hernik M, Southgate V. Nine-months-old infants do not need to know what the agent prefers in order to reason about its goals: on the role of preference and persistence in infants' goal-attribution. Dev Sci 2012; 15:714-22. [PMID: 22925518] [PMCID: PMC3593001] [DOI: 10.1111/j.1467-7687.2012.01151.x]
Abstract
Human infants readily interpret others' actions as goal-directed and their understanding of previous goals shapes their expectations about an agent's future goal-directed behavior in a changed situation. According to a recent proposal (Luo & Baillargeon, 2005), infants' goal-attributions are not sufficient to support such expectations if the situational change involves broadening the set of choice-options available to the agent, and the agent's preferences among this broadened set are not known. The present study falsifies this claim by showing that 9-month-olds expect the agent to continue acting towards the previous goal even if additional choice-options become available for which there is no preference-related evidence. We conclude that infants do not need to know about the agent's preferences in order to form expectations about its goal-directed actions. Implications for the role of action persistency and action selectivity are discussed.
Affiliations:
- Mikolaj Hernik: Research Department of Educational, Clinical and Health Psychology, University College London, UK
27. Ma N, Vandekerckhove M, Van Hoeck N, Van Overwalle F. Distinct recruitment of temporo-parietal junction and medial prefrontal cortex in behavior understanding and trait identification. Soc Neurosci 2012; 7:591-605. [PMID: 22568489] [DOI: 10.1080/17470919.2012.686925]
Abstract
It has been suggested that the temporo-parietal junction (TPJ) is involved in inferring immediate goals and intentions from behaviors, whereas the medial prefrontal cortex (mPFC) integrates social information, such as traits, at a more abstract level. To explore the differential role of the TPJ and mPFC, participants read several verbal descriptions about an agent. Embedded in a factorial design, in one-half of the trials (behavior condition), the agent was engaged in a simple goal-directed behavior, whereas in the other half this description was absent. In another half of the trials (trait condition), the participants had to answer a question about a trait of the agent, whereas in the other half the question was about the agent's physical appearance. The results revealed that the dorsal mPFC was recruited when participants inferred the agent's trait, irrespective of a behavioral description. In contrast, the TPJ, posterior superior temporal sulcus (pSTS), anterior intraparietal sulcus, and premotor cortex were activated when goal-directed behavioral information was presented, irrespective of a trait question. These findings confirm that in a social context, the TPJ (and pSTS) is activated for understanding goal-directed behaviors, whereas the mPFC is involved in processing traits.
Affiliations:
- Ning Ma: Department of Psychology, Vrije Universiteit Brussel, Brussels, Belgium
28. Gowen E, Poliakoff E. How does visuomotor priming differ for biological and non-biological stimuli? A review of the evidence. Psychological Research 2012; 76:407-20. [DOI: 10.1007/s00426-011-0389-5]
29.
Abstract
Professional military training makes tough demands on soldiers' perceptual and motor skills, as well as on their physical fitness and cognitive capabilities, in the course of preparation for stressful operational environments. In this pilot study we attempted to identify differences in the pattern of neural responses between extensively trained, professional mission-ready soldiers and novice soldiers during audiovisual simulation of mission conditions. We performed fMRI scanning on a few volunteers during presentation of semantically relevant video-clips of real combat from Afghanistan to evaluate the influence of military training on the mental responses of soldiers. We showed that for professional mission-ready soldiers a week before their deployment to Afghanistan, video-clips with deadly ambush combat induce greater overall brain activation compared to novice soldiers. Mission-ready soldiers showed greater activation in premotor/prefrontal cortex, posterior parietal cortex, and posterior temporal cortex. These results imply that the fMRI technique could be used as a challenging step forward in the multidimensional evaluation of the influence of military training on the neural responses and operational capabilities of professional soldiers. This is extremely important not only for potential failure prevention and the mere success of the mission, but even more for the survival and the well-being of the servicemen and servicewomen.
30. McKay LS, Simmons DR, McAleer P, Marjoram D, Piggot J, Pollick FE. Do distinct atypical cortical networks process biological motion information in adults with Autism Spectrum Disorders? Neuroimage 2011; 59:1524-33. [PMID: 21888982] [DOI: 10.1016/j.neuroimage.2011.08.033]
Abstract
Whether people with Autism Spectrum Disorders (ASDs) have a specific deficit when processing biological motion has been a topic of much debate. We used psychophysical methods to determine individual behavioural thresholds in a point-light direction discrimination paradigm for small but carefully matched groups of adults (N=10 per group) with and without ASDs. These thresholds were used to derive individual stimulus levels in an identical fMRI task, with the purpose of equalising task performance across all participants whilst inside the scanner. The results of this investigation show that despite comparable behavioural performance both inside and outside the scanner, the group with ASDs shows a different pattern of BOLD activation from the TD group in response to the same stimulus levels. Furthermore, connectivity analysis suggests that the main differences between the groups are that the TD group utilise a unitary network with information passing from temporal to parietal regions, whilst the ASD group utilise two distinct networks; one utilising motion sensitive areas and another utilising form selective areas. Furthermore, a temporal-parietal link that is present in the TD group is missing in the ASD group. We tentatively propose that these differences may occur due to early dysfunctional connectivity in the brains of people with ASDs, which to some extent is compensated for by rewiring in high functioning adults.
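The individual behavioural thresholds mentioned above are commonly obtained by fitting a psychometric function to discrimination accuracy; the sketch below uses hypothetical data and an assumed cumulative-Gaussian parameterisation rather than the study's actual procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: proportion correct at each stimulus level (e.g., motion coherence).
levels = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.99])

def psychometric(x, mu, sigma):
    # Cumulative Gaussian scaled between chance (0.5) and perfect performance (1.0).
    return 0.5 + 0.5 * norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, levels, p_correct, p0=[0.2, 0.1])
threshold_75 = mu   # with this parameterisation, 75% correct is reached exactly at x = mu
```

Fixing each participant's scanner stimuli at levels derived from such a fit is what equalises task difficulty across groups.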
Affiliations:
- Lawrie S McKay: School of Psychology, University of Glasgow, Glasgow G12 8QB, UK
31. Proverbio AM, Riva F, Paganelli L, Cappa SF, Canessa N, Perani D, Zani A. Neural coding of cooperative vs. affective human interactions: 150 ms to code the action's purpose. PLoS One 2011; 6:e22026. [PMID: 21760948] [PMCID: PMC3131384] [DOI: 10.1371/journal.pone.0022026]
Abstract
The timing and neural processing of the understanding of social interactions were investigated by presenting scenes in which 2 people performed cooperative or affective actions. While the role of the human mirror neuron system (MNS) in understanding actions and intentions is widely accepted, little is known about the time course within which these aspects of visual information are automatically extracted. Event-Related Potentials were recorded in 35 university students perceiving 260 pictures of cooperative (e.g., 2 people dragging a box) or affective (e.g., 2 people smiling and holding hands) interactions. The action's goal was automatically discriminated at about 150–170 ms, as reflected by the occipito/temporal N170 response. The swLORETA inverse solution revealed the strongest sources in the right posterior cingulate cortex (CC) for affective actions and in the right pSTS for cooperative actions. A right hemispheric asymmetry was found that involved the fusiform gyrus (BA37), the posterior CC, and the medial frontal gyrus (BA10/11) for the processing of affective interactions, particularly in the 155–175 ms time window. In a later time window (200–250 ms) the processing of cooperative interactions activated the left post-central gyrus (BA3), the left parahippocampal gyrus, the left superior frontal gyrus (BA10), as well as the right premotor cortex (BA6). Women showed a greater response discriminative of the action's goal compared to men at the P300 and anterior negativity level (220–500 ms). These findings might be related to a greater responsiveness of the female vs. male MNS. In addition, the discriminative effect was bilateral in women and was smaller and left-sided in men. Evidence was provided that perceptually similar social interactions are discriminated on the basis of the agents' intentions quite early in neural processing, differentially activating regions devoted to face/body/action coding, the limbic system and the MNS.
32. Grafton ST, Tipper CM. Decoding intention: a neuroergonomic perspective. Neuroimage 2011; 59:14-24. [PMID: 21651985] [DOI: 10.1016/j.neuroimage.2011.05.064]
Abstract
Decoding the intentions of other people based on non-linguistic cues such as their body movement is a major requirement of many jobs. Whether it is maintaining security at an airport or negotiating with locals in a foreign country, there is a need to maximize the effectiveness of training or real-time performance in this decoding process. This review considers the potential utility of neuroergonomic solutions, and in particular, of electroencephalographic (EEG) methods for augmenting action understanding. Focus is given to body movements and hand-object interactions, where there is a rapid growth in relevant science. The interpretation of EEG-based signals is reinforced by a consideration of functional magnetic resonance imaging experiments demonstrating underlying brain mechanisms that support goal oriented action. While no EEG method is currently implemented as a practical application for enhancing the understanding of unspoken intentions, there are a number of promising approaches that merit further development.
Affiliations:
- Scott T Grafton: Department of Psychological and Brain Science, University of California, Santa Barbara, CA 93106-9660, USA