1. Zhang S, Wang L, Jiang Y. Visual mental imagery of nonpredictive central social cues triggers automatic attentional orienting. Cognition 2024; 254:105968. PMID: 39362053; DOI: 10.1016/j.cognition.2024.105968.
Abstract
Previous research has demonstrated that social cues (e.g., eye gaze, the walking direction of biological motion) can automatically guide people's focus of attention, a well-known phenomenon called social attention. The current research shows that voluntarily generated social cues via visual mental imagery, without being physically presented, can produce robust attentional orienting similar to the classic social attentional orienting effect. Combining a visual imagery task with a dot-probe task, we found that imagining a non-predictive gaze cue could orient attention towards the gazed-at hemifield. This attentional effect persisted even when the imagery gaze cue was counter-predictive of the target hemifield, and it generalized to biological motion cues. Moreover, the effect could not be simply attributed to low-level motion signals embedded in gaze cues. More importantly, an eye-tracking experiment that carefully monitored potential eye movements demonstrated the imagery-induced attentional orienting effect for social cues but not for non-social cues (i.e., arrows), suggesting that the effect is specific to visual imagery of social cues. These findings accentuate the demarcation between social and non-social attentional orienting, and may take a preliminary step towards conceptualizing voluntary visual imagery as a form of internally directed attention.
Affiliation(s)
- Shujia Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
2. Elkington S, Brown M, Wright K, Regan J, Pattarnaraskouwski K, Steel C, Hales S, Holmes E, Morant N. Experiences of imagery-based treatment for anxiety in bipolar disorder: A qualitative study embedded within the image based emotion regulation feasibility randomised controlled trial. Psychol Psychother 2024; 97:531-548. PMID: 38940581; DOI: 10.1111/papt.12538.
Abstract
OBJECTIVES Intrusive mental imagery is associated with anxiety in bipolar disorder (BD) and presents a novel treatment target. Imagery-based treatments show promise in targeting anxiety and improving mood instability. This qualitative study explored experiences of receiving up to 12 sessions of a brief structured psychological intervention, Image-Based Emotion Regulation (IBER), which targets maladaptive mental imagery in the context of BD with the aim of modifying the emotional impact of these images. DESIGN A qualitative study embedded within the IBER feasibility randomised controlled trial. METHODS Semi-structured interviews were conducted with 12 participants in the treatment arm of the trial who received IBER plus treatment as usual. Data were analysed using thematic analysis. RESULTS Despite some initial scepticism about imagery-focused treatment, all participants gave broadly positive accounts of their treatment experiences. Some described high levels of engagement with imagery modification techniques, beneficial use of the techniques post-treatment, and improvements in anxiety management and sense of agency. Three sub-groups were identified: those who reported a powerful, transformative impact of treatment; those who embedded some new techniques into their daily lives; and those who felt they had techniques to use when needed. No participants reported overall negative experiences of the IBER treatment. CONCLUSIONS The findings highlight the value for treatment recipients of modifying the underlying meanings associated with maladaptive imagery, and of personalised skills development for managing anxiety within bipolar disorders. The findings can inform treatment refinements and further trial-based evaluations.
Affiliation(s)
- Michael Brown
- Pembroke College, University of Cambridge, Cambridge, UK
- Craig Steel
- Oxford Health NHS Foundation Trust and University of Oxford, Oxford, UK
- Susie Hales
- Oxford University Hospitals NHS Trust, Oxford, UK
- Emily Holmes
- Uppsala University and Karolinska Institutet, Stockholm, Sweden
3. Siena MJ, Simons JS. Metacognitive Awareness and the Subjective Experience of Remembering in Aphantasia. J Cogn Neurosci 2024; 36:1578-1598. PMID: 38319889; DOI: 10.1162/jocn_a_02120.
Abstract
Individuals with aphantasia, a nonclinical condition typically characterized by mental imagery deficits, often report reduced episodic memory. However, findings have hitherto rested largely on subjective self-reports, with few studies experimentally investigating both objective and subjective aspects of episodic memory in aphantasia. In this study, we tested both aspects of remembering in aphantasic individuals using a custom 3-D object and spatial memory task that manipulated visuospatial perspective, which is considered to be a key factor determining the subjective experience of remembering. Objective and subjective measures of memory performance were taken for both object and spatial memory features under different perspective conditions. Surprisingly, aphantasic participants were found to be unimpaired on all objective memory measures, including those for object memory features, despite reporting weaker overall mental imagery experience and lower subjective vividness ratings on the memory task. These results add to newly emerging evidence that aphantasia is a heterogeneous condition, in which some aphantasic individuals may lack metacognitive awareness of mental imagery rather than mental imagery itself. In addition, we found that both participant groups remembered object memory features with greater precision when these were encoded and retrieved in the first person versus the third person, suggesting that a first-person perspective might facilitate subjective memory reliving by enhancing the representational quality of scene contents.
4. Dawes AJ, Keogh R, Pearson J. Multisensory subtypes of aphantasia: Mental imagery as supramodal perception in reverse. Neurosci Res 2024; 201:50-59. PMID: 38029861; DOI: 10.1016/j.neures.2023.11.009.
Abstract
Cognitive neuroscience research on mental imagery has largely focused on the visual imagery modality in unimodal task contexts. Recent studies have uncovered striking individual differences in visual imagery capacity, with some individuals reporting a subjective absence of conscious visual imagery ability altogether ("aphantasia"). However, naturalistic mental imagery is often multi-sensory, and preliminary findings suggest that many individuals with aphantasia also report a subjective lack of mental imagery in other sensory domains (such as auditory or olfactory imagery). In this paper, we perform a series of cluster analyses on the multi-sensory imagery questionnaire scores of two large groups of aphantasic subjects, defining latent sub-groups in this sample population. We demonstrate that aphantasia is a heterogeneous phenomenon characterised by two dominant sub-groups: individuals with visual aphantasia (those who report selective visual imagery absence) and individuals with multi-sensory aphantasia (those who report an inability to generate conscious mental imagery in any sensory modality). We replicate our findings in a second large sample and show that rarer aphantasia sub-types also exist, such as individuals with selectively preserved mental imagery in only one sensory modality (e.g., intact auditory imagery). We outline the implications of our findings for network theories of mental imagery, discussing how different aphantasia aetiologies with distinct self-report patterns might reveal alterations to various levels of the sensory processing hierarchy implicated in mental imagery.
Affiliation(s)
- Rebecca Keogh
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Joel Pearson
- School of Psychology, University of New South Wales, Sydney, Australia
5. Pace T, Koenig-Robert R, Pearson J. Different Mechanisms for Supporting Mental Imagery and Perceptual Representations: Modulation Versus Excitation. Psychol Sci 2023; 34:1229-1243. PMID: 37782827; DOI: 10.1177/09567976231198435.
Abstract
Recent research suggests that imagery is functionally equivalent to a weak form of visual perception. Here we report evidence across five independent experiments on adults that perception and imagery are supported by fundamentally different mechanisms: whereas perceptual representations are largely formed via increases in excitatory activity, imagery representations are largely supported by modulating nonimagined content. We developed two behavioral techniques that allowed us to first put the visual system into a state of adaptation and then probe the additivity of perception and imagery. If imagery drives excitatory visual activity similar to perception, pairing imagery with perceptual adapters should increase the state of adaptation. Whereas pairing weak perception with adapters increased measures of adaptation, pairing imagery with adapters reversed their effects. Further experiments demonstrated that these nonadditive effects were due to imagery weakening representations of nonimagined content. Together, these data provide empirical evidence that the brain uses categorically different mechanisms to represent imagery and perception.
Affiliation(s)
- Thomas Pace
- School of Psychology, University of New South Wales
- Joel Pearson
- School of Psychology, University of New South Wales
6. Hu Y, Yu Q. Spatiotemporal dynamics of self-generated imagery reveal a reverse cortical hierarchy from cue-induced imagery. Cell Rep 2023; 42:113242. PMID: 37831604; DOI: 10.1016/j.celrep.2023.113242.
Abstract
Visual imagery allows for the construction of rich internal experience in our mental world. However, how imagery experience arises volitionally, as opposed to being cue-driven, has remained poorly understood. Here, using electroencephalography and functional magnetic resonance imaging, we systematically investigate the spatiotemporal dynamics of self-generated imagery by having participants volitionally imagine one of the orientations from a learned pool. We contrast self-generated imagery with cue-induced imagery, in which participants imagined line orientations based on previously acquired associative cues. Our results reveal overlapping neural signatures of cue-induced and self-generated imagery. Yet these neural signatures display substantially different sensitivities to the two types of imagery: self-generated imagery is supported by an enhanced involvement of the anterior cortex in representing imagery contents, whereas cue-induced imagery is supported by enhanced imagery representations in the posterior visual cortex. These results jointly support a reverse cortical hierarchy for generating and maintaining imagery contents in self-generated versus externally cued imagery.
Affiliation(s)
- Yiheng Hu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qing Yu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China.
7. Gjorgieva E, Morales-Torres R, Cabeza R, Woldorff MG. Neural retrieval processes occur more rapidly for visual mental images that were previously encoded with high-vividness. Cereb Cortex 2023; 33:10234-10244. PMID: 37526263; DOI: 10.1093/cercor/bhad278.
Abstract
Visual mental imagery refers to our ability to experience visual images in the absence of sensory stimulation. Studies have shown that visual mental imagery can improve episodic memory, but we have a limited understanding of the neural mechanisms underlying this improvement. Using electroencephalography, we examined the neural processes associated with the retrieval of previously generated visual mental images, focusing on how vividness at generation can modulate retrieval processes. Participants viewed word stimuli referring to common objects, formed a visual mental image of each word, and rated the vividness of that image. This was followed by a surprise old/new recognition task. We compared retrieval performance for items rated as high versus low in vividness at encoding. High-vividness items were retrieved with faster reaction times and higher confidence ratings in the memory judgment. While controlling for confidence, neural measures indicated that high-vividness items produced an earlier decrease in alpha-band activity at retrieval compared with low-vividness items, suggesting earlier memory reinstatement. Even when low-vividness items were remembered with high confidence, they were not retrieved as quickly as high-vividness items. These results indicate that mental images encoded with high vividness are subsequently retrieved more rapidly than those encoded with low vividness.
Affiliation(s)
- Eva Gjorgieva
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Ricardo Morales-Torres
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Roberto Cabeza
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Marty G Woldorff
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Department of Psychiatry, Duke University, Durham, NC 27708, United States
8. Song Y, Shin W, Kim P, Jeong J. Neural representations for multi-context visuomotor adaptation and the impact of common representation on multi-task performance: a multivariate decoding approach. Front Hum Neurosci 2023; 17:1221944. PMID: 37822708; PMCID: PMC10562562; DOI: 10.3389/fnhum.2023.1221944.
Abstract
The human brain's remarkable motor adaptability stems from the formation of context representations and the use of a common context representation (e.g., an invariant task structure across task contexts) derived from structural learning. However, direct evaluation of context representations and structural learning in sensorimotor tasks remains limited. This study aimed to rigorously distinguish neural representations at the visual, movement, and context levels crucial for multi-context visuomotor adaptation, and to investigate the association between representation commonality across task contexts and adaptation performance, using multivariate decoding analysis of fMRI data. We focused on three distinct task contexts, two of which share a rotation structure (visuomotor rotation contexts with -90° and +90° rotations, in which the mouse cursor's movement was rotated 90 degrees counterclockwise or clockwise relative to the hand-movement direction) and one of which does not (a mirror-reversal context in which the horizontal movement of the computer mouse was inverted). We found that visual representations (i.e., visual direction) were decoded in the occipital area, while movement representations (i.e., hand-movement direction) were decoded across various visuomotor-related regions, consistent with prior research and the widely recognized roles of those areas. Task-context representations (i.e., -90° rotation, +90° rotation, or mirror-reversal) were also distinguishable in various brain regions. Notably, these regions largely overlapped with those encoding visual and movement representations, suggesting a potentially intricate dependency of the encoding of visual and movement directions on context information. Moreover, we discovered that higher task performance is associated with task-context representation commonality, as evidenced by negative correlations between task performance and task-context decoding accuracy in various brain regions, potentially supporting structural learning. Importantly, this association was observed despite the limited similarity between tasks (e.g., rotation and mirror-reversal contexts), suggesting an efficient mechanism in the brain that extracts commonalities from different task contexts at multiple structural levels, from high-level abstractions to lower-level details. In summary, while illuminating the intricate interplay between visuomotor processing and context information, our study highlights the efficiency of these learning mechanisms, paving the way for future exploration of the brain's versatile motor ability.
Affiliation(s)
- Youngjo Song
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Wooree Shin
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Program of Brain and Cognitive Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Pyeongsoo Kim
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Jaeseung Jeong
- Department of Brain and Cognitive Sciences, College of Life Science and Bioengineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
9. Koenig-Robert R, El Omar H, Pearson J. Implicit bias training can remove bias from subliminal stimuli, restoring choice divergence: A proof-of-concept study. PLoS One 2023; 18:e0289313. PMID: 37506067; PMCID: PMC10381032; DOI: 10.1371/journal.pone.0289313.
Abstract
Subliminal information can influence our conscious life: subliminal stimuli can influence cognitive tasks, and endogenous subliminal neural information can sway decisions before volition. Are decisions inextricably biased towards subliminal information, or can they diverge from subliminal biases via training? We report that implicit bias training can remove biases induced by subliminal sensory primes. We first show that subliminal stimuli biased an imagery-content decision task: participants (n = 17) had to choose one of two different patterns to subsequently imagine, and subliminal primes significantly biased decisions towards imagining the primed option. We then trained participants (n = 7) to choose the non-primed option via post-choice feedback. This training was successful despite participants being unaware of the purpose or structure of the reward schedule, and its effect persisted up to one week later. Our proof-of-concept study indicates that decisions need not always be biased towards non-conscious information but can instead diverge from subliminal primes through training.
Affiliation(s)
- Roger Koenig-Robert
- Future Minds Lab, School of Psychology, University of New South Wales, Sydney, Australia
- Hashim El Omar
- Future Minds Lab, School of Psychology, University of New South Wales, Sydney, Australia
- Joel Pearson
- Future Minds Lab, School of Psychology, University of New South Wales, Sydney, Australia
10. Olson JA, Cyr M, Artenie DZ, Strandberg T, Hall L, Tompkins ML, Raz A, Johansson P. Emulating future neurotechnology using magic. Conscious Cogn 2023; 107:103450. PMID: 36566673; DOI: 10.1016/j.concog.2022.103450.
Abstract
Recent developments in neuroscience and artificial intelligence have allowed machines to decode mental processes with growing accuracy. Neuroethicists have speculated that perfecting these technologies may result in reactions ranging from an invasion of privacy to an increase in self-understanding. Yet, evaluating these predictions is difficult given that people are poor at forecasting their reactions. To address this, we developed a paradigm using elements of performance magic to emulate future neurotechnologies. We led 59 participants to believe that a (sham) neurotechnological machine could infer their preferences, detect their errors, and reveal their deep-seated attitudes. The machine gave participants randomly assigned positive or negative feedback about their brain's supposed attitudes towards charity. Around 80% of participants in both groups provided rationalisations for this feedback, which shifted their attitudes in the manipulated direction but did not influence donation behaviour. Our paradigm reveals how people may respond to prospective neurotechnologies, which may inform neuroethical frameworks.
Affiliation(s)
- Jay A Olson
- Department of Psychology, McGill University, 2001 McGill College Ave., Montreal, QC H3A 1G1, Canada.
- Mariève Cyr
- Faculty of Medicine and Health Sciences, McGill University, 3605 De la Montagne St., Montreal, QC H3G 2M1, Canada
- Despina Z Artenie
- Department of Psychology, Université du Québec à Montréal, 100 Sherbrooke St. W., Montreal, QC H2X 3P2, Canada
- Thomas Strandberg
- Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Lars Hall
- Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Matthew L Tompkins
- Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Amir Raz
- Institute for Interdisciplinary Behavioral and Brain Sciences, Chapman University, 9401 Jeronimo Road, Irvine, CA 92618, USA
- Petter Johansson
- Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden.
11. Hudson M, Johnson MI. Definition and attributes of the emotional memory images underlying psychophysiological dis-ease. Front Psychol 2022; 13:947952. PMID: 36452371; PMCID: PMC9702567; DOI: 10.3389/fpsyg.2022.947952.
Abstract
BACKGROUND Previously, we proposed a "Split-second Unlearning" model to explain how emotional memories could prevent clients from adapting to the stressors of daily living, thus forming a barrier to learning, health, and well-being. We suggested that these emotional memories are mental images stored in the mind as "emotional memory images" (EMIs). OBJECTIVE To elaborate on the nature of these emotional memory images within the context of split-second learning and unlearning and the broader field of psychoanalysis, and to initiate a conversation among scholars concerning the path that future healthcare research, practice, and policy should take. METHOD A narrative review of the attributes of EMIs utilizing relevant and contentious research and/or scholarly publications on the topic, facilitated by observations and approaches used in clinical practice. RESULTS We propose a refined definition of EMIs as trauma-induced, non-conscious, contiguously formed multimodal mental imagery that triggers an amnesic, anachronistic stress response within a split second. The systematic appraisal of each attribute of an EMI supports the idea that the EMI is distinct from similar entities described in the literature, enabling further refinement of our Split-second Unlearning model of psychophysiological dis-ease. CONCLUSION Exploration of the concept of EMIs provides further insight into the mechanisms associated with psychophysiological dis-ease and opportunities for therapeutic approaches.
Affiliation(s)
- Mark I. Johnson
- Centre for Pain Research, Leeds Beckett University, Leeds, United Kingdom
12. Jääskeläinen IP, Glerean E, Klucharev V, Shestakova A, Ahveninen J. Do sparse brain activity patterns underlie human cognition? Neuroimage 2022; 263:119633. PMID: 36115589; PMCID: PMC10921366; DOI: 10.1016/j.neuroimage.2022.119633.
Abstract
Accumulating multivariate pattern analysis (MVPA) results from fMRI studies suggest that information is represented in fingerprint patterns of activations and deactivations during perception, emotion, and cognition. We postulate that these fingerprint patterns might reflect the neuronal-population-level sparse code documented in two-photon calcium imaging studies in animal models, i.e., information represented in specific and reproducible ensembles of a few percent of active neurons amidst widespread inhibition in neural populations. We suggest that such representations constitute a fundamental organizational principle, interacting across multiple levels of the brain hierarchy and thus giving rise to perception, emotions, and cognition.
Affiliation(s)
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Enrico Glerean
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Vasily Klucharev
- International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Anna Shestakova
- International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, HSE University, Moscow, Russian Federation
- Jyrki Ahveninen
- Massachusetts General Hospital, Harvard Medical School, Massachusetts Institute of Technology Athinoula A Martinos Center for Biomedical Imaging, Charlestown, MA, United States
13. Experimental evidence for involvement of monocular channels in mental rotation. Psychon Bull Rev 2022; 30:575-584. PMID: 36279047; DOI: 10.3758/s13423-022-02195-w.
Abstract
According to the prevailing view, the cognitive processes of mental rotation are carried out by visuospatial perceptual circuits located primarily in high cortical areas. Here, we examined the functional involvement of (mostly subcortical) monocular channels in mental rotation tasks. Images of two rotated objects (0°, 50°, 100°, or 150°; identical or mirrored) were presented either to one eye (monocular) or segregated between the eyes (interocular). The results indicated a causal role for low-level monocular visual channels in mental rotation: response times for identical ("same") objects at high angular disparities (100°, 150°) were shorter when both objects were presented to a single eye than when each object was presented to a different eye. We suggest that mental rotation processes rely on cortico-subcortical loops that support visuospatial perception. More generally, the findings highlight the potential contribution of lower-level mechanisms to what are typically considered high-level cognitive functions, such as mental representation.
14. Yang H, Ogawa K. Decoding of Motor Imagery Involving Whole-body Coordination. Neuroscience 2022; 501:131-142. PMID: 35952995; DOI: 10.1016/j.neuroscience.2022.07.029.
Abstract
The present study investigated whether different types of motor imagery can be classified based on the location of activation peaks or on multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data, and compared visual motor imagery (VI) with kinesthetic motor imagery (KI). During fMRI scanning sessions, 25 participants imagined four movements included in the Motor Imagery Questionnaire-Revised (MIQ-R): knee lift, jump, arm movement, and waist bend. These four imagined movements were then classified based on the peak location or the patterns of fMRI signal values. We divided the participants into two groups based on whether they found it easier to generate VI (VI group, n = 10) or KI (KI group, n = 15). Our results show that the imagined movements can be classified using both the location of the activation peak and the spatial activation patterns within the sensorimotor cortex, with MVPA performing better than peak-based classification. Furthermore, the KI group achieved higher MVPA decoding accuracy within the left primary somatosensory cortex than the VI group, suggesting that the modality of motor imagery differentially affects classification performance in distinct brain regions.
Affiliation(s)
- Huixiang Yang
- Department of Psychology, Graduate School of Humanities and Human Sciences, Hokkaido University, Japan
- Kenji Ogawa
- Department of Psychology, Graduate School of Humanities and Human Sciences, Hokkaido University, Japan.
15
Park S, Serences JT. Relative precision of top-down attentional modulations is lower in early visual cortex compared to mid- and high-level visual areas. J Neurophysiol 2022; 127:504-518. [PMID: 35020526 PMCID: PMC8836715 DOI: 10.1152/jn.00300.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 01/06/2022] [Accepted: 01/06/2022] [Indexed: 02/03/2023] Open
Abstract
Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared with the highly precise stimulus-driven responses that are observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used functional MRI (fMRI) and human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. Whereas V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, midlevel areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of areas that generate and relay top-down feedback signals.NEW & NOTEWORTHY When the relative precision of purely top-down and bottom-up signals were compared across visual areas, early visual areas like V1 showed higher bottom-up precision compared with top-down precision. In contrast, midlevel areas showed similar levels of top-down and bottom-up precision. 
This result suggests that the precision of top-down attentional modulations may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas generating and relaying the signals.
Affiliation(s)
- Sunyoung Park
- Department of Psychology, University of California San Diego, La Jolla, California
- John T Serences
- Department of Psychology, University of California San Diego, La Jolla, California
- Neurosciences Graduate Program, University of California San Diego, La Jolla, California
16
Neuroscience and CSR: Using EEG for Assessing the Effectiveness of Branded Videos Related to Environmental Issues. SUSTAINABILITY 2022. [DOI: 10.3390/su14031347] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
The majority of studies evaluating the effectiveness of branded CSR campaigns base their conclusions on data collected through self-report questionnaires. Although such studies provide insights for evaluating the effectiveness of CSR communication methods, analysing the message that is communicated, the communication channel used and the explicit responses of those for whom the message is intended, they cannot fully encapsulate the problem of communicating environmental messages because they do not take into consideration the recipients' implicit brain reactions. Therefore, this study aims to investigate the effectiveness of CSR video communications relating to environmental issues through the lens of the recipients' implicit self, by employing neuroscience-based assessments. For the examination of implicit brain perception, an electroencephalogram (EEG) was used, and the collected data were analysed through three indicators identified as the most influential on human behaviour: emotional valence, level of brain engagement and cognitive load. The study was conducted on individuals from the millennial generation in Thessaloniki, Greece, whose implicit brain responses to seven branded commercial videos were recorded. The seven videos were part of CSR campaigns addressing environmental issues. Simultaneously, self-report results from the participants were gathered for a comparison between the explicit and implicit responses. One of the key findings of the study is that the explicit and implicit responses differ to the extent that the brain friendliness of CSR video communications has to be taken into account in the future to ensure success. The results of the study provide insight for the future creation process, conceptualisation, design and content of effective CSR communication with regard to environmental issues.
17
Dance CJ, Ward J, Simner J. What is the Link Between Mental Imagery and Sensory Sensitivity? Insights from Aphantasia. Perception 2021; 50:757-782. [PMID: 34463590 PMCID: PMC8438787 DOI: 10.1177/03010066211042186] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 08/05/2021] [Indexed: 12/16/2022]
Abstract
People with aphantasia have impoverished visual imagery and so struggle to form mental pictures in the mind's eye. By testing people with and without aphantasia, we investigate the relationship between sensory imagery and sensory sensitivity (i.e., hyper- or hypo-reactivity to incoming signals through the sense organs). In Experiment 1 we first show that people with aphantasia report impaired imagery across multiple domains (e.g., olfactory, gustatory, etc.) rather than simply vision. Importantly, we also show that imagery is related to sensory sensitivity: aphantasics reported not only lower imagery but also lower sensory sensitivity. In Experiment 2, we showed a similar relationship between imagery and sensitivity in the general population. Finally, in Experiment 3 we found behavioural corroboration in a Pattern Glare Task, in which aphantasics experienced less visual discomfort and fewer of the visual distortions typically associated with sensory sensitivity. Our results suggest for the first time that sensory imagery and sensory sensitivity are related, and that aphantasics are characterised by both lower imagery and lower sensitivity. Our results also suggest that aphantasia (the absence of visual imagery) may be more accurately defined as a subtype of a broader imagery deficit we name dysikonesia, in which weak or absent imagery occurs across multiple senses.
Affiliation(s)
- C. J. Dance
- School of Psychology, University of Sussex, Brighton, UK
18
Monzel M, Keidel K, Reuter M. Imagine, and you will find - Lack of attentional guidance through visual imagery in aphantasics. Atten Percept Psychophys 2021; 83:2486-2497. [PMID: 33880710 PMCID: PMC8302533 DOI: 10.3758/s13414-021-02307-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/24/2021] [Indexed: 12/22/2022]
Abstract
Aphantasia is the condition of reduced or absent voluntary imagery. So far, behavioural differences between aphantasics and non-aphantasics have hardly been studied, as the base rate of those affected is quite low. The aim of the study was to examine whether attentional guidance in aphantasics is impaired by their lack of visual imagery. In two visual search tasks, an already established one by Moriya (Attention, Perception, & Psychophysics, 80(5), 1127-1142, 2018) and a newly developed one, we examined whether aphantasics are primed less by their visual imagery than non-aphantasics. The sample in Study 1 consisted of 531 and the sample in Study 2 of 325 age-matched pairs of aphantasics and non-aphantasics. Moriya's task was not capable of showing the expected effect, whereas the newly developed task was. These results could mainly be attributed to different task characteristics. Therefore, a lack of attentional guidance through visual imagery in aphantasics can be assumed and interpreted as new evidence in the imagery debate, showing that mental images actually influence information processing and are not merely epiphenomena of propositional processing.
Affiliation(s)
- Merlin Monzel
- Personality Psychology and Biological Psychology, Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111, Bonn, Germany.
- Kristof Keidel
- Personality Psychology and Biological Psychology, Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111, Bonn, Germany
- Martin Reuter
- Personality Psychology and Biological Psychology, Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111, Bonn, Germany
- Center for Economics and Neuroscience (CENs), Laboratory of Neurogenetics, University of Bonn, Bonn, Germany
19
Neafsey EJ. Conscious intention and human action: Review of the rise and fall of the readiness potential and Libet's clock. Conscious Cogn 2021; 94:103171. [PMID: 34325185 DOI: 10.1016/j.concog.2021.103171] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 06/26/2021] [Accepted: 07/04/2021] [Indexed: 11/15/2022]
Abstract
Is consciousness - the subjective awareness of the sensations, perceptions, beliefs, desires, and intentions of mental life - a genuine cause of human action, or a mere impotent epiphenomenon accompanying the brain's physical activity but utterly incapable of making anything actually happen? This article will review the history and current status of experiments and commentary related to Libet's influential paper (Brain 106:623-664, 1983), whose conclusion "that cerebral initiation even of a spontaneous voluntary act …can and usually does begin unconsciously" has had a huge effect on debate about the efficacy of conscious intentions. Early (up to 2008) and more recent (2008 on) experiments replicating and criticizing Libet's conclusions and especially his methods will be discussed, focusing especially on recent observations that the readiness potential (RP) may only be an "artifact of averaging" and that, when intention is measured using "tone probes," the onset of intention is found much earlier and often before the onset of the RP. Based on these findings, Libet's methodology was flawed and his results are no longer valid reasons for rejecting Fodor's "good old commonsense belief/desire psychology" that "my wanting is causally responsible for my reaching".
Affiliation(s)
- Edward J Neafsey
- Loyola University Chicago Stritch School of Medicine, Department of Molecular Pharmacology and Neuroscience, 2160 S. First Ave., Maywood, IL 60153, United States.
20
Psychogios A, Dimitriadis N. Brain-Adjusted Relational Leadership: A Social-Constructed Consciousness Approach to Leader-Follower Interaction. Front Psychol 2021; 12:672217. [PMID: 34326795 PMCID: PMC8313727 DOI: 10.3389/fpsyg.2021.672217] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 06/02/2021] [Indexed: 11/22/2022] Open
Abstract
Relationship-based approaches to leadership represent one of the fastest-growing leadership fields and help us better understand organizational leadership. These approaches emphasize the relationship and interaction between the leader and the follower, and the way that the two interact and influence each other in attaining mutual goals. It is known that leaders are linked to followers and vice versa, in the sense of responding to each other's needs in pursuit of mutual goals. Leaders and followers are an essential part of this social process, implying that they lose their traditional identities rooted in the formal organizational structure (manager-subordinate) and become inseparable actors in a co-constructing process of leadership. What is less well understood is the way that leadership actors are linked to each other, and in particular how they come to understand how to do so in the workplace. Even less understood is the importance and role of consciousness in this relationship, especially since consciousness appears to be both a fundamental and a very elusive element of human relations. Therefore, this paper conceptually explores the concept of consciousness within the context of social brain theory to argue that leadership actors need to rethink their approach to individuality and focus on mutually dependent relations with each other. The paper contributes to the field of Neuro-management by introducing the concept of Homo Relationalis. In this respect, we suggest that leadership is not just a socially constructed element but also a social-brain-constructed phenomenon that requires an understanding of the human brain as a social organ. We further recommend a new approach of applying cognitive style analysis to capture the duality of leader/follower in the same person, following self-illusion theory. Finally, we conclude that a social brain-adjusted relational leadership approach deserves further emphasis, and we introduce two new cognitive styles that can help capture its essence.
Affiliation(s)
- Alexandros Psychogios
- Birmingham City University, Birmingham City Business School, Birmingham, United Kingdom
21
Boccia M, Sulpizio V, Bencivenga F, Guariglia C, Galati G. Neural representations underlying mental imagery as unveiled by representation similarity analysis. Brain Struct Funct 2021; 226:1511-1531. [PMID: 33821379 PMCID: PMC8096739 DOI: 10.1007/s00429-021-02266-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 03/23/2021] [Indexed: 11/30/2022]
Abstract
It is commonly acknowledged that visual imagery and perception rely on the same content-dependent brain areas in the high-level visual cortex (HVC). However, the way in which our brain processes and organizes previously acquired knowledge to allow the generation of mental images is still a matter of debate. Here, we performed a representational similarity analysis of three previous fMRI experiments conducted in our laboratory to characterize the neural representations underlying imagery and perception of objects, buildings and faces, and to disclose possible dissimilarities in the neural structure of such representations. To this aim, we built representational dissimilarity matrices (RDMs) by computing multivariate distances between the activity patterns associated with each pair of stimuli in the content-dependent areas of the HVC and HC. We found that spatial information is widely coded in the HVC during perception (i.e. RSC, PPA and OPA) and imagery (OPA and PPA). Also, visual information seems to be coded in both preferred and non-preferred regions of the HVC, supporting a distributed view of encoding. Overall, the present results shed light upon the spatial coding of imagined and perceived exemplars in the HVC.
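The RDM construction this abstract describes can be sketched in a few lines. The sketch below builds RDMs from simulated condition-by-voxel patterns (the stimulus counts, voxel counts and noise levels are assumptions for illustration, not the study's data) and compares two representational geometries via the Spearman correlation of their upper triangles, a standard RSA comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (conditions x voxels)."""
    return 1.0 - np.corrcoef(patterns)

def spearman_upper(a, b):
    """Spearman correlation of two RDMs' upper triangles, the usual
    way of comparing representational geometries in RSA."""
    iu = np.triu_indices_from(a, k=1)
    x, y = a[iu], b[iu]
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks (no ties expected)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Hypothetical data: 3 categories (e.g. faces, buildings, objects),
# 4 exemplars each, 100-voxel patterns; "imagery" patterns are noisy
# copies of the "perception" patterns.
protos = rng.normal(size=(3, 100))
perception = 0.8 * np.repeat(protos, 4, axis=0) + 0.6 * rng.normal(size=(12, 100))
imagery = perception + 0.5 * rng.normal(size=(12, 100))
shuffled = rng.permutation(perception)          # control: scrambled geometry

sim_matched = spearman_upper(rdm(perception), rdm(imagery))
sim_shuffled = spearman_upper(rdm(perception), rdm(shuffled))
print(f"matched: {sim_matched:.2f}, shuffled control: {sim_shuffled:.2f}")
```

The matched comparison should come out well above the shuffled control, mirroring the logic of testing whether imagery and perception share a representational structure.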
Affiliation(s)
- Maddalena Boccia
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy.
- Valentina Sulpizio
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Federica Bencivenga
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy
- Cecilia Guariglia
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Gaspare Galati
- Department of Psychology, "Sapienza" University of Rome, Via dei Marsi, 78, 00185, Rome, Italy; Cognitive and Motor Rehabilitation and Neuroimaging Unit, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
22
Koenig-Robert R, Pearson J. Why do imagery and perception look and feel so different? Philos Trans R Soc Lond B Biol Sci 2021; 376:20190703. [PMID: 33308061 PMCID: PMC7741076 DOI: 10.1098/rstb.2019.0703] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/12/2020] [Indexed: 12/16/2022] Open
Abstract
Despite the past few decades of research providing convincing evidence of the similarities in function and neural mechanisms between imagery and perception, for most of us the experience of the two is undeniably different. Why? Here, we review and discuss the differences between imagery and perception and the possible underlying causes of these differences, from function to neural mechanisms. Specifically, we discuss the directional flow of information (top-down versus bottom-up), the differences in targeted cortical layers in primary visual cortex, and possible differing neural mechanisms of modulation versus excitation. For the first time in history, neuroscience is beginning to shed light on this long-held mystery of why imagery and perception look and feel so different. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Joel Pearson
- School of Psychology, The University of New South Wales, Sydney, Australia
23
Keogh R, Pearson J. Attention driven phantom vision: measuring the sensory strength of attentional templates and their relation to visual mental imagery and aphantasia. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190688. [PMID: 33308064 PMCID: PMC7741074 DOI: 10.1098/rstb.2019.0688] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/12/2020] [Indexed: 01/08/2023] Open
Abstract
When we search for an object in an array or anticipate attending to a future object, we create an 'attentional template' of the object. The definitions of attentional templates and visual imagery share many similarities, as well as many of the same neural characteristics. However, the phenomenology of these attentional templates and their neural similarities to visual imagery and perception are rarely, if ever, discussed. Here, we investigate the relationship between these two forms of non-retinal phantom vision through the use of the binocular rivalry technique, which allows us to measure the sensory strength of attentional templates in the absence of concurrent perceptual stimuli. We find that attentional templates correlate with both feature-based attention and visual imagery. Attentional templates, like imagery, were significantly disrupted by the presence of irrelevant visual stimuli, while feature-based attention was not. We also found that a special population who lack the ability to visualize (aphantasia) showed evidence of feature-based attention when measured using the binocular rivalry paradigm, but not of attentional templates. Taken together, these data suggest functional similarities between attentional templates and visual imagery, advancing the theory of visual imagery as a general simulation tool used across cognition. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Rebecca Keogh
- School of Psychology, The University of New South Wales, Sydney, Australia
24
Abstract
Historically, mental imagery has been defined as an experiential state-as something necessarily conscious. But most behavioural or neuroimaging experiments on mental imagery-including the most famous ones-do not actually take the conscious experience of the subject into consideration. Further, recent research highlights that there are very few behavioural or neural differences between conscious and unconscious mental imagery. I argue that treating mental imagery as not necessarily conscious (as potentially unconscious) would bring much needed explanatory unification to mental imagery research. It would also help us to reassess some of the recent aphantasia findings inasmuch as at least some subjects with aphantasia would be best described as having unconscious mental imagery. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Affiliation(s)
- Bence Nanay
- Centre for Philosophical Psychology, University of Antwerp, Antwerp, Belgium
25
Pernu TK, Elzein N. From Neuroscience to Law: Bridging the Gap. Front Psychol 2020; 11:1862. [PMID: 33192747 PMCID: PMC7642893 DOI: 10.3389/fpsyg.2020.01862] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 07/07/2020] [Indexed: 11/13/2022] Open
Abstract
Since our moral and legal judgments are focused on our decisions and actions, one would expect information about the neural underpinnings of human decision-making and action-production to have a significant bearing on those judgments. However, despite the wealth of empirical data, and the public attention it has attracted in the past few decades, the results of neuroscientific research have had relatively little influence on legal practice. It is here argued that this is due, at least partly, to the discussion on the relationship of the neurosciences and law mixing up a number of separate issues that have different relevance to our moral and legal judgments. The approach here is hierarchical; more and less feasible ways in which neuroscientific data could inform such judgments are separated from each other. The neurosciences and other physical views on human behavior and decision-making do have the potential to have an impact on our legal reasoning. However, this happens in various different ways, and too often an appeal to neural data is assumed to be automatically relevant to shaping our moral and legal judgments. Our physicalist intuitions easily favor neural-level explanations over mental-level ones. But even if one were to subscribe to some reductionist variant of physicalism, it would not follow that all neural data are automatically relevant to our moral and legal reasoning. However, the neurosciences can give us indirect evidence for reductive physicalism, which can then lead us to challenge the very idea of free will. Such a development can, ultimately, also have repercussions on law and legal practice.
Affiliation(s)
- Tuomas K. Pernu
- Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki, Finland
- Department of Philosophy, King’s College London, London, United Kingdom
- Nadine Elzein
- University of Oxford, Lady Margaret Hall, Oxford, United Kingdom
26
Decoding motor imagery and action planning in the early visual cortex: Overlapping but distinct neural mechanisms. Neuroimage 2020; 218:116981. [DOI: 10.1016/j.neuroimage.2020.116981] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2019] [Revised: 05/18/2020] [Accepted: 05/19/2020] [Indexed: 11/22/2022] Open
27
Koenig-Robert R, Pearson J. Decoding Nonconscious Thought Representations during Successful Thought Suppression. J Cogn Neurosci 2020; 32:2272-2284. [PMID: 32762524 DOI: 10.1162/jocn_a_01617] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Controlling our thoughts is central to mental well-being, and its failure is at the crux of a number of mental disorders. Paradoxically, behavioral evidence shows that thought suppression often fails. Despite the broad importance of understanding the mechanisms of thought control, little is known about the fate of neural representations of suppressed thoughts. Using fMRI, we investigated the brain areas involved in controlling visual thoughts and tracked suppressed thought representations using multivoxel pattern analysis. Participants were asked to either visualize a vegetable/fruit or suppress any visual thoughts about those objects. Surprisingly, the content (object identity) of successfully suppressed thoughts was still decodable in visual areas with algorithms trained on imagery. This suggests that visual representations of suppressed thoughts are still present despite reports that they are not. Thought generation was associated with the left hemisphere, and thought suppression was associated with right hemisphere engagement. Furthermore, general linear model analyses showed that subjective success in thought suppression was correlated with engagement of executive areas, whereas thought-suppression failure was associated with engagement of visual and memory-related areas. These results suggest that the content of suppressed thoughts exists hidden from awareness, seemingly without an individual's knowledge, providing a compelling reason why thought suppression is so ineffective. These data inform models of unconscious thought production and could be used to develop new treatment approaches to disorders involving maladaptive thoughts.
28
Kristensen AB, Subhi Y, Puthusserypady S. Vocal Imagery vs Intention: Viability of Vocal-Based EEG-BCI Paradigms. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1750-1759. [PMID: 32746304 DOI: 10.1109/tnsre.2020.3004924] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The viability of electroencephalogram (EEG) based vocal imagery (VIm) and vocal intention (VInt) Brain-Computer Interface (BCI) systems was investigated in this study. Four different types of experimental tasks related to humming were designed and exploited here: (i) a non-task-specific (NTS) task, (ii) a motor task (MT), (iii) a VIm task, and (iv) a VInt task. EEG signals from seventeen participants were recorded for each of these tasks from 16 electrode locations on the scalp, and their features were extracted and analysed using common spatial pattern (CSP) filters. These features were subsequently fed into a support vector machine (SVM) classifier. The analysis aimed to perform a binary classification, predicting whether the subject was performing one task or the other. Results from an extensive analysis showed a mean classification accuracy of 88.9% for the VIm task and 91.1% for the VInt task. This study clearly shows that VIm can be classified with ease and is a viable paradigm to integrate into BCIs. Such systems are not only useful for people with speech problems, but in general for people who use BCI systems to help them in their everyday life, giving them another dimension of system control.
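The CSP-feature pipeline described above can be sketched with a minimal numpy implementation: compute CSP filters from two classes of trials, take log-variance features, and classify. A simple nearest-mean rule stands in for the SVM used in the study, and the simulated data (channel counts, trial counts, variance boosts) are illustrative assumptions, not the authors' recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

def csp_filters(X1, X2, n_components=2):
    """Common Spatial Pattern filters for two classes of band-passed
    EEG trials with shape (trials, channels, samples)."""
    C1 = np.mean([np.cov(t) for t in X1], axis=0)
    C2 = np.mean([np.cov(t) for t in X2], axis=0)
    d, U = np.linalg.eigh(C1 + C2)          # whiten the composite covariance
    P = (U / np.sqrt(d)).T
    _, B = np.linalg.eigh(P @ C1 @ P.T)     # eigenvectors, ascending order
    W = B.T @ P                             # full filter bank
    half = n_components // 2                # keep the extreme (most
    return np.vstack([W[:half], W[-(n_components - half):]])  # discriminative) filters

def log_var_features(X, W):
    """Variance of each CSP component, log-normalised (standard CSP features)."""
    Z = np.einsum('fc,tcs->tfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Simulated trials: class 1 has extra variance on channel 0, class 2 on channel 1.
def simulate(boost_ch, n=40, ch=8, s=200):
    X = rng.normal(size=(n, ch, s))
    X[:, boost_ch] *= 3.0
    return X

X1, X2 = simulate(0), simulate(1)
W = csp_filters(X1[:20], X2[:20])           # fit filters on training trials only
F1, F2 = log_var_features(X1[:20], W), log_var_features(X2[:20], W)
m1, m2 = F1.mean(axis=0), F2.mean(axis=0)

def classify(X):
    """Nearest class mean in CSP feature space (SVM stand-in)."""
    F = log_var_features(X, W)
    d1 = ((F - m1) ** 2).sum(axis=1)
    d2 = ((F - m2) ** 2).sum(axis=1)
    return np.where(d1 < d2, 1, 2)

acc = np.mean(np.r_[classify(X1[20:]) == 1, classify(X2[20:]) == 2])
print(f"held-out accuracy: {acc:.2f}")
```

The key idea, as in the study's pipeline, is that CSP projects multichannel EEG onto components whose variance maximally discriminates the two conditions, so even a very simple classifier on log-variance features separates them well.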
29
López-García D, Sobrado A, Peñalver JMG, Górriz JM, Ruz M. Multivariate Pattern Analysis Techniques for Electroencephalography Data to Study Flanker Interference Effects. Int J Neural Syst 2020; 30:2050024. [PMID: 32496140 DOI: 10.1142/s0129065720500240] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
A central challenge in cognitive neuroscience is to understand the neural mechanisms that underlie the capacity to control our behavior according to internal goals. Flanker tasks, which require responding to stimuli surrounded by distracters that trigger incompatible action tendencies, are frequently used to measure this conflict. Even though the interference generated in these situations has been broadly studied, multivariate analysis techniques can shed new light on the underlying neural mechanisms. The current study is an initial step toward adapting an interference Flanker paradigm embedded in a Demand-Selection Task (DST) to a format that allows measuring concurrent high-density electroencephalography (EEG). We used multivariate pattern analysis (MVPA) to decode conflict-related electrophysiological markers associated with congruent or incongruent target events in a time-frequency resolved way. Our results replicate findings obtained with other analysis approaches and offer new information regarding the dynamics of the underlying mechanisms, which show signs of reinstantiation. Our findings, some of which could not have been obtained with classic analytical strategies, open novel avenues of research.
Affiliation(s)
- David López-García
- Mind, Brain and Behavior Research Center, University of Granada, Granada, 18071 Spain
- Alberto Sobrado
- Mind, Brain and Behavior Research Center, University of Granada, Granada, 18071 Spain
- José M G Peñalver
- Mind, Brain and Behavior Research Center, University of Granada, Granada, 18071 Spain
- Juan Manuel Górriz
- Department of Signal Theory, Telematics and Communications, University of Granada, Granada, 18071 Spain
- María Ruz
- Mind, Brain and Behavior Research Center, Department of Experimental Psychology, University of Granada, Granada, 18071 Spain
30
Pupillometric decoding of high-level musical imagery. Conscious Cogn 2019; 77:102862. [PMID: 31863916 DOI: 10.1016/j.concog.2019.102862] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/11/2019] [Accepted: 12/12/2019] [Indexed: 11/22/2022]
Abstract
Humans report imagining sound where no physical sound is present: we replay conversations, practice speeches, and "hear" music all within the confines of our minds. Research has identified neural substrates underlying auditory imagery; yet deciphering its explicit contents has been elusive. Here we present a novel pupillometric method for decoding what individuals hear "inside their heads". Independent of light, pupils dilate and constrict in response to noradrenergic activity. Hence, stimuli evoking unique and reliable patterns of attention and arousal even when imagined should concurrently produce identifiable patterns of pupil-size dynamics (PSDs). Participants listened to and then silently imagined music while eye-tracked. Using machine learning algorithms, we decoded the imagined songs within- and across-participants following classifier-training on PSDs collected during both imagination and perception. Echoing findings in vision, cross-domain decoding accuracy increased with imagery strength. These data suggest that light-independent PSDs are a neural signature sensitive enough to decode imagination.
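A toy version of the decoding idea, matching an imagery-phase pupil trace against perception-phase templates, can be sketched as follows. The simulated pupil-size dynamics (random-walk traces, noise level, trial counts) are assumptions for illustration, not the authors' machine-learning pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pupil-size dynamics (PSDs): each "song" evokes a
# characteristic slow time course; imagery trials are noisy replays
# of the perception-phase template.
n_songs, n_time, n_trials = 4, 300, 15
templates = rng.normal(size=(n_songs, n_time)).cumsum(axis=1)  # slow drifting traces

def imagery_trials(song, noise=0.3):
    """Simulate noisy imagery-phase replays of one song's template."""
    walk = rng.normal(size=(n_trials, n_time)).cumsum(axis=1)
    return templates[song] + noise * walk

def decode(trace, templates):
    """Template-matching decoder: pick the perception-phase template
    with the highest Pearson correlation to the imagery trace."""
    r = [np.corrcoef(trace, t)[0, 1] for t in templates]
    return int(np.argmax(r))

correct = sum(decode(tr, templates) == s
              for s in range(n_songs)
              for tr in imagery_trials(s))
acc = correct / (n_songs * n_trials)
print(f"imagined-song decoding accuracy: {acc:.2f} (chance = 0.25)")
```

As in the study, the decoder succeeds only to the extent that imagery reproduces a reliable, song-specific pupil time course, which is why decoding accuracy would be expected to track imagery strength.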
31
Pearson J. The human imagination: the cognitive neuroscience of visual mental imagery. Nat Rev Neurosci 2019; 20:624-634. [DOI: 10.1038/s41583-019-0202-9] [Citation(s) in RCA: 181] [Impact Index Per Article: 36.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]