1
Painter DR, Norwood MF, Marsh CH, Hine T, Woodman C, Libera M, Harvie D, Dungey K, Chen B, Bernhardt J, Gan L, Jones S, Zeeman H. Virtual reality gameplay classification illustrates the multidimensionality of visuospatial neglect. Brain Commun 2024; 6:fcae145. PMID: 39165478; PMCID: PMC11333965; DOI: 10.1093/braincomms/fcae145.
Abstract
Brain injuries can significantly impact mental processes and lead to hidden disabilities that are not easily detectable. Traditional methods for assessing these impacts are imprecise, leading to unreliable prevalence estimates and treatments with uncertain effectiveness. Immersive virtual reality has shown promise for assessment, but its use as a standalone tool is rare. Our research focused on developing and validating a standalone immersive virtual reality classification system for unilateral spatial neglect, a condition common after brain injury, characterized by inattention to one side of space. Our study involved 51 brain injury inpatients and 30 controls, all engaging with 'The Attention Atlas', an immersive virtual reality game for testing visual search skills. Our classification system aimed to identify patients with neglect, 'minor atypicality' (indicative of inattention not consistent enough to be labelled as neglect) or non-neglect. This categorization was based on a simple mathematical definition, utilizing gameplay to describe spatial orientation (to the left or right side) and attentional challenge (indicative of search inefficiency). These metrics were benchmarked against a normative model to detect atypical visual search, which refers to gameplay beyond the usual bounds. The combination of neglected side, orientation and challenge factors was used to categorize neglect. We discovered a strong correlation between atypical visual search patterns and neglect risk factors, such as middle cerebral artery stroke, parietal injuries and existing neglect diagnoses (Poisson regression incidence rate ratio = 7.18, 95% confidence interval = 4.41-11.90). In our study, immersive virtual reality identified neglect in one-fourth of the patients (n = 13, 25.5%), minor atypicality in 17.6% (n = 9) and non-neglect in the majority, 56.9% (n = 29). This contrasts with standard assessments, which detected neglect in 17.6% (n = 9) of cases and had no intermediate category.
Our analysis determined six categories of neglect, the most common being left hemispace neglect with above-median orientation and challenge scores. Traditional assessments were not significantly more accurate (accuracy = 84.3%, P = 0.06) than a blanket assumption of non-neglect. Traditional assessments were also relatively insensitive in detecting immersive virtual reality-identified neglect (53.8%), particularly in less severe cases and those involving right-side inattention. Our findings underline the effectiveness of immersive virtual reality in revealing various dimensions of neglect, surpassing traditional methods in sensitivity and detail and operating independently from them. To integrate immersive virtual reality into real-world clinical settings, collaboration with healthcare professionals, patients and other stakeholders is crucial to ensure practical applicability and accessibility.
Affiliation(s)
- David R Painter
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
- Michael F Norwood
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
- Chelsea H Marsh
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
- School of Applied Psychology, Griffith University, Gold Coast, Queensland, 4215, Australia
- Trevor Hine
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
- School of Applied Psychology, Griffith University, Mount Gravatt, Queensland, 4215, Australia
- Christie Woodman
- Neurosciences Rehabilitation Unit, Gold Coast University Hospital, Gold Coast, Queensland, 4215, Australia
- Marilia Libera
- Psychology Department, Logan Hospital, Logan, Queensland, 4131, Australia
- Daniel Harvie
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
- Allied Health and Human Performance, Innovation, Implementation and Clinical Translation in Health (IIMPACT in Health), University of South Australia, Adelaide, 5001, South Australia, Australia
- Kelly Dungey
- School of Applied Psychology, Griffith University, Mount Gravatt, Queensland, 4215, Australia
- Ben Chen
- Allied Health and Rehabilitation, Emergency and Specialty Services, Gold Coast Health, Gold Coast, Queensland, 4215, Australia
- Julie Bernhardt
- Florey Institute of Neuroscience and Mental Health, Austin Campus, Heidelberg, 3084, Victoria, Australia
- Leslie Gan
- Rehabilitation Unit, Logan Hospital, Meadowbrook, Queensland, 4131, Australia
- Susan Jones
- School of Applied Psychology, Griffith University, Mount Gravatt, Queensland, 4215, Australia
- Heidi Zeeman
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Nathan, Queensland, 4111, Australia
2
Andersen SK, Hillyard SA. The time course of feature-selective attention inside and outside the focus of spatial attention. Proc Natl Acad Sci U S A 2024; 121:e2309975121. PMID: 38588433; PMCID: PMC11032453; DOI: 10.1073/pnas.2309975121.
Abstract
Research on attentional selection of stimulus features has yielded seemingly contradictory results. On the one hand, many experiments in humans and animals have observed a "global" facilitation of attended features across the entire visual field, even when spatial attention is focused on a single location. On the other hand, several event-related potential studies in humans reported that attended features are enhanced at the attended location only. The present experiment demonstrates that these conflicting results can be explained by differences in the timing of attentional allocation inside and outside the spatial focus of attention. Participants attended to fields of either red or blue randomly moving dots on either the left or right side of fixation with the task of detecting brief coherent motion targets. Recordings of steady-state visual evoked potentials elicited by the flickering stimuli allowed concurrent measurement of the time course of feature-selective attention in visual cortex on both the attended and the unattended sides. The onset of feature-selective attentional modulation on the attended side occurred around 150 ms earlier than on the unattended side. This finding that feature-selective attention is not spatially global from the outset but extends to unattended locations after a temporal delay resolves previous contradictions between studies finding global versus hierarchical selection of features and provides insight into the fundamental relationship between feature-based and location-based (spatial) attention mechanisms.
Affiliation(s)
- Søren K. Andersen
- Department of Psychology, University of Southern Denmark, DK-5230 Odense M, Denmark
- School of Psychology, University of Aberdeen, Aberdeen AB24 3FX, United Kingdom
- Steven A. Hillyard
- Department of Neurosciences, University of California at San Diego, La Jolla, CA 92093
- Leibniz Institute for Neurobiology, 39118 Magdeburg, Germany
3
Norwood MF, Painter DR, Marsh CH, Reid C, Hine T, Harvie DS, Jones S, Dungey K, Chen B, Libera M, Gan L, Bernhardt J, Kendall E, Zeeman H. The Attention Atlas virtual reality platform maps three-dimensional (3D) attention in unilateral spatial neglect patients: a protocol. Brain Impair 2023; 24:548-567. PMID: 38167362; DOI: 10.1017/brimp.2022.15.
Abstract
BACKGROUND Deficits in visuospatial attention, known as neglect, are common following brain injury, but underdiagnosed and poorly treated, resulting in long-term cognitive disability. In clinical settings, neglect is often assessed using simple pen-and-paper tests. While convenient, these cannot characterise the full spectrum of neglect. This protocol reports a research programme that compares traditional neglect assessments with a novel virtual reality attention assessment platform: The Attention Atlas (AA). METHODS/DESIGN The AA was codesigned by researchers and clinicians to meet the clinical need for improved neglect assessment. The AA uses a visual search paradigm to map the attended space in three dimensions and seeks to identify the optimal parameters that best distinguish neglect from non-neglect, and the spectrum of neglect, by providing near-time feedback to clinicians on system-level behavioural performance. A series of experiments will address procedural, scientific, patient, and clinical feasibility domains. RESULTS Analyses focus on descriptive measures of reaction time, accuracy data for target localisation, and histogram-based raycast attentional mapping analysis, which measures the individual's orientation in space and inter- and intra-individual variation of visuospatial attention. We will compare neglect and control data using parametric between-subjects analyses. We present example individual-level results produced in near-time during visual search. CONCLUSIONS The development and validation of the AA is part of a new generation of translational neuroscience that exploits the latest advances in technology and brain science, including technology repurposed from the consumer gaming market. This approach to rehabilitation has the potential for highly accurate, highly engaging, personalised care.
Affiliation(s)
- Michael Francis Norwood
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
- David Ross Painter
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
- Chelsea Hannah Marsh
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
- School of Applied Psychology, Griffith University, Gold Coast, QLD, Australia
- Connor Reid
- Technical Partners Health (TPH), Griffith University, Nathan, QLD, Australia
- Trevor Hine
- School of Applied Psychology, Griffith University, Mt Gravatt, QLD, Australia
- Daniel S Harvie
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
- Innovation, Implementation and Clinical Translation in Health (IIMPACT in Health), Allied Health and Human Performance, University of South Australia, Adelaide, SA, Australia
- Susan Jones
- Neurosciences Rehabilitation Unit, Gold Coast University Hospital, Gold Coast, QLD, Australia
- Kelly Dungey
- Neurosciences Rehabilitation Unit, Gold Coast University Hospital, Gold Coast, QLD, Australia
- Ben Chen
- Allied Health and Rehabilitation, Emergency and Specialty Services, Gold Coast Health, Gold Coast, QLD, Australia
- Marilia Libera
- Psychology Department, Logan Hospital, Logan, QLD, Australia
- Leslie Gan
- Rehabilitation Unit, Logan Hospital, Meadowbrook, QLD, Australia
- Julie Bernhardt
- Florey Institute of Neuroscience and Mental Health, Heidelberg, VIC, Australia
- Elizabeth Kendall
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
- Heidi Zeeman
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Meadowbrook, QLD, Australia
4
Painter DR, Norwood MF, Marsh CH, Hine T, Harvie D, Libera M, Bernhardt J, Gan L, Zeeman H. Immersive virtual reality gameplay detects visuospatial atypicality, including unilateral spatial neglect, following brain injury: a pilot study. J Neuroeng Rehabil 2023; 20:161. PMID: 37996834; PMCID: PMC10668447; DOI: 10.1186/s12984-023-01283-9.
Abstract
BACKGROUND In neurorehabilitation, problems with visuospatial attention, including unilateral spatial neglect, are prevalent and routinely assessed by pen-and-paper tests, which are limited in accuracy and sensitivity. Immersive virtual reality (VR), which motivates a much wider (more intuitive) spatial behaviour, promises new possibilities for identifying visuospatial atypicality in multiple measures, reflecting cognitive and motor diversity across individuals with brain injuries. METHODS In this pilot study, 9 clinician controls (mean age 43 years; 4 males) and 13 neurorehabilitation inpatients (mean age 59 years; 9 males), recruited a mean of 41 days post-injury, played a VR visual search game. Primary injuries included 7 stroke, 4 traumatic brain injury, and 2 other acquired brain injury. Three patients were identified as having left-sided neglect prior to taking part in the VR. Response accuracy, reaction time, and headset and controller raycast orientation quantified gameplay. Normative modelling identified the typical gameplay bounds, and visuospatial atypicality was defined as gameplay beyond these bounds. RESULTS The study found VR to be feasible, with only minor instances of motion sickness, positive user experiences, and satisfactory system usability. Crucially, the analytical method, which emphasized identifying 'visuospatial atypicality', proved effective. Visuospatial atypicality was more commonly observed in patients than in controls and was prevalent in both groups of patients: those with and without neglect. CONCLUSION Our research indicates that normative modelling of VR gameplay is a promising tool for identifying visuospatial atypicality after acute brain injury. This approach holds potential for a detailed examination of neglect.
Affiliation(s)
- David R Painter
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
- Michael F Norwood
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
- Chelsea H Marsh
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
- School of Applied Psychology, Griffith University, Gold Coast, QLD, Australia
- Trevor Hine
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
- School of Applied Psychology, Griffith University, Mount Gravatt, QLD, Australia
- Daniel Harvie
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
- Allied Health and Human Performance, Innovation, Implementation and Clinical Translation in Health (IIMPACT in Health), University of South Australia, Adelaide, SA, Australia
- Marilia Libera
- Psychology Department, Logan Hospital, Logan, QLD, Australia
- Julie Bernhardt
- Florey Institute of Neuroscience and Mental Health, Heidelberg, VIC, Australia
- Leslie Gan
- Rehabilitation Unit, Logan Hospital, Meadowbrook, QLD, Australia
- Heidi Zeeman
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, 170 Kessels Rd, Nathan, QLD, 4111, Australia
5
Zhong C, Ding Y, Qu Z. Distinct roles of theta and alpha oscillations in the process of contingent attentional capture. Front Hum Neurosci 2023; 17:1220562. PMID: 37609570; PMCID: PMC10440541; DOI: 10.3389/fnhum.2023.1220562.
Abstract
Introduction Visual spatial attention can be captured by a salient color singleton that is contingent on the target feature. A previous study reported that theta (4-7 Hz) and alpha (8-14 Hz) oscillations were related to contingent attentional capture, but the corresponding attentional mechanisms of these oscillations remain unclear. Methods In this study, we analyzed the electroencephalogram data of our previous study to investigate the roles of capture-related theta and alpha oscillation activities. Different from the previous study that used color-changed placeholders as irrelevant cues, the present study adopted abrupt onsets of color singleton cues which tend to elicit phase-locked neural activities. In Experiment 1, participants completed a peripheral visual search task in which spatially uninformative color singleton cues were inside the spatial attentional window and a central rapid serial visual presentation (RSVP) task in which the same cues were outside the spatial attentional window. In Experiment 2, participants completed a color RSVP task and a size RSVP task in which the peripheral color singleton cues were contingent and not contingent on target feature, respectively. Results In Experiment 1, spatially uninformative color singleton cues elicited lateralized theta activities when they were contingent on target feature, irrespective of whether they were inside or outside the spatial attentional window. In contrast, the same color singleton cues elicited alpha lateralization only when they were inside the spatial attentional window. In Experiment 2, we further found that theta lateralization vanished if the color singleton cues were not contingent on target feature. Discussion These results suggest distinct roles of theta and alpha oscillations in the process of contingent attentional capture initiated by abrupt onsets of singleton cues. 
Theta activities may reflect global enhancement of the target feature, while alpha activities may be related to attentional engagement with spatially relevant singleton cues. These lateralized neural oscillations, together with the distractor-elicited N2pc component, may reflect multiple stages of attentional processing during contingent attentional capture.
Affiliation(s)
- Chupeng Zhong
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Yulong Ding
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- School of Psychology, South China Normal University, Guangzhou, China
- Zhe Qu
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
6
Chiarella SG, Simione L, D'Angiò M, Raffone A, Di Pace E. The mechanisms of selective attention in phenomenal consciousness. Conscious Cogn 2023; 107:103446. PMID: 36508897; DOI: 10.1016/j.concog.2022.103446.
Abstract
In three experiments, we investigated the effects of selective attention in iconic memory and fragile visual short-term memory (VSTM), which have been related to phenomenal consciousness. We used a novel retro-cue paradigm with different delays (early vs late) and object priorities (high vs equal vs low) to investigate (a) attentional costs and benefits and the role of (b) bottom-up factors and (c) fragile VSTM in feature-based attentional selection. Experiment 1 showed that attentional costs modulate visual maintenance at longer delays, while Experiment 2 showed that when the exposure time of the memory array was reduced from 250 ms to 100 ms, a bottom-up manipulation, participants were unable to select the objects based on their priorities. Finally, Experiment 3 showed that a pattern mask presented before the transfer into visual working memory attenuates overall performance while preserving the priority effect. The implications for phenomenal consciousness before conscious access are discussed.
Affiliation(s)
- Salvatore G Chiarella
- Sapienza University of Rome, Department of Psychology, Rome, Italy; Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR), Rome, Italy
- Luca Simione
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR), Rome, Italy
- Monia D'Angiò
- Sapienza University of Rome, Department of Psychology, Rome, Italy
- Antonino Raffone
- Sapienza University of Rome, Department of Psychology, Rome, Italy; ECONA, Interuniversity Center, Rome, Italy; School of Buddhist Studies, Philosophy, and Comparative Religions, Nalanda University, Rajgir, India
- Enrico Di Pace
- Sapienza University of Rome, Department of Psychology, Rome, Italy
7
Renton AI, Painter DR, Mattingley JB. Optimising the classification of feature-based attention in frequency-tagged electroencephalography data. Sci Data 2022; 9:296. PMID: 35697741; PMCID: PMC9192640; DOI: 10.1038/s41597-022-01398-z.
Abstract
Brain-computer interfaces (BCIs) are a rapidly expanding field of study and require accurate and reliable real-time decoding of patterns of neural activity. These protocols often exploit selective attention, a neural mechanism that prioritises the sensory processing of task-relevant stimulus features (feature-based attention) or task-relevant spatial locations (spatial attention). Within the visual modality, attentional modulation of neural responses to different inputs is well indexed by steady-state visual evoked potentials (SSVEPs). These signals are reliably present in single-trial electroencephalography (EEG) data, are largely resilient to common EEG artifacts, and allow separation of neural responses to numerous concurrently presented visual stimuli. To date, efforts to use single-trial SSVEPs to classify visual attention for BCI control have largely focused on spatial attention rather than feature-based attention. Here, we present a dataset that allows for the development and benchmarking of algorithms to classify feature-based attention using single-trial EEG data. The dataset includes EEG and behavioural responses from 30 healthy human participants who performed a feature-based motion discrimination task on frequency tagged visual stimuli.
Affiliation(s)
- Angela I Renton
- The University of Queensland, Queensland Brain Institute, St Lucia, 4072, Australia
- The University of Queensland, School of Information Technology and Electrical Engineering, St Lucia, Australia
- David R Painter
- The University of Queensland, Queensland Brain Institute, St Lucia, 4072, Australia
- Jason B Mattingley
- The University of Queensland, Queensland Brain Institute, St Lucia, 4072, Australia
- The University of Queensland, School of Psychology, St Lucia, 4072, Australia
- Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
8
Gundlach C, Forschack N, Müller MM. Suppression of Unattended Features Is Independent of Task Relevance. Cereb Cortex 2021; 32:2437-2446. PMID: 34564718; DOI: 10.1093/cercor/bhab351.
Abstract
Feature-based attention serves the separation of relevant from irrelevant features. While global amplification of attended features is coherently described as a key mechanism of feature-based attention, the nature and constituting factors of neural suppressive interactions are far less clear. One aspect of global amplification is its flexible modulation by the task relevance of the to-be-attended stimulus. We examined whether suppression is similarly modulated by the task relevance of unattended features or is mandatory for all of them. For this purpose, participants saw a display of randomly moving dots in 3 distinct colors and were asked to report brief events of coherent motion for a cued color. Of the 2 unattended colored clouds, one contained distracting motion events while the other was irrelevant and without such motion events throughout the experiment. We used electroencephalography-derived steady-state visual-evoked potentials to investigate early visual processing of the attended, unattended, and irrelevant colors under sustained feature-based attention. The analysis revealed a biphasic process with an early amplification of the to-be-attended color followed by suppression of the to-be-ignored color relative to a pre-cue baseline. Importantly, the neural dynamics for the unattended and always-irrelevant colors were comparable. Suppression is thus a mandatory mechanism affecting all unattended stimuli irrespective of their task relevance.
Affiliation(s)
- Christopher Gundlach
- Experimental Psychology and Methods, Universität Leipzig, 04109 Leipzig, Germany
- Norman Forschack
- Experimental Psychology and Methods, Universität Leipzig, 04109 Leipzig, Germany
- Matthias M Müller
- Experimental Psychology and Methods, Universität Leipzig, 04109 Leipzig, Germany
9
Painter DR, Kim JJ, Renton AI, Mattingley JB. Joint control of visually guided actions involves concordant increases in behavioural and neural coupling. Commun Biol 2021; 4:816. PMID: 34188170; PMCID: PMC8242020; DOI: 10.1038/s42003-021-02319-3.
Abstract
It is often necessary for individuals to coordinate their actions with others. In the real world, joint actions rely on the direct observation of co-actors and rhythmic cues. But how are joint actions coordinated when such cues are unavailable? To address this question, we recorded brain activity while pairs of participants guided a cursor to a target either individually (solo control) or together with a partner (joint control) from whom they were physically and visibly separated. Behavioural patterns revealed that joint action involved real-time coordination between co-actors and improved accuracy for the lower performing co-actor. Concurrent neural recordings and eye tracking revealed that joint control affected cognitive processing across multiple stages. Joint control involved increases in both behavioural and neural coupling - both quantified as interpersonal correlations - peaking at action completion. Correspondingly, a neural offset response acted as a mechanism for and marker of interpersonal neural coupling, underpinning successful joint actions.
Affiliation(s)
- David R Painter
- The University of Queensland, Queensland Brain Institute, St Lucia, Australia
- The Hopkins Centre, Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia
- Menzies Health Institute Queensland, Griffith University, Gold Coast, Queensland, Australia
- Jeffrey J Kim
- The University of Queensland, Queensland Brain Institute, St Lucia, Australia
- The University of Queensland, School of Psychology, St Lucia, Australia
- Angela I Renton
- The University of Queensland, Queensland Brain Institute, St Lucia, Australia
- Jason B Mattingley
- The University of Queensland, Queensland Brain Institute, St Lucia, Australia
- The University of Queensland, School of Psychology, St Lucia, Australia
- Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
10
Wang Y, Yan J, Yin Z, Ren S, Dong M, Zheng C, Zhang W, Liang J. How Native Background Affects Human Performance in Real-World Visual Object Detection: An Event-Related Potential Study. Front Neurosci 2021; 15:665084. PMID: 33994938; PMCID: PMC8119748; DOI: 10.3389/fnins.2021.665084.
Abstract
Visual processing refers to the process of perceiving, analyzing, synthesizing, manipulating, transforming, and thinking about visual objects. It is modulated by both stimulus-driven and goal-directed factors and is manifested in neural activities extending from visual cortex to high-level cognitive areas. An extensive body of studies has investigated the neural mechanisms of visual object processing using synthetic or curated visual stimuli. However, synthetic or curated images generally do not accurately reflect the semantic links between objects and their backgrounds, and previous studies have not answered the question of how the native background affects visual target detection. The current study bridged this gap by constructing a stimulus set of natural scenes with two levels of complexity and modulating participants' attention to actively or passively attend to the background contents. Behaviorally, decision times were longer when the background was complex or when participants' attention was distracted from the detection task, and object detection accuracy was lower when the background was complex. Event-related potential (ERP) analysis clarified the effects of scene complexity and attentional state on brain responses in occipital and centro-parietal areas, which were suggested to be associated with varied attentional cueing and sensory evidence accumulation effects across experimental conditions. Our results imply that efficient visual processing of real-world objects may involve competition between context and distractors co-existing in the native background, and that extensive attentional cues and fine-grained but semantically irrelevant scene information are perhaps detrimental to real-world object detection.
Affiliation(s)
- Yue Wang
- School of Electronic Engineering, Xidian University, Xi'an, China
- Jianpu Yan
- School of Electronic Engineering, Xidian University, Xi'an, China
- Zhongliang Yin
- School of Life Science and Technology, Xidian University, Xi'an, China
- Shenghan Ren
- School of Life Science and Technology, Xidian University, Xi'an, China
- Minghao Dong
- School of Life Science and Technology, Xidian University, Xi'an, China
- Changli Zheng
- Southwest China Research Institute of Electronic Equipment, Chengdu, China
- Wei Zhang
- Southwest China Research Institute of Electronic Equipment, Chengdu, China
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi'an, China
11
Adam KCS, Chang L, Rangan N, Serences JT. Steady-State Visually Evoked Potentials and Feature-based Attention: Preregistered Null Results and a Focused Review of Methodological Considerations. J Cogn Neurosci 2021; 33:695-724. [PMID: 33416444 PMCID: PMC8354379 DOI: 10.1162/jocn_a_01665]
Abstract
Feature-based attention is the ability to selectively attend to a particular feature (e.g., attend to red but not green items while looking for the ketchup bottle in your refrigerator), and steady-state visually evoked potentials (SSVEPs) measured from the human EEG signal have been used to track the neural deployment of feature-based attention. Although many published studies suggest that we can use trial-by-trial cues to enhance relevant feature information (i.e., greater SSVEP response to the cued color), there is ongoing debate about whether participants may likewise use trial-by-trial cues to voluntarily ignore a particular feature. Here, we report the results of a preregistered study in which participants were cued either to attend to or to ignore a color. Counter to prior work, we found no attention-related modulation of the SSVEP response in either cue condition. However, positive control analyses revealed that participants paid some degree of attention to the cued color (i.e., we observed a greater P300 component to targets in the attended vs. the unattended color). In light of these unexpected null results, we conducted a focused review of methodological considerations for studies of feature-based attention using SSVEPs. In the review, we quantify potentially important stimulus parameters that have been used in the past (e.g., stimulation frequency, trial counts) and discuss the potential importance of these and other task factors (e.g., feature-based priming) for SSVEP studies.
12
Feature-based attention is not confined by object boundaries: Spatially global enhancement of irrelevant features. Psychon Bull Rev 2021; 28:1252-1260. [PMID: 33687666 DOI: 10.3758/s13423-021-01897-x]
Abstract
Theories of visual attention differ in what they identify as the core unit of selection. Feature-based theories emphasize basic visual features (e.g., color, motion), demonstrated through enhancement of attended features throughout the visual field, while object-based theories propose that attention enhances all features belonging to the same object. These theories make distinct predictions about the processing of features that are not primarily attended: object-based theories predict that such secondary, task-irrelevant features are enhanced within object boundaries, while feature-based theories predict enhancement of irrelevant features across locations, regardless of objecthood. To test these two accounts, we had participants attend to a set of colored dots among distractor dots (moving coherently upward or downward) to detect brief luminance decreases, while simultaneously detecting speed changes in other sets of dots in the opposite visual field. In the first experiment, we demonstrate that participants had higher speed detection rates in the dot array that matched the motion direction of the attended color array, although motion direction was task-irrelevant. In a second experiment, we manipulated the probability that speed changes occurred in the matching motion direction and found that enhancement of the irrelevant motion direction persisted even when it was detrimental to task performance, suggesting that spatially global effects of feature-based attention cannot easily be flexibly adjusted. Overall, these results indicate that features that are not primarily attended are enhanced globally, surpassing object boundaries.
13
Renton AI, Painter DR, Mattingley JB. Differential Deployment of Visual Attention During Interactive Approach and Avoidance Behavior. Cereb Cortex 2020; 29:2366-2383. [PMID: 29750259 DOI: 10.1093/cercor/bhy105]
Abstract
The ability to coordinate approach and avoidance actions in dynamic environments represents the boundary between extinction and the continued survival of many animal species. It is therefore crucial that sensory systems allocate limited attentional resources to the most relevant information to facilitate planning and execution of appropriate actions. Prominent theories of how attention regulates visual processing focus on the distinction between behaviorally relevant and irrelevant visual inputs. To date, however, no study has directly compared the deployment of attention to visual inputs relevant for approach and avoidance behaviors, which naturally occur in dynamic, interactive environments. In two experiments, we combined electroencephalography, frequency tagging, and eye gaze measures to investigate whether the deployment of visual selective attention differs for items relevant for approach and avoidance actions. Participants maneuvered a cursor to approach and avoid contact with moving items in a continuous interactive task. The results indicated that while the approach and avoidance tasks recruited equivalent attentional resources overall, attentional biases were directed toward task-relevant items during approach, and away from task-relevant items during avoidance. We conclude that the deployment of visual attention is guided not only by relevance to a behavioral goal, but also by the nature of that goal.
Affiliation(s)
- Angela I Renton
- School of Psychology, The University of Queensland, St Lucia, Australia; Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- David R Painter
- School of Psychology, The University of Queensland, St Lucia, Australia
- Jason B Mattingley
- School of Psychology, The University of Queensland, St Lucia, Australia; Queensland Brain Institute, The University of Queensland, St Lucia, Australia
14
de Lissa P, Caldara R, Nicholls V, Miellet S. In pursuit of visual attention: SSVEP frequency-tagging moving targets. PLoS One 2020; 15:e0236967. [PMID: 32750065 PMCID: PMC7402507 DOI: 10.1371/journal.pone.0236967]
Abstract
Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the co-registration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a Steady State Visually Evoked Potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly, but transiently, lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are only briefly reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may discretely reduce visual processing of objects even when they are directly being tracked with the eyes.
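The frequency-tagging analysis that studies like this rely on reduces, at its core, to reading the EEG amplitude spectrum at the stimulation frequency. The following is a minimal numpy sketch of that step under purely illustrative assumptions (250 Hz sampling, 2 s epochs, simulated single-channel data; none of these are the paper's parameters):

```python
import numpy as np

def ssvep_amplitude(eeg, fs, tag_hz):
    """Single-sided FFT amplitude at the frequency bin nearest tag_hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) / (len(eeg) / 2)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_hz))]

rng = np.random.default_rng(0)
fs, dur, tag = 250, 2.0, 30.0      # illustrative sampling rate, epoch length, tag frequency
t = np.arange(int(fs * dur)) / fs

# Simulated epochs: the tagged response is stronger with undivided overt attention
attended = 1.0 * np.sin(2 * np.pi * tag * t) + rng.normal(0.0, 1.0, t.size)
divided = 0.5 * np.sin(2 * np.pi * tag * t) + rng.normal(0.0, 1.0, t.size)
```

Comparing `ssvep_amplitude(attended, fs, tag)` against `ssvep_amplitude(divided, fs, tag)` mirrors, in toy form, the attended-versus-divided contrast the study reports; real analyses average over many trials and electrodes, and the co-registered eye data are needed to separate overt from covert shifts.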
Affiliation(s)
- Peter de Lissa
- Department of Psychology, Eye and Brain Mapping Laboratory (iBMLab), University of Fribourg, Fribourg, Switzerland
- Roberto Caldara
- Department of Psychology, Eye and Brain Mapping Laboratory (iBMLab), University of Fribourg, Fribourg, Switzerland
- Victoria Nicholls
- Department of Psychology, University of Bournemouth, Poole, United Kingdom
- Sebastien Miellet
- Active Vision Lab, School of Psychology, University of Wollongong, Wollongong, Australia
15
Smout CA, Garrido MI, Mattingley JB. Global effects of feature-based attention depend on surprise. Neuroimage 2020; 215:116785. [DOI: 10.1016/j.neuroimage.2020.116785]
16
Evidence accumulation during perceptual decision-making is sensitive to the dynamics of attentional selection. Neuroimage 2020; 220:117093. [PMID: 32599268 DOI: 10.1016/j.neuroimage.2020.117093]
Abstract
The ability to select and combine multiple sensory inputs in support of accurate decisions is a hallmark of adaptive behaviour. Attentional selection is often needed to prioritize task-relevant stimuli relative to irrelevant, potentially distracting stimuli. As most studies of perceptual decision-making to date have made use of task-relevant stimuli only, relatively little is known about how attention modulates decision making. To address this issue, we developed a novel 'integrated' decision-making task, in which participants judged the average direction of successive target motion signals while ignoring concurrent and spatially overlapping distractor motion signals. In two experiments that varied the role of attentional selection, we used regression to quantify the influence of target and distractor stimuli on behaviour. Using electroencephalography, we characterised the neural correlates of decision making, attentional selection and feature-specific responses to target and distractor signals. While targets strongly influenced perceptual decisions and associated neural activity, we also found that concurrent and spatially coincident distractors exerted a measurable bias on both behaviour and brain activity. Our findings suggest that attention operates as a real-time but imperfect filter during perceptual decision-making by dynamically modulating the contributions of task-relevant and irrelevant sensory inputs.
17
Seibold VC, Stepper MY, Rolke B. Temporal attention boosts perceptual effects of spatial attention and feature-based attention. Brain Cogn 2020; 142:105570. [PMID: 32447188 DOI: 10.1016/j.bandc.2020.105570]
Abstract
Temporal attention, that is, the process of anticipating the occurrence of a stimulus at a given time point, has been shown to improve perceptual processing of visual stimuli. In the present study, we investigated whether and how temporal attention interacts with spatial attention and feature-based attention in visual selection. To monitor the influence of the three different attention dimensions on perceptual processing, we measured event-related potentials (ERPs). Our participants performed a visual search task, in which a colored singleton was presented amongst homogenous distractors. We manipulated spatial and feature-based attention by requiring participants to respond only to target singletons in a particular color and at a to-be-attended spatial location. We manipulated temporal attention by means of an explicit temporal cue that announced either validly or invalidly the occurrence of the search display. We obtained early ERP effects of spatial attention and feature-based attention at the validly cued but not at the invalidly cued time point. Taken together, our results suggest that temporal attention boosts early effects of spatial and feature-based attention.
Affiliation(s)
- Verena C Seibold
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Germany.
- Madeleine Y Stepper
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Germany
- Bettina Rolke
- Evolutionary Cognition, Department of Psychology, University of Tübingen, Germany
18
Luo C, Ding N. Visual target detection in a distracting background relies on neural encoding of both visual targets and background. Neuroimage 2020; 216:116870. [PMID: 32339773 DOI: 10.1016/j.neuroimage.2020.116870]
Abstract
The ability to detect visual targets in a complex background varies across individuals and is affected by factors such as stimulus saliency and top-down attention. Here, we investigated how the saliency of the visual background (naturalistic cartoon video vs. blank screen) and top-down attention (single vs. dual tasks) separately affect individual ability to detect visual targets. Behaviorally, we found that target detection accuracy decreased and reaction times lengthened when the background was salient or during dual tasking. The EEG response to the visual background was recorded using a novel stimulus tagging technique. This response was strongest in occipital electrodes and was sensitive to background saliency but not dual tasking. In contrast, the event-related potential (ERP) evoked by the visual target was strongest in central electrodes and was affected by both background saliency and dual tasking. With a cartoon background, the EEG responses to visual targets presented in the central visual field and the EEG responses to the peripheral visual background could both predict individual target detection performance. When these two responses were combined, better prediction was achieved. These results suggest that neural processing of visual targets and background jointly contributes to individual visual target detection performance.
Affiliation(s)
- Cheng Luo
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, 310027, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, 310027, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, 311121, China
19
Varlet M, Nozaradan S, Nijhuis P, Keller PE. Neural tracking and integration of 'self' and 'other' in improvised interpersonal coordination. Neuroimage 2020; 206:116303. [DOI: 10.1016/j.neuroimage.2019.116303]
20
Renton AI, Mattingley JB, Painter DR. Optimising non-invasive brain-computer interface systems for free communication between naïve human participants. Sci Rep 2019; 9:18705. [PMID: 31822715 PMCID: PMC6904487 DOI: 10.1038/s41598-019-55166-y]
Abstract
Free communication is one of the cornerstones of modern civilisation. While manual keyboards currently allow us to interface with computers and manifest our thoughts, a next frontier is communication without manual input. Brain-computer interface (BCI) spellers often achieve this by decoding patterns of neural activity as users attend to flickering keyboard displays. To date, the highest performing spellers report typing rates of ~10.00 words/minute. While impressive, these rates are typically calculated for experienced users repetitively typing single phrases. It is therefore not clear whether naïve users are able to achieve such high rates with the added cognitive load of genuine free communication, which involves continuously generating and spelling novel words and phrases. In two experiments, we developed an open-source, high-performance, non-invasive BCI speller and examined its feasibility for free communication. The BCI speller required users to focus their visual attention on a flickering keyboard display, thereby producing unique cortical activity patterns for each key, which were decoded using filter-bank canonical correlation analysis. In Experiment 1, we tested whether seventeen naïve users could maintain rapid typing during prompted free word association. We found that information transfer rates were indeed slower during this free communication task than during typing of a cued character sequence. In Experiment 2, we further evaluated the speller's efficacy for free communication by developing a messaging interface, allowing users to engage in free conversation. The results showed that free communication was possible, but that information transfer was reduced by voluntary textual corrections and turn-taking during conversation. We evaluated a number of factors affecting the suitability of BCI spellers for free communication, and make specific recommendations for improving classification accuracy and usability. 
Overall, we found that developing a BCI speller for free communication requires a focus on usability over reduced character selection time, and as such, future performance appraisals should be based on genuine free communication scenarios.
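The abstract names filter-bank canonical correlation analysis (FBCCA) as the decoder: each candidate key's flicker frequency is scored by correlating band-pass-filtered EEG with sinusoidal references at that frequency and its harmonics. The sketch below illustrates the idea only; it is not the study's open-source pipeline, and the sub-band edges, weighting exponent, channel count, and simulated signals are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def canon_corr(X, Y):
    """First canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def fbcca_score(eeg, fs, f, n_harm=3, n_bands=3):
    """FBCCA score for one candidate flicker frequency f (Hz)."""
    t = np.arange(eeg.shape[0]) / fs
    # Sine/cosine references at the fundamental and its harmonics
    refs = np.column_stack([fn(2 * np.pi * f * (h + 1) * t)
                            for h in range(n_harm) for fn in (np.sin, np.cos)])
    score = 0.0
    for b in range(n_bands):
        lo = 6.0 * (b + 1)                      # illustrative sub-band edges
        bb, aa = butter(4, [lo / (fs / 2), 0.9], btype="band")
        rho = canon_corr(filtfilt(bb, aa, eeg, axis=0), refs)
        score += (b + 1) ** -1.25 * rho ** 2    # weight lower sub-bands more
    return score

# Simulated two-channel EEG while "attending" a key flickering at 12 Hz
rng = np.random.default_rng(1)
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
signal = np.sin(2 * np.pi * 12.0 * t)
eeg = np.column_stack([signal, signal]) + rng.normal(0.0, 0.5, (t.size, 2))

cands = [10.0, 12.0, 15.0]
decoded = cands[int(np.argmax([fbcca_score(eeg, fs, f) for f in cands]))]
```

In a speller, one such score is computed per key each epoch and the highest-scoring key is selected, which is why classification accuracy trades off directly against selection time.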
Affiliation(s)
- Angela I Renton
- Queensland Brain Institute, The University of Queensland, St Lucia, 4072, Australia.
- Jason B Mattingley
- Queensland Brain Institute, The University of Queensland, St Lucia, 4072, Australia
- School of Psychology, The University of Queensland, St Lucia, 4072, Australia
- Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- David R Painter
- School of Psychology, The University of Queensland, St Lucia, 4072, Australia
21
Painter DR, Dwyer MF, Kamke MR, Mattingley JB. Stimulus-Driven Cortical Hyperexcitability in Individuals with Charles Bonnet Hallucinations. Curr Biol 2018; 28:3475-3480.e3. [DOI: 10.1016/j.cub.2018.08.058]
22
Abstract
OBJECTIVE: Feature-based attention (FBA) helps one detect objects with a particular color, motion, or orientation. FBA works globally; the attended feature is enhanced at all positions in the visual field. This global property of FBA lets one use stimuli presented in the peripheral visual field to track attention in a task presented centrally. The present study explores the use of SSVEPs, generated by flicker presented peripherally, to track attention in a visual search task presented centrally. We evaluate whether this use of EEG to track FBA is robust enough to track attention when performing visual search within a dynamic 3D environment presented with a head-mounted display (HMD).
APPROACH: Observers first performed a visual search task presented in the central visual field within a stationary virtual environment. The purpose of this first experiment was to establish whether flicker presented peripherally can produce SSVEPs during HMD use. The second experiment placed observers in a dynamic virtual environment in which they moved around a racetrack. Peripheral flicker was again used to track attention to the color of the target in the visual search task.
MAIN RESULTS: SSVEPs produced by flicker in the peripheral visual field are influenced strongly by attention in observers with stationary or moving viewpoints. Offline classification results show that one can track an observer's attended color, which suggests that these methods may provide a viable means for tracking FBA in a real-time task.
SIGNIFICANCE: Current FBA and brain-computer interface (BCI) studies primarily use foveal flicker to produce SSVEP responses. The present study's finding that one can use peripherally presented flicker to track attention in dynamic virtual environments promises a more flexible and practical approach to BCIs based on FBA.
Affiliation(s)
- Veronica C Chu
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92627, United States of America
23
Collins E, Robinson AK, Behrmann M. Distinct neural processes for the perception of familiar versus unfamiliar faces along the visual hierarchy revealed by EEG. Neuroimage 2018; 181:120-131. [PMID: 29966716 DOI: 10.1016/j.neuroimage.2018.06.080]
Abstract
Humans recognize faces with ease, despite the complexity of the task and of the visual system which underlies it. Different spatial regions, including both the core and extended face processing networks, and distinct temporal stages of processing have been implicated in face recognition, but there is ongoing controversy regarding the extent to which the mechanisms for recognizing a familiar face differ from those for an unfamiliar face. Here, we used electroencephalogram (EEG) and flicker SSVEP, a high signal-to-noise approach, and searchlight decoding methods to elucidate the mechanisms mediating the processing of familiar and unfamiliar faces in the time domain. Familiar and unfamiliar faces were presented periodically at 15 Hz, 6 Hz and 3.75 Hz either upright or inverted in separate blocks, with the rationale that faster frequencies require shorter processing times per image and tap into fundamentally different levels of visual processing. The 15 Hz trials, likely to reflect early visual processing, exhibited enhanced neural responses for familiar over unfamiliar face trials, but only when the faces were upright. In contrast, decoding methods revealed similar classification accuracies for upright and inverted faces for both familiar and unfamiliar faces. For the 6 Hz frequency, familiar faces had lower amplitude responses than unfamiliar faces, and decoding familiarity was more accurate for upright compared with inverted faces. Finally, the 3.75 Hz frequency revealed no main effects of familiarity, but decoding showed significant correlations with behavioral ratings of face familiarity, suggesting that activity evoked by this slow presentation frequency reflected higher-level, cognitive aspects of familiarity processing. This three-way dissociation between frequencies reveals that fundamentally different stages of the visual hierarchy are modulated by face familiarity. 
The combination of experimental and analytical approaches used here represent a novel method for elucidating spatio-temporal characteristics within the visual system.
Affiliation(s)
- Elliot Collins
- Department of Psychology and Center for the Neural Basis of Cognition, Carnegie Mellon University, USA; School of Medicine, University of Pittsburgh, Pittsburgh, USA.
- Amanda K Robinson
- Department of Psychology and Center for the Neural Basis of Cognition, Carnegie Mellon University, USA; School of Psychology, The University of Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders, Department of Cognitive Science, Macquarie University, Australia
- Marlene Behrmann
- Department of Psychology and Center for the Neural Basis of Cognition, Carnegie Mellon University, USA
24
It takes two to tango: Suppression of task-irrelevant features requires (spatial) competition. Neuroimage 2018; 178:485-492. [PMID: 29860080 DOI: 10.1016/j.neuroimage.2018.05.073]
Abstract
In a recent electrophysiological study, we reported on global facilitation but local suppression of color stimuli in feature-based attention in human early visual cortex. Subjects attended to one of two centrally located superimposed red/blue random dot kinematograms (RDKs). Task-irrelevant single RDKs in the same colors were presented in the left and right periphery, respectively. Suppression of the to-be-ignored color was only present in the centrally located RDK but not in the one with the same color in the periphery. This result was at odds with the idea of active suppression of task-irrelevant features across the entire visual field. In the present study, we introduced competition in the periphery by superimposing the RDKs at the task-irrelevant location as well. With such competition, we found suppression of the task-irrelevant color in the centrally and peripherally located RDKs. Results clearly demonstrate that suppression of task-irrelevant features at task-irrelevant locations requires (spatial) competitive interactions and is not an inherent neural mechanism in feature-based attention as was found for global facilitation.
25
Tompary A, Al-Aidroos N, Turk-Browne NB. Attending to What and Where: Background Connectivity Integrates Categorical and Spatial Attention. J Cogn Neurosci 2018; 30:1281-1297. [PMID: 29791296 DOI: 10.1162/jocn_a_01284]
Abstract
Top-down attention prioritizes the processing of goal-relevant information throughout visual cortex based on where that information is found in space and what it looks like. Whereas attentional goals often have both spatial and featural components, most research on the neural basis of attention has examined these components separately. Here we investigated how these attentional components are integrated by examining the attentional modulation of functional connectivity between visual areas with different selectivity. Specifically, we used fMRI to measure temporal correlations between spatially selective regions of early visual cortex and category-selective regions in ventral temporal cortex while participants performed a task that benefitted from both spatial and categorical attention. We found that categorical attention modulated the connectivity of category-selective areas, but only with retinotopic areas that coded for the spatially attended location. Similarly, spatial attention modulated the connectivity of retinotopic areas only with the areas coding for the attended category. This pattern of results suggests that attentional modulation of connectivity is driven both by spatial selection and featural biases. Combined with exploratory analyses of frontoparietal areas that track these changes in connectivity among visual areas, this study begins to shed light on how different components of attention are integrated in support of more complex behavioral goals.
26
Jiang Y, Wu X, Gao X. A category-specific top-down attentional set can affect the neural responses outside the current focus of attention. Neurosci Lett 2017; 659:80-85. [DOI: 10.1016/j.neulet.2017.07.029]
27
Forschack N, Andersen SK, Müller MM. Global Enhancement but Local Suppression in Feature-based Attention. J Cogn Neurosci 2017; 29:619-627. [DOI: 10.1162/jocn_a_01075]
Abstract
A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs), each flickering at a different frequency, to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and a blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed RDKs in the center to discriminate coherent motion events in the attended from the unattended color RDK, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For peripherally located RDKs, we found the expected SSVEP amplitude increase relative to precue baseline when their color matched that of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to precue baseline when the peripheral color matched the unattended one of the central RDK, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.
Affiliation(s)
- Norman Forschack
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
28
Zhang D, Hong B, Gao S, Röder B. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials. Exp Brain Res 2017; 235:1575-1591. [PMID: 28258437 DOI: 10.1007/s00221-017-4907-6]
Abstract
While the behavioral dynamics as well as the functional networks of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants performed an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant auditory or visual stimuli. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards, flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent but interacting neural mechanisms of sustained and transient attentional orienting.
Affiliation(s)
- Dan Zhang
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing, 100084, China
- Bo Hong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Shangkai Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, 100084, China
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Von-Melle-Park 11, 20146, Hamburg, Germany
29
Gordon N, Koenig-Robert R, Tsuchiya N, van Boxtel JJ, Hohwy J. Neural markers of predictive coding under perceptual uncertainty revealed with Hierarchical Frequency Tagging. eLife 2017; 6. PMID: 28244874. PMCID: PMC5360443. DOI: 10.7554/eLife.22749.
Abstract
There is a growing understanding that both top-down and bottom-up signals underlie perception. But it is not known how these signals integrate with each other and how this depends on the perceived stimuli's predictability. 'Predictive coding' theories describe this integration in terms of how well top-down predictions fit with bottom-up sensory input. Identifying neural markers for such signal integration is therefore essential for the study of perception and predictive coding theories. To achieve this, we combined EEG methods that preferentially tag different levels in the visual hierarchy. Importantly, we examined intermodulation components as a measure of integration between these signals. Our results link the different signals to core aspects of predictive coding, and suggest that top-down predictions indeed integrate with bottom-up signals in a manner that is modulated by the predictability of the sensory input, providing evidence for predictive coding and opening new avenues to studying such interactions in perception.
Affiliation(s)
- Noam Gordon
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton, Australia
- Naotsugu Tsuchiya
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton, Australia
- School of Psychological Sciences, Monash University, Clayton, Australia
- Jeroen JA van Boxtel
- Monash Institute of Cognitive and Clinical Neurosciences, Monash University, Clayton, Australia
- School of Psychological Sciences, Monash University, Clayton, Australia
| | - Jakob Hohwy
- Cognition and Philosophy Lab, Philosophy Department, Monash University, Clayton, Australia
30
Cohen MX, Gulbinaite R. Rhythmic entrainment source separation: Optimizing analyses of neural responses to rhythmic sensory stimulation. Neuroimage 2016; 147:43-56. PMID: 27916666. DOI: 10.1016/j.neuroimage.2016.11.036.
Abstract
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results.
Affiliation(s)
- Michael X Cohen
- Radboud University and Radboud University Medical Center, Donders Center for Neuroscience, Netherlands.
31
Retter TL, Rossion B. Uncovering the neural magnitude and spatio-temporal dynamics of natural image categorization in a fast visual stream. Neuropsychologia 2016; 91:9-28. DOI: 10.1016/j.neuropsychologia.2016.07.028.
|
32
|
Chang CF, Liang WK, Lai CL, Hung DL, Juan CH. Theta Oscillation Reveals the Temporal Involvement of Different Attentional Networks in Contingent Reorienting. Front Hum Neurosci 2016; 10:264. PMID: 27375459. PMCID: PMC4891329. DOI: 10.3389/fnhum.2016.00264.
Abstract
In the visual world, rapidly reorienting to relevant objects outside the focus of attention is vital for survival. This ability, arising from the interaction between goal-directed and stimulus-driven attentional control, is termed contingent reorienting. Neuroimaging studies have demonstrated activations of the ventral and dorsal attentional networks, which exhibit right-hemisphere dominance, but the temporal dynamics of these networks remain unclear. The present study used event-related potentials (ERPs) to index the locus of spatial attention and the Hilbert-Huang transform (HHT) to acquire time-frequency information during contingent reorienting. The ERP results showed that contingent reorienting induced a significant N2pc over both hemispheres. In contrast, our time-frequency analysis further found that, unlike the N2pc, theta oscillation during contingent reorienting differed between hemispheres and experimental sessions. The inter-trial coherence (ITC) of the theta oscillation demonstrated that the two sides of the attentional networks became phase-locked to contingent reorienting at different stages: the left attentional networks were associated with contingent reorienting in the first experimental session, whereas the bilateral attentional networks played a more important role in this process in the subsequent session. This phase-locked information suggests a dynamic temporal evolution of the involvement of different attentional networks in contingent reorienting and a potential role of the left ventral network in the first session.
Affiliation(s)
- Chi-Fu Chang
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Wei-Kuang Liang
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Chiou-Lian Lai
- Department of Neurology, Kaohsiung Medical University, Kaohsiung City, Taiwan
- Daisy L Hung
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
- Chi-Hung Juan
- Institute of Cognitive Neuroscience, National Central University, Taoyuan City, Taiwan
33
Fagioli S, Macaluso E. Neural Correlates of Divided Attention in Natural Scenes. J Cogn Neurosci 2016; 28:1392-405. PMID: 27167404. DOI: 10.1162/jocn_a_00980.
Abstract
Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.
Affiliation(s)
- Emiliano Macaluso
- IRCCS Santa Lucia Foundation, Rome, Italy
- Lyon Neuroscience Research Center
34
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses. J Neurosci 2016; 35:16046-54. PMID: 26658858. DOI: 10.1523/jneurosci.2931-15.2015.
Abstract
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying "inattentional deafness"--the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼ 100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 "awareness" response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity.
Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic "push-pull" pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing.
35
Andersen SK, Müller MM. Driving steady-state visual evoked potentials at arbitrary frequencies using temporal interpolation of stimulus presentation. BMC Neurosci 2015; 16:95. PMID: 26690632. PMCID: PMC4687115. DOI: 10.1186/s12868-015-0234-7.
Abstract
Background
Steady-state visual evoked potentials (SSVEPs) have been utilized widely in basic and applied research in recent years. These oscillatory responses of the visual cortex are elicited by flickering stimuli. They have the same fundamental frequency as the driving stimulus and are highly sensitive to manipulations of attention and stimulus properties. While standard computer monitors offer great flexibility in the choice of visual stimuli for driving SSVEPs, the frequencies that can be elicited are limited to integer divisors of the monitor's refresh rate.
Results
To avoid this technical constraint, we devised an interpolation technique for stimulus presentation, with which SSVEPs can be elicited at arbitrary frequencies. We tested this technique with monitor refresh rates of 85 and 120 Hz. At a refresh rate of 85 Hz, interpolated presentation produced artifacts in the recorded spectrum in the form of additional peaks not located at the stimulated frequency or its harmonics. However, at a refresh rate of 120 Hz, these artifacts did not occur and the spectrum elicited by an interpolated flicker became indistinguishable from the spectrum obtained by non-interpolated presentation of the same frequency.
Conclusions
Our interpolation technique eliminates frequency limitations of the common non-interpolated presentation technique and has many possible applications for future research.
Affiliation(s)
- Søren K Andersen
- School of Psychology, University of Aberdeen, William Guild Building, Aberdeen, AB24 3FX, UK.
- Matthias M Müller
- Institute of Psychology, University of Leipzig, Neumarkt 9-19, 04109, Leipzig, Germany.
36
Li HH, Carrasco M, Heeger DJ. Deconstructing Interocular Suppression: Attention and Divisive Normalization. PLoS Comput Biol 2015; 11:e1004510. PMID: 26517321. PMCID: PMC4627721. DOI: 10.1371/journal.pcbi.1004510.
Abstract
In interocular suppression, a suprathreshold monocular target can be rendered invisible by a salient competitor stimulus presented in the other eye. Despite decades of research on interocular suppression and related phenomena (e.g., binocular rivalry, flash suppression, continuous flash suppression), the neural processing underlying interocular suppression is still unknown. We developed and tested a computational model of interocular suppression. The model included two processes that contributed to the strength of interocular suppression: divisive normalization and attentional modulation. According to the model, the salient competitor induced a stimulus-driven attentional modulation selective for the location and orientation of the competitor, thereby increasing the gain of neural responses to the competitor and reducing the gain of neural responses to the target. Additional suppression was induced by divisive normalization in the model, similar to other forms of visual masking. To test the model, we conducted psychophysics experiments in which both the size and the eye-of-origin of the competitor were manipulated. For small and medium competitors, behavioral performance was consonant with a change in the response gain of neurons that responded to the target. But large competitors induced a contrast-gain change, even when the competitor was split between the two eyes. The model correctly predicted these results and outperformed an alternative model in which the attentional modulation was eye specific. We conclude that both stimulus-driven attention (selective for location and feature) and divisive normalization contribute to interocular suppression. In interocular suppression, a visible target presented in one eye can be rendered invisible by a competing image (the competitor) presented in the other eye. 
This phenomenon is a striking demonstration of the discrepancy between physical inputs to the visual system and perception, and it also allows neuroscientists to study how perceptual systems regulate competing information. Interocular suppression has been explained by mutually suppressive interactions (modeled by divisive normalization) between neurons that respond differentially to the two eyes. Attention, which selects relevant information in natural viewing condition, has also been found to play a role in interocular suppression. But the specific role of attentional modulation is still an open question. In this study, we proposed a computational model of interocular suppression integrating both attentional modulation and divisive normalization. By modeling the hypothetical neural responses and fitting the model to psychophysical data, we showed that interocular suppression involves an attentional modulation selective for the orientation of the competitor, and covering the spatial extent of the competitor. We conclude that both attention and divisive normalization contribute to interocular suppression, and that their impacts are distinguishable.
Affiliation(s)
- Hsin-Hung Li
- Department of Psychology, New York University, New York, New York, United States of America
- Marisa Carrasco
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
- David J. Heeger
- Department of Psychology, New York University, New York, New York, United States of America
- Center for Neural Science, New York University, New York, New York, United States of America
37
Attentional Selection of Feature Conjunctions Is Accomplished by Parallel and Independent Selection of Single Features. J Neurosci 2015; 35:9912-9. PMID: 26156992. DOI: 10.1523/jneurosci.5268-14.2015.
Abstract
Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes on conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. SIGNIFICANCE STATEMENT The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.).
We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those features separately. This result is key to understanding attentional selection in complex (natural) scenes, where relevant stimuli are likely to be defined by a combination of stimulus features.
38
Painter DR, Dux PE, Mattingley JB. Causal involvement of visual area MT in global feature-based enhancement but not contingent attentional capture. Neuroimage 2015; 118:90-102. DOI: 10.1016/j.neuroimage.2015.06.019.
39
Painter DR, Dux PE, Mattingley JB. Distinct roles of the intraparietal sulcus and temporoparietal junction in attentional capture from distractor features: An individual differences approach. Neuropsychologia 2015; 74:50-62. DOI: 10.1016/j.neuropsychologia.2015.02.029.
40
Norcia AM, Appelbaum LG, Ales JM, Cottereau BR, Rossion B. The steady-state visual evoked potential in vision research: A review. J Vis 2015; 15:4. PMID: 26024451. PMCID: PMC4581566. DOI: 10.1167/15.6.4.
Abstract
Periodic visual stimulation and analysis of the resulting steady-state visual evoked potentials were first introduced over 80 years ago as a means to study visual sensation and perception. From the first single-channel recording of responses to modulated light to the present use of sophisticated digital displays composed of complex visual stimuli and high-density recording arrays, steady-state methods have been applied in a broad range of scientific and applied settings. The purpose of this article is to describe the fundamental stimulation paradigms for steady-state visual evoked potentials and to illustrate these principles through research findings across a range of applications in vision science.
41