1. Li J, Song H, Huang X, Fu Y, Guan C, Chen L, Shen M, Chen H. Testing the memory encoding cost theory using the multiple cues paradigm. Vision Res 2025;228:108552. PMID: 39889619. DOI: 10.1016/j.visres.2025.108552.
Abstract
Recent developments have introduced the Memory Encoding Cost (MEC) theory to explain the role of attention in exogenous spatial cueing effects. According to this theory, the cost effect (when comparing invalid to neutral cues) arises from attentional suppression resulting from memory encoding of the cue. Conversely, the benefit effect (when comparing valid to neutral cues) is thought to result from a combination of attentional facilitation caused by the cue and encoding-related attentional suppression. This study tests the MEC theory by investigating whether encoding-induced cost increases as the number of cues presented increases. In Experiment 1, participants identified a target letter, which was occasionally preceded by one or three exogenous cues. The results showed that multiple cues resulted in a larger cost effect and a smaller (or even reversed) benefit effect compared to a single cue. This asymmetry between cost and benefit effects was consistently observed across experiments, even when controlling for factors like forward masking and target salience in Experiment 2, or using placeholders as in prior research in Experiment 3. These findings are more consistent with the MEC theory than with traditional attention models. In conclusion, our results provide strong support for the MEC theory, highlighting the importance of both attentional facilitation and encoding-induced suppression in explaining exogenous spatial cueing effects.
Affiliation(s)
- Jian Li: Department of Psychology and Behavioral Sciences, Zhejiang University, China; Nanjing Brain Hospital Affiliated to Nanjing Medical University, China
- Huixin Song: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Xiaoqi Huang: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Yingtao Fu: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Chenxiao Guan: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Luo Chen: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Mowei Shen: Department of Psychology and Behavioral Sciences, Zhejiang University, China
- Hui Chen: Department of Psychology and Behavioral Sciences, Zhejiang University, China
2. Madison A, Callahan-Flintoft C, Thurman SM, Hoffing RAC, Touryan J, Ries AJ. Fixation-related potentials during a virtual navigation task: The influence of image statistics on early cortical processing. Atten Percept Psychophys 2025. PMID: 39849263. DOI: 10.3758/s13414-024-03002-5.
Abstract
Historically, electrophysiological correlates of scene processing have been studied in experiments using static stimuli presented at discrete timescales while participants maintain a fixed eye position. Gaps remain in generalizing these findings to real-world conditions, where eye movements are made to select new visual information and where the environment remains stable but changes with our position and orientation in space, driving dynamic visual stimulation. Co-recording of eye movements and electroencephalography (EEG) leverages fixations as time-locking events in the EEG recording under free-viewing conditions to create fixation-related potentials (FRPs), providing a neural snapshot in which to study visual processing under naturalistic conditions. The current experiment explored the influence of low-level image statistics, specifically luminance and a metric of spatial frequency (the slope of the amplitude spectrum), on the early visual components evoked from fixation onsets in a free-viewing visual search and navigation task in a virtual environment. This research combines FRPs with an optimized approach to remove ocular artifacts and with deconvolution modeling to correct for the overlapping neural activity inherent in any free-viewing paradigm. The results suggest that early visual components of the FRPs, namely the lambda response and N1, are sensitive to luminance and spatial frequency around fixation, separate from modulation due to underlying differences in eye-movement characteristics. Together, our results demonstrate the utility of studying the influence of image statistics on FRPs using a deconvolution modeling approach to control for overlapping neural activity and oculomotor covariates.
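The overlap correction described in this abstract can be sketched as ordinary least squares on a stick (FIR) design matrix: each column models the response at one lag after an event, so closely spaced fixations are disentangled rather than averaged together. This is a generic illustration, not the authors' pipeline (which uses a more elaborate deconvolution framework); the function names and simulation parameters below are assumptions for the demo.

```python
import numpy as np

def build_fir_design(event_onsets, n_samples, n_lags):
    """Stick (FIR) design matrix: column j is 1 at sample (onset + j) for every event."""
    X = np.zeros((n_samples, n_lags))
    for onset in event_onsets:
        for lag in range(n_lags):
            t = onset + lag
            if t < n_samples:
                X[t, lag] = 1.0
    return X

def deconvolve(eeg, event_onsets, n_lags):
    """Overlap-corrected estimate of the event-locked response via least squares."""
    X = build_fir_design(event_onsets, len(eeg), n_lags)
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return beta

# Demo: recover a known response kernel from heavily overlapping events.
rng = np.random.default_rng(0)
n, n_lags = 5000, 50
kernel = np.exp(-0.5 * ((np.arange(n_lags) - 15) / 4.0) ** 2)  # bump peaking at lag 15
onsets = np.sort(rng.choice(n - n_lags, size=300, replace=False))  # mean spacing < kernel length
eeg = np.zeros(n)
for onset in onsets:
    eeg[onset:onset + n_lags] += kernel          # overlapping event-locked responses
eeg += 0.1 * rng.standard_normal(n)              # sensor noise

naive = np.mean([eeg[o:o + n_lags] for o in onsets], axis=0)  # plain averaging: distorted by overlap
estimate = deconvolve(eeg, onsets, n_lags)                    # overlap-corrected estimate
```

With event spacing shorter than the response itself, the naive average inherits contributions from neighbouring events, while the least-squares estimate recovers the kernel shape.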
Affiliation(s)
- Anna Madison: U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA; Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO 80840, USA
- Chloe Callahan-Flintoft: Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO 80840, USA
- Steven M Thurman: U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Russell A Cohen Hoffing: U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Jonathan Touryan: U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA
- Anthony J Ries: U.S. DEVCOM Army Research Laboratory, Humans in Complex Systems, Aberdeen Proving Ground, MD, USA; Warfighter Effectiveness Research Center, Department of Behavioral Sciences & Leadership, 2354 Fairchild Drive, Suite 6, U.S. Air Force Academy, CO 80840, USA
3. Robinson AK, Grootswagers T, Shatek SM, Behrmann M, Carlson TA. Dynamics of visual object coding within and across the hemispheres: Objects in the periphery. Sci Adv 2025;11:eadq0889. PMID: 39742491. DOI: 10.1126/sciadv.adq0889.
Abstract
The human brain continuously integrates information across its two hemispheres to construct a coherent representation of the perceptual world. Characterizing how visual information is represented in each hemisphere over time is crucial for understanding how hemispheric transfer contributes to perception. Here, we investigated information processing within each hemisphere over time and the degree to which it is distinct or duplicated across hemispheres. We presented participants with object images lateralized to the left or right visual fields while measuring their brain activity with electroencephalography. Stimulus coding was more robust and emerged earlier in the contralateral than the ipsilateral hemisphere. Presentation of two stimuli, one to each hemifield, reduced the fidelity of representations in both hemispheres relative to one stimulus alone, signifying hemispheric interference. Last, we found that processing within the contralateral, but not ipsilateral, hemisphere was biased to image-related over concept-related information. Together, these results suggest that hemispheric transfer operates to filter irrelevant information and efficiently prioritize processing of meaning.
Affiliation(s)
- Amanda K Robinson: School of Psychology, The University of Queensland, Brisbane, Australia; Queensland Brain Institute, The University of Queensland, Brisbane, Australia; School of Psychology, University of Sydney, Sydney, Australia
- Tijl Grootswagers: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Sophia M Shatek: School of Psychology, University of Sydney, Sydney, Australia; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Marlene Behrmann: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15260, USA
4. Huang Y, Li H, Qiu S, Ding X, Li M, Liu W, Fan Z, Cheng X. Distinct serial dependence between small and large numerosity processing. Psychol Res 2024;89:41. PMID: 39739125. DOI: 10.1007/s00426-024-02071-3.
Abstract
The serial dependence effect (SDE) is a perceptual bias whereby current stimuli are perceived as more similar to recently seen stimuli, possibly enhancing the stability and continuity of visual perception. Although the SDE has been observed across many visual features, it remains unclear whether humans rely on a single SDE mechanism to support numerosity processing across two distinct numerical ranges: subitizing (i.e., small numerosity processing, likely related to early object recognition) and estimation (i.e., large numerosity processing, likely related to ensemble numerosity extraction). Here, we show that subitizing and estimation exhibit distinct SDE patterns. Subitizing is characterized by an asymmetric SDE, whereas estimation demonstrates a symmetric SDE. Specifically, in subitizing, the SDE occurs only when the current magnitude is smaller than the previous magnitude, not when it is larger. In contrast, the SDE in estimation is present in both scenarios. We propose that these differences arise from distinct underlying mechanisms. A perceptual mechanism, namely a 'temporal hysteresis' account, can explain the asymmetrical SDE in subitizing, since object individuation resources are easily activated but resistant to deactivation. Conversely, a combination of perceptual and post-perceptual mechanisms can account for the SDEs in estimation, as both perceptual and post-perceptual interference can reduce them. Critically, a novel type of SDE characterized by reduced processing precision is found in subitizing only, implying that the continuity and stability of numerical processing can be dissociated in dynamic situations where numerical information is integrated over time. Our findings reveal the multifaceted nature of SDE mechanisms and suggest their engagement with cognitive modules likely subserving different functionalities.
Affiliation(s)
- Yue Huang: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Haokun Li: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China; State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100091, China
- Shiming Qiu: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Xianfeng Ding: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Min Li: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Wangjuan Liu: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Zhao Fan: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
- Xiaorong Cheng: School of Psychology, Central China Normal University (CCNU), Wuhan 430079, China; Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan 430079, China
5. Marsicano G, Bertini C, Ronconi L. Decoding cognition in neurodevelopmental, psychiatric and neurological conditions with multivariate pattern analysis of EEG data. Neurosci Biobehav Rev 2024;164:105795. PMID: 38977116. DOI: 10.1016/j.neubiorev.2024.105795.
Abstract
Multivariate pattern analysis (MVPA) of electroencephalographic (EEG) data represents a revolutionary approach to investigate how the brain encodes information. By considering complex interactions among spatio-temporal features at the individual level, MVPA overcomes the limitations of univariate techniques, which often fail to account for the significant inter- and intra-individual neural variability. This is particularly relevant when studying clinical populations, and therefore MVPA of EEG data has recently started to be employed as a tool to study cognition in brain disorders. Here, we review the insights offered by this methodology in the study of anomalous patterns of neural activity in conditions such as autism, ADHD, schizophrenia, dyslexia, neurological and neurodegenerative disorders, within different cognitive domains (perception, attention, memory, consciousness). Despite potential drawbacks that should be attentively addressed, these studies reveal a peculiar sensitivity of MVPA in unveiling dysfunctional and compensatory neurocognitive dynamics of information processing, which often remain blind to traditional univariate approaches. Such higher sensitivity in characterizing individual neurocognitive profiles can provide unique opportunities to optimise assessment and promote personalised interventions.
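The core operation behind the time-resolved MVPA reviewed here can be sketched in a few lines: train a classifier independently at each timepoint of epoched EEG and track cross-validated accuracy over time. The nearest-class-mean classifier and the simulated data below are simplifying assumptions for illustration; published work typically uses regularized linear classifiers via dedicated toolboxes.

```python
import numpy as np

def decode_over_time(epochs, labels, n_folds=4, seed=0):
    """Cross-validated decoding accuracy at each timepoint.

    epochs: (n_trials, n_channels, n_times); labels: array of 0/1.
    Uses a nearest-class-mean classifier for simplicity.
    """
    rng = np.random.default_rng(seed)
    n_trials, _, n_times = epochs.shape
    folds = np.array_split(rng.permutation(n_trials), n_folds)
    accuracy = np.zeros(n_times)
    for test_idx in folds:
        train_mask = np.ones(n_trials, dtype=bool)
        train_mask[test_idx] = False
        for t in range(n_times):
            X_train, X_test = epochs[train_mask, :, t], epochs[test_idx, :, t]
            m0 = X_train[labels[train_mask] == 0].mean(axis=0)  # class-0 mean pattern
            m1 = X_train[labels[train_mask] == 1].mean(axis=0)  # class-1 mean pattern
            pred = (np.linalg.norm(X_test - m1, axis=1)
                    < np.linalg.norm(X_test - m0, axis=1)).astype(int)
            accuracy[t] += (pred == labels[test_idx]).mean()
    return accuracy / n_folds

# Demo: two conditions separable only in 4 of 16 channels from timepoint 10 onward.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 40)
epochs = rng.standard_normal((80, 16, 20))
epochs[labels == 1, :4, 10:] += 1.5  # condition signal appears at t >= 10
accuracy = decode_over_time(epochs, labels)
```

Accuracy hovers at chance before the simulated signal onset and rises above it afterwards, which is the basic logic used to time-resolve when condition information becomes available in the neural signal.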
Affiliation(s)
- Gianluca Marsicano: Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy
- Caterina Bertini: Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy
- Luca Ronconi: School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
6. Melcher D, Alaberkyan A, Anastasaki C, Liu X, Deodato M, Marsicano G, Almeida D. An early effect of the parafoveal preview on post-saccadic processing of English words. Atten Percept Psychophys 2024. PMID: 38956003. DOI: 10.3758/s13414-024-02916-4.
Abstract
A key aspect of efficient visual processing is to use current and previous information to make predictions about what we will see next. In natural viewing, and when looking at words, there is typically an indication of forthcoming visual information from extrafoveal areas of the visual field before we make an eye movement to an object or word of interest. This "preview effect" has been studied for many years in the word reading literature and, more recently, in object perception. Here, we integrated methods from word recognition and object perception to investigate the timing of the preview on neural measures of word recognition. Through a combined use of EEG and eye-tracking, a group of multilingual participants took part in a gaze-contingent, single-shot saccade experiment in which words appeared in their parafoveal visual field. In valid preview trials, the same word was presented during the preview and after the saccade, while in the invalid condition, the saccade target was a number string that turned into a word during the saccade. As hypothesized, the valid preview greatly reduced the fixation-related evoked response. Interestingly, multivariate decoding analyses revealed much earlier preview effects than previously reported for words, and individual decoding performance correlated with participant reading scores. These results demonstrate that a parafoveal preview can influence relatively early aspects of post-saccadic word processing and help to resolve some discrepancies between the word and object literatures.
Affiliation(s)
- David Melcher: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates; Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Ani Alaberkyan: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Chrysi Anastasaki: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Xiaoyi Liu: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates; Department of Psychology, Princeton University, Washington Rd, Princeton, NJ 08540, USA
- Michele Deodato: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates; Center for Brain and Health, NYUAD Research Institute, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
- Gianluca Marsicano: Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121 Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023 Cesena, Italy
- Diogo Almeida: Psychology Program, Division of Science, New York University Abu Dhabi, PO Box 129188, Abu Dhabi, United Arab Emirates
7. Moerel D, Psihoyos J, Carlson TA. The time-course of food representation in the human brain. J Neurosci 2024;44:e1101232024. PMID: 38740441. PMCID: PMC11211715. DOI: 10.1523/jneurosci.1101-23.2024.
Abstract
Humans make decisions about food every day. The visual system provides important information that forms a basis for these food decisions. Although previous research has focused on visual object and category representations in the brain, it is still unclear how visually presented food is encoded by the brain. Here, we investigate the time-course of food representations in the brain. We used time-resolved multivariate analyses of electroencephalography (EEG) data, obtained from human participants (both sexes), to determine which food features are represented in the brain and whether focused attention is needed for this. We recorded EEG while participants engaged in two different tasks. In one task, the stimuli were task relevant, whereas in the other task, the stimuli were not task relevant. Our findings indicate that the brain can differentiate between food and nonfood items from ∼112 ms after the stimulus onset. The neural signal at later latencies contained information about food naturalness, how much the food was transformed, as well as the perceived caloric content. This information was present regardless of the task. Information about whether food is immediately ready to eat, however, was only present when the food was task relevant and presented at a slow presentation rate. Furthermore, the recorded brain activity correlated with the behavioral responses in an odd-item-out task. The fast representation of these food features, along with the finding that this information is used to guide food categorization decision-making, suggests that these features are important dimensions along which the representation of foods is organized.
Affiliation(s)
- Denise Moerel: School of Psychology, University of Sydney, Sydney, New South Wales 2050, Australia
- James Psihoyos: School of Psychology, University of Sydney, Sydney, New South Wales 2050, Australia
- Thomas A Carlson: School of Psychology, University of Sydney, Sydney, New South Wales 2050, Australia
8. Wu H, Li F, Chu W, Li Y, Niu Y, Shi G, Zhang L, Chen Y. Semantic image sorting method for RSVP presentation. J Neural Eng 2024;21:036018. PMID: 38688262. DOI: 10.1088/1741-2552/ad4593.
Abstract
Objective. The rapid serial visual presentation (RSVP) paradigm, based on electroencephalography (EEG), is an effective approach for object detection. It aims to detect the event-related potential (ERP) components evoked by target images for rapid identification. However, object detection performance within this paradigm is affected by the visual disparity between adjacent images in a sequence, and there is currently no objective metric to quantify this visual difference. Consequently, a reliable image sorting method is required to generate a smooth sequence for effective presentation. Approach. In this paper, we propose a novel semantic image sorting method for RSVP sequences, which aims to generate sequences that are perceptually smoother in terms of the human visual experience. Main results. We conducted a comparative analysis between our method and two existing methods for generating RSVP sequences, using both qualitative and quantitative assessments. The qualitative evaluation revealed that sequences generated by our method were smoother in subjective vision and more effective in evoking stronger ERP components than those generated by the other two methods. Quantitatively, our method generated semantically smoother sequences than the other two methods. Furthermore, we employed four advanced approaches to classify single-trial EEG signals evoked by each of the three methods; the classification results for EEG signals evoked by our method were superior to those of the other two. Significance. In summary, the results indicate that the proposed method can significantly enhance object detection performance in RSVP-based sequences.
Affiliation(s)
- Hao Wu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Fu Li: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Wenlong Chu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Yang Li: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Yi Niu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Guangming Shi: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Lijian Zhang: Beijing Institute of Mechanical Equipment, Beijing, People's Republic of China
- Yuanfang Chen: Beijing Institute of Mechanical Equipment, Beijing, People's Republic of China
9. Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Mapping the dynamics of visual feature coding: Insights into perception and integration. PLoS Comput Biol 2024;20:e1011760. PMID: 38190390. PMCID: PMC10798643. DOI: 10.1371/journal.pcbi.1011760.
Abstract
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
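Relating neural responses to behavioural similarity judgements, as done above, is commonly framed as representational similarity analysis: build a dissimilarity matrix (RDM) over conditions from each data source and correlate their off-diagonal entries. The sketch below is a generic illustration on toy data, not the modelling used in this study; all names and parameters are assumptions.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between condition patterns.

    patterns: (n_conditions, n_features).
    """
    z = (patterns - patterns.mean(axis=1, keepdims=True)) / patterns.std(axis=1, keepdims=True)
    return 1.0 - (z @ z.T) / patterns.shape[1]

def rdm_spearman(rdm_a, rdm_b):
    """Spearman correlation of the upper triangles (ties get arbitrary rank order)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ra = np.argsort(np.argsort(rdm_a[iu])).astype(float)
    rb = np.argsort(np.argsort(rdm_b[iu])).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Demo: conditions 0/1 share one underlying pattern, conditions 2/3 another.
rng = np.random.default_rng(2)
base = rng.standard_normal((2, 100))
patterns = np.stack([base[0] + 0.1 * rng.standard_normal(100) for _ in range(2)]
                    + [base[1] + 0.1 * rng.standard_normal(100) for _ in range(2)])
neural_rdm = rdm(patterns)
# Hypothetical behavioural judgements: within-pair "same", across-pair "different".
behavioural_rdm = np.array([[0, 0, 1, 1],
                            [0, 0, 1, 1],
                            [1, 1, 0, 0],
                            [1, 1, 0, 0]], dtype=float)
similarity = rdm_spearman(neural_rdm, behavioural_rdm)
```

Because the RDM abstracts away from the measurement units of each source, the same comparison works between EEG patterns, model features, and behavioural ratings.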
Affiliation(s)
- Tijl Grootswagers: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Amanda K Robinson: Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M Shatek: School of Psychology, The University of Sydney, Sydney, Australia
10. Lowe BG, Robinson JE, Yamamoto N, Hogendoorn H, Johnston P. Same but different: The latency of a shared expectation signal interacts with stimulus attributes. Cortex 2023;168:143-156. PMID: 37716110. DOI: 10.1016/j.cortex.2023.08.004.
Abstract
Predictive coding theories assert that perceptual inference is a hierarchical process of belief updating, wherein the onset of unexpected sensory data causes so-called prediction error responses that calibrate erroneous inferences. Given the functionally specialised organisation of visual cortex, it is assumed that prediction error propagation interacts with the specific visual attribute violating an expectation. We sought to test this within the temporal domain by applying time-resolved decoding methods to electroencephalography (EEG) data evoked by contextual trajectory violations of either brightness, size, or orientation within a bound stimulus. We found that following ∼170 ms post stimulus onset, responses to both size violations and orientation violations were decodable from physically identical control trials in which no attributes were violated. These two violation types were then directly compared, with attribute-specific signalling being decoded from 265 ms. Temporal generalisation suggested that this dissociation was driven by latency shifts in shared expectation signalling between the two conditions. Using a novel temporal bias method, we then found that this shared signalling occurred earlier for size violations than orientation violations. To our knowledge, we are among the first to decode expectation violations in humans using EEG and have demonstrated a temporal dissociation in attribute-specific expectancy violations.
Affiliation(s)
- Benjamin G Lowe: School of Psychology and Counselling, Queensland University of Technology (QUT), Kelvin Grove, QLD, Australia; Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, Macquarie Park, NSW, Australia
- Jonathan E Robinson: Monash Centre for Consciousness & Contemplative Studies, Monash University, Clayton, VIC, Australia
- Naohide Yamamoto: School of Psychology and Counselling, Queensland University of Technology (QUT), Kelvin Grove, QLD, Australia; Centre for Vision and Eye Research, Queensland University of Technology (QUT), Kelvin Grove, QLD, Australia
- Hinze Hogendoorn: School of Psychology and Counselling, Queensland University of Technology (QUT), Kelvin Grove, QLD, Australia; Melbourne School of Psychological Science, University of Melbourne, Parkville, VIC, Australia
- Patrick Johnston: School of Exercise Science and Nutrition Sciences, Queensland University of Technology (QUT), Kelvin Grove, QLD, Australia
11. Robinson AK, Quek GL, Carlson TA. Visual representations: Insights from neural decoding. Annu Rev Vis Sci 2023;9:313-335. PMID: 36889254. DOI: 10.1146/annurev-vision-100120-025301.
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia;
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia;
12
Smit S, Moerel D, Zopf R, Rich AN. Vicarious touch: Overlapping neural patterns between seeing and feeling touch. Neuroimage 2023; 278:120269. [PMID: 37423272] [DOI: 10.1016/j.neuroimage.2023.120269]
Abstract
Simulation theories propose that vicarious touch arises when seeing someone else being touched triggers corresponding representations of being touched. Prior electroencephalography (EEG) findings show that seeing touch modulates both early and late somatosensory responses (measured with or without direct tactile stimulation). Functional Magnetic Resonance Imaging (fMRI) studies have shown that seeing touch increases somatosensory cortical activation. These findings have been taken to suggest that when we see someone being touched, we simulate that touch in our sensory systems. The somatosensory overlap when seeing and feeling touch differs between individuals, potentially underpinning variation in vicarious touch experiences. Increases in amplitude (EEG) or cerebral blood flow response (fMRI), however, are limited in that they cannot test for the information contained in the neural signal: seeing touch may not activate the same information as feeling touch. Here, we use time-resolved multivariate pattern analysis on whole-brain EEG data from people with and without vicarious touch experiences to test whether seen touch evokes overlapping neural representations with the first-hand experience of touch. Participants felt touch to the fingers (tactile trials) or watched carefully matched videos of touch to another person's fingers (visual trials). In both groups, EEG was sufficiently sensitive to allow decoding of touch location (little finger vs. thumb) on tactile trials. However, only in individuals who reported feeling touch when watching videos of touch could a classifier trained on tactile trials distinguish touch location on visual trials. This demonstrates that, for people who experience vicarious touch, there is overlap in the information about touch location held in the neural patterns when seeing and feeling touch. The timecourse of this overlap implies that seeing touch evokes similar representations to later stages of tactile processing. Therefore, while simulation may underlie vicarious tactile sensations, our findings suggest this involves an abstracted representation of directly felt touch.
Affiliation(s)
- Sophie Smit
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia.
- Denise Moerel
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia; School of Psychology, The University of Sydney, Griffith Taylor Building A19, Camperdown, NSW 2050, Australia
- Regine Zopf
- Department of Psychosomatic Medicine and Psychotherapy, Jena University Hospital, Philosophenweg 3, Jena 07743, Federal Republic of Germany
- Anina N Rich
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, NSW 2109, Australia
13
Quattrone D, Santambrogio F, Scarpellini A, Sgherzi F, Poles I, Clementi L, Santambrogio MD. Analysis and Classification of Event-Related Potentials During Image Observation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083339] [DOI: 10.1109/embc40787.2023.10340052]
Abstract
In the field of cognitive neuroscience, researchers have conducted extensive studies on object categorization using Event-Related Potential (ERP) analysis, specifically by analyzing electroencephalographic (EEG) response signals triggered by visual stimuli. The most common approach for visual ERP analysis is to use a low presentation rate of images and an active task in which participants actively discriminate between target and non-target images. However, researchers are also interested in understanding how the human brain processes visual information in real-world scenarios. To simulate real-life object recognition, this study proposes an analysis pipeline for visual ERPs evoked by images presented in a Rapid Serial Visual Presentation (RSVP) paradigm. Such an approach allows for the investigation of recurrent patterns of visual ERP signals across specific categories and subjects. The pipeline includes segmentation of the EEG into epochs and the use of the resulting features as inputs for Support Vector Machine (SVM) classification. Results demonstrate common ERP patterns across the selected categories and the ability to obtain discriminant information from single visual stimuli presented in the RSVP paradigm.
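The pipeline described in this abstract (epoch segmentation followed by SVM classification) can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' implementation; the montage size, sampling rate, and epoch length are assumptions.

```python
# Minimal sketch of an epochs -> features -> SVM decoding pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_channels, epoch_len = 8, 50                 # assumed: 8 channels, 500 ms at 100 Hz
onsets = np.arange(200) * epoch_len           # 200 stimulus onsets, back to back
labels = rng.integers(0, 2, size=onsets.size) # two stimulus categories

# Synthetic "continuous" EEG: noise plus a category-dependent evoked bump.
eeg = rng.normal(size=(n_channels, onsets[-1] + epoch_len))
evoked = np.sin(np.linspace(0, np.pi, epoch_len))
for onset, lab in zip(onsets, labels):
    eeg[:, onset:onset + epoch_len] += (0.5 if lab else -0.5) * evoked

# Segmentation: cut the recording into epochs, flatten to feature vectors.
epochs = np.stack([eeg[:, o:o + epoch_len].ravel() for o in onsets])

# Linear SVM with cross-validation on the epoch features.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, epochs, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")  # well above the 0.5 chance level here
```

In practice the epochs would of course be extracted from recorded EEG around RSVP stimulus onsets rather than generated.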
14
Sörensen LKA, Bohté SM, de Jong D, Slagter HA, Scholte HS. Mechanisms of human dynamic object recognition revealed by sequential deep neural networks. PLoS Comput Biol 2023; 19:e1011169. [PMID: 37294830] [DOI: 10.1371/journal.pcbi.1011169]
Abstract
Humans can quickly recognize objects in a dynamically changing world. This ability is showcased by the fact that observers succeed at recognizing objects in rapidly changing image sequences, at up to 13 ms/image. To date, the mechanisms that govern dynamic object recognition remain poorly understood. Here, we developed deep learning models for dynamic recognition and compared different computational mechanisms, contrasting feedforward and recurrent, single-image and sequential processing as well as different forms of adaptation. We found that only models that integrate images sequentially via lateral recurrence mirrored human performance (N = 36) and were predictive of trial-by-trial responses across image durations (13-80 ms/image). Importantly, models with sequential lateral-recurrent integration also captured how human performance changes as a function of image presentation durations, with models processing images for a few time steps capturing human object recognition at shorter presentation durations and models processing images for more time steps capturing human object recognition at longer presentation durations. Furthermore, augmenting such a recurrent model with adaptation markedly improved dynamic recognition performance and accelerated its representational dynamics, thereby predicting human trial-by-trial responses using fewer processing resources. Together, these findings provide new insights into the mechanisms rendering object recognition so fast and effective in a dynamic visual world.
Affiliation(s)
- Lynn K A Sörensen
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain & Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
- Sander M Bohté
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, Netherlands
- Swammerdam Institute of Life Sciences (SILS), University of Amsterdam, Amsterdam, Netherlands
- Bernoulli Institute, Rijksuniversiteit Groningen, Groningen, Netherlands
- Dorina de Jong
- Istituto Italiano di Tecnologia, Center for Translational Neurophysiology of Speech and Communication, (CTNSC), Ferrara, Italy
- Università di Ferrara, Dipartimento di Scienze Biomediche e Chirurgico Specialistiche, Ferrara, Italy
- Heleen A Slagter
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- H Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain & Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
15
Hebart MN, Contier O, Teichmann L, Rockter AH, Zheng CY, Kidder A, Corriveau A, Vaziri-Pashkam M, Baker CI. THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 2023; 12:e82580. [PMID: 36847339] [PMCID: PMC10038662] [DOI: 10.7554/elife.82580]
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here, we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and the advancement of cognitive neuroscience.
Affiliation(s)
- Martin N Hebart
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
- Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Adam H Rockter
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Charles Y Zheng
- Machine Learning Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
16
Teichmann L, Moerel D, Rich AN, Baker CI. The nature of neural object representations during dynamic occlusion. Cortex 2022; 153:66-86. [PMID: 35597052] [PMCID: PMC9247008] [DOI: 10.1016/j.cortex.2022.04.009]
Abstract
Objects disappearing briefly from sight due to occlusion is an inevitable occurrence in everyday life. Yet we generally have a strong experience that occluded objects continue to exist, despite the fact that they objectively disappear. This indicates that neural object representations must be maintained during dynamic occlusion. However, it is unclear what the nature of such representation is and in particular whether it is perception-like or more abstract, for example, reflecting limited features such as position or movement direction only. In this study, we address this question by examining how different object features such as object shape, luminance, and position are represented in the brain when a moving object is dynamically occluded. We apply multivariate decoding methods to Magnetoencephalography (MEG) data to track how object representations unfold over time. Our methods allow us to contrast the representations of multiple object features during occlusion and enable us to compare the neural responses evoked by visible and occluded objects. The results show that object position information is represented during occlusion to a limited extent while object identity features are not maintained through the period of occlusion. Together, this suggests that the nature of object representations during dynamic occlusion is different from visual representations during perception.
Affiliation(s)
- Lina Teichmann
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, North Ryde, NSW, 2109, Australia; Laboratory of Brain and Cognition, 10 Center Drive, 10/4C104, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA.
- Denise Moerel
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, North Ryde, NSW, 2109, Australia; School of Psychology, University of Sydney, Sydney, NSW, Australia.
- Anina N Rich
- Perception in Action Research Centre & School of Psychological Sciences, Macquarie University, 16 University Ave, North Ryde, NSW, 2109, Australia.
- Chris I Baker
- Laboratory of Brain and Cognition, 10 Center Drive, 10/4C104, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20892, USA.
17
Wang R, Janini D, Konkle T. Mid-level Feature Differences Support Early Animacy and Object Size Distinctions: Evidence from Electroencephalography Decoding. J Cogn Neurosci 2022; 34:1670-1680. [PMID: 35704550] [PMCID: PMC9438936] [DOI: 10.1162/jocn_a_01883]
Abstract
Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts: unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.
18
Shatek SM, Robinson AK, Grootswagers T, Carlson TA. Capacity for movement is an organisational principle in object representations. Neuroimage 2022; 261:119517. [PMID: 35901917] [DOI: 10.1016/j.neuroimage.2022.119517]
Abstract
The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.
Affiliation(s)
- Sophia M Shatek
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia.
- Amanda K Robinson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Thomas A Carlson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
19
Moshel ML, Robinson AK, Carlson TA, Grootswagers T. Are you for real? Decoding realistic AI-generated faces from neural activity. Vision Res 2022; 199:108079. [PMID: 35749833] [DOI: 10.1016/j.visres.2022.108079]
Abstract
Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. We found that we could reliably decode AI-generated faces from people's neural activity. However, while at the group level people performed near chance when classifying real and realistic fake faces, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key in determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at https://osf.io/n2z73/.
Affiliation(s)
- Michoel L Moshel
- School of Psychology, University of Sydney, NSW, Australia; School of Psychology, Macquarie University, NSW, Australia.
- Amanda K Robinson
- School of Psychology, University of Sydney, NSW, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers
- School of Psychology, University of Sydney, NSW, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
20
Gurariy G, Mruczek REB, Snow JC, Caplovitz GP. Using High-Density Electroencephalography to Explore Spatiotemporal Representations of Object Categories in Visual Cortex. J Cogn Neurosci 2022; 34:967-987. [PMID: 35286384] [PMCID: PMC9169880] [DOI: 10.1162/jocn_a_01845]
Abstract
Visual object perception involves neural processes that unfold over time and recruit multiple regions of the brain. Here, we use high-density EEG to investigate the spatiotemporal representations of object categories across the dorsal and ventral pathways. In Experiment 1, human participants were presented with images from two animate object categories (birds and insects) and two inanimate categories (tools and graspable objects). In Experiment 2, participants viewed images of tools and graspable objects from a different stimulus set, one in which a shape confound that often exists between these categories (elongation) was controlled for. To explore the temporal dynamics of object representations, we employed time-resolved multivariate pattern analysis on the EEG time series data. This was performed at the electrode level as well as in source space of two regions of interest: one encompassing the ventral pathway and another encompassing the dorsal pathway. Our results demonstrate that shape, exemplar, and category information can be decoded from the EEG signal. Multivariate pattern analysis within source space revealed that both dorsal and ventral pathways contain information pertaining to shape, inanimate object categories, and animate object categories. Of particular interest, we note striking similarities obtained in both ventral stream and dorsal stream regions of interest. These findings provide insight into the spatiotemporal dynamics of object representation and contribute to a growing literature that has begun to redefine the traditional role of the dorsal pathway.
21
Moerel D, Grootswagers T, Robinson AK, Shatek SM, Woolgar A, Carlson TA, Rich AN. The time-course of feature-based attention effects dissociated from temporal expectation and target-related processes. Sci Rep 2022; 12:6968. [PMID: 35484363] [PMCID: PMC9050682] [DOI: 10.1038/s41598-022-10687-x]
Abstract
Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention, one grating was attended and the other ignored (cued by colour), and temporal expectation, with stimulus onset timing either predictable or not. We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.
Affiliation(s)
- Denise Moerel
- School of Psychological Sciences, Macquarie University, Sydney, Australia.
- Perception in Action Research Centre, Macquarie University, Sydney, Australia.
- School of Psychology, University of Sydney, Sydney, Australia.
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Psychology, University of Sydney, Sydney, Australia
- Amanda K Robinson
- School of Psychology, University of Sydney, Sydney, Australia
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M Shatek
- School of Psychology, University of Sydney, Sydney, Australia
- Alexandra Woolgar
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Anina N Rich
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Perception in Action Research Centre, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise and Training, Macquarie University, Sydney, Australia
22
Ratcliffe O, Shapiro K, Staresina BP. Fronto-medial theta coordinates posterior maintenance of working memory content. Curr Biol 2022; 32:2121-2129.e3. [PMID: 35385693] [PMCID: PMC9616802] [DOI: 10.1016/j.cub.2022.03.045]
Abstract
How does the human brain manage multiple bits of information to guide goal-directed behavior? Successful working memory (WM) functioning has consistently been linked to oscillatory power in the theta frequency band (4–8 Hz) over fronto-medial cortex (fronto-medial theta [FMT]). Specifically, FMT is thought to reflect the mechanism of an executive sub-system that coordinates maintenance of memory contents in posterior regions. However, direct evidence for the role of FMT in controlling specific WM content is lacking. Here, we collected high-density electroencephalography (EEG) data while participants engaged in WM-dependent tasks and then used multivariate decoding methods to examine WM content during the maintenance period. Engagement of WM was accompanied by a focal increase in FMT. Importantly, decoding of WM content was driven by posterior sites, which, in turn, showed increased functional theta coupling with fronto-medial channels. Finally, we observed a significant slowing of FMT frequency with increasing WM load, consistent with the hypothesized broadening of a theta “duty cycle” to accommodate additional WM items. Together, these findings demonstrate that frontal theta orchestrates posterior maintenance of WM content. Moreover, the observed frequency slowing elucidates the function of FMT oscillations by specifically supporting phase-coding accounts of WM.
Highlights
- FMT power supports WM functions
- During WM performance, posterior/parietal regions are coupled with FMT
- Multivariate decoding of WM content is mediated by these same posterior channels
- Frontal theta frequency slows with WM load, supporting phase-coding models
Affiliation(s)
- Oliver Ratcliffe
- School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Kimron Shapiro
- School of Psychology, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Bernhard P Staresina
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
23
Chen J, Zhang Y. Chinese Character Processing in Visual Masking. Front Psychol 2022; 12:763705. [PMID: 35283806] [PMCID: PMC8907841] [DOI: 10.3389/fpsyg.2021.763705]
Abstract
It has not been clarified whether attention influences the perception of targets in visual masking. Three forms of common masks (random pattern, para-/metacontrast, and four dots) were therefore chosen in the present study and presented with character targets in three temporal sequences (forward, backward, and a sandwiched forward-backward combination). To pinpoint the level of processing at which masking arises, character targets were varied in depth of processing, from random arrangements of strokes up to real Chinese characters. The attentional influence was examined under perceptual discrimination and lexical decision tasks, respectively. The results revealed significant interactions among four factors (mask form, temporal sequence, depth of processing, and task). Identification of character targets in each form of mask sequence varied with task demand, with greater suppression in the perceptual discrimination task. These findings suggest that attentional demand can bias processing in favor of task-related information in visual masking. Variations in masking effects may reflect contributions of both attentional demand and spatio-temporal interaction.
Affiliation(s)
- Juan Chen
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
- Deqing Hospital of Hangzhou Normal University, Hangzhou, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Ye Zhang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
- Deqing Hospital of Hangzhou Normal University, Hangzhou, China
- Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
24
Unraveling the Neural Mechanisms Which Encode Rapid Streams of Visual Input. J Neurosci 2022; 42:1170-1172. [PMID: 35173038 DOI: 10.1523/jneurosci.2013-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 12/02/2021] [Accepted: 12/10/2021] [Indexed: 11/21/2022] Open
25
Robinson AK, Rich AN, Woolgar A. Linking the Brain with Behavior: The Neural Dynamics of Success and Failure in Goal-directed Behavior. J Cogn Neurosci 2022; 34:639-654. [DOI: 10.1162/jocn_a_01818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behavior. Here, we used magnetoencephalography to assess the temporal dynamics of information processing and linked neural responses with goal-directed behavior by analyzing how they changed on behavioral error. Participants performed a difficult stimulus–response task using two stimulus–response mapping rules. We used time-resolved multivariate pattern analysis to characterize the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information before a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated toward a representation of the “incorrect” stimulus. This suggests that the patterns recorded at later time points reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behavior.
26
Grootswagers T, Zhou I, Robinson AK, Hebart MN, Carlson TA. Human EEG recordings for 1,854 concepts presented in rapid serial visual presentation streams. Sci Data 2022; 9:3. [PMID: 35013331 PMCID: PMC8748587 DOI: 10.1038/s41597-021-01102-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 11/03/2021] [Indexed: 01/07/2023] Open
Abstract
The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia.
- School of Psychology, The University of Sydney, Sydney, Australia.
- Ivy Zhou
- School of Psychology, The University of Sydney, Sydney, Australia
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas A Carlson
- School of Psychology, The University of Sydney, Sydney, Australia
27
Luo C, Chen W, VanRullen R, Zhang Y, Gaspar CM. Nudging the N170 forward with prior stimulation-Bridging the gap between N170 and recognition potential. Hum Brain Mapp 2021; 43:1214-1230. [PMID: 34786780 PMCID: PMC8837586 DOI: 10.1002/hbm.25716] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 10/26/2021] [Accepted: 10/27/2021] [Indexed: 11/25/2022] Open
Abstract
Evoked response potentials are often divided up into numerous components, each with their own body of literature. But is there less variety than we might suppose? In this study, we nudge one component into looking like another. Both the N170 and recognition potential (RP) are N1 components in response to familiar objects. However, the RP is often measured with a forward mask that ends at stimulus onset whereas the N170 is often measured with no masking at all. This study investigates how inter‐stimulus interval (ISI) may delay and distort the N170 into an RP by manipulating the temporal gap (ISI) between forward mask and target. The results revealed reverse relationships between the ISI on the one hand, and the N170 latency, single‐trial N1 jitter (an approximation of N1 width) and reaction time on the other hand. Importantly, we find that scalp topographies have a unique signature at the N1 peak across all conditions, from the longest gap (N170) to the shortest (RP). These findings prove that the mask‐delayed N1 is still the same N170, even under conditions that are normally associated with a different component like the RP. In general, our results suggest greater synthesis in the study of event related potential components.
Affiliation(s)
- Canhuang Luo
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; CerCo, CNRS UMR 5549, Toulouse, France
- Wei Chen
- Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Rufin VanRullen
- Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; CerCo, CNRS UMR 5549, Toulouse, France
- Ye Zhang
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Carl Michael Gaspar
- Center for Cognition and Brain Disorders, The Affiliated Hospital of Hangzhou Normal University, Hangzhou, China; Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China; Zayed University, Abu Dhabi, United Arab Emirates
28
Retter TL, Jiang F, Webster MA, Michel C, Schiltz C, Rossion B. Varying Stimulus Duration Reveals Consistent Neural Activity and Behavior for Human Face Individuation. Neuroscience 2021; 472:138-156. [PMID: 34333061 DOI: 10.1016/j.neuroscience.2021.07.025] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 07/14/2021] [Accepted: 07/15/2021] [Indexed: 11/27/2022]
Abstract
Establishing consistent relationships between neural activity and behavior is a challenge in human cognitive neuroscience research. We addressed this issue using variable time constraints in an oddball frequency-sweep design for visual discrimination of complex images (face exemplars). Sixteen participants viewed sequences of ascending presentation durations, from 25 to 333 ms (40-3 Hz stimulation rate) while their electroencephalogram (EEG) was recorded. Throughout each sequence, the same unfamiliar face picture was repeated with variable size and luminance changes while different unfamiliar facial identities appeared every 1 s (1 Hz). A neural face individuation response, tagged at 1 Hz and its unique harmonics, emerged over the occipito-temporal cortex at 50 ms stimulus duration (25-100 ms across individuals), with an optimal response reached at 170 ms stimulus duration. In a subsequent experiment, identity changes appeared non-periodically within fixed-frequency sequences while the same participants performed an explicit face individuation task. The behavioral face individuation response also emerged at 50 ms presentation time, and behavioral accuracy correlated with individual participants' neural response amplitude in a weighted middle stimulus duration range (50-125 ms). Moreover, the latency of the neural response peaking between 180 and 200 ms correlated strongly with individuals' behavioral accuracy in this middle duration range, as measured independently. These observations point to the minimal (50 ms) and optimal (170 ms) stimulus durations for human face individuation and provide novel evidence that inter-individual differences in the magnitude and latency of early, high-level neural responses are predictive of behavioral differences in performance at this function.
Affiliation(s)
- Talia L Retter
- Psychological Sciences Research Institute, Institute of Neuroscience, UCLouvain, Belgium; Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA; Department of Behavioural and Cognitive Sciences, Institute of Cognitive Science & Assessment, University of Luxembourg, Luxembourg.
- Fang Jiang
- Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA
- Michael A Webster
- Department of Psychology, Center for Integrative Neuroscience, University of Nevada, Reno, USA
- Caroline Michel
- Psychological Sciences Research Institute, Institute of Neuroscience, UCLouvain, Belgium
- Christine Schiltz
- Department of Behavioural and Cognitive Sciences, Institute of Cognitive Science & Assessment, University of Luxembourg, Luxembourg
- Bruno Rossion
- Psychological Sciences Research Institute, Institute of Neuroscience, UCLouvain, Belgium; Université de Lorraine, CNRS, CRAN, F-54000 Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, F-54000 Nancy, France
29
Popovkina DV, Palmer J, Moore CM, Boynton GM. Is there a serial bottleneck in visual object recognition? J Vis 2021; 21:15. [PMID: 33704373 PMCID: PMC7961120 DOI: 10.1167/jov.21.3.15] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Divided attention has little effect for simple tasks, such as luminance detection, but it has large effects for complex tasks, such as semantic categorization of masked words. Here, we asked whether the semantic categorization of visual objects shows divided attention effects as large as those observed for words, or as small as those observed for simple feature judgments. Using a dual-task paradigm with nameable object stimuli, performance was compared with the predictions of serial and parallel models. At the extreme, parallel processes with unlimited capacity predict no effect of divided attention; alternatively, an all-or-none serial process makes two predictions: a large divided attention effect (lower accuracy for dual-task trials, compared to single-task trials) and a negative response correlation in dual-task trials (a given response is more likely to be incorrect when the response about the other stimulus is correct). These predictions were tested in two experiments examining object judgments. In both experiments, there was a large divided attention effect and a small negative correlation in responses. The magnitude of these effects was larger than for simple features, but smaller than for words. These effects were consistent with serial models, and rule out some but not all parallel models. More broadly, the results help establish one of the first examples of likely serial processing in perception.
Affiliation(s)
- Dina V Popovkina
- Department of Psychology, University of Washington, Seattle, WA, USA
- John Palmer
- Department of Psychology, University of Washington, Seattle, WA, USA
- Cathleen M Moore
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA, USA
30
Dynamics of fMRI patterns reflect sub-second activation sequences and reveal replay in human visual cortex. Nat Commun 2021; 12:1795. [PMID: 33741933 PMCID: PMC7979874 DOI: 10.1038/s41467-021-21970-2] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Accepted: 02/16/2021] [Indexed: 01/31/2023] Open
Abstract
Neural computations are often fast and anatomically localized. Yet, investigating such computations in humans is challenging because non-invasive methods have either high temporal or spatial resolution, but not both. Of particular relevance, fast neural replay is known to occur throughout the brain in a coordinated fashion, yet little is known about this process. We develop a multivariate analysis method for functional magnetic resonance imaging that makes it possible to study sequentially activated neural patterns separated by less than 100 ms with precise spatial resolution. Human participants viewed five images individually and sequentially with speeds up to 32 ms between items. Probabilistic pattern classifiers were trained on activation patterns in visual and ventrotemporal cortex during individual image trials. Applied to sequence trials, probabilistic classifier time courses allow the detection of neural representations and their order. Order detection remains possible at speeds up to 32 ms between items (plus 100 ms per item). The frequency spectrum of the sequentiality metric distinguishes between sub- versus supra-second sequences. Importantly, applied to resting-state data, our method reveals fast replay of task-related stimuli in visual cortex. This indicates that non-hippocampal replay occurs even after tasks without memory requirements and shows that our method can be used to detect such spontaneously occurring replay.
31
Zhang C, Qiu S, Wang S, Wei W, He H. Temporal Dynamics on Decoding Target Stimuli in Rapid Serial Visual Presentation using Magnetoencephalography. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:2954-2958. [PMID: 33018626 DOI: 10.1109/embc44109.2020.9176174] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Rapid serial visual presentation (RSVP) is a highly efficient paradigm in brain-computer interfaces (BCI). Target detection accuracy is the first consideration in RSVP-BCI, but the influence of different frequency bands and time ranges on decoding accuracy is still an open question. Moreover, the underlying neural dynamics of the rapid target detection process remain unclear. Methods: This work focused on the temporal dynamics of the responses triggered by target stimuli in a static RSVP paradigm, using paired structural Magnetic Resonance Imaging (MRI) and magnetoencephalography (MEG) signals in different frequency bands. Multivariate pattern analysis (MVPA) was applied to the MEG signal at different frequency bands and time points after stimulus onset. Cortical neuronal activation estimation was also applied to present the temporal-spatial dynamics on the cortical surface. Results: The MVPA results showed that low-frequency signals (0.1-7 Hz) yielded the highest decoding accuracy, and decoding power reached its peak 0.4 s after target stimulus onset. The cortical neuronal activation method identified the regions activated by target stimuli, such as the bilateral parahippocampal cortex, precentral gyrus, and insula, and their averaged time series were presented.
32
Tovar DA, Murray MM, Wallace MT. Selective Enhancement of Object Representations through Multisensory Integration. J Neurosci 2020; 40:5604-5615. [PMID: 32499378 PMCID: PMC7363464 DOI: 10.1523/jneurosci.2139-19.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 04/17/2020] [Accepted: 05/21/2020] [Indexed: 11/21/2022] Open
Abstract
Objects are the fundamental building blocks of how we create a representation of the external world. One major distinction among objects is between those that are animate versus those that are inanimate. In addition, many objects are specified by more than a single sense, yet the nature by which multisensory objects are represented by the brain remains poorly understood. Using representational similarity analysis of male and female human EEG signals, we show enhanced encoding of audiovisual objects when compared with their corresponding visual and auditory objects. Surprisingly, we discovered that the often-found processing advantages for animate objects were not evident under multisensory conditions. This was due to a greater neural enhancement of inanimate objects, which are more weakly encoded under unisensory conditions. Further analysis showed that the selective enhancement of inanimate audiovisual objects corresponded with an increase in shared representations across brain areas, suggesting that the enhancement was mediated by multisensory integration. Moreover, a distance-to-bound analysis provided critical links between neural findings and behavior. Improvements in neural decoding at the individual exemplar level for audiovisual inanimate objects predicted reaction time differences between multisensory and unisensory presentations during a Go/No-Go animate categorization task. Links between neural activity and behavioral measures were most evident at intervals of 100-200 ms and 350-500 ms after stimulus presentation, corresponding to time periods associated with sensory evidence accumulation and decision-making, respectively. Collectively, these findings provide key insights into a fundamental process the brain uses to maximize the information it captures across sensory systems to perform object recognition. SIGNIFICANCE STATEMENT: Our world is filled with ever-changing sensory information that we are able to seamlessly transform into a coherent and meaningful perceptual experience. We accomplish this feat by combining different stimulus features into objects. However, despite the fact that these features span multiple senses, little is known about how the brain combines the various forms of sensory information into object representations. Here, we used EEG and machine learning to study how the brain processes auditory, visual, and audiovisual objects. Surprisingly, we found that nonliving (i.e., inanimate) objects, which are more difficult to process with one sense alone, benefited the most from engaging multiple senses.
Affiliation(s)
- David A Tovar
- School of Medicine, Vanderbilt University, Nashville, Tennessee 37240
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, Lausanne University Hospital and University of Lausanne (CHUV-UNIL), 1011 Lausanne, Switzerland
- Sensory, Cognitive and Perceptual Neuroscience Section, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, 1015 Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des aveugles and University of Lausanne, 1002 Lausanne, Switzerland
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240
- Mark T Wallace
- School of Medicine, Vanderbilt University, Nashville, Tennessee 37240
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37240
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37240
- Department of Pharmacology, Vanderbilt University, Nashville, Tennessee 37240
33
Li J, Guo B, Cui L, Huang H, Meng M. Dissociated modulations of multivoxel activation patterns in the ventral and dorsal visual pathways by the temporal dynamics of stimuli. Brain Behav 2020; 10:e01673. [PMID: 32496013 PMCID: PMC7375111 DOI: 10.1002/brb3.1673] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Revised: 04/12/2020] [Accepted: 04/30/2020] [Indexed: 01/29/2023] Open
Abstract
INTRODUCTION Previous studies suggested temporal limitations of visual object identification in the ventral pathway. Moreover, multivoxel pattern analyses (MVPA) of fMRI activation have shown reliable encoding of various object categories, including faces and tools, in the ventral pathway. By contrast, the dorsal pathway is involved in reaching a target and grasping a tool, and is quicker in processing the temporal dynamics of stimulus change. However, little is known about how activation patterns in both pathways may change according to the temporal dynamics of stimulus change. METHODS Here, we measured fMRI responses to two consecutive stimuli with varying interstimulus intervals (ISIs), and we compared how the two visual pathways respond to the dynamics of stimuli by using MVPA and information-based searchlight mapping. RESULTS We found that the temporal dynamics of stimuli modulate responses of the two visual pathways in opposite directions. Specifically, slower temporal dynamics (longer ISIs) led to greater activity and better MVPA results in the ventral pathway. However, faster temporal dynamics (shorter ISIs) led to greater activity and better MVPA results in the dorsal pathway. CONCLUSIONS These results are the first to show how the temporal dynamics of stimulus change modulate multivoxel fMRI activation patterns. Such temporal dynamic response functions in different ROIs along the two visual pathways may shed light on the functional relationship and organization of these ROIs.
Affiliation(s)
- Jiaxin Li
- School of Psychology, South China Normal University, Guangzhou, China
- Bingbing Guo
- School of Psychology, South China Normal University, Guangzhou, China
- Lin Cui
- School of Psychology, South China Normal University, Guangzhou, China
- Hong Huang
- School of Psychology, South China Normal University, Guangzhou, China
- Ming Meng
- School of Psychology, South China Normal University, Guangzhou, China
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, China
- Center for Studies of Psychological Application, South China Normal University, Guangzhou, China
- Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
34
Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Untangling featural and conceptual object representations. Neuroimage 2019; 202:116083. [PMID: 31400529 DOI: 10.1016/j.neuroimage.2019.116083] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Revised: 07/29/2019] [Accepted: 08/06/2019] [Indexed: 10/26/2022] Open
Abstract
How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate categories (e.g., chair) and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
Affiliation(s)
- Tijl Grootswagers
- School of Psychology, University of Sydney, Sydney, NSW, Australia; Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia.
- Amanda K Robinson
- School of Psychology, University of Sydney, Sydney, NSW, Australia; Perception in Action Research Centre, Macquarie University, Sydney, NSW, Australia
- Sophia M Shatek
- School of Psychology, University of Sydney, Sydney, NSW, Australia
- Thomas A Carlson
- School of Psychology, University of Sydney, Sydney, NSW, Australia