1. Ossadtchi A, Semenkov I, Zhuravleva A, Kozunov V, Serikov O, Voloshina E. Representational dissimilarity component analysis (ReDisCA). Neuroimage 2024; 301:120868. [PMID: 39343110] [DOI: 10.1016/j.neuroimage.2024.120868]
Abstract
The principle of Representational Similarity Analysis (RSA) posits that neural representations reflect the structure of encoded information, allowing exploration of the spatial and temporal organization of brain information processing. Traditional RSA, when applied to EEG or MEG data, faces challenges in accessing activation time series at the brain source level due to modeling complexities and insufficient geometric/anatomical data. To overcome this, we introduce Representational Dissimilarity Component Analysis (ReDisCA), a method for estimating spatial-temporal components in EEG or MEG responses aligned with a target representational dissimilarity matrix (RDM). ReDisCA yields informative spatial filters and associated topographies, offering insights into the location of "representationally relevant" sources. Applied to evoked response time series, ReDisCA produces temporal source activation profiles with the desired RDM. Importantly, while ReDisCA does not require inverse modeling, its output is consistent with the EEG and MEG observation equation and can be used as an input to rigorous source localization procedures. Demonstrating ReDisCA's efficacy through simulations and comparison with conventional methods, we show superior source localization accuracy and apply the method to real EEG and MEG datasets, revealing physiologically plausible representational structures without inverse modeling. ReDisCA joins the family of inverse-modeling-free methods such as independent component analysis (Makeig, 1995), spatial spectral decomposition (Nikulin, 2011), and source power comodulation (Dähne, 2014), designed for extracting sources with desired properties from EEG or MEG data. Extending its utility beyond EEG and MEG analysis, ReDisCA is likely to find application in fMRI data analysis and in the exploration of representational structures emerging in multilayered artificial neural networks.
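The core comparison this approach rests on — scoring how well the representational geometry of a spatially filtered sensor response matches a target RDM — can be illustrated with a minimal sketch. This is not the published ReDisCA algorithm (which solves for optimal filters rather than scoring a given one); all array names, sizes, and the random data are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition-by-condition dissimilarity vector (correlation distance)."""
    return pdist(patterns, metric="correlation")

def rdm_match(filtered, target_rdm):
    """Spearman correlation between the RDM of filtered responses and a target RDM."""
    return spearmanr(rdm(filtered), target_rdm)[0]

rng = np.random.default_rng(0)
n_cond, n_sensors, n_times = 6, 32, 50
evoked = rng.standard_normal((n_cond, n_sensors, n_times))  # toy evoked responses

# Target RDM: e.g., conditions split into two stimulus categories
labels = np.array([0, 0, 0, 1, 1, 1])
target = pdist(labels[:, None], metric="cityblock")  # 0 within, 1 between category

# Score a candidate spatial filter w by how well the RDM of its
# one-dimensional time-course output matches the target RDM
w = rng.standard_normal(n_sensors)
source_ts = np.einsum("s,cst->ct", w, evoked)  # (conditions, time)
print(rdm_match(source_ts, target))
```

A method in this family would then search over filters `w` to maximize such a match, rather than evaluating a single random one as done here.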
Affiliation(s)
- Alexei Ossadtchi
- Higher School of Economics, Moscow, Russia; LIFT, Life Improvement by Future Technologies Institute, Moscow, Russia; Artificial Intelligence Research Institute, Moscow, Russia.
- Ilia Semenkov
- Higher School of Economics, Moscow, Russia; Artificial Intelligence Research Institute, Moscow, Russia
- Anna Zhuravleva
- Higher School of Economics, Moscow, Russia; Artificial Intelligence Research Institute, Moscow, Russia
- Vladimir Kozunov
- MEG Centre, Moscow State University of Psychology and Education, Russia
- Oleg Serikov
- AI Initiative, King Abdullah University of Science and Technology, Kingdom of Saudi Arabia
- Ekaterina Voloshina
- Higher School of Economics, Moscow, Russia; Artificial Intelligence Research Institute, Moscow, Russia
2. Moerel D, Rich AN, Woolgar A. Selective Attention and Decision-Making Have Separable Neural Bases in Space and Time. J Neurosci 2024; 44:e0224242024. [PMID: 39107058] [PMCID: PMC11411586] [DOI: 10.1523/jneurosci.0224-24.2024]
Abstract
Attention and decision-making processes are fundamental to cognition. However, they are usually experimentally confounded, making it difficult to link neural observations to specific processes. Here we separated the effects of selective attention from the effects of decision-making on brain activity obtained from human participants (both sexes), using a two-stage task in which the attended stimulus and the decision were orthogonal and separated in time. Multivariate pattern analyses of multimodal neuroimaging data revealed the dynamics of perceptual and decision-related information coding through time with magnetoencephalography (MEG), through space with functional magnetic resonance imaging (fMRI), and through their combination (MEG-fMRI fusion). Our MEG results showed an effect of attention before decision-making could begin, and fMRI results showed an attention effect in early visual and frontoparietal regions. Model-based MEG-fMRI fusion suggested that attention boosted stimulus information in the frontoparietal and early visual regions before decision-making was possible. Together, our results suggest that attention affects neural stimulus representations in the frontoparietal regions independent of decision-making.
Affiliation(s)
- Denise Moerel
- School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney 2145, New South Wales, Australia
- Anina N Rich
- School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
- Macquarie University Performance and Expertise Research Centre, Sydney 2109, New South Wales, Australia
- Alexandra Woolgar
- School of Psychological Sciences, Macquarie University, Sydney 2109, New South Wales, Australia
- Perception in Action Research Centre, Macquarie University, Sydney 2109, New South Wales, Australia
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
3. Lifanov-Carr J, Griffiths BJ, Linde-Domingo J, Ferreira CS, Wilson M, Mayhew SD, Charest I, Wimber M. Reconstructing Spatiotemporal Trajectories of Visual Object Memories in the Human Brain. eNeuro 2024; 11:ENEURO.0091-24.2024. [PMID: 39242212] [PMCID: PMC11439564] [DOI: 10.1523/eneuro.0091-24.2024]
Abstract
How the human brain reconstructs, step by step, the core elements of past experiences is still unclear. Here, we map the spatiotemporal trajectories along which visual object memories are reconstructed during associative recall. Specifically, we ask whether retrieval reinstates feature representations in a copy-like but reversed direction with respect to the initial perceptual experience, or whether this reconstruction involves format transformations and regions beyond those engaged in initial perception. Participants from two cohorts studied new associations between verbs and randomly paired object images, and subsequently recalled the objects when presented with the corresponding verb cue. We first analyze multivariate fMRI patterns to map where in the brain high- and low-level object features can be decoded during perception and retrieval, showing that retrieval is dominated by conceptual features represented in comparatively late visual and parietal areas. A separately acquired EEG dataset is then used to track the temporal evolution of the reactivated patterns using similarity-based EEG-fMRI fusion. This fusion suggests that memory reconstruction proceeds from anterior frontotemporal to posterior occipital and parietal regions, in line with a conceptual-to-perceptual gradient but only partly following the same trajectories as during perception. Specifically, a linear regression statistically confirms that the sequential activation of ventral visual stream regions is reversed between image perception and retrieval. The fusion analysis also suggests an information relay to frontoparietal areas late during retrieval. Together, the results shed light on the temporal dynamics of memory recall and the transformations that the information undergoes between the initial experience and its later reconstruction from memory.
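Similarity-based EEG-fMRI fusion of the kind described above is commonly implemented by correlating a time-resolved EEG RDM with a static, per-region fMRI RDM, yielding one fusion time course per region. The sketch below uses random placeholder RDMs; the sizes, variable names, and ROI labels are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_cond = 8                              # number of stimulus conditions
n_pairs = n_cond * (n_cond - 1) // 2    # entries in a vectorized RDM

# Hypothetical inputs: one RDM per EEG time point, one RDM per fMRI ROI
eeg_rdms = rng.random((100, n_pairs))   # 100 time points
fmri_rdms = {"EVC": rng.random(n_pairs),
             "parietal": rng.random(n_pairs)}

# Fusion: for each ROI, a time course of EEG-fMRI RDM correlations;
# its peak suggests when that ROI's representational geometry
# is expressed in the EEG signal
fusion = {roi: np.array([spearmanr(t_rdm, roi_rdm)[0] for t_rdm in eeg_rdms])
          for roi, roi_rdm in fmri_rdms.items()}
print({roi: int(ts.argmax()) for roi, ts in fusion.items()})
```

Comparing the peak latencies across regions is what supports ordering claims such as the anterior-to-posterior progression reported above.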
Affiliation(s)
- Julia Lifanov-Carr
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Benjamin J Griffiths
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Juan Linde-Domingo
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Department of Experimental Psychology, Mind, Brain and Behavior Research Center (CIMCYC), University of Granada, 18011 Granada, Spain
- Center for Adaptive Rationality, Max Planck Institute for Human Development, 14195 Berlin, Germany
- Catarina S Ferreira
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Martin Wilson
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- Stephen D Mayhew
- Institute of Health and Neurodevelopment (IHN), School of Psychology, Aston University, Birmingham B4 7ET, United Kingdom
- Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Quebec H2V 2S9, Canada
- Maria Wimber
- School of Psychology and Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham B15 2TT, United Kingdom
- School of Psychology & Neuroscience and Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, Glasgow G12 8QB, United Kingdom
4. Zhang Y, Zhang W, Wang S, Lin N, Yu Y, He Y, Wang B, Jiang H, Lin P, Xu X, Qi X, Wang Z, Zhang X, Shang D, Liu Q, Cheng KT, Liu M. Semantic memory-based dynamic neural network using memristive ternary CIM and CAM for 2D and 3D vision. Sci Adv 2024; 10:eado1058. [PMID: 39141720] [PMCID: PMC11323881] [DOI: 10.1126/sciadv.ado1058]
Abstract
The brain is dynamic, associative, and efficient. It reconfigures itself by associating inputs with past experiences, with fused memory and processing. In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing. We propose a hardware-software co-design: a semantic memory-based dynamic neural network using memristors. The network associates incoming data with past experience stored as semantic vectors. The network and the semantic memory are physically implemented on noise-robust ternary memristor-based computing-in-memory (CIM) and content-addressable memory (CAM) circuits, respectively. We validate our co-design, using a 40-nm memristor macro, on ResNet and PointNet++ for classifying images and three-dimensional points from the MNIST and ModelNet datasets, achieving not only accuracy on par with software but also 48.1% and 15.9% reductions in computational budget, together with 77.6% and 93.3% reductions in energy consumption.
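The role a CAM plays in such a design — nearest-match retrieval of a stored ternary semantic vector for an incoming feature vector — can be sketched in software. This is a toy illustration under assumed vector sizes, not the paper's circuit or its actual ternarization scheme:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical semantic memory: one stored ternary vector (-1, 0, +1) per class
n_classes, dim = 10, 64
memory = rng.integers(-1, 2, size=(n_classes, dim))

def cam_lookup(query):
    """Nearest-match search, the role a CAM performs in hardware:
    return the class whose stored ternary vector best matches the query."""
    scores = memory @ np.sign(query)  # ternarized dot-product similarity
    return int(scores.argmax())

# A noisy view of class 3's semantic vector should still retrieve class 3
query = memory[3] + 0.1 * rng.standard_normal(dim)
print(cam_lookup(query))
```

In the actual system the associative lookup happens in analog CAM circuitry in a single step; the dot-product scan here only mimics its input-output behavior.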
Affiliation(s)
- Yue Zhang
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Woyu Zhang
- Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China
- Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Shaocong Wang
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Ning Lin
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Yifei Yu
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Yangu He
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Bo Wang
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Hao Jiang
- State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Peng Lin
- College of Computer Science and Technology, Zhejiang University, Zhejiang 310027, China
- Xiaoxin Xu
- Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China
- Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China
- Xiaojuan Qi
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- Zhongrui Wang
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Institute of the Mind, the University of Hong Kong, Hong Kong, China
- Xumeng Zhang
- State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Dashan Shang
- Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China
- Key Lab of Fabrication Technologies for Integrated Circuits, Chinese Academy of Sciences, Beijing 100049, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qi Liu
- Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China
- State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
- Kwang-Ting Cheng
- ACCESS - AI Chip Center for Emerging Smart Systems, InnoHK Centers, Hong Kong Science Park, Hong Kong, China
- Department of Electronic and Computer Engineering, the Hong Kong University of Science and Technology, Hong Kong, China
- Ming Liu
- Key Laboratory of Microelectronic Devices and Integrated Technology, Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100049, China
- State Key Laboratory of Integrated Chips and Systems, Frontier Institute of Chip and System, Fudan University, Shanghai 200433, China
5. Liu S, He W, Zhang M, Li Y, Ren J, Guan Y, Fan C, Li S, Gu R, Luo W. Emotional concepts shape the perceptual representation of body expressions. Hum Brain Mapp 2024; 45:e26789. [PMID: 39185719] [PMCID: PMC11345699] [DOI: 10.1002/hbm.26789]
Abstract
Emotion perception interacts with how we think and speak, including our concepts of emotions. Body expression is an important channel of emotion communication, but it is unknown whether and how its perception is modulated by conceptual knowledge. In this study, we employed representational similarity analysis and conducted three experiments combining semantic similarity judgments, a mouse-tracking task, and a one-back behavioral task with electroencephalography and functional magnetic resonance imaging, the results of which show that conceptual knowledge predicted the perceptual representation of body expressions. This prediction effect occurred at approximately 170 ms post-stimulus. The neural encoding of body expressions in the fusiform gyrus and lingual gyrus was impacted by emotion concept knowledge. Taken together, our results indicate that conceptual knowledge of emotion categories shapes the configural representation of body expressions in the ventral visual cortex, offering compelling evidence for the theory of constructed emotion.
Affiliation(s)
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Weiqi He
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jie Ren
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Yuanhao Guan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Cong Fan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
- Ruolei Gu
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, China
6. Lahner B, Dwivedi K, Iamshchinina P, Graumann M, Lascelles A, Roig G, Gifford AT, Pan B, Jin S, Ratan Murty NA, Kay K, Oliva A, Cichy R. Modeling short visual events through the BOLD moments video fMRI dataset and metadata. Nat Commun 2024; 15:6241. [PMID: 39048577] [PMCID: PMC11269733] [DOI: 10.1038/s41467-024-50310-3]
Abstract
Studying the neural basis of human dynamic visual perception requires extensive experimental data to evaluate the large swathes of functionally diverse brain networks driven by perceiving visual events. Here, we introduce the BOLD Moments Dataset (BMD), a repository of whole-brain fMRI responses to over 1000 short (3 s) naturalistic video clips of visual events across ten human subjects. We use the videos' extensive metadata to show how the brain represents word- and sentence-level descriptions of visual events, and we identify correlates of video memorability scores extending into the parietal cortex. Furthermore, we reveal a match in hierarchical processing between cortical regions of interest and video-computable deep neural networks, and we showcase that BMD successfully captures temporal dynamics of visual events at second resolution. With its rich metadata, BMD offers new perspectives and accelerates research on the human brain basis of visual event perception.
Affiliation(s)
- Benjamin Lahner
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA.
- Kshitij Dwivedi
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Department of Computer Science, Goethe University Frankfurt, Frankfurt am Main, Germany
- Polina Iamshchinina
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Monika Graumann
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Alex Lascelles
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Gemma Roig
- Department of Computer Science, Goethe University Frankfurt, Frankfurt am Main, Germany
- The Hessian Center for AI (hessian.AI), Darmstadt, Germany
- Bowen Pan
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- SouYoung Jin
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- N Apurva Ratan Murty
- Department of Brain and Cognitive Science, MIT, Cambridge, MA, USA
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Kendrick Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Radoslaw Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
7. Liu X, Wei S, Zhao X, Bi Y, Hu L. Establishing the relationship between subjective perception and neural responses: Insights from correlation analysis and representational similarity analysis. Neuroimage 2024; 295:120650. [PMID: 38768740] [DOI: 10.1016/j.neuroimage.2024.120650]
Abstract
Exploring the relationship between sensory perception and brain responses holds important theoretical and clinical implications. However, commonly used methodologies like correlation analysis, performed either intra- or inter-individually, often yield inconsistent results across studies, limiting their generalizability. Representational similarity analysis (RSA), a method that assesses the perception-response relationship by calculating the correlation between behavioral and neural patterns, may offer a fresh perspective and reveal novel findings. Here, we delivered a series of graded sensory stimuli of four modalities (i.e., nociceptive somatosensory, non-nociceptive somatosensory, visual, and auditory) to or near the left or right hand of 107 healthy subjects and collected their single-trial perceptual ratings and electroencephalographic (EEG) responses. We examined the relationship between sensory perception and brain responses using within- and between-subject correlation analysis and RSA, and assessed their stability across different numbers of subjects and trials. We found that within-subject and between-subject correlations yielded distinct results: within-subject correlation revealed strong and reliable correlations between perceptual ratings and most brain responses, whereas between-subject correlation showed weak correlations that were vulnerable to changes in subject number. In addition to verifying the correlation results, RSA revealed some novel findings: correlations between behavioral and neural patterns were observed in some additional neural responses, such as the γ-ERS in the visual modality. RSA results were sensitive to the trial number but not to the subject number, suggesting that consistent results can be obtained in studies with relatively small sample sizes. In conclusion, our study provides a novel perspective on establishing the relationship between behavior and brain activity, emphasizing that RSA holds promise as a method for exploring this pattern relationship in future research.
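The three analysis families contrasted above can be sketched side by side on toy single-trial data. The data-generating model, array sizes, and the choice of one EEG feature per trial are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_subj, n_trials = 20, 40

# Hypothetical single-trial data: perceptual ratings and one EEG response
# feature (e.g., an ERP amplitude) per trial and subject, with added noise
ratings = rng.random((n_subj, n_trials))
eeg = ratings + 0.5 * rng.standard_normal((n_subj, n_trials))

# Within-subject correlation: across trials, then averaged over subjects
within = np.mean([np.corrcoef(ratings[s], eeg[s])[0, 1] for s in range(n_subj)])

# Between-subject correlation: across subjects' trial-averaged values
between = np.corrcoef(ratings.mean(1), eeg.mean(1))[0, 1]

# RSA: correlate the pattern of pairwise trial dissimilarities in behavior
# with the corresponding dissimilarity pattern in the neural response
rsa = spearmanr(pdist(ratings[0][:, None]), pdist(eeg[0][:, None]))[0]
print(within, between, rsa)
```

Note how the between-subject estimate rests on only `n_subj` averaged points, which is one intuition for the instability across subject numbers reported above.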
Affiliation(s)
- Xu Liu
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, 100101, China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China
- Shiyu Wei
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiangyue Zhao
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Yanzhi Bi
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Li Hu
- CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
8. Belyaeva I, Gabrielson B, Wang YP, Wilson TW, Calhoun VD, Stephen JM, Adali T. Learning Spatiotemporal Brain Dynamics in Adolescents via Multimodal MEG and fMRI Data Fusion Using Joint Tensor/Matrix Decomposition. IEEE Trans Biomed Eng 2024; 71:2189-2200. [PMID: 38345949] [PMCID: PMC11240882] [DOI: 10.1109/tbme.2024.3364704]
Abstract
OBJECTIVE: Brain function is understood to be regulated by complex spatiotemporal dynamics and can be characterized by a combination of observed brain response patterns in time and space. Magnetoencephalography (MEG), with its high temporal resolution, and functional magnetic resonance imaging (fMRI), with its high spatial resolution, are complementary imaging techniques with great potential to reveal information about spatiotemporal brain dynamics, especially when the two data types are allowed to fully interact. METHODS: We employed coupled tensor/matrix factorization (CMTF) to extract joint latent components in the form of unique spatiotemporal brain patterns that can be used to study brain development and function on a millisecond scale. RESULTS: Using the CMTF model, we extracted distinct brain patterns that revealed fine-grained spatiotemporal brain dynamics and typical sensory processing pathways informative of high-level cognitive functions in healthy adolescents. The components extracted from multimodal tensor fusion possessed better discriminative ability between high- and low-performance subjects than single-modality data-driven models. CONCLUSION: Multimodal tensor fusion successfully identified spatiotemporal brain dynamics of brain function and produced unique components with high discriminatory power. SIGNIFICANCE: The CMTF model is a promising tool for high-order, multimodal data fusion that exploits the temporal resolution of MEG and the spatial resolution of fMRI, and provides a comprehensive picture of the developing brain in time and space.
9. Koenig-Robert R, Quek GL, Grootswagers T, Varlet M. Movement trajectories as a window into the dynamics of emerging neural representations. Sci Rep 2024; 14:11499. [PMID: 38769313] [PMCID: PMC11106280] [DOI: 10.1038/s41598-024-62135-7]
Abstract
The rapid transformation of sensory inputs into meaningful neural representations is critical to adaptive human behaviour. While non-invasive neuroimaging methods are the de facto standard for investigating neural representations, they remain expensive, not widely available, time-consuming, and restrictive. Here we show that movement trajectories can be used to measure emerging neural representations with fine temporal resolution. By combining online computer mouse-tracking and publicly available neuroimaging data via representational similarity analysis (RSA), we show that movement trajectories track the unfolding of stimulus- and category-wise neural representations along key dimensions of the human visual system. We demonstrate that time-resolved representational structures derived from movement trajectories overlap with those derived from M/EEG (albeit delayed) and those derived from fMRI in functionally relevant brain areas. Our findings highlight the richness of movement trajectories and the power of the RSA framework to reveal and compare their information content, opening new avenues to better understand human perception.
Affiliation(s)
- Roger Koenig-Robert
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Psychology, University of New South Wales, Sydney, NSW, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW, 2751, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, 2751, Australia
- School of Psychology, Western Sydney University, Sydney, NSW, 2751, Australia
10. Lee Masson H, Chen J, Isik L. A shared neural code for perceiving and remembering social interactions in the human superior temporal sulcus. Neuropsychologia 2024; 196:108823. [PMID: 38346576] [DOI: 10.1016/j.neuropsychologia.2024.108823]
Abstract
Recognizing and remembering social information is a crucial cognitive skill. Neural patterns in the superior temporal sulcus (STS) support our ability to perceive others' social interactions. However, despite the prominence of social interactions in memory, the neural basis of remembering social interactions is still unknown. To fill this gap, we investigated the brain mechanisms underlying memory of others' social interactions during free spoken recall of a naturalistic movie. By applying machine learning-based fMRI encoding analyses to densely labeled movie and recall data, we found that a subset of the STS activity evoked by viewing social interactions predicted neural responses not only in held-out movie data, but also during memory recall. These results provide the first evidence that activity in the STS is reinstated in response to specific social content and that its reactivation underlies our ability to remember others' interactions. These findings further suggest that the STS contains representations of social interactions that are not only perceptually driven, but also more abstract or conceptual in nature.
Affiliation(s)
- Haemy Lee Masson
- Department of Psychology, Durham University, Durham, DH1 3LE, United Kingdom; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States.
- Janice Chen
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, United States
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States.
11
Lahner B, Mohsenzadeh Y, Mullin C, Oliva A. Visual perception of highly memorable images is mediated by a distributed network of ventral visual regions that enable a late memorability response. PLoS Biol 2024; 22:e3002564. [PMID: 38557761 PMCID: PMC10984539 DOI: 10.1371/journal.pbio.3002564]
Abstract
Behavioral and neuroscience studies in humans and primates have shown that memorability is an intrinsic property of an image that predicts its strength of encoding into and retrieval from memory. While previous work has independently probed when or where this memorability effect may occur in the human brain, a description of its spatiotemporal dynamics is missing. Here, we used representational similarity analysis (RSA) to combine functional magnetic resonance imaging (fMRI) with source-estimated magnetoencephalography (MEG) to simultaneously measure when and where the human cortex is sensitive to differences in image memorability. Results reveal that visual perception of High Memorable images, compared to Low Memorable images, recruits a set of regions of interest (ROIs) distributed throughout the ventral visual cortex: a late memorability response (from around 300 ms) in early visual cortex (EVC), inferior temporal cortex, lateral occipital cortex, fusiform gyrus, and banks of the superior temporal sulcus. Image memorability magnitude results are represented after high-level feature processing in visual regions and reflected in classical memory regions in the medial temporal lobe (MTL). Our results present, to our knowledge, the first unified spatiotemporal account of visual memorability effect across the human cortex, further supporting the levels-of-processing theory of perception and memory.
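The MEG-fMRI fusion described in this abstract rests on representational similarity analysis: build one representational dissimilarity matrix (RDM) per MEG time point, one RDM per fMRI region, and rank-correlate them to get a fusion time course. A minimal NumPy/SciPy sketch of that logic (not the authors' code; the data arrays here are random placeholders):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_sensors, n_times, n_voxels = 20, 30, 50, 100

meg = rng.standard_normal((n_images, n_sensors, n_times))  # image x sensor x time
fmri_roi = rng.standard_normal((n_images, n_voxels))       # image x voxel (one ROI)

# One RDM per MEG time point: pairwise correlation distances between image patterns.
meg_rdms = np.stack([pdist(meg[:, :, t], metric="correlation") for t in range(n_times)])
fmri_rdm = pdist(fmri_roi, metric="correlation")

# Fusion time course: rank-correlate each MEG RDM with the ROI's fMRI RDM.
fusion = np.array([spearmanr(meg_rdms[t], fmri_rdm)[0] for t in range(n_times)])
```

A time point where `fusion` peaks is read as the latency at which that region's representational geometry is expressed in the MEG signal.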
Affiliation(s)
- Benjamin Lahner
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Yalda Mohsenzadeh
- The Brain and Mind Institute, The University of Western Ontario, London, Canada
- Department of Computer Science, The University of Western Ontario, London, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Caitlin Mullin
- Vision: Science to Application (VISTA), York University, Toronto, Ontario, Canada
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
12
Noda T, Aschauer DF, Chambers AR, Seiler JPH, Rumpel S. Representational maps in the brain: concepts, approaches, and applications. Front Cell Neurosci 2024; 18:1366200. [PMID: 38584779 PMCID: PMC10995314 DOI: 10.3389/fncel.2024.1366200]
Abstract
Neural systems have evolved to process sensory stimuli in a way that allows for efficient and adaptive behavior in a complex environment. Recent technological advances enable us to investigate sensory processing in animal models by simultaneously recording the activity of large populations of neurons with single-cell resolution, yielding high-dimensional datasets. In this review, we discuss concepts and approaches for assessing the population-level representation of sensory stimuli in the form of a representational map. In such a map, not only are the identities of stimuli distinctly represented, but their relational similarity is also mapped onto the space of neuronal activity. We highlight example studies in which the structure of representational maps in the brain is estimated from recordings in humans as well as animals, and compare their methodological approaches. Finally, we integrate these aspects and provide an outlook for how the concept of representational maps could be applied to various fields in basic and clinical neuroscience.
Affiliation(s)
- Takahiro Noda
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Dominik F. Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Anna R. Chambers
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
- Johannes P.-H. Seiler
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
- Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
13
Walbrin J, Downing PE, Sotero FD, Almeida J. Characterizing the discriminability of visual categorical information in strongly connected voxels. Neuropsychologia 2024; 195:108815. [PMID: 38311112 DOI: 10.1016/j.neuropsychologia.2024.108815]
Abstract
Functional brain responses are strongly influenced by connectivity. Recently, we demonstrated a major example of this: category discriminability within occipitotemporal cortex (OTC) is enhanced for voxel sets that share strong functional connectivity to distal brain areas, relative to those that share lesser connectivity. That is, within OTC regions, sets of 'most-connected' voxels show improved multivoxel pattern discriminability for tool-, face-, and place stimuli relative to voxels with weaker connectivity to the wider brain. However, understanding whether these effects generalize to other domains (e.g. body perception network), and across different levels of the visual processing streams (e.g. dorsal as well as ventral stream areas) is an important extension of this work. Here, we show that this so-called connectivity-guided decoding (CGD) effect broadly generalizes across a wide range of categories (tools, faces, bodies, hands, places). This effect is robust across dorsal stream areas, but less consistent in earlier ventral stream areas. In the latter regions, category discriminability is generally very high, suggesting that extraction of category-relevant visual properties is less reliant on connectivity to downstream areas. Further, CGD effects are primarily expressed in a category-specific manner: For example, within the network of tool regions, discriminability of tool information is greater than non-tool information. The connectivity-guided decoding approach shown here provides a novel demonstration of the crucial relationship between wider brain connectivity and complex local-level functional responses at different levels of the visual processing streams. Further, this approach generates testable new hypotheses about the relationships between connectivity and local selectivity.
Affiliation(s)
- Jon Walbrin
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.
- Paul E Downing
- School of Human and Behavioural Sciences, Bangor University, Bangor, Wales
- Filipa Dourado Sotero
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
14
Nitsch A, Garvert MM, Bellmund JLS, Schuck NW, Doeller CF. Grid-like entorhinal representation of an abstract value space during prospective decision making. Nat Commun 2024; 15:1198. [PMID: 38336756 PMCID: PMC10858181 DOI: 10.1038/s41467-024-45127-z]
Abstract
How valuable a choice option is often changes over time, making the prediction of value changes an important challenge for decision making. Prior studies identified a cognitive map in the hippocampal-entorhinal system that encodes relationships between states and enables prediction of future states, but does not inherently convey value during prospective decision making. In this fMRI study, participants predicted changing values of choice options in a sequence, forming a trajectory through an abstract two-dimensional value space. During this task, the entorhinal cortex exhibited a grid-like representation with an orientation aligned to the axis through the value space most informative for choices. A network of brain regions, including ventromedial prefrontal cortex, tracked the prospective value difference between options. These findings suggest that the entorhinal grid system supports the prediction of future values by representing a cognitive map, which might be used to generate lower-dimensional value signals to guide prospective decision making.
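Grid-like (hexadirectional) fMRI analyses of the kind this abstract describes are typically implemented by regressing the signal onto cos(6θ) and sin(6θ) of the trajectory direction θ, then recovering the grid orientation from the two betas. A toy sketch of that estimator on synthetic data (illustrative only, not the authors' pipeline; the planted orientation and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
theta = rng.uniform(0, 2 * np.pi, n)  # trajectory directions through the value space
phi = 0.5                             # "true" grid orientation (planted)

# Hexadirectional signal: sixfold modulation aligned to phi, plus noise.
y = np.cos(6 * (theta - phi)) + 0.3 * rng.standard_normal(n)

# Regress onto cos/sin of 6*theta; the beta pair encodes the orientation,
# since cos(6(theta - phi)) = cos(6 phi) cos(6 theta) + sin(6 phi) sin(6 theta).
X = np.c_[np.cos(6 * theta), np.sin(6 * theta)]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
phi_hat = np.arctan2(beta[1], beta[0]) / 6
```

In the real analysis, orientation is estimated on one half of the data and the sixfold alignment is then tested on the held-out half.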
Affiliation(s)
- Alexander Nitsch
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Mona M Garvert
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, Berlin, Germany
- Faculty of Human Sciences, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
- Jacob L S Bellmund
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Nicolas W Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, Berlin, Germany
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Christian F Doeller
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer's Disease, Norwegian University of Science and Technology, Trondheim, Norway.
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany.
- Department of Psychology, Technical University Dresden, Dresden, Germany.
15
Bánki A, Köster M, Cichy RM, Hoehl S. Communicative signals during joint attention promote neural processes of infants and caregivers. Dev Cogn Neurosci 2024; 65:101321. [PMID: 38061133 PMCID: PMC10754706 DOI: 10.1016/j.dcn.2023.101321]
Abstract
Communicative signals such as eye contact increase infants' brain activation to visual stimuli and promote joint attention. Our study assessed whether communicative signals during joint attention enhance infant-caregiver dyads' neural responses to objects, and their neural synchrony. To track mutual attention processes, we applied rhythmic visual stimulation (RVS), presenting images of objects to 12-month-old infants and their mothers (n = 37 dyads), while we recorded dyads' brain activity (i.e., steady-state visual evoked potentials, SSVEPs) with electroencephalography (EEG) hyperscanning. Within dyads, mothers either communicatively showed the images to their infant or watched the images without communicative engagement. Communicative cues increased infants' and mothers' SSVEPs at central-occipital-parietal and central electrode sites, respectively. Infants showed significantly more gaze behaviour towards images during communicative engagement. Dyadic neural synchrony (SSVEP amplitude envelope correlations, AECs) was not modulated by communicative cues. Taken together, maternal communicative cues in joint attention increase infants' neural responses to objects and shape mothers' own attention processes. We show that communicative cues enhance cortical visual processing and thus play an essential role in social learning. Future studies need to elucidate the effect of communicative cues on neural synchrony during joint attention. Finally, our study introduces RVS to study infant-caregiver neural dynamics in social contexts.
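The dyadic synchrony measure named here, SSVEP amplitude envelope correlation (AEC), correlates the slow amplitude envelopes of the two partners' steady-state responses. A self-contained toy sketch with synthetic signals (the 6 Hz carrier, noise level, and shared slow drive are illustrative assumptions, not the study's parameters):

```python
import numpy as np
from scipy.signal import hilbert

fs, f_stim, dur = 250, 6.0, 10.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(3)

# Toy SSVEPs for infant and mother: steady-state responses at the stimulation
# frequency whose amplitudes are modulated by a shared slow drive.
drive = 1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)
infant = drive * np.sin(2 * np.pi * f_stim * t) + 0.1 * rng.standard_normal(t.size)
mother = drive * np.sin(2 * np.pi * f_stim * t + 0.7) + 0.1 * rng.standard_normal(t.size)

# AEC: correlate the Hilbert amplitude envelopes of the two signals.
env_i = np.abs(hilbert(infant))
env_m = np.abs(hilbert(mother))
aec = np.corrcoef(env_i, env_m)[0, 1]
```

Because AEC operates on envelopes rather than raw phases, it captures co-modulation of response strength even when the two signals are phase-shifted, as in the example above.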
Affiliation(s)
- Anna Bánki
- University of Vienna, Faculty of Psychology, Vienna, Austria.
- Moritz Köster
- University of Regensburg, Institute for Psychology, Regensburg, Germany; Freie Universität Berlin, Faculty of Education and Psychology, Berlin, Germany
- Stefanie Hoehl
- University of Vienna, Faculty of Psychology, Vienna, Austria
16
Vidaurre D. A generative model of electrophysiological brain responses to stimulation. eLife 2024; 12:RP87729. [PMID: 38231034 PMCID: PMC10945576 DOI: 10.7554/elife.87729]
Abstract
Each brain response to a stimulus is, to a large extent, unique. Despite this variability, however, our perceptual experience feels stable. Standard decoding models, which utilise information across several areas to tap into stimulus representation and processing, are fundamentally based on averages. Therefore, they can focus precisely on the features that are most stable across stimulus presentations. But exactly which features these are is difficult to address in the absence of a generative model of the signal. Here, I introduce genephys, a generative model of brain responses to stimulation, publicly available as a Python package, that, when confronted with a decoding algorithm, can reproduce the structured patterns of decoding accuracy that we observe in real data. Using this approach, I characterise how these patterns may be brought about by the different aspects of the signal, which in turn may translate into distinct putative neural mechanisms. In particular, the model shows that the features in the data that support successful decoding (and, therefore, likely reflect stable mechanisms of stimulus representation) have an oscillatory component that spans multiple channels, frequencies, and latencies of response; and an additive, slower response with a specific (cross-frequency) relation to the phase of the oscillatory component. At the individual trial level, still, responses are found to be highly variable, which can be due to various factors including phase noise and probabilistic activations.
Affiliation(s)
- Diego Vidaurre
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Department of Psychiatry, Oxford University, Oxford, United Kingdom
17
Ozawa Y, Yoshimura N. Temporal Electroencephalography Traits Dissociating Tactile Information and Cross-Modal Congruence Effects. Sensors (Basel) 2023; 24:45. [PMID: 38202907 PMCID: PMC10780639 DOI: 10.3390/s24010045]
Abstract
To explore whether temporal electroencephalography (EEG) traits can dissociate the physical properties of touching objects and the congruence effects of cross-modal stimuli, we applied a machine learning approach to two major temporal domain EEG traits, event-related potential (ERP) and somatosensory evoked potential (SEP), for each anatomical brain region. During a task in which participants had to identify one of two material surfaces as a tactile stimulus, a photo image that matched ('congruent') or mismatched ('incongruent') the material they were touching was given as a visual stimulus. Electrical stimulation was applied to the median nerve of the right wrist to evoke SEP while the participants touched the material. The classification accuracies using ERP extracted in reference to the tactile/visual stimulus onsets were significantly higher than chance levels in several regions in both congruent and incongruent conditions, whereas SEP extracted in reference to the electrical stimulus onsets resulted in no significant classification accuracies. Further analysis based on current source signals estimated using EEG revealed brain regions showing significant accuracy across conditions, suggesting that tactile-based object recognition information is encoded in the temporal domain EEG trait and broader brain regions, including the premotor, parietal, and somatosensory areas.
Affiliation(s)
- Yusuke Ozawa
- School of Engineering, Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Natsue Yoshimura
- School of Computing, Tokyo Institute of Technology, Yokohama 226-8503, Japan
18
Amaral L, Besson G, Caparelli-Dáquer E, Bergström F, Almeida J. Temporal differences and commonalities between hand and tool neural processing. Sci Rep 2023; 13:22270. [PMID: 38097608 PMCID: PMC10721913 DOI: 10.1038/s41598-023-48180-8]
Abstract
Object recognition is a complex cognitive process that relies on how the brain organizes object-related information. While spatial principles have been extensively studied, the less-studied temporal dynamics may also offer valuable insights into this process, particularly when neural processing overlaps for different categories, as is the case for hands and tools. Here we focus on the differences and/or similarities between the time-courses of hand and tool processing under electroencephalography (EEG). Using multivariate pattern analysis, we compared, for different time points, classification accuracy for images of hands or tools against images of animals. We show that for particular time intervals (~136-156 ms and ~252-328 ms), classification accuracy for hands and for tools differs. Furthermore, we show that classifiers trained to differentiate between tools and animals generalize their learning to classification of hand stimuli between ~260-320 ms and ~376-500 ms after stimulus onset. Classifiers trained to distinguish between hands and animals, on the other hand, were able to extend their learning to the classification of tools at ~150 ms. These findings suggest variations in semantic features and domain-specific differences between the two categories, with later-stage similarities potentially related to shared action processing for hands and tools.
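The time-resolved cross-decoding procedure described above (train a classifier at each time point on one category contrast, test it on another) can be sketched with scikit-learn on synthetic data; array shapes, the planted effect, and the classifier choice are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 40, 16, 25

def trials(shift):
    # Toy EEG epochs (trials x channels x time) with a class signal on channel 0.
    x = rng.standard_normal((n_trials, n_channels, n_times))
    x[:, 0, :] += shift
    return x

tools, hands = trials(1.0), trials(1.0)
animals_train, animals_test = trials(-1.0), trials(-1.0)

X_train = np.concatenate([tools, animals_train])  # train contrast: tools vs animals
X_test = np.concatenate([hands, animals_test])    # test contrast: hands vs animals
y = np.r_[np.ones(n_trials), np.zeros(n_trials)]

# One classifier per time point; above-chance accuracy at time t indicates that
# the tool/animal boundary learned at t generalizes to hands at that latency.
acc = np.array([
    LogisticRegression(max_iter=1000).fit(X_train[:, :, t], y).score(X_test[:, :, t], y)
    for t in range(n_times)
])
```

In practice the accuracy time course is compared against chance with cluster-based permutation statistics to identify the significant latency windows reported in the abstract.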
Affiliation(s)
- L Amaral
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA.
- G Besson
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- E Caparelli-Dáquer
- Laboratory of Electrical Stimulation of the Nervous System (LabEEL), Rio de Janeiro State University, Rio de Janeiro, Brazil
- F Bergström
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Department of Psychology, University of Gothenburg, Gothenburg, Sweden
- J Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
- CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal.
19
Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. [PMID: 37926217 DOI: 10.1016/j.neuroimage.2023.120442]
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotions (happiness, sadness, anger, disgust, fear, surprise, and neutral). Analysis of the time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) to perform similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of the information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
20
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254 DOI: 10.1146/annurev-vision-100120-025301]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
21
Xu P, Wang M, Zhang T, Zhang J, Jin Z, Li L. The role of middle frontal gyrus in working memory retrieval by the effect of target detection tasks: a simultaneous EEG-fMRI study. Brain Struct Funct 2023:10.1007/s00429-023-02687-y. [PMID: 37477712 DOI: 10.1007/s00429-023-02687-y]
Abstract
Maintained working memory (WM) representations have been shown to influence visual target detection, while the effect of the visual target detection process on WM retrieval remains largely unknown. In the current research, we used a dual paradigm combining a visual target detection task with a delayed matching task (DMT), comprising four conditions: the match condition, in which the DMT target contained the detection target; the mismatch condition, in which the DMT target contained the detection distractor; the neutral condition, in which only the detection target was presented; and the catch condition, in which only the DMT target was presented. Twenty-six subjects were recruited, and simultaneous EEG-fMRI data were collected. Behaviorally, faster responses were found in the mismatch condition than in the match and neutral conditions. The EEG data showed a greater parieto-occipital N1 component in the mismatch condition compared to the neutral condition, and a greater frontal N2 component in the match condition than in the mismatch condition. Moreover, compared to the match and neutral conditions, weaker activations of the bilateral middle frontal gyrus (MFG) were observed in the mismatch condition. The representational similarity analysis (RSA) revealed significant differences in the representational patterns of the bilateral MFG between the mismatch and match conditions, as well as in the representational patterns of the left MFG between the mismatch and neutral conditions. Additionally, the left MFG may be the brain source of the N1 component in the mismatch condition. These findings suggest that a mismatch between the DMT target and the detection target affects early attention allocation and attentional control in WM retrieval, and that the MFG may play an important role in WM retrieval under the influence of the target detection task. In conclusion, our work deepens the understanding of the neural mechanisms by which visual target detection affects WM retrieval.
Affiliation(s)
- Ping Xu
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Min Wang
- Bioinformatics and BioMedical Bigdata Mining Laboratory, School of Big Health, Guizhou Medical University, Guiyang, China
- Tingting Zhang
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Junjun Zhang
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhenlan Jin
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ling Li
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
22
Broday-Dvir R, Norman Y, Harel M, Mehta AD, Malach R. Perceptual stability reflected in neuronal pattern similarities in human visual cortex. Cell Rep 2023; 42:112614. [PMID: 37285270 DOI: 10.1016/j.celrep.2023.112614]
Abstract
The magnitude of neuronal activation is commonly considered a critical factor for conscious perception of visual content. However, this dogma contrasts with the phenomenon of rapid adaptation, in which the magnitude of neuronal activation drops dramatically in a rapid manner while the visual stimulus and the conscious experience it elicits remain stable. Here, we report that the profiles of multi-site activation patterns and their relational geometry-i.e., the similarity distances between activation patterns, as revealed using intracranial electroencephalographic (iEEG) recordings-are sustained during extended visual stimulation despite the major magnitude decrease. These results are compatible with the hypothesis that conscious perceptual content is associated with the neuronal pattern profiles and their similarity distances, rather than the overall activation magnitude, in human visual cortex.
Affiliation(s)
- Rotem Broday-Dvir: Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Yitzhak Norman: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA
- Michal Harel: Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Ashesh D Mehta: Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, and the Feinstein Institute for Medical Research, Manhasset, NY 11030, USA
- Rafael Malach: Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel.
23
Sanchez-Garcia M, Chauhan T, Cottereau BR, Beyeler M. Efficient multi-scale representation of visual objects using a biologically plausible spike-latency code and winner-take-all inhibition. BIOLOGICAL CYBERNETICS 2023; 117:95-111. [PMID: 37004546 DOI: 10.1007/s00422-023-00956-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 02/10/2023] [Indexed: 05/05/2023]
Abstract
Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require a large amount of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and biological plausibility of object recognition systems. Here we present a SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels, before they were fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent-plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as little as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
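The spike-latency code described above can be sketched in a few lines. This is an illustrative simplification (a linear intensity-to-latency mapping and a single hard winner-take-all step), not the authors' actual SNN model:

```python
import numpy as np

def latency_code(intensities, t_max=50.0):
    """Map input intensities to first-spike latencies (ms): stronger
    input fires earlier. A linear mapping is assumed for illustration."""
    x = np.asarray(intensities, dtype=float)
    return t_max * (1.0 - x / x.max())

def winner_take_all(latencies):
    """Winner-take-all inhibition: the earliest-spiking neuron wins and
    the remaining neurons' spikes are suppressed (set to infinity)."""
    winner = int(np.argmin(latencies))
    inhibited = np.full_like(latencies, np.inf)
    inhibited[winner] = latencies[winner]
    return winner, inhibited

intensities = np.array([0.2, 0.9, 0.5])
lat = latency_code(intensities)          # strongest input spikes first
winner, spikes = winner_take_all(lat)
```

The key property, as in the paper, is that information is carried by *when* a neuron fires rather than by its firing rate, which is what makes representations this sparse (a handful of spikes per neuron) possible.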
Affiliation(s)
- Tushar Chauhan: The Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, USA; CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France
- Benoit R Cottereau: CerCo CNRS UMR5549, Université de Toulouse III-Paul Sabatier, Toulouse, France; IPAL, CNRS IRL 2955, Singapore, Singapore
- Michael Beyeler: Department of Computer Science, University of California, Santa Barbara, CA, USA; Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
24
The Spatiotemporal Neural Dynamics of Object Recognition for Natural Images and Line Drawings. J Neurosci 2023; 43:484-500. [PMID: 36535769 PMCID: PMC9864561 DOI: 10.1523/jneurosci.1546-22.2022] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 11/18/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Drawings offer a simple and efficient way to communicate meaning. While line drawings capture only coarsely how objects look in reality, we still perceive them as resembling real-world objects. Previous work has shown that this perceived similarity is mirrored by shared neural representations for drawings and natural images, which suggests that similar mechanisms underlie the recognition of both. However, other work has proposed that representations of drawings and natural images become similar only after substantial processing has taken place, suggesting distinct mechanisms. To arbitrate between those alternatives, we measured brain responses resolved in space and time using fMRI and MEG, respectively, while human participants (female and male) viewed images of objects depicted as photographs, line drawings, or sketch-like drawings. Using multivariate decoding, we demonstrate that object category information emerged similarly fast and across overlapping regions in occipital, ventral-temporal, and posterior parietal cortex for all types of depiction, yet with smaller effects at higher levels of visual abstraction. In addition, cross-decoding between depiction types revealed strong generalization of object category information from early processing stages on. Finally, by combining fMRI and MEG data using representational similarity analysis, we found that visual information traversed similar processing stages for all types of depiction, yet with an overall stronger representation for photographs. Together, our results demonstrate broad commonalities in the neural dynamics of object recognition across types of depiction, thus providing clear evidence for shared neural mechanisms underlying recognition of natural object images and abstract drawings. SIGNIFICANCE STATEMENT: When we see a line drawing, we effortlessly recognize it as an object in the world despite its simple and abstract style. Here we asked to what extent this correspondence in perception is reflected in the brain. To answer this question, we measured how neural processing of objects depicted as photographs and line drawings with varying levels of detail (from natural images to abstract line drawings) evolves over space and time. We find broad commonalities in the spatiotemporal dynamics and the neural representations underlying the perception of photographs and even abstract drawings. These results indicate a shared basic mechanism supporting recognition of drawings and natural images.
25
Mononen T, Kujala J, Liljeström M, Leppäaho E, Kaski S, Salmelin R. The relationship between electrophysiological and hemodynamic measures of neural activity varies across picture naming tasks: A multimodal magnetoencephalography-functional magnetic resonance imaging study. Front Neurosci 2022; 16:1019572. [PMID: 36408411 PMCID: PMC9669574 DOI: 10.3389/fnins.2022.1019572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/06/2022] [Indexed: 11/06/2022] Open
Abstract
Different neuroimaging methods can yield different views of task-dependent neural engagement. Studies examining the relationship between electromagnetic and hemodynamic measures have revealed correlated patterns across brain regions but the role of the applied stimulation or experimental tasks in these correlation patterns is still poorly understood. Here, we evaluated the across-tasks variability of MEG-fMRI relationship using data recorded during three distinct naming tasks (naming objects and actions from action images, and objects from object images), from the same set of participants. Our results demonstrate that the MEG-fMRI correlation pattern varies according to the performed task, and that this variability shows distinct spectral profiles across brain regions. Notably, analysis of the MEG data alone did not reveal modulations across the examined tasks in the time-frequency windows emerging from the MEG-fMRI correlation analysis. Our results suggest that the electromagnetic-hemodynamic correlation could serve as a more sensitive proxy for task-dependent neural engagement in cognitive tasks than isolated within-modality measures.
Affiliation(s)
- Tommi Mononen (corresponding author): Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland; Department of Computer Science, Aalto University, Espoo, Finland; Faculty of Biological and Environmental Sciences, University of Helsinki, Helsinki, Finland
- Jan Kujala: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Mia Liljeström: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland; BioMag Laboratory, Helsinki University Hospital, Helsinki, Finland
- Eemeli Leppäaho: Department of Computer Science, Aalto University, Espoo, Finland
- Samuel Kaski: Department of Computer Science, Aalto University, Espoo, Finland
- Riitta Salmelin: Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
26
Ebrahiminia F, Cichy RM, Khaligh-Razavi SM. A multivariate comparison of electroencephalogram and functional magnetic resonance imaging to electrocorticogram using visual object representations in humans. Front Neurosci 2022; 16:983602. [PMID: 36330341 PMCID: PMC9624066 DOI: 10.3389/fnins.2022.983602] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/23/2022] [Indexed: 09/07/2024] Open
Abstract
Today, most neurocognitive studies in humans employ the non-invasive neuroimaging techniques functional magnetic resonance imaging (fMRI) and electroencephalogram (EEG). However, how the data provided by fMRI and EEG relate exactly to the underlying neural activity remains incompletely understood. Here, we aimed to understand the relation between EEG and fMRI data at the level of neural population codes using multivariate pattern analysis. In particular, we assessed whether this relation is affected when we change stimuli or introduce identity-preserving variations to them. For this, we recorded EEG and fMRI data separately from 21 healthy participants while participants viewed everyday objects in different viewing conditions, and then related the data to electrocorticogram (ECoG) data recorded for the same stimulus set from epileptic patients. The comparison of EEG and ECoG data showed that object category signals emerge swiftly in the visual system and can be detected by both EEG and ECoG at similar temporal delays after stimulus onset. The correlation between EEG and ECoG was reduced when object representations tolerant to changes in scale and orientation were considered. The comparison of fMRI and ECoG overall revealed a tighter relationship in occipital than in temporal regions, related to differences in fMRI signal-to-noise ratio. Together, our results reveal a complex relationship between fMRI, EEG, and ECoG signals at the level of population codes that critically depends on the time point after stimulus onset, the region investigated, and the visual contents used.
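The multivariate comparison at the heart of this study can be miniaturized: build a representational dissimilarity matrix (RDM) per modality and rank-correlate them. A sketch with synthetic condition patterns (the dimensions, noise level, and shared latent structure are invented; real pipelines add cross-validation and noise ceilings):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions = 12

# Hypothetical condition-wise response patterns for two modalities that
# share an underlying representational geometry plus modality-specific noise.
latent = rng.normal(size=(n_conditions, 20))
eeg_patterns = latent @ rng.normal(size=(20, 64)) + 0.5 * rng.normal(size=(n_conditions, 64))
ecog_patterns = latent @ rng.normal(size=(20, 32)) + 0.5 * rng.normal(size=(n_conditions, 32))

# RDM = pairwise correlation distance between condition patterns;
# pdist returns the vectorized upper triangle directly.
rdm_eeg = pdist(eeg_patterns, metric="correlation")
rdm_ecog = pdist(ecog_patterns, metric="correlation")

# Rank-based comparison of the two representational geometries.
rho, p = spearmanr(rdm_eeg, rdm_ecog)
```

Because only the rank order of dissimilarities is compared, the two modalities need not share units, channel counts, or spatial sampling, which is exactly what makes EEG-ECoG-fMRI comparisons of this kind possible.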
Affiliation(s)
- Fatemeh Ebrahiminia: Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran; School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi: Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, Academic Center for Education, Culture and Research (ACECR), Tehran, Iran
27
Higgins C, van Es MWJ, Quinn AJ, Vidaurre D, Woolrich MW. The relationship between frequency content and representational dynamics in the decoding of neurophysiological data. Neuroimage 2022; 260:119462. [PMID: 35872176 PMCID: PMC10565838 DOI: 10.1016/j.neuroimage.2022.119462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 07/04/2022] [Accepted: 07/08/2022] [Indexed: 11/20/2022] Open
Abstract
Decoding of high temporal resolution, stimulus-evoked neurophysiological data is increasingly used to test theories about how the brain processes information. However, a fundamental relationship between the frequency spectra of the neural signal and the subsequent decoding accuracy timecourse is not widely recognised. We show that, in commonly used instantaneous signal decoding paradigms, each sinusoidal component of the evoked response is translated to double its original frequency in the subsequent decoding accuracy timecourses. We therefore recommend, where researchers use instantaneous signal decoding paradigms, that more aggressive low pass filtering is applied with a cut-off at one quarter of the sampling rate, to eliminate representational alias artefacts. However, this does not negate the accompanying interpretational challenges. We show that these can be resolved by decoding paradigms that utilise both a signal's instantaneous magnitude and its local gradient information as features for decoding. On a publicly available MEG dataset, this results in decoding accuracy metrics that are higher, more stable over time, and free of the technical and interpretational challenges previously characterised. We anticipate that a broader awareness of these fundamental relationships will enable stronger interpretations of decoding results by linking them more clearly to the underlying signal characteristics that drive them.
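The frequency-doubling effect described here is easy to demonstrate: under instantaneous decoding, between-condition separability tracks the *magnitude* of the differential evoked component, and |sin| repeats at twice the frequency of the original sinusoid. A toy sketch (the 10 Hz component and 250 Hz sampling rate are arbitrary choices):

```python
import numpy as np

fs = 250.0                      # sampling rate (Hz); arbitrary for the demo
f_evoked = 10.0                 # frequency of one sinusoidal evoked component (Hz)
t = np.arange(0, 2.0, 1 / fs)

# Separability between two conditions whose evoked responses differ by a
# sinusoid follows |sin|, whose fundamental period is half the original's.
separability = np.abs(np.sin(2 * np.pi * f_evoked * t))

spectrum = np.abs(np.fft.rfft(separability - separability.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(spectrum)]   # lands at 2 * f_evoked

# The paper's recommended anti-alias cut-off for instantaneous decoding:
cutoff = fs / 4
```

This is why a low-pass cut-off at one quarter of the sampling rate (rather than the usual Nyquist-based choice of one half) is needed: the doubling pushes decodable structure up to twice the highest signal frequency.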
Affiliation(s)
- Cameron Higgins: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Mats W J van Es: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
- Andrew J Quinn: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Diego Vidaurre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Mark W Woolrich: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
28
Grootswagers T, McKay H, Varlet M. Unique contributions of perceptual and conceptual humanness to object representations in the human brain. Neuroimage 2022; 257:119350. [PMID: 35659994 DOI: 10.1016/j.neuroimage.2022.119350] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Revised: 05/09/2022] [Accepted: 05/31/2022] [Indexed: 01/18/2023] Open
Abstract
The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.
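"Unique variance" here is the usual variance-partitioning quantity: the drop in R² when one predictor is removed from the full regression model. A minimal sketch with simulated (not the study's) behavioural predictors and a neural response; the effect sizes and variable names are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Simulated object-level scores: conceptual humanness partly overlaps
# perceptual humanness, and the neural response draws on both.
perceptual = rng.normal(size=n)
conceptual = 0.4 * perceptual + rng.normal(size=n)
neural = 0.5 * perceptual + 0.5 * conceptual + rng.normal(size=n)

def r2(X, y):
    """In-sample R-squared of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

full = r2(np.column_stack([perceptual, conceptual]), neural)
unique_perceptual = full - r2(conceptual, neural)   # R2 lost without perceptual
unique_conceptual = full - r2(perceptual, neural)   # R2 lost without conceptual
```

A nonzero unique contribution from each predictor, despite their shared variance, is the statistical pattern the abstract reports for perceptual and conceptual humanness features.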
Affiliation(s)
- Tijl Grootswagers: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia.
- Harriet McKay: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
- Manuel Varlet: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
29
Bo K, Cui L, Yin S, Hu Z, Hong X, Kim S, Keil A, Ding M. Decoding the temporal dynamics of affective scene processing. Neuroimage 2022; 261:119532. [PMID: 35931307 DOI: 10.1016/j.neuroimage.2022.119532] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 07/01/2022] [Accepted: 08/01/2022] [Indexed: 10/31/2022] Open
Abstract
Natural images containing affective scenes are used extensively to investigate the neural mechanisms of visual emotion processing. fMRI studies have shown that these images activate a large-scale distributed brain network that encompasses areas in visual, temporal, and frontal cortices. The underlying spatial and temporal dynamics, however, remain to be better characterized. We recorded simultaneous EEG-fMRI data while participants passively viewed affective images from the International Affective Picture System (IAPS). Applying multivariate pattern analysis to decode EEG data, and representational similarity analysis to fuse EEG data with simultaneously recorded fMRI data, we found that: (1) ∼80 ms after picture onset, perceptual processing of complex visual scenes began in early visual cortex, proceeding to ventral visual cortex at ∼100 ms, (2) between ∼200 and ∼300 ms (pleasant pictures: ∼200 ms; unpleasant pictures: ∼260 ms), affect-specific neural representations began to form, supported mainly by areas in occipital and temporal cortices, and (3) affect-specific neural representations were stable, lasting up to ∼2 s, and exhibited temporally generalizable activity patterns. These results suggest that affective scene representations in the brain are formed temporally in a valence-dependent manner and may be sustained by recurrent neural interactions among distributed brain areas.
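"Temporally generalizable activity patterns" are typically demonstrated with a temporal generalization matrix: a decoder trained at one time point is tested at all others. A toy sketch with synthetic epochs (for brevity it scores on the training trials; a real analysis would cross-validate):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 60, 16, 30
labels = rng.integers(0, 2, size=n_trials)

# Synthetic epochs: a condition-specific topography appears at sample 10
# and is sustained, mimicking a stable affect-specific representation.
pattern = rng.normal(size=n_channels)
X = rng.normal(size=(n_trials, n_channels, n_times))
X[:, :, 10:] += np.outer(2 * labels - 1, pattern)[:, :, None]

# Train a decoder at each time point, test it at every other time point.
tg = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[:, :, t_train], labels)
    for t_test in range(n_times):
        tg[t_train, t_test] = clf.score(X[:, :, t_test], labels)
```

A sustained representation shows up as a square of above-chance accuracy away from the diagonal (here, for train and test times both after sample 10), which is the signature the abstract describes for affect-specific patterns lasting up to ∼2 s.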
Affiliation(s)
- Ke Bo: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Lihan Cui: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Siyang Yin: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Zhenhong Hu: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiangfei Hong: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
- Sungkean Kim: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Human-Computer Interaction, Hanyang University, Ansan, Republic of Korea
- Andreas Keil: Department of Psychology, University of Florida, Gainesville, FL 32611, USA.
- Mingzhou Ding: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA.
30
Wang R, Janini D, Konkle T. Mid-level Feature Differences Support Early Animacy and Object Size Distinctions: Evidence from Electroencephalography Decoding. J Cogn Neurosci 2022; 34:1670-1680. [PMID: 35704550 PMCID: PMC9438936 DOI: 10.1162/jocn_a_01883] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts-unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.
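The cross-decoding logic (train on one image type, test on the other; above-chance transfer implies a common underlying code) can be sketched with synthetic sensor patterns. Everything below is simulated; "texforms" are modeled simply as noisier versions of the same latent animacy signal:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels = 100, 32
labels = rng.integers(0, 2, size=n_trials)   # 0 = inanimate, 1 = animate

# A single animacy topography shared by both image types.
signal = rng.normal(size=n_channels)

def simulate(noise_sd):
    """Trial-by-channel patterns: signed animacy signal plus sensor noise."""
    return (np.outer(2 * labels - 1, signal)
            + noise_sd * rng.normal(size=(n_trials, n_channels)))

X_original, X_texform = simulate(1.0), simulate(1.5)

# Cross-decoding: fit on original images, evaluate on texform trials.
clf = LogisticRegression(max_iter=1000).fit(X_original, labels)
cross_acc = clf.score(X_texform, labels)
```

If the two stimulus types carried animacy information in *different* neural patterns, the transferred decoder would fall to chance; successful transfer is what licenses the "common underlying neural basis" conclusion.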
31
Olsen AS, Høegh RMT, Hinrich JL, Madsen KH, Mørup M. Combining electro- and magnetoencephalography data using directional archetypal analysis. Front Neurosci 2022; 16:911034. [PMID: 35968377 PMCID: PMC9374169 DOI: 10.3389/fnins.2022.911034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 07/11/2022] [Indexed: 11/20/2022] Open
Abstract
Metastable microstates in electro- and magnetoencephalographic (EEG and MEG) measurements are usually determined using modified k-means accounting for polarity invariant states. However, hard state assignment approaches assume that the brain traverses microstates in a discrete rather than continuous fashion. We present multimodal, multisubject directional archetypal analysis as a scale and polarity invariant extension to archetypal analysis using a loss function based on the Watson distribution. With this method, EEG/MEG microstates are modeled using subject- and modality-specific archetypes that are representative, distinct topographic maps between which the brain continuously traverses. Archetypes are specified as convex combinations of unit norm input data based on a shared generator matrix, thus assuming that the timing of neural responses to stimuli is consistent across subjects and modalities. The input data is reconstructed as convex combinations of archetypes using a subject- and modality-specific continuous archetypal mixing matrix. We showcase the model on synthetic data and an openly available face perception event-related potential data set with concurrently recorded EEG and MEG. In synthetic and unimodal experiments, we compare our model to conventional Euclidean multisubject archetypal analysis. We also contrast our model to a directional clustering model with discrete state assignments to highlight the advantages of modeling state trajectories rather than hard assignments. We find that our approach successfully models scale and polarity invariant data, such as microstates, accounting for intersubject and intermodal variability. The model is readily extendable to other modalities ensuring component correspondence while elucidating spatiotemporal signal variability.
Affiliation(s)
- Anders S. Olsen: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
- Rasmus M. T. Høegh: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark; WS Audiology, Lynge, Denmark
- Jesper L. Hinrich: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
- Kristoffer H. Madsen: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Morten Mørup: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
32
Chen X, Liao M, Jiang P, Sun H, Liu L, Gong Q. Abnormal effective connectivity in visual cortices underlies stereopsis defects in amblyopia. Neuroimage Clin 2022; 34:103005. [PMID: 35421811 PMCID: PMC9011166 DOI: 10.1016/j.nicl.2022.103005] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 02/15/2022] [Accepted: 04/05/2022] [Indexed: 02/08/2023]
Abstract
Highlights:
- Abnormal effective connectivity underlying stereopsis defects in amblyopia was studied.
- A weakened connection from V2v to LO2 relates to stereopsis defects in amblyopia.
- Higher-order visual cortices may serve as key nodes in the stereopsis defects.
- An independent longitudinal dataset was used to validate the obtained results.
The neural basis underlying stereopsis defects in patients with amblyopia remains unclear, which hinders the development of clinical therapy. This study aimed to investigate visual network abnormalities in patients with amblyopia and their associations with stereopsis function. Spectral dynamic causal modeling methods were employed for resting-state functional magnetic resonance imaging data to investigate the effective connectivity (EC) among 14 predefined regions of interest in the dorsal and ventral visual pathways. We adopted two independent datasets, including a cross-sectional and a longitudinal dataset. In the cross-sectional dataset, we compared group differences in EC between 31 patients with amblyopia (mean age: 26.39 years old) and 31 healthy controls (mean age: 25.71 years old) and investigated the association between EC and stereoacuity. In addition, we explored EC changes after perceptual learning in a novel longitudinal dataset including 9 patients with amblyopia (mean age: 15.78 years old). We found consistent evidence from the two datasets indicating that the aberrant EC from V2v to LO2 is crucial for the stereoscopic deficits in the patients with amblyopia: it was weaker in the patients than in the controls, showed a positive linear relationship with the stereoscopic function, and increased after perceptual learning in the patients. In addition, higher-level dorsal (V3d, V3A, and V3B) and ventral areas (LO1 and LO2) were important nodes in the network of abnormal ECs associated with stereoscopic deficits in the patients with amblyopia. Our research provides insights into the neural mechanism underlying stereopsis deficits in patients with amblyopia and provides candidate targets for focused stimulus interventions to enhance the efficacy of clinical treatment for the improvement of stereopsis deficiency.
Affiliation(s)
- Xia Chen: Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China
- Meng Liao: Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Ping Jiang: Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China.
- Huaiqiang Sun: Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Imaging Research Core Facilities, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Longqian Liu: Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China.
- Qiyong Gong: Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China
33
Basedau H, Peng KP, May A, Mehnert J. High-Density Electroencephalography-Informed Multiband Functional Magnetic Resonance Imaging Reveals Rhythm-Specific Activations Within the Trigeminal Nociceptive Network. Front Neurosci 2022; 16:802239. [PMID: 35651631 PMCID: PMC9149083 DOI: 10.3389/fnins.2022.802239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 03/30/2022] [Indexed: 11/17/2022] Open
Abstract
The interest in exploring trigeminal pain processing has grown in recent years, mainly due to various pathologies (such as migraine) related to this system. However, research efforts have mainly focused on understanding molecular mechanisms or studying pathological states. On the contrary, non-invasive imaging studies are limited by either spatial or temporal resolution depending on the modality used. This can be overcome by using multimodal imaging techniques such as simultaneous functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although this technique has already been applied to neuroscientific research areas and consequently gained insights into diverse sensory systems and pathologies, only a few studies have applied EEG-fMRI in the field of pain processing and none in the trigeminal system. Focusing on trigeminal nociception, we used a trigeminal pain paradigm, which has been well-studied in either modality. For validation, we first acquired stand-alone measures with each imaging modality before fusing them in a simultaneous session. Furthermore, we introduced a new, yet simple, non-parametric correlation technique, which exploits trial-to-trial variance of both measurement techniques with Spearman’s correlations, to consolidate the results gained by the two modalities. This new technique does not presume a linear relationship and needs a few repetitions per subject. We also showed cross-validation by analyzing visual stimulations. Using these techniques, we showed that EEG power changes in the theta-band induced by trigeminal pain correlate with fMRI activation within the brainstem, whereas those of gamma-band oscillations correlate with BOLD signals in higher cortical areas.
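The trial-to-trial correlation idea reduces to ranking both modalities' single-trial measures within a subject and computing Spearman's rho. A sketch with invented single-subject data (gamma-distributed band power and a monotonic but nonlinear BOLD relation; not the authors' actual data or pipeline):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_trials = 40

# Hypothetical single-subject, trial-wise measures.
theta_power = rng.gamma(shape=2.0, scale=1.0, size=n_trials)   # EEG theta power
bold_amp = 0.5 * np.log1p(theta_power) + 0.1 * rng.normal(size=n_trials)

# Spearman's rho assumes monotonicity only, so the nonlinear (log-like)
# power-to-BOLD mapping and the skewed power distribution are handled
# without any linearity assumption.
rho, p = spearmanr(theta_power, bold_amp)
```

Because the statistic uses only ranks, it is robust to the skewed, outlier-prone distributions typical of single-trial band power, which is what makes the approach workable with only a few repetitions per subject.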
|
34
|
Gurariy G, Mruczek REB, Snow JC, Caplovitz GP. Using High-Density Electroencephalography to Explore Spatiotemporal Representations of Object Categories in Visual Cortex. J Cogn Neurosci 2022; 34:967-987. [PMID: 35286384 PMCID: PMC9169880 DOI: 10.1162/jocn_a_01845] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Visual object perception involves neural processes that unfold over time and recruit multiple regions of the brain. Here, we use high-density EEG to investigate the spatiotemporal representations of object categories across the dorsal and ventral pathways. In the first experiment, human participants were presented with images from two animate object categories (birds and insects) and two inanimate categories (tools and graspable objects). In the second experiment, participants viewed images of tools and graspable objects from a different stimulus set, one in which a shape confound that often exists between these categories (elongation) was controlled for. To explore the temporal dynamics of object representations, we employed time-resolved multivariate pattern analysis on the EEG time series data. This was performed at the electrode level as well as in the source space of two regions of interest: one encompassing the ventral pathway and another encompassing the dorsal pathway. Our results demonstrate that shape, exemplar, and category information can be decoded from the EEG signal. Multivariate pattern analysis within source space revealed that both dorsal and ventral pathways contain information pertaining to shape, inanimate object categories, and animate object categories. Of particular interest, we note striking similarities between the results obtained in the ventral-stream and dorsal-stream regions of interest. These findings provide insight into the spatiotemporal dynamics of object representation and contribute to a growing literature that has begun to redefine the traditional role of the dorsal pathway.
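Time-resolved multivariate pattern analysis, as used in this study and several others in this list, amounts to training and testing a classifier independently at each time point of the epoched data. A minimal sketch with synthetic data (array sizes, the "onset" time, and the category labels are all assumptions, not the authors' pipeline):

```python
# Illustrative time-resolved MVPA: cross-validated decoding accuracy at each
# time point of an (n_trials, n_channels, n_times) EEG array. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 80, 32, 50
y = rng.integers(0, 2, size=n_trials)      # e.g. animate vs. inanimate labels
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, :, 25:] += 0.5                   # category signal after "onset"

clf = make_pipeline(StandardScaler(), LogisticRegression())
acc = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()  # one score per time point
    for t in range(n_times)
])
print(f"pre-signal accuracy ~{acc[:20].mean():.2f}, "
      f"post-signal accuracy ~{acc[30:].mean():.2f}")
```

Plotting `acc` against time yields the familiar decoding time course: chance before stimulus information arrives, above chance afterward.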
|
35
|
The spatiotemporal neural dynamics of object location representations in the human brain. Nat Hum Behav 2022; 6:796-811. [PMID: 35210593 PMCID: PMC9225954 DOI: 10.1038/s41562-022-01302-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Accepted: 01/14/2022] [Indexed: 12/30/2022]
Abstract
To interact with objects in complex environments, we must know what they are and where they are in spite of challenging viewing conditions. Here, we investigated where, how and when representations of object location and category emerge in the human brain when objects appear on cluttered natural scene images using a combination of functional magnetic resonance imaging, electroencephalography and computational models. We found location representations to emerge along the ventral visual stream towards lateral occipital complex, mirrored by gradual emergence in deep neural networks. Time-resolved analysis suggested that computing object location representations involves recurrent processing in high-level visual cortex. Object category representations also emerged gradually along the ventral visual stream, with evidence for recurrent computations. These results resolve the spatiotemporal dynamics of the ventral visual stream that give rise to representations of where and what objects are present in a scene under challenging viewing conditions.
|
36
|
Barnes L, Goddard E, Woolgar A. Neural Coding of Visual Objects Rapidly Reconfigures to Reflect Subtrial Shifts in Attentional Focus. J Cogn Neurosci 2022; 34:806-822. [PMID: 35171251 DOI: 10.1162/jocn_a_01832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Every day, we respond to the dynamic world around us by choosing actions to meet our goals. Flexible neural populations are thought to support this process by adapting to prioritize task-relevant information, driving coding in specialized brain regions toward stimuli and actions that are currently most important. Accordingly, human fMRI shows that activity patterns in frontoparietal cortex contain more information about visual features when they are task-relevant. However, if this preferential coding drives momentary focus, for example, to solve each part of a task in turn, it must reconfigure more quickly than we can observe with fMRI. Here, we used multivariate pattern analysis of magnetoencephalography data to test for rapid reconfiguration of stimulus information when a new feature becomes relevant within a trial. Participants saw two displays on each trial. They attended to the shape of a first target then the color of a second, or vice versa, and reported the attended features at a choice display. We found evidence of preferential coding for the relevant features in both trial phases, even as participants shifted attention mid-trial, commensurate with fast subtrial reconfiguration. However, we only found this pattern of results when the stimulus displays contained multiple objects and not in a simpler task with the same structure. The data suggest that adaptive coding in humans can operate on a fast, subtrial timescale, suitable for supporting periods of momentary focus when complex tasks are broken down into simpler ones, but may not always do so.
Affiliation(s)
- Erin Goddard
- University of New South Wales, Sydney, Australia
- Alexandra Woolgar
- University of Cambridge, United Kingdom
- Macquarie University, Sydney, Australia
|
37
|
Robinson AK, Rich AN, Woolgar A. Linking the Brain with Behavior: The Neural Dynamics of Success and Failure in Goal-directed Behavior. J Cogn Neurosci 2022; 34:639-654. [DOI: 10.1162/jocn_a_01818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The human brain is extremely flexible and capable of rapidly selecting relevant information in accordance with task goals. Regions of frontoparietal cortex flexibly represent relevant task information such as task rules and stimulus features when participants perform tasks successfully, but less is known about how information processing breaks down when participants make mistakes. This is important for understanding whether and when information coding recorded with neuroimaging is directly meaningful for behavior. Here, we used magnetoencephalography to assess the temporal dynamics of information processing and linked neural responses with goal-directed behavior by analyzing how they changed on behavioral error. Participants performed a difficult stimulus–response task using two stimulus–response mapping rules. We used time-resolved multivariate pattern analysis to characterize the progression of information coding from perceptual information about the stimulus, cue and rule coding, and finally, motor response. Response-aligned analyses revealed a ramping up of perceptual information before a correct response, suggestive of internal evidence accumulation. Strikingly, when participants made a stimulus-related error, and not when they made other types of errors, patterns of activity initially reflected the stimulus presented, but later reversed, and accumulated toward a representation of the “incorrect” stimulus. This suggests that the patterns recorded at later time points reflect an internally generated stimulus representation that was used to make the (incorrect) decision. These results illustrate the orderly and overlapping temporal dynamics of information coding in perceptual decision-making and show a clear link between neural patterns in the late stages of processing and behavior.
|
38
|
Kupers ER, Benson NC, Winawer J. A visual encoding model links magnetoencephalography signals to neural synchrony in human cortex. Neuroimage 2021; 245:118655. [PMID: 34687857 PMCID: PMC8788390 DOI: 10.1016/j.neuroimage.2021.118655] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 10/11/2021] [Indexed: 01/23/2023] Open
Abstract
Synchronization of neuronal responses over large distances is hypothesized to be important for many cortical functions. However, no straightforward methods exist to estimate synchrony non-invasively in the living human brain. MEG and EEG measure the whole brain, but the sensors pool over large, overlapping cortical regions, obscuring the underlying neural synchrony. Here, we developed a model from stimulus to cortex to MEG sensors to disentangle neural synchrony from spatial pooling of the instrument. We find that synchrony across cortex has a surprisingly large and systematic effect on predicted MEG spatial topography. We then conducted visual MEG experiments and separated responses into stimulus-locked and broadband components. The stimulus-locked topography was similar to model predictions assuming synchronous neural sources, whereas the broadband topography was similar to model predictions assuming asynchronous sources. We infer that visual stimulation elicits two distinct types of neural responses, one highly synchronous and one largely asynchronous across cortex.
Affiliation(s)
- Eline R Kupers
- Department of Psychology, New York University, New York, NY 10003, United States; Center for Neural Science, New York University, New York, NY 10003, United States; Department of Psychology, Stanford University, Stanford, CA 94305, United States
- Noah C Benson
- Department of Psychology, New York University, New York, NY 10003, United States; Center for Neural Science, New York University, New York, NY 10003, United States; eScience Institute, University of Washington, Seattle, WA 98195, United States
- Jonathan Winawer
- Department of Psychology, New York University, New York, NY 10003, United States; Center for Neural Science, New York University, New York, NY 10003, United States
|
39
|
Vidaurre D, Cichy RM, Woolrich MW. Dissociable Components of Information Encoding in Human Perception. Cereb Cortex 2021; 31:5664-5675. [PMID: 34291294 PMCID: PMC8568005 DOI: 10.1093/cercor/bhab189] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 05/01/2021] [Accepted: 06/02/2021] [Indexed: 11/25/2022] Open
Abstract
Brain decoding can predict visual perception from non-invasive electrophysiological data by combining information across multiple channels. However, decoding methods typically conflate the composite and distributed neural processes underlying perception that are together present in the signal, making it unclear what specific aspects of the neural computations involved in perception are reflected in this type of macroscale data. Using MEG data recorded while participants viewed a large number of naturalistic images, we analytically decomposed the brain signal into its oscillatory and non-oscillatory components, and used this decomposition to show that there are at least three dissociable stimulus-specific aspects to the brain data: a slow, non-oscillatory component, reflecting the temporally stable aspect of the stimulus representation; a global phase shift of the oscillation, reflecting the overall speed of processing of specific stimuli; and differential patterns of phase across channels, likely reflecting stimulus-specific computations. Further, we show that common cognitive interpretations of decoding analysis, in particular about how representations generalize across time, can benefit from acknowledging the multicomponent nature of the signal in the study of perception.
Affiliation(s)
- Diego Vidaurre
- Department of Clinical Medicine, Center for Functionally Integrative Neuroscience, Aarhus University, Aarhus 8000, Denmark
- Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Wellcome Trust Center for Integrative Neuroimaging, University of Oxford, Oxford OX3 7JX, UK
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Mark W Woolrich
- Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Wellcome Trust Center for Integrative Neuroimaging, University of Oxford, Oxford OX3 7JX, UK
|
40
|
Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, Teng S. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn Neuropsychol 2021; 38:468-489. [PMID: 35729704 PMCID: PMC10589059 DOI: 10.1080/02643294.2022.2085085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 03/31/2022] [Accepted: 05/25/2022] [Indexed: 10/17/2022]
Abstract
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
Affiliation(s)
- Matthew X. Lowe
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Unlimited Sciences, Colorado Springs, CO
- Yalda Mohsenzadeh
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Department of Computer Science, The University of Western Ontario, London, ON, Canada
- Benjamin Lahner
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
- Center for Human Brain Health, University of Birmingham, UK
- Aude Oliva
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Santani Teng
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Smith-Kettlewell Eye Research Institute (SKERI), San Francisco, CA
|
41
|
Dowdle LT, Ghose G, Chen CCC, Ugurbil K, Yacoub E, Vizioli L. Statistical power or more precise insights into neuro-temporal dynamics? Assessing the benefits of rapid temporal sampling in fMRI. Prog Neurobiol 2021; 207:102171. [PMID: 34492308 DOI: 10.1016/j.pneurobio.2021.102171] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Revised: 08/09/2021] [Accepted: 09/02/2021] [Indexed: 01/25/2023]
Abstract
Functional magnetic resonance imaging (fMRI), a non-invasive and widely used human neuroimaging method, is most known for its spatial precision. However, there is a growing interest in its temporal sensitivity. This is despite the temporal blurring of neuronal events by the blood oxygen level dependent (BOLD) signal, the peak of which lags neuronal firing by 4-6 seconds. Given this, the goal of this review is to answer a seemingly simple question - "What are the benefits of increased temporal sampling for fMRI?". To answer this, we have combined fMRI data collected at multiple temporal scales, from 323 to 1000 milliseconds, with a review of both historical and contemporary temporal literature. After a brief discussion of technological developments that have rekindled interest in temporal research, we next consider the potential statistical and methodological benefits. Most importantly, we explore how fast fMRI can uncover previously unobserved neuro-temporal dynamics - effects that are entirely missed when sampling at conventional 1 to 2 second rates. With the intrinsic link between space and time in fMRI, this temporal renaissance also delivers improvements in spatial precision. Far from producing only statistical gains, the array of benefits suggest that the continued temporal work is worth the effort.
Affiliation(s)
- Logan T Dowdle
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN, 55455, United States; Department of Neurosurgery, University of Minnesota, 500 SE Harvard St, Minneapolis, MN, 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN, 55455, United States
- Geoffrey Ghose
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN, 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN, 55455, United States
- Clark C C Chen
- Department of Neurosurgery, University of Minnesota, 500 SE Harvard St, Minneapolis, MN, 55455, United States
- Kamil Ugurbil
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN, 55455, United States
- Essa Yacoub
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN, 55455, United States
- Luca Vizioli
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN, 55455, United States; Department of Neurosurgery, University of Minnesota, 500 SE Harvard St, Minneapolis, MN, 55455, United States
|
42
|
Lu HY, Lorenc ES, Zhu H, Kilmarx J, Sulzer J, Xie C, Tobler PN, Watrous AJ, Orsborn AL, Lewis-Peacock J, Santacruz SR. Multi-scale neural decoding and analysis. J Neural Eng 2021; 18. [PMID: 34284369 PMCID: PMC8840800 DOI: 10.1088/1741-2552/ac160f] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
Objective. Complex spatiotemporal neural activity encodes rich information related to behavior and cognition. Conventional research has focused on neural activity acquired using one of many different measurement modalities, each of which provides useful but incomplete assessment of the neural code. Multi-modal techniques can overcome tradeoffs in the spatial and temporal resolution of a single modality to reveal deeper and more comprehensive understanding of system-level neural mechanisms. Uncovering multi-scale dynamics is essential for a mechanistic understanding of brain function and for harnessing neuroscientific insights to develop more effective clinical treatment. Approach. We discuss conventional methodologies used for characterizing neural activity at different scales and review contemporary examples of how these approaches have been combined. Then we present our case for integrating activity across multiple scales to benefit from the combined strengths of each approach and elucidate a more holistic understanding of neural processes. Main results. We examine various combinations of neural activity at different scales and analytical techniques that can be used to integrate or illuminate information across scales, as well as the technologies that enable such exciting studies. We conclude with challenges facing future multi-scale studies, and a discussion of the power and potential of these approaches. Significance. This roadmap will lead the readers toward a broad range of multi-scale neural decoding techniques and their benefits over single-modality analyses. This Review article highlights the importance of multi-scale analyses for systematically interrogating complex spatiotemporal mechanisms underlying cognition and behavior.
Affiliation(s)
- Hung-Yun Lu
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America
- Elizabeth S Lorenc
- The University of Texas at Austin, Psychology, Austin, TX, United States of America
- The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Hanlin Zhu
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Justin Kilmarx
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America
- James Sulzer
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America
- The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Chong Xie
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Philippe N Tobler
- University of Zurich, Neuroeconomics and Social Neuroscience, Zurich, Switzerland
- Andrew J Watrous
- The University of Texas at Austin, Neurology, Austin, TX, United States of America
- Amy L Orsborn
- University of Washington, Electrical and Computer Engineering, Seattle, WA, United States of America
- University of Washington, Bioengineering, Seattle, WA, United States of America
- Washington National Primate Research Center, Seattle, WA, United States of America
- Jarrod Lewis-Peacock
- The University of Texas at Austin, Psychology, Austin, TX, United States of America
- The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Samantha R Santacruz
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America
- The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
|
43
|
Grootswagers T, Robinson AK. Overfitting the Literature to One Set of Stimuli and Data. Front Hum Neurosci 2021; 15:682661. [PMID: 34305552 PMCID: PMC8295535 DOI: 10.3389/fnhum.2021.682661] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 06/16/2021] [Indexed: 12/02/2022] Open
Abstract
A large number of papers in Computational Cognitive Neuroscience are developing and testing novel analysis methods using one specific neuroimaging dataset and problematic experimental stimuli. Publication bias and confirmatory exploration will result in overfitting to the limited available data. We highlight the problems with this specific dataset and argue for the need to collect more good quality open neuroimaging data using a variety of experimental stimuli, in order to test the generalisability of current published results, and allow for more robust results in future work.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Sydney, NSW, Australia
- School of Psychology, Western Sydney University, Sydney, NSW, Australia
- School of Psychology, University of Sydney, Sydney, NSW, Australia
|
44
|
Opoku-Baah C, Schoenhaut AM, Vassall SG, Tovar DA, Ramachandran R, Wallace MT. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021; 22:365-386. [PMID: 34014416 PMCID: PMC8329114 DOI: 10.1007/s10162-021-00789-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 02/07/2021] [Indexed: 01/03/2023] Open
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions that arise from this combination of information shape auditory function across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding of this topic at this point in time. Following a general introduction, the review is divided into five sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence on audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception: scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.
Affiliation(s)
- Collins Opoku-Baah
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Adriana M Schoenhaut
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Sarah G Vassall
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- David A Tovar
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ramnarayan Ramachandran
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Vision Research Center, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
|
45
|
Bayer M, Berhe O, Dziobek I, Johnstone T. Rapid Neural Representations of Personally Relevant Faces. Cereb Cortex 2021; 31:4699-4708. [PMID: 33987643 DOI: 10.1093/cercor/bhab116] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 03/15/2021] [Accepted: 04/08/2021] [Indexed: 01/27/2023] Open
Abstract
The faces of those most personally relevant to us are our primary source of social information, making their timely perception a priority. Recent research indicates that gender, age and identity of faces can be decoded from EEG/MEG data within 100 ms. Yet, the time course and neural circuitry involved in representing the personal relevance of faces remain unknown. We applied simultaneous EEG-fMRI to examine neural responses to emotional faces of female participants' romantic partners, friends, and a stranger. Combining EEG and fMRI in cross-modal representational similarity analyses, we provide evidence that representations of personal relevance start prior to structural encoding at 100 ms, with correlated representations in visual cortex, but also in prefrontal and midline regions involved in value representation, and monitoring and recall of self-relevant information. Our results add to an emerging body of research that suggests that models of face perception need to be updated to account for rapid detection of personal relevance in cortical circuitry beyond the core face processing network.
Affiliation(s)
- Mareike Bayer
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany
- Oksana Berhe
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany
- Department of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim/Heidelberg University, 68159 Mannheim, Germany
- Isabel Dziobek
- Berlin School of Mind and Brain, Department of Psychology, Humboldt-Universität zu Berlin, 10999 Berlin, Germany
- Tom Johnstone
- Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, The University of Reading, RG6 6AH Reading, UK
- School of Health Sciences, Swinburne University of Technology, 3184 Hawthorn, Australia
|
46
|
Spatiotemporal dynamics of responses to biological motion in the human brain. Cortex 2021; 136:124-139. [PMID: 33545617 DOI: 10.1016/j.cortex.2020.12.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Revised: 05/27/2020] [Accepted: 12/10/2020] [Indexed: 01/01/2023]
Abstract
We sought to understand the spatiotemporal characteristics of biological motion perception. We presented observers with biological motion walkers that differed in terms of form coherence or kinematics (i.e., the presence or absence of natural acceleration). Participants were asked to discriminate the facing direction of the stimuli while their magnetoencephalographic responses were concurrently imaged. We found that two univariate response components can be observed around ~200 msec and ~650 msec post-stimulus onset, each engaging lateral-occipital and parietal cortex prior to temporal and frontal cortex. Moreover, while univariate responses show biological motion form-specificity only after 300 msec, multivariate patterns specific to form can be well discriminated from those for local cues as early as 100 msec after stimulus onset. By finally examining the representational similarity of fMRI and MEG patterned responses, we show that early responses to biological motion are most likely sourced to occipital cortex while later responses likely originate from extrastriate body areas.
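The MEG-fMRI representational similarity comparison described above is typically implemented by correlating the stimulus dissimilarity structure of each imaging modality. The sketch below is not code from the cited study; it is a minimal illustration of the general fusion recipe, assuming hypothetical array shapes (time × conditions × sensors for MEG, conditions × voxels for fMRI):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Vectorized correlation-distance RDM from a (conditions x features) matrix."""
    return pdist(patterns, metric="correlation")

def fusion_timecourse(meg, fmri_patterns):
    """Spearman-correlate the RDM of one fMRI region with the MEG RDM at every
    timepoint; the peak suggests when that region's representational geometry
    emerges. meg: (time, conditions, sensors) array."""
    target = rdm(fmri_patterns)
    return np.array([spearmanr(rdm(meg[t]), target)[0] for t in range(len(meg))])
```

Spearman (rank) correlation is the common choice here because it assumes only a monotonic, not linear, relation between the two modalities' dissimilarities.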
47
Ubaldi S, Fairhall SL. fMRI-Indexed neural temporal tuning reveals the hierarchical organisation of the face and person selective network. Neuroimage 2020; 227:117690. [PMID: 33385559 PMCID: PMC7611695 DOI: 10.1016/j.neuroimage.2020.117690] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 12/24/2020] [Accepted: 12/25/2020] [Indexed: 11/04/2022] Open
Abstract
Recognising and knowing about conspecifics is vital to human interaction and is served in the brain by a well-characterised cortical network. Understanding the temporal dynamics of this network is critical to gaining insight into both hierarchical organisation and regional coordination. Here, we combine the high spatial resolution of fMRI with a paradigm that permits investigation of differential temporal tuning across cortical regions. We cognitively under- and overload the system using the rapid presentation (100-1200 msec) of famous faces and buildings. We observed an increase in activity as presentation rates slowed and a negative deflection when inter-stimulus intervals (ISIs) were extended to longer periods. The primary distinction in tuning patterns was between core (perceptual) and extended (non-perceptual) systems but there was also evidence for nested hierarchies within systems, as well as indications of widespread parallel processing. Extended regions demonstrated common temporal tuning across regions which may indicate coordinated activity as they cooperate to manifest the diverse cognitive representation accomplished by this network. With the support of an additional psychophysical study, we demonstrated that ISIs necessary for different levels of semantic access are consistent with temporal tuning patterns. Collectively, these results show that regions of the person-knowledge network operate over different temporal timescales consistent with hierarchical organisation.
Affiliation(s)
- Silvia Ubaldi
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN 38068, Italy
- Scott L Fairhall
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN 38068, Italy.
48
Tong S, Liang X, Kumada T, Iwaki S. Putative ratios of facial attractiveness in a deep neural network. Vision Res 2020; 178:86-99. [PMID: 33186876 DOI: 10.1016/j.visres.2020.10.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 08/25/2020] [Accepted: 10/02/2020] [Indexed: 12/01/2022]
Abstract
Empirical evidence has shown that there is an ideal arrangement of facial features (ideal ratios) that can optimize the attractiveness of a person's face. These putative ratios define facial attractiveness in terms of spatial relations and provide important rules for measuring the attractiveness of a face. In this paper, we show that a deep neural network (DNN) model can learn putative ratios from face images based only on categorical annotation when no annotated facial features for attractiveness are explicitly given. To this end, we conducted three experiments. In Experiment 1, we trained a DNN model to recognize the attractiveness (female/male × high/low attractiveness) of faces in the images using four category-specific neurons (CSNs). In Experiment 2, face-like images were generated by reversing the DNN model (e.g., deconvolution). These images depict the intuitive attributes encoded in the CSNs of the four categories of facial attractiveness and reveal certain consistencies with reported evidence on the putative ratios. In Experiment 3, simulated psychophysical experiments on face images with varying putative ratios reveal changes in the activity of the CSNs that are remarkably similar to those of human judgements reported in a previous study. These results show that the trained DNN model can learn putative ratios as key features for the representation of facial attractiveness. This finding advances our understanding of facial attractiveness from a DNN-based perspective.
Affiliation(s)
- Song Tong
- IST, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
- Xuefeng Liang
- School of Artificial Intelligence, Xidian University, Xi'an, PR China.
- Takatsune Kumada
- IST, Graduate School of Informatics, Kyoto University, Kyoto, Japan.
- Sunao Iwaki
- Information Technology and Human Factors, AIST, Tsukuba, Japan.
49
May A, Schulte LH, Nolte G, Mehnert J. Partial Similarity Reveals Dynamics in Brainstem-Midbrain Networks during Trigeminal Nociception. Brain Sci 2020; 10:brainsci10090603. [PMID: 32887487 PMCID: PMC7563756 DOI: 10.3390/brainsci10090603] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 09/01/2020] [Indexed: 11/28/2022] Open
Abstract
Imaging studies help us understand the important role of brainstem and midbrain regions in human trigeminal pain processing without solving the question of how these regions actually interact. In the current study, we describe this connectivity and its dynamics during nociception with a novel analytical approach called Partial Similarity (PS). We developed PS specifically to estimate the communication between individual hubs of the network, in contrast to the overall communication within that network. Partial Similarity works on trial-to-trial variance of neuronal activity acquired with functional magnetic resonance imaging. It discovers direct communication between two hubs while treating the remainder of the network as confounds. A related method is Representational Similarity, which works with ordinary correlations and does not consider any external influence on the communication between two hubs. In particular, the combination of Representational Similarity and Partial Similarity analysis unravels brainstem dynamics involved in trigeminal pain using the spinal trigeminal nucleus (STN)—the first relay station of peripheral trigeminal input—as a seed region. Combined, the two methods can be a valuable tool for discovering network dynamics in fMRI and an important instrument for future insight into the nature of various neurological diseases like primary headaches.
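The core distinction the abstract draws — ordinary correlation between two hubs versus correlation with the rest of the network regressed out — is the familiar contrast between plain and partial correlation. The sketch below is not the authors' implementation; it is a minimal partial-correlation illustration on hypothetical trial-wise hub signals:

```python
import numpy as np

def partial_similarity(x, y, confounds):
    """Correlation between the trial-wise signals of two hubs after
    regressing out the remaining network hubs (confounds: trials x hubs)."""
    C = np.column_stack([np.ones(len(x)), confounds])  # intercept + confound hubs
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]  # residualize hub x
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]  # residualize hub y
    return float(np.corrcoef(rx, ry)[0, 1])
```

When both hubs are driven by a shared third region, their plain correlation is high while the partial similarity drops toward zero — the situation in which the two measures diverge and their combination becomes informative.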
Affiliation(s)
- Arne May
- Department of Systems Neuroscience, University Medical Center Eppendorf, 20246 Hamburg, Germany; (A.M.); (L.H.S.)
- Laura Helene Schulte
- Department of Systems Neuroscience, University Medical Center Eppendorf, 20246 Hamburg, Germany; (A.M.); (L.H.S.)
- Guido Nolte
- Department of Neurophysiology and Pathophysiology, University Medical Center Eppendorf, 20246 Hamburg, Germany;
- Jan Mehnert
- Department of Systems Neuroscience, University Medical Center Eppendorf, 20246 Hamburg, Germany; (A.M.); (L.H.S.)
- Correspondence: ; Tel.: +49-40-7410-59711
50
Cichy RM, Oliva A. A M/EEG-fMRI Fusion Primer: Resolving Human Brain Responses in Space and Time. Neuron 2020; 107:772-781. [DOI: 10.1016/j.neuron.2020.07.001] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 06/25/2020] [Accepted: 06/30/2020] [Indexed: 10/23/2022]