2
Ryu H, Ju U, Wallraven C. Decoding visual fatigue in a visual search task selectively manipulated via myopia-correcting lenses. Front Neurosci 2024;18:1307688. PMID: 38660218; PMCID: PMC11039808; DOI: 10.3389/fnins.2024.1307688.
Abstract
Introduction: Visual fatigue resulting from sustained, high-workload visual activities can significantly impact task performance and general wellbeing. So far, however, little is known about the underlying brain networks of visual fatigue. This study aimed to identify such potential networks using a unique paradigm involving myopia-correcting lenses known to directly modulate subjectively-perceived fatigue levels.
Methods: A sample of N = 31 myopic participants [right eye SE: -3.77 D (SD: 2.46); left eye SE: -3.75 D (SD: 2.45)] performed a demanding visual search task with varying difficulty levels, both with and without the lenses, while undergoing fMRI scanning. There were a total of 20 trials, after each of which participants rated the perceived difficulty and their subjective visual fatigue level. We used representational similarity analysis to decode brain regions associated with fatigue and difficulty, analyzing their individual and joint decoding patterns.
Results and discussion: Behavioral results showed correlations between fatigue and difficulty ratings and, above all, a significant reduction in fatigue levels when wearing the lenses. Imaging results implicated the cuneus, lingual gyrus, middle occipital gyrus (MOG), and declive in joint fatigue and difficulty decoding. Parts of the lingual gyrus were able to selectively decode perceived difficulty. Importantly, a broader network of visual and higher-level association areas showed exclusive decodability of fatigue (culmen, middle temporal gyrus (MTG), parahippocampal gyrus, precentral gyrus, and precuneus). Our findings enhance our understanding of processing within the context of visual search, attention, and mental workload and, for the first time, demonstrate that it is possible to decode subjectively-perceived visual fatigue during a challenging task from imaging data. Furthermore, the study underscores the potential of myopia-correcting lenses in investigating and modulating fatigue.
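The decoding approach described above rests on representational similarity analysis (RSA): two sets of response patterns are compared through the geometry of their representational dissimilarity matrices (RDMs) rather than through the raw patterns themselves. The following is a minimal NumPy sketch of that general idea, not the authors' actual pipeline; all function names and the simulated data are illustrative.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition patterns (n_conditions, n_features)."""
    return 1.0 - np.corrcoef(patterns)

def spearman(x, y):
    """Spearman correlation as Pearson correlation of ranks (no-ties case)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(rank(x), rank(y))[0, 1]

def rsa_score(patterns_a, patterns_b):
    """Compare two RDMs on their upper triangles (the standard RSA statistic)."""
    iu = np.triu_indices(len(patterns_a), k=1)
    return spearman(rdm(patterns_a)[iu], rdm(patterns_b)[iu])

# Simulated example: 20 trials x 100 voxels; the second pattern set is a
# noisy copy of the first, so the two representational geometries agree.
rng = np.random.default_rng(0)
neural = rng.standard_normal((20, 100))
model = neural + 0.5 * rng.standard_normal((20, 100))
print(rsa_score(neural, model))  # clearly positive: shared geometry
```

In the study's setting, one RDM would come from voxel patterns in a candidate region and the other from the behavioral ratings (fatigue or difficulty); a reliably positive score means the region's response geometry tracks the rated variable.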
Affiliation(s)
- Hyeongsuk Ryu
  - Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Uijong Ju
  - Department of Information Display, Kyunghee University, Seoul, Republic of Korea
- Christian Wallraven
  - Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
  - Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
3
Gong Z, Zhou M, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for the visual processing of naturalistic scenes. Sci Data 2023;10:559. PMID: 37612327; PMCID: PMC10447576; DOI: 10.1038/s41597-023-02471-x.
Abstract
One ultimate goal of visual neuroscience is to understand how the brain processes visual stimuli encountered in the natural environment. Achieving this goal requires records of brain responses under massive amounts of naturalistic stimuli. Although the scientific community has put substantial effort into collecting large-scale functional magnetic resonance imaging (fMRI) data under naturalistic stimuli, more naturalistic fMRI datasets are still urgently needed. We present here the Natural Object Dataset (NOD), a large-scale fMRI dataset containing responses to 57,120 naturalistic images from 30 participants. NOD strives for a balance between sampling variation between individuals and sampling variation between stimuli. This enables NOD to be utilized not only for determining whether an observation generalizes across many individuals, but also for testing whether a response pattern generalizes to a variety of naturalistic stimuli. We anticipate that the NOD, together with existing naturalistic neuroimaging datasets, will serve as a new impetus for our understanding of the visual processing of naturalistic stimuli.
Affiliation(s)
- Zhengxin Gong
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Ming Zhou
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
4
Zhou M, Gong Z, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for human action recognition. Sci Data 2023;10:415. PMID: 37369643; DOI: 10.1038/s41597-023-02325-6.
Abstract
Human action recognition is a critical capability for our survival, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few action categories from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
Affiliation(s)
- Ming Zhou
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zhengxin Gong
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
  - State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
  - Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
5
Kumar M, Anderson MJ, Antony JW, Baldassano C, Brooks PP, Cai MB, Chen PHC, Ellis CT, Henselman-Petrusek G, Huberdeau D, Hutchinson JB, Li YP, Lu Q, Manning JR, Mennen AC, Nastase SA, Richard H, Schapiro AC, Schuck NW, Shvartsman M, Sundaram N, Suo D, Turek JS, Turner D, Vo VA, Wallace G, Wang Y, Williams JA, Zhang H, Zhu X, Capotă M, Cohen JD, Hasson U, Li K, Ramadge PJ, Turk-Browne NB, Willke TL, Norman KA. BrainIAK: The Brain Imaging Analysis Kit. Aperture Neuro 2022;1. PMID: 35939268; PMCID: PMC9351935; DOI: 10.52294/31bb5b68-2184-411b-8c00-a1dacb61e1da.
Abstract
Functional magnetic resonance imaging (fMRI) offers a rich source of data for studying the neural basis of cognition. Here, we describe the Brain Imaging Analysis Kit (BrainIAK), an open-source, free Python package that provides computationally optimized solutions to key problems in advanced fMRI analysis. A variety of techniques are presently included in BrainIAK: intersubject correlation (ISC) and intersubject functional connectivity (ISFC), functional alignment via the shared response model (SRM), full correlation matrix analysis (FCMA), a Bayesian version of representational similarity analysis (BRSA), event segmentation using hidden Markov models, topographic factor analysis (TFA), inverted encoding models (IEMs), an fMRI data simulator that uses noise characteristics from real data (fmrisim), and some emerging methods. These techniques have been optimized to leverage the efficiencies of high-performance computing (HPC) clusters, and the same code can be seamlessly transferred from a laptop to a cluster. For each of the aforementioned techniques, we describe the data analysis problem that the technique is meant to solve and how it solves that problem; we also include an example Jupyter notebook for each technique and an annotated bibliography of papers that have used and/or described that technique. In addition to the sections describing various analysis techniques in BrainIAK, we have included sections describing the future applications of BrainIAK to real-time fMRI, tutorials that we have developed and shared online to facilitate learning the techniques in BrainIAK, computational innovations in BrainIAK, and how to contribute to BrainIAK. We hope that this manuscript helps readers to understand how BrainIAK might be useful in their research.
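To make the first technique in that list concrete: intersubject correlation (ISC) measures how similarly different brains respond over time to the same stimulus. BrainIAK ships an optimized implementation of this; the sketch below is only a plain-NumPy illustration of the leave-one-out variant, with simulated data and function names of my own choosing, not BrainIAK's API.

```python
import numpy as np

def leave_one_out_isc(data):
    """Leave-one-out intersubject correlation.
    data: (n_timepoints, n_voxels, n_subjects). Returns an array of shape
    (n_subjects, n_voxels): each subject's voxelwise Pearson correlation
    with the average time course of all the other subjects."""
    n_t, n_v, n_s = data.shape
    isc = np.empty((n_s, n_v))
    for s in range(n_s):
        others = data[:, :, np.arange(n_s) != s].mean(axis=2)
        a = data[:, :, s] - data[:, :, s].mean(axis=0)   # center over time
        b = others - others.mean(axis=0)
        isc[s] = (a * b).sum(axis=0) / (
            np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
    return isc

# Simulated example: a shared stimulus-driven signal plus subject-specific
# noise, so ISC should be high in every voxel.
rng = np.random.default_rng(1)
shared = rng.standard_normal((200, 10, 1))               # common signal
data = shared + 0.5 * rng.standard_normal((200, 10, 8))  # 8 noisy subjects
print(leave_one_out_isc(data).mean())  # high: responses are stimulus-locked
```

The same correlation matrix between subjects' regional time courses, rather than within a voxel, gives ISFC; BrainIAK's versions add the statistical testing and HPC parallelization that this toy loop omits.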
Affiliation(s)
- Manoj Kumar
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Michael J. Anderson
  - Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- James W. Antony
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Paula P. Brooks
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Ming Bo Cai
  - International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Japan
- Po-Hsuan Cameron Chen
  - Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Y. Peeta Li
  - Department of Psychology, University of Oregon, Eugene, OR
- Qihong Lu
  - Department of Psychology, Princeton University, Princeton, NJ
- Jeremy R. Manning
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH
- Anne C. Mennen
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Samuel A. Nastase
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Hugo Richard
  - Parietal Team, Inria, Neurospin, CEA, Université Paris-Saclay, France
- Anna C. Schapiro
  - Department of Psychology, University of Pennsylvania, Philadelphia, PA
- Nicolas W. Schuck
  - Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
- Michael Shvartsman
  - Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Narayanan Sundaram
  - Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Daniel Suo
  - Department of Computer Science, Princeton University, Princeton, NJ
- Javier S. Turek
  - Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- David Turner
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Vy A. Vo
  - Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Grant Wallace
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Yida Wang
  - Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Jamal A. Williams
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Hejia Zhang
  - Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Xia Zhu
  - Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Mihai Capotă
  - Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Jonathan D. Cohen
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Uri Hasson
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ
- Kai Li
  - Department of Computer Science, Princeton University, Princeton, NJ
- Peter J. Ramadge
  - Department of Electrical Engineering, and the Center for Statistics and Machine Learning, Princeton University, Princeton, NJ
- Kenneth A. Norman
  - Princeton Neuroscience Institute, Princeton University, Princeton, NJ; Department of Psychology, Princeton University, Princeton, NJ