1
Wu Y, Zhang Y, Mao Y, Feng K, Wei D, Song L. Reconstructing sources location of visual color cortex by the task-irrelevant visual stimuli through machine learning decoding. Heliyon 2022; 8:e12287. [PMID: 36582686 PMCID: PMC9792758 DOI: 10.1016/j.heliyon.2022.e12287]
Abstract
Visual color sensing is generated by electrical discharges from intracranial neuronal sources that penetrate the skull and reach the cerebral cortex. However, the spatial location of the sources underlying this neural mechanism remains elusive. In this paper, we emulate the generation of visual color signals with task-irrelevant stimuli that activate brain neurons, and experimentally track the consequences over the cerebral cortex. We first document the changes in brain color sensing using electroencephalography (EEG), and find that the sensing classification accuracy of primary visual cortex (V1) regions is positively correlated with the spatial correlation of the visual evoked potential (VEP) power distribution under machine learning decoding. We then use the decoded results to trace the neural source location of brain activity in the EEG inverse problem and assess whether it can be reconstructed. We show that visual color EEG in V1 can reconstruct the intracranial neuronal source location through machine learning decoding of channel location.
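The channel-level decoding step this abstract describes, classifying color conditions from EEG and comparing channel groups, can be sketched as below. This is an illustrative reconstruction on synthetic data, not the authors' pipeline; the epoch shapes, the "occipital" channel grouping, and the linear discriminant classifier are all assumptions.

```python
# Sketch of channel-group decoding of a color VEP (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 32, 50
y = rng.integers(0, 2, n_trials)                 # two color conditions
X = rng.normal(size=(n_trials, n_channels, n_times))
X[y == 1, :8, :] += 0.4                          # signal in 8 "occipital" channels

def group_accuracy(X, y, chans):
    """Cross-validated decoding accuracy from one channel group."""
    feats = X[:, chans, :].mean(axis=2)          # mean VEP amplitude per channel
    return cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()

occ_acc = group_accuracy(X, y, slice(0, 8))      # informative channels
ctrl_acc = group_accuracy(X, y, slice(8, 16))    # control channels
```

Comparing accuracy across channel groups in this way is one route to identifying which electrode locations carry the color signal, in the spirit of the channel-location decoding the abstract reports.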
Affiliation(s)
- Yijia Wu
- Academy for Engineering & Technology, Fudan University, Shanghai, China; Shanghai East-bund Institute on Networking Systems of AI, Shanghai, China. Corresponding author.
- Yanni Zhang
- Shanghai East-bund Institute on Networking Systems of AI, Shanghai, China
- Yanjing Mao
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Kaiqiang Feng
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Donglai Wei
- Academy for Engineering & Technology, Fudan University, Shanghai, China
- Liang Song
- Academy for Engineering & Technology, Fudan University, Shanghai, China; Shanghai East-bund Institute on Networking Systems of AI, Shanghai, China
2
Li Y, Tregillus KEM, Engel SA. Visual mode switching: Improved general compensation for environmental color changes requires only one exposure per day. J Vis 2022; 22:12. [PMID: 36098963 PMCID: PMC9482319 DOI: 10.1167/jov.22.10.12]
Abstract
When the visual environment changes, vision adapts in order to maintain accurate perception. For repeatedly encountered environmental changes, the visual system may learn to adjust immediately, a process called "visual mode switching." For example, following experience with red glasses, participants report that the glasses' redness fades instantly when they put the glasses on. Here we tested (1) whether once-daily experience suffices for learning to switch visual modes and (2) whether effects of mode switching apply to most stimuli affected by the environmental change. In Experiment 1, 12 participants wore bright red glasses for a single 5-hr period each day for 5 days, and we tested for changes in the perception of unique yellow, which contains neither red nor green. In Experiment 2, we tested how mode switching affects larger parts of the color space. Thirteen participants donned and removed the glasses multiple times a day for 5 days, and we used a dissimilarity rating task to measure and track perception of many different colors. Across days, immediately upon donning the glasses, the world appeared less and less reddish (Experiment 1), and colors across the whole color space appeared more and more normal (Experiment 2). These results indicate that mode switching can be acquired from a once-daily experience, and it applies to most stimuli in a given environment. These findings may help to predict when and how mode switching occurs outside the laboratory.
Affiliation(s)
- Yanjun Li
- Department of Psychology, University of Minnesota, MN, USA
- Stephen A Engel
- Department of Psychology, University of Minnesota, MN, USA
3
Goddard E, Shooner C, Mullen KT. Magnetoencephalography contrast adaptation reflects perceptual adaptation. J Vis 2022; 22:16. [PMID: 36121660 PMCID: PMC9503227 DOI: 10.1167/jov.22.10.16]
Abstract
Contrast adaptation is a fundamental visual process that has been extensively investigated and used to infer the selectivity of visual cortex. We recently reported an apparent disconnect between the effects of contrast adaptation on perception and on functional magnetic resonance imaging BOLD response adaptation, in which adaptation between chromatic and achromatic stimuli measured psychophysically showed greater selectivity than adaptation measured using BOLD signals. Here we used magnetoencephalography (MEG) recordings of neural responses under the same chromatic and achromatic adaptation conditions to characterize the neural effects of contrast adaptation and to determine whether BOLD or MEG adaptation better reflects the measured perceptual effects. Participants viewed achromatic, L-M isolating, or S-cone isolating radial sinusoids before and after adaptation to each of the three contrast directions. We measured adaptation-related changes in the neural response to a range of stimulus contrast amplitudes using two measures of the MEG response: the overall response amplitude, and a novel time-resolved measure of the contrast response function derived from a classification analysis combined with multidimensional scaling. Within-stimulus adaptation effects on the contrast response functions showed a pattern of contrast-gain or a combination of contrast-gain and response-gain effects. Cross-stimulus adaptation conditions showed that adaptation effects were highly stimulus selective across early, ventral, and dorsal visual cortical areas, consistent with the perceptual effects.
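The "classification analysis combined with multidimensional scaling" mentioned here can be sketched roughly as follows: pairwise above-chance decoding accuracies between contrast levels are treated as dissimilarities and embedded with MDS. This is a toy reconstruction on synthetic sensor data; the contrast levels, noise model, and classifier are illustrative assumptions, not the paper's parameters.

```python
# Toy pairwise-decoding + MDS estimate of a contrast response axis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.manifold import MDS
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
contrasts = [0.0, 0.1, 0.3, 0.9]
n_trials, n_sensors = 60, 20
v = rng.normal(size=n_sensors)                   # fixed response topography

def trials(c):
    """Synthetic sensor patterns whose amplitude scales with contrast."""
    return c * v + rng.normal(scale=1.0, size=(n_trials, n_sensors))

n_c = len(contrasts)
D = np.zeros((n_c, n_c))                         # pairwise "dissimilarity"
for i in range(n_c):
    for j in range(i + 1, n_c):
        X = np.vstack([trials(contrasts[i]), trials(contrasts[j])])
        y = np.r_[np.zeros(n_trials), np.ones(n_trials)]
        acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
        D[i, j] = D[j, i] = max(acc - 0.5, 0.0)  # above-chance accuracy

# Embed the decoding matrix; repeating this per time point would give a
# time-resolved contrast response estimate.
crf = MDS(n_components=1, dissimilarity="precomputed",
          random_state=0).fit_transform(D).ravel()
```

Nearby contrast levels are harder to tell apart than distant ones, so the embedded positions trace out how the neural response grows with contrast.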
Affiliation(s)
- Erin Goddard
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada. Present address: School of Psychology, UNSW, Sydney, Australia.
- Christopher Shooner
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
- Kathy T Mullen
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, Quebec, Canada
4
Hermann KL, Singh SR, Rosenthal IA, Pantazis D, Conway BR. Temporal dynamics of the neural representation of hue and luminance polarity. Nat Commun 2022; 13:661. [PMID: 35115511 PMCID: PMC8814185 DOI: 10.1038/s41467-022-28249-0]
Abstract
Hue and luminance contrast are basic visual features. Here we use multivariate analyses of magnetoencephalography data to investigate the timing of the neural computations that extract them, and whether they depend on common neural circuits. We show that hue and luminance-contrast polarity can be decoded from MEG data and, with lower accuracy, both features can be decoded across changes in the other feature. These results are consistent with the existence of both common and separable neural mechanisms. The decoding time course is earlier and more temporally precise for luminance polarity than hue, a result that does not depend on task, suggesting that luminance contrast is an updating signal that separates visual events. Meanwhile, cross-temporal generalization is slightly greater for representations of hue compared to luminance polarity, providing a neural correlate of the preeminence of hue in perceptual grouping and memory. Finally, decoding of luminance polarity varies depending on the hues used to obtain training and testing data. The pattern of results is consistent with observations that luminance contrast is mediated by both L-M and S cone sub-cortical mechanisms.
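The cross-feature decoding this abstract reports, e.g. training a hue classifier at one luminance polarity and testing it at the other, can be sketched as follows. Synthetic sensor patterns stand in for MEG data, and the shared "hue axis" is an assumption built into the simulation rather than a finding of the code itself.

```python
# Sketch of cross-feature generalization decoding (synthetic MEG patterns).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_sensors = 100, 30
hue_axis = rng.normal(size=n_sensors)        # hue code shared across polarity
pol_axis = rng.normal(size=n_sensors)        # luminance-polarity code

def epochs(hue, polarity):
    mean = hue * hue_axis + polarity * pol_axis
    return mean + rng.normal(size=(n_trials, n_sensors))

# Train the hue decoder on "light" stimuli, test it on "dark" stimuli.
X_train = np.vstack([epochs(-1, +1), epochs(+1, +1)])
X_test = np.vstack([epochs(-1, -1), epochs(+1, -1)])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

clf = LogisticRegression(max_iter=1000).fit(X_train, y)
within = clf.score(X_train, y)               # within-condition (train-set) score
across = clf.score(X_test, y)                # cross-polarity generalization
```

Above-chance `across` accuracy is the signature of a hue representation shared across luminance polarity; in real data, as in the abstract, it is typically lower than the within-condition score.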
Affiliation(s)
- Katherine L Hermann
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Department of Psychology, Stanford University, Stanford, CA, 94305, USA
- Shridhar R Singh
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Isabelle A Rosenthal
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Bevil R Conway
- Laboratory of Sensorimotor Research, National Eye Institute, Bethesda, MD, 20892, USA
- National Institute of Mental Health, Bethesda, MD, 20892, USA
5
Goddard E, Mullen KT. Attention selectively enhances stimulus information for surround over foveal stimulus representations in occipital cortex. J Vis 2021; 21:20. [PMID: 33749755 PMCID: PMC7991976 DOI: 10.1167/jov.21.3.20]
Abstract
By attending to part of a visual scene, we can prioritize processing of the most relevant visual information and so use our limited resources effectively. Previous functional magnetic resonance imaging (fMRI) work has shown that attention can increase not only overall blood-oxygen-level-dependent (BOLD) signal responsiveness but also the stimulus information available to a classifier. Here, we investigate how these effects vary across the visual field. We compare attention-enhanced fMRI-BOLD amplitude responses and classifier accuracy for foveal and surround stimulus regions using a set of four simple stimuli subdivided into a foveal region (1.4° diameter) and a surround region (15° diameter). We found dissociations between the effects of attention on the average response and on stimulus information. In early visual cortex, attention increased the amplitude of responses to both foveal and surround parts of the stimuli but increased classifier performance only for the surround stimulus. Conversely, ventral visual areas showed less change in average response but greater changes in decoding; unlike in early visual cortex, attention there produced similar changes in decoding for center and surround stimuli.
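The dissociation between average response and decodable information can be illustrated with a toy example: one synthetic "region" in which attention boosts the mean response but adds little pattern information, and another with the reverse. All effect sizes and shapes here are invented for illustration, not taken from the study.

```python
# Toy dissociation: mean amplitude vs. decodable pattern information.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_voxels = 100, 50
pattern = rng.normal(size=n_voxels)          # stimulus-specific voxel pattern
y = np.r_[np.zeros(n_trials), np.ones(n_trials)].astype(int)

def region(gain, info):
    """gain: additive mean boost; info: stimulus pattern separation."""
    base = np.vstack([info * s * pattern
                      for s in (-1, +1) for _ in range(n_trials)])
    return base + gain + rng.normal(size=(2 * n_trials, n_voxels))

amp_only = region(gain=1.0, info=0.05)       # big mean change, weak pattern
info_only = region(gain=0.0, info=0.5)       # no mean change, strong pattern

def mean_amp(X):
    return X.mean()

def decode(X):
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
```

Here `amp_only` wins on mean amplitude while `info_only` wins on classifier accuracy, mirroring how an amplitude increase and an information increase can come apart across visual areas.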
Affiliation(s)
- Erin Goddard
- Department of Ophthalmology & Visual Sciences, McGill Vision Research, McGill University, Montreal, Quebec, Canada. Present address: School of Psychology, University of New South Wales, Sydney, New South Wales, Australia.
- Kathy T Mullen
- Department of Ophthalmology & Visual Sciences, McGill Vision Research, McGill University, Montreal, Quebec, Canada
6
Goddard E, Mullen KT. fMRI representational similarity analysis reveals graded preferences for chromatic and achromatic stimulus contrast across human visual cortex. Neuroimage 2020; 215:116780. [PMID: 32276074 DOI: 10.1016/j.neuroimage.2020.116780]
Abstract
Human visual cortex is partitioned into different functional areas that, from lower to higher, become increasingly selective and responsive to complex feature dimensions. Here we use a Representational Similarity Analysis (RSA) of fMRI-BOLD signals to make quantitative comparisons across LGN and multiple visual areas of the low-level stimulus information encoded in the patterns of voxel responses. Our stimulus set was picked to target the four functionally distinct subcortical channels that input visual cortex from the LGN: two achromatic sinewave stimuli that favor the responses of the high-temporal magnocellular and high-spatial parvocellular pathways, respectively, and two chromatic stimuli isolating the L/M-cone opponent and S-cone opponent pathways, respectively. Each stimulus type had three spatial extents to sample both foveal and para-central visual field. With the RSA, we compare quantitatively the response specializations for individual stimuli and combinations of stimuli in each area and how these change across visual cortex. First, our results replicate the known response preferences for motion/flicker in the dorsal visual areas. In addition, we identify two distinct gradients along the ventral visual stream. In the early visual areas (V1-V3), the strongest differential representation is for the achromatic high spatial frequency stimuli, suitable for form vision, and a very weak differentiation of chromatic versus achromatic contrast. Emerging in ventral occipital areas (V4, VO1 and VO2), however, is an increasingly strong separation of the responses to chromatic versus achromatic contrast and a decline in the high spatial frequency representation. These gradients provide new insight into how visual information is transformed across the visual cortex.
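A minimal sketch of the RSA logic, assuming correlation-distance RDMs and a rank-correlation comparison between areas (common RSA choices, not necessarily this paper's exact pipeline); the synthetic voxel patterns and condition count are illustrative:

```python
# Minimal RSA sketch: build RDMs from voxel patterns, compare across areas.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_conditions, n_voxels = 12, 100             # e.g. 4 stimulus types x 3 sizes

def rdm(patterns):
    """Condition-by-condition dissimilarity matrix (1 - Pearson r)."""
    return squareform(pdist(patterns, metric="correlation"))

def rdm_similarity(a, b):
    iu = np.triu_indices(n_conditions, k=1)  # unique off-diagonal entries
    return spearmanr(rdm(a)[iu], rdm(b)[iu])[0]

area_v1 = rng.normal(size=(n_conditions, n_voxels))
area_v4 = area_v1 + 0.3 * rng.normal(size=(n_conditions, n_voxels))
noise_area = rng.normal(size=(n_conditions, n_voxels))

sim_v4 = rdm_similarity(area_v1, area_v4)    # related representation: high
sim_noise = rdm_similarity(area_v1, noise_area)  # unrelated: near zero
```

Comparing stimulus representations area by area through their RDMs, rather than through raw voxel responses, is what allows the quantitative gradients the abstract describes.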
Affiliation(s)
- Erin Goddard
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC, H3G1A4, Canada
- Kathy T Mullen
- McGill Vision Research, Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC, H3G1A4, Canada
7
Sato T, Nagai T, Kuriki I. Hue selectivity of collinear facilitation. J Opt Soc Am A Opt Image Sci Vis 2020; 37:A154-A162. [PMID: 32400538 DOI: 10.1364/josaa.382870]
Abstract
Collinear facilitation (CF) is the improvement in detection sensitivity for a target when two high-contrast flanking stimuli (flankers) share its visual properties. While it is known that CF does not occur between achromatic flankers and chromatic targets, or vice versa, it remains unclear whether CF occurs when the hues of the target and flankers differ. We measured CF for Gabor stimuli defined in an isoluminant plane, using isoluminant colors along the isolated cone-opponent axes and in two diagonal directions. The measured CF varied with the difference in hue between the target and flankers, and increased thresholds were also observed. These results suggest that CF exhibits hue selectivity and involves a suppressive as well as a facilitatory component. The hue selectivity profiles of these components imply that CF cannot be explained simply by assuming two independent cone-opponent mechanisms.