1. Morales-Torres R, Wing EA, Deng L, Davis SW, Cabeza R. Visual Recognition Memory of Scenes Is Driven by Categorical, Not Sensory, Visual Representations. J Neurosci 2024; 44:e1479232024. [PMID: 38569925; PMCID: PMC11112637; DOI: 10.1523/jneurosci.1479-23.2024]
Abstract
When we perceive a scene, our brain processes various types of visual information simultaneously, ranging from sensory features, such as line orientations and colors, to categorical features, such as objects and their arrangements. Whereas the role of sensory and categorical visual representations in predicting subsequent memory has been studied using isolated objects, their impact on memory for complex scenes remains largely unknown. To address this gap, we conducted an fMRI study in which female and male participants encoded pictures of familiar scenes (e.g., an airport picture) and later recalled them, while rating the vividness of their visual recall. Outside the scanner, participants had to distinguish each seen scene from three similar lures (e.g., three airport pictures). We modeled the sensory and categorical visual features of multiple scenes using both early and late layers of a deep convolutional neural network. Then, we applied representational similarity analysis to determine which brain regions represented stimuli in accordance with the sensory and categorical models. We found that categorical, but not sensory, representations predicted subsequent memory. In line with the previous result, only for the categorical model, the average recognition performance of each scene exhibited a positive correlation with the average visual dissimilarity between the item in question and its respective lures. These results strongly suggest that even in memory tests that ostensibly rely solely on visual cues (such as forced-choice visual recognition with similar distractors), memory decisions for scenes may be primarily influenced by categorical rather than sensory representations.
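To illustrate the analysis the abstract describes, representational similarity analysis (RSA) compares a model's pairwise stimulus dissimilarities with the brain's. The sketch below is a toy illustration on synthetic data, not the authors' code: the dimensions, the 1 − Pearson dissimilarity measure, and the Spearman comparison are common RSA conventions assumed here, not details taken from the paper.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def spearman(a, b):
    """Spearman rank correlation (ties are unlikely with continuous data)."""
    ra, rb = a.argsort().argsort(), b.argsort().argsort()
    return np.corrcoef(ra, rb)[0, 1]

def rsa_score(brain_patterns, model_patterns):
    """Compare brain and model RDMs over their upper triangles."""
    b, m = rdm(brain_patterns), rdm(model_patterns)
    iu = np.triu_indices_from(b, k=1)
    return spearman(b[iu], m[iu])

rng = np.random.default_rng(0)
cat = np.repeat([0, 1], 10)                        # 2 toy scene categories, 10 exemplars each
proto = rng.standard_normal((2, 512))              # category prototypes in "CNN-layer" space
layer_feats = proto[cat] + 0.3 * rng.standard_normal((20, 512))
voxels = layer_feats @ rng.standard_normal((512, 100))  # toy "brain" patterns driven by the model
print(round(rsa_score(voxels, layer_feats), 2))
```

Because the toy brain patterns inherit the model's category structure, the RDM comparison yields a high score; shuffling the stimulus labels would drive it toward zero.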
Affiliation(s)
- Erik A Wing
- Rotman Research Institute, Baycrest Health Sciences, Toronto, Ontario M6A 2E1, Canada
- Lifu Deng
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Simon W Davis
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Department of Neurology, Duke University School of Medicine, Durham, North Carolina 27708
- Roberto Cabeza
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
2. Liu C, Cao B, Zhang J. s-TBN: A New Neural Decoding Model to Identify Stimulus Categories From Brain Activity Patterns. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1934-1943. [PMID: 38722722; DOI: 10.1109/tnsre.2024.3399191]
Abstract
Neural decoding remains a challenging and active topic in neurocomputing. Recently, many studies have shown that brain network patterns, which contain rich spatiotemporal structural information, represent the brain's activation under external stimuli. In the traditional approach, brain network features are obtained directly using standard machine learning methods and provided to a classifier, which subsequently decodes the external stimuli. However, this approach cannot effectively extract the multidimensional structural information hidden in the brain network. Furthermore, studies on tensors have shown that tensor decomposition models can fully mine the unique spatiotemporal structural characteristics of data with a multidimensional structure. This research proposes a stimulus-constrained Tensor Brain Network (s-TBN) model that combines tensor decomposition with stimulus category-constraint information. The model was verified on real neuroimaging data obtained via magnetoencephalography and functional magnetic resonance imaging. Experimental results show that the s-TBN model achieves accuracy gains of greater than 11.06% and 18.46% over other methods on the two modal datasets. These results demonstrate the superiority of extracting discriminative characteristics using the s-TBN model, especially for decoding object stimuli with semantic information.
3. Ramanoël S, Durteste M, Bizeul A, Ozier-Lafontaine A, Bécu M, Sahel J, Habas C, Arleo A. Selective neural coding of object, feature, and geometry spatial cues in humans. Hum Brain Mapp 2022; 43:5281-5295. [PMID: 35776524; PMCID: PMC9812241; DOI: 10.1002/hbm.26002]
Abstract
Orienting in space requires the processing of visual spatial cues. The dominant hypothesis about the brain structures mediating the coding of spatial cues stipulates the existence of a hippocampal-dependent system for the representation of geometry and a striatal-dependent system for the representation of landmarks. However, this dual-system hypothesis is based on paradigms that presented spatial cues conveying either conflicting or ambiguous spatial information and that used the term landmark to refer to both discrete three-dimensional objects and wall features. Here, we test the hypothesis of complex activation patterns in the hippocampus and the striatum during visual coding. We also postulate that object-based and feature-based navigation are not equivalent instances of landmark-based navigation. We examined how the neural networks associated with geometry-, object-, and feature-based spatial navigation compared with a control condition in a two-choice behavioral paradigm using fMRI. We showed that the hippocampus was involved in all three types of cue-based navigation, whereas the striatum was more strongly recruited in the presence of geometric cues than object or feature cues. We also found that unique, specific neural signatures were associated with each spatial cue. Object-based navigation elicited a widespread pattern of activity in temporal and occipital regions relative to feature-based navigation. These findings extend the current view of a dual, juxtaposed hippocampal-striatal system for visual spatial coding in humans. They also provide novel insights into the neural networks mediating object versus feature spatial coding, suggesting a need to distinguish these two types of landmarks in the context of human navigation.
Affiliation(s)
- Stephen Ramanoël
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Université Côte d'Azur, LAMHESS, Nice, France
- Marion Durteste
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Alice Bizeul
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Marcia Bécu
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- José-Alain Sahel
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France; Fondation Ophtalmologique Rothschild, Paris, France; Department of Ophthalmology, The University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Christophe Habas
- CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France; Université Versailles St Quentin en Yveline, Paris, France
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
4. Kaanders P, Nili H, O'Reilly JX, Hunt L. Medial Frontal Cortex Activity Predicts Information Sampling in Economic Choice. J Neurosci 2021; 41:8403-8413. [PMID: 34413207; PMCID: PMC8496191; DOI: 10.1523/jneurosci.0392-21.2021]
Abstract
Decision-making not only requires agents to decide what to choose but also how much information to sample before committing to a choice. Previously established frameworks for economic choice argue for a deliberative process of evidence accumulation across time. These tacitly acknowledge a role for information sampling, in that decisions are only made once sufficient evidence is acquired, yet few experiments have explicitly placed information sampling under the participant's control. Here, we use fMRI to investigate the neural basis of information sampling in economic choice by allowing participants (n = 30, sex not recorded) to actively sample information in a multistep decision task. We show that medial frontal cortex (MFC) activity is predictive of further information sampling before choice. Choice difficulty (inverse value difference, keeping sensory difficulty constant) was also encoded in MFC, but this effect was explained away by the inclusion of information sampling as a coregressor in the general linear model. A distributed network of regions across the prefrontal cortex encoded key features of the sampled information at the time it was presented. We propose that MFC is an important controller of the extent to which information is gathered before committing to an economic choice. This role may explain why MFC activity has been associated with evidence accumulation in previous studies in which information sampling was an implicit rather than explicit feature of the decision.
SIGNIFICANCE STATEMENT: The decisions we make are determined by the information we have sampled before committing to a choice. Accumulator frameworks of decision-making tacitly acknowledge the need to sample further information during the evidence accumulation process until a decision boundary is reached. However, relatively few studies explicitly place this decision to sample further information under the participant's control. In this fMRI study, we find that MFC activity is related to information sampling decisions in a multistep economic choice task. This suggests that an important role of evidence representations within MFC may be to guide adaptive sequential decisions to sample further information before committing to a final decision.
Affiliation(s)
- Paula Kaanders
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, England
- Hamed Nili
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, England
- Jill X O'Reilly
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, England
- Laurence Hunt
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Psychiatry, University of Oxford, Oxford OX3 7JX, England
5. Shi R, Zhao Y, Cao Z, Liu C, Kang Y, Zhang J. Categorizing objects from MEG signals using EEGNet. Cogn Neurodyn 2021; 16:365-377. [PMID: 35401863; PMCID: PMC8934895; DOI: 10.1007/s11571-021-09717-7]
Abstract
Magnetoencephalography (MEG) signals have demonstrated their practical application to reading human minds. Current neural decoding studies have made great progress in building subject-wise decoding models that extract and discriminate the temporal/spatial features in neural signals. In this paper, we used a compact convolutional neural network, EEGNet, to build a common decoder across subjects, which deciphered the categories of objects (faces, tools, animals, and scenes) from MEG data. This study investigated the influence of the spatiotemporal structure of MEG on EEGNet's classification performance. Furthermore, we replaced EEGNet's convolution layers with two sets of parallel convolution structures to extract the spatial and temporal features simultaneously. Our results showed that the organization of the MEG data fed into EEGNet affects its classification accuracy, and that the parallel convolution structures are beneficial for extracting and fusing spatial and temporal MEG features. The classification accuracy demonstrated that EEGNet succeeds in building a common decoder model across subjects and outperforms several state-of-the-art feature-fusing methods.
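The idea of parallel spatial and temporal branches can be sketched in plain NumPy. This is a schematic toy with assumed sensor/time dimensions and fixed (not learned) kernels, not the paper's EEGNet variant; a real implementation would use a deep learning framework.

```python
import numpy as np

def temporal_branch(meg, kernel):
    """Convolve each sensor's time series with a temporal kernel
    (captures time-course/frequency structure)."""
    return np.apply_along_axis(
        lambda ts: np.convolve(ts, kernel, mode="valid"), 1, meg)

def spatial_branch(meg, weights):
    """Linear mixing across sensors at each time point
    (captures the spatial topography of activity)."""
    return weights @ meg

# toy MEG epoch: 32 sensors x 100 time samples (hypothetical shapes)
rng = np.random.default_rng(1)
meg = rng.standard_normal((32, 100))
t_feat = temporal_branch(meg, np.ones(5) / 5)               # 5-sample smoothing kernel
s_feat = spatial_branch(meg, rng.standard_normal((8, 32)))  # 8 spatial filters
fused = np.concatenate([t_feat.ravel(), s_feat.ravel()])    # parallel features fused for a classifier
print(t_feat.shape, s_feat.shape, fused.size)
```

The two branches see the same epoch but summarize it along different axes; concatenating their outputs is the simplest form of the feature fusion the abstract refers to.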
Affiliation(s)
- Ran Shi
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yanyu Zhao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Zhiyuan Cao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Chunyu Liu
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yi Kang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Jiacai Zhang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing, 100875, China
6. Liu C, Kang Y, Zhang L, Zhang J. Rapidly Decoding Image Categories From MEG Data Using a Multivariate Short-Time FC Pattern Analysis Approach. IEEE J Biomed Health Inform 2021; 25:1139-1150. [PMID: 32750957; DOI: 10.1109/jbhi.2020.3008731]
Abstract
Recent advances in multivariate analysis methods have led to the application of multivariate pattern analysis (MVPA) to investigate the interactions between brain regions using graph theory (functional connectivity, FC) and to decode visual categories from functional magnetic resonance imaging (fMRI) data in a continuous multicategory paradigm. To estimate stable FC patterns from fMRI data, previous studies required long periods on the order of several minutes, whereas the human brain categorizes visual stimuli within hundreds of milliseconds. Constructing short-time dynamic FC patterns on the order of milliseconds and decoding visual categories from them is a relatively novel concept. In this study, we developed a multivariate decoding algorithm based on FC patterns and applied it to magnetoencephalography (MEG) data. MEG data were recorded from participants presented with image stimuli in four categories (faces, scenes, animals, and tools). MEG data from 17 participants demonstrate that short-time dynamic FC patterns yield brain activity patterns that can be used to decode visual categories with high accuracy. Our results show that FC patterns change over the time window, and that FC patterns extracted in the window 0-200 ms after stimulus onset were the most stable. Further, categorization accuracy peaked (mean binary accuracy above 78.6% at the individual level) for FC patterns estimated within the 0-200 ms interval. These findings elucidate the underlying connectivity information during visual category processing on a relatively small time scale and demonstrate that the contribution of FC patterns to categorization fluctuates over time.
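The core feature construction here, sliding-window functional connectivity patterns, can be sketched as follows. This is a toy illustration on random data with assumed window sizes; the study's actual sensor counts, windowing, and classifier are not reproduced.

```python
import numpy as np

def short_time_fc(meg, win, step):
    """Correlation (functional connectivity) matrix of the sensors in each
    sliding window; the upper triangle is vectorized as a decoding feature."""
    n_sens, n_t = meg.shape
    iu = np.triu_indices(n_sens, k=1)
    feats = []
    for start in range(0, n_t - win + 1, step):
        fc = np.corrcoef(meg[:, start:start + win])
        feats.append(fc[iu])
    return np.array(feats)  # (n_windows, n_sensor_pairs)

rng = np.random.default_rng(2)
meg = rng.standard_normal((10, 120))  # toy epoch: 10 sensors x 120 samples
F = short_time_fc(meg, win=40, step=20)
print(F.shape)
```

Each row of `F` is one short-time FC pattern; stacking them across trials gives the feature matrix a classifier would be trained on, and comparing rows across windows shows how the FC pattern evolves over time.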
7. Kaiser D, Inciuraite G, Cichy RM. Rapid contextualization of fragmented scene information in the human visual system. Neuroimage 2020; 219:117045. [PMID: 32540354; DOI: 10.1016/j.neuroimage.2020.117045]
Abstract
Real-world environments are extremely rich in visual information. At any given moment in time, only a fraction of this information is available to the eyes and the brain, rendering naturalistic vision a collection of incomplete snapshots. Previous research suggests that in order to successfully contextualize this fragmented information, the visual system sorts inputs according to spatial schemata, that is, knowledge about the typical composition of the visual world. Here, we used a large set of 840 different natural scene fragments to investigate whether this sorting mechanism can operate across the diverse visual environments encountered during real-world vision. We recorded brain activity using electroencephalography (EEG) while participants viewed incomplete scene fragments at fixation. Using representational similarity analysis on the EEG data, we tracked the fragments' cortical representations across time. We found that the fragments' typical vertical location within the environment (top or bottom) predicted their cortical representations, indexing a sorting of information according to spatial schemata. The fragments' cortical representations were most strongly organized by their vertical location at around 200 ms after image onset, suggesting rapid perceptual sorting of information according to spatial schemata. In control analyses, we show that this sorting is flexible with respect to visual features: it is neither explained by commonalities between visually similar indoor and outdoor scenes, nor by the feature organization emerging from a deep neural network trained on scene categorization. Demonstrating such a flexible sorting across a wide range of visually diverse scenes suggests a contextualization mechanism suitable for complex and variable real-world environments.
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, UK
- Gabriele Inciuraite
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
8. Kaiser D, Häberle G, Cichy RM. Real-world structure facilitates the rapid emergence of scene category information in visual brain signals. J Neurophysiol 2020; 124:145-151. [PMID: 32519577; DOI: 10.1152/jn.00164.2020]
Abstract
In everyday life, our visual surroundings are not arranged randomly but structured in predictable ways. Although previous studies have shown that the visual system is sensitive to such structural regularities, it remains unclear whether the presence of an intact structure in a scene also facilitates the cortical analysis of the scene's categorical content. To address this question, we conducted an EEG experiment during which participants viewed natural scene images that were either "intact" (with their quadrants arranged in typical positions) or "jumbled" (with their quadrants arranged in atypical positions). We then used multivariate pattern analysis to decode the scenes' category from the EEG signals (e.g., whether the participant had seen a church or a supermarket). The category of intact scenes could be decoded rapidly, within the first 100 ms of visual processing. Critically, within 200 ms of processing, category decoding was more pronounced for the intact scenes compared with the jumbled scenes, suggesting that the presence of real-world structure facilitates the extraction of scene category information. No such effect was found when the scenes were presented upside down, indicating that the facilitation of neural category information is indeed linked to a scene's adherence to typical real-world structure rather than to differences in visual features between intact and jumbled scenes. Our results demonstrate that early stages of categorical analysis in the visual system exhibit tuning to the structure of the world that may facilitate the rapid extraction of behaviorally relevant information from rich natural environments.
NEW & NOTEWORTHY: Natural scenes are structured, with different types of information appearing in predictable locations. Here, we use EEG decoding to show that the visual brain uses this structure to efficiently analyze scene content. During early visual processing, the category of a scene (e.g., a church vs. a supermarket) could be more accurately decoded from EEG signals when the scene adhered to its typical spatial structure compared with when it did not.
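The decoding step, training a classifier to tell scene categories apart from multichannel patterns, can be illustrated with a minimal cross-validated nearest-centroid decoder. This is a toy sketch on synthetic two-category data, not the study's classifier, preprocessing, or EEG features.

```python
import numpy as np

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding of category labels from
    multichannel patterns (a minimal stand-in for MVPA classifiers)."""
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for f in folds:
        train = np.setdiff1d(idx, f)
        # class centroids estimated from the training folds only
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y)}
        for i in f:
            pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
            correct += pred == y[i]
    return correct / len(y)

# toy "EEG patterns": two scene categories with separated means
rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((40, 30)) + 1.5,
               rng.standard_normal((40, 30)) - 1.5])
y = np.array([0] * 40 + [1] * 40)
print(decode_accuracy(X, y))
```

In the study's logic, this accuracy would be computed separately per time point and condition (intact vs. jumbled), and the comparison of the two accuracy curves carries the result.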
Affiliation(s)
- Daniel Kaiser
- Department of Psychology, University of York, York, United Kingdom
- Greta Häberle
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
9. Effects of Spatial Frequency Filtering Choices on the Perception of Filtered Images. Vision (Basel) 2020; 4:vision4020029. [PMID: 32466442; PMCID: PMC7355859; DOI: 10.3390/vision4020029]
Abstract
The early visual system is composed of spatial frequency-tuned channels that break an image into its individual frequency components. Researchers therefore commonly filter images by spatial frequency to draw conclusions about the differential importance of high versus low spatial frequency image content. Here, we show how simple decisions about how the images are filtered, and how they are displayed on the screen, can result in drastically different behavioral outcomes. We show that jointly normalizing the contrast of the stimuli is critical in order to draw accurate conclusions about the influence of the different spatial frequencies, as images of the real world naturally have higher contrast energy at low than at high spatial frequencies. Furthermore, the specific choice of filter shape can lead to contradictory results about whether high or low spatial frequencies are more useful for understanding image content. Finally, we show that the manner in which high spatial frequency content is displayed on the screen influences how recognizable an image is. Previous findings that make claims about the visual system's use of certain spatial frequency bands should be revisited, especially if their methods sections do not make clear what filtering choices were made.
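The two manipulations the abstract warns about, filter shape and joint contrast normalization, can be sketched as follows. This is a minimal toy sketch under stated assumptions: a hard-edged ideal band-pass filter (real studies often use Gaussian or Butterworth shapes, which is exactly the choice at issue) and RMS-contrast matching on a random test image.

```python
import numpy as np

def bandpass_sf(img, lo, hi):
    """Keep only Fourier components with radial frequency in [lo, hi)
    (an ideal, hard-edged filter; filter shape is itself a design choice)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy, xx)
    f[(r < lo) | (r >= hi)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def match_rms_contrast(images, target=0.2):
    """Jointly normalize RMS contrast so low- and high-SF versions are
    compared at equal contrast energy."""
    return [target * (im - im.mean()) / im.std() for im in images]

rng = np.random.default_rng(3)
img = rng.standard_normal((64, 64))
low = bandpass_sf(img, 0, 8)    # low spatial frequencies
high = bandpass_sf(img, 8, 32)  # high spatial frequencies
low_n, high_n = match_rms_contrast([low, high])
print(round(low_n.std(), 3), round(high_n.std(), 3))
```

Without the normalization step, `low` would typically carry more contrast energy than `high` for natural images, confounding any behavioral comparison between the two bands.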
10. Min SH, Reynaud A, Hess RF. Interocular Differences in Spatial Frequency Influence the Pulfrich Effect. Vision (Basel) 2020; 4:vision4010020. [PMID: 32244910; PMCID: PMC7157571; DOI: 10.3390/vision4010020]
Abstract
The Pulfrich effect is a stereo-motion phenomenon. When the two eyes are presented with visual targets moving in fronto-parallel motion at different luminances or contrasts, the percept is of a target moving in depth. This percept of motion-in-depth is thought to occur because lower luminance or contrast slows visual processing. Spatial properties of an image, such as spatial frequency and size, have also been shown to influence the speed of visual processing. In this study, we use a paradigm to measure interocular delay based on the Pulfrich effect, in which a structure-from-motion-defined cylinder, composed of Gabor elements displayed at different interocular phases, rotates in depth. This allows us to measure any relative interocular processing delay while independently manipulating the spatial frequency and size of the micro-elements (i.e., Gabor patches). We show that interocular differences in spatial frequency, but not interocular differences in the size of image features, produce interocular processing delays.
11. Mohsenzadeh Y, Mullin C, Oliva A, Pantazis D. The perceptual neural trace of memorable unseen scenes. Sci Rep 2019; 9:6033. [PMID: 30988333; PMCID: PMC6465597; DOI: 10.1038/s41598-019-42429-x]
Abstract
Some scenes are more memorable than others: they become fixed in our minds, with consistency across observers and time scales. While memory mechanisms are traditionally associated with the end stages of perception, recent behavioral studies suggest that the features driving these memorability effects are extracted early on, and in an automatic fashion. This raises the question: is the neural signal of memorability detectable during the early perceptual encoding phases of visual processing? Using the high temporal resolution of magnetoencephalography (MEG) during a rapid serial visual presentation (RSVP) task, we traced the neural temporal signature of memorability across the brain. We found an early and prolonged memorability-related signal under a challenging ultra-rapid viewing condition, across a network of regions in both the dorsal and ventral streams. This enhanced encoding could be the key to successful storage and recognition.
Affiliation(s)
- Yalda Mohsenzadeh
- Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Caitlin Mullin
- Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Aude Oliva
- Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
12. Treder MS. Improving SNR and Reducing Training Time of Classifiers in Large Datasets via Kernel Averaging. Brain Inform 2018. [DOI: 10.1007/978-3-030-05587-5_23]