1. Schilling A, Gerum R, Boehm C, Rasheed J, Metzner C, Maier A, Reindl C, Hamer H, Krauss P. Deep learning based decoding of single local field potential events. Neuroimage 2024; 297:120696. [PMID: 38909761] [DOI: 10.1016/j.neuroimage.2024.120696]
Abstract
How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. Thus, we here demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an autoencoder network to reduce the dimensions of single local field potential (LFP) events and create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; hence, LFP shapes can be used to determine the direction of information flux in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding which so far had only been demonstrated for multi-channel population coding.
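The pipeline described in this abstract (compress single events to a low-dimensional code, cluster the codes, then decode the cluster centroids) can be sketched in a few lines. Everything below is illustrative: the synthetic "LFP events", the tied-weight linear autoencoder, and the two-cluster k-means are stand-ins for the paper's actual data, network architecture, and clustering, chosen only to make the idea concrete and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial "LFP events": two hypothetical prototype shapes plus noise
t = np.linspace(0, 1, 64)
protos = np.stack([np.sin(2 * np.pi * 3 * t),            # oscillatory event
                   np.exp(-((t - 0.5) / 0.1) ** 2)])     # transient deflection
labels = rng.integers(0, 2, 200)
X = protos[labels] + 0.1 * rng.standard_normal((200, 64))
X -= X.mean(axis=0)

# Minimal tied-weight linear autoencoder (64 -> 2 -> 64), gradient descent on MSE
W = 0.01 * rng.standard_normal((64, 2))
for _ in range(500):
    err = (X @ W) @ W.T - X                       # reconstruction error
    grad = X.T @ (err @ W) + (err.T @ X) @ W      # dMSE/dW for tied weights
    W -= 1e-4 * grad / len(X)

Z = X @ W                                         # 2-D code per single-trial event

# Naive k-means (k=2) in the latent space
C = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(20):
    assign = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
    C = np.stack([Z[assign == k].mean(0) if np.any(assign == k) else C[k]
                  for k in (0, 1)])

# Decode centroids back to waveform space -> prototypical event shapes
prototypes = C @ W.T
```

Clustering in the latent space recovers the two injected event families; in the paper the decoder is a learned network rather than the transposed encoder used in this sketch.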
Affiliation(s)
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Richard Gerum: Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Department of Physics and Center for Vision Research, York University, Toronto, Canada
- Claudia Boehm: Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Jwan Rasheed: Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Claus Metzner: Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Andreas Maier: Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Caroline Reindl: Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Hajo Hamer: Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Patrick Krauss: Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
2. Metzner C, Yamakou ME, Voelkl D, Schilling A, Krauss P. Quantifying and Maximizing the Information Flux in Recurrent Neural Networks. Neural Comput 2024; 36:351-384. [PMID: 38363658] [DOI: 10.1162/neco_a_01651]
Abstract
Free-running recurrent neural networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information I[x(t), x(t+1)] between subsequent system states x(t). Although previous studies have shown that I depends on the statistics of the network's connection weights, it is unclear how to maximize I systematically and how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that, in networks with moderately strong connections, the mutual information I is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of I[x(t), x(t+1)] reveals a general design principle for the weight matrices, enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
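A minimal sketch of the two quantities compared in this abstract, using a tiny stochastic binary network so that the mutual information between subsequent states can be estimated directly from the empirical joint distribution over the 2^N states. The weight scale and network size are arbitrary assumptions, not the paper's evolutionarily optimized systems.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 50_000                       # tiny network keeps the MI tractable
W = 1.5 * rng.standard_normal((N, N))  # random weights (illustrative, not optimized)

# Free-running stochastic binary dynamics: x_i(t+1) ~ Bernoulli(sigmoid(W x(t)))
states = np.empty((T, N), dtype=int)
x = rng.integers(0, 2, N)
for step in range(T):
    states[step] = x
    p = 1.0 / (1.0 + np.exp(-(W @ x)))
    x = (rng.random(N) < p).astype(int)

# Mutual information I[x(t), x(t+1)] from the empirical joint over state indices
idx = states @ (2 ** np.arange(N)[::-1])
joint = np.zeros((2 ** N, 2 ** N))
np.add.at(joint, (idx[:-1], idx[1:]), 1.0)
joint /= joint.sum()
px, py = joint.sum(1), joint.sum(0)
nz = joint > 0
mi = np.sum(joint[nz] * np.log2(joint[nz] / (px[:, None] * py[None, :])[nz]))

# Cheap proxy: root-mean-square of pairwise Pearson correlations between neurons
corr = np.corrcoef(states.T)
rms = np.sqrt(np.mean(corr[~np.eye(N, dtype=bool)] ** 2))
```

In large networks only `rms` remains computable; the paper's result is that, for moderately strong weights, it is approximately a monotonic transformation of `mi`.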
Affiliation(s)
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Biophysics Lab, Friedrich-Alexander University of Erlangen-Nuremberg, 91054 Erlangen, Germany
- Marius E Yamakou: Department of Data Science, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Dennis Voelkl: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91054 Erlangen, Germany
3. Schilling A, Sedley W, Gerum R, Metzner C, Tziridis K, Maier A, Schulze H, Zeng FG, Friston KJ, Krauss P. Predictive coding and stochastic resonance as fundamental principles of auditory phantom perception. Brain 2023; 146:4809-4825. [PMID: 37503725] [PMCID: PMC10690027] [DOI: 10.1093/brain/awad255]
Abstract
Mechanistic insight is achieved only when experiments are employed to test formal or computational models. Furthermore, in analogy to lesion studies, phantom perception may serve as a vehicle to understand the fundamental processing principles underlying healthy auditory perception. With a special focus on tinnitus, the prime example of auditory phantom perception, we review recent work at the intersection of artificial intelligence, psychology and neuroscience. In particular, we discuss why everyone with tinnitus suffers from (at least hidden) hearing loss, but not everyone with hearing loss suffers from tinnitus. We argue that intrinsic neural noise is generated and amplified along the auditory pathway as a compensatory mechanism to restore normal hearing, based on adaptive stochastic resonance. The increased neural noise can then be misinterpreted as auditory input and perceived as tinnitus. This mechanism can be formalized in the Bayesian brain framework, where the percept (posterior) combines a prior prediction (the brain's expectations) with the likelihood (the bottom-up neural signal). A higher mean and lower variance (i.e. enhanced precision) of the likelihood shifts the posterior, evincing a misinterpretation of sensory evidence, which may be further confounded by plastic changes in the brain that underwrite prior predictions. Hence, two fundamental processing principles provide the most explanatory power for the emergence of auditory phantom perceptions: predictive coding as a top-down mechanism and adaptive stochastic resonance as a complementary bottom-up mechanism. We conclude that both principles also play a crucial role in healthy auditory perception. Finally, in the context of neuroscience-inspired artificial intelligence, both processing principles may serve to improve contemporary machine learning techniques.
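The Bayesian mechanism described here (a likelihood with higher mean and enhanced precision pulling the posterior away from a "silence" prior) reduces, in the simplest Gaussian case, to precision-weighted averaging. The numbers below are purely illustrative, not fitted to any data:

```python
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    """Precision-weighted combination of a Gaussian prior and Gaussian likelihood."""
    precision = 1.0 / var_prior + 1.0 / var_like
    mu_post = (mu_prior / var_prior + mu_like / var_like) / precision
    return mu_post, 1.0 / precision

# Percept of 'sound intensity' in silence; the prior expects 0 (no sound).
# Normal hearing: broad, zero-mean likelihood -> posterior stays at 0.
mu_normal, _ = gaussian_posterior(0.0, 1.0, 0.0, 1.0)

# After compensatory noise amplification: likelihood mean raised and variance
# reduced (enhanced precision) -> posterior shifts toward 'sound present'.
mu_tinnitus, _ = gaussian_posterior(0.0, 1.0, 0.8, 0.2)
```

With the illustrative values above, raising the likelihood mean to 0.8 while shrinking its variance to 0.2 moves the posterior mean from 0 to 2/3, i.e. toward a phantom percept despite an unchanged prior.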
Affiliation(s)
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- William Sedley: Translational and Clinical Research Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Richard Gerum: Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany; Department of Physics and Astronomy and Center for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Andreas Maier: Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
- Holger Schulze: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany
- Fan-Gang Zeng: Center for Hearing Research, Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology–Head and Neck Surgery, University of California Irvine, Irvine, CA 92697, USA
- Karl J Friston: Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, London WC1N 3AR, UK
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, 91054 Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, 91058 Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, 91058 Erlangen, Germany
4. Metzner C, Schilling A, Traxdorf M, Schulze H, Tziridis K, Krauss P. Extracting continuous sleep depth from EEG data without machine learning. Neurobiol Sleep Circadian Rhythms 2023; 14:100097. [PMID: 37275555] [PMCID: PMC10238579] [DOI: 10.1016/j.nbscr.2023.100097]
Abstract
The human sleep cycle has been divided into discrete sleep stages that can be recognized in electroencephalographic (EEG) and other bio-signals by trained specialists or machine learning systems. It is, however, unclear whether these human-defined stages can be re-discovered with unsupervised methods of data analysis, using only a minimal amount of generic pre-processing. Based on EEG data recorded overnight from sleeping human subjects, we investigate the degree of clustering of the sleep stages using the General Discrimination Value as a quantitative measure of class separability. Virtually no clustering is found in the raw data, even after transforming the EEG signals of each 30-s epoch from the time domain into the more informative frequency domain. However, a Principal Component Analysis (PCA) of these epoch-wise frequency spectra reveals that the sleep stages separate significantly better in the low-dimensional sub-space of certain PCA components. In particular, the component C1(t) can serve as a robust, continuous 'master variable' that encodes the depth of sleep and therefore correlates strongly with the 'hypnogram', a common plot of the discrete sleep stages over time. Moreover, C1(t) shows persistent trends during extended time periods where the sleep stage is constant, suggesting that sleep may be better understood as a continuum. These intriguing properties of C1(t) are not only relevant for understanding brain dynamics during sleep, but might also be exploited in low-cost single-channel sleep tracking devices for private and clinical use.
Affiliation(s)
- Claus Metzner: Neuroscience Lab, Experimental Otolaryngology, University Hospital, Erlangen, Germany
- Achim Schilling: Neuroscience Lab, Experimental Otolaryngology, University Hospital, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
- Maximilian Traxdorf: Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University, Nürnberg, Germany
- Holger Schulze: Neuroscience Lab, Experimental Otolaryngology, University Hospital, Erlangen, Germany
- Konstantin Tziridis: Neuroscience Lab, Experimental Otolaryngology, University Hospital, Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, Experimental Otolaryngology, University Hospital, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany; Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
5. Stoewer P, Schilling A, Maier A, Krauss P. Neural network based formation of cognitive maps of semantic spaces and the putative emergence of abstract concepts. Sci Rep 2023; 13:3644. [PMID: 36871003] [PMCID: PMC9985610] [DOI: 10.1038/s41598-023-30307-6]
Abstract
How do we make sense of the input from our sensory organs, and how do we put the perceived information into the context of our past experiences? The hippocampal-entorhinal complex plays a major role in the organization of memory and thought. The formation of, and navigation in, cognitive maps of arbitrary mental spaces via place and grid cells can serve as a representation of memories and experiences and their relations to each other. The multi-scale successor representation has been proposed as the mathematical principle underlying place and grid cell computations. Here, we present a neural network which learns a cognitive map of a semantic space based on 32 different animal species encoded as feature vectors. The network successfully learns the similarities between the species and constructs a cognitive map of 'animal space' based on the principle of successor representations, with an accuracy of around 30%, which is close to the theoretical maximum given that every animal species has more than one possible successor, i.e. nearest neighbor in feature space. Furthermore, a hierarchical structure, i.e. different scales of cognitive maps, can be modeled based on multi-scale successor representations. We find that, in fine-grained cognitive maps, the animal vectors are evenly distributed in feature space. In contrast, in coarse-grained maps, animal vectors are highly clustered according to their biological class, i.e. amphibians, mammals and insects. This could be a putative mechanism enabling the emergence of new, abstract semantic concepts. Finally, even completely new or incomplete input can be represented by interpolation of the representations from the cognitive map with remarkably high accuracy of up to 95%. We conclude that the successor representation can serve as a weighted pointer to past memories and experiences, and may therefore be a crucial building block for including prior knowledge and deriving context knowledge from novel input. Thus, our model provides a new tool to complement contemporary deep learning approaches on the road towards artificial general intelligence.
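The successor representation that the network approximates has a simple closed form: M = sum_t gamma^t T^t = (I - gamma T)^(-1) for a transition matrix T and discount factor gamma, and the multi-scale maps correspond to different gammas. A toy sketch on a ring graph, a hypothetical stand-in for the animal feature space:

```python
import numpy as np

n_states, gamma = 8, 0.9

# Random-walk transition matrix on a ring graph (each state has two neighbors),
# a toy stand-in for a feature space with nearest-neighbor structure
T = np.zeros((n_states, n_states))
for i in range(n_states):
    T[i, (i - 1) % n_states] = T[i, (i + 1) % n_states] = 0.5

# Closed-form successor representation: M = sum_t gamma^t T^t = (I - gamma T)^-1
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# A smaller discount yields a finer, more local map (multi-scale maps = several gammas)
M_fine = np.linalg.inv(np.eye(n_states) - 0.3 * T)
```

Row i of M is the discounted expected future occupancy starting from state i: nearby states receive higher weight, which is exactly the similarity structure a cognitive map encodes.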
Affiliation(s)
- Paul Stoewer: Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Achim Schilling: Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany
- Andreas Maier: Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Patrick Krauss: Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany
6. Cesari M, Egger K, Stefani A, Bergmann M, Ibrahim A, Brandauer E, Högl B, Heidbreder A. Differentiation of central disorders of hypersomnolence with manual and artificial-intelligence-derived polysomnographic measures. Sleep 2023; 46:6862127. [PMID: 36455881] [DOI: 10.1093/sleep/zsac288]
Abstract
Differentiation of central disorders of hypersomnolence (DOH) is challenging but important for patient care. This study investigated whether biomarkers derived from sleep structure, evaluated both by manual scoring and by artificial intelligence (AI) algorithms, allow distinction between patients with different DOH. We included video-polysomnography data of 40 patients with narcolepsy type 1 (NT1), 26 with narcolepsy type 2 (NT2), 23 with idiopathic hypersomnia (IH) and 54 participants with subjective excessive daytime sleepiness (sEDS). Sleep experts manually scored sleep stages. A previously validated AI algorithm was employed to obtain automatic hypnograms and hypnodensity graphs (where each epoch is represented as a mixture of sleep stage probabilities). A total of 1003 features describing sleep architecture and instability were extracted from the manual/automatic hypnograms and hypnodensity graphs. After feature selection, random forest classifiers were trained and tested in a 5-fold cross-validation scheme to distinguish groups pairwise (NT1-vs-NT2, NT1-vs-IH, ...) and single groups from the pooled remaining ones (NT1-vs-rest, NT2-vs-rest, ...). The accuracy/F1-score values obtained in the test sets were: 0.74 ± 0.04/0.79 ± 0.05 (NT1-vs-NT2), 0.89 ± 0.09/0.91 ± 0.08 (NT1-vs-IH), 0.93 ± 0.06/0.91 ± 0.07 (NT1-vs-sEDS), 0.88 ± 0.04/0.80 ± 0.07 (NT1-vs-rest), 0.65 ± 0.10/0.70 ± 0.09 (NT2-vs-IH), 0.72 ± 0.12/0.60 ± 0.10 (NT2-vs-sEDS), 0.54 ± 0.19/0.38 ± 0.13 (NT2-vs-rest), 0.57 ± 0.11/0.35 ± 0.18 (IH-vs-sEDS), 0.71 ± 0.08/0.35 ± 0.10 (IH-vs-rest) and 0.76 ± 0.08/0.71 ± 0.13 (sEDS-vs-rest). The results confirm previous findings on sleep instability in patients with NT1 and show that combining manual and automatic AI-based sleep analysis could be useful for a better distinction of NT2 from IH, but no precise sleep biomarker of NT2 or IH could be identified. Validation in a larger and multi-centric cohort is needed to confirm these findings.
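A sketch of the evaluation scheme used above: pairwise classification with 5-fold cross-validation, reporting accuracy and F1 per fold. To stay dependency-free, a nearest-centroid rule on synthetic features stands in for the paper's random forests on polysomnographic features; the group sizes mirror the NT1-vs-IH comparison, but the data are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 5-D feature vectors for two hypothetical patient groups
Xa = rng.normal(0.0, 1.0, (40, 5))   # "NT1-like" group (synthetic)
Xb = rng.normal(1.2, 1.0, (23, 5))   # "IH-like" group (synthetic)
X = np.vstack([Xa, Xb])
y = np.array([0] * 40 + [1] * 23)

def f1_score(y_true, y_pred, pos=1):
    tp = np.sum((y_pred == pos) & (y_true == pos))
    fp = np.sum((y_pred == pos) & (y_true != pos))
    fn = np.sum((y_pred != pos) & (y_true == pos))
    return 2.0 * tp / (2.0 * tp + fp + fn) if tp else 0.0

# 5-fold cross-validation; nearest-centroid stands in for the random forest
idx = rng.permutation(len(y))
folds = np.array_split(idx, 5)
accs, f1s = [], []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (((X[test] - c1) ** 2).sum(1) < ((X[test] - c0) ** 2).sum(1)).astype(int)
    accs.append(np.mean(pred == y[test]))
    f1s.append(f1_score(y[test], pred))
```

Reporting mean ± standard deviation of `accs` and `f1s` over the five folds reproduces the format of the results listed in the abstract.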
Affiliation(s)
- Matteo Cesari: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Kristin Egger: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Ambra Stefani: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Melanie Bergmann: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Abubaker Ibrahim: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Elisabeth Brandauer: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Birgit Högl: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Anna Heidbreder: Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
7. Huijben IAM, Hermans LWA, Rossi AC, Overeem S, van Gilst MM, van Sloun RJG. Interpretation and further development of the hypnodensity representation of sleep structure. Physiol Meas 2023; 44. [PMID: 36595329] [DOI: 10.1088/1361-6579/aca641]
Abstract
Objective. The recently introduced hypnodensity graph provides a probability distribution over sleep stages per data window (i.e. an epoch). This work explored whether this representation reveals continuities that can only be attributed to intra- and inter-rater disagreement of expert scorings, or also to the co-occurrence of sleep stage-dependent features within one epoch. Approach. We proposed a simplified model for time series like the ones measured during sleep, and a second model to describe the annotation process by an expert. Generating data according to these models enabled controlled experiments to investigate the interpretation of the hypnodensity graph. Moreover, the influence of both the supervised training strategy and the softmax non-linearity was investigated. Polysomnography recordings of 96 healthy sleepers (of which 11 were used as an independent test set) were subsequently used to transfer conclusions to real data. Main results. A hypnodensity graph, predicted by a supervised neural classifier, represents the probability with which the sleep expert(s) assigned a label to an epoch. It thus reflects annotator behavior, and is thereby only indirectly linked to the ratio of sleep stage-dependent features in the epoch. Unsupervised training was shown to result in hypnodensity graphs that were slightly less dependent on this annotation process, resulting in, on average, higher-entropy distributions over sleep stages (H = 0.41 for unsupervised versus H = 0.29 for supervised training). Moreover, pre-softmax predictions were, for both training strategies, found to better reflect the ratio of sleep stage-dependent characteristics in an epoch than their post-softmax counterparts (i.e. the hypnodensity graph). In real data, this was observed from the linear relation between pre-softmax N3 predictions and the amount of delta power. Significance. This study provides insights into, and proposes new, representations of sleep that may enhance our comprehension of sleep and sleep disorders.
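The relation between pre-softmax scores, a hypnodensity column, and its entropy can be made concrete in a few lines; the five-stage scores below are hypothetical examples, not outputs of any trained classifier:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_bits(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log2(p)).sum(axis=-1))

# Hypothetical pre-softmax scores for one 30-s epoch over five stages
# (W, N1, N2, N3, REM); N2 and N3 features co-occur in this epoch.
z = np.array([0.2, 0.1, 2.0, 1.8, 0.1])

p = softmax(z)          # one column of a hypnodensity graph
H = entropy_bits(p)     # uncertainty of the stage distribution in bits

# The softmax compresses differences in the raw scores: scaling the scores
# (a "sharper" classifier) lowers the entropy while the stage ranking, and
# hence the argmax hypnogram, stays the same.
H_sharp = entropy_bits(softmax(3.0 * z))
```

This illustrates why the pre-softmax scores can carry information about the mixture of stage-dependent features that the normalized hypnodensity column partly discards.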
Affiliation(s)
- Iris A M Huijben: Dept. of Electrical Engineering, Eindhoven University of Technology, 5612 AP Eindhoven, The Netherlands; Onera Health, 5617 BD Eindhoven, The Netherlands
- Lieke W A Hermans: Dept. of Electrical Engineering, Eindhoven University of Technology, 5612 AP Eindhoven, The Netherlands
- Sebastiaan Overeem: Dept. of Electrical Engineering, Eindhoven University of Technology, 5612 AP Eindhoven, The Netherlands; Sleep Medicine Center Kempenhaeghe, 5591 VE Heeze, The Netherlands
- Merel M van Gilst: Dept. of Electrical Engineering, Eindhoven University of Technology, 5612 AP Eindhoven, The Netherlands; Sleep Medicine Center Kempenhaeghe, 5591 VE Heeze, The Netherlands
- Ruud J G van Sloun: Dept. of Electrical Engineering, Eindhoven University of Technology, 5612 AP Eindhoven, The Netherlands
8. Garibyan A, Schilling A, Boehm C, Zankl A, Krauss P. Neural correlates of linguistic collocations during continuous speech perception. Front Psychol 2022; 13:1076339. [PMID: 36619132] [PMCID: PMC9822706] [DOI: 10.3389/fpsyg.2022.1076339]
Abstract
Language is fundamentally predictable, both at a higher schematic level and at the level of individual lexical items. Regarding predictability at the lexical level, collocations are frequent co-occurrences of words that are often characterized by a high strength of association. So far, psycho- and neurolinguistic studies have mostly employed highly artificial experimental paradigms in the investigation of collocations, focusing on the processing of single words or isolated sentences. In contrast, here we analyze EEG brain responses recorded during stimulation with continuous speech, i.e., audio books. We find that the N400 response to collocations differs significantly from that to non-collocations, whereas the effect varies with respect to cortical region (anterior/posterior) and laterality (left/right). Our results are in line with studies using continuous speech, and they mostly contradict those using artificial paradigms and stimuli. To the best of our knowledge, this is the first neurolinguistic study on collocations using continuous speech stimulation.
Affiliation(s)
- Armine Garibyan: Chair of English Philology and Linguistics, University Erlangen-Nuremberg, Erlangen, Germany; Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Claudia Boehm: Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Alexandra Zankl: Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany
- Patrick Krauss (corresponding author): Linguistics Lab, University Erlangen-Nuremberg, Erlangen, Germany; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nuremberg, Erlangen, Germany; Pattern Recognition Lab, University Erlangen-Nuremberg, Erlangen, Germany
9. Classification at the accuracy limit: facing the problem of data ambiguity. Sci Rep 2022; 12:22121. [PMID: 36543849] [PMCID: PMC9772417] [DOI: 10.1038/s41598-022-26498-z]
Abstract
Data classification, the process of analyzing data and organizing it into categories or clusters, is a fundamental computing task of natural and artificial information processing systems. Both supervised classification and unsupervised clustering work best when the input vectors are distributed over the data space in a highly non-uniform way. However, these tasks become challenging in weakly structured data sets, where a significant fraction of data points is located in between the regions of high point density. We derive the theoretical limit of classification accuracy that arises from this overlap of data categories. By using a surrogate data generation model with adjustable statistical properties, we show that sufficiently powerful classifiers based on completely different principles, such as perceptrons and Bayesian models, all perform at this universal accuracy limit under ideal training conditions. Remarkably, the accuracy limit is not affected by certain non-linear transformations of the data, even if these transformations are non-reversible and drastically reduce the information content of the input data. We further compare the data embeddings that emerge from supervised and unsupervised training, using the MNIST data set and human EEG recordings during sleep. We find for MNIST that categories are significantly separated not only after supervised training with back-propagation, but also after unsupervised dimensionality reduction. A qualitatively similar cluster enhancement by unsupervised compression is observed for the EEG sleep data, but with a very small overall degree of cluster separation. We conclude that the handwritten digits in MNIST can be considered as 'natural kinds', whereas EEG sleep recordings are a relatively weakly structured data set, so that unsupervised clustering will not necessarily recover the human-defined sleep stages.
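The accuracy limit set by category overlap can be checked directly in the simplest case of two 1-D Gaussian classes with means at -mu and +mu and unit variance, where the limit is Phi(mu). The sketch also illustrates transformation-invariance with a simple monotonic (hence reversible) example; the separation value is an arbitrary assumption:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
mu, n = 0.5, 200_000          # class separation and sample count (illustrative)

# Two overlapping 1-D categories: N(-mu, 1) and N(+mu, 1), equal priors
x0 = rng.normal(-mu, 1.0, n)
x1 = rng.normal(+mu, 1.0, n)

# Bayes-optimal classifier for this setup: threshold at 0
acc = 0.5 * np.mean(x0 < 0) + 0.5 * np.mean(x1 >= 0)

# Theoretical accuracy limit set by the category overlap: Phi(mu)
limit = 0.5 * (1.0 + erf(mu / sqrt(2.0)))

# A monotonic non-linear transform (here tanh) maps the threshold with the data,
# so the achievable accuracy is unchanged
acc_tanh = 0.5 * np.mean(np.tanh(x0) < 0) + 0.5 * np.mean(np.tanh(x1) >= 0)
```

No classifier, however powerful, can exceed `limit` on this data; the paper's surrogate model generalizes this overlap argument beyond the Gaussian toy case.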
10. Schilling A, Krauss P. Tinnitus is associated with improved cognitive performance and speech perception-Can stochastic resonance explain? Front Aging Neurosci 2022; 14:1073149. [PMID: 36589535] [PMCID: PMC9800600] [DOI: 10.3389/fnagi.2022.1073149]
Affiliation(s)
- Achim Schilling: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University of Erlangen-Nurnberg, Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, University of Erlangen-Nurnberg, Erlangen, Germany; Linguistics Lab, University of Erlangen-Nurnberg, Erlangen, Germany; Pattern Recognition Lab, University of Erlangen-Nurnberg, Erlangen, Germany
11. Neural network based successor representations to form cognitive maps of space and language. Sci Rep 2022; 12:11233. [PMID: 35787659] [PMCID: PMC9253065] [DOI: 10.1038/s41598-022-14916-1]
Abstract
How does the mind organize thoughts? The hippocampal-entorhinal complex is thought to support domain-general representation and processing of structural knowledge of arbitrary state, feature and concept spaces. In particular, it enables the formation of cognitive maps, and navigation on these maps, thereby broadly contributing to cognition. It has been proposed that the concept of multi-scale successor representations provides an explanation of the underlying computations performed by place and grid cells. Here, we present a neural network based approach to learn such representations, and its application to different scenarios: a spatial exploration task based on supervised learning, a spatial navigation task based on reinforcement learning, and a non-spatial task where linguistic constructions have to be inferred by observing sample sentences. In all scenarios, the neural network correctly learns and approximates the underlying structure by building successor representations. Furthermore, the resulting neural firing patterns are strikingly similar to experimentally observed place and grid cell firing patterns. We conclude that cognitive maps and neural network-based successor representations of structured knowledge provide a promising way to overcome some of the shortcomings of deep learning towards artificial general intelligence.
12. Schilling A, Gerum R, Metzner C, Maier A, Krauss P. Intrinsic Noise Improves Speech Recognition in a Computational Model of the Auditory Pathway. Front Neurosci 2022; 16:908330. [PMID: 35757533] [PMCID: PMC9215117] [DOI: 10.3389/fnins.2022.908330]
Abstract
Noise is generally considered to harm information processing performance. However, in the context of stochastic resonance, noise has been shown to improve signal detection of weak sub-threshold signals, and it has been proposed that the brain might actively exploit this phenomenon. Especially within the auditory system, recent studies suggest that intrinsic noise plays a key role in signal processing and might even correspond to increased spontaneous neuronal firing rates observed in early processing stages of the auditory brain stem and cortex after hearing loss. Here, we present a computational model of the auditory pathway based on a deep neural network, trained on speech recognition. We simulate different levels of hearing loss and investigate the effect of intrinsic noise. Remarkably, speech recognition after hearing loss actually improves with additional intrinsic noise. This surprising result indicates that intrinsic noise might not only play a crucial role in human auditory processing, but might even be beneficial for contemporary machine learning approaches.
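Stochastic resonance itself is easy to demonstrate with a bare threshold unit: a sub-threshold sinusoid is invisible without noise, best detected at an intermediate noise level, and drowned out again when the noise is too strong. The signal, threshold, and noise levels below are illustrative choices, not the paper's deep-network model:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(20_000)
signal = 0.5 * np.sin(2 * np.pi * t / 100.0)   # sub-threshold: peak 0.5
theta = 1.0                                    # firing threshold

def detection_quality(noise_std):
    """Correlation between the spike train of a threshold unit and the signal."""
    x = signal + noise_std * rng.standard_normal(signal.size)
    spikes = (x > theta).astype(float)
    if spikes.std() == 0.0:                    # no spikes at all -> no information
        return 0.0
    return float(np.corrcoef(spikes, signal)[0, 1])

quality = {s: detection_quality(s) for s in (0.05, 0.4, 3.0)}
```

Sweeping `noise_std` more finely traces the classic inverted-U resonance curve; in the paper, the analogous knob is the intrinsic noise injected into the network after simulated hearing loss.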
Affiliation(s)
- Achim Schilling: Laboratory of Sensory and Cognitive Neuroscience, Aix-Marseille University, Marseille, France; Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Richard Gerum: Department of Physics and Center for Vision Research, York University, Toronto, ON, Canada
- Claus Metzner: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Andreas Maier: Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany; Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany; Linguistics Lab, Friedrich-Alexander-University Erlangen-Nuremberg (FAU), Erlangen, Germany
13
Representations of temporal sleep dynamics: review and synthesis of the literature. Sleep Med Rev 2022; 63:101611. [DOI: 10.1016/j.smrv.2022.101611]
14
Metzner C, Schilling A, Traxdorf M, Schulze H, Krauss P. Sleep as a random walk: a super-statistical analysis of EEG data across sleep stages. Commun Biol 2021; 4:1385. [PMID: 34893700] [PMCID: PMC8664947] [DOI: 10.1038/s42003-021-02912-6]
Abstract
In clinical practice, human sleep is classified into stages, each associated with different levels of muscular activity and marked by characteristic patterns in the EEG signals. It is, however, unclear whether this subdivision into discrete stages with sharply defined boundaries truly reflects the dynamics of human sleep. To address this question, we consider one-channel EEG signals as heterogeneous random walks: stochastic processes controlled by hyper-parameters that are themselves time-dependent. We first demonstrate the heterogeneity of the random process by showing that each sleep stage has a characteristic distribution and temporal correlation function of the raw EEG signals. Next, we perform a super-statistical analysis by computing hyper-parameters, such as the standard deviation, kurtosis, and skewness of the raw signal distributions, within subsequent 30-second epochs. It turns out that the hyper-parameters, too, have characteristic, sleep-stage-dependent distributions, which can be exploited for simple Bayesian sleep stage detection. Moreover, we find that the hyper-parameters are not piece-wise constant, as traditional hypnograms would suggest, but show rising or falling trends within and across sleep stages, pointing to an underlying continuous, rather than subdivided, process that controls human sleep. Based on the hyper-parameters, we finally perform a pairwise similarity analysis between the different sleep stages, using a quantitative measure for the separability of data clusters in multi-dimensional spaces.
To improve our understanding of how EEG activity reflects the dynamics of human sleep, Metzner et al. use human EEG data and super-statistical analysis to demonstrate that each sleep stage has a characteristic distribution and temporal correlation function of raw EEG signals. They also show that the hyper-parameters controlling the EEG signals have characteristic, sleep-stage-dependent distributions, which can be exploited for simple Bayesian sleep stage detection.
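The per-epoch hyper-parameter computation described in this abstract can be sketched directly; the sampling rate, epoch length, and toy "stage" signals below are illustrative assumptions, not the study's recordings:

```python
import numpy as np

def epoch_hyperparameters(signal, fs=100, epoch_s=30):
    """Split a one-channel EEG trace into 30-second epochs and return,
    per epoch, the hyper-parameters named in the abstract: standard
    deviation, skewness, and kurtosis of the raw sample distribution."""
    n = fs * epoch_s
    epochs = signal[: (len(signal) // n) * n].reshape(-1, n)
    mu = epochs.mean(axis=1, keepdims=True)
    sd = epochs.std(axis=1, keepdims=True)
    z = (epochs - mu) / sd
    skew = (z ** 3).mean(axis=1)   # standardized third moment
    kurt = (z ** 4).mean(axis=1)   # standardized fourth moment
    return np.column_stack([sd[:, 0], skew, kurt])

rng = np.random.default_rng(1)
# Toy stand-ins for two "sleep stages": a Gaussian-like signal versus
# a heavy-tailed, higher-variance one (10 epochs each at 100 Hz).
stage_a = rng.normal(0.0, 1.0, 100 * 30 * 10)
stage_b = 2.0 * rng.standard_t(5, 100 * 30 * 10)
feat_a = epoch_hyperparameters(stage_a)  # shape: (10 epochs, 3 features)
feat_b = epoch_hyperparameters(stage_b)
```

Because the two toy stages separate cleanly in this small feature space (higher standard deviation and kurtosis for the heavy-tailed signal), per-stage feature distributions of this kind are what a simple Bayesian classifier, as described in the abstract, can exploit.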
Affiliation(s)
- Claus Metzner: Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
- Achim Schilling: Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany; Laboratory of Sensory and Cognitive Neuroscience, Aix-Marseille University, Marseille, France; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, Nuremberg, Germany
- Maximilian Traxdorf: Department of Otorhinolaryngology, Paracelsus Medical University, Nuremberg, Germany
- Holger Schulze: Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany
- Patrick Krauss: Neuroscience Lab, Experimental Otolaryngology, University Hospital Erlangen, Erlangen, Germany; Cognitive Computational Neuroscience Group, Friedrich-Alexander University Erlangen-Nuremberg, Nuremberg, Germany; Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, Nuremberg, Germany