1
Perkušić Čović M, Vujović I, Šoda J, Palmović M, Rogić Vidaković M. Overt Word Reading and Visual Object Naming in Adults with Dyslexia: Electroencephalography Study in Transparent Orthography. Bioengineering (Basel) 2024; 11:459. [PMID: 38790326; PMCID: PMC11117949; DOI: 10.3390/bioengineering11050459]
Abstract
The study aimed to investigate overt reading and naming processes in adult people with dyslexia (PDs) in a shallow (transparent) language orthography. The results of adult PDs are compared with those of adult healthy controls (HCs). Comparisons are made in three processing stages: a pre-lexical (150-260 ms), lexical (280-700 ms), and post-lexical (750-1000 ms) time window. Twelve PDs and HCs performed overt reading and naming tasks during EEG recording. The word reading and naming tasks consisted of items from sparse neighborhoods with closed phonemic onset (words/objects sharing the same onset). For the analysis of mean ERP amplitude in the pre-lexical, lexical, and post-lexical time windows, a mixed-design ANOVA was performed with right (F4, FC2, FC6, C4, T8, CP2, CP6, P4) and left (F3, FC5, FC1, T7, C3, CP5, CP1, P7, P3) electrode sites as within-subject factors and group (PD vs. HC) as the between-subject factor. Behavioral results revealed significantly prolonged reading latency in PDs relative to HCs, while no group difference was detected in naming latency. ERP differences between PDs and HCs were found in the right hemisphere in the pre-lexical time window (160-200 ms) for word reading aloud, and in the post-lexical time window (900-1000 ms) for visual object naming aloud. The present study demonstrated different scalp distributions of the electric field in specific time windows between the two groups over the right hemisphere in both word reading and visual object naming aloud, suggesting alternative processing strategies in adult PDs. These results indirectly support the view that adult PDs in a shallow orthography probably rely on the grapho-phonological route during overt word reading and have difficulties with phoneme and word retrieval during overt visual object naming in adulthood.
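The mixed-design ANOVA described above can be illustrated with a short analysis sketch. This is a minimal example, not the authors' pipeline: the epoch array, sampling rate, simulated amplitudes, and the use of the pingouin package are all assumptions; only the pre-lexical window and the hemisphere-by-group design from the abstract are reproduced.

```python
# Minimal sketch (not the authors' pipeline): compute mean ERP amplitude in the
# pre-lexical window per hemisphere and run a mixed-design ANOVA with hemisphere
# as within-subject and group as between-subject factor. All data are simulated.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
sfreq = 500                                   # assumed sampling rate (Hz)
times = np.arange(-0.2, 1.0, 1 / sfreq)       # epoch from -200 to 1000 ms
channels = ["F4", "FC2", "FC6", "C4", "T8", "CP2", "CP6", "P4",
            "F3", "FC5", "FC1", "T7", "C3", "CP5", "CP1", "P7", "P3"]
right, left = channels[:8], channels[8:]

n_pd, n_hc = 12, 12
# Subject-average ERPs: (subjects, channels, time points), in volts
erps = rng.normal(0.0, 1e-6, size=(n_pd + n_hc, len(channels), times.size))
groups = ["PD"] * n_pd + ["HC"] * n_hc

window = (times >= 0.16) & (times <= 0.20)    # pre-lexical window, 160-200 ms
rows = []
for subj, (erp, grp) in enumerate(zip(erps, groups)):
    ch_means = erp[:, window].mean(axis=1)    # mean amplitude per channel
    for hemi, chs in [("right", right), ("left", left)]:
        idx = [channels.index(c) for c in chs]
        rows.append({"subject": subj, "group": grp, "hemisphere": hemi,
                     "amplitude": ch_means[idx].mean()})
df = pd.DataFrame(rows)

# Mixed-design ANOVA: hemisphere (within-subject) x group (between-subject)
print(pg.mixed_anova(data=df, dv="amplitude", within="hemisphere",
                     subject="subject", between="group"))
```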
Affiliation(s)
- Maja Perkušić Čović
- Polyclinic for Rehabilitation of People with Developmental Disorders, 21000 Split, Croatia
- Igor Vujović
- Signal Processing, Analysis, and Advanced Diagnostics Research and Education Laboratory (SPAADREL), Faculty of Maritime Studies, University of Split, 21000 Split, Croatia
- Joško Šoda
- Signal Processing, Analysis, and Advanced Diagnostics Research and Education Laboratory (SPAADREL), Faculty of Maritime Studies, University of Split, 21000 Split, Croatia
- Marijan Palmović
- Laboratory for Psycholinguistic Research, Department of Speech and Language Pathology, University of Zagreb, 10000 Zagreb, Croatia
- Maja Rogić Vidaković
- Laboratory for Human and Experimental Neurophysiology, Department of Neuroscience, School of Medicine, University of Split, 21000 Split, Croatia
2
Yao L, Fu Q, Liu CH. The roles of edge-based and surface-based information in the dynamic neural representation of objects. Neuroimage 2023; 283:120425. [PMID: 37890562; DOI: 10.1016/j.neuroimage.2023.120425]
Abstract
We combined multivariate pattern analysis (MVPA) and electroencephalography (EEG) to investigate the roles of edge, color, and other surface information in the neural representation of visual objects. Participants completed a one-back task in which they were presented with color photographs, grayscale images, and line drawings of animals, tools, and fruits. Our results provide the first neural evidence that line drawings elicit neural activity similar to that elicited by color photographs and grayscale images during the 175-305 ms window after stimulus onset. Furthermore, we found that surface information other than color, rather than color information itself, facilitates decoding accuracy in the early stages of object representation and affects the speed with which these representations emerge. These results provide new insights into the roles of edge-based and surface-based information in the dynamic process of neural representation of visual objects.
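A time-resolved MVPA of EEG epochs like the one described can be sketched as follows. This is a minimal illustration under assumed inputs (a trials x channels x time array and binary format labels), not the authors' exact decoding pipeline.

```python
# Minimal sketch of time-resolved MVPA decoding of stimulus format from EEG
# epochs; the epoch array and labels are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.normal(size=(n_trials, n_channels, n_times))   # trials x channels x time
y = rng.integers(0, 2, size=n_trials)                   # e.g., photograph vs. line drawing

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.empty(n_times)
for t in range(n_times):
    # Decode the stimulus class from the spatial (channel) pattern at each time point
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5, scoring="accuracy").mean()

# Sustained above-chance decoding in a window such as 175-305 ms would indicate
# that the formats are discriminable from the EEG pattern in that period.
print(scores.round(2))
```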
Affiliation(s)
- Liansheng Yao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Fern Barrow, Poole, UK
3
Wu J, Li Q, Fu Q, Rose M, Jing L. Multisensory Information Facilitates the Categorization of Untrained Stimuli. Multisens Res 2021; 35:79-107. [PMID: 34388699; DOI: 10.1163/22134808-bja10061]
Abstract
Although it has been demonstrated that multisensory information can facilitate object recognition and object memory, it remains unclear whether such a facilitation effect exists in category learning. To address this issue, comparable car images and sounds were first selected with a discrimination task in Experiment 1. These selected images and sounds were then used in a prototype category learning task in Experiments 2 and 3, in which participants were trained with auditory, visual, and audiovisual stimuli and were tested with trained or untrained stimuli from the same categories, presented alone or accompanied by a congruent or incongruent stimulus in the other modality. In Experiment 2, when low-distortion stimuli (more similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual trials, but did not differ between audiovisual and auditory trials. During testing, accuracy was significantly higher for congruent trials than for unisensory or incongruent trials, and the congruency effect was larger for untrained high-distortion stimuli than for trained low-distortion stimuli. In Experiment 3, when high-distortion stimuli (less similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual or auditory trials, and the congruency effect was larger for trained high-distortion stimuli than for untrained low-distortion stimuli during testing. These findings demonstrate that a higher degree of stimulus distortion results in a more robust multisensory effect, and that the categorization of not only trained but also untrained stimuli in one modality can be influenced by an accompanying stimulus in the other modality.
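The congruency effect reported at test can be illustrated with a small sketch of how it might be computed from trial-level data; the column names and simulated responses below are assumptions, not the authors' actual data structure.

```python
# Minimal sketch of computing test accuracy and the congruency effect from
# trial-level behavioral data; all columns and responses are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 600
trials = pd.DataFrame({
    "condition": rng.choice(["congruent", "incongruent", "unisensory"], n),
    "distortion": rng.choice(["low", "high"], n),
    "trained": rng.choice([True, False], n),
    "correct": rng.integers(0, 2, n),
})

acc = (trials.groupby(["distortion", "trained", "condition"])["correct"]
             .mean()
             .unstack("condition"))
acc["congruency_effect"] = acc["congruent"] - acc["incongruent"]
print(acc)
```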
Affiliation(s)
- Jie Wu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Qitian Li
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, Chinese Academy of Sciences, Beijing, 100101, China
- Michael Rose
- NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, 20246 Hamburg, Germany
- Liping Jing
- Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China
4
Viganò S, Borghesani V, Piazza M. Symbolic categorization of novel multisensory stimuli in the human brain. Neuroimage 2021; 235:118016. [PMID: 33819609; DOI: 10.1016/j.neuroimage.2021.118016]
Abstract
When primates (both human and non-human) learn to categorize simple visual or acoustic stimuli by means of non-verbal matching tasks, two types of changes occur in their brain: early sensory cortices increase the precision with which they encode sensory information, and parietal and lateral prefrontal cortices develop a categorical response to the stimuli. Contrary to non-human animals, however, our species mostly constructs categories using linguistic labels. Moreover, we naturally tend to define categories by means of multiple sensory features of the stimuli. Here we trained adult subjects to parse a novel audiovisual stimulus space into four orthogonal categories by associating each category with a specific symbol. We then used multi-voxel pattern analysis (MVPA) to show that, during a cross-format category repetition detection task, three neural representational changes were detectable. First, visual and acoustic cortices increased both precision and selectivity for their preferred sensory feature, displaying increased sensory segregation. Second, a frontoparietal network developed a multisensory object-specific response. Third, the right hippocampus and, at least to some extent, the left angular gyrus developed a shared representational code common to symbols and objects. In particular, the right hippocampus displayed the highest level of abstraction and generalization from one format to the other, and also predicted symbolic categorization performance outside the scanner. Taken together, these results indicate that when humans categorize multisensory objects by means of language, the set of changes occurring in the brain only partially overlaps with that described by classical models of non-verbal unisensory categorization in primates.
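The cross-format logic (a code shared between symbols and objects) can be illustrated with a train-on-one-format, test-on-the-other decoding sketch. The ROI pattern matrices, labels, and classifier choice below are simulated placeholders and assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-format decoding: train a classifier on symbol trials
# and test it on audiovisual-object trials. All data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 150
X_symbols = rng.normal(size=(n_trials, n_voxels))   # ROI patterns, symbol trials
X_objects = rng.normal(size=(n_trials, n_voxels))   # ROI patterns, object trials
y_symbols = rng.integers(0, 4, n_trials)            # four orthogonal categories
y_objects = rng.integers(0, 4, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_symbols, y_symbols)
cross_format_acc = clf.score(X_objects, y_objects)

# Accuracy above chance (0.25 here) would indicate a representational code
# shared between symbols and multisensory objects in this ROI.
print(cross_format_acc)
```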
Affiliation(s)
- Simone Viganò
- Centre for Mind/Brain Sciences, University of Trento, Italy
- Manuela Piazza
- Centre for Mind/Brain Sciences, University of Trento, Italy
5
Rule JS, Riesenhuber M. Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition. Front Comput Neurosci 2021; 14:586671. [PMID: 33510629; PMCID: PMC7835122; DOI: 10.3389/fncom.2020.586671]
Abstract
Humans quickly and accurately learn new visual concepts from sparse data, sometimes from just a single example. The impressive performance of artificial neural networks, which hierarchically pool afferents across scales and positions, suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. Specifically, we show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to perform comparably. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
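The core idea, that a frozen, previously learned representation supports learning a new concept from two positive examples via a simple readout, can be sketched as follows. The simulated "conceptual-layer" features stand in for activations of a pretrained network and are not the authors' benchmark model.

```python
# Minimal sketch of few-shot learning on top of a frozen representation:
# a linear readout trained on two positive and two negative examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_dim = 256
pos_proto, neg_proto = rng.normal(size=n_dim), rng.normal(size=n_dim)

def sample(proto, n):
    """Noisy feature vectors clustered around a category prototype."""
    return proto + 0.5 * rng.normal(size=(n, n_dim))

X_train = np.vstack([sample(pos_proto, 2), sample(neg_proto, 2)])  # 2 positives, 2 negatives
y_train = np.array([1, 1, 0, 0])
X_test = np.vstack([sample(pos_proto, 100), sample(neg_proto, 100)])
y_test = np.array([1] * 100 + [0] * 100)

readout = LogisticRegression().fit(X_train, y_train)
print("few-shot test accuracy:", readout.score(X_test, y_test))
```

Features from an early, less category-clustered layer would require far more training examples for the same readout to reach comparable accuracy, which is the comparison the paper draws.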
Affiliation(s)
- Joshua S. Rule
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States
- Maximilian Riesenhuber
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States
6
From shape to meaning: Evidence for multiple fast feedforward hierarchies of concept processing in the human brain. Neuroimage 2020; 221:117148. [PMID: 32659350; DOI: 10.1016/j.neuroimage.2020.117148]
Abstract
A number of fMRI studies have provided support for the existence of multiple concept representations in brain areas such as the anterior temporal lobe (ATL) and inferior parietal lobule (IPL). However, the interaction among different conceptual representations remains unclear. To better understand the dynamics of how the brain extracts meaning from sensory stimuli, we conducted a human high-density electroencephalography (EEG) study in which we first trained participants to associate pseudowords with various animal and tool concepts. After training, multivariate pattern classification of EEG signals in sensor and source space revealed the representation of both animal and tool concepts in the left ATL, and of tool concepts within the left IPL, within 250 ms. Finally, we used Granger causality analyses to show that orthography-selective sensors directly modulated activity in the parietal tool-selective cluster. Together, our results provide evidence for distinct but parallel "perceptual-to-conceptual" feedforward hierarchies in the brain.
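The directed-influence analysis can be illustrated with a minimal Granger-causality sketch using statsmodels; the two time series below are simulated stand-ins for an orthography-selective sensor and a parietal tool-selective cluster, and the lag order is an arbitrary choice.

```python
# Minimal Granger-causality sketch; data are simulated placeholders.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(5)
n = 500
source = rng.normal(size=n)        # e.g., orthography-selective sensor signal
target = np.zeros(n)               # e.g., parietal tool-selective cluster signal
for t in range(1, n):
    # Make the target depend on the lagged source, plus noise
    target[t] = 0.6 * source[t - 1] + 0.2 * target[t - 1] + rng.normal(scale=0.5)

# Column order is [effect, cause]: test whether 'source' Granger-causes 'target'
results = grangercausalitytests(np.column_stack([target, source]), maxlag=3)
print("p-value at lag 1:", results[1][0]["ssr_ftest"][1])
```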
7
Zhou X, Fu Q, Rose M. The Role of Edge-Based and Surface-Based Information in Incidental Category Learning: Evidence From Behavior and Event-Related Potentials. Front Integr Neurosci 2020; 14:36. [PMID: 32792919; PMCID: PMC7387683; DOI: 10.3389/fnint.2020.00036]
Abstract
Although it has been demonstrated that edge-based information is more important than surface-based information in incidental category learning, it remains unclear how the two types of information play different roles in such learning. To address this issue, the present study combined behavioral and event-related potential (ERP) techniques in an incidental category learning task in which the categories were defined by either edge- or surface-based features. The results of Experiment 1 showed that participants could simultaneously learn both edge- and surface-based information in incidental category learning and, importantly, that the learning effect was larger for the edge-based category than for the surface-based category. The behavioral results of Experiment 2 replicated those of Experiment 1, and the ERP results further revealed that stimuli from the edge-based category elicited larger anterior and posterior P2 components than stimuli from the surface-based category, whereas stimuli from the surface-based category elicited larger anterior N1 and P3 components than stimuli from the edge-based category. Taken together, the results suggest that, although surface-based information might attract more attention during feature detection, edge-based information plays a more important role in evaluating the relevance of information when making categorization decisions.
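Comparing ERP component amplitudes between the two category types can be sketched as below. The component time window, electrode picks, and data arrays are placeholders chosen for illustration, since the abstract does not specify the exact analysis parameters.

```python
# Minimal sketch of extracting a component's mean amplitude per condition and
# comparing it within subjects; all parameters and data are assumed/simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sfreq = 250
times = np.arange(-0.1, 0.8, 1 / sfreq)
n_subjects, n_channels = 20, 32

# Subject-average ERPs per condition: (subjects, channels, time points)
erp_edge = rng.normal(size=(n_subjects, n_channels, times.size))
erp_surface = rng.normal(size=(n_subjects, n_channels, times.size))

anterior = [0, 1, 2, 3]                        # placeholder anterior channel indices
p2_win = (times >= 0.15) & (times <= 0.25)     # assumed P2 window

p2_edge = erp_edge[:, anterior][:, :, p2_win].mean(axis=(1, 2))
p2_surface = erp_surface[:, anterior][:, :, p2_win].mean(axis=(1, 2))

# Paired comparison of anterior P2 amplitude between the two category types
t, p = stats.ttest_rel(p2_edge, p2_surface)
print(f"P2 edge vs. surface: t = {t:.2f}, p = {p:.3f}")
```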
Affiliation(s)
- Xiaoyan Zhou
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; The Research Center for Psychological Education, University of International Relations, Beijing, China
- Qiufang Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Michael Rose
- NeuroImage Nord, Department for Systems Neuroscience, University Medical Center Hamburg Eppendorf, Hamburg, Germany
8
Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes. Neuron 2018; 98:405-416.e4. [PMID: 29673483; DOI: 10.1016/j.neuron.2018.03.014]
Abstract
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we obtained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey-call features (without explicit category representation) in left auditory cortex, and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to that in the visual domain, suggesting general principles of perceptual category learning in the human brain.
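The steepness of the categorical boundary can be estimated as the slope of a psychometric function fit to categorization responses along the morph continuum; a minimal fitting sketch with simulated responses and an assumed logistic form is shown below.

```python
# Minimal sketch of estimating categorical-boundary steepness by fitting a
# logistic psychometric function; responses and parameterization are assumed.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of 'category B' responses at morph level x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

rng = np.random.default_rng(7)
morph = np.linspace(0.0, 1.0, 9)              # morph level between two call types
p_b = np.clip(logistic(morph, 0.5, 12.0)
              + rng.normal(scale=0.05, size=morph.size), 0.0, 1.0)

(x0, k), _ = curve_fit(logistic, morph, p_b, p0=[0.5, 5.0])
print(f"category boundary at {x0:.2f}, steepness k = {k:.1f}")
# The steepness parameter k is the behavioral quantity that, per the abstract,
# was predicted by the sharpness of neural selectivity in left auditory cortex.
```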
9
Malone PS, Eberhardt SP, Wimmer K, Sprouse C, Klein R, Glomb K, Scholl CA, Bokeria L, Cho P, Deco G, Jiang X, Bernstein LE, Riesenhuber M. Neural mechanisms of vibrotactile categorization. Hum Brain Mapp 2019; 40:3078-3090. [PMID: 30920706; PMCID: PMC6865665; DOI: 10.1002/hbm.24581]
Abstract
The grouping of sensory stimuli into categories is fundamental to cognition. Previous research in the visual and auditory systems supports a two-stage processing hierarchy underlying perceptual categorization: (a) a "bottom-up" perceptual stage in sensory cortices where neurons show selectivity for stimulus features, and (b) a "top-down" second stage in higher-level cortical areas that categorizes the stimulus-selective input from the first stage. To test the hypothesis that this two-stage model applies to the somatosensory system, 14 human participants were trained to categorize vibrotactile stimuli presented to their right forearm. Then, during an fMRI scan, participants actively categorized the stimuli. Representational similarity analysis revealed stimulus selectivity in areas including the left precentral and postcentral gyri, the supramarginal gyrus, and the posterior middle temporal gyrus. Crucially, we identified a single category-selective region in the left ventral precentral gyrus. Furthermore, an estimation of directed functional connectivity provided evidence for robust top-down connectivity from the second to the first stage. These results support the validity of the two-stage model of perceptual categorization for the somatosensory system, suggesting common computational principles and a unified theory of perceptual categorization across the visual, auditory, and somatosensory systems.
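Representational similarity analysis of the kind described can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from ROI patterns and correlate it with a category-model RDM. The patterns, labels, and distance metrics below are illustrative assumptions, not the authors' data or exact procedure.

```python
# Minimal RSA sketch on simulated ROI patterns.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n_stimuli, n_voxels = 16, 120
patterns = rng.normal(size=(n_stimuli, n_voxels))      # one pattern per stimulus
category = np.repeat([0, 1], n_stimuli // 2)            # two trained categories

neural_rdm = pdist(patterns, metric="correlation")      # 1 - r between patterns
model_rdm = pdist(category[:, None], metric="hamming")  # 0 same category, 1 different

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"fit to category model: rho = {rho:.2f}, p = {p:.3f}")
# A region whose neural RDM tracks the category model rather than low-level
# stimulus features would count as category-selective in this framework.
```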
Affiliation(s)
- Patrick S. Malone
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Silvio P. Eberhardt
- Department of Speech, Language, and Hearing Sciences, George Washington University, Washington, District of Columbia
- Klaus Wimmer
- Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Centre de Recerca Matemàtica, Barcelona, Spain
- Barcelona Graduate School of Mathematics, Barcelona, Spain
- Courtney Sprouse
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Richard Klein
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Katharina Glomb
- Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Department of Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
- Clara A. Scholl
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Levan Bokeria
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Philip Cho
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Gustavo Deco
- Center for Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- School of Psychological Sciences, Monash University, Melbourne, Victoria, Australia
- Xiong Jiang
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
- Lynne E. Bernstein
- Department of Speech, Language, and Hearing Sciences, George Washington University, Washington, District of Columbia
- Maximilian Riesenhuber
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
10
Akhavein H, Dehmoobadsharifabadi A, Farivar R. Magnetoencephalography adaptation reveals depth-cue-invariant object representations in the visual cortex. J Vis 2018; 18:6. [PMID: 30458514; DOI: 10.1167/18.12.6]
Abstract
Independent of edges and 2-D shape, which can be highly informative of object identity, depth cues alone can give rise to vivid and effective object percepts. The processing of different depth cues engages segregated cortical areas, and an efficient object representation would be one that is invariant to depth cues. Here, we investigated depth-cue invariance of object representations by measuring a category-specific response to faces, the M170 response measured with magnetoencephalography. The M170 response is strongest to faces and is sensitive to adaptation, such that repeated presentation of a face diminishes subsequent M170 responses. We exploited this property of the M170 and measured the degree to which the adaptation effect is affected by variations in depth cue and 3-D object shape. Subjects viewed a rapid presentation of two stimuli, an adaptor and a test stimulus. The adaptor was either a face, a chair, or a face-like oval surface, rendered with a single depth cue (shading, structure from motion, or texture). The test stimulus was always a shaded face of a random identity, thus completely controlling for low-level influences on the M170 response to the test stimulus. In the left fusiform face area, we found strong M170 adaptation when the adaptor was a face, regardless of its depth cue. This adaptation was marginal in the right fusiform area and negligible in the occipital regions. Our results support the presence of depth-cue-invariant representations in the human visual system, alongside size, position, and viewpoint invariance.
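The adaptation logic, comparing the response to an identical test face as a function of the preceding adaptor, can be sketched as follows. The subject-level amplitudes below are simulated placeholders; the actual study compared face and non-face adaptors rendered with each depth cue.

```python
# Minimal sketch of quantifying M170 adaptation from simulated amplitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n_subjects = 18

# Mean M170 amplitude to the test face in a fusiform ROI, per adaptor condition
amp_after_face = rng.normal(45.0, 8.0, n_subjects)    # face adaptor (any depth cue)
amp_after_chair = rng.normal(55.0, 8.0, n_subjects)   # non-face adaptor

adaptation = amp_after_chair - amp_after_face          # larger = stronger adaptation
t, p = stats.ttest_1samp(adaptation, 0.0)
print(f"mean adaptation = {adaptation.mean():.1f} (arbitrary units), "
      f"t = {t:.2f}, p = {p:.3f}")
# Depth-cue invariance is inferred if this adaptation effect holds whether the
# face adaptor was defined by shading, structure from motion, or texture.
```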
Affiliation(s)
- Hassan Akhavein
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
- Reza Farivar
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
11
Incremental learning of perceptual and conceptual representations and the puzzle of neural repetition suppression. Psychon Bull Rev 2017; 23:1055-71. [PMID: 27294423; DOI: 10.3758/s13423-015-0855-y]
Abstract
Incremental learning models of long-term perceptual and conceptual knowledge hold that neural representations are gradually acquired over many individual experiences via Hebbian-like activity-dependent synaptic plasticity across cortical connections of the brain. In such models, variation in task relevance of information, anatomic constraints, and the statistics of sensory inputs and motor outputs lead to qualitative alterations in the nature of representations that are acquired. Here, the proposal that behavioral repetition priming and neural repetition suppression effects are empirical markers of incremental learning in the cortex is discussed, and research results that both support and challenge this position are reviewed. Discussion is focused on a recent fMRI-adaptation study from our laboratory that shows decoupling of experience-dependent changes in neural tuning, priming, and repetition suppression, with representational changes that appear to work counter to the explicit task demands. Finally, critical experiments that may help to clarify and resolve current challenges are outlined.
12
Jiang X, Petok JR, Howard DV, Howard JH. Individual Differences in Cognitive Function in Older Adults Predicted by Neuronal Selectivity at Corresponding Brain Regions. Front Aging Neurosci 2017; 9:103. [PMID: 28458636; PMCID: PMC5394166; DOI: 10.3389/fnagi.2017.00103]
Abstract
Relating individual differences in cognitive abilities to neural substrates in older adults is of significant scientific and clinical interest but remains a major challenge. Previous functional magnetic resonance imaging (fMRI) studies of cognitive aging have mainly focused on the amplitude of the fMRI response, which does not measure neuronal selectivity and has led to some conflicting findings. Here, using local regional heterogeneity analysis, or Hcorr, a novel fMRI analysis technique developed to probe the sparseness of neuronal activations as an indirect measure of neuronal selectivity, we found that individual differences in two different cognitive functions, episodic memory and letter verbal fluency, are selectively related to Hcorr-estimated neuronal selectivity in their corresponding brain regions (hippocampus and visual word form area, respectively). This suggests a direct relationship between cognitive function and neuronal selectivity at the corresponding brain regions in healthy older adults, which in turn suggests that age-related neural dedifferentiation might contribute to, rather than compensate for, cognitive decline in healthy older adults. Additionally, the capability to estimate neuronal selectivity across brain regions from a single data set and link it to cognitive performance suggests that, compared with fMRI adaptation, the established fMRI technique for assessing neuronal selectivity, Hcorr might be a better alternative for studying normal aging and neurodegenerative diseases, both of which are associated with widespread changes across the brain.
Affiliation(s)
- Xiong Jiang
- Department of Neuroscience, Georgetown University, Washington, DC, USA
- Jessica R. Petok
- Department of Psychology, Georgetown University, Washington, DC, USA
- Department of Psychology, St. Olaf College, Northfield, MN, USA
- Darlene V. Howard
- Department of Psychology, Georgetown University, Washington, DC, USA
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- James H. Howard
- Department of Psychology, Georgetown University, Washington, DC, USA
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Department of Psychology, Catholic University of America, Washington, DC, USA
13
Shankar S, Kayser AS. Perceptual and categorical decision making: goal-relevant representation of two domains at different levels of abstraction. J Neurophysiol 2017; 117:2088-2103. [PMID: 28250149; DOI: 10.1152/jn.00512.2016]
Abstract
To date it has been unclear whether perceptual decision making and rule-based categorization reflect activation of similar cognitive processes and brain regions. On one hand, both map potentially ambiguous stimuli to a smaller set of motor responses. On the other hand, decisions about perceptual salience typically concern concrete sensory representations derived from a noisy stimulus, while categorization is typically conceptualized as an abstract decision about membership in a potentially arbitrary set. Previous work has primarily examined these types of decisions in isolation. Here we independently varied salience in both the perceptual and categorical domains in a random dot-motion framework by manipulating dot-motion coherence and motion direction relative to a category boundary, respectively. Behavioral and modeling results suggest that categorical (more abstract) information, which is more relevant to subjects' decisions, is weighted more strongly than perceptual (more concrete) information, although they also have significant interactive effects on choice. Within the brain, BOLD activity within frontal regions strongly differentiated categorical salience and weakly differentiated perceptual salience; however, the interaction between these two factors activated similar frontoparietal brain networks. Notably, explicitly evaluating feature interactions revealed a frontal-parietal dissociation: parietal activity varied strongly with both features, but frontal activity varied with the combined strength of the information that defined the motor response. Together, these data demonstrate that frontal regions are driven by decision-relevant features and argue that perceptual decisions and rule-based categorization reflect similar cognitive processes and activate similar brain networks to the extent that they define decision-relevant stimulus-response mappings. NEW & NOTEWORTHY: Here we study the behavioral and neural dynamics of perceptual categorization when decision information varies in multiple domains at different levels of abstraction. Behavioral and modeling results suggest that categorical (more abstract) information is weighted more strongly than perceptual (more concrete) information but that perceptual and categorical domains interact to influence decisions. Frontoparietal brain activity during categorization flexibly represents decision-relevant features and highlights significant dissociations in frontal and parietal activity during decision making.
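Weighting of perceptual versus categorical salience, and their interaction, can be illustrated with a logistic-regression sketch on simulated trials; the variable names, the generative weights, and the use of plain logistic regression (rather than the authors' behavioral model) are assumptions.

```python
# Minimal sketch of estimating the relative weights of perceptual and
# categorical salience on simulated choice data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 2000
df = pd.DataFrame({
    "coherence": rng.uniform(0.05, 0.5, n),        # perceptual salience
    "cat_distance": rng.uniform(0.05, 1.0, n),     # categorical salience
})
# Simulate choices with a stronger weight on categorical information
lin = 1.0 * df.coherence + 3.0 * df.cat_distance + 1.0 * df.coherence * df.cat_distance - 1.5
df["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

fit = smf.logit("correct ~ coherence * cat_distance", data=df).fit(disp=0)
print(fit.params)   # weights of the perceptual, categorical, and interaction terms
```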
Affiliation(s)
- Swetha Shankar
- Department of Neurology, University of California, San Francisco, California; Center for Brain Imaging, New York University, New York, New York
- Andrew S Kayser
- Department of Neurology, University of California, San Francisco, California; Department of Neurology, Department of Veterans Affairs Northern California Health Care System, Martinez, California
14
Extensive training leads to temporal and spatial shifts of cortical activity underlying visual category selectivity. Neuroimage 2016; 134:22-34. [DOI: 10.1016/j.neuroimage.2016.03.066]
15
Abstract
To respond appropriately to objects, we must process visual inputs rapidly and assign them meaning. This involves highly dynamic, interactive neural processes through which information accumulates and cognitive operations are resolved across multiple time scales. However, there is currently no model of object recognition that provides an integrated account of how visual and semantic information emerge over time; it therefore remains unknown how and when semantic representations are evoked from visual inputs. Here, we test whether a model of individual objects, based on combining the HMax computational model of vision with semantic-feature information, can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations than HMax alone, both in terms of model fit and classification performance. Our results show that modeling and classifying individual objects is significantly improved by adding semantic-feature information beyond ∼200 ms. These results provide important insights into the functional properties of visual processing across time.
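The model-comparison logic, asking when semantic features improve prediction of neural activity beyond visual features, can be sketched with time-point-wise ridge regression on simulated data; the feature matrices, the MEG signal, and the ridge estimator are illustrative assumptions rather than the authors' HMax-plus-semantics implementation.

```python
# Minimal sketch: visual-only vs. visual + semantic feature models predicting
# a neural time course; all data are simulated.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n_objects, n_visual, n_semantic, n_times = 300, 50, 20, 100
F_visual = rng.normal(size=(n_objects, n_visual))       # HMax-like visual features
F_semantic = rng.normal(size=(n_objects, n_semantic))   # semantic-feature norms
meg = rng.normal(size=(n_objects, n_times))             # e.g., one sensor or component

r2_visual = np.empty(n_times)
r2_combined = np.empty(n_times)
for t in range(n_times):
    y = meg[:, t]
    r2_visual[t] = cross_val_score(Ridge(alpha=1.0), F_visual, y,
                                   cv=5, scoring="r2").mean()
    r2_combined[t] = cross_val_score(Ridge(alpha=1.0),
                                     np.hstack([F_visual, F_semantic]), y,
                                     cv=5, scoring="r2").mean()

# Time points where the combined model reliably beats the visual-only model
# (in the real data, from roughly 200 ms onward) mark when semantics helps.
print((r2_combined - r2_visual).max())
```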
Affiliation(s)
- Alex Clarke
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
- Barry J Devereux
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
- Billi Randall
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
- Lorraine K Tyler
- Centre for Speech, Language and the Brain, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, UK
16
Network Anisotropy Trumps Noise for Efficient Object Coding in Macaque Inferior Temporal Cortex. J Neurosci 2015; 35:9889-99. [PMID: 26156990; DOI: 10.1523/jneurosci.4595-14.2015]
Abstract
How neuronal ensembles compute information is actively studied in early visual cortex. Much less is known about how local ensembles function in inferior temporal (IT) cortex, the last stage of the ventral visual pathway that supports visual recognition. Previous reports suggested that nearby neurons carry information mostly independently, supporting efficient processing (Barlow, 1961). However, others postulate that noise covariation effects may depend on network anisotropy/homogeneity and on how the covariation relates to representation. Do slow trial-by-trial noise covariations increase or decrease IT's object coding capability, how does encoding capability relate to correlational structure (i.e., the spatial pattern of signal and noise redundancy/homogeneity across neurons), and does knowledge of correlational structure matter for decoding? We recorded simultaneously from ∼80 spiking neurons in ∼1 mm³ of macaque IT under light neurolept anesthesia. Noise correlations were stronger for neurons with correlated tuning, and noise covariations reduced object encoding capability, including generalization across object pose and illumination. Knowledge of noise covariations did not lead to better decoding performance. However, knowledge of anisotropy/homogeneity improved encoding and decoding efficiency by reducing the number of neurons needed to reach a given performance level. Such correlated neurons were found mostly in supragranular and infragranular layers, supporting theories that link recurrent circuitry to manifold representation. These results suggest that redundancy benefits manifold learning of complex high-dimensional information and that subsets of neurons may be more immune to noise covariation than others. SIGNIFICANCE STATEMENT: How noise affects neuronal population coding is poorly understood. By sampling densely from local populations supporting visual object recognition, we show that recurrent circuitry supports useful representations and that subsets of neurons may be more immune to noise covariation than others.
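How knowledge of the correlational structure can matter for decoding is illustrated in the sketch below, which simulates a correlated population and compares a covariance-aware decoder with one that assumes independent neurons; all parameters are invented and this is not the authors' decoding procedure.

```python
# Minimal sketch: shared (correlated) noise across a simulated population, and
# two decoders that do or do not model the covariance structure.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(12)
n_neurons, n_trials = 80, 400

shared = rng.normal(size=(n_trials, 1))                 # common noise source
noise = 0.8 * shared + 0.6 * rng.normal(size=(n_trials, n_neurons))

signal = rng.normal(size=n_neurons)                     # tuning difference between two objects
y = rng.integers(0, 2, n_trials)
X = noise + np.outer(y - 0.5, signal)                   # simulated population responses

for name, clf in [("covariance-aware (LDA)", LinearDiscriminantAnalysis()),
                  ("independence-assuming (naive Bayes)", GaussianNB())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f}")
# Whether modeling the covariance helps depends on how noise covariations align
# with the signal direction, which is the question the study addresses.
```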
17
Hung CP, Cui D, Chen YP, Lin CP, Levine MR. Correlated activity supports efficient cortical processing. Front Comput Neurosci 2015; 8:171. [PMID: 25610392; PMCID: PMC4285095; DOI: 10.3389/fncom.2014.00171]
Abstract
Visual recognition is a computational challenge that is thought to occur via efficient coding. An important concept is sparseness, a measure of coding efficiency. The prevailing view is that sparseness supports efficiency by minimizing redundancy and correlations in spiking populations. Yet, we recently reported that "choristers", neurons that behave more similarly (have correlated stimulus preferences and spontaneous coincident spiking), carry more generalizable object information than uncorrelated neurons ("soloists") in macaque inferior temporal (IT) cortex. The rarity of choristers (as low as 6% of IT neurons) indicates that they were likely missed in previous studies. Here, we report that correlation strength is distinct from sparseness (choristers are not simply broadly tuned neurons), that choristers are located in non-granular output layers, and that correlated activity predicts human visual search efficiency. These counterintuitive results suggest that a redundant correlational structure supports efficient processing and behavior.
Affiliation(s)
- Chou P Hung
- Department of Neuroscience, Georgetown University, Washington, D.C., USA; Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Ding Cui
- Department of Neuroscience, Georgetown University, Washington, D.C., USA
- Yueh-Peng Chen
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Chia-Pei Lin
- Institute of Neuroscience, National Yang-Ming University, Taipei, Taiwan
- Matthew R Levine
- Department of Neuroscience, Georgetown University, Washington, D.C., USA