1
Derawi H, Roark CL, Gabay Y. Procedural auditory category learning is selectively disrupted in developmental language disorder. Psychon Bull Rev 2024; 31:1181-1192. PMID: 37884775. DOI: 10.3758/s13423-023-02398-9.
Abstract
Speech communication depends on accurate perception and identification of speech sounds, which vary across talkers and word or sentence contexts. The ability to map this variable input onto discrete speech sound representations relies on categorization. Recent research and theoretical models implicate the procedural learning system in the ability to learn novel speech and non-speech categories. This connection is particularly intriguing because several language disorders that demonstrate linguistic impairments are proposed to stem from procedural learning and memory dysfunction. One such disorder, Developmental Language Disorder (DLD), affects 7.5% of children and persists into adulthood. While DLD is associated with general linguistic impairments, it is not yet clear how fundamental perceptual and cognitive processes supporting language are impacted, such as the ability to learn novel auditory categories. We examined auditory category learning in children with DLD and typically developed (TD) children using two well-matched nonspeech auditory category learning challenges to draw upon presumed procedural (information-integration) versus declarative (rule-based) learning systems. We observed impaired information-integration category learning and intact rule-based category learning in the DLD group. Quantitative model-based analyses revealed reduced use of, and slower shifting to, optimal procedural-based strategies in DLD and slower shifting to but similarly efficient use of optimal hypothesis-testing strategies. The dissociation is consistent with the Procedural Deficit Hypothesis of language disorders and supports the theoretical distinction of multiple category learning systems. These findings demonstrate that highly controlled experimental tasks assessing perceptual and cognitive abilities can relate to real-world challenges facing individuals with DLD in forming stable linguistic representations.
Affiliation(s)
- Hadeer Derawi: Department of Special Education and the Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Mount Carmel, 31905, Haifa, Israel
- Casey L Roark: Department of Communication Science and Disorders, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Yafit Gabay: Department of Special Education and the Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Mount Carmel, 31905, Haifa, Israel
2
van der Willigen RF, Versnel H, van Opstal AJ. Spectral-temporal processing of naturalistic sounds in monkeys and humans. J Neurophysiol 2024; 131:38-63. PMID: 37965933. PMCID: PMC11305640. DOI: 10.1152/jn.00129.2023.
Abstract
Human speech and vocalizations in animals are rich in joint spectrotemporal (S-T) modulations, wherein acoustic changes in both frequency and time are functionally related. In principle, the primate auditory system could process these complex dynamic sounds based on either an inseparable representation of S-T features or, alternatively, a separable representation. The separability hypothesis implies an independent processing of spectral and temporal modulations. We collected comparative data on the S-T hearing sensitivity in humans and macaque monkeys to a wide range of broadband dynamic spectrotemporal ripple stimuli employing a yes-no signal-detection task. Ripples were systematically varied, as a function of density (spectral modulation frequency), velocity (temporal modulation frequency), or modulation depth, to cover a listener's full S-T modulation sensitivity, derived from a total of 87 psychometric ripple detection curves. Audiograms were measured to control for normal hearing. We determined hearing thresholds, reaction time distributions, and S-T modulation transfer functions (MTFs), both at the ripple detection thresholds and at suprathreshold modulation depths. Our psychophysically derived MTFs are consistent with the hypothesis that both monkeys and humans employ analogous perceptual strategies: S-T acoustic information is primarily processed separably. Singular value decomposition (SVD), however, revealed a small, but consistent, inseparable spectral-temporal interaction. Finally, SVD analysis of the known visual spatiotemporal contrast sensitivity function (CSF) highlights that human vision is space-time inseparable to a much larger extent than is the case for S-T sensitivity in hearing. Thus, the specificity with which the primate brain encodes natural sounds appears to be less strict than is required to adequately deal with natural images.

NEW & NOTEWORTHY: We provide comparative data on primate audition of naturalistic sounds comprising hearing thresholds, reaction time distributions, and spectral-temporal modulation transfer functions. Our psychophysical experiments demonstrate that auditory information is primarily processed in a spectral-temporal-independent manner by both monkeys and humans. Singular value decomposition of known visual spatiotemporal contrast sensitivity, in comparison to our auditory spectral-temporal sensitivity, revealed a striking contrast in how the brain encodes natural sounds as opposed to natural images, as vision appears to be space-time inseparable.
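The SVD-based separability logic described in this abstract can be sketched in a few lines: if an MTF matrix (spectral modulation on one axis, temporal on the other) is fully separable, it equals the outer product of two one-dimensional sensitivity curves, and its first singular value then carries all of the variance. The toy MTF below is synthetic and purely illustrative, not the study's data.

```python
import numpy as np

# Hypothetical spectral-temporal modulation transfer function (MTF):
# rows = spectral modulation (density), columns = temporal modulation (velocity).
# A perfectly separable MTF is the outer product of two 1-D sensitivity curves.
spectral = np.array([1.0, 0.8, 0.5, 0.2])
temporal = np.array([1.0, 0.9, 0.6, 0.3, 0.1])
mtf_separable = np.outer(spectral, temporal)

# Add a small inseparable perturbation for illustration.
rng = np.random.default_rng(0)
mtf = mtf_separable + 0.05 * rng.standard_normal(mtf_separable.shape)

# Separability index: share of variance (squared singular values) captured
# by the first rank-1 SVD component. 1.0 means fully separable.
s = np.linalg.svd(mtf, compute_uv=False)
alpha = s[0] ** 2 / np.sum(s ** 2)
print(f"separability index: {alpha:.3f}")
```

For the noiseless outer product the index is exactly 1; the small added interaction pulls it slightly below 1, mirroring the paper's finding of a mostly separable representation with a small inseparable residue.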
Affiliation(s)
- Robert F van der Willigen: Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; School of Communication, Media and Information Technology, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands; Research Center Creating 010, Rotterdam University of Applied Sciences, Rotterdam, The Netherlands
- Huib Versnel: Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Otorhinolaryngology and Head & Neck Surgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A John van Opstal: Section Neurophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
3
Turpin T, Uluç I, Kotlarz P, Lankinen K, Mamashli F, Ahveninen J. Comparing auditory and visual aspects of multisensory working memory using bimodally matched feature patterns. bioRxiv 2023:2023.08.03.551865. PMID: 37577481. PMCID: PMC10418174. DOI: 10.1101/2023.08.03.551865.
Abstract
Working memory (WM) reflects the transient maintenance of information in the absence of external input, which can be attained via multiple senses separately or simultaneously. Pertaining to WM, the prevailing literature suggests the dominance of vision over other sensory systems. However, this imbalance may stem from challenges in finding comparable stimuli across modalities. Here, we addressed this problem by using a balanced multisensory retro-cue WM design, which employed combinations of auditory (ripple sounds) and visuospatial (Gabor patches) patterns, adjusted relative to each participant's discrimination ability. In three separate experiments, the participant was asked to determine whether the (retro-cued) auditory and/or visual items maintained in WM matched or mismatched the subsequent probe stimulus. In Experiment 1, all stimuli were audiovisual, and the probes were either fully mismatching, only partially mismatching, or fully matching the memorized item. Experiment 2 was otherwise the same as Experiment 1, but the probes were unimodal. In Experiment 3, the participant was cued to maintain only the auditory or visual aspect of an audiovisual item pair. In two of the three experiments, participants' matching performance was significantly more accurate for the auditory than visual attributes of probes. When the perceptual and task demands are bimodally equated, auditory attributes can be matched to multisensory items in WM at least as accurately as, if not more precisely than, their visual counterparts.
Affiliation(s)
- Tori Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Işıl Uluç: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Parker Kotlarz: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Kaisu Lankinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
4
Couvignou M, Tillmann B, Caclin A, Kolinsky R. Do developmental dyslexia and congenital amusia share underlying impairments? Child Neuropsychol 2023; 29:1294-1340. PMID: 36606656. DOI: 10.1080/09297049.2022.2162031.
Abstract
Developmental dyslexia and congenital amusia have common characteristics. Yet, their possible association in some individuals has been addressed only scarcely. Recently, two converging studies reported a sizable comorbidity rate between these two neurodevelopmental disorders (Couvignou et al., Cognitive Neuropsychology 2019; Couvignou & Kolinsky, Neuropsychologia 2021). However, the reason for their association remains unclear. Here, we investigate the hypothesis of shared underlying impairments between dyslexia and amusia. Fifteen dyslexic children with amusia (DYS+A), 15 dyslexic children without amusia (DYS-A), and two groups of 25 typically developing children matched on either chronological age (CA) or reading level (RL) were assessed with a behavioral battery aiming to investigate phonological and pitch processing capacities at auditory memory, perceptual awareness, and attentional levels. Overall, our results suggest that poor auditory serial-order memory increases susceptibility to comorbidity between dyslexia and amusia and may play a role in the development of the comorbid phenotype. In contrast, the impairments observed in the DYS+A children for auditory item memory, perceptual awareness, and attention might be a consequence of their reduced reading experience combined with weaker musical skills. Comparing DYS+A and DYS-A children suggests that the latter are more resourceful and/or have more effective compensatory strategies, or that their phenotype results from a different developmental trajectory. We will discuss the relevance of these findings for delving into the etiology of these two developmental disorders and address their implications for future research and practice.
Affiliation(s)
- Manon Couvignou: Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium
- Barbara Tillmann: Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France; University Lyon 1, Lyon, France
- Anne Caclin: Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France; University Lyon 1, Lyon, France
- Régine Kolinsky: Unité de Recherche en Neurosciences Cognitives (Unescog), Center for Research in Cognition & Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Brussels, Belgium; Fonds de la Recherche Scientifique-FNRS (FRS-FNRS), Brussels, Belgium
5
Roark CL, Chandrasekaran B. Stable, flexible, common, and distinct behaviors support rule-based and information-integration category learning. NPJ Sci Learn 2023; 8:14. PMID: 37179364. PMCID: PMC10183008. DOI: 10.1038/s41539-023-00163-0.
Abstract
The ability to organize variable sensory signals into discrete categories is a fundamental process in human cognition thought to underlie many real-world learning problems. Decades of research suggests that two learning systems may support category learning and that categories with different distributional structures (rule-based, information-integration) optimally rely on different learning systems. However, it remains unclear how the same individual learns these different categories and whether the behaviors that support learning success are common or distinct across different categories. In two experiments, we investigate learning and develop a taxonomy of learning behaviors to investigate which behaviors are stable or flexible as the same individual learns rule-based and information-integration categories and which behaviors are common or distinct to learning success for these different types of categories. We found that some learning behaviors are stable in an individual across category learning tasks (learning success, strategy consistency), while others are flexibly task-modulated (learning speed, strategy, stability). Further, success in rule-based and information-integration category learning was supported by both common (faster learning speeds, higher working memory ability) and distinct factors (learning strategies, strategy consistency). Overall, these results demonstrate that even with highly similar categories and identical training tasks, individuals dynamically adjust some behaviors to fit the task and success in learning different kinds of categories is supported by both common and distinct factors. These results illustrate a need for theoretical perspectives of category learning to include nuances of behavior at the level of an individual learner.
Affiliation(s)
- Casey L Roark: Department of Communication Science & Disorders, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Bharath Chandrasekaran: Department of Communication Science & Disorders, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
6
Roark CL, Lescht E, Wray AH, Chandrasekaran B. Auditory and visual category learning in children and adults. Dev Psychol 2023; 59:963-975. PMID: 36862449. PMCID: PMC10164074. DOI: 10.1037/dev0001525.
Abstract
Categories are fundamental to everyday life and the ability to learn new categories is relevant across the lifespan. Categories are ubiquitous across modalities, supporting complex processes such as object recognition and speech perception. Prior work has proposed that different categories may engage learning systems with unique developmental trajectories. There is a limited understanding of how perceptual and cognitive development influences learning as prior studies have examined separate participants in a single modality. The current study presents a comprehensive assessment of category learning in 8-12-year-old children (12 female; 34 white, 1 Asian, 1 more than one race; M household income $85-$100 K) and 18-61-year-old adults (13 female; 32 white, 10 Black or African American, 4 Asian, 2 more than one race, 1 other; M household income $40-55 K) in a broad sample collected online from the United States. Across multiple sessions, participants learned categories across modalities (auditory, visual) that engage different learning systems (explicit, procedural). Unsurprisingly, adults outperformed children across all tasks. However, this enhanced performance was asymmetrical across categories and modalities. Adults far outperformed children in learning visual explicit categories and auditory procedural categories, with fewer differences across development for other types of categories. Adults' general benefit over children was due to enhanced information processing, while their superior performance for visual explicit and auditory procedural categories was associated with less cautious correct responses. These results demonstrate an interaction between perceptual and cognitive development that influences learning of categories that may correspond to the development of real-world skills such as speech perception and reading. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
Affiliation(s)
- Casey L. Roark: Department of Communication Science and Disorders, University of Pittsburgh; Center for the Neural Basis of Cognition
- Erica Lescht: Department of Communication Science and Disorders, University of Pittsburgh
- Amanda Hampton Wray: Department of Communication Science and Disorders, University of Pittsburgh; Center for the Neural Basis of Cognition
- Bharath Chandrasekaran: Department of Communication Science and Disorders, University of Pittsburgh; Center for the Neural Basis of Cognition
7
Kowialiewski B, Krasnoff J, Mizrak E, Oberauer K. Verbal working memory encodes phonological and semantic information differently. Cognition 2023; 233:105364. PMID: 36584522. DOI: 10.1016/j.cognition.2022.105364.
Abstract
Working memory (WM) is often tested through immediate serial recall of word lists. Performance in such tasks is negatively influenced by phonological similarity: People more often get the order of words wrong when they are phonologically similar to each other (e.g., cat, fat, mat). This phonological-similarity effect shows that phonology plays an important role for the representation of serial order in these tasks. By contrast, semantic similarity usually does not impact performance negatively. To resolve and understand this discrepancy, we tested the effects of phonological and semantic similarity for the retention of positional information in WM. Across six experiments (all Ns = 60 young adults), we manipulated between-item semantic and phonological similarity in tasks requiring participants to form and maintain new item-context bindings in WM. Participants were asked to retrieve items from their context, or the contexts from their item. For both retrieval directions, phonological similarity impaired WM for item-context bindings across all experiments. Semantic similarity did not. These results demonstrate that WM encodes phonological and semantic information differently. We propose a WM model accounting for semantic-similarity effects in WM, in which semantic knowledge supports WM through activated long-term memory.
Affiliation(s)
- B Kowialiewski: Department of Psychology, University of Zurich, Switzerland; University of Liège, Liège, Belgium
- J Krasnoff: Department of Psychology, University of Zurich, Switzerland
- E Mizrak: Department of Psychology, University of Zurich, Switzerland; Department of Psychology, University of Sheffield, United Kingdom
- K Oberauer: Department of Psychology, University of Zurich, Switzerland
8
Ahveninen J, Uluç I, Raij T, Nummenmaa A, Mamashli F. Spectrotemporal content of human auditory working memory represented in functional connectivity patterns. Commun Biol 2023; 6:294. PMID: 36941477. PMCID: PMC10027691. DOI: 10.1038/s42003-023-04675-8.
Abstract
Recent research suggests that working memory (WM), the mental sketchpad underlying thinking and communication, is maintained by multiple regions throughout the brain. Whether parts of a stable WM representation could be distributed across these brain regions is, however, an open question. We addressed this question by examining the content-specificity of connectivity-pattern matrices between subparts of cortical regions-of-interest (ROI). These connectivity patterns were calculated from functional MRI obtained during a ripple-sound auditory WM task. Statistical significance was assessed by comparing the decoding results to a null distribution derived from a permutation test considering all comparable two- to four-ROI connectivity patterns. Maintained WM items could be decoded from connectivity patterns across ROIs in frontal, parietal, and superior temporal cortices. All functional connectivity patterns that were specific to maintained sound content extended from early auditory to frontoparietal cortices. Our results demonstrate that WM maintenance is supported by content-specific patterns of functional connectivity across different levels of cortical hierarchy.
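The permutation-based significance test described in this abstract follows a standard recipe: compute the observed decoding accuracy, then rebuild a null distribution by shuffling the item labels and re-scoring many times. The sketch below uses synthetic features and a simple leave-one-out nearest-class-mean decoder; it is an illustration of the general method, not the study's fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 80 trials, 12 connectivity features, two maintained WM items.
n_trials, n_features = 80, 12
labels = rng.integers(0, 2, n_trials)
# Features carry a weak label signal plus noise.
features = labels[:, None] * 0.8 + rng.standard_normal((n_trials, n_features))

def nearest_mean_accuracy(X, y):
    """Leave-one-out nearest-class-mean classification accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - m0) < np.linalg.norm(X[i] - m1) else 1
        correct += pred == y[i]
    return correct / len(y)

observed = nearest_mean_accuracy(features, labels)

# Null distribution: re-score with shuffled labels; the +1 terms give the
# standard small-sample-safe permutation p-value.
null = np.array([nearest_mean_accuracy(features, rng.permutation(labels))
                 for _ in range(200)])
p_value = (np.sum(null >= observed) + 1) / (len(null) + 1)
print(f"observed accuracy {observed:.2f}, permutation p = {p_value:.3f}")
```

Because the label shuffle destroys any feature-label association while preserving the data's structure, the null distribution reflects chance decoding for this exact dataset and decoder, which is what makes the test valid without distributional assumptions.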
Affiliation(s)
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Işıl Uluç: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Tommi Raij: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Aapo Nummenmaa: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
9
Gabay Y, Roark CL, Holt LL. Impaired and Spared Auditory Category Learning in Developmental Dyslexia. Psychol Sci 2023; 34:468-480. PMID: 36791783. DOI: 10.1177/09567976231151581.
Abstract
Categorization has a deep impact on behavior, but whether category learning is served by a single system or multiple systems remains debated. Here, we designed two well-equated nonspeech auditory category learning challenges to draw on putative procedural (information-integration) versus declarative (rule-based) learning systems among adult Hebrew-speaking control participants and individuals with dyslexia, a language disorder that has been linked to a selective disruption in the procedural memory system and in which phonological deficits are ubiquitous. We observed impaired information-integration category learning and spared rule-based category learning in the dyslexia group compared with the neurotypical group. Quantitative model-based analyses revealed reduced use of, and slower shifting to, optimal procedural-based strategies in dyslexia with hypothesis-testing strategy use on par with control participants. The dissociation is consistent with multiple category learning systems and points to the possibility that procedural learning inefficiencies across categories defined by complex, multidimensional exemplars may result in difficulty in phonetic category acquisition in dyslexia.
Affiliation(s)
- Yafit Gabay: Department of Special Education and the Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa
- Casey L Roark: Department of Communication Science and Disorders, Center for the Neural Basis of Cognition, University of Pittsburgh
- Lori L Holt: Department of Psychology, Neuroscience Institute, Center for the Neural Basis of Cognition, Carnegie Mellon University
10
Bianco R, Chait M. No Link Between Speech-in-Noise Perception and Auditory Sensory Memory: Evidence From a Large Cohort of Older and Younger Listeners. Trends Hear 2023; 27:23312165231190688. PMID: 37828868. PMCID: PMC10576936. DOI: 10.1177/23312165231190688.
Abstract
A growing literature is demonstrating a link between working memory (WM) and speech-in-noise (SiN) perception. However, the nature of this correlation and which components of WM might underlie it, are being debated. We investigated how SiN reception links with auditory sensory memory (aSM) - the low-level processes that support the short-term maintenance of temporally unfolding sounds. A large sample of old (N = 199, 60-79 yo) and young (N = 149, 20-35 yo) participants was recruited online and performed a coordinate response measure-based speech-in-babble task that taps listeners' ability to track a speech target in background noise. We used two tasks to investigate implicit and explicit aSM. Both were based on tone patterns overlapping in processing time scales with speech (presentation rate of tones 20 Hz; of patterns 2 Hz). We hypothesised that a link between SiN and aSM may be particularly apparent in older listeners due to age-related reduction in both SiN reception and aSM. We confirmed impaired SiN reception in the older cohort and demonstrated reduced aSM performance in those listeners. However, SiN and aSM did not share variability. Across the two age groups, SiN performance was predicted by a binaural processing test and age. The results suggest that previously observed links between WM and SiN may relate to the executive components and other cognitive demands of the used tasks. This finding helps to constrain the search for the perceptual and cognitive factors that explain individual variability in SiN performance.
Affiliation(s)
- Roberta Bianco: Ear Institute, University College London, London, UK; Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome, Italy
- Maria Chait: Ear Institute, University College London, London, UK
11
Lim SXL, Höchenberger R, Ruda I, Fink GR, Viswanathan S, Ohla K. The capacity and organization of gustatory working memory. Sci Rep 2022; 12:8056. PMID: 35577835. PMCID: PMC9110745. DOI: 10.1038/s41598-022-12005-x.
Abstract
Remembering a particular taste is crucial in food intake and associative learning. We investigated whether taste can be dynamically encoded, maintained, and retrieved on short time scales consistent with working memory (WM). We use novel single and multi-item taste recognition tasks to show that a single taste can be reliably recognized despite repeated oro-sensory interference suggesting active and resilient maintenance (Experiment 1, N = 21). When multiple tastes were presented (Experiment 2, N = 20), the resolution with which these were maintained depended on their serial position, and recognition was reliable for up to three tastes suggesting a limited capacity of gustatory WM. Lastly, stimulus similarity impaired recognition with increasing set size, which seemed to mask the awareness of capacity limitations. Together, the results advocate a hybrid model of gustatory WM with a limited number of slots where items are stored with varying precision.
12
Liu Q, Ulloa A, Horwitz B. The Spatiotemporal Neural Dynamics of Intersensory Attention Capture of Salient Stimuli: A Large-Scale Auditory-Visual Modeling Study. Front Comput Neurosci 2022; 16:876652. PMID: 35645750. PMCID: PMC9133449. DOI: 10.3389/fncom.2022.876652.
Abstract
The spatiotemporal dynamics of the neural mechanisms underlying endogenous (top-down) and exogenous (bottom-up) attention, and how attention is controlled or allocated in intersensory perception are not fully understood. We investigated these issues using a biologically realistic large-scale neural network model of visual-auditory object processing of short-term memory. We modeled and incorporated into our visual-auditory object-processing model the temporally changing neuronal mechanisms for the control of endogenous and exogenous attention. The model successfully performed various bimodal working memory tasks, and produced simulated behavioral and neural results that are consistent with experimental findings. Simulated fMRI data were generated that constitute predictions that human experiments could test. Furthermore, in our visual-auditory bimodality simulations, we found that increased working memory load in one modality would reduce the distraction from the other modality, and a possible network mediating this effect is proposed based on our model.
Affiliation(s)
- Qin Liu
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Department of Physics, University of Maryland, College Park, College Park, MD, United States
- Antonio Ulloa
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- Center for Information Technology, National Institutes of Health, Bethesda, MD, United States
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, United States
- *Correspondence: Barry Horwitz,
13
Long-term priors constrain category learning in the context of short-term statistical regularities. Psychon Bull Rev 2022; 29:1925-1937. [PMID: 35524011 DOI: 10.3758/s13423-022-02114-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Accepted: 04/27/2022] [Indexed: 11/08/2022]
Abstract
Cognitive systems face a constant tension between maintaining existing representations that have been fine-tuned to long-term input regularities and adapting representations to meet the needs of short-term input that may deviate from long-term norms. Systems must balance the stability of long-term representations with plasticity to accommodate novel contexts. We investigated the interaction between perceptual biases or priors acquired over the long term and sensitivity to statistical regularities introduced in the short term. Participants were first passively exposed to short-term acoustic regularities and then learned categories in a supervised training task that either conflicted or aligned with long-term perceptual priors. We found that the long-term priors had a robust and pervasive impact on categorization behavior. In contrast, behavior was not influenced by the nature of the short-term passive exposure. These results demonstrate that perceptual priors place strong constraints on the course of learning and that short-term passive exposure to acoustic regularities has limited impact on directing subsequent category learning.
14
Mechanisms of associative word learning: Benefits from the visual modality and synchrony of labeled objects. Cortex 2022; 152:36-52. [DOI: 10.1016/j.cortex.2022.03.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/27/2021] [Revised: 12/05/2021] [Accepted: 03/30/2022] [Indexed: 11/21/2022]
15
Mamashli F, Khan S, Hämäläinen M, Jas M, Raij T, Stufflebeam SM, Nummenmaa A, Ahveninen J. Synchronization patterns reveal neuronal coding of working memory content. Cell Rep 2021; 36:109566. [PMID: 34433024 PMCID: PMC8428113 DOI: 10.1016/j.celrep.2021.109566] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 09/17/2020] [Revised: 04/26/2021] [Accepted: 07/28/2021] [Indexed: 11/24/2022] Open
Abstract
Neuronal oscillations are suggested to play an important role in auditory working memory (WM), but their contribution to content-specific representations has remained unclear. Here, we measure magnetoencephalography during a retro-cueing task with parametric ripple-sound stimuli, which are spectrotemporally similar to speech but resist non-auditory memory strategies. Using machine learning analyses, with rigorous between-subject cross-validation and non-parametric permutation testing, we show that memorized sound content is strongly represented in phase-synchronization patterns between subregions of auditory and frontoparietal cortices. These phase-synchronization patterns predict the memorized sound content steadily across the studied maintenance period. In addition to connectivity-based representations, there are indices of more local, “activity-silent” representations in auditory cortices, where the decoding accuracy of WM content significantly increases after task-irrelevant “impulse stimuli.” Our results demonstrate that synchronization patterns across auditory sensory and association areas orchestrate neuronal coding of auditory WM content. This connectivity-based coding scheme could also extend beyond the auditory domain.

Mamashli et al. use machine learning analyses of human magnetoencephalography (MEG) recordings to study “working memory,” the maintenance of information in mind over brief periods of time. Their results show that the human brain maintains working memory content in transient functional connectivity patterns across sensory and association areas.
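The between-subject cross-validation and permutation-testing logic described in this abstract can be sketched in miniature. The sketch below is purely illustrative: a nearest-centroid classifier and synthetic one-dimensional features stand in for the authors' MEG decoding pipeline, and all data are made up.

```python
import random

def loso_accuracy(data):
    """Leave-one-subject-out decoding with a nearest-centroid rule.
    `data` maps subject -> list of (feature, label) trials; the classifier
    is trained on every other subject and tested on the held-out one."""
    correct = total = 0
    for held in data:
        train = [t for s, ts in data.items() if s != held for t in ts]
        cents = {}
        for lab in {l for _, l in train}:
            feats = [f for f, l in train if l == lab]
            cents[lab] = sum(feats) / len(feats)
        for f, lab in data[held]:
            pred = min(cents, key=lambda c: abs(f - cents[c]))
            correct += pred == lab
            total += 1
    return correct / total

random.seed(0)
# Toy data: 5 subjects, 20 trials each; the feature weakly encodes the
# remembered category (label 0 or 1) with Gaussian noise.
data = {s: [(random.gauss(lab, 0.5), lab) for lab in [0, 1] * 10]
        for s in range(5)}
observed = loso_accuracy(data)

# Non-parametric permutation test: shuffle labels within each subject and
# ask how often chance-level decoding matches the observed accuracy.
perm_accs = []
for _ in range(200):
    shuffled = {}
    for s, ts in data.items():
        labs = [lab for _, lab in ts]
        random.shuffle(labs)
        shuffled[s] = [(f, lab) for (f, _), lab in zip(ts, labs)]
    perm_accs.append(loso_accuracy(shuffled))
p_value = sum(a >= observed for a in perm_accs) / len(perm_accs)
```

Holding out whole subjects, rather than individual trials, is what makes the accuracy estimate generalize across individuals rather than across trials of the same person.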
Affiliation(s)
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Mainak Jas
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA; Departments of Physical Medicine and Rehabilitation and Neurobiology, Northwestern University, 710 North Lake Shore Drive, Chicago, IL 60611, USA
- Steven M Stufflebeam
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Aapo Nummenmaa
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Bldg. 149 13th Street, Charlestown, MA 02129, USA; Department of Radiology, Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
16
Roark CL, Paulon G, Sarkar A, Chandrasekaran B. Comparing perceptual category learning across modalities in the same individuals. Psychon Bull Rev 2021; 28:898-909. [PMID: 33532985 PMCID: PMC8222058 DOI: 10.3758/s13423-021-01878-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Accepted: 01/04/2021] [Indexed: 11/08/2022]
Abstract
Category learning is a fundamental process in human cognition that spans the senses. However, much still remains unknown about the mechanisms supporting learning in different modalities. In the current study, we directly compared auditory and visual category learning in the same individuals. Thirty participants (22 F; 18-32 years old) completed two unidimensional rule-based category learning tasks in a single day - one with auditory stimuli and another with visual stimuli. We replicated the results in a second experiment with a larger online sample (N = 99, 45 F, 18-35 years old). The categories were identically structured in the two modalities to facilitate comparison. We compared categorization accuracy, decision processes as assessed through drift-diffusion models, and the generalizability of the resulting category representations through a generalization test. We found that individuals learned auditory and visual categories to similar extents and that accuracies were highly correlated across the two tasks. Participants had similar evidence accumulation rates in later learning, but early on had slower rates for visual than auditory learning. Participants also demonstrated differences in decision thresholds across modalities. Participants had more categorical, generalizable representations for visual than auditory categories. These results suggest that some modality-general cognitive processes support category learning, but that the modality of the stimuli may also affect category learning behavior and outcomes.
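The drift-diffusion analysis named in this abstract can be illustrated with a minimal simulation. The parameter values below are hypothetical, not the authors' fitted estimates: evidence accumulates at a drift rate, corrupted by noise, until it crosses a decision threshold, yielding a choice and a response time.

```python
import random

def simulate_ddm_trial(drift, threshold, noise=1.0, dt=0.001, max_t=5.0):
    """One drift-diffusion trial: evidence starts at 0 and accumulates
    with mean rate `drift` plus Gaussian noise until it crosses
    +threshold (correct) or -threshold (error). Returns (correct, rt)."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_t:
        evidence += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return (1 if evidence >= threshold else 0), t

random.seed(1)
# Hypothetical values: a higher drift rate (faster evidence accumulation)
# yields quicker, more accurate decisions at the same decision threshold.
fast = [simulate_ddm_trial(drift=2.0, threshold=1.0) for _ in range(500)]
slow = [simulate_ddm_trial(drift=0.5, threshold=1.0) for _ in range(500)]
acc_fast = sum(c for c, _ in fast) / len(fast)
acc_slow = sum(c for c, _ in slow) / len(slow)
rt_fast = sum(t for _, t in fast) / len(fast)
rt_slow = sum(t for _, t in slow) / len(slow)
```

Separating the drift rate from the threshold is what lets such models distinguish slower evidence accumulation from more cautious responding, the two effects the abstract reports across modalities.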
Affiliation(s)
- Casey L Roark
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
- Giorgio Paulon
- Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX, USA
- Abhra Sarkar
- Department of Statistics and Data Sciences, The University of Texas at Austin, Austin, TX, USA
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
17
Dhrruvakumar S, Yathiraj A. Relation between auditory memory and global memory in young and older adults. Eur Arch Otorhinolaryngol 2021; 278:2577-2583. [PMID: 33386969 DOI: 10.1007/s00405-020-06512-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/04/2020] [Accepted: 11/19/2020] [Indexed: 11/28/2022]
Abstract
PURPOSE Controversy exists as to whether auditory memory is modality-specific or not. To determine this, the study investigated the relation between the scores obtained on an auditory memory test and those obtained on a global memory test in adults. The study also aimed to compare the scores of young and older adults on the two memory tests. METHODS Thirty young adults aged 18 to 30 years and 30 older adults aged 58 to 70 years, having normal hearing sensitivity, were studied. Auditory memory was evaluated using the 'Kannada auditory memory and sequencing test', while global memory was assessed using the memory domain of the 'Cognitive linguistic assessment protocol for Adults' and the 'Memory ability checklist'. RESULTS No significant correlation was seen between the scores obtained on the auditory memory and global memory tests in either young or older adults. Also, the scores on the memory ability checklist did not show any correlation with either the global memory scores or the auditory memory scores in either participant group. Additionally, the scores of the three memory measures were found to be significantly different from each other. The older adults obtained significantly poorer scores on all three memory tools compared to young adults. CONCLUSION The findings indicated that auditory memory is modality-specific and independent of global memory. Additionally, all three measures were sensitive in detecting age-related decline in memory.
Affiliation(s)
- Shubhaganga Dhrruvakumar
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, 570 006, India.
- Asha Yathiraj
- JSS Institute of Speech and Hearing, MG Road, Mysuru, 570004, India
18
Abstract
We are capable of storing a virtually infinite amount of visual information in visual long-term memory (VLTM) storage. At the same time, the amount of visual information we can encode and maintain in visual short-term memory (VSTM) at a given time is severely limited. How do these two memory systems interact to accumulate such a vast amount of VLTM? In this series of experiments, we exploited interindividual and intraindividual differences in VSTM capacity to examine the direct involvement of VSTM in determining the encoding rate (or "bandwidth") of VLTM. Here, we found that the amount of visual information encoded into VSTM at a given moment (i.e., VSTM capacity), but neither the maintenance duration nor the test process, predicts the effective encoding "bandwidth" of VLTM.
Affiliation(s)
- Keisuke Fukuda
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Rd., Mississauga, ON, L5L 1C6, Canada.
- Edward K Vogel
- Department of Psychology, University of Chicago, Chicago, IL, USA
19
Automatic Frequency-Shift Detection in the Auditory System: A Review of Psychophysical Findings. Neuroscience 2018; 389:30-40. [DOI: 10.1016/j.neuroscience.2017.08.045] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 03/29/2017] [Revised: 06/20/2017] [Accepted: 08/26/2017] [Indexed: 11/24/2022]
20
Elkhetali AS, Fleming LL, Vaden RJ, Nenert R, Mendle JE, Visscher KM. Background connectivity between frontal and sensory cortex depends on task state, independent of stimulus modality. Neuroimage 2018; 184:790-800. [PMID: 30237034 DOI: 10.1016/j.neuroimage.2018.09.040] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 01/23/2018] [Revised: 09/13/2018] [Accepted: 09/15/2018] [Indexed: 10/28/2022] Open
Abstract
The human brain has the ability to process identical information differently depending on the task. In order to perform a given task, the brain must select and react to the appropriate stimuli while ignoring other irrelevant stimuli. The dynamic nature of environmental stimuli and behavioral intentions requires an equally dynamic set of responses within the brain. Collectively, these responses act to set up and maintain states needed to perform a given task. However, the mechanisms that allow for setting up and maintaining a task state are not fully understood. Prior evidence suggests that one possible mechanism for maintaining a task state may be through altering 'background connectivity,' connectivity that exists independently of the trials of a task. Although previous studies have suggested that background connectivity contributes to a task state, these studies have typically not controlled for stimulus characteristics, or have focused primarily on relationships among areas involved with visual sensory processing. In the present study we examined background connectivity during tasks involving both visual and auditory stimuli. We examined the connectivity profiles of both visual and auditory sensory cortex that allow for selection of task-relevant stimuli, demonstrating the existence of a potentially universal pattern of background connectivity underlying attention to a stimulus. Participants were presented with simultaneous auditory and visual stimuli and were instructed to respond to only one, while ignoring the other. Using functional MRI, we observed task-based modulation of the background connectivity profile for both the auditory and visual cortex to certain brain regions. There was an increase in background connectivity between the task-relevant sensory cortex and control areas in the frontal cortex. This increase in synchrony when receiving the task-relevant stimulus, as compared to the task-irrelevant stimulus, may maintain pathways for passing information within the cortex. These task-based modulations of connectivity occur independently of stimuli and could be one way the brain sets up and maintains a task state.
Affiliation(s)
- Abdurahman S Elkhetali
- Department of Neurology, University of Utah School of Medicine, Salt Lake City, UT, 84132, USA
- Leland L Fleming
- Department of Neurobiology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, 35294, USA
- Ryan J Vaden
- Department of Neurobiology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, 35294, USA
- Rodolphe Nenert
- Department of Neurology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, 35294, USA
- Jane E Mendle
- Department of Human Development, Cornell University, Ithaca, NY, 14853, USA
- Kristina M Visscher
- Department of Neurobiology, University of Alabama at Birmingham School of Medicine, Birmingham, AL, 35294, USA.
21
Pillai R, Yathiraj A. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children. Int J Pediatr Otorhinolaryngol 2017; 100:23-34. [PMID: 28802378 DOI: 10.1016/j.ijporl.2017.06.010] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 03/05/2017] [Revised: 06/03/2017] [Accepted: 06/09/2017] [Indexed: 10/19/2022]
Abstract
OBJECTIVE The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. METHODS Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. RESULTS The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. CONCLUSIONS The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality.
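The Bland-Altman analysis mentioned in this abstract assesses agreement between two measurement methods by taking, for each subject, the difference between methods; the mean difference (bias) and the 95% "limits of agreement" (bias ± 1.96 SD of the differences) summarize how closely the methods agree. A minimal sketch with made-up scores, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(scores_a, scores_b):
    """Return (bias, lower_loa, upper_loa): the mean difference between
    two paired measurement methods and its 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Hypothetical memory scores for the same children in two modality
# conditions; close agreement shows up as a small bias and narrow limits.
auditory = [12, 14, 11, 15, 13, 12, 14, 13]
visual = [11, 13, 12, 14, 12, 11, 13, 12]
bias, lower, upper = bland_altman(auditory, visual)
```

In the full method the differences are also plotted against the per-subject means of the two measurements, which reveals whether disagreement grows with score magnitude.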
Affiliation(s)
- Roshni Pillai
- Department of Audiology, All India Institute of Speech & Hearing, Manasagangothri, Mysuru 570006, India.
- Asha Yathiraj
- Department of Audiology, All India Institute of Speech & Hearing, Manasagangothri, Mysuru 570006, India.
22
Missaire M, Fraize N, Joseph MA, Hamieh AM, Parmentier R, Marighetto A, Salin PA, Malleret G. Long-term effects of interference on short-term memory performance in the rat. PLoS One 2017; 12:e0173834. [PMID: 28288205 PMCID: PMC5348021 DOI: 10.1371/journal.pone.0173834] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 11/25/2016] [Accepted: 02/27/2017] [Indexed: 11/19/2022] Open
Abstract
A distinction has always been made between long-term and short-term memory (also now called working memory, WM). The obvious difference between these two kinds of memory concerns the duration of information storage: information is supposedly transiently stored in WM while it is considered durably consolidated into long-term memory. It is well acknowledged that the content of WM is erased and reset after a short time, to prevent irrelevant information from proactively interfering with newly stored information. In the present study, we used typical WM radial maze tasks to question the brief lifespan of spatial WM content in rodents. Groups of rats were submitted to one of two different WM tasks in a radial maze: a WM task involving the repetitive presentation of the same pair of arms, expected to induce a high level of proactive interference (PI) (HIWM task), or a task using a different pair in each trial, expected to induce a low level of PI (LIWM task). Performance was effectively lower in the HIWM group than in the LIWM group in the final trial of each training session, indicative of a "within-session/short-term" PI effect. However, we also observed a different "between-session/long-term" PI effect between the two groups: while the performance of LIWM-trained rats remained stable over days, the performance of HIWM rats dropped after 10 days of training, and this impairment was visible from the very first trial of the day, hence not attributable to within-session PI. We also showed that a 24-hour gap between training sessions, known to allow consolidation processes to unfold, was a necessary and sufficient condition for the long-term PI effect to occur. These findings suggest that in the HIWM task, WM content was not entirely reset between training sessions and that, in specific conditions, WM content can outlast its purpose by being stored more permanently, generating a long-term deleterious effect of PI. An alternative explanation is that WM content could be transferred to and stored more permanently in an intermediary form of memory between WM and long-term memory.
Affiliation(s)
- Mégane Missaire
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Nicolas Fraize
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Mickaël Antoine Joseph
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Al Mahdy Hamieh
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Régis Parmentier
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Lyon, France
- Institut National de la Santé et de la Recherche Médicale (INSERM), Lyon, France
- Aline Marighetto
- Neurocentre Magendie, INSERM U1215, Université de Bordeaux, Bordeaux, France
- Paul Antoine Salin
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Lyon, France
- Institut National de la Santé et de la Recherche Médicale (INSERM), Lyon, France
- Gaël Malleret
- Forgetting and Cortical Dynamics Team, Lyon Neuroscience Research Center (CRNL), University Lyon 1, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Lyon, France
- Institut National de la Santé et de la Recherche Médicale (INSERM), Lyon, France
23
Abstract
Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.
24
Early declarative memory predicts productive language: A longitudinal study of deferred imitation and communication at 9 and 16 months. J Exp Child Psychol 2016; 151:109-19. [DOI: 10.1016/j.jecp.2016.01.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Received: 04/30/2015] [Revised: 01/11/2016] [Accepted: 01/26/2016] [Indexed: 11/21/2022]
25
Abstract
Our understanding of short-term recognition memory can be enhanced by careful choice and control of test materials. Theory-driven manipulation of memory test stimuli, including visual textures, human faces, and complex sounds, minimizes individual differences and makes it possible to predict recognition performance for specific combinations of stimulus items. This stimulus-oriented approach to memory reveals that stimulus similarity plays two different important roles in recognition memory. By exploiting tools used in psychophysics, it is possible to generate mnemometric functions: detailed "snapshots" that capture key features of subjects' memory strength.
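The role of stimulus similarity described above is often formalized with summed-similarity models of recognition. The sketch below is an illustrative toy with an exponential similarity gradient and arbitrary parameter values, not the authors' mnemometric method: a probe is judged "old" when its summed similarity to the studied items exceeds a criterion.

```python
import math

def responds_old(probe, study_items, tau=2.0, criterion=1.0):
    """Toy summed-similarity recognition rule: similarity to each studied
    item falls off exponentially with distance along the stimulus
    dimension; respond 'old' when the summed similarity exceeds a
    criterion."""
    summed = sum(math.exp(-tau * abs(probe - item)) for item in study_items)
    return summed > criterion

study = [0.2, 0.5, 0.8]                  # studied stimulus values (made up)
near_probe = responds_old(0.52, study)   # probe similar to a studied item
far_probe = responds_old(3.0, study)     # dissimilar lure
```

A model of this form captures both roles of similarity: probe-to-study similarity drives false alarms to similar lures, while similarity among the studied items themselves raises the overall summed signal.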
26
Siedenburg K, McAdams S. The role of long-term familiarity and attentional maintenance in short-term memory for timbre. Memory 2016; 25:550-564. [PMID: 27314886 DOI: 10.1080/09658211.2016.1197945] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Indexed: 10/21/2022]
Abstract
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched with one of three previously presented sounds (item recognition). In Exp. 1, musicians better recognised familiar acoustic compared to unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.
Affiliation(s)
- Kai Siedenburg
- Schulich School of Music, McGill University, Montreal, QC, Canada; Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Stephen McAdams
- Schulich School of Music, McGill University, Montreal, QC, Canada
27
Plakke B, Romanski LM. Neural circuits in auditory and audiovisual memory. Brain Res 2016; 1640:278-88. [PMID: 26656069 PMCID: PMC4868791 DOI: 10.1016/j.brainres.2015.11.042] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Received: 06/09/2015] [Revised: 10/28/2015] [Accepted: 11/25/2015] [Indexed: 01/01/2023]
Abstract
Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years, neurophysiological and lesion studies have indicated a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- B Plakke
- Department of Neurobiology & Anatomy, University of Rochester School of Medicine & Dentistry, United States.
- L M Romanski
- Department of Neurobiology & Anatomy, University of Rochester School of Medicine & Dentistry, United States.
28
Scott BH, Mishkin M. Auditory short-term memory in the primate auditory cortex. Brain Res 2016; 1640:264-77. [PMID: 26541581 PMCID: PMC4853305 DOI: 10.1016/j.brainres.2015.10.048] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Received: 07/01/2015] [Revised: 10/17/2015] [Accepted: 10/26/2015] [Indexed: 12/20/2022]
Abstract
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA.
- Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA.
29
Fogerty D, Humes LE, Busey TA. Age-related declines in early sensory memory: identification of rapid auditory and visual stimulus sequences. Front Aging Neurosci 2016; 8:90. [PMID: 27199737] [PMCID: PMC4858528] [DOI: 10.3389/fnagi.2016.00090]
Abstract
Age-related temporal-processing declines of rapidly presented sequences may involve contributions of sensory memory. This study investigated recall for rapidly presented auditory (vowel) and visual (letter) sequences presented at six different stimulus onset asynchronies (SOA) that spanned threshold SOAs for sequence identification. Younger, middle-aged, and older adults participated in all tasks. Results were investigated at both equivalent performance levels (i.e., SOA threshold) and at identical physical stimulus values (i.e., SOAs). For four-item sequences, results demonstrated best performance for the first and last items in the auditory sequences, but only the first item for visual sequences. For two-item sequences, adults identified the second vowel or letter significantly better than the first. Overall, when temporal-order performance was equated for each individual by testing at SOA thresholds, recall accuracy for each position across the age groups was highly similar. These results suggest that modality-specific processing declines of older adults primarily determine temporal-order performance for rapid sequences. However, there is some evidence for a second amodal processing decline in older adults related to early sensory memory for final items in a sequence. This selective deficit was observed particularly for longer sequence lengths and was not accounted for by temporal masking.
Affiliation(s)
- Daniel Fogerty
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Larry E. Humes
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Thomas A. Busey
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
30
Audiovisual integration facilitates monkeys' short-term memory. Anim Cogn 2016; 19:799-811. [PMID: 27010716] [DOI: 10.1007/s10071-016-0979-0]
Abstract
Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.
31
Joseph S, Teki S, Kumar S, Husain M, Griffiths TD. Resource allocation models of auditory working memory. Brain Res 2016; 1640:183-92. [PMID: 26835560] [DOI: 10.1016/j.brainres.2016.01.044]
Abstract
Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Sabine Joseph
- Institute of Cognitive Neuroscience, University College London, UK; Institute of Neurology, University College London, UK.
- Sundeep Teki
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford OX1 3QX, UK
- Sukhbinder Kumar
- Wellcome Trust Centre for Neuroimaging, University College London, London, UK; Institute of Neuroscience, Medical School, Newcastle University, Newcastle, UK
- Masud Husain
- Department of Clinical Neuroscience, University of Oxford, UK; Department of Experimental Psychology, University of Oxford, UK
- Timothy D Griffiths
- Wellcome Trust Centre for Neuroimaging, University College London, London, UK; Institute of Neuroscience, Medical School, Newcastle University, Newcastle, UK.
32
The role of age and executive function in auditory category learning. J Exp Child Psychol 2015; 142:48-65. [PMID: 26491987] [DOI: 10.1016/j.jecp.2015.09.018]
Abstract
Auditory categorization is a natural and adaptive process that allows for the organization of high-dimensional, continuous acoustic information into discrete representations. Studies in the visual domain have identified a rule-based learning system that learns and reasons via a hypothesis-testing process that requires working memory and executive attention. The rule-based learning system in vision shows a protracted development, reflecting the influence of maturing prefrontal function on visual categorization. The aim of the current study was twofold: (a) to examine the developmental trajectory of rule-based auditory category learning from childhood through adolescence and into early adulthood and (b) to examine the extent to which individual differences in rule-based category learning relate to individual differences in executive function. A sample of 60 participants with normal hearing (20 children, ages 7-12 years; 21 adolescents, ages 13-19 years; and 19 young adults, ages 20-23 years) learned to categorize novel dynamic "ripple" sounds using trial-by-trial feedback. The spectrotemporally modulated ripple sounds are considered the auditory equivalent of the well-studied "Gabor" patches in the visual domain. Results reveal that auditory categorization accuracy improved with age, with young adults outperforming children and adolescents. Computational modeling analyses indicated that the use of the task-optimal strategy (i.e., a conjunctive rule-based learning strategy) improved with age. Notably, individual differences in executive flexibility significantly predicted auditory category learning success. The current findings demonstrate a protracted development of rule-based auditory categorization. The results further suggest that executive flexibility coupled with perceptual processes play important roles in successful rule-based auditory category learning.
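A conjunctive rule of the kind identified by the modeling above can be sketched as two independent decision criteria combined with a logical AND. The dimension names and criterion values below are illustrative assumptions, not the study's fitted parameters:

```python
def conjunctive_rule(spectral_mod, temporal_mod,
                     spectral_crit=0.5, temporal_crit=10.0):
    """Conjunctive rule-based classifier: respond category "A" only if
    the stimulus exceeds a criterion on BOTH dimensions, else "B".
    Criteria here are arbitrary illustrations, not fitted values."""
    if spectral_mod > spectral_crit and temporal_mod > temporal_crit:
        return "A"
    return "B"

# A stimulus above both criteria is "A"; failing either yields "B".
print(conjunctive_rule(0.8, 12.0))  # A
print(conjunctive_rule(0.8, 4.0))   # B
```

A hypothesis-testing learner of this type would adjust the two criteria trial by trial in response to feedback, which is what the model-based analyses track when they ask whether a participant's responses are best described by a conjunctive rule.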
33
Griffis JC, Elkhetali AS, Vaden RJ, Visscher KM. Distinct effects of trial-driven and task set-related control in primary visual cortex. Neuroimage 2015; 120:285-297. [PMID: 26163806] [DOI: 10.1016/j.neuroimage.2015.07.005]
Abstract
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of inter-modal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: one, working trial-by-trial, differently modulates activity across different eccentricity sectors (portions of V1 corresponding to different visual eccentricities); the second works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas.
Affiliation(s)
- Joseph C Griffis
- The University of Alabama at Birmingham, Department of Psychology
- Ryan J Vaden
- The University of Alabama at Birmingham, Department of Neurobiology
34
Griffis JC, Elkhetali AS, Burge WK, Chen RH, Visscher KM. Retinotopic patterns of background connectivity between V1 and fronto-parietal cortex are modulated by task demands. Front Hum Neurosci 2015; 9:338. [PMID: 26106320] [PMCID: PMC4458688] [DOI: 10.3389/fnhum.2015.00338]
Abstract
Attention facilitates the processing of task-relevant visual information and suppresses interference from task-irrelevant information. Modulations of neural activity in visual cortex depend on attention, and likely result from signals originating in fronto-parietal and cingulo-opercular regions of cortex. Here, we tested the hypothesis that attentional facilitation of visual processing is accomplished in part by changes in how brain networks involved in attentional control interact with sectors of V1 that represent different retinal eccentricities. We measured the strength of background connectivity between fronto-parietal and cingulo-opercular regions with different eccentricity sectors in V1 using functional MRI data that were collected while participants performed tasks involving attention to either a centrally presented visual stimulus or a simultaneously presented auditory stimulus. We found that when the visual stimulus was attended, background connectivity between V1 and the left frontal eye fields (FEF), left intraparietal sulcus (IPS), and right IPS varied strongly across different eccentricity sectors in V1 so that foveal sectors were more strongly connected than peripheral sectors. This retinotopic gradient was weaker when the visual stimulus was ignored, indicating that it was driven by attentional effects. Greater task-driven differences between foveal and peripheral sectors in background connectivity to these regions were associated with better performance on the visual task and faster response times on correct trials. These findings are consistent with the notion that attention drives the configuration of task-specific functional pathways that enable the prioritized processing of task-relevant visual information, and show that the prioritization of visual information by attentional processes may be encoded in the retinotopic gradient of connectivity between V1 and fronto-parietal regions.
Affiliation(s)
- Joseph C Griffis
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, USA
- Wesley K Burge
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, USA
- Richard H Chen
- Department of Neurobiology, University of Alabama at Birmingham, Birmingham, AL, USA
- Kristina M Visscher
- Department of Neurobiology, University of Alabama at Birmingham, Birmingham, AL, USA
35
Elkhetali AS, Vaden RJ, Pool SM, Visscher KM. Early visual cortex reflects initiation and maintenance of task set. Neuroimage 2014; 107:277-288. [PMID: 25485712] [DOI: 10.1016/j.neuroimage.2014.11.061]
Abstract
The human brain is able to process information flexibly, depending on a person's task. The mechanisms underlying this ability to initiate and maintain a task set are not well understood, but they are important for understanding the flexibility of human behavior and developing therapies for disorders involving attention. Here we investigate the differential roles of early visual cortical areas in initiating and maintaining a task set. Using functional Magnetic Resonance Imaging (fMRI), we characterized three different components of task set-related, but trial-independent activity in retinotopically mapped areas of early visual cortex, while human participants performed attention demanding visual or auditory tasks. These trial-independent effects reflected: (1) maintenance of attention over a long duration, (2) orienting to a cue, and (3) initiation of a task set. Participants performed tasks that differed in the modality of stimulus to be attended (auditory or visual) and in whether there was a simultaneous distractor (auditory only, visual only, or simultaneous auditory and visual). We found that patterns of trial-independent activity in early visual areas (V1, V2, V3, hV4) depend on attended modality, but not on stimuli. Further, different early visual areas play distinct roles in the initiation of a task set. In addition, activity associated with maintaining a task set tracks with a participant's behavior. These results show that trial-independent activity in early visual cortex reflects initiation and maintenance of a person's task set.
Affiliation(s)
- Abdurahman S Elkhetali
- Neurobiology Department, University of Alabama at Birmingham, CIRC 111D, 1530 3rd Avenue South, Birmingham, AL 35294, USA.
- Ryan J Vaden
- Neurobiology Department, University of Alabama at Birmingham, CIRC 111D, 1530 3rd Avenue South, Birmingham, AL 35294, USA.
- Sean M Pool
- Biomedical Engineering, University of Alabama at Birmingham, 1530 3rd Avenue South, Birmingham, AL 35294, USA.
- Kristina M Visscher
- Neurobiology Department, University of Alabama at Birmingham; Biomedical Engineering, University of Alabama at Birmingham; Psychology Department, University of Alabama at Birmingham, 1530 3rd Avenue South, Birmingham, AL 35294, USA.
36
Lakshminarayanan B, Stanton C, O'Toole PW, Ross RP. Compositional dynamics of the human intestinal microbiota with aging: implications for health. J Nutr Health Aging 2014; 18:773-86. [PMID: 25389954] [DOI: 10.1007/s12603-014-0549-6]
Abstract
The human gut contains trillions of microbes which form an essential part of the complex ecosystem of the host. This microbiota is relatively stable throughout adult life, but may fluctuate over time with aging and disease. The gut microbiota serves a number of functions including roles in energy provision, nutrition and also in the maintenance of host health such as protection against pathogens. This review summarizes the age-related changes in the microbiota of the gastrointestinal tract (GIT) and the link between the gut microbiota in health and disease. Understanding the composition and function of the gut microbiota along with the changes it undergoes over time should aid the design of novel therapeutic strategies to counteract such alterations. These strategies include probiotic and prebiotic preparations as well as targeted nutrients, designed to enrich the gut microbiota of the aging population.
Affiliation(s)
- B Lakshminarayanan
- R. Paul Ross, Teagasc Food Research Centre, Moorepark, Fermoy, Co. Cork, Ireland.
37
Chandrasekaran B, Koslov SR, Maddox WT. Toward a dual-learning systems model of speech category learning. Front Psychol 2014; 5:825. [PMID: 25132827] [PMCID: PMC4116788] [DOI: 10.3389/fpsyg.2014.00825]
Abstract
More than two decades of work in vision posits the existence of dual-learning systems of category learning. The reflective system uses working memory to develop and test rules for classifying in an explicit fashion, while the reflexive system operates by implicitly associating perception with actions that lead to reinforcement. Dual-learning systems models hypothesize that in learning natural categories, learners initially use the reflective system and, with practice, transfer control to the reflexive system. The role of reflective and reflexive systems in auditory category learning and more specifically in speech category learning has not been systematically examined. In this article, we describe a neurobiologically constrained dual-learning systems theoretical framework that is currently being developed in speech category learning and review recent applications of this framework. Using behavioral and computational modeling approaches, we provide evidence that speech category learning is predominantly mediated by the reflexive learning system. In one application, we explore the effects of normal aging on non-speech and speech category learning. Prominently, we find a large age-related deficit in speech learning. The computational modeling suggests that older adults are less likely to transition from simple, reflective, unidimensional rules to more complex, reflexive, multi-dimensional rules. In a second application, we summarize a recent study examining auditory category learning in individuals with elevated depressive symptoms. We find a deficit in reflective-optimal and an enhancement in reflexive-optimal auditory category learning. Interestingly, individuals with elevated depressive symptoms also show an advantage in learning speech categories. We end with a brief summary and description of a number of future directions.
Affiliation(s)
- Bharath Chandrasekaran
- SoundBrain Lab, Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, USA
- Institute for Mental Health Research, The University of Texas at Austin, Austin, TX, USA
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Department of Psychology, The University of Texas at Austin, Austin, TX, USA
- Seth R. Koslov
- Department of Psychology, The University of Texas at Austin, Austin, TX, USA
- W. T. Maddox
- Institute for Mental Health Research, The University of Texas at Austin, Austin, TX, USA
- Institute for Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Department of Psychology, The University of Texas at Austin, Austin, TX, USA
38
Bigelow J, Poremba A. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality. PLoS One 2014; 9:e89914. [PMID: 24587119] [PMCID: PMC3935966] [DOI: 10.1371/journal.pone.0089914]
Abstract
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Affiliation(s)
- James Bigelow
- Department of Psychology, University of Iowa, Iowa City, Iowa, United States of America
- Amy Poremba
- Department of Psychology, University of Iowa, Iowa City, Iowa, United States of America
39
Gold JM, Aizenman A, Bond SM, Sekuler R. Memory and incidental learning for visual frozen noise sequences. Vision Res 2013; 99:19-36. [PMID: 24075900] [DOI: 10.1016/j.visres.2013.09.005]
Abstract
Five experiments explored short-term memory and incidental learning for random visual spatio-temporal sequences. In each experiment, human observers saw samples of 8 Hz temporally-modulated 1D or 2D contrast noise sequences whose members were either uncorrelated across an entire 1-s long stimulus sequence, or comprised two frozen noise sequences that repeated identically between a stimulus' first and second 500 ms halves ("Repeated" noise). Presented with randomly intermixed stimuli of both types, observers judged whether each sequence repeated or not. Additionally, a particular exemplar of Repeated noise (a frozen or "Fixed Repeated" noise) was interspersed multiple times within a block of trials. As previously shown with auditory frozen noise stimuli (Agus, Thorpe, & Pressnitzer, 2010) recognition performance (d') increased with successive presentations of a Fixed Repeated stimulus, and exceeded performance with regular Repeated noise. However, unlike the case with auditory stimuli, learning of random visual stimuli was slow and gradual, rather than fast and abrupt. Reverse correlation revealed that contrasts occupying particular temporal positions within a sequence had disproportionately heavy weight in observers' judgments. A subsequent experiment suggested that this result arose from observers' uncertainty about the temporal mid-point of the noise sequences. Additionally, discrimination performance fell dramatically when a sequence of contrast values was repeated, but in reverse ("mirror image") order. This poor performance with temporal mirror images is strikingly different from vision's exquisite sensitivity to spatial mirror images.
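The recognition performance measure (d') reported above is the standard signal-detection sensitivity index, computed from hit and false-alarm rates. A minimal sketch follows; the example rates are made up for illustration and are not the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative rates: 85% hits, 20% false alarms
print(round(d_prime(0.85, 0.20), 2))  # ≈ 1.88
```

In practice, hit or false-alarm rates of exactly 0 or 1 are first nudged away from the boundary (e.g., by a 1/(2N) correction), since the inverse normal CDF is undefined there.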
Affiliation(s)
- Jason M Gold
- Department of Psychological and Brain Sciences, Indiana University, United States.
- Avi Aizenman
- Volen Center for Complex Systems, Brandeis University, United States
- Stephanie M Bond
- Volen Center for Complex Systems, Brandeis University, United States
- Robert Sekuler
- Volen Center for Complex Systems, Brandeis University, United States
40
41
van Vugt MK, Sekuler R, Wilson HR, Kahana MJ. An electrophysiological signature of summed similarity in visual working memory. J Exp Psychol Gen 2012; 142:412-25. [PMID: 22963189] [DOI: 10.1037/a0029759]
Abstract
Summed-similarity models of short-term item recognition posit that participants base their judgments of an item's prior occurrence on that item's summed similarity to the ensemble of items on the remembered list. We examined the neural predictions of these models in 3 short-term recognition memory experiments using electrocorticographic/depth electrode recordings and scalp electroencephalography. On each experimental trial, participants judged whether a test face had been among a small set of recently studied faces. Consistent with summed-similarity theory, participants' tendency to endorse a test item increased as a function of its summed similarity to the items on the just-studied list. To characterize this behavioral effect of summed similarity, we successfully fit a summed-similarity model to individual participant data from each experiment. Using the parameters determined from fitting the summed-similarity model to the behavioral data, we examined the relation between summed similarity and brain activity. We found that 4-9 Hz theta activity in the medial temporal lobe and 2-4 Hz delta activity recorded from frontal and parietal cortices increased with summed similarity. These findings demonstrate direct neural correlates of the similarity computations that form the foundation of several major cognitive theories of human recognition memory.
Affiliation(s)
- Marieke K van Vugt
- Department of Artificial Intelligence, University of Groningen, Groningen, the Netherlands.
42
Abstract
A stimulus trace may be temporarily retained either actively [i.e., in working memory (WM)] or by the weaker mnemonic process we will call passive short-term memory, in which a given stimulus trace is highly susceptible to "overwriting" by a subsequent stimulus. It has been suggested that WM is the more robust process because it exploits long-term memory (i.e., a current stimulus activates a stored representation of that stimulus, which can then be actively maintained). Recent studies have suggested that monkeys may be unable to store acoustic signals in long-term memory, raising the possibility that they may therefore also lack auditory WM. To explore this possibility, we tested rhesus monkeys on a serial delayed match-to-sample (DMS) task using a small set of sounds presented with ~1-s interstimulus delays. Performance was accurate whenever a match or a nonmatch stimulus followed the sample directly, but it fell precipitously if a single nonmatch stimulus intervened between sample and match. The steep drop in accuracy was found to be due not to passive decay of the sample's trace, but to retroactive interference from the intervening nonmatch stimulus. This "overwriting" effect was far greater than that observed previously in serial DMS with visual stimuli. The results, which accord with the notion that WM relies on long-term memory, indicate that monkeys perform serial DMS in audition remarkably poorly and that whatever success they had on this task depended largely, if not entirely, on the retention of stimulus traces in the passive form of short-term memory.
43
Abstract
An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.
44
Bledowski C, Kaiser J, Wibral M, Yildiz-Erzberger K, Rahm B. Separable neural bases for subprocesses of recognition in working memory. Cereb Cortex 2011; 22:1950-8. [DOI: 10.1093/cercor/bhr276]
45
Beta and gamma frequency-range abnormalities in parkinsonian patients under cognitive sensorimotor task. J Neurol Sci 2010; 293:51-8. [DOI: 10.1016/j.jns.2010.03.008] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2009] [Revised: 03/08/2010] [Accepted: 03/11/2010] [Indexed: 11/18/2022]
46
Fundamental differences in change detection between vision and audition. Exp Brain Res 2010; 203:261-70. [DOI: 10.1007/s00221-010-2226-2] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2009] [Accepted: 03/09/2010] [Indexed: 10/19/2022]
47
Homogeneity computation: how interitem similarity in visual short-term memory alters recognition. Psychon Bull Rev 2010; 17:59-65. [PMID: 20081162 DOI: 10.3758/pbr.17.1.59] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Visual short-term recognition memory for multiple stimuli is strongly influenced by the study items' similarity to one another, that is, by their homogeneity. However, the mechanism responsible for this homogeneity effect has remained unclear. We evaluated competing explanations of this effect, using controlled sets of Gabor patches as study items and probe stimuli. Our results, based on recognition memory for spatial frequency, rule out the possibility that the homogeneity effect arises because similar study items are encoded and/or maintained with higher fidelity in memory than dissimilar study items are. Instead, our results support the hypothesis that the homogeneity effect reflects trial-by-trial comparisons of study items, which generate a homogeneity signal. This homogeneity signal modulates recognition performance through an adjustment of the subject's decision criterion. Additionally, it seems the homogeneity signal is computed prior to the presentation of the probe stimulus, by evaluating the familiarity of each new stimulus with respect to the items already in memory. This suggests that recognition-like processes operate not only on the probe stimulus, but on study items as well.
48
Abstract
Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making.
49
Galster M, Kahana MJ, Wilson HR, Sekuler R. Identity modulates short-term memory for facial emotion. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2009; 9:412-26. [PMID: 19897794 PMCID: PMC2836162 DOI: 10.3758/cabn.9.4.412] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
For some time, the relationship between processing of facial expression and facial identity has been in dispute. Using realistic synthetic faces, we reexamined this relationship for both perception and short-term memory. In Experiment 1, subjects tried to identify whether the emotional expression on a probe stimulus face matched the emotional expression on either of two remembered faces that they had just seen. The results showed that identity strongly influenced recognition short-term memory for emotional expression. In Experiment 2, subjects' similarity/dissimilarity judgments were transformed by multidimensional scaling (MDS) into a 2-D description of the faces' perceptual representations. Distances among stimuli in the MDS representation, which showed a strong linkage of emotional expression and facial identity, were good predictors of correct and false recognitions obtained previously in Experiment 1. The convergence of the results from Experiments 1 and 2 suggests that the overall structure and configuration of faces' perceptual representations may parallel their representation in short-term memory and that facial identity modulates the representation of facial emotion, both in perception and in memory. The stimuli from this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.
50
Spectro-temporal modulation transfer function of single voxels in the human auditory cortex measured with high-resolution fMRI. Proc Natl Acad Sci U S A 2009; 106:14611-6. [PMID: 19667199 DOI: 10.1073/pnas.0907682106] [Citation(s) in RCA: 135] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Are visual and auditory stimuli processed by similar mechanisms in the human cerebral cortex? Images can be thought of as light energy modulations over two spatial dimensions, and low-level visual areas analyze images by decomposition into spatial frequencies. Similarly, sounds are energy modulations over time and frequency, and they can be identified and discriminated by the content of such modulations. An obvious question is therefore whether human auditory areas, in direct analogy to visual areas, represent the spectro-temporal modulation content of acoustic stimuli. To answer this question, we measured spectro-temporal modulation transfer functions of single voxels in the human auditory cortex with functional magnetic resonance imaging. We presented dynamic ripples, complex broadband stimuli with a drifting sinusoidal spectral envelope. Dynamic ripples are the auditory equivalent of the gratings often used in studies of the visual system. We demonstrate selective tuning to combined spectro-temporal modulations in the primary and secondary auditory cortex. We describe several types of modulation transfer functions, extracting different spectro-temporal features, with a high degree of interaction between spectral and temporal parameters. The overall low-pass modulation rate preference of the cortex matches the modulation content of natural sounds. These results demonstrate that combined spectro-temporal modulations are represented in the human auditory cortex, and suggest that complex signals are decomposed and processed according to their modulation content, the same transformation used by the visual system.
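The dynamic ripples described in this abstract can be synthesized straightforwardly: a bank of log-spaced tone carriers with random phases, each amplitude-modulated by a sinusoidal spectral envelope that drifts over time. The sketch below is a minimal illustration of that construction, not the study's actual stimulus code; the parameter names and default values (ripple rate in Hz, density in cycles/octave, modulation depth) are assumptions chosen to match the standard dynamic-ripple formulation.

```python
import numpy as np

def dynamic_ripple(dur=1.0, fs=44100, f0=250.0, n_oct=5, n_carriers=100,
                   rate=4.0, density=1.0, depth=0.9, rng=None):
    """Synthesize a dynamic ripple: broadband carriers whose amplitudes
    follow a sinusoidal spectral envelope drifting at `rate` Hz with
    `density` cycles per octave. Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    t = np.arange(int(dur * fs)) / fs
    # Carriers log-spaced over n_oct octaves above f0, with random phases
    # so the sum is a noise-like broadband signal.
    x = np.linspace(0.0, n_oct, n_carriers)      # carrier position in octaves
    freqs = f0 * 2.0 ** x
    phases = rng.uniform(0.0, 2.0 * np.pi, n_carriers)
    sig = np.zeros_like(t)
    for xi, fi, ph in zip(x, freqs, phases):
        # Sinusoidal envelope drifting across the log-frequency axis.
        env = 1.0 + depth * np.sin(2.0 * np.pi * (rate * t + density * xi))
        sig += env * np.sin(2.0 * np.pi * fi * t + ph)
    return sig / np.max(np.abs(sig))             # normalize to +/-1
```

Varying `rate` and `density` sweeps out the stimulus set used to map spectro-temporal modulation transfer functions, in direct analogy to varying the spatial and temporal frequency of a drifting visual grating.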