1. Levy R. The prefrontal cortex: from monkey to man. Brain 2024; 147:794-815. PMID: 37972282; PMCID: PMC10907097; DOI: 10.1093/brain/awad389. Open access.
Abstract
The prefrontal cortex is so important to human beings that, if deprived of it, our behaviour is reduced to action-reactions and automatisms, with no ability to make deliberate decisions. Why does the prefrontal cortex hold such importance in humans? In answer, this review draws on the proximity between humans and other primates, which enables us, through comparative anatomical-functional analysis, to understand the cognitive functions we have in common and specify those that distinguish humans from their closest cousins. First, a focus on the lateral region of the prefrontal cortex illustrates the existence of a continuum between rhesus monkeys (the most studied primates in neuroscience) and humans for most of the major cognitive functions in which this region of the brain plays a central role. This continuum involves the presence of elementary mental operations in the rhesus monkey (e.g. working memory or response inhibition) that are constitutive of 'macro-functions' such as planning, problem-solving and even language production. Second, the human prefrontal cortex has developed dramatically compared to that of other primates. This increase seems to concern the most anterior part (the frontopolar cortex). In humans, the development of the most anterior prefrontal cortex is associated with three major and interrelated cognitive changes: (i) a greater working memory capacity, allowing for greater integration of past experiences and prospective futures; (ii) a greater capacity to link discontinuous or distant data, whether temporal or semantic; and (iii) a greater capacity for abstraction, allowing humans to classify knowledge in different ways, to engage in analogical reasoning or to acquire abstract values that give rise to our beliefs and morals. 
Together, these new skills enable us, among other things, to develop highly sophisticated social interactions based on language, enabling us to conceive beliefs and moral judgements and to conceptualize, create and extend our vision of our environment beyond what we can physically grasp. Finally, a model of the transition of prefrontal functions between humans and non-human primates concludes this review.
Affiliation(s)
- Richard Levy
- AP–HP, Groupe Hospitalier Pitié-Salpêtrière, Department of Neurology, Sorbonne Université, Institute of Memory and Alzheimer’s Disease, 75013 Paris, France
- Sorbonne Université, INSERM U1127, CNRS 7225, Paris Brain Institute- ICM, 75013 Paris, France
2. Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. PMID: 38176416; DOI: 10.1016/j.cub.2023.12.019.
Abstract
The appreciation of music is a universal trait of humankind [1-3]. Evidence supporting this notion includes the ubiquity of music across cultures [4-7] and the natural predisposition toward music that humans display early in development [8-10]. Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation [11]. Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features, pitch and timing [12], in generating expectations: while timing- and pitch-based expectations [13] are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Affiliation(s)
- Roberta Bianco
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy.
- Nathaniel J Zuk
- Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
3. Turpin T, Uluç I, Kotlarz P, Lankinen K, Mamashli F, Ahveninen J. Comparing auditory and visual aspects of multisensory working memory using bimodally matched feature patterns. bioRxiv [Preprint] 2023:2023.08.03.551865. PMID: 37577481; PMCID: PMC10418174; DOI: 10.1101/2023.08.03.551865.
Abstract
Working memory (WM) reflects the transient maintenance of information in the absence of external input, which can be attained via multiple senses separately or simultaneously. Pertaining to WM, the prevailing literature suggests the dominance of vision over other sensory systems. However, this imbalance may stem from challenges in finding comparable stimuli across modalities. Here, we addressed this problem by using a balanced multisensory retro-cue WM design, which employed combinations of auditory (ripple sounds) and visuospatial (Gabor patches) patterns, adjusted relative to each participant's discrimination ability. In three separate experiments, the participant was asked to determine whether the (retro-cued) auditory and/or visual items maintained in WM matched or mismatched the subsequent probe stimulus. In Experiment 1, all stimuli were audiovisual, and the probes were either fully mismatching, only partially mismatching, or fully matching the memorized item. Experiment 2 was otherwise the same as Experiment 1, but the probes were unimodal. In Experiment 3, the participant was cued to maintain only the auditory or visual aspect of an audiovisual item pair. In two of the three experiments, matching performance was significantly more accurate for the auditory than visual attributes of probes. When the perceptual and task demands are bimodally equated, auditory attributes can thus be matched to multisensory items in WM at least as accurately as, if not more precisely than, their visual counterparts.
Affiliation(s)
- Tori Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Işıl Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Parker Kotlarz
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
4. Pulvermüller F. Neurobiological mechanisms for language, symbols and concepts: Clues from brain-constrained deep neural networks. Prog Neurobiol 2023; 230:102511. PMID: 37482195; PMCID: PMC10518464; DOI: 10.1016/j.pneurobio.2023.102511.
Abstract
Neural networks are successfully used to imitate and model cognitive processes. However, to provide clues about the neurobiological mechanisms enabling human cognition, these models need to mimic the structure and function of real brains. Brain-constrained networks differ from classic neural networks by implementing brain similarities at different scales, ranging from the micro- and mesoscopic levels of neuronal function, local neuronal links and circuit interaction to large-scale anatomical structure and between-area connectivity. This review shows how brain-constrained neural networks can be applied to study in silico the formation of mechanisms for symbol and concept processing and to work towards neurobiological explanations of specifically human cognitive abilities. These include verbal working memory and learning of large vocabularies of symbols, semantic binding carried by specific areas of cortex, attention focusing and modulation driven by symbol type, and the acquisition of concrete and abstract concepts partly influenced by symbols. Neuronal assembly activity in the networks is analyzed to deliver putative mechanistic correlates of higher cognitive processes and to develop candidate explanations founded in established neurobiological principles.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany; Cluster of Excellence 'Matters of Activity', Humboldt Universität zu Berlin, 10099 Berlin, Germany.
5. Romanski LM, Sharma KK. Multisensory interactions of face and vocal information during perception and memory in ventrolateral prefrontal cortex. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220343. PMID: 37545305; PMCID: PMC10404928; DOI: 10.1098/rstb.2022.0343. Open access.
Abstract
The ventral frontal lobe is a critical node in the circuit that underlies communication, a multisensory process where sensory features of faces and vocalizations come together. The neural basis of face and vocal integration is a topic of great importance since the integration of multiple sensory signals is essential for the decisions that govern our social interactions. Investigations have shown that the macaque ventrolateral prefrontal cortex (VLPFC), a proposed homologue of the human inferior frontal gyrus, is involved in the processing, integration and remembering of audiovisual signals. Single neurons in VLPFC encode and integrate species-specific faces and corresponding vocalizations. During working memory, VLPFC neurons maintain face and vocal information online and exhibit selective activity for face and vocal stimuli. Population analyses indicate that identity, a critical feature of social stimuli, is encoded by VLPFC neurons and dictates the structure of dynamic population activity in the VLPFC during the perception of vocalizations and their corresponding facial expressions. These studies suggest that VLPFC may play a primary role in integrating face and vocal stimuli with contextual information, in order to support decision making during social communication. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Lizabeth M. Romanski
- Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY 14642, USA
- Keshov K. Sharma
- Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY 14642, USA
6. Wagener L, Rinnert P, Veit L, Nieder A. Crows protect visual working memory against interference. J Exp Biol 2023; 226:287069. PMID: 36806418; PMCID: PMC10038144; DOI: 10.1242/jeb.245453.
Abstract
Working memory, the ability to actively maintain and manipulate information across time, is key to intelligent behavior. Because of the limited capacity of working memory, relevant information needs to be protected against distracting representations. Whether birds can resist distractors and safeguard memorized relevant information is unclear. We trained carrion crows in a delayed match-to-sample task to memorize an image while resisting other, interfering stimuli. We found that the repetition of the sample stimulus during the memory delay improved performance accuracy and accelerated reaction time relative to a reference condition with a neutral interfering stimulus. In contrast, the presentation of the image that constituted the subsequent non-match test stimulus mildly weakened performance. However, the crows' robust performance in this most demanding distractor condition indicates that sample information was actively protected from being overwritten by the distractor. These data show that crows can cognitively control and safeguard behaviorally relevant working memory contents.
Affiliation(s)
- Lysann Wagener
- Animal Physiology, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Paul Rinnert
- Animal Physiology, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Lena Veit
- Neurobiology of Vocal Communication, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Andreas Nieder
- Animal Physiology, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
7. Melchor J, Vergara J, Figueroa T, Morán I, Lemus L. Formant-Based Recognition of Words and Other Naturalistic Sounds in Rhesus Monkeys. Front Neurosci 2021; 15:728686. PMID: 34776842; PMCID: PMC8586527; DOI: 10.3389/fnins.2021.728686. Open access.
Abstract
In social animals, identifying sounds is critical for communication. In humans, the acoustic parameters involved in speech recognition, such as the formant frequencies derived from the resonance of the supralaryngeal vocal tract, have been well documented. However, how formants contribute to recognizing learned sounds in non-human primates remains unclear. To determine this, we trained two rhesus monkeys to discriminate target and non-target sounds presented in sequences of 1–3 sounds. After training, we performed three experiments: (1) We tested the monkeys’ accuracy and reaction times during the discrimination of various acoustic categories; (2) their ability to discriminate morphing sounds; and (3) their ability to identify sounds consisting of formant 1 (F1), formant 2 (F2), or F1 and F2 (F1F2) pass filters. Our results indicate that macaques can learn diverse sounds and discriminate morphs and the formants F1 and F2, suggesting that information from a few acoustic parameters suffices for recognizing complex sounds. We anticipate that future neurophysiological experiments in this paradigm may help elucidate how formants contribute to the recognition of sounds.
Affiliation(s)
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- José Vergara
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
8. Morán I, Perez-Orive J, Melchor J, Figueroa T, Lemus L. Auditory decisions in the supplementary motor area. Prog Neurobiol 2021; 202:102053. PMID: 33957182; DOI: 10.1016/j.pneurobio.2021.102053.
Abstract
In human speech, as in communication across various species, recognizing and categorizing sounds is fundamental for selecting appropriate behaviors. However, how does the brain decide which action to perform based on sounds? We explored whether the supplementary motor area (SMA), responsible for linking sensory information to motor programs, also accounts for auditory-driven decision making. To this end, we trained two rhesus monkeys to discriminate between numerous naturalistic sounds and words learned as target (T) or non-target (nT) categories. We found that, at both the single-neuron and population levels, the SMA performs decision-related computations that transition from auditory to movement representations in this task. Moreover, we demonstrated that the neural population is organized orthogonally during the auditory and movement periods, implying that the SMA performs different computations in each. In conclusion, our results suggest that the SMA integrates acoustic information in order to form categorical signals that drive behavior.
Affiliation(s)
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Javier Perez-Orive
- Instituto Nacional de Rehabilitacion "Luis Guillermo Ibarra Ibarra", Mexico City, Mexico
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México (UNAM), 04510, Mexico City, Mexico
9. Yu L, Hu J, Shi C, Zhou L, Tian M, Zhang J, Xu J. The causal role of auditory cortex in auditory working memory. eLife 2021; 10:e64457. PMID: 33913809; PMCID: PMC8169109; DOI: 10.7554/elife.64457. Open access.
Abstract
Working memory (WM), the ability to actively hold information in memory over a delay period of seconds, is a fundamental constituent of cognition. Delay-period activity in sensory cortices has been observed in WM tasks, but whether and when the activity plays a functional role for memory maintenance remains unclear. Here, we investigated the causal role of auditory cortex (AC) for memory maintenance in mice performing an auditory WM task. Electrophysiological recordings revealed that AC neurons were active not only during the presentation of the auditory stimulus but also early in the delay period. Furthermore, optogenetic suppression of neural activity in AC during the stimulus epoch and early delay period impaired WM performance, whereas suppression later in the delay period did not. Thus, the AC is essential for information encoding and maintenance in auditory WM, especially during the early delay period.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Jiawei Hu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Chenlin Shi
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Li Zhou
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Maozhi Tian
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Jiping Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, and School of Life Sciences, East China Normal University, Shanghai, China
10. Asano R. The evolution of hierarchical structure building capacity for language and music: a bottom-up perspective. Primates 2021; 63:417-428. PMID: 33839984; PMCID: PMC9463250; DOI: 10.1007/s10329-021-00905-x.
Abstract
A central property of human language is its hierarchical structure. Humans can flexibly combine elements to build a hierarchical structure expressing rich semantics. A hierarchical structure is also considered to play a key role in many other human cognitive domains. In music, auditory-motor events are combined into hierarchical pitch and/or rhythm structure expressing affect. How did such a hierarchical structure-building capacity evolve? This paper investigates this question from a bottom-up perspective based on a set of action-related components as a shared basis underlying cognitive capacities of nonhuman primates and humans. In particular, I argue that the evolution of hierarchical structure-building capacity for language and music is tractable for comparative evolutionary study once we focus on the gradual elaboration of shared brain architecture: the cortico-basal ganglia-thalamocortical circuits for hierarchical control of goal-directed action and the dorsal pathways for hierarchical internal models. I suggest that this gradual elaboration of the action-related brain architecture in the context of vocal control and tool-making went hand in hand with amplification of working memory, and made the brain ready for hierarchical structure building in language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Cologne, Germany.
11. Ren J, Xu T, Wang D, Li M, Lin Y, Schoeppe F, Ramirez JSB, Han Y, Luan G, Li L, Liu H, Ahveninen J. Individual Variability in Functional Organization of the Human and Monkey Auditory Cortex. Cereb Cortex 2020; 31:2450-2465. PMID: 33350445; DOI: 10.1093/cercor/bhaa366. Open access.
Abstract
Accumulating evidence shows that the auditory cortex (AC) of humans and other primates is involved in more complex cognitive processes than feature segregation alone; these processes are shaped by experience-dependent plasticity and thus likely show substantial individual variability. However, thus far, individual variability of ACs has been considered a methodological impediment rather than a phenomenon of theoretical importance. Here, we examined the variability of ACs using intrinsic functional connectivity patterns in humans and macaques. Our results demonstrate that in humans, interindividual variability is greater near the nonprimary than primary ACs, indicating that variability dramatically increases across the processing hierarchy. ACs are also more variable than comparable visual areas and show higher variability in the left than in the right hemisphere, which may be related to the left lateralization of auditory-related functions such as language. Intriguingly, remarkably similar modality differences and lateralization of variability were also observed in macaques. These connectivity-based findings are consistent with a confirmatory task-based functional magnetic resonance imaging analysis. The quantification of variability in auditory function, and the similar findings in both humans and macaques, will have strong implications for understanding the evolution of advanced auditory functions in humans.
Affiliation(s)
- Jianxun Ren
- National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Ting Xu
- Center for the Developing Brain, Child Mind Institute, New York, NY 10022, USA
- Danhong Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Meiling Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Yuanxiang Lin
- Department of Neurosurgery, First Affiliated Hospital, Fujian Medical University, 350108 Fuzhou, China
- Franziska Schoeppe
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Julian S B Ramirez
- Department of Behavioral Neuroscience, Oregon Health and Science University, Portland, OR 97239, USA
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, 100053 Beijing, China
- Guoming Luan
- Department of Neurosurgery, Comprehensive Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, 100093 Beijing, China
- Luming Li
- National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Precision Medicine & Healthcare Research Center, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055 Shenzhen, China; IDG/McGovern Institute for Brain Research, Tsinghua University, 100084 Beijing, China
- Hesheng Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA; Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
12. Wagener L, Nieder A. Categorical Auditory Working Memory in Crows. iScience 2020; 23:101737. PMID: 33225245; PMCID: PMC7662871; DOI: 10.1016/j.isci.2020.101737. Open access.
Abstract
The ability to group sensory data into behaviorally meaningful classes and to maintain these perceptual categories active in working memory is key to intelligent behavior. Here, we show that carrion crows, highly vocal and cognitively advanced corvid songbirds, possess categorical auditory working memory. The crows were trained in a delayed match-to-category task that required them to flexibly match remembered sounds based on the upward or downward shift of the sounds' frequency modulation. After training, the crows instantaneously classified novel sounds into the correct auditory categories. The crows showed sharp category boundaries as a function of the relative frequency interval of the modulation. In addition, the crows generalized frequency-modulated sounds within a category and correctly classified novel sounds kept in working memory irrespective of other acoustic features of the sound. This suggests that crows can form and actively memorize auditory perceptual categories in the service of cognitive control of their goal-directed behaviors.
Highlights:
- Crows performed a delayed match-to-category task with frequency-modulated sounds
- Crows classified novel sounds into upward or downward modulated sound categories
- Crows showed sharp category boundaries and within-category generalization
- Crows can actively memorize auditory perceptual categories for cognitive control
Affiliation(s)
- Lysann Wagener
- Animal Physiology Unit, Institute of Neurobiology, University of Tübingen, Auf der Morgenstelle 28, 72076 Tübingen, Germany
- Andreas Nieder
- Animal Physiology Unit, Institute of Neurobiology, University of Tübingen, Auf der Morgenstelle 28, 72076 Tübingen, Germany
13. Wang J, Yang Y, Zhao X, Zuo Z, Tan LH. Evolutional and developmental anatomical architecture of the left inferior frontal gyrus. Neuroimage 2020; 222:117268. PMID: 32818615; DOI: 10.1016/j.neuroimage.2020.117268.
Abstract
The left inferior frontal gyrus (IFG), including Broca's area, is involved in the processing of many language subdomains; research on the evolutional and human developmental characteristics of the left IFG will therefore shed light on how language emerges and matures. In this study, we used diffusion magnetic resonance imaging (dMRI) and resting-state functional MRI (fMRI) to investigate the evolutional and developmental patterns of the left IFG in humans (age 6-8, age 11-13, and age 16-18 years) and macaques. Tractography-based parcellation was used to define the subcomponents of the left IFG and consistently identified four subregions in both humans and macaques. This parcellation scheme for the left IFG in humans was supported by specific coactivation patterns and functional characterization for each subregion. During evolution and development, we found increased functional balance, amplitude of low-frequency fluctuations, functional integration, and functional couplings. We also observed higher fractional anisotropy values, i.e. better myelination of dorsal and ventral white matter language pathways, during evolution and development. We assume that the resting-state functional connectivity and task-related coactivation mapping are associated with hierarchical language processing. Our findings show the evolutional and developmental patterns of the left IFG and will contribute to the understanding of how human language evolved and how atypical language developmental disorders may occur.
Affiliation(s)
- Jiaojiang Wang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 625014, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China.
- Yang Yang
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Xudong Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Zhentao Zuo
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China.
- Li-Hai Tan
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou 510632, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China.
14. Wakita M. Common marmosets (Callithrix jacchus) cannot recognize global configurations of sound patterns but can recognize adjacent relations of sounds. Behav Processes 2020; 176:104136. PMID: 32404248; DOI: 10.1016/j.beproc.2020.104136.
Abstract
Processing the temporal configuration of discrete sounds to extract a regular pattern is fundamental to humans' faculties of perceiving words and musical phrases. To investigate such auditory pattern perception in monkeys, I trained two common marmosets to discriminate between AB-AB and AA-BB patterns under two training paradigms. One was an absolute discrimination task, which required discrimination between these stimuli without reference cues. The other was a relative discrimination task, which required detection of a change from one stimulus to the other. The marmosets failed the absolute discrimination task but succeeded in the relative discrimination task. Failure in the absolute task indicated that the marmosets were unable to form a representation of the global sound patterns in their long-term memory stores. In contrast, success in the relative task indicated that the marmosets had short-term memory of ongoing sounds that enabled online monitoring to detect deviations between incoming sounds and the anticipated upcoming sounds. Thus, the current findings imply that marmosets can at least perceive adjacent tone relations in an auditory stream, regardless of the temporal configuration of the global sound patterns.
Affiliation(s)
- Masumi Wakita
- Cognitive Neuroscience Section, Primate Research Institute, Kyoto University, Kanrin 41-2, Inuyama, Aichi 484-8506, Japan.
15. Burton JA, Valero MD, Hackett TA, Ramachandran R. The use of nonhuman primates in studies of noise injury and treatment. J Acoust Soc Am 2019; 146:3770. PMID: 31795680; PMCID: PMC6881191; DOI: 10.1121/1.5132709.
Abstract
Exposure to prolonged or high intensity noise increases the risk for permanent hearing impairment. Over several decades, researchers characterized the nature of harmful noise exposures and worked to establish guidelines for effective protection. Recent laboratory studies, primarily conducted in rodent models, indicate that the auditory system may be more vulnerable to noise-induced hearing loss (NIHL) than previously thought, driving renewed inquiries into the harmful effects of noise in humans. To bridge the translational gaps between rodents and humans, nonhuman primates (NHPs) may serve as key animal models. The phylogenetic proximity of NHPs to humans underlies tremendous similarity in many features of the auditory system (genomic, anatomical, physiological, behavioral), all of which are important considerations in the assessment and treatment of NIHL. This review summarizes the literature pertaining to NHPs as models of hearing and noise-induced hearing loss, discusses factors relevant to the translation of diagnostics and therapeutics from animals to humans, and concludes with some of the practical considerations involved in conducting NHP research.
Affiliation(s)
- Jane A Burton
- Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee 37212, USA
- Michelle D Valero
- Eaton Peabody Laboratories at Massachusetts Eye and Ear, Boston, Massachusetts 02114, USA
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
16. Margiotoudi K, Allritz M, Bohn M, Pulvermüller F. Sound symbolic congruency detection in humans but not in great apes. Sci Rep 2019; 9:12705. PMID: 31481655; PMCID: PMC6722092; DOI: 10.1038/s41598-019-49101-4.
Abstract
Theories on the evolution of language highlight iconicity as one of the unique features of human language. One important manifestation of iconicity is sound symbolism, the intrinsic relationship between meaningless speech sounds and visual shapes, as exemplified by the famous correspondences between the pseudowords 'maluma' vs. 'takete' and abstract curved and angular shapes. Although sound symbolism has been studied extensively in humans including young children and infants, it has never been investigated in non-human primates lacking language. In the present study, we administered the classic "takete-maluma" paradigm in both humans (N = 24 and N = 31) and great apes (N = 8). In a forced choice matching task, humans but not great apes, showed crossmodal sound symbolic congruency effects, whereby effects were more pronounced for shape selections following round-sounding primes than following edgy-sounding primes. These results suggest that the ability to detect sound symbolic correspondences is the outcome of a phylogenetic process, whose underlying emerging mechanism may be relevant to symbolic ability more generally.
Affiliation(s)
- Konstantina Margiotoudi
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195, Berlin, Germany.
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099, Berlin, Germany.
- Matthias Allritz
- School of Psychology & Neuroscience, University of St. Andrews, St. Andrews, Fife, UK
- Manuel Bohn
- Leipziger Forschungszentrum für frühkindliche Entwicklung, Universität Leipzig, Leipzig, Germany
- Department of Psychology, Stanford University, Stanford, USA
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, 14195, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099, Berlin, Germany
- Cluster of Excellence "Matters of Activity", Humboldt Universität zu Berlin, 10099, Berlin, Germany
- Einstein Center for Neurosciences Berlin, 10117, Berlin, Germany
17. Zarei SA, Sheibani V, Mansouri FA. Interaction of music and emotional stimuli in modulating working memory in macaque monkeys. Am J Primatol 2019; 81:e22999. DOI: 10.1002/ajp.22999.
Affiliation(s)
- Shahab A. Zarei
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Vahid Sheibani
- Cognitive Neuroscience Laboratory, Kerman Neuroscience Research Center, Institute of Neuropharmacology, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Farshad A. Mansouri
- Cognitive Neuroscience Laboratory, Cognitive Neuroscience Research Centre, Kerman University of Medical Sciences, Kerman, Iran
- Cognitive Neuroscience Laboratory, ARC Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
18. Teichert T, Gurnsey K. Formation and decay of auditory short-term memory in the macaque monkey. J Neurophysiol 2019; 121:2401-2415. PMID: 31017849; DOI: 10.1152/jn.00821.2018.
Abstract
Echoic memory (EM) is a short-lived, precategorical, and passive form of auditory short-term memory (STM). A key hallmark of EM is its rapid exponential decay with a time constant between 1 and 2 s. It is not clear whether auditory STM in the rhesus, an important model system, shares this rapid exponential decay. To resolve this shortcoming, two rhesus macaques were trained to perform a delayed frequency discrimination task. Discriminability of delayed tones was measured as a function of retention duration and the number of times the standard had been repeated before the target. Like in the human, our results show a rapid decline of discriminability with retention duration. In addition, the results suggest a gradual strengthening of discriminability with repetition number. Model-based analyses suggest the presence of two components of auditory STM: a short-lived component with a time constant on the order of 550 ms that most likely corresponds to EM and a more stable memory trace with time constants on the order of 10 s that strengthens with repetition and most likely corresponds to auditory recognition memory. NEW & NOTEWORTHY This is the first detailed quantification of the rapid temporal dynamics of auditory short-term memory in the rhesus. Much of the auditory information in short-term memory is lost within the first couple of seconds. Repeated presentations of a tone strengthen its encoding into short-term memory. Model-based analyses suggest two distinct components: an echoic memory homolog that mediates the rapid decay and a more stable but less detail-rich component that mediates strengthening of the trace with repetition.
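The two-component account described above can be sketched as a simple sum of exponentials. This is a hedged illustration, not the authors' fitted model: the time constants follow the values reported in the abstract (~550 ms for the echoic component, ~10 s for the recognition component), while the amplitudes `a` and `b` and the function name `dprime` are illustrative assumptions.

```python
import math

# Hypothetical two-component decay of discriminability (d') as a function
# of retention interval t (seconds). Time constants follow the abstract;
# amplitudes a and b are illustrative assumptions, not fitted values.
def dprime(t, a=1.5, tau_fast=0.55, b=0.8, tau_slow=10.0):
    """Discriminability after a retention interval of t seconds."""
    return a * math.exp(-t / tau_fast) + b * math.exp(-t / tau_slow)

# Most of the fast (echoic) component is gone within the first couple of
# seconds, leaving the slower trace to carry performance at long delays.
for t in (0.5, 2.0, 8.0):
    print(f"retention {t:4.1f} s -> d' = {dprime(t):.3f}")
```

Under these assumed parameters, the fast trace dominates at sub-second delays and contributes almost nothing beyond a few seconds, which mirrors the qualitative pattern the abstract reports.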
Affiliation(s)
- Tobias Teichert
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania
- Kate Gurnsey
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania
19. Wikman P, Rinne T, Petkov CI. Reward cues readily direct monkeys' auditory performance resulting in broad auditory cortex modulation and interaction with sites along cholinergic and dopaminergic pathways. Sci Rep 2019; 9:3055. PMID: 30816142; PMCID: PMC6395775; DOI: 10.1038/s41598-019-38833-y.
Abstract
In natural settings, the prospect of reward often influences the focus of our attention, but how cognitive and motivational systems influence sensory cortex is not well understood. Also, challenges in training nonhuman animals on cognitive tasks complicate cross-species comparisons and the interpretation of results on the neurobiological bases of cognition. Incentivized attention tasks could expedite training and evaluate the impact of attention on sensory cortex. Here we develop an Incentivized Attention Paradigm (IAP) and use it to show that macaque monkeys readily learn to use auditory or visual reward cues, drastically influencing their performance within a simple auditory task. Next, this paradigm was used with functional neuroimaging to measure activation modulation in the monkey auditory cortex. The results show modulation of extensive auditory cortical regions throughout primary and non-primary regions, which, although a hallmark of attentional modulation in human auditory cortex, had not been studied or observed as broadly in prior data from nonhuman animals. Psycho-physiological interactions were identified between the observed auditory cortex effects and regions including basal forebrain sites along acetylcholinergic and dopaminergic pathways. The findings reveal the impact of incentivized attention on the primate brain and the regional interactions it engages during an auditory task.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, 00014, Helsinki, Finland.
- Teemu Rinne
- Turku Brain and Mind Center, Department of Clinical Medicine, University of Turku, 20014, Turku, Finland.
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom.
- Centre for Behaviour and Evolution, Newcastle University, NE1 7RU, Newcastle upon Tyne, United Kingdom.
20. Auditory sequence perception in common marmosets (Callithrix jacchus). Behav Processes 2019; 162:55-63. PMID: 30716383; DOI: 10.1016/j.beproc.2019.01.014.
Abstract
One of the essential linguistic and musical faculties of humans is the ability to recognize the structure of sound configurations and to extract words and melodies from continuous sound sequences. However, monkeys' ability to process the temporal structure of sounds is controversial. Here, to investigate whether monkeys can analyze the temporal structure of auditory patterns, two common marmosets were trained to discriminate auditory patterns in three experiments. In Experiment 1, the marmosets were able to discriminate trains of either 0.5- or 2-kHz tones repeated at either 50- or 200-ms intervals. However, the marmosets were not able to discriminate ABAB from AABB patterns consisting of A (0.5-kHz/50-ms pulse) and B (2-kHz/200-ms pulse) elements in Experiment 2, or of A (0.5-kHz/50-ms pulse) and B (0.5-kHz/200-ms pulse) [or A (0.5-kHz/200-ms pulse) and B (2-kHz/200-ms pulse)] elements in Experiment 3. Consequently, the results indicated that the marmosets could not perceive tonal structures in terms of the temporal configuration of discrete sounds, although they could recognize the acoustic features of the stimuli. The present findings are supported by cognitive and brain studies that indicate a limited ability to process sound sequences. However, more studies are needed to confirm common marmosets' capacity for auditory sequence perception.
21. Mars RB, Eichert N, Jbabdi S, Verhagen L, Rushworth MF. Connectivity and the search for specializations in the language-capable brain. Curr Opin Behav Sci 2018; 21:19-26. PMID: 33898657; PMCID: PMC7610656; DOI: 10.1016/j.cobeha.2017.11.001.
Abstract
The search for the anatomical basis of language has traditionally been a search for specializations. More recently such research has focused both on aspects of brain organization that are unique to humans and aspects shared with other primates. This work has mostly concentrated on the architecture of connections between brain areas. However, as specializations can take many guises, comparison of anatomical organization across species is often complicated. We demonstrate how viewing different types of specializations within a common framework allows one to better appreciate both shared and unique aspects of brain organization. We illustrate this point by discussing recent insights into the anatomy of the dorsal language pathway to the frontal cortex and areas for laryngeal control in the motor cortex.
Affiliation(s)
- Rogier B Mars
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Nicole Eichert
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Saad Jbabdi
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Lennart Verhagen
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Matthew FS Rushworth
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
22. Effects of modality and repetition in a continuous recognition memory task: Repetition has no effect on auditory recognition memory. Acta Psychol (Amst) 2018; 185:72-80. PMID: 29407247; DOI: 10.1016/j.actpsy.2018.01.012.
Abstract
Previous research has shown that auditory recognition memory is poorer compared to visual and cross-modal (visual and auditory) recognition memory. The effect of repetition on memory has been robust in showing improved performance. It is not clear, however, how auditory recognition memory compares to visual and cross-modal recognition memory following repetition. Participants performed a recognition memory task, making old/new discriminations to new stimuli, stimuli repeated for the first time after 4-7 intervening items (R1), or repeated for the second time after 36-39 intervening items (R2). Depending on the condition, participants were either exposed to visual stimuli (2D line drawings), auditory stimuli (spoken words), or cross-modal stimuli (pairs of images and associated spoken words). Results showed that unlike participants in the visual and cross-modal conditions, participants in the auditory recognition did not show improvements in performance on R2 trials compared to R1 trials. These findings have implications for pedagogical techniques in education, as well as for interventions and exercises aimed at boosting memory performance.
23. Aboitiz F. A Brain for Speech. Evolutionary Continuity in Primate and Human Auditory-Vocal Processing. Front Neurosci 2018; 12:174. PMID: 29636657; PMCID: PMC5880940; DOI: 10.3389/fnins.2018.00174.
Abstract
In this review article, I propose a continuous evolution from the auditory-vocal apparatus and its mechanisms of neural control in non-human primates to the peripheral organs and the neural control of human speech. Although there is an overall conservatism both in peripheral systems and in central neural circuits, a few changes were critical for the expansion of vocal plasticity and the elaboration of proto-speech in early humans. Two of the most relevant changes were the acquisition of direct cortical control of the vocal fold musculature and the consolidation of an auditory-vocal articulatory circuit, encompassing auditory areas in the temporoparietal junction and prefrontal and motor areas in the frontal cortex. This articulatory loop, also referred to as the phonological loop, enhanced vocal working memory capacity, enabling early humans to learn increasingly complex utterances. The auditory-vocal circuit became progressively coupled to multimodal systems conveying information about objects and events, which gradually led to the acquisition of modern speech. Gestural communication has accompanied the development of vocal communication since very early in human evolution, and although both systems co-evolved tightly in the beginning, at some point speech became the main channel of communication.
Affiliation(s)
- Francisco Aboitiz
- Centro Interdisciplinario de Neurociencias, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
24.
25. Schulze K, Vargha-Khadem F, Mishkin M. Phonological working memory and FOXP2. Neuropsychologia 2017; 108:147-152. PMID: 29174050; DOI: 10.1016/j.neuropsychologia.2017.11.027.
Abstract
The discovery and description of the affected members of the KE family (aKE) initiated research on how genes enable the unique human trait of speech and language. Many aspects of this genetic influence on speech-related cognitive mechanisms are still elusive, e.g. if and how cognitive processes not directly involved in speech production are affected. In the current study we investigated the effect of the FOXP2 mutation on Working Memory (WM). Half the members of the multigenerational KE family have an inherited speech-language disorder, characterised as a verbal and orofacial dyspraxia caused by a mutation of the FOXP2 gene. The core phenotype of the affected KE members (aKE) is a deficiency in repeating words, especially complex non-words, and in coordinating oromotor sequences generally. Execution of oromotor sequences and repetition of phonological sequences both require WM, but to date the aKE's memory ability in this domain has not been examined in detail. To do so we used a test series based on the Baddeley and Hitch WM model, which posits that the central executive (CE), important for planning and manipulating information, works in conjunction with two modality-specific components: The phonological loop (PL), specialized for processing speech-based information; and the visuospatial sketchpad (VSSP), dedicated to processing visual and spatial information. We compared WM performance related to CE, PL, and VSSP function in five aKE and 15 healthy controls (including three unaffected members of the KE family who do not have the FOXP2 mutation). The aKE scored significantly below this control group on the PL component, but not on the VSSP or CE components. Further, the aKE were impaired relative to the controls not only in motor (i.e. articulatory) output but also on the recognition-based PL subtest (word-list matching), which does not require speech production. 
These results suggest that the aKE's impaired phonological WM may be due to a defect in subvocal rehearsal of speech-based material, and that this defect may be due in turn to compromised speech-based representations.
Affiliation(s)
- Katrin Schulze
- UCL Great Ormond Street Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK; Clinical Psychology and Psychotherapy Unit, Department of Psychology, Heidelberg University, Hauptstraße 47-51, 69117 Heidelberg, Germany.
- Faraneh Vargha-Khadem
- UCL Great Ormond Street Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK; Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK.
- Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA.
26. Schomers MR, Garagnani M, Pulvermüller F. Neurocomputational Consequences of Evolutionary Connectivity Changes in Perisylvian Language Cortex. J Neurosci 2017; 37:3045-3055. PMID: 28193685; PMCID: PMC5354338; DOI: 10.1523/jneurosci.2693-16.2017.
Abstract
The human brain sets itself apart from that of its primate relatives by specific neuroanatomical features, especially the strong linkage of left perisylvian language areas (frontal and temporal cortex) by way of the arcuate fasciculus (AF). AF connectivity has been shown to correlate with verbal working memory, a specifically human trait providing the foundation for language abilities, but a mechanistic explanation of any related causal link between anatomical structure and cognitive function is still missing. Here, we provide a possible explanation and link by using neurocomputational simulations in neuroanatomically structured models of the perisylvian language cortex. We compare networks mimicking key features of cortical connectivity in monkeys and humans, specifically the presence of relatively stronger higher-order "jumping links" between nonadjacent perisylvian cortical areas in the latter, and demonstrate that the emergence of working memory for syllables and word forms is a functional consequence of this structural evolutionary change. We also show that a mere increase of learning time is not sufficient, and that this specific structural feature, which entails a higher connectivity degree of relevant areas and a shorter sensorimotor path length, is crucial. These results offer a better understanding of specifically human anatomical features underlying the language faculty and their evolutionary selection advantage.
Significance statement: Why do humans have superior language abilities compared to primates? Recently, a uniquely human neuroanatomical feature has been demonstrated in the strength of the arcuate fasciculus (AF), a fiber pathway interlinking the left-hemispheric language areas. Although AF anatomy has been related to linguistic skills, an explanation of how this fiber bundle may support language abilities is still missing.
We use neuroanatomically structured computational models to investigate the consequences of evolutionary changes in language area connectivity and demonstrate that the human-specific higher connectivity degree and comparatively shorter sensorimotor path length implicated by the AF entail emergence of verbal working memory, a prerequisite for language learning. These results offer a better understanding of specifically human anatomical features for language and their evolutionary selection advantage.
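The path-length argument above can be illustrated with a toy graph search. This is a hedged sketch, not the authors' simulation: the chain of area labels (A1 through M1, loosely following perisylvian area naming) and the added long-range "jumping link" are illustrative assumptions.

```python
from collections import deque

def shortest_path_len(edges, src, dst):
    """Breadth-first shortest path length in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for w in adj.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return None  # dst unreachable from src

# A chain of areas from auditory to motor cortex (illustrative labels).
chain = [("A1", "AB"), ("AB", "PB"), ("PB", "PF"), ("PF", "PM"), ("PM", "M1")]

# Monkey-like connectivity: only nearest-neighbour links -> 5 steps.
print(shortest_path_len(chain, "A1", "M1"))  # prints 5
# Human-like connectivity: an AF-like "jumping link" between nonadjacent
# areas shortens the sensorimotor path.
print(shortest_path_len(chain + [("PB", "PM")], "A1", "M1"))  # prints 4
```

The single added edge both raises the connectivity degree of the linked areas and cuts the sensorimotor path length, the two structural properties the abstract identifies as crucial.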
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany,
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Max Garagnani
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany
- Centre for Robotics and Neural Systems, University of Plymouth, Plymouth PL4 8AA, United Kingdom, and
- Department of Computing, Goldsmiths, University of London, London SE14 6NW, United Kingdom
- Friedemann Pulvermüller
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
27. Juan C, Cappe C, Alric B, Roby B, Gilardeau S, Barone P, Girard P. The variability of multisensory processes of natural stimuli in human and non-human primates in a detection task. PLoS One 2017; 12:e0172480. PMID: 28212416; PMCID: PMC5315309; DOI: 10.1371/journal.pone.0172480.
Abstract
Background: Behavioral studies in both humans and animals generally converge on the view that multisensory integration improves reaction times (RTs) compared to unimodal stimulation. These multisensory effects depend on diverse conditions, among which the most studied are spatial and temporal congruence. Further, most studies use relatively simple stimuli, whereas in everyday life we are confronted with a large variety of complex stimulations that constantly change our attentional focus over time, a modality switch that can affect stimulus detection. In the present study, we examined the potential sources of variability in reaction times and multisensory gains with respect to the intrinsic features of a large set of natural stimuli.
Methodology/Principal findings: Rhesus macaque monkeys and human subjects performed a simple audio-visual stimulus detection task in which a large collection of unimodal and bimodal natural stimuli with semantic specificities was presented at different saliencies. Although we were able to reproduce the well-established redundant signal effect, we failed to reveal a systematic violation of the race model, which is considered to demonstrate multisensory integration. In both monkeys and humans, our study revealed a large range of multisensory gains, with negative and positive values. While modality switching has clear effects on reaction times, one of the main causes of the variability in multisensory gains appeared to be linked to the intrinsic physical parameters of the stimuli.
Conclusion/Significance: Based on the variability of multisensory benefits, our results suggest that the neuronal mechanisms responsible for the redundant signal effect (interactions vs. integration) are highly dependent on stimulus complexity, suggesting different contributions of uni- and multisensory brain regions. Further, in a simple detection task, the semantic value of individual stimuli tended to have no significant impact on task performance, an effect that is probably present in more cognitive tasks.
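The race-model test referred to in this abstract is a standard reaction-time analysis, commonly attributed to Miller (1982): integration is inferred when the bimodal RT distribution is faster, at some time point, than any "race" between independent unimodal channels could produce. A minimal sketch of the inequality check in Python; the function name, quantile grid, and example values are illustrative assumptions, not taken from the study:

```python
import numpy as np

def race_model_violation(rt_audio, rt_visual, rt_av,
                         quantiles=np.arange(0.05, 1.0, 0.05)):
    """Check Miller's race-model inequality:
    P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V).
    Returns True if the bimodal CDF exceeds the bound at any
    grid point, taken as evidence of multisensory integration."""
    # Shared time grid drawn from the pooled RT distributions
    ts = np.quantile(np.concatenate([rt_audio, rt_visual, rt_av]), quantiles)
    # Empirical CDF of a set of RTs evaluated at each grid time
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    # Race-model bound, capped at 1 since it is a probability
    bound = np.minimum(cdf(rt_audio, ts) + cdf(rt_visual, ts), 1.0)
    return bool(np.any(cdf(rt_av, ts) > bound))
```

Failing to reject this bound, as the study reports, means the bimodal speed-up is compatible with statistical facilitation alone rather than neural integration.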
Affiliation(s)
- Cécile Juan: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Céline Cappe: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Baptiste Alric: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Benoit Roby: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Sophie Gilardeau: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Barone: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France
- Pascal Girard: Cerco, CNRS UMR 5549, Toulouse, France; Université de Toulouse, UPS, Centre de Recherche Cerveau et Cognition, Toulouse, France; INSERM, Toulouse, France
|
28
|
Hierarchy, multidomain modules, and the evolution of intelligence. Behav Brain Sci 2017; 40:e212. [DOI: 10.1017/s0140525x16001710]
Abstract
In this commentary, we support a complex, mosaic, and multimodal approach to the evolution of intelligence. Using the arcuate fasciculus as an example of discontinuity in the evolution of neurobiological architectures, we argue that the strict dichotomy of modules versus G, adopted by Burkart et al. in the target article, is insufficient to interpret the available statistical and experimental evidence.
|
29
|
Yin P, Shamma SA, Fritz JB. Relative salience of spectral and temporal features in auditory long-term memory. J Acoust Soc Am 2016; 140:4046. [PMID: 28040019] [PMCID: PMC6910011] [DOI: 10.1121/1.4968395]
Abstract
In order to explore the representation of sound features in auditory long-term memory, two groups of ferrets were trained on Go vs Nogo, 3-zone classification tasks. The sound stimuli differed primarily along the spectral and temporal dimensions. In Group 1, two ferrets were trained to (i) classify tones based on their frequency (Tone-task), and subsequently learned to (ii) classify white noise based on its amplitude modulation rate (AM-task). In Group 2, two ferrets were trained to classify tones based on correlated combinations of their frequency and AM rate (AM-Tone task). Both groups of ferrets learned their tasks and were able to generalize performance along the trained spectral (tone frequency) or temporal (AM rate) dimensions. Insights into stimulus representations in memory were gained when the animals were tested with a diverse set of untrained probes that mixed features from the two dimensions. Animals exhibited a complex pattern of responses to the probes reflecting primarily the probes' spectral similarity with the training stimuli, and secondarily the temporal features of the stimuli. These diverse behavioral decisions could be well accounted for by a nearest-neighbor classifier model that relied on a multiscale spectrotemporal cortical representation of the training and probe sounds.
Affiliation(s)
- Pingbo Yin: Neural Systems Laboratory, Institute for Systems Research, 2207 A.V. Williams Building, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma: Neural Systems Laboratory, Institute for Systems Research, Electrical and Computer Engineering Department, 2203 A.V. Williams Building, University of Maryland, College Park, Maryland 20742, USA
- Jonathan B Fritz: Neural Systems Laboratory, Institute for Systems Research, 2207 A.V. Williams Building, University of Maryland, College Park, Maryland 20742, USA
|
30
|
Wittig JH, Morgan B, Masseau E, Richmond BJ. Humans and monkeys use different strategies to solve the same short-term memory tasks. Learn Mem 2016; 23:644-647. [PMID: 27918285] [PMCID: PMC5066608] [DOI: 10.1101/lm.041764.116]
Abstract
The neural mechanisms underlying human working memory are often inferred from studies using Old World monkeys. Humans use working memory to selectively memorize important information. We recently reported that monkeys do not seem to use selective memorization under experimental conditions that are common in monkey research but less common in human research. Here we compare the performance of humans and monkeys under the same experimental conditions. Humans selectively remember important images, whereas monkeys largely rely on recency information from nonselective memorization. Working memory studies in Old World monkeys must therefore be interpreted cautiously when making inferences about the mechanisms underlying human working memory.
Affiliation(s)
- John H Wittig: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892-4415, USA
- Barak Morgan: Global Risk Governance Program, Department of Public Law, University of Cape Town, Rondebosch 7701, South Africa; DST-NRF Centre of Excellence in Human Development, DVC Research Office, University of Witwatersrand, Johannesburg 2050, South Africa
- Evan Masseau: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892-4415, USA
- Barry J Richmond: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892-4415, USA
|
31
|
Using music to study the evolution of cognitive mechanisms relevant to language. Psychon Bull Rev 2016; 24:177-180. [DOI: 10.3758/s13423-016-1088-4]
|
32
|
Poliva O. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language. Front Neurosci 2016; 10:307. [PMID: 27445676] [PMCID: PMC4928493] [DOI: 10.3389/fnins.2016.00307]
Abstract
The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is credited only with sound recognition, the ADS is credited with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory, and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration, and prosodic analysis as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Here, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually came to be remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient to activate the phonological representations in the ADS, and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls to repeat polysyllabic words (i.e., they developed working memory). Finally, owing to the strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences).
|
33
|
Ravignani A, Fitch WT, Hanke FD, Heinrich T, Hurgitsch B, Kotz SA, Scharff C, Stoeger AS, de Boer B. What Pinnipeds Have to Say about Human Speech, Music, and the Evolution of Rhythm. Front Neurosci 2016; 10:274. [PMID: 27378843] [PMCID: PMC4913109] [DOI: 10.3389/fnins.2016.00274]
Abstract
Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research for the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, showing interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions in terms of species to test and empirical data needed.
Affiliation(s)
- Andrea Ravignani: Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium; Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany
- W Tecumseh Fitch: Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Frederike D Hanke: Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany
- Tamara Heinrich: Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany
- Sonja A Kotz: Basic and Applied NeuroDynamics Lab, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Constance Scharff: Department of Animal Behavior, Institute of Biology, Freie Universität Berlin, Berlin, Germany
- Angela S Stoeger: Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Bart de Boer: Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
|
34
|
Plakke B, Romanski LM. Neural circuits in auditory and audiovisual memory. Brain Res 2016; 1640:278-88. [PMID: 26656069] [PMCID: PMC4868791] [DOI: 10.1016/j.brainres.2015.11.042]
Abstract
Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty of obtaining a robust animal model in which to study auditory short-term memory. In recent years, neurophysiological and lesion studies have indicated a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in processing, integrating, and retaining communication information. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- B Plakke: University of Rochester School of Medicine & Dentistry, Department of Neurobiology & Anatomy, United States
- L M Romanski: University of Rochester School of Medicine & Dentistry, Department of Neurobiology & Anatomy, United States
|
35
|
Fritz JB, Malloy M, Mishkin M, Saunders RC. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices. Brain Res 2016; 1640:289-98. [PMID: 26707975] [PMCID: PMC5890928] [DOI: 10.1016/j.brainres.2015.12.012]
Abstract
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Jonathan B Fritz: Neural Systems Laboratory, Center for Acoustic and Auditory Research, Institute for Systems Research, University of Maryland, College Park, MD 20742, United States
- Megan Malloy: Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States
- Mortimer Mishkin: Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States
- Richard C Saunders: Laboratory of Neuropsychology, National Institute of Mental Health, NIH, Bethesda, MD 20892, United States
|
36
|
Scott BH, Mishkin M. Auditory short-term memory in the primate auditory cortex. Brain Res 2016; 1640:264-77. [PMID: 26541581] [PMCID: PMC4853305] [DOI: 10.1016/j.brainres.2015.10.048]
Abstract
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Brian H Scott: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Mortimer Mishkin: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
|
37
|
Audiovisual integration facilitates monkeys' short-term memory. Anim Cogn 2016; 19:799-811. [PMID: 27010716] [DOI: 10.1007/s10071-016-0979-0]
Abstract
Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies, but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.
|
38
|
Bigelow J, Ng CW, Poremba A. Local field potential correlates of auditory working memory in primate dorsal temporal pole. Brain Res 2015; 1640:299-313. [PMID: 26718730] [DOI: 10.1016/j.brainres.2015.12.025]
Abstract
Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components was observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility that some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts.
This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- James Bigelow: Department of Psychological and Brain Sciences, University of Iowa, 11 Seashore Hall East, Iowa City, IA 52242, United States
- Chi-Wing Ng: Center for Neuroscience, University of California, Davis, CA 95616, United States
- Amy Poremba: Department of Psychological and Brain Sciences, University of Iowa, 11 Seashore Hall East, Iowa City, IA 52242, United States
|
39
|
Abstract
Amplitude modulations are fundamental features of natural signals, including human speech and nonhuman primate vocalizations. Because natural signals frequently occur in the context of other competing signals, we used a forward-masking paradigm to investigate how the modulation context of a prior signal affects cortical responses to subsequent modulated sounds. Psychophysical "modulation masking," in which the presentation of a modulated "masker" signal elevates the threshold for detecting the modulation of a subsequent stimulus, has been interpreted as evidence of a central modulation filterbank and modeled accordingly. Whether cortical modulation tuning is compatible with such models remains unknown. By recording responses to pairs of sinusoidally amplitude modulated (SAM) tones in the auditory cortex of awake squirrel monkeys, we show that the prior presentation of the SAM masker elicited persistent and tuned suppression of the firing rate to subsequent SAM signals. Population averages of these effects are compatible with adaptation in broadly tuned modulation channels. In contrast, modulation context had little effect on the synchrony of the cortical representation of the second SAM stimuli, and the tuning of such effects did not match that observed for firing rate. Our results suggest that, although the temporal representation of modulated signals is more robust to changes in stimulus context than representations based on average firing rate, this representation is not fully exploited: psychophysical modulation masking more closely mirrors physiological rate suppression, and rate tuning for a given stimulus feature in a given neuron's signal pathway appears sufficient to engender context-sensitive cortical adaptation.
|
40
|
Muñoz-López M, Insausti R, Mohedano-Moriano A, Mishkin M, Saunders RC. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex. Front Neurosci 2015; 9:158. [PMID: 26041980] [PMCID: PMC4435056] [DOI: 10.3389/fnins.2015.00158]
Abstract
Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10–20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30–40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE to areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory reflects the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in the rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.
Affiliation(s)
- M Muñoz-López: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA; Human Neuroanatomy Laboratory and Regional Centre for Biomedical Research (CRIB), School of Medicine, University of Castilla-La Mancha, Albacete, Spain
- R Insausti: Human Neuroanatomy Laboratory and Regional Centre for Biomedical Research (CRIB), School of Medicine, University of Castilla-La Mancha, Albacete, Spain
- A Mohedano-Moriano: Human Neuroanatomy Laboratory and Regional Centre for Biomedical Research (CRIB), School of Medicine, University of Castilla-La Mancha, Albacete, Spain
- M Mishkin: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- R C Saunders: Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
|
41
|
Bigelow J, Poremba A. Item-nonspecific proactive interference in monkeys' auditory short-term memory. Hear Res 2015; 327:69-77. [PMID: 25983219] [DOI: 10.1016/j.heares.2015.05.002]
Abstract
Recent studies using the delayed matching-to-sample (DMS) paradigm indicate that monkeys' auditory short-term memory (STM) is susceptible to proactive interference (PI). During the task, subjects must indicate whether sample and test sounds separated by a retention interval are identical (match) or not (nonmatch). If a nonmatching test stimulus also occurred on a previous trial, monkeys are more likely to incorrectly make a "match" response (item-specific PI). However, it is not known whether PI may be caused by sounds presented on prior trials that are similar, but nonidentical to the current test stimulus (item-nonspecific PI). This possibility was investigated in two experiments. In Experiment 1, memoranda for each trial comprised tones with a wide range of frequencies, thus minimizing item-specific PI and producing a range of frequency differences among nonidentical tones. In Experiment 2, memoranda were drawn from a set of eight artificial sounds that differed from each other by one, two, or three acoustic dimensions (frequency, spectral bandwidth, and temporal dynamics). Results from both experiments indicate that subjects committed more errors when previously-presented sounds were acoustically similar (though not identical) to the test stimulus of the current trial. Significant effects were produced only by stimuli from the immediately previous trial, suggesting that item-nonspecific PI is less perseverant than item-specific PI, which can extend across noncontiguous trials. Our results contribute to existing human and animal STM literature reporting item-nonspecific PI caused by perceptual similarity among memoranda. Together, these observations underscore the significance of both temporal and discriminability factors in monkeys' STM.
|
42
|
Karabanov AN, Paine R, Chao CC, Schulze K, Scott B, Hallett M, Mishkin M. Participation of the classical speech areas in auditory long-term memory. PLoS One 2015; 10:e0119472. [PMID: 25815813] [PMCID: PMC4376917] [DOI: 10.1371/journal.pone.0119472]
Abstract
Accumulating evidence suggests that storing speech sounds requires transposing rapidly fluctuating sound waves into more easily encoded oromotor sequences. If so, then the classical speech areas in the caudalmost portion of the superior temporal gyrus (pSTG) and in the inferior frontal gyrus (IFG) may be critical for performing this acoustic-oromotor transposition. We tested this proposal by applying repetitive transcranial magnetic stimulation (rTMS) to each of these left-hemisphere loci, as well as to a nonspeech locus, while participants listened to pseudowords. After 5 minutes, these stimuli were re-presented together with new ones in a recognition test. Compared to control-site stimulation, pSTG stimulation produced a highly significant increase in recognition error rate, without affecting reaction time. By contrast, IFG stimulation led only to a weak, non-significant trend toward recognition memory impairment. Importantly, the impairment after pSTG stimulation was not due to interference with perception, since the same stimulation failed to affect pseudoword discrimination examined with short interstimulus intervals. Our findings suggest that pSTG is essential for transforming speech sounds into stored motor plans for reproducing the sound. Whether or not the IFG also plays a role in speech-sound recognition could not be determined from the present results.
Affiliation(s)
- Anke Ninija Karabanov
- National Institute of Mental Health, Bethesda, Maryland, United States of America
- Danish Research Center for Magnetic Resonance, Hvidovre, Denmark
- National Institute of Neurological Disorders and Stroke, Bethesda, Maryland, United States of America
- Rainer Paine
- National Institute of Neurological Disorders and Stroke, Bethesda, Maryland, United States of America
- Chi Chao Chao
- National Institute of Neurological Disorders and Stroke, Bethesda, Maryland, United States of America
- Department of Neurology, National Taiwan University Hospital, Taipei, Taiwan
- Katrin Schulze
- Institute of Child Health, University College London, London, United Kingdom
- Brian Scott
- National Institute of Mental Health, Bethesda, Maryland, United States of America
- Mark Hallett
- National Institute of Neurological Disorders and Stroke, Bethesda, Maryland, United States of America
- Mortimer Mishkin
- National Institute of Mental Health, Bethesda, Maryland, United States of America
|
43
|
Abstract
During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, subjects had to detect a change in the auditory component (vocalization), the visual component (face), or both components. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory, such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is involved not only in the perceptual processing of faces and vocalizations but also in their mnemonic processing.
|
44
|
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 03/03/2015] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception of and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching for the caller, and, via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS also processes speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, with offspring emitting a low-level distress call to inquire about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words, and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
|
45
|
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004 DOI: 10.12688/f1000research.6175.3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/21/2017] [Indexed: 12/28/2022] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection of and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present-day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call to inquire about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
|
46
|
Martins PT, Boeckx C. Attention mechanisms and the mosaic evolution of speech. Front Psychol 2014; 5:1463. [PMID: 25566141 PMCID: PMC4267173 DOI: 10.3389/fpsyg.2014.01463] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2014] [Accepted: 11/29/2014] [Indexed: 11/13/2022] Open
Abstract
There is still no categorical answer as to why humans, and no other species, have speech, or why speech is the way it is. Several purely anatomical arguments have been put forward, but they have been shown to be false, biologically implausible, or of limited scope. This perspective paper supports the idea that evolutionary theories of speech could benefit from a focus on the cognitive mechanisms that make speech possible, for which antecedents in evolutionary history and brain correlates can be found. This type of approach is part of a very recent but rapidly growing trend that has already provided crucial insights on the nature of human speech by focusing on the biological bases of vocal learning. Here we contend that a general mechanism of attention, which manifests itself not only in the visual but also in the auditory modality, might be one of the key ingredients of human speech, in addition to the mechanisms underlying vocal learning, and the pairing of facial gestures with vocalic units.
Affiliation(s)
- Pedro T. Martins
- Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain
- Center of Linguistics of the University of Porto, Porto, Portugal
- Biolinguistics Initiative Barcelona, Barcelona, Spain
- Cedric Boeckx
- Biolinguistics Initiative Barcelona, Barcelona, Spain
- Department of General Linguistics, University of Barcelona, Barcelona, Spain
- Catalan Institute of Research and Advanced Studies (ICREA), Barcelona, Spain
|
47
|
Scott BH, Mishkin M, Yin P. Neural correlates of auditory short-term memory in rostral superior temporal cortex. Curr Biol 2014; 24:2767-75. [PMID: 25456448 DOI: 10.1016/j.cub.2014.10.004] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Revised: 08/26/2014] [Accepted: 10/02/2014] [Indexed: 11/18/2022]
Abstract
BACKGROUND: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. RESULTS: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. CONCLUSIONS: Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM.
Affiliation(s)
- Brian H Scott
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Mortimer Mishkin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Pingbo Yin
- Laboratory of Neuropsychology, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- Neural Systems Laboratory, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
|
48
|
Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain. Proc Natl Acad Sci U S A 2014; 111:14553-8. [PMID: 25246563 DOI: 10.1073/pnas.1412109111] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities and were similar to streams used with human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single- and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, which may reflect a lateralized effect of top-down modulation. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
|
49
|
Joly O, Baumann S, Poirier C, Patterson RD, Thiele A, Griffiths TD. A perceptual pitch boundary in a non-human primate. Front Psychol 2014; 5:998. [PMID: 25309477 PMCID: PMC4163976 DOI: 10.3389/fpsyg.2014.00998] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2014] [Accepted: 08/21/2014] [Indexed: 11/20/2022] Open
Abstract
Pitch is an auditory percept critical to the perception of music and speech, and for these harmonic sounds, pitch is closely related to the repetition rate of the acoustic wave. This paper reports a test of the assumption that non-human primates and especially rhesus monkeys perceive the pitch of these harmonic sounds much as humans do. A new procedure was developed to train macaques to discriminate the pitch of harmonic sounds and thereby demonstrate that the lower limit for pitch perception in macaques is close to 30 Hz, as it is in humans. Moreover, when the phases of successive harmonics are alternated to cause a pseudo-doubling of the repetition rate, the lower pitch boundary in macaques decreases substantially, as it does in humans. The results suggest that both species use neural firing times to discriminate pitch, at least for sounds with relatively low repetition rates.
Affiliation(s)
- Olivier Joly
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Department of Experimental Psychology, MRC Cognition and Brain Sciences Unit, University of Oxford, Oxford, UK
- Simon Baumann
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Colline Poirier
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, UK
- Alexander Thiele
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- Timothy D Griffiths
- Auditory Group, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK
- University College London, London, UK
|
50
|
Bigelow J, Rossi B, Poremba A. Neural correlates of short-term memory in primate auditory cortex. Front Neurosci 2014; 8:250. [PMID: 25177266 PMCID: PMC4132374 DOI: 10.3389/fnins.2014.00250] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2014] [Accepted: 07/28/2014] [Indexed: 11/13/2022] Open
Abstract
Behaviorally-relevant sounds such as conspecific vocalizations are often available for only a brief amount of time; thus, goal-directed behavior frequently depends on auditory short-term memory (STM). Despite its ecological significance, the neural processes underlying auditory STM remain poorly understood. To investigate the role of the auditory cortex in STM, single- and multi-unit activity was recorded from the primary auditory cortex (A1) of two monkeys performing an auditory STM task using simple and complex sounds. Each trial consisted of a sample and test stimulus separated by a 5-s retention interval. A brief wait period followed the test stimulus, after which subjects pressed a button if the sounds were identical (match trials) or withheld button presses if they were different (non-match trials). A number of units exhibited significant changes in firing rate for portions of the retention interval, although these changes were rarely sustained. Instead, they were most frequently observed during the early and late portions of the retention interval, with inhibition being observed more frequently than excitation. At the population level, responses elicited on match trials were briefly suppressed early in the sound period relative to non-match trials. However, during the latter portion of the sound, firing rates increased significantly for match trials and remained elevated throughout the wait period. Related patterns of activity were observed in prior experiments from our lab in the dorsal temporal pole (dTP) and prefrontal cortex (PFC) of the same animals. The data suggest that early match suppression occurs in both A1 and the dTP, whereas later match enhancement occurs first in the PFC, followed by A1 and later in dTP. Because match enhancement occurs first in the PFC, we speculate that enhancement observed in A1 and dTP may reflect top–down feedback. Overall, our findings suggest that A1 forms part of the larger neural system recruited during auditory STM.
Affiliation(s)
- James Bigelow
- Department of Psychology, University of Iowa, Iowa City, IA, USA
- Breein Rossi
- Department of Psychology, University of Iowa, Iowa City, IA, USA
- Amy Poremba
- Department of Psychology, University of Iowa, Iowa City, IA, USA
|