1
Guo D, Yao Y, Liu X, Han Y. Clemastine improves emotional and social deficits in adolescent social isolation mice by reversing demyelination. Pharmacol Biochem Behav 2024;242:173824. PMID: 39002803; DOI: 10.1016/j.pbb.2024.173824.
Abstract
Adolescence is a critical period for social experience-dependent oligodendrocyte maturation and myelination. Adolescent stress can cause irreversible changes in brain structure and function, with lasting effects into adulthood and beyond. However, the molecular mechanisms linking adolescent social isolation stress with emotional and social competence remain largely unknown. In our study, we found that social isolation during adolescence leads to anxiety-like behaviors, depression-like behaviors, impaired social memory and altered patterns of social ultrasonic vocalizations in mice. In addition, adolescent social isolation stress induces demyelination in the prefrontal cortex and hippocampus of mice, with decreased myelin-related gene expression and disrupted myelin structure. More importantly, clemastine was sufficient to rescue the emotional deficits and social memory impairment by promoting remyelination. These findings reveal a demyelination mechanism underlying the emotional and social deficits caused by adolescent social isolation stress and provide potential therapeutic targets for treating stress-related mental disorders.
Affiliation(s)
- Dan Guo
- Department of Neurobiology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing 100191, China; National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
- Yuan Yao
- Department of Neurobiology, School of Basic Medical Sciences, Peking University Health Science Center, Beijing 100191, China; National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
- Xiumin Liu
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
- Ying Han
- National Institute on Drug Dependence and Beijing Key Laboratory of Drug Dependence Research, Peking University, Beijing 100191, China
2
Hermans EC, de Theije CGM, Nijboer CH, Achterberg EJM. Ultrasonic vocalization emission is altered following neonatal hypoxic-ischemic brain injury in mice. Behav Brain Res 2024;471:115113. PMID: 38878973; DOI: 10.1016/j.bbr.2024.115113.
Abstract
Neonatal hypoxic-ischemic (HI) brain injury leads to cognitive impairments, including social communication disabilities. Current treatments do not sufficiently target these impairments; therefore, new tools are needed to examine social communication in models of neonatal brain injury. Ultrasonic vocalizations (USVs) during early life show potential as a measurement of social development and reflect landmark developmental stages in neonatal mice. However, changes in USV emission early after HI injury have not previously been reported. Our current study examines USV patterns and classes in the first 3 days after HI injury. C57Bl/6 mice were subjected to HI on postnatal day (P)9 and USVs were recorded between P10 and P12. Audio files were analyzed using the VocalMat automated tool. HI-injured mice emitted fewer USVs, for shorter durations, and at a higher frequency compared to control (sham-operated) littermates. The HI-induced alterations in USVs were most distinct at P10 and in the frequency range of 50-75 kHz. At P10, HI-injured mouse pups also produced different ratios of USV class types compared to control littermates. Moreover, alterations in duration and frequency were specific to certain USV classes in HI animals compared to controls. Injury in the striatum and hippocampus contributed most to alterations in USV communication after HI. Overall, neonatal HI injury leads to USV alterations in newborn mice, which could be used as a tool to study early HI-related social communication deficits.
Affiliation(s)
- Eva C Hermans
- Department for Developmental Origins of Disease, University Medical Center Utrecht Brain Center and Wilhelmina Children's Hospital, Utrecht University, Utrecht, the Netherlands
- Caroline G M de Theije
- Department for Developmental Origins of Disease, University Medical Center Utrecht Brain Center and Wilhelmina Children's Hospital, Utrecht University, Utrecht, the Netherlands
- Cora H Nijboer
- Department for Developmental Origins of Disease, University Medical Center Utrecht Brain Center and Wilhelmina Children's Hospital, Utrecht University, Utrecht, the Netherlands
- E J Marijke Achterberg
- Department of Population Health Sciences, Unit Animals in Science and Society, Division of Behavioural Neuroscience, Faculty of Veterinary Medicine, Utrecht University, Utrecht, the Netherlands
3
Li Y, Liu ZW, Santana GM, Capaz AM, Doumazane E, Gao XB, Renier N, Dietrich MO. Neurons for infant social behaviors in the mouse zona incerta. Science 2024;385:409-416. PMID: 39052814; DOI: 10.1126/science.adk7411.
Abstract
Understanding the neural basis of infant social behaviors is crucial for elucidating the mechanisms of early social and emotional development. In this work, we report a specific population of somatostatin-expressing neurons in the zona incerta (ZISST) of preweaning mice that responds dynamically to social interactions, particularly those with their mother. Bidirectional neural activity manipulations in pups revealed that widespread connectivity of preweaning ZISST neurons to sensory, emotional, and cognitive brain centers mediates two key adaptive functions associated with maternal presence: the reduction of behavioral distress and the facilitation of learning. These findings reveal a population of neurons in the infant mouse brain that coordinates the positive effects of the relationship with the mother on an infant's behavior and physiology.
Affiliation(s)
- Yuexuan Li
- Laboratory of Physiology of Behavior, Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Zhong-Wu Liu
- Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Gustavo M Santana
- Laboratory of Physiology of Behavior, Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Department of Neuroscience, School of Medicine, Yale University, New Haven, CT 06520, USA
- Ana Marta Capaz
- Laboratoire de Plasticité Structurale, Sorbonne Université, ICM Paris Brain Institute, INSERM U1127, CNRS UMR7225, AP-HP, 75013 Paris, France
- Etienne Doumazane
- Laboratoire de Plasticité Structurale, Sorbonne Université, ICM Paris Brain Institute, INSERM U1127, CNRS UMR7225, AP-HP, 75013 Paris, France
- Xiao-Bing Gao
- Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Nicolas Renier
- Laboratoire de Plasticité Structurale, Sorbonne Université, ICM Paris Brain Institute, INSERM U1127, CNRS UMR7225, AP-HP, 75013 Paris, France
- Marcelo O Dietrich
- Laboratory of Physiology of Behavior, Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Department of Comparative Medicine, School of Medicine, Yale University, New Haven, CT 06520, USA
- Department of Neuroscience, School of Medicine, Yale University, New Haven, CT 06520, USA
4
MacDonald A, Hebling A, Wei XP, Yackle K. The breath shape controls intonation of mouse vocalizations. eLife 2024;13:RP93079. PMID: 38963785; PMCID: PMC11223766; DOI: 10.7554/elife.93079.
Abstract
Intonation in speech is the control of vocal pitch to layer expressive meaning onto communication, like increasing pitch to indicate a question. Also, stereotyped patterns of pitch are used to create distinct sounds with different denotations, like in tonal languages and, perhaps, the 10 sounds in the murine lexicon. A basic tone is created by exhalation through a constricted laryngeal voice box, and it is thought that more complex utterances are produced solely by dynamic changes in laryngeal tension. But perhaps the shifting pitch also results from altering the swiftness of exhalation. Consistent with the latter model, we describe that intonation in most vocalization types follows deviations in exhalation that appear to be generated by the re-activation of the cardinal breathing muscle for inspiration. We also show that the brainstem vocalization central pattern generator, the iRO, can create this breath pattern. Consequently, ectopic activation of the iRO not only induces phonation, but also the pitch patterns that compose most of the vocalizations in the murine lexicon. These results reveal a novel brainstem mechanism for intonation.
Affiliation(s)
- Alastair MacDonald
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Alina Hebling
- Neuroscience Graduate Program, University of California-San Francisco, San Francisco, United States
- Xin Paul Wei
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Biomedical Sciences Graduate Program, University of California-San Francisco, San Francisco, United States
- Kevin Yackle
- Department of Physiology, University of California-San Francisco, San Francisco, United States
5
Chen Z, Jia G, Zhou Q, Zhang Y, Quan Z, Chen X, Fukuda T, Huang Q, Shi Q. ARBUR, a machine learning-based analysis system for relating behaviors and ultrasonic vocalizations of rats. iScience 2024;27:109998. PMID: 38947508; PMCID: PMC11214285; DOI: 10.1016/j.isci.2024.109998.
Abstract
Deciphering how different behaviors and ultrasonic vocalizations (USVs) of rats interact can yield insights into the neural basis of social interaction. However, the behavior-vocalization interplay of rats remains elusive because of the challenges of relating the two communication media in complex social contexts. Here, we propose a machine learning-based analysis system (ARBUR) that can cluster without bias both non-step (continuous) and step USVs, hierarchically detect eight types of behavior of two freely behaving rats with high accuracy, and locate the vocal rat in 3-D space. ARBUR reveals that rats communicate via distinct USVs during different behaviors. Moreover, we show that ARBUR can reveal findings long neglected by previous manual analyses, especially regarding non-continuous USVs during easily confused social behaviors. This work could help mechanistically understand the behavior-vocalization interplay of rats and highlights the potential of machine learning algorithms in automatic animal behavioral and acoustic analysis.
Affiliation(s)
- Zhe Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Guanglu Jia
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Qijie Zhou
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Yulai Zhang
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Zhenzhen Quan
- Key Laboratory of Molecular Medicine and Biotherapy, School of Life Science, Beijing Institute of Technology, Beijing, China
- Xuechao Chen
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Toshio Fukuda
- Institute of Innovation for Future Society, Nagoya University, Nagoya, Japan
- Qiang Huang
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Qing Shi
- Key Laboratory of Biomimetic Robots and Systems, Beijing Institute of Technology, Ministry of Education, Beijing, China
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
6
Harding CD, Walker KMM, Hackett TD, Herwig A, Peirson SN, Vyazovskiy VV. Ultrasonic vocalisation rate tracks the diurnal pattern of activity in winter phenotype Djungarian hamsters (Phodopus sungorus). J Comp Physiol B 2024;194:383-401. PMID: 38733409; PMCID: PMC11233387; DOI: 10.1007/s00360-024-01556-2.
Abstract
Vocalisations are increasingly being recognised as an important aspect of normal rodent behaviour, yet little is known about how they interact with other spontaneous behaviours such as sleep and torpor, particularly in a social setting. We obtained chronic recordings of the vocal behaviour of adult male and female Djungarian hamsters (Phodopus sungorus) housed under short photoperiod (8 h light, 16 h dark, square wave transitions), in different social contexts. The animals were kept in isolation or in same-sex sibling pairs, separated by a grid which allowed non-physical social interaction. On approximately 20% of days, hamsters spontaneously entered torpor, a state of metabolic depression that coincides with the rest phase of many small mammal species in response to actual or predicted energy shortages. Animals produced ultrasonic vocalisations (USVs) with a peak frequency of 57 kHz in both social and asocial conditions, and there was a high degree of variability in vocalisation rate between subjects. Vocalisation rate was correlated with locomotor activity across the 24-h light cycle, occurring more frequently during the dark period when the hamsters were more active and peaking around light transitions. Solitary-housed animals did not vocalise whilst torpid, and socially housed animals remained in torpor even when torpor bouts overlapped with vocalisations. Besides a minor decrease in peak USV frequency when isolated hamsters were re-paired with their siblings, changing social contexts did not influence vocalisation behaviour or structure. In rare instances, temporally overlapping USVs occurred in socially housed animals and were grouped in a way that could indicate coordination. We did not observe broadband calls (BBCs) contemporaneous with USVs in this paradigm, corroborating their correlation with physical aggression, which was absent from our experiment. Overall, we find little evidence to suggest a direct social function of hamster USVs. We conclude that understanding the effects of vocalisations on spontaneous behaviours, such as sleep and torpor, will inform experimental design of future studies, especially where the role of social interactions is investigated.
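The 57 kHz peak USV frequency reported above is the kind of summary statistic that can be recovered from recordings with simple spectral analysis. As an illustrative sketch (not the authors' analysis pipeline; the `peak_frequency` helper and the 250 kHz sampling rate are assumptions for the example), the dominant frequency of a call segment can be estimated from its Fourier spectrum:

```python
import numpy as np

def peak_frequency(signal, fs):
    """Estimate the dominant frequency (Hz) of a call segment
    as the bin with maximal magnitude in its Fourier spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# Synthetic 10 ms test tone at 57 kHz, sampled at 250 kHz
# (comfortably above the Nyquist limit for ultrasonic calls).
fs = 250_000
t = np.arange(0, 0.01, 1.0 / fs)
tone = np.sin(2 * np.pi * 57_000 * t)
print(peak_frequency(tone, fs))  # prints 57000.0
```

With 2500 samples at 250 kHz the spectral resolution is 100 Hz, so the 57 kHz tone falls exactly on a bin; real calls would be windowed per syllable before this step.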
Affiliation(s)
- Christian D Harding
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Division of Pulmonary, Critical Care, Sleep Medicine and Physiology, University of California San Diego, San Diego, USA
- Kerry M M Walker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Annika Herwig
- Institute of Neurobiology, Ulm University, Ulm, Germany
- Stuart N Peirson
- Sir Jules Thorn Sleep and Circadian Neuroscience Institute, University of Oxford, Oxford, UK
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- The Kavli Institute for Nanoscience Discovery, Oxford, UK
- Vladyslav V Vyazovskiy
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Sir Jules Thorn Sleep and Circadian Neuroscience Institute, University of Oxford, Oxford, UK
- The Kavli Institute for Nanoscience Discovery, Oxford, UK
7
Grammer J, Valles R, Bowles A, Zelikowsky M. SAUSI: a novel assay for measuring social anxiety and motivation. bioRxiv 2024:2024.05.13.594023 [Preprint]. PMID: 38798428; PMCID: PMC11118329; DOI: 10.1101/2024.05.13.594023.
Abstract
Social anxiety is one of the most prevalent mental health disorders, though the underlying neurobiology is poorly understood. Progress in understanding the etiology of social anxiety has been hindered by the lack of comprehensive tools to assess social anxiety in model systems. Here, we created a new behavioral task, Selective Access to Unrestricted Social Interaction (SAUSI), which combines elements of social motivation, hesitancy, decision-making, and free interaction to enable the holistic assessment of social anxiety-like behaviors in mice. Using this novel assay, we found that isolation-induced social anxiety-like behaviors in female mice are largely driven by increased social fear and social hesitancy and by altered ultrasonic vocalizations. Deep learning analyses were able to computationally identify a unique behavioral footprint underlying the state produced by social isolation, demonstrating the compatibility of modern computational approaches with SAUSI. Finally, we compared the results of SAUSI to traditional social assays, including the 3-chamber sociability assay and the resident-intruder task. This revealed that behavioral changes induced by isolation were highly context dependent, and that while fragments of social anxiety measured in SAUSI were replicable across other tasks, a holistic assessment was not obtainable from these alternative assays. Our findings debut a novel task for the behavioral toolbox, one which overcomes limitations of previous assays by allowing both social choice and free interaction, and offers a new approach for assessing social anxiety in rodents.
Affiliation(s)
- Jordan Grammer
- Department of Neurobiology, University of Utah, United States
- Rene Valles
- Department of Neurobiology, University of Utah, United States
- Alexis Bowles
- Department of Neurobiology, University of Utah, United States
8
MacDonald A, Hebling A, Wei XP, Yackle K. The breath shape controls intonation of mouse vocalizations. bioRxiv 2024:2023.10.16.562597 [Preprint]. PMID: 37904912; PMCID: PMC10614923; DOI: 10.1101/2023.10.16.562597.
Abstract
Intonation in speech is the control of vocal pitch to layer expressive meaning onto communication, like increasing pitch to indicate a question. Also, stereotyped patterns of pitch are used to create distinct sounds with different denotations, like in tonal languages and, perhaps, the ten sounds in the murine lexicon. A basic tone is created by exhalation through a constricted laryngeal voice box, and it is thought that more complex utterances are produced solely by dynamic changes in laryngeal tension. But perhaps the shifting pitch also results from altering the swiftness of exhalation. Consistent with the latter model, we describe that intonation in most vocalization types follows deviations in exhalation that appear to be generated by the re-activation of the cardinal breathing muscle for inspiration. We also show that the brainstem vocalization central pattern generator, the iRO, can create this breath pattern. Consequently, ectopic activation of the iRO not only induces phonation, but also the pitch patterns that compose most of the vocalizations in the murine lexicon. These results reveal a novel brainstem mechanism for intonation.
Affiliation(s)
- Alastair MacDonald
- Department of Physiology, University of California-San Francisco, San Francisco, CA 94143, USA
- Alina Hebling
- Neuroscience Graduate Program, University of California-San Francisco, San Francisco, CA 94143, USA
- Xin Paul Wei
- Department of Physiology, University of California-San Francisco, San Francisco, CA 94143, USA
- Biomedical Sciences Graduate Program, University of California-San Francisco, San Francisco, CA 94143, USA
- Kevin Yackle
- Department of Physiology, University of California-San Francisco, San Francisco, CA 94143, USA
9
Santana GM, Dietrich MO. SqueakOut: Autoencoder-based segmentation of mouse ultrasonic vocalizations. bioRxiv 2024:2024.04.19.590368 [Preprint]. PMID: 38712291; PMCID: PMC11071348; DOI: 10.1101/2024.04.19.590368.
Abstract
Mice emit ultrasonic vocalizations (USVs) that are important for social communication. Despite great advances in recent years in tools to detect USVs from audio files, highly accurate segmentation of USVs from spectrograms (i.e., removing noise) remains a significant challenge. Here, we present a new dataset of 12,954 annotated spectrograms explicitly labeled for mouse USV segmentation. Leveraging this dataset, we developed SqueakOut, a lightweight (4.6M parameters) fully convolutional autoencoder that achieves high accuracy in supervised segmentation of USVs from spectrograms, with a Dice score of 90.22. SqueakOut combines a MobileNetV2 backbone with skip connections and transposed convolutions to precisely segment USVs. Using stochastic data augmentation techniques and a hybrid loss function, SqueakOut learns robust segmentation across varying recording conditions. We evaluate SqueakOut's performance, demonstrating substantial improvements over existing methods like VocalMat (63.82 Dice score). The accurate USV segmentations enabled by SqueakOut will facilitate novel methods for vocalization classification and more accurate analysis of mouse communication. To promote further research, we publicly release the 12,954-spectrogram USV segmentation dataset and the SqueakOut implementation.
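The Dice scores quoted above (90.22 for SqueakOut vs. 63.82 for VocalMat) measure overlap between a predicted segmentation mask and the annotated ground truth. A minimal sketch of the metric on toy binary masks (the `dice_score` helper and the 4x4 masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|), ranging over [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks: prediction covers 4 pixels, annotation 3, overlap 3.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_score(pred, target), 4))  # 2*3 / (4+3), prints 0.8571
```

A reported score such as 90.22 corresponds to a Dice coefficient of about 0.90 averaged over the test spectrograms.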
Affiliation(s)
- Gustavo M Santana
- Laboratory of Physiology of Behavior, Interdepartmental Neuroscience Program, Program in Physics, Engineering and Biology, Yale University, USA
- Graduate Program in Biochemistry, Federal University of Rio Grande do Sul, Brazil
- Marcelo O Dietrich
- Laboratory of Physiology of Behavior, Department of Comparative Medicine, Department of Neuroscience, Yale University, USA
10
Gencturk S, Unal G. Rodent tests of depression and anxiety: Construct validity and translational relevance. Cogn Affect Behav Neurosci 2024;24:191-224. PMID: 38413466; PMCID: PMC11039509; DOI: 10.3758/s13415-024-01171-2.
Abstract
Behavioral testing constitutes the primary method to measure the emotional states of nonhuman animals in preclinical research. Emerging as the characteristic tool of the behaviorist school of psychology, behavioral testing of animals, particularly rodents, is employed to understand the complex cognitive and affective symptoms of neuropsychiatric disorders. Following the symptom-based diagnosis model of the DSM, rodent models and tests of depression and anxiety focus on behavioral patterns that resemble the superficial symptoms of these disorders. While these practices provided researchers with a platform to screen novel antidepressant and anxiolytic drug candidates, their construct validity (whether they involve the relevant underlying mechanisms) has been questioned. In this review, we present the laboratory procedures used to assess depressive- and anxiety-like behaviors in rats and mice. These include constructs that rely on stress-triggered responses, such as behavioral despair, and those that emerge with nonaversive training, such as cognitive bias. We describe the specific behavioral tests that are used to assess these constructs and discuss the criticisms of their theoretical background. We review specific concerns about the construct validity and translational relevance of individual behavioral tests, outline the limitations of the traditional, symptom-based interpretation, and introduce novel, ethologically relevant frameworks that emphasize simple behavioral patterns. Finally, we explore behavioral monitoring and morphological analysis methods that can be integrated into behavioral testing and discuss how they can enhance the construct validity of these tests.
Affiliation(s)
- Sinem Gencturk
- Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, 34342, Istanbul, Turkey
- Gunes Unal
- Behavioral Neuroscience Laboratory, Department of Psychology, Boğaziçi University, 34342, Istanbul, Turkey
11
Park J, Choi S, Takatoh J, Zhao S, Harrahill A, Han BX, Wang F. Brainstem control of vocalization and its coordination with respiration. Science 2024;383:eadi8081. PMID: 38452069; PMCID: PMC11223444; DOI: 10.1126/science.adi8081.
Abstract
Phonation critically depends on precise control of laryngeal muscles in coordination with ongoing respiration. However, the neural mechanisms governing these processes remain unclear. We identified excitatory vocalization-specific laryngeal premotor neurons located in the retroambiguus nucleus (RAmVOC) in adult mice as being both necessary and sufficient for driving vocal cord closure and eliciting mouse ultrasonic vocalizations (USVs). The duration of RAmVOC activation can determine the lengths of both USV syllables and concurrent expiration periods, with the impact of RAmVOC activation depending on the respiration phase. RAmVOC neurons receive inhibition from the preBötzinger complex, and the need to inspire overrides RAmVOC-mediated vocal cord closure. Ablating inhibitory synapses in RAmVOC neurons compromised this inspiration gating of laryngeal adduction, resulting in discoordination of vocalization with respiration. Our study reveals the circuits for vocal production and vocal-respiratory coordination.
Affiliation(s)
- Jaehong Park
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Seonmi Choi
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Jun Takatoh
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Shengli Zhao
- Department of Neurobiology, Duke University Medical Center, Durham, NC 27710, USA
- Andrew Harrahill
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Bao-Xia Han
- Department of Neurobiology, Duke University Medical Center, Durham, NC 27710, USA
- Fan Wang
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
12
Scott KJ, Speers LJ, Bilkey DK. Utilizing synthetic training data for the supervised classification of rat ultrasonic vocalizations. J Acoust Soc Am 2024;155:306-314. PMID: 38236810; DOI: 10.1121/10.0024340.
Abstract
Murine rodents generate ultrasonic vocalizations (USVs) with frequencies that extend to around 120 kHz. These calls are important in social behaviour, and so their analysis can provide insights into the function of vocal communication and its dysfunction. The manual identification of USVs and their subsequent classification into subcategories is time-consuming. Although machine learning approaches for identification and classification can lead to enormous efficiency gains, the time and effort required to generate training data can be high, and the accuracy of current approaches can be problematic. Here, we compare the detection and classification performance of a trained human against two convolutional neural networks (CNNs), DeepSqueak (DS) and VocalMat (VM), on audio containing rat USVs. Furthermore, we test the effect of inserting synthetic USVs into the training data of the VM CNN as a means of reducing the workload associated with generating a training set. Our results indicate that VM outperformed the DS CNN on measures of call identification and classification. Additionally, we found that the augmentation of training data with synthetic images resulted in a further improvement in accuracy, such that it was sufficiently close to human performance to allow for the use of this software in laboratory conditions.
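Synthetic training examples of the kind described above can be as simple as spectrogram-like images with a known call trace painted on. The sketch below is an illustration of the general idea only, not the authors' generator (the `synthetic_usv` helper, array sizes, and sweep parameters are all assumptions): it draws a linear frequency sweep onto a noisy time-frequency array and returns the image together with a pixel-perfect label mask, which is what makes synthetic data cheap to produce.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def synthetic_usv(shape=(128, 256), f_start=40, f_end=70, noise=0.1):
    """Return a (spectrogram, label) pair: rows are frequency bins,
    columns are time bins, and the call is a linear frequency sweep."""
    spec = rng.normal(0.0, noise, size=shape)  # background noise
    label = np.zeros(shape, dtype=np.uint8)
    n_time = shape[1]
    for t in range(n_time):
        # Linear sweep from f_start to f_end across the time axis.
        f = int(round(f_start + (f_end - f_start) * t / (n_time - 1)))
        spec[f, t] += 1.0   # bright call pixel
        label[f, t] = 1     # ground-truth mask pixel
    return spec, label

spec, label = synthetic_usv()
print(spec.shape, int(label.sum()))  # prints (128, 256) 256
```

Because the label mask is generated alongside the image, no manual annotation is needed; varying the sweep shape, duration, and noise level yields a diverse augmentation set.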
Affiliation(s)
- K Jack Scott
- Department of Psychology, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand
- Lucinda J Speers
- Department of Psychology, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand
- Grenoble Institut des Neurosciences, Inserm, France
- David K Bilkey
- Department of Psychology, University of Otago, William James Building, 275 Leith Walk, Dunedin 9016, New Zealand
13
|
Trotier A, Bagnoli E, Walski T, Evers J, Pugliese E, Lowery M, Kilcoyne M, Fitzgerald U, Biggs M. Micromotion Derived Fluid Shear Stress Mediates Peri-Electrode Gliosis through Mechanosensitive Ion Channels. Adv Sci (Weinh) 2023; 10:e2301352. [PMID: 37518828 PMCID: PMC10520674 DOI: 10.1002/advs.202301352]
Abstract
The development of bioelectronic neural implant technologies has advanced significantly over the past 5 years, particularly in brain-machine interfaces and electronic medicine. However, neuroelectrode-based therapies require invasive neurosurgery and can subject neural tissues to micromotion-induced mechanical shear, leading to chronic inflammation, the formation of a peri-electrode void and the deposition of reactive glial scar tissue. These structures act as physical barriers, hindering electrical signal propagation and reducing neural implant functionality. Although well documented, the mechanisms behind the initiation and progression of these processes are poorly understood. Herein, in silico analysis of micromotion-induced peri-electrode void progression and gliosis is described. Subsequently, ventral mesencephalic cells exposed to milliscale fluid shear stress in vitro exhibited increased expression of gliosis-associated proteins and overexpression of mechanosensitive ion channels PIEZO1 (piezo-type mechanosensitive ion channel component 1) and TRPA1 (transient receptor potential ankyrin 1), effects further confirmed in vivo in a rat model of peri-electrode gliosis. Furthermore, in vitro analysis indicates that chemical inhibition/activation of PIEZO1 affects fluid shear stress mediated astrocyte reactivity in a mitochondrial-dependent manner. Together, the results suggest that mechanosensitive ion channels play a major role in the development of a peri-electrode void and micromotion-induced glial scarring at the peri-electrode region.
Affiliation(s)
- Alexandre Trotier: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Galway Neuroscience Centre, University of Galway, Galway H91 W2TY, Ireland
- Enrico Bagnoli: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Galway Neuroscience Centre, University of Galway, Galway H91 W2TY, Ireland
- Tomasz Walski: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Department of Biomedical Engineering, Faculty of Fundamental Problems of Technology, Wrocław University of Science and Technology, Wroclaw 50-370, Poland
- Judith Evers: School of Electrical and Electronic Engineering, University College Dublin, Dublin 4, Ireland
- Eugenia Pugliese: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland
- Madeleine Lowery: School of Electrical and Electronic Engineering, University College Dublin, Dublin 4, Ireland
- Michelle Kilcoyne: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Galway Neuroscience Centre, University of Galway, Galway H91 W2TY, Ireland; Carbohydrate Signalling Group, Discipline of Microbiology, University of Galway, Galway H91 W2TY, Ireland
- Una Fitzgerald: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Galway Neuroscience Centre, University of Galway, Galway H91 W2TY, Ireland
- Manus Biggs: SFI Research Centre for Medical Devices (CÚRAM), University of Galway, Galway H91 W2TY, Ireland; Galway Neuroscience Centre, University of Galway, Galway H91 W2TY, Ireland

14
Jefferson SJ, Gregg I, Dibbs M, Liao C, Wu H, Davoudian PA, Woodburn SC, Wehrle PH, Sprouse JS, Sherwood AM, Kaye AP, Pittenger C, Kwan AC. 5-MeO-DMT modifies innate behaviors and promotes structural neural plasticity in mice. Neuropsychopharmacology 2023; 48:1257-1266. [PMID: 37015972 PMCID: PMC10354037 DOI: 10.1038/s41386-023-01572-w]
Abstract
Serotonergic psychedelics are gaining increasing interest as potential therapeutics for a range of mental illnesses. Compounds with short-lived subjective effects may be clinically useful because dosing time would be reduced, which may improve patient access. One short-acting psychedelic is 5-MeO-DMT, which has been associated with improvement in depression and anxiety symptoms in early phase clinical studies. However, relatively little is known about the behavioral and neural mechanisms of 5-MeO-DMT, particularly the durability of its long-term effects. Here we characterized the effects of 5-MeO-DMT on innate behaviors and dendritic architecture in mice. We showed that 5-MeO-DMT induces a dose-dependent increase in head-twitch response that is shorter in duration than that induced by psilocybin at all doses tested. 5-MeO-DMT also substantially suppresses social ultrasonic vocalizations produced during mating behavior. 5-MeO-DMT produces long-lasting increases in dendritic spine density in the mouse medial frontal cortex that are driven by an elevated rate of spine formation. However, unlike psilocybin, 5-MeO-DMT did not affect the size of dendritic spines. These data provide insights into the behavioral and neural consequences underlying the action of 5-MeO-DMT and highlight similarities and differences with those of psilocybin.
Affiliation(s)
- Sarah J Jefferson: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA
- Ian Gregg: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA
- Mark Dibbs: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA
- Clara Liao: Interdepartmental Neuroscience Program, Yale University School of Medicine, New Haven, CT, 06511, USA
- Hao Wu: Interdepartmental Neuroscience Program, Yale University School of Medicine, New Haven, CT, 06511, USA
- Pasha A Davoudian: Interdepartmental Neuroscience Program, Yale University School of Medicine, New Haven, CT, 06511, USA; Medical Scientist Training Program, Yale University School of Medicine, New Haven, CT, 06511, USA
- Samuel C Woodburn: Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, 14853, USA
- Patrick H Wehrle: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA
- Alfred P Kaye: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA; VA National Center for PTSD Clinical Neuroscience Division, West Haven, CT, 06516, USA
- Christopher Pittenger: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA; Child Study Center, Yale University School of Medicine, New Haven, CT, 06511, USA
- Alex C Kwan: Department of Psychiatry, Yale University School of Medicine, New Haven, CT, 06511, USA; Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY, 14853, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT, 06511, USA; Department of Psychiatry, Weill Cornell Medicine, New York, NY, 10065, USA

15
Sterling ML, Teunisse R, Englitz B. Rodent ultrasonic vocal interaction resolved with millimeter precision using hybrid beamforming. eLife 2023; 12:e86126. [PMID: 37493217 PMCID: PMC10522333 DOI: 10.7554/elife.86126]
Abstract
Ultrasonic vocalizations (USVs) fulfill an important role in communication and navigation in many species. Because of their social and affective significance, rodent USVs are increasingly used as a behavioral measure in neurodevelopmental and neurolinguistic research. Reliably attributing USVs to their emitter during close interactions has emerged as a difficult, key challenge. If addressed, all subsequent analyses gain substantial confidence. We present a hybrid ultrasonic tracking system, Hybrid Vocalization Localizer (HyVL), that synergistically integrates a high-resolution acoustic camera with high-quality ultrasonic microphones. HyVL is the first to achieve millimeter precision (~3.4-4.8 mm, 91% assigned) in localizing USVs, ~3× better than other systems, approaching the physical limits (mouse snout ~10 mm). We analyze mouse courtship interactions and demonstrate that males and females vocalize in starkly different relative spatial positions, and that the fraction of female vocalizations has likely been overestimated previously due to imprecise localization. Further, we find that when two male mice interact with one female, one of the males takes a dominant role in the interaction both in terms of the vocalization rate and the location relative to the female. HyVL substantially improves the precision with which social communication between rodents can be studied. It is also affordable, open-source, easy to set up, can be integrated with existing setups, and reduces the required number of experiments and animals.
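The localization approach described above ultimately rests on time-difference-of-arrival (TDOA) estimates between microphone pairs. Below is a minimal, generic sketch of TDOA estimation by cross-correlation; it is illustrative only (HyVL's open-source implementation is far more sophisticated), and the sampling rate and delay values are assumptions:

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

fs = 250_000                      # 250 kHz sampling rate (assumed)
t = np.arange(0, 0.005, 1 / fs)   # a 5 ms call
call = np.sin(2 * np.pi * 70_000 * t) * np.hanning(t.size)  # 70 kHz tone burst

true_delay = 25                   # samples; ~100 us, ~34 mm extra path at 343 m/s
mic_a = np.concatenate([call, np.zeros(true_delay)])
mic_b = np.concatenate([np.zeros(true_delay), call])  # same call, delayed

print(tdoa_samples(mic_a, mic_b))  # 25
```

Given delays for several microphone pairs and the array geometry, the emitter position follows from solving the resulting hyperbolic constraints; combining such estimates with an acoustic-camera image is the "hybrid" part of the paper's method.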
Affiliation(s)
- Max L Sterling: Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Visual Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands; Department of Human Genetics, Radboudumc, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Ruben Teunisse: Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bernhard Englitz: Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands

16
Morel C, Martinez Sanchez I, Cherifi Y, Chartrel N, Diaz Heijtz R. Perturbation of maternal gut microbiota in mice during a critical perinatal window influences early neurobehavioral outcomes in offspring. Neuropharmacology 2023; 229:109479. [PMID: 36870672 DOI: 10.1016/j.neuropharm.2023.109479]
Abstract
The gut microbiota is increasingly recognized as a key environmental factor that shapes host development and physiology, including the formation and function of neural circuits. Concurrently, there has been growing concern that early-life antibiotic exposure may alter brain developmental trajectories, increasing the risk for neurodevelopmental disorders such as autism spectrum disorder (ASD). Here, we assessed whether perturbation of the maternal gut microbiota in mice during a narrow critical perinatal window (the last week of pregnancy and the first three postnatal days), induced by exposure to a commonly used broad-spectrum oral antibiotic (ampicillin), influences offspring neurobehavioral outcomes relevant to ASD. Our results demonstrate that neonatal offspring of antibiotic-treated dams display an altered pattern of ultrasonic communication, which was more pronounced in males. Moreover, juvenile male, but not female, offspring of antibiotic-treated dams showed reduced social motivation and social interaction, as well as context-dependent anxiety-like behavior. However, no changes were observed in locomotor or exploratory activity. This behavioral phenotype of exposed juvenile males was associated with reduced gene expression of the oxytocin receptor (OXTR) and several tight-junction proteins in the prefrontal cortex, a key region involved in the regulation of social and emotional behaviors, as well as with a mild inflammatory response in the colon. Further, juvenile offspring of exposed dams also showed distinct alterations in several gut bacterial species, including Lactobacillus murinus and Parabacteroides goldsteinii. Overall, this study highlights the importance of the maternal microbiome in early life and shows how its perturbation by a widely used antibiotic could contribute to atypical social and emotional development of offspring in a sex-dependent manner.
Affiliation(s)
- Cassandre Morel: Department of Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; University of Rouen Normandy, INSERM, NorDIC, UMR 1239, F-76000 Rouen, France
- Yamina Cherifi: University of Rouen Normandy, INSERM, NorDIC, UMR 1239, F-76000 Rouen, France
- Nicolas Chartrel: University of Rouen Normandy, INSERM, NorDIC, UMR 1239, F-76000 Rouen, France

17
Jourjine N, Woolfolk ML, Sanguinetti-Scheck JI, Sabatini JE, McFadden S, Lindholm AK, Hoekstra HE. Two pup vocalization types are genetically and functionally separable in deer mice. Curr Biol 2023; 33:1237-1248.e4. [PMID: 36893759 DOI: 10.1016/j.cub.2023.02.045]
Abstract
Vocalization is a widespread social behavior in vertebrates that can affect fitness in the wild. Although many vocal behaviors are highly conserved, heritable features of specific vocalization types can vary both within and between species, raising the questions of why and how some vocal behaviors evolve. Here, using new computational tools to automatically detect and cluster vocalizations into distinct acoustic categories, we compare pup isolation calls across neonatal development in eight taxa of deer mice (genus Peromyscus) and compare them with laboratory mice (C57BL6/J strain) and free-living, wild house mice (Mus musculus domesticus). Whereas both Peromyscus and Mus pups produce ultrasonic vocalizations (USVs), Peromyscus pups also produce a second call type with acoustic features, temporal rhythms, and developmental trajectories that are distinct from those of USVs. In deer mice, these lower-frequency "cries" are predominantly emitted on postnatal days 1 through 9, whereas USVs are primarily made after day 9. Using playback assays, we show that cries result in a more rapid approach by Peromyscus mothers than USVs, suggesting a role for cries in eliciting parental care early in neonatal development. Using a genetic cross between two sister species of deer mice exhibiting large, innate differences in the acoustic structure of cries and USVs, we find that variation in vocalization rate, duration, and pitch displays different degrees of genetic dominance and that cry and USV features can be uncoupled in second-generation hybrids. Taken together, this work shows that vocal behavior can evolve quickly between closely related rodent species in which vocalization types, likely serving distinct functions in communication, are controlled by distinct genetic loci.
Affiliation(s)
- Nicholas Jourjine: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Maya L Woolfolk: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Juan I Sanguinetti-Scheck: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- John E Sabatini: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Sade McFadden: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA
- Anna K Lindholm: Department of Evolutionary Biology & Environmental Studies, University of Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Hopi E Hoekstra: Department of Molecular & Cellular Biology, Department of Organismic & Evolutionary Biology, Center for Brain Science, Museum of Comparative Zoology, Harvard University and the Howard Hughes Medical Institute, 16 Divinity Avenue, Cambridge, MA 02138, USA

18
Atanasova E, Arévalo AP, Graf I, Zhang R, Bockmann J, Lutz AK, Boeckers TM. Immune activation during pregnancy exacerbates ASD-related alterations in Shank3-deficient mice. Mol Autism 2023; 14:1. [PMID: 36604742 PMCID: PMC9814193 DOI: 10.1186/s13229-022-00532-3]
Abstract
BACKGROUND Autism spectrum disorder (ASD) is characterized mainly by deficits in social interaction and communication and by repetitive behaviors. Known causes of ASD include mutations of certain risk genes, such as the postsynaptic protein SHANK3, and environmental factors, including prenatal infections. METHODS To analyze the gene-environment interplay in ASD, we combined the Shank3Δ11-/- ASD mouse model with maternal immune activation (MIA) via an intraperitoneal injection of polyinosinic/polycytidylic acid (Poly I:C) on gestational day 12.5. The offspring of the injected dams were further analyzed for autistic-like behaviors and comorbidities, followed by biochemical experiments with a focus on synaptic analysis. RESULTS We show that the two-hit mice exhibit excessive grooming and deficits in social behavior more prominently than the Shank3Δ11-/- mice. Interestingly, these behavioral changes were accompanied by an unexpected upregulation of postsynaptic density (PSD) proteins at excitatory synapses in the striatum, hippocampus, and prefrontal cortex. LIMITATIONS We found several PSD proteins to be increased in the two-hit mice; however, we can only speculate about the pathways behind the worsening of the autistic phenotype in these mice. CONCLUSIONS With this study, we demonstrate that an interplay between genetic susceptibility and environmental factors defines the severity of ASD symptoms. Moreover, we show that a general imbalance of PSD proteins at excitatory synapses is linked to ASD symptoms, making this two-hit model a promising tool for investigating the complex pathophysiology of neurodevelopmental disorders.
Affiliation(s)
- Ines Graf: Institute for Anatomy and Cell Biology, Ulm University, Ulm, Germany
- Rong Zhang: Neuroscience Research Institute, Health Science Centre, Peking University, Peking, China
- Juergen Bockmann: Institute for Anatomy and Cell Biology, Ulm University, Ulm, Germany
- Anne-Kathrin Lutz: Institute for Anatomy and Cell Biology, Ulm University, Ulm, Germany
- Tobias M Boeckers: Institute for Anatomy and Cell Biology, Ulm University, Ulm, Germany; Deutsches Zentrum für Neurodegenerative Erkrankungen (DZNE), Ulm Site, Ulm, Germany

19
Pranic NM, Kornbrek C, Yang C, Cleland TA, Tschida KA. Rates of ultrasonic vocalizations are more strongly related than acoustic features to non-vocal behaviors in mouse pups. Front Behav Neurosci 2022; 16:1015484. [PMID: 36600992 PMCID: PMC9805956 DOI: 10.3389/fnbeh.2022.1015484]
Abstract
Mouse pups produce ultrasonic vocalizations (USVs) in response to isolation from the nest (i.e., isolation USVs). Rates and acoustic features of isolation USVs change dramatically over the first two weeks of life, and there is also substantial variability in the rates and acoustic features of isolation USVs at a given postnatal age. The factors that contribute to this within-age variability remain largely unknown. Here, we explore the extent to which the non-vocal behaviors of mouse pups relate to within-age variability in the rates and acoustic features of their USVs. We recorded the non-vocal behaviors of isolated C57BL/6J mouse pups at four postnatal ages (postnatal days 5, 10, 15, and 20), measured rates of isolation USV production, and applied a combination of pre-defined acoustic feature measurements and an unsupervised machine learning-based vocal analysis method to examine USV acoustic features. When we considered different categories of non-vocal behavior, our analyses revealed that mice in all postnatal age groups produce higher rates of isolation USVs during active non-vocal behaviors than when lying still. Moreover, rates of isolation USVs are correlated with the intensity (i.e., magnitude) of non-vocal body and limb movements within a given trial. In contrast, USVs produced during different categories of non-vocal behavior and during different intensities of non-vocal movement do not differ substantially in their acoustic features. Our findings suggest that levels of behavioral arousal contribute to within-age variability in the rates, but not the acoustic features, of mouse isolation USVs.
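The rate-movement relationship reported here is, at bottom, a per-trial correlation between two measurements. A toy sketch with made-up numbers (not the study's data or analysis code; the simulated coupling of rate to movement is purely illustrative):

```python
import numpy as np

# Toy per-trial data: movement intensity (arbitrary units) and USV rate.
rng = np.random.default_rng(0)
movement = rng.uniform(0, 1, size=30)              # movement intensity per trial
usv_rate = 20 * movement + rng.normal(0, 2, 30)    # rate loosely tracks movement

# Pearson correlation between movement intensity and USV rate across trials.
r = np.corrcoef(movement, usv_rate)[0, 1]
print(round(r, 2))
```

A positive r of this kind is what the paper reports for rates; the analogous correlation computed on acoustic features would be near zero under their findings.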
20
Jabarin R, Netser S, Wagner S. Beyond the three-chamber test: toward a multimodal and objective assessment of social behavior in rodents. Mol Autism 2022; 13:41. [PMID: 36284353 PMCID: PMC9598038 DOI: 10.1186/s13229-022-00521-6]
Abstract
MAIN: In recent years, substantial advances in social neuroscience have been realized, including the generation of numerous rodent models of autism spectrum disorder. Still, it can be argued that those methods currently being used to analyze animal social behavior create a bottleneck that significantly slows down progress in this field. Indeed, the bulk of research still relies on a small number of simple behavioral paradigms, the results of which are assessed without considering behavioral dynamics. Moreover, only few variables are examined in each paradigm, thus overlooking a significant portion of the complexity that characterizes social interaction between two conspecifics, subsequently hindering our understanding of the neural mechanisms governing different aspects of social behavior. We further demonstrate these constraints by discussing the most commonly used paradigm for assessing rodent social behavior, the three-chamber test. We also point to the fact that although emotions greatly influence human social behavior, we lack reliable means for assessing the emotional state of animals during social tasks. As such, we also discuss current evidence supporting the existence of pro-social emotions and emotional cognition in animal models. We further suggest that adequate social behavior analysis requires a novel multimodal approach that employs automated and simultaneous measurements of multiple behavioral and physiological variables at high temporal resolution in socially interacting animals. We accordingly describe several computerized systems and computational tools for acquiring and analyzing such measurements. Finally, we address several behavioral and physiological variables that can be used to assess socio-emotional states in animal models and thus elucidate intricacies of social behavior so as to attain deeper insight into the brain mechanisms that mediate such behaviors. 
CONCLUSIONS: In summary, we suggest that combining automated multimodal measurements with machine-learning algorithms will help define socio-emotional states and determine their dynamics during various types of social tasks, thus enabling a more thorough understanding of the complexity of social behavior.
Affiliation(s)
- Renad Jabarin: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shai Netser: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shlomo Wagner: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel

21
Stoumpou V, Vargas CDM, Schade PF, Boyd JL, Giannakopoulos T, Jarvis ED. Analysis of Mouse Vocal Communication (AMVOC): a deep, unsupervised method for rapid detection, analysis and classification of ultrasonic vocalisations. Bioacoustics 2022. [DOI: 10.1080/09524622.2022.2099973]
Affiliation(s)
- Vasiliki Stoumpou: School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- César D. M. Vargas: Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Peter F. Schade: Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA; Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- J. Lomax Boyd: Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA
- Theodoros Giannakopoulos: Computational Intelligence Lab, Institute of Informatics and Telecommunications, National Center of Scientific Research 'Demokritos', Athens, Greece
- Erich D. Jarvis: Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA

22
Matsumoto J, Kanno K, Kato M, Nishimaru H, Setogawa T, Chinzorig C, Shibata T, Nishijo H. Acoustic camera system for measuring ultrasound communication in mice. iScience 2022; 25:104812. [PMID: 35982786 PMCID: PMC9379670 DOI: 10.1016/j.isci.2022.104812]
Abstract
To investigate biological mechanisms underlying social behaviors and their deficits, social communication via ultrasonic vocalizations (USVs) in mice has received considerable attention as a powerful experimental model. The advances in sound localization technology have facilitated the analysis of vocal interactions between multiple mice. However, existing sound localization systems are built around distributed-microphone arrays, which require a special recording arena and long processing time. Here, we report a novel acoustic camera system, USVCAM, which enables simpler and faster USV localization and assignment. The system comprises recently developed USV segmentation algorithms with a modification for overlapping vocalizations that results in high accuracy. Using USVCAM, we analyzed USV communications in a conventional home cage, and demonstrated novel vocal interactions in female ICR mice under a resident-intruder paradigm. The extended applicability and usability of USVCAM may facilitate future studies investigating typical and atypical vocal communication and social behaviors, as well as the underlying mechanisms.
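Stripped to its essentials, USV segmentation amounts to finding runs of spectrogram time bins whose energy exceeds a noise floor. A deliberately simplified sketch follows (USVCAM's actual segmentation algorithms, which also handle overlapping vocalizations, are more involved; all values here are illustrative):

```python
import numpy as np

def segment_usvs(spec, threshold, min_len=3):
    """Return (start, end) time-bin intervals where the per-bin peak
    energy stays above threshold for at least min_len consecutive bins."""
    active = spec.max(axis=0) > threshold      # is energy present in each bin?
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                          # a call begins
        elif not a and start is not None:
            if i - start >= min_len:
                segments.append((start, i))    # a call ends; keep if long enough
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))  # call runs to the end
    return segments

# Toy spectrogram: noise floor 0.1 with two "calls" of amplitude 1.0.
spec = np.full((64, 100), 0.1)
spec[30, 10:25] = 1.0   # call 1 occupies time bins 10-24
spec[45, 60:70] = 1.0   # call 2 occupies time bins 60-69
print(segment_usvs(spec, threshold=0.5))  # [(10, 25), (60, 70)]
```

Once segmented, each interval can be passed to a localizer to assign the call to an animal, which is the step USVCAM accelerates.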
Affiliation(s)
- Jumpei Matsumoto: Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan; Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Kouta Kanno: Laboratory of Neuroscience, Course of Psychology, Department of Humanities, Faculty of Law, Economics and the Humanities, Kagoshima University, Kagoshima 890-0065, Japan
- Masahiro Kato: Katou Acoustics Consultant Office, Yokohama 225-0021, Japan; Osawa Memorial Institute of Architectural Environmental Engineering, Kanto Gakuin University, Yokohama 236-8501, Japan
- Hiroshi Nishimaru: Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan; Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Tsuyoshi Setogawa: Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan; Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan
- Choijiljav Chinzorig: Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan
- Tomohiro Shibata: Department of Human Intelligence Systems, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, Kitakyushu 808-0196, Japan
- Hisao Nishijo: Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama 930-0194, Japan; Research Center for Idling Brain Science, University of Toyama, Toyama 930-0194, Japan

23
Mai L, Inada H, Kimura R, Kanno K, Matsuda T, Tachibana RO, Tucci V, Komaki F, Hiroi N, Osumi N. Advanced paternal age diversifies individual trajectories of vocalization patterns in neonatal mice. iScience 2022; 25:104834. [PMID: 36039363 PMCID: PMC9418688 DOI: 10.1016/j.isci.2022.104834]
Abstract
Infant crying is a communicative behavior impaired in neurodevelopmental disorders (NDDs). Because advanced paternal age is a risk factor for NDDs, we performed computational approaches to evaluate how paternal age affected vocal communication and body weight development in C57BL/6 mouse offspring from young and aged fathers. Analyses of ultrasonic vocalization (USV) consisting of syllables showed that advanced paternal age reduced the number and duration of syllables, altered the syllable composition, and caused lower body weight gain in pups. Pups born to young fathers had convergent vocal characteristics with a rich repertoire, whereas those born to aged fathers exhibited more divergent vocal patterns with limited repertoire. Additional analyses revealed that some pups from aged fathers displayed atypical USV trajectories. Thus, our study indicates that advanced paternal age has a significant effect on offspring's vocal development. Our computational analyses are effective in characterizing altered individual diversity.
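A "rich" versus "limited" repertoire, as contrasted above, can be summarized by the Shannon entropy of a pup's syllable-type distribution. The sketch below uses made-up counts and is not the paper's metric; it only illustrates the general idea of quantifying repertoire diversity:

```python
import numpy as np

def repertoire_entropy(syllable_counts):
    """Shannon entropy (bits) of a syllable-type count vector;
    higher values indicate a richer, more evenly used repertoire."""
    p = np.asarray(syllable_counts, dtype=float)
    p = p[p > 0] / p.sum()                 # normalize, drop unused types
    return float(-(p * np.log2(p)).sum())

# Hypothetical counts over 4 syllable types for two pups.
rich_pup = [25, 25, 25, 25]     # even use of all types
limited_pup = [85, 5, 5, 5]     # dominated by a single type

print(repertoire_entropy(rich_pup))     # 2.0 (maximal for 4 types)
print(repertoire_entropy(limited_pup))  # lower entropy
```

Comparing such per-pup diversity scores between offspring of young and aged fathers would be one simple way to express "limited repertoire" numerically.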
Affiliation(s)
- Lingling Mai: Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
- Hitoshi Inada: Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan; Laboratory of Health and Sports Sciences, Division of Biomedical Engineering for Health and Welfare, Tohoku University Graduate School of Biomedical Engineering, Sendai 980-8575, Japan
- Ryuichi Kimura: Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan; Department of Drug Discovery Medicine, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Kouta Kanno: Faculty of Law, Economics and Humanities, Kagoshima University, Kagoshima 890-0065, Japan
- Takeru Matsuda: Statistical Mathematics Unit, RIKEN Center for Brain Science, Wako 351-0198, Japan
- Ryosuke O Tachibana: Department of Life Science, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo 153-8902, Japan
- Valter Tucci: Genetics and Epigenetics of Behavior (GEB) Laboratory, Istituto Italiano di Tecnologia, Genova 16163, Italy
- Fumiyasu Komaki: Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan; Mathematical Informatics Collaboration Unit, RIKEN Center for Brain Science, Wako 351-0198, Japan
- Noboru Hiroi: Department of Pharmacology, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA; Department of Cellular and Integrative Physiology, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA; Department of Cell Systems and Anatomy, University of Texas Health Science Center at San Antonio, San Antonio 78229, USA
- Noriko Osumi: Department of Developmental Neuroscience, Tohoku University Graduate School of Medicine, Sendai 980-8575, Japan
24
Karigo T. Gaining insights into the internal states of the rodent brain through vocal communications. Neurosci Res 2022; 184:1-8. PMID: 35908736; DOI: 10.1016/j.neures.2022.07.008.
Abstract
Animals display various behaviors during social interactions. Social behaviors have been proposed to be driven by the internal states of the animals, reflecting their emotional or motivational states. However, the internal states that drive social behaviors are complex and difficult to interpret. Many animals, including mice, use vocalizations for communication in various social contexts. This review provides an overview of the current understanding of mouse vocal communication, its underlying neural circuitry, and the potential to use vocal communication as a readout of the animal's internal states during social interactions.
Affiliation(s)
- Tomomi Karigo: Division of Biology and Biological Engineering 140-18, TianQiao and Chrissy Chen Institute for Neuroscience, California Institute of Technology, Pasadena, CA 91125, USA; present address: Kennedy Krieger Institute, Baltimore, MD 21205, USA; The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
25
Pessoa D, Petrella L, Martins P, Castelo-Branco M, Teixeira C. Automatic segmentation and classification of mice ultrasonic vocalizations. J Acoust Soc Am 2022; 152:266. PMID: 35931540; DOI: 10.1121/10.0012350.
Abstract
This paper addresses the development of a system for classifying mouse ultrasonic vocalizations (USVs) present in audio recordings. The automatic labeling process for USVs is usually divided into two main steps: USV segmentation followed by the matching classification. Three main contributions can be highlighted: (i) a new segmentation algorithm, (ii) a new set of features, and (iii) the discrimination of a higher number of classes than in similar studies. The developed segmentation algorithm is based on spectral entropy analysis. This novel segmentation approach can detect USVs with 94% recall and 74% precision. When compared to other methods/software, our segmentation algorithm achieves a higher recall. Regarding the classification phase, besides the traditional features from the time, frequency, and time-frequency domains, a new set of contour-based features was extracted and used as input to shallow machine-learning classification models. The contour-based features were obtained from the time-frequency ridge representation of USVs. The classification methods can differentiate among ten different syllable types with 81.1% accuracy and an 80.5% weighted F1-score. The algorithms were developed and evaluated on a large dataset, acquired under diverse social-interaction conditions between the animals, to stimulate a varied vocal repertoire.
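The spectral-entropy idea behind the segmentation step can be sketched in a few lines: a tonal sound such as a USV concentrates its energy in a few frequency bins, so frames with low normalized spectral entropy are likely to contain a vocalization. The sketch below illustrates the principle only; the function name, threshold, and frame parameters are assumptions, not the values used by Pessoa et al.

```python
import numpy as np

def spectral_entropy_segments(signal, fs, frame_len=512, hop=256,
                              entropy_thresh=0.5, min_frames=3):
    """Flag frames whose normalized spectral entropy is low (energy
    concentrated in few bins, as in a tonal USV) and merge consecutive
    flagged frames into (start_s, end_s) segments. Illustrative values."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    flags = np.zeros(n_frames, dtype=bool)
    for i in range(n_frames):
        frame = signal[i * hop:i * hop + frame_len] * window
        psd = np.abs(np.fft.rfft(frame)) ** 2
        p = psd / (psd.sum() + 1e-12)             # normalize to a distribution
        h = -np.sum(p * np.log2(p + 1e-12))       # Shannon entropy (bits)
        h /= np.log2(len(p))                      # scale to [0, 1]
        flags[i] = h < entropy_thresh
    segments, start = [], None
    for i, f in enumerate(np.append(flags, False)):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_frames:           # drop spurious short runs
                segments.append((start * hop / fs, (i * hop + frame_len) / fs))
            start = None
    return segments
```

On a synthetic recording (broadband noise with a tone burst added), the runs of low-entropy frames recover the burst's onset and offset to within a frame or two, since noise-only frames have a nearly flat spectrum and hence entropy close to 1.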
Affiliation(s)
- Diogo Pessoa: University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Lorena Petrella: University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Pedro Martins: University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- Miguel Castelo-Branco: University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
- César Teixeira: University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, 3030-290 Coimbra, Portugal
26
Gachomba MJM, Esteve-Agraz J, Caref K, Maroto AS, Bortolozzo-Gleich MH, Laplagne DA, Márquez C. Multimodal cues displayed by submissive rats promote prosocial choices by dominants. Curr Biol 2022; 32:3288-3301.e8. PMID: 35803272; DOI: 10.1016/j.cub.2022.06.026.
Abstract
Animals often display prosocial behaviors, performing actions that benefit others. Although prosociality is essential for social bonding and cooperation, we still know little about how animals integrate behavioral cues from those in need to make decisions that increase their well-being. To address this question, we used a two-choice task where rats can provide rewards to a conspecific in the absence of self-benefit and investigated which conditions promote prosociality by manipulating the social context of the interacting animals. Although sex or degree of familiarity did not affect prosocial choices in rats, social hierarchy proved to be a potent modulator, with dominant decision-makers showing faster emergence and higher levels of prosocial choices toward their submissive cage mates. Leveraging quantitative analysis of multimodal social dynamics prior to choice, we identified that pairs with dominant decision-makers exhibited more proximal interactions. Interestingly, these closer interactions were driven by submissive animals that modulated their position and movement following their dominants and whose 50-kHz vocalization rate correlated with dominants' prosociality. Moreover, Granger causality revealed stronger bidirectional influences in pairs with dominant focals and submissive recipients, indicating increased behavioral coordination. Finally, multivariate analysis highlighted body language as the main information dominants use on a trial-by-trial basis to learn that their actions have effects on others. Our results provide a refined understanding of the behavioral dynamics that rats use for action selection upon perceiving socially relevant cues and for navigating social decision-making.
Affiliation(s)
- Michael Joe Munyua Gachomba: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
- Joan Esteve-Agraz: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
- Kevin Caref: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
- Aroa Sanz Maroto: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
- Maria Helena Bortolozzo-Gleich: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
- Diego Andrés Laplagne: Laboratory of Behavioural Neurophysiology, Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Cristina Márquez: Neural Circuits of Social Behaviour Laboratory, Instituto de Neurociencias, Universidad Miguel Hernández-Consejo Superior de Investigaciones Científicas (UMH-CSIC), Sant Joan d'Alacant, Alicante, Spain
27
Abbasi R, Balazs P, Marconi MA, Nicolakis D, Zala SM, Penn DJ. Capturing the songs of mice with an improved detection and classification method for ultrasonic vocalizations (BootSnap). PLoS Comput Biol 2022; 18:e1010049. PMID: 35551265; PMCID: PMC9098080; DOI: 10.1371/journal.pcbi.1010049.
Abstract
House mice communicate through ultrasonic vocalizations (USVs), which are above the range of human hearing (>20 kHz), and several automated methods have been developed for USV detection and classification. Here we evaluate their advantages and disadvantages in a full, systematic comparison, while also presenting a new approach. This study aims to 1) determine the most efficient USV detection tool among the existing methods, and 2) develop a classification model that is more generalizable than existing methods. In both cases, we aim to minimize the user intervention required for processing new data. We compared the performance of four detection methods in an out-of-the-box approach: the pretrained DeepSqueak detector, MUPET, USVSEG, and the Automatic Mouse Ultrasound Detector (A-MUD). We also compared these methods to human visual or 'manual' classification (ground truth) after assessing its reliability. A-MUD and USVSEG outperformed the other methods in terms of true positive rates using default and adjusted settings, respectively, and A-MUD outperformed USVSEG when false detection rates were also considered. For automating the classification of USVs, we developed BootSnap for supervised classification, which combines bootstrapping on gammatone spectrograms and convolutional neural networks with snapshot ensemble learning. It successfully classified calls into 12 types, including a new class of false positives that is useful for detection refinement. BootSnap outperformed the pretrained and retrained state-of-the-art tool, and thus it is more generalizable. BootSnap is freely available for scientific use.

House mice and many other species use ultrasonic vocalizations to communicate in various contexts, including social and sexual interactions. These vocalizations are increasingly investigated in research on animal communication and as a phenotype for studying the genetic basis of autism and speech disorders. Because manual methods for analyzing vocalizations are extremely time consuming, automatic tools for detection and classification are needed. We evaluated the performance of the available tools for analyzing ultrasonic vocalizations, and we compared detection tools for the first time to manual methods ("ground truth") using recordings from wild-derived and laboratory mice. For the first time, the class-wise inter-observer reliability of the manual labels used for ground truth is analyzed and reported. Moreover, we developed a new classification method based on ensemble deep learning that provides more generalizability than the current state-of-the-art tool (both pretrained and retrained). Our new classification method is free for scientific use.
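The snapshot-ensemble component mentioned above follows a general recipe (Huang et al.'s snapshot ensembling): train a single network with a cyclic, cosine-annealed learning rate and save a model "snapshot" at each cycle minimum, then average the snapshots' predictions. A minimal sketch of that schedule follows; the function name and all parameter values are illustrative, not taken from BootSnap.

```python
import math

def snapshot_lr(iteration, total_iters, n_cycles, lr_max):
    """Cyclic cosine-annealed learning rate for snapshot ensembling:
    the rate restarts at lr_max at the top of each cycle and anneals
    toward zero, where a model snapshot is saved for the ensemble."""
    iters_per_cycle = math.ceil(total_iters / n_cycles)
    pos = (iteration % iters_per_cycle) / iters_per_cycle  # position in cycle, [0, 1)
    return lr_max / 2 * (math.cos(math.pi * pos) + 1)
```

With, say, 1000 iterations and 5 cycles, the rate restarts at `lr_max` every 200 iterations and is nearly zero just before each restart, which is the point where a snapshot would be stored.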
Affiliation(s)
- Reyhaneh Abbasi: Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria; Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria; Vienna Doctoral School of Cognition, Behaviour and Neuroscience, University of Vienna, Vienna, Austria
- Peter Balazs: Acoustic Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Maria Adelaide Marconi: Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Doris Nicolakis: Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Sarah M. Zala: Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
- Dustin J. Penn: Konrad Lorenz Institute of Ethology, Department of Interdisciplinary Life Sciences, University of Veterinary Medicine, Vienna, Austria
28
Stowell D. Computational bioacoustics with deep learning: a review and roadmap. PeerJ 2022; 10:e13152. PMID: 35341043; PMCID: PMC8944344; DOI: 10.7717/peerj.13152.
Abstract
Animal vocalisations and natural soundscapes are fascinating objects of study, and contain valuable evidence about animal behaviours, populations and ecosystems. They are studied in bioacoustics and ecoacoustics, with signal processing and analysis an important component. Computational bioacoustics has accelerated in recent decades due to the growth of affordable digital sound recording devices, and to huge progress in informatics such as big data, signal processing and machine learning. Methods are inherited from the wider field of deep learning, including speech and image processing. However, the tasks, demands and data characteristics are often different from those addressed in speech or music analysis. There remain unsolved problems, and tasks for which evidence is surely present in many acoustic signals, but not yet realised. In this paper I perform a review of the state of the art in deep learning for computational bioacoustics, aiming to clarify key concepts and identify and analyse knowledge gaps. Based on this, I offer a subjective but principled roadmap for computational bioacoustics with deep learning: topics that the community should aim to address, in order to make the most of future developments in AI and informatics, and to use audio data in answering zoological and ecological questions.
Affiliation(s)
- Dan Stowell: Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, The Netherlands; Naturalis Biodiversity Center, Leiden, The Netherlands
29
Goussha Y, Bar K, Netser S, Cohen L, Hel-Or Y, Wagner S. HybridMouse: A Hybrid Convolutional-Recurrent Neural Network-Based Model for Identification of Mouse Ultrasonic Vocalizations. Front Behav Neurosci 2022; 15:810590. PMID: 35145383; PMCID: PMC8823244; DOI: 10.3389/fnbeh.2021.810590.
Abstract
Mice use ultrasonic vocalizations (USVs) to convey a variety of socially relevant information. These vocalizations are affected by the sex, age, strain, and emotional state of the emitter and can thus be used to characterize it. Current tools used to detect and analyze murine USVs rely on user input and image-processing algorithms to identify USVs, and therefore require ideal recording environments. More recent tools that utilize convolutional neural network (CNN) models to identify vocalization segments perform well above these, but do not exploit the sequential structure of audio vocalizations. Human voice recognition models, on the other hand, were made explicitly for audio processing; they embed the advantages of CNN models in recurrent models that allow them to capture the sequential nature of audio. Here we describe the HybridMouse software: an audio analysis tool that combines convolutional (CNN) and recurrent (RNN) neural networks for automatically identifying, labeling, and extracting recorded USVs. Following training on manually labeled audio files recorded under various experimental conditions, HybridMouse outperformed the most commonly used benchmark model utilizing deep-learning tools in accuracy and precision. Moreover, it does not require user input and produces reliable detection and analysis of USVs recorded under harsh experimental conditions. We suggest that HybridMouse will enhance the analysis of murine USVs and facilitate their use in scientific research.
Affiliation(s)
- Yizhaq Goussha: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Kfir Bar: School of Computer Science, The Interdisciplinary Center, Herzliya, Israel
- Shai Netser: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Lior Cohen: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Yacov Hel-Or: School of Computer Science, The Interdisciplinary Center, Herzliya, Israel
- Shlomo Wagner: Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
30
Cohen Y, Nicholson DA, Sanchioni A, Mallaber EK, Skidanova V, Gardner TJ. Automated annotation of birdsong with a neural network that segments spectrograms. eLife 2022; 11:e63853. PMID: 35050849; PMCID: PMC8860439; DOI: 10.7554/eLife.63853.
Abstract
Songbirds provide a powerful model system for studying sensory-motor learning. However, many analyses of birdsong require time-consuming, manual annotation of its elements, called syllables. Automated methods for annotation have been proposed, but these methods assume that audio can be cleanly segmented into syllables, or they require carefully tuning multiple statistical models. Here we present TweetyNet: a single neural network model that learns how to segment spectrograms of birdsong into annotated syllables. We show that TweetyNet mitigates limitations of methods that rely on segmented audio. We also show that TweetyNet performs well across multiple individuals from two species of songbirds, Bengalese finches and canaries. Lastly, we demonstrate that using TweetyNet we can accurately annotate very large datasets containing multiple days of song, and that these predicted annotations replicate key findings from behavioral studies. In addition, we provide open-source software to assist other researchers, and a large dataset of annotated canary song that can serve as a benchmark. We conclude that TweetyNet makes it possible to address a wide range of new questions about birdsong.
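The frame-labeling approach described above implies a simple post-processing step: once a network has assigned a label to every spectrogram frame, consecutive frames with the same label are collapsed into annotated segments. A minimal sketch of that step follows; the function name and arguments are assumptions for illustration, not TweetyNet's actual API.

```python
import numpy as np

def frames_to_annotations(frame_labels, hop_s, background=0):
    """Collapse a per-frame label sequence (one label per spectrogram
    frame) into (onset_s, offset_s, label) segments, skipping frames
    labeled as background."""
    annotations = []
    start = None
    prev = background
    # append a trailing background frame so a final segment is closed
    for i, lab in enumerate(np.append(frame_labels, background)):
        if lab != prev:
            if prev != background:               # close the running segment
                annotations.append((start * hop_s, i * hop_s, int(prev)))
            start = i                             # possibly open a new one
            prev = lab
    return annotations
```

For example, with a 10 ms hop, the frame labels `[0, 0, 1, 1, 1, 0, 2, 2, 0]` collapse to a syllable of type 1 spanning 20-50 ms and a syllable of type 2 spanning 60-80 ms.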
Affiliation(s)
- Yarden Cohen: Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel
- Alexa Sanchioni: Department of Biology, Boston University, Boston, United States
- Timothy J Gardner: Phil and Penny Knight Campus for Accelerating Scientific Impact, University of Oregon, Eugene, United States
31
Li SW, Williams ZM, Báez-Mendoza R. Investigating the Neurobiology of Abnormal Social Behaviors. Front Neural Circuits 2021; 15:769314. PMID: 34916912; PMCID: PMC8670406; DOI: 10.3389/fncir.2021.769314.
Affiliation(s)
- S William Li: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States; Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, United States
- Ziv M Williams: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Boston, MA, United States; Program in Neuroscience, Harvard Medical School, Boston, MA, United States
- Raymundo Báez-Mendoza: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
32
Bosque Ortiz GM, Santana GM, Dietrich MO. Deficiency of the paternally inherited gene Magel2 alters the development of separation-induced vocalization and maternal behavior in mice. Genes Brain Behav 2021; 21:e12776. PMID: 34812568; PMCID: PMC9744533; DOI: 10.1111/gbb.12776.
Abstract
The behavior of offspring results from the combined expression of maternal and paternal genes. Genomic imprinting silences some genes in a parent-of-origin specific manner, a process that, among all animals, occurs only in mammals. How genomic imprinting affects the behavior of mammalian offspring, however, remains poorly understood. Here, we studied how the loss of the paternally inherited gene Magel2 in mouse pups affects the emission of separation-induced ultrasonic vocalizations (USV). Using quantitative analysis of more than 1000 USVs, we characterized the rate of vocalizations as well as their spectral features from postnatal days 6-12 (P6-P12), a critical phase of mouse development that covers the peak of vocal behavior in pups. Our analyses show that Magel2-deficient offspring emit separation-induced vocalizations at lower rates and with altered spectral features mainly at P8. We also show that dams display altered behavior towards their own Magel2-deficient offspring at this age. In a test to compare the retrieval of two pups, dams retrieve wild-type control pups first and faster than Magel2-deficient offspring. These results suggest that the loss of Magel2 impairs the expression of separation-induced vocalization in pups as well as maternal behavior at a specific age of postnatal development, both of which support the pups' growth and development.
Affiliation(s)
- Gabriela M. Bosque Ortiz: Laboratory of Physiology of Behavior, Department of Comparative Medicine, Yale School of Medicine, New Haven, Connecticut, USA; Interdepartmental Neuroscience Program, Biological and Biomedical Sciences Program, Graduate School in Arts and Sciences, Yale University, New Haven, Connecticut, USA
- Gustavo M. Santana: Laboratory of Physiology of Behavior, Department of Comparative Medicine, Yale School of Medicine, New Haven, Connecticut, USA; Interdepartmental Neuroscience Program, Biological and Biomedical Sciences Program, Graduate School in Arts and Sciences, Yale University, New Haven, Connecticut, USA; Graduate Program in Biological Sciences-Biochemistry, Federal University of Rio Grande do Sul, Porto Alegre, Brazil
- Marcelo O. Dietrich: Laboratory of Physiology of Behavior, Department of Comparative Medicine, Yale School of Medicine, New Haven, Connecticut, USA; Interdepartmental Neuroscience Program, Biological and Biomedical Sciences Program, Graduate School in Arts and Sciences, Yale University, New Haven, Connecticut, USA; Yale Center for Molecular and Systems Metabolism, Yale School of Medicine, New Haven, Connecticut, USA; Department of Neuroscience, Yale School of Medicine, New Haven, Connecticut, USA
33
Hepbasli D, Gredy S, Ullrich M, Reigl A, Abeßer M, Raabe T, Schuh K. Genotype- and Age-Dependent Differences in Ultrasound Vocalizations of SPRED2 Mutant Mice Revealed by Machine Deep Learning. Brain Sci 2021; 11:1365. PMID: 34679429; PMCID: PMC8533915; DOI: 10.3390/brainsci11101365.
Abstract
Vocalization is an important part of social communication, not only for humans but also for mice. Here, we show in a mouse model that functional deficiency of Sprouty-related EVH1 domain-containing 2 (SPRED2), a protein ubiquitously expressed in the brain, causes differences in social ultrasound vocalizations (USVs), using an uncomplicated and reliable experimental setting of a short meeting of two individuals. SPRED2 mutant mice show an OCD-like behavior, accompanied by an increased release of stress hormones from the hypothalamic-pituitary-adrenal axis, both factors probably influencing USV usage. To determine genotype-related differences in USV usage, we analyzed call rate, subtype profile, and acoustic parameters (i.e., duration, bandwidth, and mean peak frequency) in young and old SPRED2-KO mice. We recorded USVs of interacting male and female mice, and analyzed the calls with the deep-learning DeepSqueak software, which was trained to recognize and categorize the emitted USVs. Our findings provide the first classification of SPRED2-KO vs. wild-type mouse USVs using neural networks and reveal significant differences in their development and use of calls. Our results show, first, that simple experimental settings in combination with deep learning are successful at identifying genotype-dependent USV usage and, second, that SPRED2 deficiency negatively affects the vocalization usage and social communication of mice.
Affiliation(s)
- Denis Hepbasli: Institute of Physiology I, University Wuerzburg, Roentgenring 9, 97070 Wuerzburg, Germany
- Sina Gredy: Institute of Physiology I, University Wuerzburg, Roentgenring 9, 97070 Wuerzburg, Germany
- Melanie Ullrich: Center for Rare Diseases, University Clinic Wuerzburg, Josef-Schneider-Strasse 2, 97080 Wuerzburg, Germany; Center for Medical Informatics, University Clinic Wuerzburg, Schweinfurter Strasse 4, 97080 Wuerzburg, Germany
- Amelie Reigl: Institute of Physiology I, University Wuerzburg, Roentgenring 9, 97070 Wuerzburg, Germany
- Marco Abeßer: Institute of Physiology I, University Wuerzburg, Roentgenring 9, 97070 Wuerzburg, Germany
- Thomas Raabe: Institute for Medical Radiation and Cell Research, Biozentrum, Campus Hubland, University Wuerzburg, 97074 Wuerzburg, Germany
- Kai Schuh: Institute of Physiology I, University Wuerzburg, Roentgenring 9, 97070 Wuerzburg, Germany