1. Pilipenko T, Premoli M, Gnutti A, Bonini SA, Leonardi R, Memo M, Migliorati P. Exploring ultrasonic communication in mice treated with Cannabis sativa oil: Audio data processing and correlation study with different behaviours. Eur J Neurosci 2024; 60:4244-4253. PMID: 38816916. DOI: 10.1111/ejn.16433.
Abstract
Studying ultrasonic vocalizations (USVs) plays a crucial role in understanding animal communication, particularly in ethology and neuropharmacology. Because communication is associated with social behaviour, the study of USVs is a valid assay for behavioural readout and monitoring in this context. This paper investigated ultrasonic communication in mice treated with Cannabis sativa oil (CS mice), which has been shown to have a prosocial effect on mouse behaviour, versus control mice (vehicle-treated, VH mice). To conduct this study, we created a dataset by recording audio-video files and annotating the time that test mice spent engaging in social activities, along with categorizing the types of emitted USVs. The analysis encompassed the frequency of individual sounds as well as of more complex sequences of consecutive syllables (patterns). The primary goal was to examine the extent and nature of diversity in the ultrasonic communication patterns emitted by these two groups of mice. We observed statistically significant differences between the two groups for each pattern length considered. Additionally, the study considered specific behaviours, aiming to ascertain whether dissimilarities in ultrasonic communication between CS and VH mice are more pronounced or subtle within distinct behavioural contexts. Our findings suggest that while there is variation in USV communication between the two groups, the degree of this diversity may depend on the specific behaviour being observed.
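The pattern analysis this abstract describes, counting how often individual syllables and longer runs of consecutive syllables (patterns) occur, amounts to n-gram counting over a sequence of syllable labels. A minimal sketch; the syllable names and the example sequence below are hypothetical, not data from the paper:

```python
from collections import Counter

def count_patterns(syllables, max_len=3):
    """Count occurrences of every consecutive syllable pattern up to length max_len."""
    counts = {n: Counter() for n in range(1, max_len + 1)}
    for n in range(1, max_len + 1):
        for i in range(len(syllables) - n + 1):
            counts[n][tuple(syllables[i:i + n])] += 1
    return counts

# Hypothetical USV syllable sequence from one recording session
seq = ["chevron", "upward", "upward", "complex", "chevron", "upward", "upward"]
patterns = count_patterns(seq, max_len=2)
print(patterns[1][("upward",)])            # 4
print(patterns[2][("chevron", "upward")])  # 2
```

Per-group pattern frequencies obtained this way can then be compared statistically between treatment groups, which is the kind of comparison the abstract reports.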
Affiliation(s)
- Tatiana Pilipenko
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Marika Premoli
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Alessandro Gnutti
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Sara Anna Bonini
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Riccardo Leonardi
- Department of Information Engineering, University of Brescia, Brescia, Italy
- Maurizio Memo
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
2. Perrodin C, Verzat C, Bendor D. Courtship behaviour reveals temporal regularity is a critical social cue in mouse communication. eLife 2023; 12:RP86464. PMID: 38149925. PMCID: PMC10752583. DOI: 10.7554/eLife.86464.
Abstract
While animals navigating the real world face a barrage of sensory input, their brains evolved to perceptually compress multidimensional information by selectively extracting the features relevant for survival. Notably, communication signals supporting social interactions in several mammalian species consist of acoustically complex sequences of vocalisations. However, little is known about what information listeners extract from such time-varying sensory streams. Here, we utilise female mice's natural behavioural response to male courtship songs to identify the relevant acoustic dimensions used in their social decisions. We found that females were highly sensitive to disruptions of song temporal regularity and preferentially approached playbacks of intact over rhythmically irregular versions of male songs. In contrast, female behaviour was invariant to manipulations affecting the songs' sequential organisation or the spectro-temporal structure of individual syllables. The results reveal temporal regularity as a key acoustic cue extracted by mammalian listeners from complex vocal sequences during goal-directed social behaviour.
Affiliation(s)
- Catherine Perrodin
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
- Colombine Verzat
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
- Idiap Research Institute, Martigny, Switzerland
- Daniel Bendor
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, United Kingdom
3. Gan-Or B, London M. Cortical circuits modulate mouse social vocalizations. Sci Adv 2023; 9:eade6992. PMID: 37774030. PMCID: PMC10541007. DOI: 10.1126/sciadv.ade6992.
Abstract
Vocalizations provide a means of communication with high fidelity and information rate for many species. Diencephalon and brainstem neural circuits have been shown to control mouse vocal production; however, the role of cortical circuits in this process is debatable. Using electrical and optogenetic stimulation, we identified a cortical region in the anterior cingulate cortex in which stimulation elicits ultrasonic vocalizations. Moreover, fiber photometry showed an increase in Ca2+ dynamics preceding vocal initiation, whereas optogenetic suppression in this cortical area caused mice to emit fewer vocalizations. Last, electrophysiological recordings indicated a differential increase in neural activity in response to female social exposure dependent on vocal output. Together, these results indicate that the cortex is a key node in the neuronal circuits controlling vocal behavior in mice.
Affiliation(s)
- Benjamin Gan-Or
- Edmond and Lily Safra Center for Brain Sciences and Alexander Silberman Institute of Life Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
4. Sterling ML, Teunisse R, Englitz B. Rodent ultrasonic vocal interaction resolved with millimeter precision using hybrid beamforming. eLife 2023; 12:e86126. PMID: 37493217. PMCID: PMC10522333. DOI: 10.7554/eLife.86126.
Abstract
Ultrasonic vocalizations (USVs) fulfill an important role in communication and navigation in many species. Because of their social and affective significance, rodent USVs are increasingly used as a behavioral measure in neurodevelopmental and neurolinguistic research. Reliably attributing USVs to their emitter during close interactions has emerged as a difficult, key challenge. If addressed, all subsequent analyses gain substantial confidence. We present a hybrid ultrasonic tracking system, Hybrid Vocalization Localizer (HyVL), that synergistically integrates a high-resolution acoustic camera with high-quality ultrasonic microphones. HyVL is the first to achieve millimeter precision (~3.4-4.8 mm, 91% assigned) in localizing USVs, ~3× better than other systems, approaching the physical limits (mouse snout ~10 mm). We analyze mouse courtship interactions and demonstrate that males and females vocalize in starkly different relative spatial positions, and that the fraction of female vocalizations has likely been overestimated previously due to imprecise localization. Further, we find that when two male mice interact with one female, one of the males takes a dominant role in the interaction both in terms of the vocalization rate and the location relative to the female. HyVL substantially improves the precision with which social communication between rodents can be studied. It is also affordable, open-source, easy to set up, can be integrated with existing setups, and reduces the required number of experiments and animals.
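Microphone-array localization of the kind described here ultimately rests on time-difference-of-arrival (TDOA) estimates between microphone pairs. The following is an illustrative toy sketch of TDOA estimation by brute-force cross-correlation on synthetic signals; it is an assumption-laden simplification, not HyVL's actual beamforming code:

```python
import math

def estimate_delay(a, b, max_lag):
    """Return the lag (in samples) by which signal b trails signal a,
    found by maximizing the cross-correlation over candidate lags."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(a[i - lag] * b[i]
                    for i in range(max(0, lag), min(len(b), len(a) + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Synthetic 70 kHz call sampled at 250 kHz with a decaying envelope;
# the same waveform reaches microphone B 5 samples later.
FS, F0 = 250_000, 70_000
mic_a = [math.sin(2 * math.pi * F0 * t / FS) * math.exp(-t / 40) for t in range(200)]
mic_b = [0.0] * 5 + mic_a[:-5]
print(estimate_delay(mic_a, mic_b, max_lag=20))  # 5
```

At a 250 kHz sample rate and a speed of sound of roughly 343 m/s, one sample of delay corresponds to about 1.4 mm of path-length difference, which is why millimeter-scale localization demands high sample rates (and, in practice, sub-sample interpolation and multi-microphone fusion).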
Affiliation(s)
- Max L Sterling
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Visual Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Department of Human Genetics, Radboudumc, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Ruben Teunisse
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bernhard Englitz
- Computational Neuroscience Lab, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
5. Differences in temporal processing speeds between the right and left auditory cortex reflect the strength of recurrent synaptic connectivity. PLoS Biol 2022; 20:e3001803. PMID: 36269764. PMCID: PMC9629599. DOI: 10.1371/journal.pbio.3001803.
Abstract
Brain asymmetry in the sensitivity to spectrotemporal modulation is an established functional feature that underlies the perception of speech and music. The left auditory cortex (ACx) is believed to specialize in processing the fast temporal components of speech sounds, and the right ACx the slower components. However, the circuit features and neural computations behind these lateralized spectrotemporal processes are poorly understood. To answer these mechanistic questions, we used mice, an animal model that captures some relevant features of human communication systems. In this study, we screened for circuit features that could subserve temporal integration differences between the left and right ACx. We mapped excitatory input to principal neurons in all cortical layers and found significantly stronger recurrent connections in the superficial layers of the right ACx compared with the left. We hypothesized that the underlying recurrent neural dynamics would exhibit differential characteristic timescales corresponding to their hemispheric specialization. To investigate, we recorded spike trains from awake mice and estimated network time constants using a statistical method that combines evidence from multiple neurons with weak signal-to-noise ratios. We found longer temporal integration windows in the superficial layers of the right ACx compared with the left, as predicted by the stronger recurrent excitation. Our study provides substantial evidence linking stronger recurrent synaptic connections to longer network timescales. These findings support speech-processing theories that posit asymmetry in temporal integration as a crucial feature of lateralization in auditory processing.
6. Stoumpou V, Vargas CDM, Schade PF, Boyd JL, Giannakopoulos T, Jarvis ED. Analysis of Mouse Vocal Communication (AMVOC): a deep, unsupervised method for rapid detection, analysis and classification of ultrasonic vocalisations. Bioacoustics 2022. DOI: 10.1080/09524622.2022.2099973.
Affiliation(s)
- Vasiliki Stoumpou
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- César D. M. Vargas
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Peter F. Schade
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Laboratory of Neural Systems, The Rockefeller University, New York, NY, USA
- J. Lomax Boyd
- Berman Institute of Bioethics, Johns Hopkins University, Baltimore, MD, USA
- Theodoros Giannakopoulos
- Computational Intelligence Lab, Institute of Informatics and Telecommunications, National Center of Scientific Research 'Demokritos', Athens, Greece
- Erich D. Jarvis
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
7. Netser S, Nahardiya G, Weiss-Dicker G, Dadush R, Goussha Y, John SR, Taub M, Werber Y, Sapir N, Yovel Y, Harony-Nicolas H, Buxbaum JD, Cohen L, Crammer K, Wagner S. TrackUSF, a novel tool for automated ultrasonic vocalization analysis, reveals modified calls in a rat model of autism. BMC Biol 2022; 20:159. PMID: 35820848. PMCID: PMC9277954. DOI: 10.1186/s12915-022-01299-y.
Abstract
Background: Various mammalian species emit ultrasonic vocalizations (USVs), which reflect their emotional state and mediate social interactions. USVs are usually analyzed by manual or semi-automated methodologies that categorize discrete USVs according to their structure in the frequency-time domain. This laborious analysis hinders the effective use of USVs as a readout for high-throughput analysis of behavioral changes in animals. Results: Here we present a novel automated open-source tool, termed TrackUSF, that takes a different approach to USV analysis. To validate TrackUSF, we analyzed calls from different animal species, namely mice, rats, and bats, recorded in various settings, and compared the results with a manual analysis by a trained observer. We found that TrackUSF detected the majority of USVs, with less than 1% false-positive detections. We then employed TrackUSF to analyze social vocalizations in Shank3-deficient rats, a rat model of autism, and revealed that these vocalizations exhibit a spectrum of deviations from appetitive calls towards aversive calls. Conclusions: TrackUSF is a simple and easy-to-use system that may be used for high-throughput comparison of ultrasonic vocalizations between groups of animals of any kind in any setting, with no prior assumptions.
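TrackUSF's internals are not reproduced here, but the first stage that automated USV tools generally share, flagging time segments whose ultrasonic-band energy rises above a noise floor, can be sketched as follows. The per-frame energies and the threshold below are hypothetical illustrations, not the tool's actual parameters:

```python
def detect_calls(energy, threshold, min_len=3):
    """Return (start, end) frame-index pairs for runs where energy
    stays at or above threshold for at least min_len frames."""
    calls, start = [], None
    for i, e in enumerate(energy):
        if e >= threshold and start is None:
            start = i                      # run begins
        elif e < threshold and start is not None:
            if i - start >= min_len:       # keep only sufficiently long runs
                calls.append((start, i))
            start = None
    if start is not None and len(energy) - start >= min_len:
        calls.append((start, len(energy))) # run extends to the final frame
    return calls

# Hypothetical per-frame ultrasonic-band energy (arbitrary units)
frames = [0.1, 0.2, 3.0, 3.5, 2.9, 0.1, 0.1, 4.0, 4.2, 3.8, 3.9, 0.2]
print(detect_calls(frames, threshold=1.0))  # [(2, 5), (7, 11)]
```

Detected segments would then be passed to whatever feature extraction or comparison a given tool performs downstream.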
Affiliation(s)
- Shai Netser
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
- Guy Nahardiya
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
- Gili Weiss-Dicker
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
- Roei Dadush
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
- Yizhaq Goussha
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
- Shanah Rachel John
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
- Mor Taub
- School of Zoology, Faculty of Life-Sciences, Tel-Aviv University, Tel Aviv, Israel
- Yuval Werber
- Department of Evolutionary and Environmental Biology and Institute of Evolution, University of Haifa, Haifa, Israel
- Nir Sapir
- Department of Evolutionary and Environmental Biology and Institute of Evolution, University of Haifa, Haifa, Israel
- Yossi Yovel
- School of Zoology, Faculty of Life-Sciences, Tel-Aviv University, Tel Aviv, Israel
- Hala Harony-Nicolas
- The Department of Psychiatry and The Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Joseph D Buxbaum
- The Department of Psychiatry and The Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Lior Cohen
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- Koby Crammer
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
- Shlomo Wagner
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
8.
9. Lee J, Rothschild G. Encoding of acquired sound-sequence salience by auditory cortical offset responses. Cell Rep 2021; 37:109927. PMID: 34731615. DOI: 10.1016/j.celrep.2021.109927.
Abstract
Behaviorally relevant sounds are often composed of distinct acoustic units organized into specific temporal sequences. The meaning of such sound sequences can therefore be fully recognized only when they have terminated. However, the neural mechanisms underlying the perception of sound sequences remain unclear. Here, we use two-photon calcium imaging in the auditory cortex of behaving mice to test the hypothesis that neural responses to termination of sound sequences ("Off-responses") encode their acoustic history and behavioral salience. We find that auditory cortical Off-responses encode preceding sound sequences and that learning to associate a sound sequence with a reward induces enhancement of Off-responses relative to responses during the sound sequence ("On-responses"). Furthermore, learning enhances network-level discriminability of sound sequences by Off-responses. Last, learning-induced plasticity of Off-responses but not On-responses lasts to the next day. These findings identify auditory cortical Off-responses as a key neural signature of acquired sound-sequence salience.
Affiliation(s)
- Joonyeup Lee
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA; Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA.
10. de Chaumont F, Lemière N, Coqueran S, Bourgeron T, Ey E. LMT USV Toolbox, a Novel Methodological Approach to Place Mouse Ultrasonic Vocalizations in Their Behavioral Contexts: A Study in Female and Male C57BL/6J Mice and in Shank3 Mutant Females. Front Behav Neurosci 2021; 15:735920. PMID: 34720899. PMCID: PMC8548730. DOI: 10.3389/fnbeh.2021.735920.
Abstract
Ultrasonic vocalizations (USVs) are used as a phenotypic marker in mouse models of neuropsychiatric disorders. Nevertheless, current methodologies still require time-consuming manual input or sound recordings free of any background noise. We developed a method to overcome these two restraints and boost knowledge of mouse USVs. The methods are freely available, and the USV analysis runs online at https://usv.pasteur.cloud. As little is currently known about the usage and structure of ultrasonic vocalizations during long-term social interactions in unconstrained contexts, we investigated spontaneous mouse communication by coupling the analysis of USVs with automatic labeling of behaviors. We continuously recorded, over 3 days, the undisturbed interactions of same-sex pairs of sexually naive C57BL/6J males and females at 5 weeks, 3 months and 7 months of age. In same-sex interactions, we observed robust differences between males and females in the number of USVs produced, in their acoustic structure and in the contexts of emission. The context-specific acoustic variations emerged with increasing age. The emission of USVs also reflected a high level of excitement during social interactions. Finally, we highlighted the importance of studying long-term spontaneous communication by investigating female mice lacking Shank3, a synaptic protein associated with autism. Whereas previous short-term, constrained investigations could not detect abnormalities in USV emission, our analysis revealed robust differences in the usage and structure of the USVs emitted by mutant mice compared with wild-type female pairs.
Affiliation(s)
- Fabrice de Chaumont
- Human Genetics and Cognitive Functions, Institut Pasteur, UMR 3571 CNRS, Université de Paris, Paris, France
- Nathalie Lemière
- Human Genetics and Cognitive Functions, Institut Pasteur, UMR 3571 CNRS, Université de Paris, Paris, France
- Sabrina Coqueran
- Human Genetics and Cognitive Functions, Institut Pasteur, UMR 3571 CNRS, Université de Paris, Paris, France
- Thomas Bourgeron
- Human Genetics and Cognitive Functions, Institut Pasteur, UMR 3571 CNRS, Université de Paris, Paris, France
- Elodie Ey
- Human Genetics and Cognitive Functions, Institut Pasteur, UMR 3571 CNRS, Université de Paris, Paris, France
11. Scott KJ, Tashakori-Sabzevar F, Bilkey DK. Maternal immune activation alters the sequential structure of ultrasonic communications in male rats. Brain Behav Immun Health 2021; 16:100304. PMID: 34589796. PMCID: PMC8474666. DOI: 10.1016/j.bbih.2021.100304.
Abstract
Maternal immune activation (MIA) is a risk factor for schizophrenia, and many of the symptoms and neurodevelopmental changes associated with this disorder have been modelled in the rodent. While several previous studies have reported that rodent ultrasonic vocalizations (USVs) are affected by MIA, no previous study has examined whether MIA affects the way that individual USVs occur over time to produce vocalization sequences. The sequential aspect of this behaviour may be particularly important because changes in sequencing mechanisms have been proposed as a core deficit in schizophrenia. The present research generated MIA by administering poly I:C to pregnant Sprague-Dawley rat dams at gestational day 15. Pairs of adult male MIA offspring, or pairs of their saline controls, were placed into a two-chamber apparatus where they were separated from each other by a perforated plexiglass barrier. USVs were recorded for 10 min, and automated detection and call review were used to classify short call types in the nominal 50 kHz band of social-affiliative calls. Our data show that the duration of these 50-kHz USVs is longer in MIA rat pairs and that the time between calls is shorter. Furthermore, the transition probabilities between call pairs differed between the MIA animals and the control group, indicating alterations in sequential behaviour. These results provide the first evidence that USV call sequencing is altered by the MIA intervention and suggest that further investigation of these temporally extended aspects of USV production is likely to reveal useful information about the mechanisms underlying sequence generation. This is particularly important given previous research suggesting that sequencing deficits may have a significant impact on both behaviour and cognition.
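The transition probabilities between call pairs that this study compares can be estimated with a simple first-order (Markov) bigram model over the ordered call sequence. A minimal sketch; the call-type labels and the sequence below are hypothetical, not the study's data:

```python
from collections import Counter, defaultdict

def transition_probabilities(calls):
    """Estimate P(next call type | current call type) from an ordered call sequence."""
    pair_counts = defaultdict(Counter)
    for cur, nxt in zip(calls, calls[1:]):   # every adjacent call pair
        pair_counts[cur][nxt] += 1
    return {cur: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
            for cur, nexts in pair_counts.items()}

# Hypothetical sequence of 50-kHz call types from one 10-min session
seq = ["flat", "trill", "flat", "flat", "trill", "trill", "flat", "trill"]
probs = transition_probabilities(seq)
print(probs["flat"]["trill"])  # 0.75
```

Comparing such per-group transition matrices (e.g. MIA versus saline) is one straightforward way to quantify the sequencing differences the abstract reports.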
Affiliation(s)
- David K. Bilkey
- Department of Psychology, University of Otago, Dunedin, New Zealand
12. Goffinet J, Brudner S, Mooney R, Pearson J. Low-dimensional learned feature spaces quantify individual and group differences in vocal repertoires. eLife 2021; 10:e67855. PMID: 33988503. PMCID: PMC8213406. DOI: 10.7554/eLife.67855.
Abstract
Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.
Affiliation(s)
- Jack Goffinet
- Department of Computer Science, Duke University, Durham, United States
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Samuel Brudner
- Department of Neurobiology, Duke University, Durham, United States
- Richard Mooney
- Department of Neurobiology, Duke University, Durham, United States
- John Pearson
- Center for Cognitive Neurobiology, Duke University, Durham, United States
- Department of Neurobiology, Duke University, Durham, United States
- Department of Biostatistics & Bioinformatics, Duke University, Durham, United States
- Department of Electrical and Computer Engineering, Duke University, Durham, United States