1
Biderman D, Whiteway MR, Hurwitz C, Greenspan N, Lee RS, Vishnubhotla A, Warren R, Pedraja F, Noone D, Schartner MM, Huntenburg JM, Khanal A, Meijer GT, Noel JP, Pan-Vazquez A, Socha KZ, Urai AE, Cunningham JP, Sawtell NB, Paninski L. Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling and cloud-native open-source tools. Nat Methods 2024. PMID: 38918605. DOI: 10.1038/s41592-024-02319-1.
Abstract
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce 'Lightning Pose', an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We released a cloud application that allows users to label data, train networks and process new videos directly from the browser.
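The third contribution (post hoc ensembling plus Kalman smoothing) can be sketched in a few lines. This is a generic illustration of the idea, not Lightning Pose's actual implementation; the constant-velocity dynamics and the noise settings `q` and `r` are assumptions.

```python
import numpy as np

def ensemble_kalman_smooth(tracks, dt=1.0, q=1e-2, r=1e-1):
    """Average an ensemble of 1D keypoint trajectories, then run a
    constant-velocity Kalman filter with an RTS smoothing pass.
    tracks: (n_models, n_frames) array of per-network predictions."""
    y = tracks.mean(axis=0)                      # ensemble mean per frame
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                   # we observe position only
    Q = q * np.eye(2)                            # process noise
    R = np.array([[r]])                          # observation noise
    n = len(y)
    xs, Ps, x_preds, P_preds = [], [], [], []
    x, P = np.array([y[0], 0.0]), np.eye(2)
    for t in range(n):
        x_pred, P_pred = F @ x, F @ P @ F.T + Q  # predict step
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x_pred + (K @ (y[t] - H @ x_pred)).ravel()
        P = (np.eye(2) - K @ H) @ P_pred
        xs.append(x); Ps.append(P)
        x_preds.append(x_pred); P_preds.append(P_pred)
    # Rauch-Tung-Striebel backward smoothing pass
    xs_s = [xs[-1]]
    for t in range(n - 2, -1, -1):
        C = Ps[t] @ F.T @ np.linalg.inv(P_preds[t + 1])
        xs_s.insert(0, xs[t] + C @ (xs_s[0] - x_preds[t + 1]))
    return np.array([s[0] for s in xs_s])        # smoothed positions
```

Averaging across networks suppresses independent errors, and the smoother then enforces the same motion-continuity intuition used during training.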
Affiliation(s)
- Anup Khanal
- University of California, Los Angeles, Los Angeles, CA, USA
2
Håkansson J, Quinn BL, Shultz AL, Swartz SM, Corcoran AJ. Application of a novel deep learning-based 3D videography workflow to bat flight. Ann N Y Acad Sci 2024; 1536:92-106. PMID: 38652595. DOI: 10.1111/nyas.15143.
Abstract
Studying the detailed biomechanics of flying animals requires accurate three-dimensional coordinates for key anatomical landmarks. Traditionally, this relies on manually digitizing animal videos, a labor-intensive task that scales poorly with increasing framerates and numbers of cameras. Here, we present a workflow that combines deep learning-powered automatic digitization with filtering and correction of mislabeled points using quality metrics from deep learning and 3D reconstruction. We tested our workflow using a particularly challenging scenario: bat flight. First, we documented four bats flying steadily in a 2 m³ wind tunnel test section. Wing kinematic parameters resulting from manually digitizing bats with markers applied to anatomical landmarks were not significantly different from those resulting from applying our workflow to the same bats without markers for five out of six parameters. Second, we compared coordinates from manual digitization against those yielded via our workflow for bats flying freely in a 344 m³ enclosure. Average distance between coordinates from our workflow and those from manual digitization was less than a millimeter larger than the average human-to-human coordinate distance. The improved efficiency of our workflow has the potential to increase the scalability of studies on animal flight biomechanics.
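The filtering-and-correction step can be sketched generically: reject detections that fail either quality metric (network confidence, 3D-reconstruction reprojection error) and bridge short gaps by interpolation. The thresholds and function names here are illustrative assumptions, not the paper's values.

```python
import numpy as np

def quality_mask(conf, reproj_err, conf_min=0.6, err_max=3.0):
    """Keep a 2D detection only if the network confidence is high enough
    AND its 3D-reconstruction reprojection error (pixels) is small.
    conf, reproj_err: (n_frames, n_keypoints); thresholds illustrative."""
    return (conf >= conf_min) & (reproj_err <= err_max)

def bridge_short_gaps(track, ok, max_gap=5):
    """Linearly interpolate runs of rejected frames in a 1D coordinate
    track; gaps longer than max_gap stay NaN for manual correction."""
    t = track.astype(float).copy()
    t[~ok] = np.nan
    n, i = len(t), 0
    while i < n:
        if np.isnan(t[i]):
            j = i
            while j < n and np.isnan(t[j]):
                j += 1                            # find end of the NaN run
            if 0 < i and j < n and (j - i) <= max_gap:
                t[i:j] = np.interp(np.arange(i, j), [i - 1, j],
                                   [t[i - 1], t[j]])
            i = j
        else:
            i += 1
    return t
```

Short dropouts are filled automatically, while long occlusions are flagged rather than silently invented, which matches the manual-correction step the workflow retains.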
Affiliation(s)
- Jonas Håkansson
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
- Brooke L Quinn
- Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA
- Abigail L Shultz
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
- Sharon M Swartz
- Department of Ecology, Evolution, and Organismal Biology, Brown University, Providence, Rhode Island, USA
- School of Engineering, Brown University, Providence, Rhode Island, USA
- Aaron J Corcoran
- Department of Biology, University of Colorado Colorado Springs, Colorado Springs, Colorado, USA
3
Li C, Mellbin Y, Krogager J, Polikovsky S, Holmberg M, Ghorbani N, Black MJ, Kjellström H, Zuffi S, Hernlund E. The Poses for Equine Research Dataset (PFERD). Sci Data 2024; 11:497. PMID: 38750064. PMCID: PMC11096353. DOI: 10.1038/s41597-024-03312-1.
Abstract
Studies of quadruped animal motion help us to identify diseases, understand behavior and unravel the mechanics behind gaits in animals. The horse is likely the best-studied animal in this respect, but data capture is challenging and time-consuming. Computer vision techniques improve animal motion extraction, but their development relies on reference datasets, which are scarce, not open-access and often provide data from only a few anatomical landmarks. Addressing this data gap, we introduce PFERD, a video and 3D marker motion dataset from horses using a dense full-body set-up of over 100 skin-attached markers and synchronized videos from ten camera angles. Five horses of diverse conformations provide data for various motions, from basic poses (e.g., walking, trotting) to advanced motions (e.g., rearing, kicking). We further express the 3D motions with current techniques and a 3D parameterized model, the hSMAL model, establishing a baseline for 3D markerless horse motion capture. PFERD enables advanced biomechanical studies and provides a resource of ground-truth data for the methodological development of markerless motion capture.
Affiliation(s)
- Ci Li
- KTH Royal Institute of Technology, Stockholm, Sweden
- Ylva Mellbin
- Swedish University of Agricultural Sciences, Uppsala, Sweden
- Nima Ghorbani
- Sporttotal.tv, Immersive Technologies, Cologne, Germany
- Michael J Black
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Hedvig Kjellström
- KTH Royal Institute of Technology, Stockholm, Sweden
- Swedish University of Agricultural Sciences, Uppsala, Sweden
- Silvia Zuffi
- CNR Institute for Applied Mathematics and Information Technologies, Milan, Italy
- Elin Hernlund
- Swedish University of Agricultural Sciences, Uppsala, Sweden
4
Grammer J, Valles R, Bowles A, Zelikowsky M. SAUSI: a novel assay for measuring social anxiety and motivation. bioRxiv [Preprint] 2024:2024.05.13.594023. PMID: 38798428. PMCID: PMC11118329. DOI: 10.1101/2024.05.13.594023.
Abstract
Social anxiety is one of the most prevalent mental health disorders, though the underlying neurobiology is poorly understood. Progress in understanding the etiology of social anxiety has been hindered by the lack of comprehensive tools to assess it in model systems. Here, we created a new behavioral task, Selective Access to Unrestricted Social Interaction (SAUSI), which combines elements of social motivation, hesitancy, decision-making, and free interaction to enable the holistic assessment of social anxiety-like behaviors in mice. Using this novel assay, we found that isolation-induced social anxiety-like behaviors in female mice are largely driven by increased social fear and social hesitancy, as well as altered ultrasonic vocalizations. Deep learning analyses computationally identified a unique behavioral footprint underlying the state produced by social isolation, demonstrating the compatibility of modern computational approaches with SAUSI. Finally, we compared the results of SAUSI to traditional social assays, including the 3-chamber sociability assay and the resident intruder task. This revealed that behavioral changes induced by isolation were highly context-dependent, and that while fragments of social anxiety measured in SAUSI were replicable across other tasks, a holistic assessment was not obtainable from these alternative assays. Our findings debut a novel task for the behavioral toolbox, one which overcomes limitations of previous assays by allowing for both social choice and free interaction, and offers a new approach for assessing social anxiety in rodents.
Affiliation(s)
- Jordan Grammer
- Department of Neurobiology, University of Utah, United States
- Rene Valles
- Department of Neurobiology, University of Utah, United States
- Alexis Bowles
- Department of Neurobiology, University of Utah, United States
5
Kastner DB, Williams G, Holobetz C, Romano JP, Dayan P. The choice-wide behavioral association study: data-driven identification of interpretable behavioral components. bioRxiv [Preprint] 2024:2024.02.26.582115. PMID: 38464037. PMCID: PMC10925091. DOI: 10.1101/2024.02.26.582115.
Abstract
Behavior contains rich structure across many timescales, but there is a dearth of methods to identify the relevant components, especially over the longer periods required for learning and decision-making. Inspired by the goals and techniques of genome-wide association studies, we present a data-driven method, the choice-wide behavioral association study (CBAS), that systematically identifies such behavioral features. CBAS uses a powerful resampling-based method of multiple-comparisons correction to identify sequences of actions or choices that either differ significantly between groups or correlate significantly with a covariate of interest. We apply CBAS to different tasks and species (flies, rats, and humans) and find, in all instances, that it provides interpretable information about each behavioral task.
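A resampling-based multiple-comparisons correction of the kind described can be sketched with a max-statistic permutation scan over many behavioral features. This is a generic Westfall-Young-style illustration, not CBAS's actual procedure; the feature matrices and settings are assumptions.

```python
import numpy as np

def permutation_scan(counts_a, counts_b, n_perm=500, alpha=0.05, seed=0):
    """Screen many behavioral features (e.g., counts of action sequences)
    for group differences, controlling family-wise error via the
    permutation distribution of the maximum |t| statistic.
    counts_a: (n_a, n_feat); counts_b: (n_b, n_feat)."""
    rng = np.random.default_rng(seed)
    x = np.vstack([counts_a, counts_b])
    n_a = len(counts_a)

    def tstats(data):
        a, b = data[:n_a], data[n_a:]
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                     b.var(axis=0, ddof=1) / len(b))
        return np.abs(a.mean(axis=0) - b.mean(axis=0)) / np.maximum(se, 1e-12)

    observed = tstats(x)
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(x))            # shuffle group labels
        max_null[i] = tstats(x[perm]).max()       # max over features -> FWER
    threshold = np.quantile(max_null, 1 - alpha)
    return observed > threshold, observed
```

Taking the maximum over all features inside each permutation is what makes the single threshold valid simultaneously across the whole feature set.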
Affiliation(s)
- David B. Kastner
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Lead Contact
- Greer Williams
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Cristofer Holobetz
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94143, USA
- Joseph P. Romano
- Department of Statistics, Stanford University, Stanford, CA 94305, USA
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
6
Chettih SN, Mackevicius EL, Hale S, Aronov D. Barcoding of episodic memories in the hippocampus of a food-caching bird. Cell 2024; 187:1922-1935.e20. PMID: 38554707. PMCID: PMC11015962. DOI: 10.1016/j.cell.2024.02.032.
Abstract
The hippocampus is critical for episodic memory. Although hippocampal activity represents place and other behaviorally relevant variables, it is unclear how it encodes numerous memories of specific events in life. To study episodic coding, we leveraged the specialized behavior of chickadees-food-caching birds that form memories at well-defined moments in time whenever they cache food for subsequent retrieval. Our recordings during caching revealed very sparse, transient barcode-like patterns of firing across hippocampal neurons. Each "barcode" uniquely represented a caching event and transiently reactivated during the retrieval of that specific cache. Barcodes co-occurred with the conventional activity of place cells but were uncorrelated even for nearby cache locations that had similar place codes. We propose that animals recall episodic memories by reactivating hippocampal barcodes. Similarly to computer hash codes, these patterns assign unique identifiers to different events and could be a mechanism for rapid formation and storage of many non-interfering memories.
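The hash-code analogy can be made concrete: sufficiently sparse, independent population patterns are nearly orthogonal, so many events can be stored with little interference. A toy illustration with arbitrary numbers (not the recorded data):

```python
import numpy as np

def pop_corr(a, b):
    """Pearson correlation between two population firing-rate vectors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two sparse random "barcodes": with ~2% of 5,000 neurons active per
# event, the expected overlap between independent events is tiny, so
# the patterns are close to orthogonal despite their high dimension.
rng = np.random.default_rng(1)
codes = (rng.random((2, 5000)) < 0.02).astype(float)
```

Near-zero correlation between such vectors is the property that lets nearby cache sites carry similar place codes yet distinct, non-interfering event identifiers.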
Affiliation(s)
- Selmaan N Chettih
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Emily L Mackevicius
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Basis Research Institute, New York, NY 10027, USA
- Stephanie Hale
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
- Dmitriy Aronov
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA
7
Biderman D, Whiteway MR, Hurwitz C, Greenspan N, Lee RS, Vishnubhotla A, Warren R, Pedraja F, Noone D, Schartner M, Huntenburg JM, Khanal A, Meijer GT, Noel JP, Pan-Vazquez A, Socha KZ, Urai AE, Cunningham JP, Sawtell NB, Paninski L. Lightning Pose: improved animal pose estimation via semi-supervised learning, Bayesian ensembling, and cloud-native open-source tools. bioRxiv [Preprint] 2024:2023.04.28.538703. PMID: 37162966. PMCID: PMC10168383. DOI: 10.1101/2023.04.28.538703.
Abstract
Contemporary pose estimation methods enable precise measurements of behavior via supervised deep learning with hand-labeled video frames. Although effective in many cases, the supervised approach requires extensive labeling and often produces outputs that are unreliable for downstream analyses. Here, we introduce "Lightning Pose," an efficient pose estimation package with three algorithmic contributions. First, in addition to training on a few labeled video frames, we use many unlabeled videos and penalize the network whenever its predictions violate motion continuity, multiple-view geometry, and posture plausibility (semi-supervised learning). Second, we introduce a network architecture that resolves occlusions by predicting pose on any given frame using surrounding unlabeled frames. Third, we refine the pose predictions post-hoc by combining ensembling and Kalman smoothing. Together, these components render pose trajectories more accurate and scientifically usable. We release a cloud application that allows users to label data, train networks, and predict new videos directly from the browser.
Affiliation(s)
- Anup Khanal
- University of California Los Angeles, Los Angeles, USA
8
Tillmann JF, Hsu AI, Schwarz MK, Yttri EA. A-SOiD, an active-learning platform for expert-guided, data-efficient discovery of behavior. Nat Methods 2024; 21:703-711. PMID: 38383746. DOI: 10.1038/s41592-024-02200-1.
Abstract
To identify and extract naturalistic behavior, two classes of methods have become popular: supervised and unsupervised. Each approach carries its own strengths and weaknesses (for example, user bias, training cost, complexity and action discovery), which the user must weigh when deciding between them. Here, an active-learning platform, A-SOiD, blends these strengths and, in doing so, overcomes several of their inherent drawbacks. A-SOiD iteratively learns user-defined groups with a fraction of the usual training data, while attaining expansive classification through directed unsupervised classification. In socially interacting mice, A-SOiD outperformed standard methods despite requiring 85% less training data. Additionally, it isolated ethologically distinct mouse interactions via unsupervised classification. We observed similar performance and efficiency using nonhuman primate and human three-dimensional pose data. In both cases, the transparency of A-SOiD's cluster definitions revealed the defining features of the supervised classification through a game-theoretic approach. To facilitate use, A-SOiD comes as an intuitive, open-source interface for efficient segmentation of user-defined behaviors and discovered sub-actions.
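One round of the active-learning idea can be sketched as uncertainty sampling: fit a classifier on the currently labeled pose features and query the most ambiguous unlabeled frames for the expert. The nearest-centroid learner below is a deliberately simple stand-in for A-SOiD's actual classifier; all names and thresholds are illustrative.

```python
import numpy as np

def uncertainty_sampling(features, labels, labeled_idx, n_query=10):
    """Fit a nearest-centroid classifier on labeled pose features and
    return (frames to query next, current predictions for all frames).
    Frames whose two closest class centroids are nearly equidistant
    (small margin) are the most ambiguous, so they are queried first."""
    classes = np.unique(labels[labeled_idx])
    centroids = np.stack([
        features[labeled_idx][labels[labeled_idx] == c].mean(axis=0)
        for c in classes])
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    d_sorted = np.sort(d, axis=1)
    margin = d_sorted[:, 1] - d_sorted[:, 0]      # small margin = ambiguous
    unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
    query = unlabeled[np.argsort(margin[unlabeled])[:n_query]]
    return query, classes[np.argmin(d, axis=1)]
```

Each expert answer goes back into `labeled_idx`, and the loop repeats; this is how a fraction of the usual training data can cover the decision boundary.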
Affiliation(s)
- Jens F Tillmann
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Alexander I Hsu
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, PA, USA
- Martin K Schwarz
- Institute of Experimental Epileptology and Cognition Research, University of Bonn, Bonn, Germany
- Eric A Yttri
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, PA, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
9
Itahara A, Kano F. Gaze tracking of large-billed crows (Corvus macrorhynchos) in a motion capture system. J Exp Biol 2024; 227:jeb246514. PMID: 38362616. PMCID: PMC11007591. DOI: 10.1242/jeb.246514.
Abstract
Previous studies have often inferred the focus of a bird's attention from its head movements, because these provide important clues about perception and cognition. However, it remains challenging to do so accurately, as the details of how birds orient their visual fields toward visual targets remain largely unclear. We thus examined the visual field configuration and visual field use of large-billed crows (Corvus macrorhynchos Wagler 1827). We used an established ophthalmoscopic reflex technique to identify the visual field configuration, including the binocular width and optical axes, as well as the degree of eye movement. A newly established motion capture system was then used to track the head movements of freely moving crows and examine how they oriented their reconstructed visual fields toward attention-getting objects. When visual targets were moving, the crows frequently used their binocular visual fields, particularly around the projection of the beak-tip. When the visual targets stopped moving, the crows frequently used non-binocular visual fields, particularly around the regions where their optical axes were found. On such occasions, the crows slightly preferred the right eye. Overall, the visual field use of crows is clearly predictable. Thus, although untracked eye movements could introduce some uncertainty (typically within 15 deg), we demonstrated the feasibility of inferring a crow's attentional focus by 3D tracking of the head. Our system represents a promising first step toward establishing gaze tracking methods for studying corvid behavior and cognition.
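The core geometric step, projecting a tracked head pose onto a reconstructed visual field, can be sketched as an angle test against the beak axis. The binocular half-width used below is an illustrative placeholder, not the crows' measured configuration.

```python
import numpy as np

def gaze_region(head_dir, head_pos, target, binoc_half_width_deg=15.0):
    """Classify whether a target falls inside the frontal binocular field.
    head_dir: unit vector along the beak axis from motion capture;
    head_pos, target: 3D points. Width threshold is illustrative."""
    v = target - head_pos
    v = v / np.linalg.norm(v)                      # direction to target
    angle = np.degrees(np.arccos(np.clip(float(head_dir @ v), -1.0, 1.0)))
    return "binocular" if angle <= binoc_half_width_deg else "lateral"
```

Running this per frame over a head trajectory yields the kind of binocular-versus-lateral usage statistics the study reports, up to the residual uncertainty from untracked eye movements.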
Affiliation(s)
- Akihiro Itahara
- Wildlife Research Center, Kyoto University, Kyoto 6068203, Japan
- Fumihiro Kano
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz 78464, Germany
- Max Planck Institute of Animal Behavior, Radolfzell 78315, Germany
10
Cascella M, Capuozzo M, Ferrara F, Ottaiano A, Perri F, Sabbatino F, Conti V, Santoriello V, Ponsiglione AM, Romano M, Amato F, Piazza O. Two-year opioid prescription trends in Local Sanitary Agency Naples 3 South, Campania Region, Italy: descriptive analyses and AI-based translational perspectives. Transl Med UniSa 2024; 26:1-14. PMID: 38560616. PMCID: PMC10980290. DOI: 10.37825/2239-9747.1047.
Abstract
Aims: This study examines two-year opioid prescription trends in the Local Sanitary Agency Naples 3 South, Campania Region, Italy. The research aims to elucidate prescribing patterns, demographics, and dosage categories within a population representing 1.7% of the national total. Perspectives on artificial intelligence research are discussed.
Methods: From the original dataset, spanning January 2022 to October 2023, we processed multiple variables, including demographic data, medications, dosages, drug consumption, and administration routes. The dispensed quantity was calculated as defined daily doses (DDD).
Results: The analysis reveals a conservative approach to opioid therapy. In subjects under the age of 20, prescriptions accounted for 2.1% in 2022 and declined to 1.4% in 2023. The drug combination paracetamol/codeine was the most frequently prescribed, followed by tapentadol. Approximately two-thirds of consumption pertained to oral formulations. Transdermal formulations accounted for 15% (fentanyl 9.8%, buprenorphine 5.1%) in 2022 and 16.6% (fentanyl 10%, buprenorphine 6.6%) in 2023. These data were confirmed by the DDD analysis. The trend analysis demonstrated a significant reduction (p < 0.001) in the number of opioids prescribed to adults (40-69 years) from 2022 to 2023. The study of rapid-onset opioids (ROOs), drugs specifically used for breakthrough cancer pain, showed higher dosage (>267 mcg) consumption among women, whereas a lower dosage (<133 mcg) was calculated for men. Fentanyl pectin nasal spray accounted for approximately one-fifth of all ROOs.
Conclusion: Despite limitations, the study provides valuable insights into prescribing practices in an important study population. The findings underscore the need for tailored approaches to prescribing, recognizing the complexities of pain management in different contexts. This research can contribute to the ongoing discourse on opioid use, advocating for innovative strategies that optimize therapeutic outcomes while mitigating potential risks.
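DDD-based consumption is typically normalized for comparison across populations and periods. A minimal sketch of the standard utilization metric; the DDD value itself is drug- and route-specific (WHO-assigned), and the numbers in the usage example are illustrative, not this study's data.

```python
def ddd_per_1000_per_day(total_mg, ddd_mg, population, days):
    """Drug-utilization metric: defined daily doses per 1,000 inhabitants
    per day. total_mg: amount dispensed over the period; ddd_mg: the
    WHO-assigned DDD for the drug and administration route."""
    return (total_mg / ddd_mg) / (population * days) * 1000.0
```

For example, 400 mg of a drug with a 100 mg DDD dispensed to a population of 1,000 over 4 days corresponds to 1.0 DDD per 1,000 inhabitants per day.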
Affiliation(s)
- Marco Cascella
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, 84081, Salerno, Italy
- Maurizio Capuozzo
- Pharmaceutical Department, ASL Napoli 3 Sud, Ercolano, 80056, Naples, Italy
- Francesco Ferrara
- Pharmaceutical Department, ASL Napoli 3 Sud, Ercolano, 80056, Naples, Italy
- Alessandro Ottaiano
- Istituto Nazionale Tumori di Napoli, IRCCS “G. Pascale”, via M. Semmola, 80131, Naples, Italy
- Francesco Perri
- Istituto Nazionale Tumori di Napoli, IRCCS “G. Pascale”, via M. Semmola, 80131, Naples, Italy
- Francesco Sabbatino
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, 84081, Salerno, Italy
- Valeria Conti
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, 84081, Salerno, Italy
- Vittorio Santoriello
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, 80125, Naples, Italy
- Alfonso Maria Ponsiglione
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, 80125, Naples, Italy
- Maria Romano
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, 80125, Naples, Italy
- Francesco Amato
- Department of Information Technology and Electrical Engineering, University of Naples Federico II, 80125, Naples, Italy
- Ornella Piazza
- Department of Medicine, Surgery and Dentistry, University of Salerno, Baronissi, 84081, Salerno, Italy
11
Modi AD, Parekh A, Patel ZH. Methods for evaluating gait associated dynamic balance and coordination in rodents. Behav Brain Res 2024; 456:114695. PMID: 37783346. DOI: 10.1016/j.bbr.2023.114695.
Abstract
Balance is the dynamic and unconscious control of the body's centre of mass to maintain postural equilibrium. Regulated by the vestibular system, head movement and acceleration are processed by the brain to adjust the joints. Several conditions result in a loss of balance, including Alzheimer's disease, Parkinson's disease, Ménière's disease and cervical spondylosis, all of which involve damage to parts of the vestibular pathways. Studies of vestibular impairment are difficult to carry out in human trials, because small study sizes limit the generalizability of results and the human balance control mechanism is incompletely understood. In contrast, more controlled research can be performed in animal studies, which have fewer confounding factors than human models and allow specific conditions that affect balance to be replicated. Balance control can be studied using rodent balance-related behavioural tests after spinal or brain lesions, such as the Basso, Beattie and Bresnahan (BBB) Locomotor Scale, the Foot Fault Scoring System, the Ledged Beam Test, the Beam Walking Test, and the Ladder Beam Test, which are discussed in this review article along with their advantages and disadvantages. These tests can be performed in preclinical rodent models of femoral nerve injury, stroke, spinal cord injury and neurodegenerative diseases.
Affiliation(s)
- Akshat D Modi
- Department of Biological Sciences, University of Toronto, Scarborough, Ontario M1C 1A4, Canada; Department of Genetics and Development, Krembil Research Institute, Toronto, Ontario M5T 0S8, Canada
- Anavi Parekh
- Department of Neuroscience, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Zeenal H Patel
- Department of Biological Sciences, University of Toronto, Scarborough, Ontario M1C 1A4, Canada; Department of Biochemistry, University of Toronto, Scarborough, Ontario M1C 1A4, Canada
12
Lipp HP, Krackow S, Turkes E, Benner S, Endo T, Russig H. IntelliCage: the development and perspectives of a mouse- and user-friendly automated behavioral test system. Front Behav Neurosci 2024; 17:1270538. PMID: 38235003. PMCID: PMC10793385. DOI: 10.3389/fnbeh.2023.1270538.
Abstract
IntelliCage for mice is a rodent home-cage equipped with four corner structures, each harboring symmetrical double panels for operant conditioning at each of the two sides, either by reward (access to water) or by aversion (non-painful stimuli: air-puffs, LED lights). Corner visits, nose-pokes and actual licks at bottle-nipples are recorded individually using subcutaneously implanted transponders for RFID identification of up to 16 adult mice housed in the same home-cage. This allows individual in-cage activity to be recorded and reward/punishment operant conditioning schemes to be applied in corners, using workflows designed on a versatile graphic user interface. IntelliCage development had four roots: (i) dissatisfaction with standard approaches for analyzing mouse behavior, including standardization and reproducibility issues; (ii) a response to animal welfare issues around handling and housing; (iii) the increasing number of mouse models, which had produced a high work burden for classic manual behavioral phenotyping of single mice; and (iv) studies of transponder-chipped mice in outdoor settings, which revealed clear genetic behavioral differences in mouse models corresponding to those observed by classic testing in the laboratory. The latter observations were important for the development of home-cage testing in social groups, because they contradicted the traditional belief that animals must be tested under social isolation to prevent disturbance by other group members. The use of IntelliCages indeed reduced the amount of classic testing remarkably, while its flexibility has been proven in a wide range of applications worldwide, including transcontinental parallel testing. Essentially, two lines of testing emerged: sophisticated analysis of spontaneous behavior in the IntelliCage for screening new genetic models, and hypothesis testing in many fields of behavioral neuroscience.
Upcoming developments of the IntelliCage aim at improved stimulus presentation in the learning corners and video tracking of social interactions within the IntelliCage. Its main advantages are (i) that mice live in a social context and are not stressfully handled for experiments, (ii) that studies are not restricted in time and can run in the absence of humans, (iii) that it increases the reproducibility of behavioral phenotyping worldwide, and (iv) that the industrial standardization of the cage permits retrospective data analysis with new statistical tools, even after many years.
Affiliation(s)
- Hans-Peter Lipp
- Faculty of Medicine, Institute of Evolutionary Medicine, University of Zürich, Zürich, Switzerland
- Sven Krackow
- Institute of Pathology and Molecular Pathology, University Hospital Zürich, Zürich, Switzerland
- Emir Turkes
- Queen Square Institute of Neurology, University College London, London, United Kingdom
- Seico Benner
- Center for Health and Environmental Risk Research, National Institute for Environmental Studies, Ibaraki, Japan
13
Badrulhisham F, Pogatzki-Zahn E, Segelcke D, Spisak T, Vollert J. Machine learning and artificial intelligence in neuroscience: a primer for researchers. Brain Behav Immun 2024; 115:470-479. PMID: 37972877. DOI: 10.1016/j.bbi.2023.11.005.
Abstract
Artificial intelligence (AI) is often used to describe the automation of complex tasks that we would attribute intelligence to. Machine learning (ML) is commonly understood as a set of methods used to develop an AI. Both have seen a recent boom in usage, in scientific as well as commercial fields. For the scientific community, ML can solve bottlenecks created by the complex, multi-dimensional data generated, for example, by functional brain imaging or *omics approaches. Here, ML can identify patterns that could not have been found using traditional statistical approaches. However, ML comes with serious limitations that need to be kept in mind: the tendency of ML models to optimise solutions for the input data means it is of crucial importance to externally validate any findings before considering them more than a hypothesis. Their black-box nature implies that their decisions usually cannot be understood, which renders their use in medical decision-making problematic and can lead to ethical issues. Here, we present an introduction to the field of ML/AI for the curious. We explain the principles of commonly used methods as well as recent methodological advancements, before discussing risks and what we see as future directions of the field. Finally, we show practical examples from neuroscience to illustrate the use and limitations of ML.
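The external-validation point can be demonstrated in a few lines: an over-flexible model fit to one sample looks deceptively good on that sample and worse on fresh data. A generic illustration with synthetic data, not an example from the paper:

```python
import numpy as np

def train_vs_external_error(degree=9, n=20, seed=0):
    """Fit a deliberately over-flexible polynomial to one noisy sample,
    then evaluate on an independent 'external' sample from the same
    process. Returns (training MSE, external MSE)."""
    rng = np.random.default_rng(seed)

    def sample(n):
        x = rng.uniform(-1, 1, n)
        return x, np.sin(3 * x) + 0.1 * rng.standard_normal(n)

    x_tr, y_tr = sample(n)
    x_ex, y_ex = sample(n)                    # independent "external" cohort
    coef = np.polyfit(x_tr, y_tr, degree)     # optimises for the input data
    mse = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)
    return mse(x_tr, y_tr), mse(x_ex, y_ex)
```

The gap between the two errors is exactly why a finding validated only on the data it was fit to should be treated as a hypothesis, not a result.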
Affiliation(s)
- Esther Pogatzki-Zahn
- Department of Anaesthesiology, Intensive Care and Pain Medicine, University Hospital Muenster, Muenster, Germany
- Daniel Segelcke
- Department of Anaesthesiology, Intensive Care and Pain Medicine, University Hospital Muenster, Muenster, Germany
- Tamas Spisak
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Medicine Essen, Essen, Germany; Center for Translational Neuro- and Behavioral Sciences, Department of Neurology, University Medicine Essen, Essen, Germany
- Jan Vollert
- Department of Clinical and Biomedical Sciences, Faculty of Health and Life Sciences, University of Exeter, Exeter, United Kingdom; Pain Research, Department of Surgery and Cancer, Imperial College London, London, United Kingdom.
14
Zhang L, Zhao L, Yan Y. A hybrid neural network-based intelligent body posture estimation system in sports scenes. Math Biosci Eng 2024;21:1017-1037. [PMID: 38303452] [DOI: 10.3934/mbe.2024042]
Abstract
Body posture estimation is an active branch of computer vision. This work focuses on one of its typical applications: recognition of various body postures in sports scenes. Existing methods have mostly been built on convolutional neural network (CNN) structures, owing to their strong ability to sense visual information. However, sports scenes are highly dynamic, and many valuable contextual features can be extracted from multimedia frame sequences. To address this challenge, this paper proposes a hybrid neural network-based intelligent body posture estimation system for sports scenes. Specifically, a CNN unit and a long short-term memory (LSTM) unit are employed as the backbone network to extract keypoint information and temporal information from video frames, respectively. Then, a semi-supervised learning-based computing framework is developed to output estimation results, enabling training with only limited labeled samples. Finally, extensive experiments show that the proposed body posture estimation method achieves solid estimation performance on real-world frame samples from sports scenes.
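The CNN-plus-LSTM backbone described above can be sketched in miniature. The following is a forward pass only, with hypothetical shapes and random weights, meant to show how a temporal convolution stage feeds an LSTM stage; it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: T video frames, K per-frame keypoint features, H hidden units.
T, K, H = 20, 8, 16
frames = rng.normal(size=(T, K))

# "CNN" stage, reduced here to a single 1-D temporal convolution with ReLU.
def conv1d(x, w, b):
    kt = w.shape[0]                      # kernel width
    out = np.stack([
        x[t:t + kt].reshape(-1) @ w.reshape(-1, w.shape[-1]) + b
        for t in range(x.shape[0] - kt + 1)
    ])
    return np.maximum(out, 0.0)

w_conv = rng.normal(scale=0.1, size=(3, K, H))
feats = conv1d(frames, w_conv, np.zeros(H))      # shape (T - 2, H)

# LSTM stage: standard gate equations, unrolled over the convolved sequence.
Wx = rng.normal(scale=0.1, size=(H, 4 * H))
Wh = rng.normal(scale=0.1, size=(H, 4 * H))
b = np.zeros(4 * H)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = c = np.zeros(H)
for x_t in feats:
    z = x_t @ Wx + h @ Wh + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)

# Final linear readout to a hypothetical set of 5 posture classes.
posture_logits = h @ rng.normal(scale=0.1, size=(H, 5))
print(posture_logits.shape)
```

The convolution captures short-range spatial structure within a window of frames, while the LSTM carries longer-range temporal context, mirroring the division of labor the abstract describes.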
Affiliation(s)
- Liguo Zhang
- School of Physical Education, Shandong University, Jinan 250000, China
- Liangyu Zhao
- School of Physical Education, Shandong University, Jinan 250000, China
- Yongtao Yan
- Department of Physical Education, Shenzhen Polytechnic, Shenzhen 518055, China
15
Gyles TM, Nestler EJ, Parise EM. Advancing preclinical chronic stress models to promote therapeutic discovery for human stress disorders. Neuropsychopharmacology 2024;49:215-226. [PMID: 37349475] [PMCID: PMC10700361] [DOI: 10.1038/s41386-023-01625-0]
Abstract
There is an urgent need to develop more effective treatments for stress-related illnesses, which include depression, post-traumatic stress disorder, and anxiety. We view animal models as playing an essential role in this effort, but to date, such approaches have generally not succeeded in developing therapeutics with new mechanisms of action. This is partly due to the complexity of the brain and its disorders, but also to inherent difficulties in modeling human disorders in rodents and to the incorrect use of animal models: namely, trying to recapitulate a human syndrome in a rodent (which is likely not possible) as opposed to using animals to understand underlying mechanisms and to evaluate potential therapeutic paths. Recent transcriptomic research has established the ability of several different chronic stress procedures in rodents to recapitulate large portions of the molecular pathology seen in postmortem brain tissue of individuals with depression. These findings provide crucial validation for the clear relevance of rodent stress models to better understand the pathophysiology of human stress disorders and help guide therapeutic discovery. In this review, we first discuss the current limitations of preclinical chronic stress models as well as traditional behavioral phenotyping approaches. We then explore opportunities to dramatically enhance the translational use of rodent stress models through the application of new experimental technologies. The goal of this review is to promote the synthesis of these novel approaches in rodents with human cell-based approaches and ultimately with early-phase proof-of-concept studies in humans to develop more effective treatments for human stress disorders.
Affiliation(s)
- Trevonn M Gyles
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Eric J Nestler
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Eric M Parise
- Nash Family Department of Neuroscience & Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA.
16
Desai N, Bala P, Richardson R, Raper J, Zimmermann J, Hayden B. OpenApePose, a database of annotated ape photographs for pose estimation. eLife 2023;12:RP86873. [PMID: 38078902] [PMCID: PMC10712952] [DOI: 10.7554/elife.86873]
Abstract
Because of their close relationship with humans, non-human apes (chimpanzees, bonobos, gorillas, orangutans, and gibbons, including siamangs) are of great scientific interest. The goal of understanding their complex behavior would be greatly advanced by the ability to perform video-based pose tracking. Tracking, however, requires high-quality annotated datasets of ape photographs. Here we present OpenApePose, a new public dataset of 71,868 photographs, annotated with 16 body landmarks of six ape species in naturalistic contexts. We show that a standard deep net (HRNet-W48) trained on ape photos can reliably track out-of-sample ape photos better than networks trained on monkeys (specifically, the OpenMonkeyPose dataset) and on humans (COCO) can. This trained network can track apes almost as well as the other networks can track their respective taxa, and models trained without one of the six ape species can track the held-out species better than the monkey and human models can. Ultimately, the results of our analyses highlight the importance of large, specialized databases for animal tracking systems and confirm the utility of our new ape database.
Affiliation(s)
- Nisarg Desai
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, United States
- Praneet Bala
- Department of Computer Science, University of Minnesota, Minneapolis, United States
- Rebecca Richardson
- Emory National Primate Research Center, Emory University, Atlanta, United States
- Jessica Raper
- Emory National Primate Research Center, Emory University, Atlanta, United States
- Jan Zimmermann
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, United States
- Benjamin Hayden
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, United States
17
Kleinfeld D, Deschênes M, Economo MN, Elbaz M, Golomb D, Liao SM, O'Connor DH, Wang F. Low- and high-level coordination of orofacial motor actions. Curr Opin Neurobiol 2023;83:102784. [PMID: 37757586] [PMCID: PMC11034851] [DOI: 10.1016/j.conb.2023.102784]
Abstract
Orofacial motor actions are movements that, in rodents, involve whisking of the vibrissae, deflection of the nose, licking and lapping with the tongue, and consumption through chewing. These actions, along with bobbing and turning of the head, coordinate to subserve exploration while not conflicting with life-supporting actions such as breathing and swallowing. Orofacial and head movements comprise two additive components: a rhythm that can be entrained by the breathing oscillator and a broadband component that directs the actuator to the region of interest. We focus on the coordination of the rhythmic components of these actions into a behavior. We hypothesize that the precise timing of each constituent action is continually adjusted through the merging of low-level oscillator input with sensory-derived, high-level rhythmic feedback. Supporting evidence is discussed.
Affiliation(s)
- David Kleinfeld
- Department of Physics, University of California at San Diego, La Jolla, CA 92093, USA; Department of Neurobiology, University of California at San Diego, La Jolla, CA 92093, USA.
- Martin Deschênes
- Department of Psychiatry and Neuroscience, Laval University, Québec City, G1J 2R3 Canada
- Michael N Economo
- Department of Bioengineering, Boston University, Boston, MA 02215, USA
- Michaël Elbaz
- Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
- David Golomb
- Department of Physiology and Cell Biology, Ben Gurion University, Be'er-Sheba 8410501, Israel; Department of Physics, Ben Gurion University, Be'er-Sheba 8410501, Israel
- Song-Mao Liao
- Department of Physics, University of California at San Diego, La Jolla, CA 92093, USA
- Daniel H O'Connor
- Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA; Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Fan Wang
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
18
Sakata S. SaLSa: A Combinatory Approach of Semi-Automatic Labeling and Long Short-Term Memory to Classify Behavioral Syllables. eNeuro 2023;10:ENEURO.0201-23.2023. [PMID: 37989587] [PMCID: PMC10714892] [DOI: 10.1523/eneuro.0201-23.2023]
Abstract
Accurately and quantitatively describing mouse behavior is an important goal. Although advances in machine learning have made it possible to track body parts accurately, reliable classification of behavioral sequences or syllables remains a challenge. In this study, we present a novel machine learning approach, called SaLSa (a combination of semi-automatic labeling and long short-term memory-based classification), to classify behavioral syllables of mice exploring an open field. This approach consists of two major steps. First, after tracking multiple body parts, spatial and temporal features of their egocentric coordinates are extracted. A fully automated unsupervised process identifies candidates for behavioral syllables, followed by manual labeling of behavioral syllables using a graphical user interface (GUI). Second, a long short-term memory (LSTM) classifier is trained with the labeled data. We found that classification performance exceeded 97%. The approach performs on par with a state-of-the-art model when classifying some of the syllables. We applied this approach to examine how hyperactivity in a mouse model of Alzheimer's disease develops with age. When the proportion of each behavioral syllable was compared between genotypes and sexes, we found that the characteristic hyperlocomotion of female Alzheimer's disease mice emerges between four and eight months. In contrast, an age-related reduction in rearing is common regardless of genotype and sex. Overall, SaLSa enables detailed characterization of mouse behavior.
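The first step of the pipeline described above, turning tracked body parts into egocentric spatial and temporal features, can be sketched as follows. The keypoint layout and feature choices here are assumptions for illustration, not the paper's exact features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical tracked keypoints: T frames x K body parts x (x, y) coordinates.
T, K = 100, 4
keypoints = np.cumsum(rng.normal(size=(T, K, 2)), axis=0)   # random smooth-ish paths

# Egocentric transform: center on the body centroid, then rotate each frame so the
# centroid-to-keypoint-0 vector (assumed here to be the head) points along +x.
centroid = keypoints.mean(axis=1, keepdims=True)
centered = keypoints - centroid
heading = np.arctan2(centered[:, 0, 1], centered[:, 0, 0])
cos, sin = np.cos(-heading), np.sin(-heading)
rot = np.stack([np.stack([cos, -sin], -1), np.stack([sin, cos], -1)], axis=1)
ego = np.einsum('tij,tkj->tki', rot, centered)              # rotated coordinates

# Temporal features: per-keypoint frame-to-frame displacement magnitude.
speed = np.linalg.norm(np.diff(ego, axis=0), axis=-1)

# One feature vector per frame (dropping the first frame to align with speed).
features = np.concatenate([ego[1:].reshape(T - 1, -1), speed], axis=1)
print(features.shape)
```

Features of this kind are what the unsupervised candidate detection and the downstream LSTM classifier would consume.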
Affiliation(s)
- Shuzo Sakata
- Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow G4 0RE, United Kingdom
19
An L, Ren J, Yu T, Hai T, Jia Y, Liu Y. Three-dimensional surface motion capture of multiple freely moving pigs using MAMMAL. Nat Commun 2023;14:7727. [PMID: 38001106] [PMCID: PMC10673844] [DOI: 10.1038/s41467-023-43483-w]
Abstract
Understanding the three-dimensional social behaviors of freely moving large mammals is valuable for both agriculture and the life sciences, yet challenging due to occlusions during close interactions. Although existing animal pose estimation methods capture keypoint trajectories, they ignore deformable surfaces, which contain geometric information essential for predicting social interactions and for dealing with occlusions. In this study, we develop a Multi-Animal Mesh Model Alignment (MAMMAL) system based on an articulated surface mesh model. Our MAMMAL algorithms automatically align multi-view images with the mesh model and capture the 3D surface motions of multiple animals, performing better under severe occlusions than traditional triangulation and enabling complex social analysis. Using MAMMAL, we quantitatively analyze the locomotion, postures, animal-scene interactions, social interactions, and detailed tail motions of pigs. Furthermore, experiments on mice and Beagle dogs demonstrate the generalizability of MAMMAL across different environments and mammalian species.
Affiliation(s)
- Liang An
- Department of Automation, Tsinghua University, Beijing, China
- Jilong Ren
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, China
- Beijing Farm Animal Research Center, Institute of Zoology, Chinese Academy of Sciences, Beijing, China
- Tao Yu
- Department of Automation, Tsinghua University, Beijing, China
- Tsinghua University Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China
- Tang Hai
- State Key Laboratory of Stem Cell and Reproductive Biology, Institute of Zoology, Chinese Academy of Sciences, Beijing, China.
- Beijing Farm Animal Research Center, Institute of Zoology, Chinese Academy of Sciences, Beijing, China.
- Yichang Jia
- School of Medicine, Tsinghua University, Beijing, China.
- IDG/McGovern Institute for Brain Research at Tsinghua, Beijing, China.
- Tsinghua Laboratory of Brain and Intelligence, Beijing, China.
- Yebin Liu
- Department of Automation, Tsinghua University, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, China.
20
Keles MF, Sapci A, Brody C, Palmer I, Le C, Tastan O, Keles S, Wu MN. Deep Phenotyping of Sleep in Drosophila. bioRxiv 2023:2023.10.30.564733. [PMID: 37961473] [PMCID: PMC10635029] [DOI: 10.1101/2023.10.30.564733]
Abstract
Sleep is an evolutionarily conserved behavior, whose function is unknown. Here, we present a method for deep phenotyping of sleep in Drosophila, consisting of a high-resolution video imaging system, coupled with closed-loop laser perturbation to measure arousal threshold. To quantify sleep-associated microbehaviors, we trained a deep-learning network to annotate body parts in freely moving flies and developed a semi-supervised computational pipeline to classify behaviors. Quiescent flies exhibit a rich repertoire of microbehaviors, including proboscis pumping (PP) and haltere switches, which vary dynamically across the night. Using this system, we characterized the effects of optogenetically activating two putative sleep circuits. These data reveal that activating dFB neurons produces micromovements, inconsistent with sleep, while activating R5 neurons triggers PP followed by behavioral quiescence. Our findings suggest that sleep in Drosophila is polyphasic with different stages and set the stage for a rigorous analysis of sleep and other behaviors in this species.
Affiliation(s)
- Mehmet F. Keles
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA
- Ali Sapci
- Department of Computer Science, Sabanci University, Tuzla, Istanbul, 34956, Turkey
- Casey Brody
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA
- Isabelle Palmer
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA
- Christin Le
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA
- Oznur Tastan
- Department of Computer Science, Sabanci University, Tuzla, Istanbul, 34956, Turkey
- Sunduz Keles
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, WI 53706, USA
- Mark N. Wu
- Department of Neurology, Johns Hopkins University, Baltimore, MD 21205, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21287, USA
21
Voloh B, Maisson DJN, Cervera RL, Conover I, Zambre M, Hayden B, Zimmermann J. Hierarchical action encoding in prefrontal cortex of freely moving macaques. Cell Rep 2023;42:113091. [PMID: 37656619] [PMCID: PMC10591875] [DOI: 10.1016/j.celrep.2023.113091]
Abstract
Our natural behavioral repertoires include coordinated actions of characteristic types. To better understand how neural activity relates to the expression of actions and action switches, we studied macaques performing a freely moving foraging task in an open environment. We developed a novel analysis pipeline that can identify meaningful units of behavior, corresponding to recognizable actions such as sitting, walking, jumping, and climbing. On the basis of transition probabilities between these actions, we found that behavior is organized in a modular and hierarchical fashion. We found that, after regressing out many potential confounders, actions are associated with specific patterns of firing in each of six prefrontal brain regions and that, overall, encoding of action category is progressively stronger in more dorsal and more caudal prefrontal regions. Together, these results establish a link between selection of units of primate behavior on one hand and neuronal activity in prefrontal regions on the other.
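The transition-probability analysis described above reduces to counting consecutive action pairs and row-normalizing. A minimal sketch, with a made-up action sequence standing in for the macaque data:

```python
import numpy as np

# A toy sequence of per-epoch behavioral labels (illustrative, not real data).
actions = ["sit", "walk", "walk", "jump", "climb", "sit", "walk", "jump", "climb", "sit"]

labels = sorted(set(actions))
index = {a: i for i, a in enumerate(labels)}

# Count transitions between consecutive actions.
counts = np.zeros((len(labels), len(labels)))
for a, b in zip(actions, actions[1:]):
    counts[index[a], index[b]] += 1

# Row-normalize to obtain P(next action | current action).
transition = counts / counts.sum(axis=1, keepdims=True)
print(transition.round(2))
```

Block structure in such a matrix (groups of actions that transition mostly among themselves) is what supports the modular, hierarchical organization the study reports.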
Affiliation(s)
- Benjamin Voloh
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
- David J-N Maisson
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
- Indirah Conover
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
- Mrunal Zambre
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Hayden
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX 77030, USA
- Jan Zimmermann
- Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
22
Butler DJ, Keim AP, Ray S, Azim E. Large-scale capture of hidden fluorescent labels for training generalizable markerless motion capture models. Nat Commun 2023;14:5866. [PMID: 37752123] [PMCID: PMC10522643] [DOI: 10.1038/s41467-023-41565-3]
Abstract
Deep learning-based markerless tracking has revolutionized studies of animal behavior. Yet the generalizability of trained models tends to be limited, as new training data typically needs to be generated manually for each setup or visual environment. With each model trained from scratch, researchers track distinct landmarks and analyze the resulting kinematic data in idiosyncratic ways. Moreover, due to inherent limitations in manual annotation, only a sparse set of landmarks are typically labeled. To address these issues, we developed an approach, which we term GlowTrack, for generating orders of magnitude more training data, enabling models that generalize across experimental contexts. We describe: a) a high-throughput approach for producing hidden labels using fluorescent markers; b) a multi-camera, multi-light setup for simulating diverse visual conditions; and c) a technique for labeling many landmarks in parallel, enabling dense tracking. These advances lay a foundation for standardized behavioral pipelines and more complete scrutiny of movement.
Affiliation(s)
- Daniel J Butler
- Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Alexander P Keim
- Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Shantanu Ray
- Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA
- Eiman Azim
- Molecular Neurobiology Laboratory, Salk Institute for Biological Studies, 10010 N. Torrey Pines Road, La Jolla, CA, 92037, USA.
23
Newman JP, Zhang J, Cuevas-López A, Miller NJ, Honda T, van der Goes MSH, Leighton AH, Carvalho F, Lopes G, Lakunina A, Siegle JH, Harnett MT, Wilson MA, Voigts J. A unified open-source platform for multimodal neural recording and perturbation during naturalistic behavior. bioRxiv 2023:2023.08.30.554672. [PMID: 37693443] [PMCID: PMC10491150] [DOI: 10.1101/2023.08.30.554672]
Abstract
Behavioral neuroscience faces two conflicting demands: long-duration recordings from large neural populations and unimpeded animal behavior. To meet this challenge, we developed ONIX, an open-source data acquisition system with high data throughput (2 GB/s) and low closed-loop latencies (<1 ms) that uses a novel 0.3 mm thin tether to minimize behavioral impact. Head position and rotation are tracked in 3D and used to drive active commutation without torque measurements. ONIX can acquire from combinations of passive electrodes, Neuropixels probes, head-mounted microscopes, cameras, 3D-trackers, and other data sources. We used ONIX to perform uninterrupted, long (~7 hours) neural recordings in mice as they traversed complex three-dimensional terrain. ONIX allowed exploration with mobility similar to that of non-implanted animals, in contrast to conventional tethered systems, which restricted movement. By combining long recordings with full mobility, our technology will enable new progress on questions that require high-quality neural recordings during ethologically grounded behaviors.
Affiliation(s)
- Jonathan P Newman
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Open Ephys Inc., Atlanta, GA, USA
- Jie Zhang
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Aarón Cuevas-López
- Open Ephys Inc., Atlanta, GA, USA
- Dept. of Electrical Engineering, Polytechnic University of Valencia, Valencia, Spain
- Open Ephys Production Site, Lisbon, Portugal
- Nicholas J Miller
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Takato Honda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Marie-Sophie H van der Goes
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Anna Lakunina
- Allen Institute for Neural Dynamics, Seattle, Washington, USA
- Joshua H Siegle
- Allen Institute for Neural Dynamics, Seattle, Washington, USA
- Mark T Harnett
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- Matthew A Wilson
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Picower Institute for Learning and Memory, MIT, Cambridge, MA, USA
- Jakob Voigts
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
- Open Ephys Inc., Atlanta, GA, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, USA
- HHMI Janelia Research Campus, Ashburn, VA, USA
24
Nagy M, Naik H, Kano F, Carlson NV, Koblitz JC, Wikelski M, Couzin ID. SMART-BARN: Scalable multimodal arena for real-time tracking behavior of animals in large numbers. Sci Adv 2023;9:eadf8068. [PMID: 37656798] [PMCID: PMC10854427] [DOI: 10.1126/sciadv.adf8068]
Abstract
The SMART-BARN (scalable multimodal arena for real-time tracking behavior of animals in large numbers) achieves fast, robust acquisition of movement, behavior, communication, and interactions of animals in groups, within a large (14.7 meters by 6.6 meters by 3.8 meters), three-dimensional environment using multiple information channels. Behavior is measured from a wide range of taxa (insects, birds, mammals, etc.) and body size (from moths to humans) simultaneously. This system integrates multiple, concurrent measurement techniques including submillimeter precision and high-speed (300 hertz) motion capture, acoustic recording and localization, automated behavioral recognition (computer vision), and remote computer-controlled interactive units (e.g., automated feeders and animal-borne devices). The data streams are available in real time allowing highly controlled and behavior-dependent closed-loop experiments, while producing comprehensive datasets for offline analysis. The diverse capabilities of SMART-BARN are demonstrated through three challenging avian case studies, while highlighting its broad applicability to the fine-scale analysis of collective animal behavior across species.
Affiliation(s)
- Máté Nagy
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
- MTA-ELTE Lendület Collective Behavior Research Group, Hungarian Academy of Sciences, Budapest, Hungary
- MTA-ELTE Statistical and Biological Physics Research Group, Eötvös Loránd Research Network, Budapest, Hungary
- Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary
- Hemal Naik
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
- Department of Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Fumihiro Kano
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
- Nora V. Carlson
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
- Department of Zoology, Faculty of Science/Graduate School of Science, Kyoto University, Kyoto, 606-8502, Japan
- Jens C. Koblitz
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
- Martin Wikelski
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Migration, Max Planck Institute of Animal Behavior, Radolfzell, Germany
- Iain D. Couzin
- Department of Collective Behavior, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Centre for the Advanced Study of Collective Behavior, University of Konstanz, Konstanz, Germany
- Department of Biology, University of Konstanz, Konstanz, Germany
25
Hu B, Seybold B, Yang S, Sud A, Liu Y, Barron K, Cha P, Cosino M, Karlsson E, Kite J, Kolumam G, Preciado J, Zavala-Solorio J, Zhang C, Zhang X, Voorbach M, Tovcimak AE, Ruby JG, Ross DA. 3D mouse pose from single-view video and a new dataset. Sci Rep 2023;13:13554. [PMID: 37604955] [PMCID: PMC10442417] [DOI: 10.1038/s41598-023-40738-w]
Abstract
We present a method to infer the 3D pose of mice, including the limbs and feet, from monocular videos. Many human clinical conditions and their corresponding animal models result in abnormal motion, and accurately measuring 3D motion at scale offers insights into health. The 3D poses improve classification of health-related attributes over 2D representations. The inferred poses are accurate enough to estimate stride length even when the feet are mostly occluded. This method could be applied as part of a continuous monitoring system to non-invasively measure animal health, as demonstrated by its use in successfully classifying animals based on age and genotype. We introduce the Mouse Pose Analysis Dataset, the first large scale video dataset of lab mice in their home cage with ground truth keypoint and behavior labels. The dataset also contains high resolution mouse CT scans, which we use to build the shape models for 3D pose reconstruction.
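Estimating stride length from foot keypoints, as described above, amounts to detecting foot strikes (frames where the foot stops moving forward) and measuring the distance between consecutive strikes. The sketch below uses synthetic foot positions and a simple speed threshold; the paper's actual procedure may differ:

```python
import numpy as np

# Synthetic forward position of one foot: alternating stance (planted) and
# swing (moving forward) phases, six stride cycles of 5.0 units each.
stride, frames_per_phase = 5.0, 10
pos, x = [], 0.0
for _ in range(6):
    pos += [x] * frames_per_phase                                              # stance
    pos += list(x + np.linspace(0, stride, frames_per_phase, endpoint=False))  # swing
    x += stride
pos = np.array(pos)

# Foot strikes: frames where the foot transitions from moving to stationary.
speed = np.abs(np.diff(pos))
moving = speed > 1e-6
strikes = np.where(moving[:-1] & ~moving[1:])[0] + 1

# Stride length: forward distance covered between consecutive foot strikes.
stride_lengths = np.diff(pos[strikes])
print(stride_lengths)
```

Because stride length depends only on the foot's position at strike frames, the estimate can tolerate occlusion during the swing phase, consistent with the robustness the abstract notes.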
Collapse
Affiliation(s)
- Bo Hu
- Google, 1600 Amphitheatre Parkway, Mountain View, CA, 94043, USA.
| | - Bryan Seybold
- Google, 1600 Amphitheatre Parkway, Mountain View, CA, 94043, USA
| | - Shan Yang
- Google, 1600 Amphitheatre Parkway, Mountain View, CA, 94043, USA
| | - Avneesh Sud
- Google, 1600 Amphitheatre Parkway, Mountain View, CA, 94043, USA
| | - Yi Liu
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Karla Barron
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Paulyn Cha
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Marcelo Cosino
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Ellie Karlsson
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Janessa Kite
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Ganesh Kolumam
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Joseph Preciado
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - José Zavala-Solorio
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Chunlian Zhang
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - Xiaomeng Zhang
- Translational Imaging, Neuroscience Discovery, Abbvie, 1 N. Waukegan Rd., North Chicago, IL, 60064-1802, USA
| | - Martin Voorbach
- Translational Imaging, Neuroscience Discovery, Abbvie, 1 N. Waukegan Rd., North Chicago, IL, 60064-1802, USA
| | - Ann E Tovcimak
- Translational Imaging, Neuroscience Discovery, Abbvie, 1 N. Waukegan Rd., North Chicago, IL, 60064-1802, USA
| | - J Graham Ruby
- Calico Life Sciences LLC, 1170 Veterans Blvd., South San Francisco, CA, 94080, USA
| | - David A Ross
- Google, 1600 Amphitheatre Parkway, Mountain View, CA, 94043, USA
| |
Collapse
|
26
|
Mimica B, Tombaz T, Battistin C, Fuglstad JG, Dunn BA, Whitlock JR. Behavioral decomposition reveals rich encoding structure employed across neocortex in rats. Nat Commun 2023; 14:3947. [PMID: 37402724 DOI: 10.1038/s41467-023-39520-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Accepted: 06/16/2023] [Indexed: 07/06/2023] Open
Abstract
The cortical population code is pervaded by activity patterns evoked by movement, but it remains largely unknown how such signals relate to natural behavior or how they might support processing in sensory cortices where they have been observed. To address this we compared high-density neural recordings across four cortical regions (visual, auditory, somatosensory, motor) in relation to sensory modulation, posture, movement, and ethograms of freely foraging male rats. Momentary actions, such as rearing or turning, were represented ubiquitously and could be decoded from all sampled structures. However, more elementary and continuous features, such as pose and movement, followed region-specific organization, with neurons in visual and auditory cortices preferentially encoding mutually distinct head-orienting features in world-referenced coordinates, and somatosensory and motor cortices principally encoding the trunk and head in egocentric coordinates. The tuning properties of synaptically coupled cells also exhibited connection patterns suggestive of area-specific uses of pose and movement signals, particularly in visual and auditory regions. Together, our results indicate that ongoing behavior is encoded at multiple levels throughout the dorsal cortex, and that low-level features are differentially utilized by different regions to serve locally relevant computations.
Collapse
Affiliation(s)
- Bartul Mimica
- Princeton Neuroscience Institute, Princeton University, Washington Road, Princeton, 100190, NJ, USA.
| | - Tuçe Tombaz
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Olav Kyrres Gate 9, 7030, Trondheim, Norway
| | - Claudia Battistin
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Olav Kyrres Gate 9, 7030, Trondheim, Norway
- Department of Mathematical Sciences, Norwegian University of Science and Technology, 7491, Trondheim, Norway
| | - Jingyi Guo Fuglstad
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Olav Kyrres Gate 9, 7030, Trondheim, Norway
| | - Benjamin A Dunn
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Olav Kyrres Gate 9, 7030, Trondheim, Norway
- Department of Mathematical Sciences, Norwegian University of Science and Technology, 7491, Trondheim, Norway
| | - Jonathan R Whitlock
- Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology, Olav Kyrres Gate 9, 7030, Trondheim, Norway.
| |
Collapse
|
27
|
Chettih SN, Mackevicius EL, Hale S, Aronov D. Barcoding of episodic memories in the hippocampus of a food-caching bird. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.27.542597. [PMID: 37461442 PMCID: PMC10349996 DOI: 10.1101/2023.05.27.542597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 07/24/2023]
Abstract
Episodic memory, or memory of experienced events, is a critical function of the hippocampus [1-3]. It is therefore important to understand how hippocampal activity represents specific events in an animal's life. We addressed this question in chickadees - specialist food-caching birds that hide food at scattered locations and use memory to find their caches later in time [4,5]. We performed high-density neural recordings in the hippocampus of chickadees as they cached and retrieved seeds in a laboratory arena. We found that each caching event was represented by a burst of firing in a unique set of hippocampal neurons. These 'barcode-like' patterns of activity were sparse (<10% of neurons active), uncorrelated even for immediately adjacent caches, and different even for separate caches at the same location. The barcode representing a specific caching event was transiently reactivated whenever a bird later interacted with the same cache - for example, to retrieve food. Barcodes co-occurred with conventional place cell activity [6,7], as well as location-independent responses to cached seeds. We propose that barcodes are signatures of episodic memories evoked during memory recall. These patterns assign a unique identifier to each event and may be a mechanism for rapid formation and storage of many non-interfering memories.
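The two statistical signatures the abstract names - sparseness and near-zero pairwise correlation between event vectors - can be checked on synthetic data with a short sketch. This is purely illustrative (random binary "barcodes", with the neuron count, event count, and 10% active fraction chosen here as assumptions), not the authors' data or analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_events, frac_active = 200, 10, 0.1

# Each caching event activates a random ~10% subset of neurons (a "barcode").
barcodes = (rng.random((n_events, n_neurons)) < frac_active).astype(float)

# Population sparsity: fraction of neurons active per event.
sparsity = barcodes.mean(axis=1)

# Pairwise Pearson correlations between event population vectors.
corr = np.corrcoef(barcodes)
off_diag = corr[~np.eye(n_events, dtype=bool)]

assert sparsity.mean() < 0.2        # sparse codes
assert abs(off_diag).mean() < 0.2   # near-uncorrelated across events
```

Independent random subsets of this size are both sparse and mutually decorrelated, which is what makes such codes attractive for storing many non-interfering memories.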
Collapse
Affiliation(s)
| | | | - Stephanie Hale
- Zuckerman Mind Brain Behavior Institute, Columbia University
| | - Dmitriy Aronov
- Zuckerman Mind Brain Behavior Institute, Columbia University
| |
Collapse
|
28
|
Voloh B, Eisenreich BR, Maisson DJN, Ebitz RB, Park HS, Hayden BY, Zimmermann J. Hierarchical organization of rhesus macaque behavior. OXFORD OPEN NEUROSCIENCE 2023; 2:kvad006. [PMID: 37577290 PMCID: PMC10421634 DOI: 10.1093/oons/kvad006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 06/11/2023] [Accepted: 06/12/2023] [Indexed: 08/15/2023]
Abstract
Primatologists, psychologists and neuroscientists have long hypothesized that primate behavior is highly structured. However, delineating that structure has been impossible due to the difficulties of precision behavioral tracking. Here we analyzed a dataset consisting of continuous measures of the 3D position of two male rhesus macaques (Macaca mulatta) performing three different tasks in a large unrestrained environment over several hours. Using an unsupervised embedding approach on the tracked joints, we identified commonly repeated pose patterns, which we call postures. We found that macaques' behavior is characterized by 49 distinct postures, lasting an average of 0.6 seconds. We found evidence that behavior is hierarchically organized, in that transitions between poses tend to occur within larger modules, which correspond to identifiable actions; these actions are further organized hierarchically. Our behavioral decomposition allows us to identify universal (cross-individual and cross-task) and unique (specific to each individual and task) principles of behavior. These results demonstrate the hierarchical nature of primate behavior, provide a method for the automated ethogramming of primate behavior, and provide important constraints on neural models of pose generation.
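The decomposition described above - assigning each tracked frame to a recurring posture and reading off dwell times - can be sketched in a few lines once posture centroids exist in some pose-feature space. This is a schematic illustration, not the authors' embedding pipeline; the 2D feature space and the centroids are invented for the example:

```python
import numpy as np

def segment_postures(pose_features, centroids):
    """Assign each frame to its nearest posture centroid and return
    run-length segments as (posture_label, duration_in_frames) pairs."""
    d = np.linalg.norm(pose_features[:, None, :] - centroids[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            segments.append((int(labels[start]), t - start))
            start = t
    return segments

# Two toy postures; the animal holds each for a stretch of frames.
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
frames = np.array([[0.1, -0.1]] * 30 + [[5.2, 4.9]] * 18 + [[0.0, 0.2]] * 12)
segs = segment_postures(frames, centroids)
assert segs == [(0, 30), (1, 18), (0, 12)]
```

Transition counts between the resulting labels are then the raw material for the hierarchical module analysis the paper performs.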
Collapse
Affiliation(s)
- Benjamin Voloh
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| | - Benjamin R Eisenreich
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| | - David J-N Maisson
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| | - R Becket Ebitz
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| | - Hyun Soo Park
- Department of Computer Science and Engineering, University of Minnesota, 40 Church St, Minneapolis, MN 55455, USA
| | - Benjamin Y Hayden
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| | - Jan Zimmermann
- Department of Neuroscience, Center for Magnetic Resonance Research, Center for Neuroengineering, 1 Baylor Plaza, Houston, TX 77030
| |
Collapse
|
29
|
Li T, Severson KS, Wang F, Dunn TW. Improved 3D Markerless Mouse Pose Estimation Using Temporal Semi-Supervision. Int J Comput Vis 2023; 131:1389-1405. [PMID: 38273902 PMCID: PMC10810175 DOI: 10.1007/s11263-023-01756-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 01/10/2023] [Indexed: 02/24/2023]
Abstract
Three-dimensional markerless pose estimation from multi-view video is emerging as an exciting method for quantifying the behavior of freely moving animals. Nevertheless, scientifically precise 3D animal pose estimation remains challenging, primarily due to a lack of large training and benchmark datasets and the immaturity of algorithms tailored to the demands of animal experiments and body plans. Existing techniques employ fully supervised convolutional neural networks (CNNs) trained to predict body keypoints in individual video frames, but this demands a large collection of labeled training samples to achieve desirable 3D tracking performance. Here, we introduce a semi-supervised learning strategy that incorporates unlabeled video frames via a simple temporal constraint applied during training. In freely moving mice, our new approach improves the current state-of-the-art performance of multi-view volumetric 3D pose estimation and further enhances the temporal stability and skeletal consistency of 3D tracking.
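The core idea - penalizing implausible frame-to-frame jumps in keypoint predictions on unlabeled video - can be illustrated with a minimal sketch. This is not the authors' implementation; the array shapes and the `threshold` parameter are assumptions for the example:

```python
import numpy as np

def temporal_smoothness_penalty(keypoints, threshold=0.0):
    """Mean squared frame-to-frame displacement of predicted keypoints.

    keypoints: array of shape (T, K, D) -- T frames, K keypoints, D dims.
    Displacements below `threshold` are free, so ordinary smooth motion
    is not penalized and only abrupt jumps incur a cost.
    """
    diffs = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)  # (T-1, K)
    excess = np.maximum(diffs - threshold, 0.0)
    return float(np.mean(excess ** 2))

# A smooth trajectory incurs little penalty; a discontinuous one incurs more.
t = np.linspace(0, 1, 50)
smooth = np.stack([np.stack([t, t], axis=-1)], axis=1)   # (50, 1, 2)
jumpy = smooth.copy()
jumpy[25:] += 5.0                                        # abrupt mid-clip jump
assert temporal_smoothness_penalty(jumpy) > temporal_smoothness_penalty(smooth)
```

In the semi-supervised setting, a term of this form is added to the supervised keypoint loss and evaluated on unlabeled frames, so the unlabeled video shapes the network without any manual annotation.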
Collapse
Affiliation(s)
- Tianqing Li
- Duke University, Pratt School of Engineering, Department of Biomedical Engineering, Durham, 27708, NC, USA
| | - Kyle S. Severson
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Cambridge, 02140, MA, USA
| | - Fan Wang
- Massachusetts Institute of Technology, Department of Brain and Cognitive Sciences, Cambridge, 02140, MA, USA
| | - Timothy W. Dunn
- Duke University, Pratt School of Engineering, Department of Biomedical Engineering, Durham, 27708, NC, USA
| |
Collapse
|
30
|
Provini P, Camp AL, Crandell KE. Emerging biological insights enabled by high-resolution 3D motion data: promises, perspectives and pitfalls. J Exp Biol 2023; 226:286825. [PMID: 36752301 PMCID: PMC10038148 DOI: 10.1242/jeb.245138] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
Abstract
Deconstructing motion to better understand it is a key prerequisite in the field of comparative biomechanics. Since Marey and Muybridge's work, technical constraints have been the largest limitation to motion capture and analysis, which, in turn, limited what kinds of questions biologists could ask or answer. Throughout the history of our field, conceptual leaps and significant technical advances have generally worked hand in hand. Recently, high-resolution, three-dimensional (3D) motion data have become easier to acquire, providing new opportunities for comparative biomechanics. We describe how adding a third dimension of information has fuelled major paradigm shifts, not only leading to a reinterpretation of long-standing scientific questions but also allowing new questions to be asked. In this paper, we highlight recent work published in Journal of Experimental Biology and influenced by these studies, demonstrating the biological breakthroughs made with 3D data. Although amazing opportunities emerge from these technical and conceptual advances, high-resolution data often come with a price. Here, we discuss challenges of 3D data, including low-throughput methodology, costly equipment, low sample sizes, and complex analyses and presentation. Therefore, we propose guidelines for how and when to pursue 3D high-resolution data. We also suggest research areas that are poised for major new biological advances through emerging 3D data collection.
Collapse
Affiliation(s)
- Pauline Provini
- Université Paris Cité, Inserm, System Engineering and Evolution Dynamics, F-75004 Paris, France
- Learning Planet Institute, F-75004 Paris, France
- Département Adaptations du Vivant, UMR 7179 CNRS/Muséum National d'Histoire Naturelle, F-75005 Paris, France
| | - Ariel L Camp
- Department of Musculoskeletal and Ageing Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool L78TX, UK
| | | |
Collapse
|
31
|
Couzin ID, Heins C. Emerging technologies for behavioral research in changing environments. Trends Ecol Evol 2023; 38:346-354. [PMID: 36509561 DOI: 10.1016/j.tree.2022.11.008] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 11/14/2022] [Accepted: 11/21/2022] [Indexed: 12/13/2022]
Abstract
The first response exhibited by animals to changing environments is typically behavioral. Behavior is thus central to predicting, and mitigating, the impacts that natural and anthropogenic environmental changes will have on populations and, consequently, ecosystems. Yet the inherently multiscale nature of behavior, as well as the complexities associated with inferring how animals perceive their world, and make decisions, has constrained the scope of behavioral research. Major technological advances in electronics and in machine learning, however, provide increasingly powerful means to see, analyze, and interpret behavior in its natural complexity. We argue that these disruptive technologies will foster new approaches that will allow us to move beyond quantitative descriptions and reveal the underlying generative processes that give rise to behavior.
Collapse
Affiliation(s)
- Iain D Couzin
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour & Department of Biology, University of Konstanz, Germany.
| | - Conor Heins
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour & Department of Biology, University of Konstanz, Germany
| |
Collapse
|
32
|
Hu Y, Ferrario CR, Maitland AD, Ionides RB, Ghimire A, Watson B, Iwasaki K, White H, Xi Y, Zhou J, Ye B. LabGym: Quantification of user-defined animal behaviors using learning-based holistic assessment. CELL REPORTS METHODS 2023; 3:100415. [PMID: 37056376 PMCID: PMC10088092 DOI: 10.1016/j.crmeth.2023.100415] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Revised: 10/19/2022] [Accepted: 02/01/2023] [Indexed: 03/09/2023]
Abstract
Quantifying animal behavior is important for biological research. Identifying behaviors is a prerequisite for quantifying them. Current computational tools for behavioral quantification typically use high-level properties such as body poses to identify behaviors, which constrains the information available for a holistic assessment. Here we report LabGym, an open-source computational tool for quantifying animal behaviors without this constraint. In LabGym, we introduce the "pattern image" to represent the animal's motion pattern, in addition to the "animation" that shows all spatiotemporal details of a behavior. These two pieces of information are assessed holistically by customizable deep neural networks for accurate behavior identification. The quantitative measurements of each behavior are then calculated. LabGym is applicable to experiments involving multiple animals, requires little programming knowledge to use, and provides visualizations of behavioral datasets. We demonstrate its efficacy in capturing subtle behavioral changes in diverse animal species.
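One way to picture a "pattern image" is as a single frame that collapses a short clip while encoding recency. The sketch below is an assumption-laden illustration of that general idea, not LabGym's actual implementation (binary animal masks and a linear recency weighting are both invented for the example):

```python
import numpy as np

def pattern_image(frames):
    """Collapse a short clip (T, H, W) of binary animal masks into one
    image whose intensity encodes recency: later frames paint brighter."""
    T = len(frames)
    weights = np.linspace(0.2, 1.0, T)[:, None, None]
    return (frames * weights).max(axis=0)

clip = np.zeros((4, 5, 5))
for t in range(4):              # a blob moving left to right
    clip[t, 2, t] = 1.0
img = pattern_image(clip)
assert img[2, 3] > img[2, 0]    # recent positions appear brighter
```

A 2D image like this can then be fed to a standard convolutional classifier alongside the raw animation, which is the holistic-assessment design the abstract describes.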
Collapse
Affiliation(s)
- Yujia Hu
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Carrie R. Ferrario
- Department of Pharmacology and Psychology Department (Biopsychology), University of Michigan, Ann Arbor, MI 48109, USA
| | - Alexander D. Maitland
- Department of Pharmacology and Psychology Department (Biopsychology), University of Michigan, Ann Arbor, MI 48109, USA
| | - Rita B. Ionides
- Department of Pharmacology and Psychology Department (Biopsychology), University of Michigan, Ann Arbor, MI 48109, USA
| | - Anjesh Ghimire
- Department of Psychiatry, University of Michigan, Ann Arbor, MI 48109, USA
| | - Brendon Watson
- Department of Psychiatry, University of Michigan, Ann Arbor, MI 48109, USA
| | - Kenichi Iwasaki
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Hope White
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Yitao Xi
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
| | - Jie Zhou
- Department of Computer Science, Northern Illinois University, DeKalb, IL 60115, USA
| | - Bing Ye
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI 48109, USA
| |
Collapse
|
33
|
Luxem K, Sun JJ, Bradley SP, Krishnan K, Yttri E, Zimmermann J, Pereira TD, Laubach M. Open-source tools for behavioral video analysis: Setup, methods, and best practices. eLife 2023; 12:79305. [PMID: 36951911 PMCID: PMC10036114 DOI: 10.7554/elife.79305] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 03/03/2023] [Indexed: 03/24/2023] Open
Abstract
Recently developed methods for video analysis, especially models for pose estimation and behavior classification, are transforming behavioral quantification to be more precise, scalable, and reproducible in fields such as neuroscience and ethology. These tools overcome long-standing limitations of manual scoring of video frames and traditional 'center of mass' tracking algorithms to enable video analysis at scale. The expansion of open-source tools for video acquisition and analysis has led to new experimental approaches to understand behavior. Here, we review currently available open-source tools for video analysis and discuss how to set up these methods for labs new to video recording. We also discuss best practices for developing and using video analysis methods, including community-wide standards and critical needs for the open sharing of datasets and code, more widespread comparisons of video analysis methods, and better documentation for these methods especially for new users. We encourage broader adoption and continued development of these tools, which have tremendous potential for accelerating scientific progress in understanding the brain and behavior.
Collapse
Affiliation(s)
- Kevin Luxem
- Cellular Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Jennifer J Sun
- Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, United States
| | - Sean P Bradley
- Rodent Behavioral Core, National Institute of Mental Health, National Institutes of Health, Bethesda, United States
| | - Keerthi Krishnan
- Department of Biochemistry and Cellular & Molecular Biology, University of Tennessee, Knoxville, United States
| | - Eric Yttri
- Department of Biological Sciences, Carnegie Mellon University, Pittsburgh, United States
| | - Jan Zimmermann
- Department of Neuroscience, University of Minnesota, Minneapolis, United States
| | - Talmo D Pereira
- The Salk Institute of Biological Studies, La Jolla, United States
| | - Mark Laubach
- Department of Neuroscience, American University, Washington D.C., United States
| |
Collapse
|
34
|
Sanders LM, Scott RT, Yang JH, Qutub AA, Garcia Martin H, Berrios DC, Hastings JJA, Rask J, Mackintosh G, Hoarfrost AL, Chalk S, Kalantari J, Khezeli K, Antonsen EL, Babdor J, Barker R, Baranzini SE, Beheshti A, Delgado-Aparicio GM, Glicksberg BS, Greene CS, Haendel M, Hamid AA, Heller P, Jamieson D, Jarvis KJ, Komarova SV, Komorowski M, Kothiyal P, Mahabal A, Manor U, Mason CE, Matar M, Mias GI, Miller J, Myers JG, Nelson C, Oribello J, Park SM, Parsons-Wingerter P, Prabhu RK, Reynolds RJ, Saravia-Butler A, Saria S, Sawyer A, Singh NK, Snyder M, Soboczenski F, Soman K, Theriot CA, Van Valen D, Venkateswaran K, Warren L, Worthey L, Zitnik M, Costes SV. Biological research and self-driving labs in deep space supported by artificial intelligence. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-023-00618-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023]
|
35
|
Multi-view Tracking, Re-ID, and Social Network Analysis of a Flock of Visually Similar Birds in an Outdoor Aviary. Int J Comput Vis 2023. [DOI: 10.1007/s11263-023-01768-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2023]
|
36
|
Bohnslav JP, Osman MAM, Jaggi A, Soares S, Weinreb C, Datta SR, Harvey CD. ArMo: An Articulated Mesh Approach for Mouse 3D Reconstruction. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.02.17.526719. [PMID: 36824774 PMCID: PMC9949085 DOI: 10.1101/2023.02.17.526719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/20/2023]
Abstract
Characterizing animal behavior requires methods to distill 3D movements from video data. Though keypoint tracking has emerged as a widely used solution to this problem, it only provides a limited view of pose, reducing the body of an animal to a sparse set of experimenter-defined points. To more completely capture 3D pose, recent studies have fit 3D mesh models to subjects in image and video data. However, despite the importance of mice as a model organism in neuroscience research, these methods have not been applied to the 3D reconstruction of mouse behavior. Here, we present ArMo, an articulated mesh model of the laboratory mouse, and demonstrate its application to multi-camera recordings of head-fixed mice running on a spherical treadmill. Using an end-to-end gradient based optimization procedure, we fit the shape and pose of a dense 3D mouse model to data-derived keypoint and point cloud observations. The resulting reconstructions capture the shape of the animal’s surface while compactly summarizing its movements as a time series of 3D skeletal joint angles. ArMo therefore provides a novel alternative to the sparse representations of pose more commonly used in neuroscience research.
Collapse
|
37
|
Ebrahimi AS, Orlowska-Feuer P, Huang Q, Zippo AG, Martial FP, Petersen RS, Storchi R. Three-dimensional unsupervised probabilistic pose reconstruction (3D-UPPER) for freely moving animals. Sci Rep 2023; 13:155. [PMID: 36599877 PMCID: PMC9813182 DOI: 10.1038/s41598-022-25087-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 11/24/2022] [Indexed: 01/05/2023] Open
Abstract
A key step in understanding animal behaviour relies on the ability to quantify poses and movements. Methods to track body landmarks in 2D have made great progress over the last few years, but accurate 3D reconstruction of freely moving animals still represents a challenge. To address this challenge, here we develop the 3D-UPPER algorithm, which is fully automated, requires no a priori knowledge of the properties of the body and can also be applied to 2D data. We find that 3D-UPPER reduces the error in 3D reconstruction of the mouse body during freely moving behaviour by [Formula: see text] fold compared with the traditional triangulation of 2D data. To achieve that, 3D-UPPER performs an unsupervised estimation of a Statistical Shape Model (SSM) and uses this model to constrain the viable 3D coordinates. We show, by using simulated data, that our SSM estimator is robust even in datasets containing up to 50% of poses with outliers and/or missing data. In simulated and real data, SSM estimation converges rapidly, capturing behaviourally relevant changes in body shape associated with exploratory behaviours (e.g. rearing and changes in body orientation). Altogether, 3D-UPPER represents a simple tool to minimise errors in 3D reconstruction while capturing meaningful behavioural parameters.
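The constraint step - fitting a statistical shape model and projecting noisy poses onto it - can be sketched with a plain linear (PCA-style) shape model. This is a simplified stand-in, assuming flattened pose vectors and a fixed number of components, and is not the 3D-UPPER estimator itself:

```python
import numpy as np

def fit_ssm(poses, n_components=2):
    """Fit a linear statistical shape model (mean pose + principal
    directions) from an (N, K*D) array of flattened body poses."""
    mean = poses.mean(axis=0)
    _, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
    return mean, vt[:n_components]

def constrain(pose, mean, components):
    """Project a (possibly noisy) pose onto the shape model's subspace."""
    coeffs = components @ (pose - mean)
    return mean + components.T @ coeffs

# Poses drawn from a 2D subspace of a 12-dim pose space.
rng = np.random.default_rng(1)
basis = rng.standard_normal((2, 12))
clean = rng.standard_normal((300, 2)) @ basis
mean, comps = fit_ssm(clean, n_components=2)

# Constraining a corrupted pose pulls it back toward the true pose.
noisy = clean[0] + 0.5 * rng.standard_normal(12)
constrained = constrain(noisy, mean, comps)
assert np.linalg.norm(constrained - clean[0]) < np.linalg.norm(noisy - clean[0])
```

The robustness claims in the abstract concern estimating such a model when many training poses themselves contain outliers or missing coordinates, which a plain SVD fit like this one does not handle.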
Collapse
Affiliation(s)
- Aghileh S. Ebrahimi
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Patrycja Orlowska-Feuer
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Qian Huang
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Antonio G. Zippo
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Milan, Italy
| | - Franck P. Martial
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Rasmus S. Petersen
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| | - Riccardo Storchi
- Division of Neuroscience, School of Biological Science, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
| |
Collapse
|
38
|
Yao Y, Bala P, Mohan A, Bliss-Moreau E, Coleman K, Freeman SM, Machado CJ, Raper J, Zimmermann J, Hayden BY, Park HS. OpenMonkeyChallenge: Dataset and Benchmark Challenges for Pose Estimation of Non-human Primates. Int J Comput Vis 2023; 131:243-258. [PMID: 37576929 PMCID: PMC10414782 DOI: 10.1007/s11263-022-01698-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 09/22/2022] [Indexed: 11/05/2022]
Abstract
The ability to automatically estimate the pose of non-human primates as they move through the world is important for several subfields in biology and biomedicine. Inspired by the recent success of computer vision models enabled by benchmark challenges (e.g., object detection), we propose a new benchmark challenge called OpenMonkeyChallenge that facilitates collective community efforts through an annual competition to build generalizable non-human primate pose estimation models. To host the benchmark challenge, we provide a new public dataset consisting of 111,529 annotated (17 body landmarks) photographs of non-human primates in naturalistic contexts obtained from various sources including the Internet, three National Primate Research Centers, and the Minnesota Zoo. Such annotated datasets will be used for the training and testing datasets to develop generalizable models with standardized evaluation metrics. We demonstrate the effectiveness of our dataset quantitatively by comparing it with existing datasets based on seven state-of-the-art pose estimation models.
Collapse
Affiliation(s)
- Yuan Yao
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
| | - Praneet Bala
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
| | - Abhiraj Mohan
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
| | | | | | | | | | - Jessica Raper
- Emory National Primate Research Center, Atlanta, USA
| | | | | | - Hyun Soo Park
- Computer Science and Engineering, University of Minnesota, Minneapolis, USA
| |
Collapse
|
39
|
Cregg JM, Mirdamadi JL, Fortunato C, Okorokova EV, Kuper C, Nayeem R, Byun AJ, Avraham C, Buonocore A, Winner TS, Mildren RL. Highlights from the 31st Annual Meeting of the Society for the Neural Control of Movement. J Neurophysiol 2023; 129:220-234. [PMID: 36541602 PMCID: PMC9844973 DOI: 10.1152/jn.00500.2022] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Accepted: 12/16/2022] [Indexed: 12/24/2022] Open
Affiliation(s)
- Jared M Cregg
- Department of Neuroscience, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Jasmine L Mirdamadi
- Division of Physical Therapy, Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, Georgia
- Cátia Fortunato
- Department of Bioengineering, Imperial College London, London, United Kingdom
- Clara Kuper
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Rashida Nayeem
- Department of Electrical Engineering, Northeastern University, Boston, Massachusetts
- Andrew J Byun
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts
- Chen Avraham
- Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beersheva, Israel
- Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Educational, Psychological and Communication Sciences, Suor Orsola Benincasa University, Naples, Italy
- Taniel S Winner
- Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, Georgia
- Robyn L Mildren
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, Maryland

40
Animal Pose Tracking: 3D Multimodal Dataset and Token-based Pose Optimization. Int J Comput Vis 2022. DOI: 10.1007/s11263-022-01714-5.
Abstract
Accurate tracking of the 3D pose of animals from video recordings is critical for many behavioral studies, yet there is a dearth of publicly available datasets that the computer vision community could use for model development. We here introduce the Rodent3D dataset that records animals exploring their environment and/or interacting with each other with multiple cameras and modalities (RGB, depth, thermal infrared). Rodent3D consists of 200 min of multimodal video recordings from up to three thermal and three RGB-D synchronized cameras (approximately 4 million frames). For the task of optimizing estimates of pose sequences provided by existing pose estimation methods, we provide a baseline model called OptiPose. While deep-learned attention mechanisms have been used for pose estimation in the past, with OptiPose, we propose a different way by representing 3D poses as tokens for which deep-learned context models pay attention to both spatial and temporal keypoint patterns. Our experiments show how OptiPose is highly robust to noise and occlusion and can be used to optimize pose sequences provided by state-of-the-art models for animal pose estimation.
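The token-based idea summarized in this abstract can be illustrated with a minimal sketch. This is not the actual OptiPose implementation; all names, dimensions, and the linear embedding below are illustrative assumptions. Each 3D keypoint at each frame becomes one token, and a single scaled dot-product self-attention pass mixes information across all keypoint-frame pairs, i.e. across both spatial and temporal patterns:

```python
import numpy as np

def self_attention(tokens, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a set of tokens.

    tokens: (n_tokens, d) array; here each token is one 3D keypoint at
    one time step, linearly embedded. Weight matrices are (d, d).
    """
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])        # (n_tokens, n_tokens)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # row-wise softmax
    return weights @ v                            # context-mixed tokens

rng = np.random.default_rng(0)
T, K, d = 5, 4, 8                   # 5 frames, 4 keypoints, embed dim 8
poses = rng.normal(size=(T, K, 3))  # toy noisy 3D pose sequence
embed = rng.normal(size=(3, d))     # toy linear embedding of (x, y, z)
tokens = (poses @ embed).reshape(T * K, d)  # one token per keypoint-frame
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
refined = self_attention(tokens, w_q, w_k, w_v).reshape(T, K, d)
```

Because attention spans every keypoint-frame pair, each refined token can draw on same-frame (spatial) and same-keypoint (temporal) context, which is the property the abstract highlights.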
41
Luxem K, Mocellin P, Fuhrmann F, Kürsch J, Miller SR, Palop JJ, Remy S, Bauer P. Identifying behavioral structure from deep variational embeddings of animal motion. Commun Biol 2022; 5:1267. PMID: 36400882; PMCID: PMC9674640; DOI: 10.1038/s42003-022-04080-7.
Abstract
Quantification and detection of the hierarchical organization of behavior is a major challenge in neuroscience. Recent advances in markerless pose estimation enable the visualization of high-dimensional spatiotemporal behavioral dynamics of animal motion. However, robust and reliable technical approaches are needed to uncover underlying structure in these data and to segment behavior into discrete hierarchically organized motifs. Here, we present an unsupervised probabilistic deep learning framework that identifies behavioral structure from deep variational embeddings of animal motion (VAME). By using a mouse model of beta amyloidosis as a use case, we show that VAME not only identifies discrete behavioral motifs, but also captures a hierarchical representation of the motif's usage. The approach allows for the grouping of motifs into communities and the detection of differences in community-specific motif usage of individual mouse cohorts that were undetectable by human visual observation. Thus, we present a robust approach for the segmentation of animal motion that is applicable to a wide range of experimental setups, models and conditions without requiring supervised or a-priori human interference.
Affiliation(s)
- Kevin Luxem
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Petra Mocellin
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Falko Fuhrmann
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Johannes Kürsch
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Stephanie R. Miller
- Gladstone Institute of Neurological Disease, San Francisco, CA 94158, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Jorge J. Palop
- Gladstone Institute of Neurological Disease, San Francisco, CA 94158, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA 94158, USA
- Stefan Remy
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; Center for Behavioral Brain Sciences (CBBS), Magdeburg, Germany; German Center for Mental Health (DZPG), Magdeburg, Germany
- Pavol Bauer
- Leibniz Institute for Neurobiology (LIN), Department of Cellular Neuroscience, Magdeburg, Germany; German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany

42
McKay A, Costa EK, Chen J, Hu CK, Chen X, Bedbrook CN, Khondker RC, Thielvoldt M, Priya Singh P, Wyss-Coray T, Brunet A. An automated feeding system for the African killifish reveals the impact of diet on lifespan and allows scalable assessment of associative learning. eLife 2022; 11:e69008. PMID: 36354233; PMCID: PMC9788828; DOI: 10.7554/elife.69008.
Abstract
The African turquoise killifish is an exciting new vertebrate model for aging studies. A significant challenge for any model organism is the control over its diet in space and time. To address this challenge, we created an automated and networked fish feeding system. Our automated feeder is designed to be open-source, easily transferable, and built from widely available components. Compared to manual feeding, our automated system is highly precise and flexible. As a proof of concept for the feeding flexibility of these automated feeders, we define a favorable regimen for growth and fertility for the African killifish and a dietary restriction regimen where both feeding time and quantity are reduced. We show that this dietary restriction regimen extends lifespan in males (but not in females) and impacts the transcriptomes of killifish livers in a sex-specific manner. Moreover, combining our automated feeding system with a video camera, we establish a quantitative associative learning assay to provide an integrative measure of cognitive performance for the killifish. The ability to precisely control food delivery in the killifish opens new areas to assess lifespan and cognitive behavior dynamics and to screen for dietary interventions and drugs in a scalable manner previously impossible with traditional vertebrate model organisms.
Affiliation(s)
- Andrew McKay
- Department of Genetics, Stanford University, Stanford, United States
- Biology Graduate Program, Stanford University, Stanford, United States
- Emma K Costa
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, United States
- Neurosciences Interdepartmental Program, Stanford University School of Medicine, Stanford, United States
- Jingxun Chen
- Department of Genetics, Stanford University, Stanford, United States
- Chi-Kuo Hu
- Department of Genetics, Stanford University, Stanford, United States
- Xiaoshan Chen
- Department of Genetics, Stanford University, Stanford, United States
- Claire N Bedbrook
- Department of Genetics, Stanford University, Stanford, United States
- Department of Bioengineering, Stanford University, Stanford, United States
- Tony Wyss-Coray
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, United States
- Glenn Laboratories for the Biology of Aging, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Anne Brunet
- Department of Genetics, Stanford University, Stanford, United States
- Glenn Laboratories for the Biology of Aging, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States

43
Kano F, Naik H, Keskin G, Couzin ID, Nagy M. Head-tracking of freely-behaving pigeons in a motion-capture system reveals the selective use of visual field regions. Sci Rep 2022; 12:19113. PMID: 36352049; PMCID: PMC9646700; DOI: 10.1038/s41598-022-21931-9.
Abstract
Using a motion-capture system and custom head-calibration methods, we reconstructed the head-centric view of freely behaving pigeons and examined how they orient their head when presented with various types of attention-getting objects at various relative locations. Pigeons predominantly employed their retinal specializations to view a visual target, namely their foveas projecting laterally (at an azimuth of ± 75°) into the horizon, and their visually-sensitive "red areas" projecting broadly into the lower-frontal visual field. Pigeons used their foveas to view any distant object while they used their red areas to view a nearby object on the ground (< 50 cm). Pigeons "fixated" a visual target with their foveas; the intervals between head-saccades were longer when the visual target was viewed by birds' foveas compared to when it was viewed by any other region. Furthermore, pigeons showed a weak preference to use their right eye to examine small objects distinctive in detailed features and their left eye to view threat-related or social stimuli. Despite the known difficulty in identifying where a bird is attending, we show that it is possible to estimate the visual attention of freely-behaving birds by tracking the projections of their retinal specializations in their visual field with cutting-edge methods.
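The classification described in this abstract (foveas at roughly ±75° azimuth for distant targets, "red areas" for nearby ground objects under ~50 cm) can be sketched in a few lines. This is a toy illustration, not the authors' head-calibration pipeline; the function names and the ±15° foveal tolerance are assumptions introduced here:

```python
# Illustrative thresholds from the abstract: foveas project laterally at
# about +/-75 deg azimuth; the "red area" views nearby objects (< 50 cm).
FOVEA_AZIMUTH_DEG = 75.0
FOVEA_TOLERANCE_DEG = 15.0    # assumed tolerance, not from the paper
RED_AREA_MAX_DIST_CM = 50.0

def head_centric_azimuth(head_yaw_deg, target_bearing_deg):
    """Azimuth of the target in head coordinates, wrapped to (-180, 180]."""
    az = (target_bearing_deg - head_yaw_deg) % 360.0
    return az - 360.0 if az > 180.0 else az

def viewing_region(head_yaw_deg, target_bearing_deg, target_dist_cm):
    """Classify which retinal specialization plausibly views the target."""
    az = head_centric_azimuth(head_yaw_deg, target_bearing_deg)
    if target_dist_cm < RED_AREA_MAX_DIST_CM:
        return "red_area"                       # nearby object on the ground
    if abs(abs(az) - FOVEA_AZIMUTH_DEG) <= FOVEA_TOLERANCE_DEG:
        return "left_fovea" if az < 0 else "right_fovea"
    return "other"

# e.g. a distant target 75 deg to the right of the head falls on the
# right fovea; a target 30 cm away falls in the red area.
print(viewing_region(0.0, 75.0, 200.0))   # right_fovea
print(viewing_region(90.0, 60.0, 30.0))   # red_area
```

Tracking such head-centric projections frame by frame is, in essence, how the study estimates where a freely behaving bird is attending.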
Affiliation(s)
- Fumihiro Kano
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Universitätsstraße 10, 78464 Konstanz, Germany; Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany
- Hemal Naik
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany; Department of Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz, Germany; Computer Aided Medical Procedures, Technische Universität München, Munich, Germany; Department of Biology, University of Konstanz, Konstanz, Germany
- Göksel Keskin
- MTA-ELTE Lendület Collective Behaviour Research Group, Hungarian Academy of Sciences, Budapest, Hungary; Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary
- Iain D. Couzin
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Universitätsstraße 10, 78464 Konstanz, Germany; Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany; Department of Biology, University of Konstanz, Konstanz, Germany
- Máté Nagy
- Department of Collective Behaviour, Max Planck Institute of Animal Behavior, Konstanz, Germany; MTA-ELTE Lendület Collective Behaviour Research Group, Hungarian Academy of Sciences, Budapest, Hungary; Department of Biological Physics, Eötvös Loránd University, Budapest, Hungary

44
Mao D. Neural Correlates of Spatial Navigation in Primate Hippocampus. Neurosci Bull 2022; 39:315-327. PMID: 36319893; PMCID: PMC9905402; DOI: 10.1007/s12264-022-00968-w.
Abstract
The hippocampus has been extensively implicated in spatial navigation in rodents and more recently in bats. Numerous studies have revealed that various kinds of spatial information are encoded across hippocampal regions. In contrast, investigations of spatial behavioral correlates in the primate hippocampus are scarce and have been mostly limited to head-restrained subjects during virtual navigation. However, recent advances made in freely-moving primates suggest marked differences in spatial representations from rodents, albeit some similarities. Here, we review empirical studies examining the neural correlates of spatial navigation in the primate (including human) hippocampus at the levels of local field potentials and single units. The lower frequency theta oscillations are often intermittent. Single neuron responses are highly mixed and task-dependent. We also discuss neuronal selectivity in the eye and head coordinates. Finally, we propose that future studies should focus on investigating both intrinsic and extrinsic population activity and examining spatial coding properties in large-scale hippocampal-neocortical networks across tasks.
Affiliation(s)
- Dun Mao
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China

45
Monsees A, Voit KM, Wallace DJ, Sawinski J, Charyasz E, Scheffler K, Macke JH, Kerr JND. Estimation of skeletal kinematics in freely moving rodents. Nat Methods 2022; 19:1500-1509. PMID: 36253644; DOI: 10.1038/s41592-022-01634-9.
Abstract
Forming a complete picture of the relationship between neural activity and skeletal kinematics requires quantification of skeletal joint biomechanics during free behavior; however, without detailed knowledge of the underlying skeletal motion, inferring limb kinematics using surface-tracking approaches is difficult, especially for animals where the relationship between the surface and underlying skeleton changes during motion. Here we developed a videography-based method enabling detailed three-dimensional kinematic quantification of an anatomically defined skeleton in untethered freely behaving rats and mice. This skeleton-based model was constrained using anatomical principles and joint motion limits and provided skeletal pose estimates for a range of body sizes, even when limbs were occluded. Model-inferred limb positions and joint kinematics during gait and gap-crossing behaviors were verified by direct measurement of either limb placement or limb kinematics using inertial measurement units. Together we show that complex decision-making behaviors can be accurately reconstructed at the level of skeletal kinematics using our anatomically constrained model.
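The core constraint idea in this abstract, restricting inferred poses to anatomically possible joint motion, can be sketched as a projection onto joint-angle limits. This is a minimal illustration in the spirit of the method, not the authors' model; the joint names and numeric bounds below are hypothetical:

```python
import numpy as np

# Hypothetical joint-angle limits (radians). A real anatomically
# constrained model would derive these from the measured skeleton.
JOINT_LIMITS = {
    "knee":  (0.0, 2.6),    # flexion only
    "ankle": (-0.8, 0.8),
    "hip":   (-1.2, 2.0),
}

def constrain_pose(angles):
    """Project a dict of joint angles onto the allowed anatomical ranges.

    Mimics, in spirit, how joint motion limits keep skeletal pose
    estimates plausible even when limbs are occluded in the video.
    """
    return {
        joint: float(np.clip(theta, *JOINT_LIMITS[joint]))
        for joint, theta in angles.items()
    }

raw = {"knee": 3.1, "ankle": -0.2, "hip": -2.0}  # e.g. noisy estimates
plausible = constrain_pose(raw)  # knee and hip clipped to their limits
```

In a full model this projection would run inside the pose-fitting loop, so occluded limbs default to the nearest anatomically valid configuration rather than drifting freely.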
Affiliation(s)
- Arne Monsees
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, Bonn, Germany
- Kay-Michael Voit
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, Bonn, Germany
- Damian J Wallace
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, Bonn, Germany
- Juergen Sawinski
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, Bonn, Germany
- Edyta Charyasz
- High-Field MR Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karls University of Tübingen, Tübingen, Germany
- Klaus Scheffler
- High-Field MR Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karls University of Tübingen, Tübingen, Germany
- Jakob H Macke
- Machine Learning in Science, Eberhard Karls University of Tübingen, Tübingen, Germany; Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Jason N D Kerr
- Department of Behavior and Brain Organization, Max Planck Institute for Neurobiology of Behavior, Bonn, Germany

46
Kadmon Harpaz N, Hardcastle K, Ölveczky BP. Learning-induced changes in the neural circuits underlying motor sequence execution. Curr Opin Neurobiol 2022; 76:102624. PMID: 36030613; PMCID: PMC11125547; DOI: 10.1016/j.conb.2022.102624.
Abstract
As the old adage goes: practice makes perfect. Yet, the neural mechanisms by which rote repetition transforms a halting behavior into a fluid, effortless, and "automatic" action are not well understood. Here we consider the possibility that well-practiced motor sequences, which initially rely on higher-level decision-making circuits, become wholly specified in lower-level control circuits. We review studies informing this idea, discuss the constraints on such shift in control, and suggest approaches to pinpoint circuit-level changes associated with motor sequence learning.
Affiliation(s)
- Naama Kadmon Harpaz
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University
- Kiah Hardcastle
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University
- Bence P Ölveczky
- Department of Organismic and Evolutionary Biology and Center for Brain Science, Harvard University

47
Bumgarner JR, Becker-Krail DD, White RC, Nelson RJ. Machine learning and deep learning frameworks for the automated analysis of pain and opioid withdrawal behaviors. Front Neurosci 2022; 16:953182. PMID: 36225736; PMCID: PMC9549170; DOI: 10.3389/fnins.2022.953182.
Abstract
The automation of behavioral tracking and analysis in preclinical research can serve to advance the rate of research outcomes, increase experimental scalability, and challenge the scientific reproducibility crisis. Recent advances in the efficiency, accuracy, and accessibility of deep learning (DL) and machine learning (ML) frameworks are enabling this automation. As the ongoing opioid epidemic continues to worsen alongside increasing rates of chronic pain, there are ever-growing needs to understand opioid use disorders (OUDs) and identify non-opioid therapeutic options for pain. In this review, we examine how these related needs can be advanced by the development and validation of DL and ML resources for automated pain and withdrawal behavioral tracking. We aim to emphasize the utility of these tools for automated behavioral analysis, and we argue that currently developed models should be deployed to address novel questions in the fields of pain and OUD research.
48
Orlowska-Feuer P, Ebrahimi AS, Zippo AG, Petersen RS, Lucas RJ, Storchi R. Look-up and look-down neurons in the mouse visual thalamus during freely moving exploration. Curr Biol 2022; 32:3987-3999.e4. PMID: 35973431; PMCID: PMC9616738; DOI: 10.1016/j.cub.2022.07.049.
Abstract
Visual information reaches cortex via the thalamic dorsal lateral geniculate nucleus (dLGN). dLGN activity is modulated by global sleep/wake states and arousal, indicating that it is not simply a passive relay station. However, its potential for more specific visuomotor integration is largely unexplored. We addressed this question by developing robust 3D video reconstruction of mouse head and body during spontaneous exploration paired with simultaneous neuronal recordings from dLGN. Unbiased evaluation of a wide range of postures and movements revealed a widespread coupling between neuronal activity and few behavioral parameters. In particular, postures associated with the animal looking up/down correlated with activity in >50% neurons, and the extent of this effect was comparable with that induced by full-body movements (typically locomotion). By contrast, thalamic activity was minimally correlated with other postures or movements (e.g., left/right head and body torsions). Importantly, up/down postures and full-body movements were largely independent and jointly coupled to neuronal activity. Thus, although most units were excited during full-body movements, some expressed highest firing when the animal was looking up ("look-up" neurons), whereas others expressed highest firing when the animal was looking down ("look-down" neurons). These results were observed in the dark, thus representing a genuine behavioral modulation, and were amplified in a lit arena. Our results demonstrate that the primary visual thalamus, beyond global modulations by sleep/awake states, is potentially involved in specific visuomotor integration and reveal two distinct couplings between up/down postures and neuronal activity.
Affiliation(s)
- Patrycja Orlowska-Feuer
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M13 9PL Manchester, UK
- Aghileh S Ebrahimi
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M13 9PL Manchester, UK
- Antonio G Zippo
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Via Raoul Follereau 3, 20854 Vedano al Lambro, Italy
- Rasmus S Petersen
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M13 9PL Manchester, UK
- Robert J Lucas
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M13 9PL Manchester, UK
- Riccardo Storchi
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M13 9PL Manchester, UK

49
Flavell SW, Gogolla N, Lovett-Barron M, Zelikowsky M. The emergence and influence of internal states. Neuron 2022; 110:2545-2570. PMID: 35643077; PMCID: PMC9391310; DOI: 10.1016/j.neuron.2022.04.030.
Abstract
Animal behavior is shaped by a variety of "internal states"-partially hidden variables that profoundly shape perception, cognition, and action. The neural basis of internal states, such as fear, arousal, hunger, motivation, aggression, and many others, is a prominent focus of research efforts across animal phyla. Internal states can be inferred from changes in behavior, physiology, and neural dynamics and are characterized by properties such as pleiotropy, persistence, scalability, generalizability, and valence. To date, it remains unclear how internal states and their properties are generated by nervous systems. Here, we review recent progress, which has been driven by advances in behavioral quantification, cellular manipulations, and neural population recordings. We synthesize research implicating defined subsets of state-inducing cell types, widespread changes in neural activity, and neuromodulation in the formation and updating of internal states. In addition to highlighting the significance of these findings, our review advocates for new approaches to clarify the underpinnings of internal brain states across the animal kingdom.
Affiliation(s)
- Steven W Flavell
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Nadine Gogolla
- Emotion Research Department, Max Planck Institute of Psychiatry, 80804 Munich, Germany; Circuits for Emotion Research Group, Max Planck Institute of Neurobiology, 82152 Martinsried, Germany
- Matthew Lovett-Barron
- Division of Biological Sciences-Neurobiology Section, University of California, San Diego, La Jolla, CA 92093, USA
- Moriel Zelikowsky
- Department of Neurobiology, University of Utah, Salt Lake City, UT 84112, USA

50
Bertram MG, Martin JM, McCallum ES, Alton LA, Brand JA, Brooks BW, Cerveny D, Fick J, Ford AT, Hellström G, Michelangeli M, Nakagawa S, Polverino G, Saaristo M, Sih A, Tan H, Tyler CR, Wong BB, Brodin T. Frontiers in quantifying wildlife behavioural responses to chemical pollution. Biol Rev Camb Philos Soc 2022; 97:1346-1364. PMID: 35233915; PMCID: PMC9543409; DOI: 10.1111/brv.12844.
Abstract
Animal behaviour is remarkably sensitive to disruption by chemical pollution, with widespread implications for ecological and evolutionary processes in contaminated wildlife populations. However, conventional approaches applied to study the impacts of chemical pollutants on wildlife behaviour seldom address the complexity of natural environments in which contamination occurs. The aim of this review is to guide the rapidly developing field of behavioural ecotoxicology towards increased environmental realism, ecological complexity, and mechanistic understanding. We identify research areas in ecology that to date have been largely overlooked within behavioural ecotoxicology but which promise to yield valuable insights, including within- and among-individual variation, social networks and collective behaviour, and multi-stressor interactions. Further, we feature methodological and technological innovations that enable the collection of data on pollutant-induced behavioural changes at an unprecedented resolution and scale in the laboratory and the field. In an era of rapid environmental change, there is an urgent need to advance our understanding of the real-world impacts of chemical pollution on wildlife behaviour. This review therefore provides a roadmap of the major outstanding questions in behavioural ecotoxicology and highlights the need for increased cross-talk with other disciplines in order to find the answers.
Affiliation(s)
- Michael G. Bertram
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden
- Jake M. Martin
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Erin S. McCallum
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden
- Lesley A. Alton
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Jack A. Brand
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Bryan W. Brooks
- Department of Environmental Science, Baylor University, One Bear Place, Waco, Texas 76798-7266, U.S.A.
- Daniel Cerveny
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden
- Faculty of Fisheries and Protection of Waters, South Bohemian Research Center of Aquaculture and Biodiversity of Hydrocenoses, University of South Bohemia in Ceske Budejovice, Zátiší 728/II, 389 25 Vodnany, Czech Republic
- Jerker Fick
- Department of Chemistry, Umeå University, Linnaeus väg 10, SE-907 36 Umeå, Västerbotten, Sweden
- Alex T. Ford
- Institute of Marine Sciences, University of Portsmouth, Winston Churchill Avenue, Portsmouth, Hampshire PO1 2UP, U.K.
- Gustav Hellström
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden
- Marcus Michelangeli
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden
- Department of Environmental Science and Policy, University of California, 350 E Quad, Davis, CA 95616, U.S.A.
- Shinichi Nakagawa
- Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Biological Sciences West (D26), Sydney, NSW 2052, Australia
- Giovanni Polverino
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Centre for Evolutionary Biology, School of Biological Sciences, University of Western Australia, 35 Stirling Highway, Perth, WA 6009, Australia
- Department of Ecological and Biological Sciences, Tuscia University, Via S.M. in Gradi n.4, 01100 Viterbo, Lazio, Italy
- Minna Saaristo
- Environment Protection Authority Victoria, EPA Science, 2 Terrace Way, Macleod, Victoria 3085, Australia
- Andrew Sih
- Department of Environmental Science and Policy, University of California, 350 E Quad, Davis, CA 95616, U.S.A.
- Hung Tan
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Charles R. Tyler
- Biosciences, College of Life and Environmental Sciences, University of Exeter, Stocker Road, Exeter, Devon EX4 4QD, U.K.
- Bob B.M. Wong
- School of Biological Sciences, Monash University, 25 Rainforest Walk, Melbourne, Victoria 3800, Australia
- Tomas Brodin
- Department of Wildlife, Fish, and Environmental Studies, Swedish University of Agricultural Sciences, Skogsmarksgränd 17, SE-907 36 Umeå, Västerbotten, Sweden