1. Roos J, Bancelin S, Delaire T, Wilhelmi A, Levet F, Engelhardt M, Viasnoff V, Galland R, Nägerl UV, Sibarita JB. Arkitekt: streaming analysis and real-time workflows for microscopy. Nat Methods 2024;21:1884-1894. PMID: 39294366. DOI: 10.1038/s41592-024-02404-5.
Abstract
Quantitative microscopy workflows have evolved dramatically over the past years, progressively becoming more complex with the emergence of deep learning. Long-standing challenges such as three-dimensional segmentation of complex microscopy data can finally be addressed, and new imaging modalities are breaking records in both resolution and acquisition speed, generating gigabytes if not terabytes of data per day. With this shift in bioimage workflows comes an increasing need for efficient orchestration and data management, necessitating multitool interoperability and the ability to span dedicated computing resources. However, existing solutions are still limited in their flexibility and scalability and are usually restricted to offline analysis. Here we introduce Arkitekt, an open-source middleman between users and bioimage apps that enables complex quantitative microscopy workflows in real time. It allows the orchestration of popular bioimage software locally or remotely in a reliable and efficient manner. It includes visualization and analysis modules, but also mechanisms to execute source code and pilot acquisition software, making 'smart microscopy' a reality.
Affiliation(s)
- Johannes Roos
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Stéphane Bancelin
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Tom Delaire
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Florian Levet
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Bordeaux Imaging Center, University of Bordeaux, CNRS, INSERM, Bordeaux, France
- Maren Engelhardt
- Frankfurt Institute for Advanced Studies, Frankfurt, Germany
- Institute of Anatomy and Cell Biology, Medical Faculty, Johannes Kepler University, Linz, Austria
- Virgile Viasnoff
- Mechanobiology Institute, National University of Singapore, Singapore, Singapore
- Rémi Galland
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- U Valentin Nägerl
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
- Jean-Baptiste Sibarita
- Interdisciplinary Institute for Neuroscience, University of Bordeaux, CNRS, Bordeaux, France
2. Mohn JL, Baese-Berk MM, Jaramillo S. Selectivity to acoustic features of human speech in the auditory cortex of the mouse. Hear Res 2024;441:108920. PMID: 38029503. PMCID: PMC10787375. DOI: 10.1016/j.heares.2023.108920.
Abstract
A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.
Affiliation(s)
- Jennifer L Mohn
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States of America
- Melissa M Baese-Berk
- Department of Linguistics, University of Oregon, Eugene, OR 97403, United States of America; Department of Linguistics, University of Chicago, Chicago, IL 60637, United States of America
- Santiago Jaramillo
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403, United States of America
3. Mohn JL, Baese-Berk MM, Jaramillo S. Selectivity to acoustic features of human speech in the auditory cortex of the mouse. bioRxiv [Preprint] 2023:2023.09.20.558699. PMID: 37790479. PMCID: PMC10542132. DOI: 10.1101/2023.09.20.558699.
Abstract
A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.
Affiliation(s)
- Jennifer L. Mohn
- Institute of Neuroscience, University of Oregon, Eugene, OR 97403
4. Melchor J, Vergara J, Figueroa T, Morán I, Lemus L. Formant-Based Recognition of Words and Other Naturalistic Sounds in Rhesus Monkeys. Front Neurosci 2021;15:728686. PMID: 34776842. PMCID: PMC8586527. DOI: 10.3389/fnins.2021.728686.
Abstract
In social animals, identifying sounds is critical for communication. In humans, the acoustic parameters involved in speech recognition, such as the formant frequencies derived from the resonance of the supralaryngeal vocal tract, have been well documented. However, how formants contribute to recognizing learned sounds in non-human primates remains unclear. To determine this, we trained two rhesus monkeys to discriminate target and non-target sounds presented in sequences of 1–3 sounds. After training, we performed three experiments: (1) we tested the monkeys’ accuracy and reaction times during the discrimination of various acoustic categories; (2) their ability to discriminate morphing sounds; and (3) their ability to identify sounds consisting of formant 1 (F1), formant 2 (F2), or F1 and F2 (F1F2) pass filters. Our results indicate that macaques can learn diverse sounds and discriminate them from morphs and from formants F1 and F2, suggesting that information from a few acoustic parameters suffices for recognizing complex sounds. We anticipate that future neurophysiological experiments in this paradigm may help elucidate how formants contribute to the recognition of sounds.
Affiliation(s)
- Jonathan Melchor
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- José Vergara
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States
- Tonatiuh Figueroa
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Isaac Morán
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Luis Lemus
- Department of Cognitive Neuroscience, Institute of Cell Physiology, Universidad Nacional Autónoma de México, Mexico City, Mexico
5. O’Sullivan C, Weible AP, Wehr M. Disruption of Early or Late Epochs of Auditory Cortical Activity Impairs Speech Discrimination in Mice. Front Neurosci 2020;13:1394. PMID: 31998064. PMCID: PMC6965026. DOI: 10.3389/fnins.2019.01394.
Abstract
Speech evokes robust activity in auditory cortex, which contains information over a wide range of spatial and temporal scales. It remains unclear which components of these neural representations are causally involved in the perception and processing of speech sounds. Here we compared the relative importance of early and late speech-evoked activity for consonant discrimination. We trained mice to discriminate the initial consonants in spoken words, and then tested the effect of optogenetically suppressing different temporal windows of speech-evoked activity in auditory cortex. We found that both early and late suppression disrupted performance equivalently. These results suggest that mice are impaired at recognizing either type of disrupted representation because it differs from those learned in training.
Affiliation(s)
- Conor O’Sullivan
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Department of Biology, University of Oregon, Eugene, OR, United States
- Aldis P. Weible
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Michael Wehr
- Institute of Neuroscience, University of Oregon, Eugene, OR, United States
- Department of Psychology, University of Oregon, Eugene, OR, United States
- Correspondence: Michael Wehr
6. Saunders JL, Wehr M. Mice can learn phonetic categories. J Acoust Soc Am 2019;145:1168. PMID: 31067917. PMCID: PMC6910010. DOI: 10.1121/1.5091776.
Abstract
Speech is perceived as a series of relatively invariant phonemes despite extreme variability in the acoustic signal. To be perceived as nearly identical phonemes, speech sounds that vary continuously over a range of acoustic parameters must be perceptually discretized by the auditory system. Such many-to-one mappings of undifferentiated sensory information to a finite number of discrete categories are ubiquitous in perception. Although many mechanistic models of phonetic perception have been proposed, they remain largely unconstrained by neurobiological data. Current human neurophysiological methods lack the necessary spatiotemporal resolution to provide it: speech is too fast, and the neural circuitry involved is too small. This study demonstrates that mice are capable of learning generalizable phonetic categories, and can thus serve as a model for phonetic perception. Mice learned to discriminate consonants and generalized consonant identity across novel vowel contexts and speakers, consistent with true category learning. A mouse model, given the powerful genetic and electrophysiological tools available in mice for probing neural circuits, has the potential to substantially augment a mechanistic understanding of phonetic perception.
Affiliation(s)
- Jonny L Saunders
- University of Oregon, Institute of Neuroscience and Department of Psychology, Eugene, Oregon 97403, USA
- Michael Wehr
- University of Oregon, Institute of Neuroscience and Department of Psychology, Eugene, Oregon 97403, USA