1. Intra-individual variation in the songs of humpback whales suggests they are sonically searching for conspecifics. Learn Behav 2022;50:456-481. PMID: 34791610. DOI: 10.3758/s13420-021-00495-0.
Abstract
Observations of animals' vocal actions can provide important clues about how they communicate and about how they perceive and react to changing situations. Here, analyses of consecutive songs produced by singing humpback whales recorded off the coast of Hawaii revealed that singers constantly vary the acoustic qualities of their songs within prolonged song sessions. Unlike the progressive changes in song structure that singing humpback whales make across months and years, intra-individual acoustic variations within song sessions appear to be largely stochastic. Additionally, four sequentially produced song components (or "themes") were each found to vary in unique ways. The most extensively used theme was highly variable in overall duration within and across song sessions, but varied relatively little in frequency content. In contrast, the remaining themes varied greatly in frequency content, but showed less variation in duration. Analyses of variations in the amount of time singers spent producing the four themes suggest that the mechanisms that determine when singers transition between themes may be comparable to those that control when terrestrial animals move their eyes to fixate on different positions as they examine visual scenes. The dynamic changes that individual whales make to songs within song sessions are counterproductive if songs serve mainly to provide conspecifics with indications of a singer's fitness. Instead, within-session changes to the acoustic features of songs may serve to enhance a singer's capacity to echoically detect, localize, and track conspecifics from long distances.
2. Wagner JD, Gelman A, Hancock KE, Chung Y, Delgutte B. Rabbits use both spectral and temporal cues to discriminate the fundamental frequency of harmonic complexes with missing fundamentals. J Neurophysiol 2022;127:290-312. PMID: 34879207. PMCID: PMC8759963. DOI: 10.1152/jn.00366.2021.
Abstract
The pitch of harmonic complex tones (HCTs) common in speech, music, and animal vocalizations plays a key role in the perceptual organization of sound. Unraveling the neural mechanisms of pitch perception requires animal models, but little is known about complex pitch perception by animals, and some species appear to use different pitch mechanisms than humans. Here, we tested rabbits' ability to discriminate the fundamental frequency (F0) of HCTs with missing fundamentals, using a behavioral paradigm inspired by foraging behavior in which rabbits learned to harness a spatial gradient in F0 to find the location of a virtual target within a room for a food reward. Rabbits were initially trained to discriminate HCTs with F0s in the range 400-800 Hz and with harmonics covering a wide frequency range (800-16,000 Hz), and were then tested with stimuli differing in spectral composition to probe the role of harmonic resolvability (experiment 1), in F0 range (experiment 2), or in both F0 and spectral content (experiment 3). Together, these experiments show that rabbits can discriminate HCTs over a wide F0 range (200-1,600 Hz) encompassing the range of conspecific vocalizations, and can use either the spectral pattern of harmonics resolved by the cochlea for higher F0s or temporal envelope cues resulting from interaction between unresolved harmonics for lower F0s. The qualitative similarity of these results to human performance supports the use of rabbits as an animal model for studies of pitch mechanisms, provided that species differences in cochlear frequency selectivity and in the F0 range of vocalizations are taken into account.

NEW & NOTEWORTHY Understanding the neural mechanisms of pitch perception requires experiments in animal models, but little is known about pitch perception by animals. Here we show that rabbits, a popular animal in auditory neuroscience, can discriminate complex sounds differing in pitch using either spectral cues or temporal cues. The results suggest that the role of spectral cues in pitch perception by animals may have been underestimated by predominantly testing low frequencies in the range of the human voice.
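The temporal cue described above, periodicity carried by interacting unresolved harmonics, can be illustrated with a short sketch: a harmonic complex missing its fundamental still has a waveform periodicity at 1/F0 that an autocorrelation analysis recovers. This is a toy illustration with assumed parameters, not the study's stimuli or analysis.

```python
import numpy as np

def harmonic_complex(f0, harmonics, fs=48000, dur=0.1):
    """Sum of equal-amplitude cosine harmonics of f0. With harmonics
    starting at 2, the stimulus has no energy at f0 itself (a
    'missing fundamental' complex)."""
    t = np.arange(int(fs * dur)) / fs
    return sum(np.cos(2 * np.pi * f0 * h * t) for h in harmonics)

def periodicity_estimate(x, fs, fmin=100.0, fmax=1600.0):
    """Dominant waveform periodicity from the autocorrelation peak,
    a crude stand-in for a temporal (envelope) pitch cue."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

x = harmonic_complex(400.0, harmonics=range(2, 11))  # energy at 800-4000 Hz only
print(periodicity_estimate(x, 48000))  # recovers ~400 Hz despite no 400 Hz energy
```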
Affiliation(s)
- Joseph D. Wagner: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Biomedical Engineering, Boston University, Boston, Massachusetts
- Alice Gelman: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
- Kenneth E. Hancock: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Yoojin Chung: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
- Bertrand Delgutte: Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts
3. Stidsholt L, Greif S, Goerlitz HR, Beedholm K, Macaulay J, Johnson M, Madsen PT. Hunting bats adjust their echolocation to receive weak prey echoes for clutter reduction. Sci Adv 2021;7(10):eabf1367. PMID: 33658207. PMCID: PMC7929515. DOI: 10.1126/sciadv.abf1367.
Abstract
How animals extract information from their surroundings to guide motor patterns is central to their survival. Here, we use echo-recording tags to show how wild hunting bats adjust their sensory strategies to their prey and natural environment. When searching, bats maximize the chances of detecting small prey by using large sensory volumes. During prey pursuit, they trade spatial for temporal information by reducing sensory volumes while increasing the update rate and redundancy of their sensory scenes. These adjustments lead to very weak prey echoes that bats protect from interference by segregating prey sensory streams from the background using a combination of fast-acting sensory and motor strategies. Counterintuitively, these weak sensory scenes allow bats to be efficient hunters close to background clutter, broadening the niches available to hunt for insects.
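The notion of a sensory volume set by call level and hearing threshold can be sketched with a toy active-sonar budget; every parameter value below (source level, target strength, absorption, threshold) is an illustrative assumption, not a measurement from the paper.

```python
import numpy as np

def echo_level(r, source_level=120.0, target_strength=-40.0, alpha=1.0):
    """Echo level (dB) for a target at range r metres: source level minus
    two-way spherical spreading and absorption (alpha in dB/m), plus
    target strength. All values are illustrative, not bat measurements."""
    two_way_loss = 2.0 * (20.0 * np.log10(r) + alpha * r)
    return source_level - two_way_loss + target_strength

def detection_range(threshold=0.0, **kwargs):
    """Largest range at which the echo still clears the hearing threshold."""
    r = np.linspace(0.1, 50.0, 5000)
    audible = echo_level(r, **kwargs) >= threshold
    return r[audible][-1] if audible.any() else 0.0

# A louder call enlarges the detection range, and the searchable
# ("sensory") volume grows roughly as the cube of that range.
r_soft = detection_range(source_level=120.0)
r_loud = detection_range(source_level=130.0)
print(r_soft, r_loud)
```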
Affiliation(s)
- Laura Stidsholt: Zoophysiology, Department of Biology, Aarhus University, Aarhus, Denmark
- Stefan Greif: Department of Zoology, Tel Aviv University, Tel Aviv, Israel; Acoustic and Functional Ecology, Max Planck Institute for Ornithology, Seewiesen, Germany
- Holger R Goerlitz: Acoustic and Functional Ecology, Max Planck Institute for Ornithology, Seewiesen, Germany
- Kristian Beedholm: Zoophysiology, Department of Biology, Aarhus University, Aarhus, Denmark
- Jamie Macaulay: Zoophysiology, Department of Biology, Aarhus University, Aarhus, Denmark
- Mark Johnson: Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
4. Warren WH. Information Is Where You Find It: Perception as an Ecologically Well-Posed Problem. Iperception 2021;12:20416695211000366. PMID: 33815740. PMCID: PMC7995459. DOI: 10.1177/20416695211000366.
Abstract
Texts on visual perception typically begin with the following premise: Vision is an ill-posed problem, and perception is underdetermined by the available information. If this were really the case, however, it is hard to see how vision could ever get off the ground. James Gibson's signal contribution was his hypothesis that for every perceivable property of the environment, however subtle, there must be a higher order variable of information, however complex, that specifies it, if only we are clever enough to find it. Such variables are informative about behaviorally relevant properties within the physical and ecological constraints of a species' niche. Sensory ecology is replete with instructive examples, including weakly electric fish, the narwhal's tusk, and insect flight control. In particular, I elaborate the case of passing through gaps. Optic flow is sufficient to control locomotion around obstacles and through openings. The affordances of the environment, such as gap passability, are specified by action-scaled information. Logically ill-posed problems may thus, on closer inspection, be ecologically well-posed.
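A classic example of such a higher-order optical variable is Lee's tau: the ratio of an object's optical angle to its rate of expansion specifies time-to-contact without separate knowledge of distance or speed. A minimal sketch with illustrative numbers, not taken from the paper:

```python
import numpy as np

def tau_from_optics(size, distance, speed, dt=1e-3):
    """An observer approaches an object at constant speed. The optical
    angle subtended by an object of size S at distance Z is
    2*atan(S / (2Z)); tau = theta / (d theta / dt) specifies
    time-to-contact directly, with no explicit distance or speed."""
    z0, z1 = distance, distance - speed * dt
    theta0 = 2 * np.arctan(size / (2 * z0))
    theta1 = 2 * np.arctan(size / (2 * z1))
    return theta0 / ((theta1 - theta0) / dt)

true_ttc = 10.0 / 2.0  # 10 m away, closing at 2 m/s: 5 s to contact
print(tau_from_optics(size=0.5, distance=10.0, speed=2.0), true_ttc)
```

The optical quantity alone recovers the behaviorally relevant property (time-to-contact), which is the sense in which the problem is ecologically well-posed.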
5. Weisser A, Buchholz JM, Keidser G. Complex Acoustic Environments: Review, Framework, and Subjective Model. Trends Hear 2020;23:2331216519881346. PMID: 31808369. PMCID: PMC6900675. DOI: 10.1177/2331216519881346.
Abstract
The concept of complex acoustic environments has appeared in several unrelated research areas within acoustics in different variations. Based on a review of the usage and evolution of this concept in the literature, a relevant framework was developed, which includes nine broad characteristics that are thought to drive the complexity of acoustic scenes. The framework was then used to study the most relevant characteristics for stimuli of realistic, everyday, acoustic scenes: multiple sources, source diversity, reverberation, and the listener's task. The effect of these characteristics on perceived scene complexity was then evaluated in an exploratory study that reproduced the same stimuli with a three-dimensional loudspeaker array inside an anechoic chamber. Sixty-five subjects listened to the scenes and for each one had to rate 29 attributes, including complexity, both with and without target speech in the scenes. The data were analyzed using three-way principal component analysis with a (2 × 3 × 2) Tucker3 model in the dimensions of scales (or ratings), scenes, and subjects, explaining 42% of variation in the data. "Comfort" and "variability" were the dominant scale components, which span the perceived complexity. Interaction effects were observed, including the additional task of attending to target speech shifting the complexity rating closer to the comfort scale. Also, speech contained in the background scenes introduced a second subject component, which suggests that some subjects are more distracted than others by background speech when listening to target speech. The results are interpreted in light of the proposed framework.
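For readers unfamiliar with Tucker3, the sketch below computes a truncated higher-order SVD, a common stand-in for (and initializer of) the least-squares Tucker3 fit, on a toy ratings array. The 29-scale and 65-subject dimensions come from the abstract; the 12-scene dimension and the random data are assumptions for illustration only.

```python
import numpy as np

def unfold(X, mode):
    """Matricize tensor X along `mode` (mode-n unfolding)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tucker3(X, ranks):
    """Higher-order SVD sketch of a Tucker3 decomposition: one factor
    matrix per mode (scales, scenes, subjects) plus a small core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = X
    for mode, u in enumerate(factors):  # project each mode onto its factors
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Expand the core back through the factor matrices."""
    X = core
    for mode, u in enumerate(factors):
        X = np.moveaxis(np.tensordot(u, np.moveaxis(X, mode, 0), axes=1), 0, mode)
    return X

rng = np.random.default_rng(0)
X = rng.standard_normal((29, 12, 65))     # toy: 29 scales x 12 scenes x 65 subjects
core, factors = tucker3(X, ranks=(2, 3, 2))
Xhat = reconstruct(core, factors)
explained = 1 - np.sum((X - Xhat) ** 2) / np.sum(X ** 2)
print(core.shape, explained)
```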
Affiliation(s)
- Adam Weisser: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, Australia; The HEARing Cooperative Research Centre, Carlton, Victoria, Australia
- Jörg M Buchholz: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, Australia; The HEARing Cooperative Research Centre, Carlton, Victoria, Australia
- Gitte Keidser: The HEARing Cooperative Research Centre, Carlton, Victoria, Australia; National Acoustic Laboratory, The Hearing Hub, Macquarie University, Sydney, New South Wales, Australia
6. Characterizing long-range search behavior in Diptera using complex 3D virtual environments. Proc Natl Acad Sci U S A 2020;117:12201-12207. PMID: 32424090. PMCID: PMC7275712. DOI: 10.1073/pnas.1912124117.
Abstract
The exemplary search capabilities of flying insects have established them as one of the most diverse taxa on Earth. However, we still lack the fundamental ability to quantify, represent, and predict trajectories under natural contexts to understand search and its applications. For example, flying insects have evolved in complex multimodal three-dimensional (3D) environments, but we do not yet understand which features of the natural world are used to locate distant objects. Here, we independently and dynamically manipulate 3D objects, airflow fields, and odor plumes in virtual reality over large spatial and temporal scales. We demonstrate that flies make use of features such as foreground segmentation, perspective, motion parallax, and integration of multiple modalities to navigate to objects in a complex 3D landscape while in flight. We first show that tethered flying insects of multiple species navigate to virtual 3D objects. Using the apple fly Rhagoletis pomonella, we then measure their reactive distance to objects and show that these flies use perspective and local parallax cues to distinguish and navigate to virtual objects of different sizes and distances. We also show that apple flies can orient in the absence of optic flow by using only directional airflow cues, and require simultaneous odor and directional airflow input for plume following to a host volatile blend. The elucidation of these features unlocks the opportunity to quantify parameters underlying insect behavior such as reactive space, optimal foraging, and dispersal, as well as develop strategies for pest management, pollination, robotics, and search algorithms.
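The motion-parallax cue exploited here follows from simple geometry: for a translating observer, a stationary object's angular velocity scales inversely with its distance, so nearer objects sweep across the retina faster. A minimal sketch of the textbook relation, not the authors' model:

```python
import numpy as np

def parallax_rate(v, distance, bearing_deg):
    """Angular velocity (rad/s) of a stationary point for an observer
    translating at speed v: omega = v * sin(bearing) / distance.
    A monocular depth cue requiring no stereo vision."""
    return v * np.sin(np.radians(bearing_deg)) / distance

near = parallax_rate(v=1.0, distance=0.5, bearing_deg=90.0)
far = parallax_rate(v=1.0, distance=2.0, bearing_deg=90.0)
print(near, far)  # the 4x nearer object sweeps 4x faster
```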
7. Gleiss H, Encke J, Lingner A, Jennings TR, Brosel S, Kunz L, Grothe B, Pecka M. Cooperative population coding facilitates efficient sound-source separability by adaptation to input statistics. PLoS Biol 2019;17:e3000150. PMID: 31356637. PMCID: PMC6687189. DOI: 10.1371/journal.pbio.3000150.
Abstract
Our sensory environment changes constantly. Accordingly, neural systems continually adapt to the concurrent stimulus statistics to remain sensitive over a wide range of conditions. Such dynamic range adaptation (DRA) is assumed to increase both the effectiveness of the neuronal code and perceptual sensitivity. However, direct demonstrations of DRA-based efficient neuronal processing that also produces perceptual benefits are lacking. Here, we investigated the impact of DRA on spatial coding in the rodent brain and the perception of human listeners. Complex spatial stimulation with dynamically changing source locations elicited prominent DRA already on the initial spatial processing stage, the Lateral Superior Olive (LSO) of gerbils. Surprisingly, on the level of individual neurons, DRA diminished spatial tuning because of large response variability across trials. However, when considering single-trial population averages of multiple neurons, DRA enhanced the coding efficiency specifically for the concurrently most probable source locations. Intrinsic LSO population imaging of energy consumption combined with pharmacology revealed that a slow-acting LSO gain-control mechanism distributes activity across a group of neurons during DRA, thereby enhancing population coding efficiency. Strikingly, such "efficient cooperative coding" also improved neuronal source separability specifically for the locations that were most likely to occur. These location-specific enhancements in neuronal coding were paralleled by human listeners exhibiting a selective improvement in spatial resolution. We conclude that, contrary to canonical models of sensory encoding, the primary motive of early spatial processing is efficiency optimization of neural populations for enhanced source separability in the concurrent environment.

The efficient coding hypothesis suggests that sensory processing adapts to the stimulus statistics to maximize information while minimizing energetic costs. This study finds that an auditory spatial processing circuit distributes activity across neurons to enhance processing efficiency, focally improving spatial resolution both in neurons and in human listeners.
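The core idea of dynamic range adaptation, re-centring a neuron's limited operating range on the current stimulus statistics, can be caricatured in a few lines. This single-neuron toy is a sketch under stated assumptions (sigmoidal rate function, running-mean tracking), not the LSO population mechanism the study describes.

```python
import numpy as np

def adapted_rates(stimulus, tau=50.0, slope=1.0):
    """Toy dynamic range adaptation: a sigmoidal rate function whose
    midpoint tracks a slow running average of the stimulus, keeping
    the cell's limited dynamic range centred on current statistics."""
    rates, midpoint = [], stimulus[0]
    for s in stimulus:
        midpoint += (s - midpoint) / tau  # slow estimate of the mean level
        rates.append(1.0 / (1.0 + np.exp(-slope * (s - midpoint))))
    return np.array(rates)

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 1.0, 2000)    # low-mean stimulus epoch
loud = rng.normal(10.0, 1.0, 2000)    # high-mean stimulus epoch
rates = adapted_rates(np.concatenate([quiet, loud]))
# After adapting, the loud epoch no longer saturates the cell: responses
# in both epochs occupy the middle of the rate range.
print(rates[:2000].mean(), rates[-1000:].mean())
```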
Affiliation(s)
- Helge Gleiss: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Jörg Encke: Chair of Bio-Inspired Information Processing, Department of Electrical and Computer Engineering, Technical University of Munich, Garching, Germany
- Andrea Lingner: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Todd R. Jennings: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Sonja Brosel: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Lars Kunz: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Benedikt Grothe: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
- Michael Pecka: Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-Universitaet Muenchen, Martinsried, Germany
8. Chakrabarty D, Elhilali M. A Gestalt inference model for auditory scene segregation. PLoS Comput Biol 2019;15:e1006711. PMID: 30668568. PMCID: PMC6358108. DOI: 10.1371/journal.pcbi.1006711.
Abstract
Our current understanding of how the brain segregates auditory scenes into meaningful objects is in line with a Gestaltism framework. These Gestalt principles suggest a theory of how different attributes of the soundscape are extracted then bound together into separate groups that reflect different objects or streams present in the scene. These cues are thought to reflect the underlying statistical structure of natural sounds in a similar way that statistics of natural images are closely linked to the principles that guide figure-ground segregation and object segmentation in vision. In the present study, we leverage inference in stochastic neural networks to learn emergent grouping cues directly from natural soundscapes including speech, music and sounds in nature. The model learns a hierarchy of local and global spectro-temporal attributes reminiscent of simultaneous and sequential Gestalt cues that underlie the organization of auditory scenes. These mappings operate at multiple time scales to analyze an incoming complex scene and are then fused using a Hebbian network that binds together coherent features into perceptually-segregated auditory objects. The proposed architecture successfully emulates a wide range of well-established auditory scene segregation phenomena and quantifies the complementary role of segregation and binding cues in driving auditory scene segregation.
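The Hebbian binding stage can be caricatured as coincidence-based weight growth: feature channels that are active together come to be strongly connected and hence grouped into one stream. A toy sketch of that principle, not the paper's stochastic network:

```python
import numpy as np

# Hebbian binding sketch: channels with temporally coherent activity
# develop strong mutual weights; channels active in alternation do not.
n_features, eta = 4, 0.01
W = np.zeros((n_features, n_features))
for t in range(2000):
    x = np.zeros(n_features)
    if t % 2 == 0:
        x[[0, 1]] = 1.0   # stream A: features 0 and 1 co-active
    else:
        x[[2, 3]] = 1.0   # stream B: features 2 and 3 co-active
    W += eta * np.outer(x, x)  # Hebbian rule: fire together, wire together
np.fill_diagonal(W, 0.0)

within = W[0, 1] + W[2, 3]    # grows with every coactivation
across = W[0, 2] + W[1, 3]    # never coactive, so never strengthened
print(within, across)
```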
Affiliation(s)
- Debmalya Chakrabarty: Laboratory for Computational Audio Processing, Center for Speech and Language Processing, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mounya Elhilali: Laboratory for Computational Audio Processing, Center for Speech and Language Processing, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
9. Turner MH, Sanchez Giraldo LG, Schwartz O, Rieke F. Stimulus- and goal-oriented frameworks for understanding natural vision. Nat Neurosci 2019;22:15-24. PMID: 30531846. PMCID: PMC8378293. DOI: 10.1038/s41593-018-0284-0.
Abstract
Our knowledge of sensory processing has advanced dramatically in the last few decades, but this understanding remains far from complete, especially for stimuli with the large dynamic range and strong temporal and spatial correlations characteristic of natural visual inputs. Here we describe some of the issues that make understanding the encoding of natural images a challenge. We highlight two broad strategies for approaching this problem: a stimulus-oriented framework and a goal-oriented one. Different contexts can call for one framework or the other. Looking forward, recent advances, particularly those based in machine learning, show promise in borrowing key strengths of both frameworks and, in doing so, illuminating a path to a more comprehensive understanding of the encoding of natural stimuli.
Affiliation(s)
- Maxwell H Turner: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA; Graduate Program in Neuroscience, University of Washington, Seattle, WA, USA
- Odelia Schwartz: Department of Computer Science, University of Miami, Coral Gables, FL, USA
- Fred Rieke: Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
10.
Abstract
Why do humpback whales sing? This paper considers the hypothesis that humpback whales may use song for long range sonar. Given the vocal and social behavior of humpback whales, in several cases it is not apparent how they monitor the movements of distant whales or prey concentrations. Unless distant animals produce sounds, humpback whales are unlikely to be aware of their presence or actions. Some field observations are strongly suggestive of the use of song as sonar. Humpback whales sometimes stop singing and then rapidly approach distant whales in cases where sound production by those whales is not apparent, and singers sometimes alternately sing and swim while attempting to intercept another whale that is swimming evasively. In the evolutionary development of modern cetaceans, perceptual mechanisms have shifted from reliance on visual scanning to the active generation and monitoring of echoes. It is hypothesized that as the size and distance of relevant events increased, humpback whales developed adaptive specializations for long-distance echolocation. Differences between use of songs by humpback whales and use of sonar by other echolocating species are discussed, as are similarities between bat echolocation and singing by humpback whales. Singing humpback whales are known to emit sounds intense enough to generate echoes at long ranges, and to flexibly control the timing and qualities of produced sounds. The major problem for the hypothesis is the lack of recordings of echoes from other whales arriving at singers immediately before they initiate actions related to those whales. An earlier model of echoic processing by singing humpback whales is here revised to incorporate recent discoveries. According to the revised model, both direct echoes from targets and modulations in song-generated reverberation can provide singers with information that can help them make decisions about future actions related to mating, traveling, and foraging. 
The model identifies acoustic and structural features produced by singing humpback whales that may facilitate a singer's ability to interpret changes in echoic scenes and suggests that interactive signal coordination by singing whales may help them to avoid mutual interference. Specific, testable predictions of the model are presented.
Affiliation(s)
- Eduardo Mercado III: Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, United States; Evolution, Ecology, and Behavior Program, University at Buffalo, The State University of New York, Buffalo, NY, United States
11. Statistics of Natural Communication Signals Observed in the Wild Identify Important Yet Neglected Stimulus Regimes in Weakly Electric Fish. J Neurosci 2018;38:5456-5465. PMID: 29735558. DOI: 10.1523/jneurosci.0350-18.2018.
Abstract
Sensory systems evolve in the ecological niches that each species is occupying. Accordingly, encoding of natural stimuli by sensory neurons is expected to be adapted to the statistics of these stimuli. For a direct quantification of sensory scenes, we tracked natural communication behavior of male and female weakly electric fish, Apteronotus rostratus, in their Neotropical rainforest habitat with high spatiotemporal resolution over several days. In the context of courtship, we observed large quantities of electrocommunication signals. Echo responses, acknowledgment signals, and their synchronizing role in spawning demonstrated the behavioral relevance of these signals. In both courtship and aggressive contexts, we observed robust behavioral responses in stimulus regimes that have so far been neglected in electrophysiological studies of this well characterized sensory system and that are well beyond the range of known best frequency and amplitude tuning of the electroreceptor afferents' firing rate modulation. Our results emphasize the importance of quantifying sensory scenes derived from freely behaving animals in their natural habitats for understanding the function and evolution of neural systems.

SIGNIFICANCE STATEMENT The processing mechanisms of sensory systems have evolved in the context of the natural lives of organisms. To understand the functioning of sensory systems therefore requires probing them in the stimulus regimes in which they evolved. We took advantage of the continuously generated electric fields of weakly electric fish to explore electrosensory stimulus statistics in their natural Neotropical habitat. Unexpectedly, many of the electrocommunication signals recorded during courtship, spawning, and aggression had much smaller amplitudes or higher frequencies than stimuli used so far in neurophysiological characterizations of the electrosensory system. Our results demonstrate that quantifying sensory scenes derived from freely behaving animals in their natural habitats is essential to avoid biases in the choice of stimuli used to probe brain function.
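A central stimulus regime in this system is the "beat" created when two fish's quasi-sinusoidal electric organ discharges (EODs) superimpose: the summed field is amplitude-modulated at the difference frequency, which is what electroreceptors encode. A minimal sketch with assumed, illustrative EOD frequencies:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (an FFT-based equivalent of
    scipy.signal.hilbert); assumes an even-length input."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

fs, dur = 20000, 1.0
t = np.arange(int(fs * dur)) / fs
f1, f2 = 700.0, 740.0                      # illustrative EOD frequencies
field = np.cos(2 * np.pi * f1 * t) + 0.5 * np.cos(2 * np.pi * f2 * t)
envelope = np.abs(analytic_signal(field))  # the amplitude modulation ("beat")
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
beat = np.argmax(spectrum) / dur           # dominant modulation frequency
print(beat)  # at the difference frequency |f1 - f2| = 40 Hz
```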
12. Beetz MJ, García-Rosales F, Kössl M, Hechavarría JC. Robustness of cortical and subcortical processing in the presence of natural masking sounds. Sci Rep 2018;8:6863. PMID: 29717258. PMCID: PMC5931562. DOI: 10.1038/s41598-018-25241-x.
Abstract
Processing of ethologically relevant stimuli can be disrupted by interference from non-relevant stimuli. Animals have behavioral adaptations to reduce signal interference, but it is largely unexplored whether these adaptations facilitate neuronal processing of relevant stimuli. Here, we characterize behavioral adaptations to biotic noise in the echolocating bat Carollia perspicillata and show that these adaptations could facilitate neuronal processing of biosonar information. During echolocation, bats need to extract their own signals in the presence of vocalizations from conspecifics. With playback experiments, we demonstrate that C. perspicillata increases its sensory acquisition rate by emitting groups of echolocation calls when flying in noisy environments. Our neurophysiological results from the auditory midbrain and cortex show that the high sensory acquisition rate does not vastly increase neuronal suppression and that the response to an echolocation sequence is partially preserved in the presence of biosonar signals from conspecifics.
Affiliation(s)
- M Jerome Beetz: Institute for Cell Biology and Neuroscience, Goethe-University, 60438, Frankfurt/M., Germany; Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, Am Hubland, Würzburg, 97074, Germany
- Manfred Kössl: Institute for Cell Biology and Neuroscience, Goethe-University, 60438, Frankfurt/M., Germany
- Julio C Hechavarría: Institute for Cell Biology and Neuroscience, Goethe-University, 60438, Frankfurt/M., Germany
13. Kothari NB, Wohlgemuth MJ, Moss CF. Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat. eLife 2018;7:e29053. PMID: 29633711. PMCID: PMC5896882. DOI: 10.7554/elife.29053.
Abstract
Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat's inspection of objects in the physical environment sharpens tuning of single neurons, and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal's instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task.
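The geometry at the heart of such an echo model is simple: the call-to-echo delay gives target range (sound travels out and back), and the echo's direction completes an egocentric 3D position. A simplified sketch, not the authors' full model:

```python
import numpy as np

def echo_to_egocentric(delay_s, azimuth_deg, elevation_deg, c=343.0):
    """Instantaneous egocentric target position from one call-echo pair:
    two-way travel time gives range (r = c * delay / 2); azimuth and
    elevation give direction. Returns (x, y, z) in bat-centred coordinates
    with x pointing straight ahead."""
    r = c * delay_s / 2.0
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)])

p = echo_to_egocentric(delay_s=0.01, azimuth_deg=0.0, elevation_deg=0.0)
print(p)  # a 10 ms echo delay puts the target 1.715 m straight ahead
```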
14. Kothari NB, Wohlgemuth MJ, Moss CF. Adaptive sonar call timing supports target tracking in echolocating bats. J Exp Biol 2018;221:jeb176537. DOI: 10.1242/jeb.176537.
Abstract
Echolocating bats dynamically adapt the features of their sonar calls as they approach obstacles and track targets. As insectivorous bats forage, they increase sonar call rate with decreasing prey distance, and often embedded in their insect-approach sequences are clusters of sonar sounds, termed sonar sound groups (SSGs). The production of SSGs has been observed in both field and laboratory conditions and is hypothesized to sharpen spatiotemporal sonar resolution. When bats hunt, they may encounter erratically moving prey, which increases the demands on the sonar imaging system. Here, we studied the bat's adaptive vocal behavior in an experimentally controlled insect-tracking task, allowing us to manipulate the predictability of target trajectories and measure the prevalence of SSGs. With this system, we trained bats to remain stationary on a platform and track a moving prey item whose trajectory was programmed either to approach the bat directly or to move back and forth before arriving at the bat. We manipulated target motion predictability by varying the order in which different target trajectories were presented to the bats. During all trials, we recorded the bat's sonar calls and later analyzed the incidence of SSG production under the different target-tracking conditions. Our results demonstrate that bats increase the production of SSGs when target unpredictability increases, and decrease it when target motion predictability increases. Further, bats produce the same number of sonar vocalizations irrespective of target motion predictability, indicating that the animal's temporal clustering of sonar call sequences into SSGs is purposeful, and therefore involves sensorimotor planning.
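Operationally, SSGs are clusters of calls separated by unusually short inter-call intervals. A rough sketch of such a grouping rule follows; the threshold and minimum group size are illustrative assumptions, and the paper's own SSG criteria are more specific.

```python
def sonar_sound_groups(call_times, max_ici=0.03):
    """Group call times (seconds, sorted) into clusters wherever
    consecutive inter-call intervals stay at or below max_ici; clusters
    of at least two calls count as sonar sound groups. A rough
    operationalization, not the criteria used in the paper."""
    groups, current = [], [call_times[0]]
    for prev, t in zip(call_times, call_times[1:]):
        if t - prev <= max_ici:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return [g for g in groups if len(g) >= 2]

# calls roughly 100-300 ms apart, with two doublets at 20 ms spacing
calls = [0.0, 0.1, 0.12, 0.3, 0.5, 0.52, 0.8]
print(sonar_sound_groups(calls))  # two groups: [0.1, 0.12] and [0.5, 0.52]
```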
Affiliation(s)
- Ninad B. Kothari: Department of Psychological & Brain Sciences, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Melville J. Wohlgemuth: Department of Psychological & Brain Sciences, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
- Cynthia F. Moss: Department of Psychological & Brain Sciences, Krieger School of Arts and Sciences; The Solomon H. Snyder Department of Neuroscience, School of Medicine; Department of Mechanical Engineering, Whiting School of Engineering; Behavioral Biology Program Chair, Johns Hopkins University, Baltimore, MD 21218, USA
15
Lee WJ, Falk B, Chiu C, Krishnan A, Arbour JH, Moss CF. Tongue-driven sonar beam steering by a lingual-echolocating fruit bat. PLoS Biol 2017; 15:e2003148. [PMID: 29244805 PMCID: PMC5774845 DOI: 10.1371/journal.pbio.2003148]
Abstract
Animals enhance sensory acquisition from a specific direction by movements of head, ears, or eyes. As active sensing animals, echolocating bats also aim their directional sonar beam to selectively “illuminate” a confined volume of space, facilitating efficient information processing by reducing echo interference and clutter. Such sonar beam control is generally achieved by head movements or shape changes of the sound-emitting mouth or nose. However, lingual-echolocating Egyptian fruit bats, Rousettus aegyptiacus, which produce sound by clicking their tongue, can dramatically change beam direction at very short temporal intervals without visible morphological changes. The mechanism supporting this capability has remained a mystery. Here, we measured signals from free-flying Egyptian fruit bats and discovered a systematic angular sweep of beam focus across increasing frequency. This unusual signal structure has not been observed in other animals and cannot be explained by the conventional and widely used “piston model” that describes the emission pattern of other bat species. Through modeling, we show that the observed beam features can be captured by an array of tongue-driven sound sources located along the side of the mouth, and that the sonar beam direction can be steered parsimoniously by inducing changes to the pattern of phase differences through moving tongue location. The effects are broadly similar to those found in a phased array—an engineering design widely found in human-made sonar systems that enables beam direction changes without changes in the physical transducer assembly. Our study reveals an intriguing parallel between biology and human engineering in solving problems in fundamentally similar ways.

It is well known that animals move their eyes, ears, and heads towards stimuli of interest to selectively gather information in complex environments. Interestingly, lingual-echolocating fruit bats, which generate sonar signals for object localization by clicking their tongues, can rapidly switch the direction of the sonar beam without changing head aim or mouth shape. The mechanism underlying this capability has intrigued scientists and engineers alike. In this study, we used a combination of experimental measurements and theoretical modeling to solve this mystery. We discovered that the focus of this bat’s sound beam shifts systematically across a range of angles as the sonar frequency increases. This unusual multi-frequency structure can be captured by modeling the sound emission as an array of sound sources located along the side of the mouth and driven by the clicking tongue. Changing only the position of the tongue in this model can steer the sonar beam in different directions, showing an effect broadly similar to that found in a human-made sonar phased array—a design that enables changing beam direction without changing the physical transducer assembly. Our study thus reveals an intriguing parallel between biology and human engineering, which arrived at fundamentally similar solutions to the same problem.
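The phased-array principle the authors invoke is easy to demonstrate numerically. The sketch below is not their model code: it computes the far-field array factor of a line of point sources and shows that adding a progressive phase shift across the sources steers the angle of constructive interference, with no source physically moving. The element count, spacing, frequency, and phase step are arbitrary illustrative assumptions.

```python
import cmath
import math

def beam_peak_deg(n_src, spacing_m, freq_hz, phase_step_rad, c=343.0):
    """Angle (deg) where the array factor of a uniform line array peaks."""
    k = 2 * math.pi * freq_hz / c    # acoustic wavenumber
    def gain(theta):
        # Sum each source's contribution: geometric path phase plus applied phase
        return abs(sum(cmath.exp(1j * (k * spacing_m * n * math.sin(theta)
                                       + n * phase_step_rad))
                       for n in range(n_src)))
    angles = [math.radians(a) for a in range(-90, 91)]
    return round(math.degrees(max(angles, key=gain)))

# No phase gradient: the beam points straight ahead.
print(beam_peak_deg(8, 0.005, 35_000, 0.0))    # → 0
# A progressive phase shift steers the beam off-axis, transducers unmoved.
print(beam_peak_deg(8, 0.005, 35_000, -1.0))   # → 18
```

In the paper's model, moving the tongue changes the relative phases of the mouth-edge sources; here the `phase_step_rad` parameter plays that role.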
Affiliation(s)
- Wu-Jung Lee: Applied Physics Laboratory, University of Washington, Seattle, Washington, United States of America; Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Benjamin Falk: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Chen Chiu: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
- Anand Krishnan: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America; Indian Institute of Science Education and Research (IISER) Pune, Pune, Maharashtra, India
- Jessica H. Arbour: Department of Biology, University of Washington, Seattle, Washington, United States of America
- Cynthia F. Moss: Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, Maryland, United States of America
16
Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls. J Neurosci 2017; 37:3491-3510. [PMID: 28235893 PMCID: PMC5373131 DOI: 10.1523/jneurosci.2220-16.2017]
Abstract
One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and locations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging.

SIGNIFICANCE STATEMENT: Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signal quality, and decreases in the signal-to-noise ratio.
17
Statistics of natural reverberation enable perceptual separation of sound and space. Proc Natl Acad Sci U S A 2016; 113:E7856-E7865. [PMID: 27834730 DOI: 10.1073/pnas.1612524113]
Abstract
In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
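The regularity the paper identifies, exponential decay at frequency-dependent rates, can be sketched in a few lines. This is an illustration under assumed values, not the paper's measured data: RT60 denotes the time for the impulse response to decay by 60 dB, and the per-band RT60s below are invented to mimic the reported pattern (mid frequencies longest, highs and lows faster).

```python
import math
import random

def ir_envelope_db(t_s, rt60_s):
    """Decay in dB at time t for an exponential impulse response with the given RT60."""
    return -60.0 * t_s / rt60_s

# Assumed RT60s per octave band (s): mid frequencies reverberate longest.
rt60_by_band_hz = {125: 0.5, 1000: 0.9, 8000: 0.4}

def synth_band_ir(rt60_s, dur_s=1.0, sr=8000, seed=0):
    """One band of a toy IR: Gaussian noise shaped by the exponential decay envelope."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) * 10 ** (ir_envelope_db(n / sr, rt60_s) / 20)
            for n in range(int(dur_s * sr))]

for band, rt60 in rt60_by_band_hz.items():
    print(band, "Hz: level at 0.45 s =", round(ir_envelope_db(0.45, rt60), 1), "dB")
```

The paper's perceptual manipulation amounts to giving listeners IRs whose decay profiles deviate from this kind of pattern; the finding is that source and space judgments degrade when they do.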
18
Vanderelst D, Steckel J, Boen A, Peremans H, Holderied MW. Place recognition using batlike sonar. eLife 2016; 5:e14188. [PMID: 27481189 PMCID: PMC4970868 DOI: 10.7554/elife.14188]
Abstract
Echolocating bats have excellent spatial memory and are able to navigate to salient locations using bio-sonar. Navigating and route-following require animals to recognize places. Currently, it is mostly unknown how bats recognize places using echolocation. In this paper, we propose that template-based place recognition might underlie sonar-based navigation in bats. Under this hypothesis, bats recognize places by remembering their echo signature - rather than their 3D layout. Using a large body of ensonification data collected in three different habitats, we test the viability of this hypothesis by assessing two critical properties of the proposed echo signatures: (1) they can be uniquely classified and (2) they vary continuously across space. Based on the results presented, we conclude that the proposed echo signatures satisfy both criteria. We discuss how these two properties of the echo signatures can support navigation and building a cognitive map.
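Template-based place recognition, the hypothesis the paper tests, reduces to nearest-template matching. The sketch below is an illustration of that idea, not the authors' classifier: each place is remembered as a stored echo-signature vector, and an incoming echo is assigned to the place whose template lies closest. The place names and signature vectors are made-up toy values.

```python
def euclid(a, b):
    """Euclidean distance between two equal-length signature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize_place(echo, templates):
    """Return the name of the stored template nearest to the incoming echo."""
    return min(templates, key=lambda name: euclid(echo, templates[name]))

templates = {                      # remembered echo signatures, one per place
    "roost": [0.9, 0.1, 0.3],
    "pond":  [0.2, 0.8, 0.5],
    "hedge": [0.4, 0.4, 0.9],
}
print(recognize_place([0.85, 0.15, 0.35], templates))   # → roost
```

The paper's two criteria map directly onto this scheme: unique classifiability means the nearest template is reliably the correct place, and continuous variation across space means distances to a template shrink smoothly as the bat approaches that place.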
Affiliation(s)
- Dieter Vanderelst: School of Biological Sciences, University of Bristol, Bristol, United Kingdom; Active Perception Lab, University of Antwerp, Antwerp, Belgium
- Jan Steckel: Active Perception Lab, University of Antwerp, Antwerp, Belgium; Constrained Systems Lab, Faculty of Applied Engineering, University of Antwerp, Antwerp, Belgium
- Andre Boen: Active Perception Lab, University of Antwerp, Antwerp, Belgium
- Marc W Holderied: School of Biological Sciences, University of Bristol, Bristol, United Kingdom
19
Rauschecker JP. Auditory and visual cortex of primates: a comparison of two sensory systems. Eur J Neurosci 2015; 41:579-85. [PMID: 25728177 DOI: 10.1111/ejn.12844]
Abstract
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separation of the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features at the columnar level are direction selectivity, size/bandwidth selectivity, and receptive fields with segregated vs. overlapping ON and OFF subregions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: (i) identification of objects; and (ii) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independently of sensory modality.
Affiliation(s)
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, NRB WP19, 3970 Reservoir Rd NW, Washington, DC, 20057-1460, USA; Institute for Advanced Study, Technische Universität München, Garching, Germany
20
Fast sensory-motor reactions in echolocating bats to sudden changes during the final buzz and prey intercept. Proc Natl Acad Sci U S A 2015; 112:4122-7. [PMID: 25775538 DOI: 10.1073/pnas.1424457112]
Abstract
Echolocation is an active sense enabling bats and toothed whales to orient in darkness through echo returns from their ultrasonic signals. Immediately before prey capture, both bats and whales emit a buzz with such high emission rates (≥ 180 Hz) and overall duration so short that its functional significance remains an enigma. To investigate sensory-motor control during the buzz of the insectivorous bat Myotis daubentonii, we removed prey, suspended in air or on water, before expected capture. The bats responded by shortening their echolocation buzz gradually; the earlier prey was removed down to approximately 100 ms (30 cm) before expected capture, after which the full buzz sequence was emitted both in air and over water. Bats trawling over water also performed the full capture behavior, but in-air capture motions were aborted, even at very late prey removals (<20 ms = 6 cm before expected contact). Thus, neither the buzz nor capture movements are stereotypical, but dynamically adapted based on sensory feedback. The results indicate that echolocation is controlled mainly by acoustic feedback, whereas capture movements are adjusted according to both acoustic and somatosensory feedback, suggesting separate (but coordinated) central motor control of the two behaviors based on multimodal input. Bat echolocation, especially the terminal buzz, provides a unique window to extremely fast decision processes in response to sensory feedback and modulation through attention in a naturally behaving animal.
21
Abstract
Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. Statistics of binaural cues therefore depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally, and their dependence on the physical properties of an auditory scene, have not been studied before. In the present work, we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary far less across frequency channels, and IPDs often attain far higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
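The two cues the abstract discusses are straightforward to compute, which makes the paper's point, that natural scenes scramble them, easier to appreciate. The sketch below uses toy sinusoidal signals, not the authors' recordings: for a tone of frequency f, the ILD is the level ratio in dB and the IPD is the phase offset implied by the interaural time delay.

```python
import math

def ild_db(left, right):
    """Interaural level difference from RMS amplitudes; positive = louder left."""
    rms = lambda x: math.sqrt(sum(s * s for s in x) / len(x))
    return 20 * math.log10(rms(left) / rms(right))

def ipd_rad(freq_hz, itd_s):
    """Phase difference implied by an interaural time delay, wrapped to (-pi, pi]."""
    ipd = 2 * math.pi * freq_hz * itd_s
    return math.atan2(math.sin(ipd), math.cos(ipd))

# Toy scene: a 500 Hz tone, delayed 0.4 ms and attenuated by half at the right ear.
sr, f, itd = 16000, 500.0, 0.0004
left  = [math.sin(2 * math.pi * f * n / sr) for n in range(sr)]
right = [0.5 * math.sin(2 * math.pi * f * (n / sr - itd)) for n in range(sr)]
print(round(ild_db(left, right), 1))   # → 6.0
print(round(ipd_rad(f, itd), 3))       # → 1.257
```

With a single clean source these formulas recover the cues exactly; the paper's observation is that with multiple moving, overlapping sources the resulting cue distributions no longer match what head filtering alone predicts.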
Affiliation(s)
- Wiktor Młynarski: Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany
- Jürgen Jost: Max-Planck Institute for Mathematics in the Sciences, Leipzig, Germany; Santa Fe Institute, Santa Fe, New Mexico, United States of America