1. Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. Evolutionary Psychology 2022. DOI: 10.1007/978-3-030-76000-7_12
2. Toro JM, Crespo-Bojorque P. Arc-shaped pitch contours facilitate item recognition in non-human animals. Cognition 2021; 213:104614. PMID: 33558018. DOI: 10.1016/j.cognition.2021.104614
Abstract
Acoustic changes linked to natural prosody are a key source of information about the organization of language. Both human infants and adults readily take advantage of such changes to discover and memorize linguistic patterns. Do they do so because our brain is efficiently wired to process specifically linguistic stimuli? Or are we co-opting, for language acquisition, more general principles that might be inherited from our animal ancestors? Here, we address this question by exploring whether other species profit from prosody to better process acoustic sequences. More specifically, we test whether the arc-shaped pitch contours that define natural prosody facilitate item recognition and memorization in rats. In two experiments, we presented rats with nonsense words carrying flat, natural, inverted, or random prosodic contours. The animals correctly recognized the familiarization words only when arc-shaped pitch contours were implemented over them. Our results suggest that other species might also benefit from prosody for the memorization of items in a sequence. Such a capacity seems to be rooted in general principles of how biological sounds are produced and processed.
Affiliation(s)
- Juan M Toro
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Pg. Lluis Companys, 23, 08019 Barcelona, Spain; Universitat Pompeu Fabra, C. Ramon Trias Fargas, 25-27, 08005 Barcelona, Spain.
3. Mann DC, Fitch WT, Tu HW, Hoeschele M. Universal principles underlying segmental structures in parrot song and human speech. Sci Rep 2021; 11:776. PMID: 33436874. PMCID: PMC7804275. DOI: 10.1038/s41598-020-80340-y
Abstract
Despite the diversity of human languages, certain linguistic patterns are remarkably consistent across human populations. While syntactic universals receive more attention, there is stronger evidence for universal patterns in the inventory and organization of segments: units separated by rapid acoustic transitions, which are used to build syllables, words, and phrases. Crucially, if an alien researcher investigated spoken human language the way we analyze non-human communication systems, many of these phonological regularities would be overlooked, since most analyses of non-human vocalizations treat breath groups, or "syllables" (units divided by silent inhalations), as the smallest unit. Here, we introduce a novel segment-based analysis that reveals patterns in the acoustic output of budgerigars, a vocal learning parrot species, that match universal phonological patterns well documented in humans. We show that song in four independent budgerigar populations is composed of consonant- and vowel-like segments. Furthermore, the organization of segments within syllables is not random: as in spoken human language, segments at the start of a vocalization are more likely to be consonant-like, and segments at the end are more likely to be longer, quieter, and lower in fundamental frequency. These results provide a new foundation for empirical investigation of language-like abilities in other species.
Affiliation(s)
- Dan C Mann
- Linguistics Program, The Graduate Center of the City University of New York, New York City, USA.
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- Hsiao-Wei Tu
- Department of Psychology, University of Maryland, College Park, USA.
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria.
4. Ravignani A, Dalla Bella S, Falk S, Kello CT, Noriega F, Kotz SA. Rhythm in speech and animal vocalizations: a cross-species perspective. Ann N Y Acad Sci 2019; 1453:79-98. PMID: 31237365. PMCID: PMC6851814. DOI: 10.1111/nyas.14166
Abstract
Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive and behavioral traits in other species to determine which traits are shared between species and which are recent human innovations. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, how they are perceived and produced, their biological and developmental bases, and their communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross-species research. We report links between vocal perception and motor coordination, and a differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross-species perspective on speech rhythm, our review puts some pieces of the puzzle together.
Affiliation(s)
- Andrea Ravignani
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Brussels, Belgium.
- Institute for Advanced Study, University of Amsterdam, Amsterdam, the Netherlands.
- Simone Dalla Bella
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada.
- Department of Psychology, University of Montreal, Montréal, Quebec, Canada.
- Department of Cognitive Psychology, Warsaw, Poland.
- Simone Falk
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada.
- Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS/Université Sorbonne Nouvelle Paris-3, Institut de Linguistique et Phonétique générales et appliquées, Paris, France.
- Florencia Noriega
- Chair for Network Dynamics, Center for Advancing Electronics Dresden (CFAED), TU Dresden, Dresden, Germany.
- CODE University of Applied Sciences, Berlin, Germany.
- Sonja A. Kotz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada.
- Basic and Applied NeuroDynamics Laboratory, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands.
- Department of Neuropsychology, Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
5. Filippi P, Hoeschele M, Spierings M, Bowling DL. Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function. Ann N Y Acad Sci 2019; 1453:99-113. DOI: 10.1111/nyas.14228
Affiliation(s)
- Piera Filippi
- Laboratoire Parole et Langage, LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France.
- Institute of Language, Communication and the Brain, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France.
- Laboratoire de Psychologie Cognitive, LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France.
- Marisa Hoeschele
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria.
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- Daniel L. Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California.
6.
7. Mueller JL, Ten Cate C, Toro JM. A Comparative Perspective on the Role of Acoustic Cues in Detecting Language Structure. Top Cogn Sci 2018; 12:859-874. PMID: 30033636. DOI: 10.1111/tops.12373
Abstract
Most human language learners acquire language primarily via the auditory modality. This is one reason why auditory artificial grammars play a prominent role in the investigation of the development and evolutionary roots of human syntax. This position paper brings together findings from human and non-human research on the impact of auditory cues on learning about linguistic structures, with a special focus on how different types of cues and biases in auditory cognition may contribute to success and failure in artificial grammar learning (AGL). The basis of our argument is the link between auditory cues and syntactic structure across languages and development. Cross-species comparison suggests that many aspects of auditory cognition relevant for language are not human specific and are present even in rather distantly related species. Furthermore, auditory cues and biases affect learning, as we discuss using examples from auditory perception and AGL studies. This observation, together with the significant role of auditory cues in language processing, supports the idea that auditory cues served as a bootstrap into syntax during language evolution. It also means, however, that potentially human-specific syntactic abilities are not due to basic auditory differences between humans and non-human animals but rest upon more advanced cognitive processes.
Affiliation(s)
- Carel Ten Cate
- Institute of Biology, Leiden University.
- Leiden Institute for Brain and Cognition.
- Juan M Toro
- ICREA (Institució Catalana de Recerca i Estudis Avançats).
- Center for Brain and Cognition, University Pompeu Fabra.
8. Fitch WT. What animals can teach us about human language: the phonological continuity hypothesis. Curr Opin Behav Sci 2018. DOI: 10.1016/j.cobeha.2018.01.014
9. Spierings M, Hubert J, Ten Cate C. Selective auditory grouping by zebra finches: testing the iambic-trochaic law. Anim Cogn 2017; 20:665-675. PMID: 28391488. PMCID: PMC5486500. DOI: 10.1007/s10071-017-1089-3
Abstract
Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law: humans group successive tones alternating in pitch or intensity as trochees (high-low and loud-soft) and tones alternating in duration as iambs (short-long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in at least one non-human animal species, the rat. The perceptual grouping of sounds alternating in duration appears to be affected by native language in humans and has so far not been found in animals. In the current study, we explore to what extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were then tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, as humans do. However, most of the zebra finches in the duration and intensity conditions did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.
Affiliation(s)
- Michelle Spierings
- Behavioural Biology, Institute of Biology Leiden (IBL), Leiden University, P.O. Box 9505, 2300 RA, Leiden, The Netherlands.
- Leiden Institute for Brain and Cognition (LIBC), Leiden University, P.O. Box 9600, 2300 RC, Leiden, The Netherlands.
- Jeroen Hubert
- Behavioural Biology, Institute of Biology Leiden (IBL), Leiden University, P.O. Box 9505, 2300 RA, Leiden, The Netherlands.
- Carel Ten Cate
- Behavioural Biology, Institute of Biology Leiden (IBL), Leiden University, P.O. Box 9505, 2300 RA, Leiden, The Netherlands.
- Leiden Institute for Brain and Cognition (LIBC), Leiden University, P.O. Box 9600, 2300 RC, Leiden, The Netherlands.
10. Mol C, Chen A, Kager RWJ, Ter Haar SM. Prosody in birdsong: A review and perspective. Neurosci Biobehav Rev 2017; 81:167-180. PMID: 28232050. DOI: 10.1016/j.neubiorev.2017.02.016
Abstract
Birdsong shows striking parallels with human speech. Previous comparisons between birdsong and human vocalizations focused on syntax, phonology and phonetics. In this review, we propose that future comparative research should expand its focus to include prosody, i.e. the temporal and melodic properties that extend over larger units of song. To this end, we consider the similarities between birdsong structure and the prosodic hierarchy in human speech and between context-dependent acoustic variations in birdsong and the biological codes in human speech. Moreover, we discuss songbirds' sensitivity to prosody-like acoustic features and the role of such features in song segmentation and song learning in relation to infants' sensitivity to prosody and the role of prosody in early language acquisition. Finally, we make suggestions for future comparative birdsong research, including a framework of how prosody in birdsong can be studied. In particular, we propose to analyze birdsong as a multidimensional signal composed of specific acoustic features, and to assess whether these acoustic features are organized into prosody-like structures.
Affiliation(s)
- Carien Mol
- Cognitive Neurobiology and Helmholtz Institute, Department of Psychology, Utrecht University, P.O. Box 80086, 3508 TB Utrecht, The Netherlands.
- Aoju Chen
- Utrecht Institute of Linguistics OTS, Department of Languages, Literature and Communication, Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands.
- René W J Kager
- Utrecht Institute of Linguistics OTS, Department of Languages, Literature and Communication, Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands.
- Sita M Ter Haar
- Cognitive Neurobiology and Helmholtz Institute, Department of Psychology, Utrecht University, P.O. Box 80086, 3508 TB Utrecht, The Netherlands.
11.
Abstract
Pitch is a percept of sound that is based in part on fundamental frequency. Although pitch can be defined in a way that is clearly separable from other aspects of musical sounds, such as timbre, the perception of pitch is not a simple topic. Despite this, studying pitch separately from other aspects of sound has led to some interesting conclusions about how humans and other animals process acoustic signals. It turns out that pitch perception in humans is based on an assessment of pitch height, pitch chroma, relative pitch, and grouping principles. How pitch is broken down depends largely on the context. Most, if not all, of these principles appear to also be used by other species, but when and how accurately they are used varies across species and context. Studying how other animals compare to humans in their pitch abilities is partially a reevaluation of what we know about humans by considering ourselves in a biological context.
Affiliation(s)
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
12. Jadoul Y, Ravignani A, Thompson B, Filippi P, de Boer B. Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Front Hum Neurosci 2016; 10:586. PMID: 27994544. PMCID: PMC5133256. DOI: 10.3389/fnhum.2016.00586
Abstract
Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks of speech. By providing on-line cues to the location and duration of upcoming syllables, temporal structure may aid the segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities among adjacent units and are overly sensitive to idiosyncrasies in the data they describe. Here, we compare several statistical methods on a sample of 18 languages, testing whether syllable occurrence is predictable over time. Rather than looking for differences between languages, we aim to find, across languages, temporal predictability in the speech signal (using clearly defined acoustic rather than orthographic measures) that could be exploited by a language learner. First, we analyse distributional regularities using two novel techniques: a Bayesian ideal learner analysis and a simple distributional measure. Second, we model higher-order temporal structure (regularities arising in an ordered series of syllable timings), testing the hypothesis that non-adjacent temporal structure may explain the gap between subjectively perceived temporal regularities and the absence of universally accepted lower-order objective measures. Together, our analyses provide limited evidence for predictability at different time scales, though higher-order predictability is difficult to infer reliably. We conclude that temporal predictability in speech may well arise from a combination of individually weak perceptual cues at multiple structural levels, but is challenging to pinpoint.
Affiliation(s)
- Yannick Jadoul
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
- Andrea Ravignani
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
- Bill Thompson
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
- Piera Filippi
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
- Bart de Boer
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
13. Filippi P. Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Front Psychol 2016; 7:1393. PMID: 27733835. PMCID: PMC5039945. DOI: 10.3389/fpsyg.2016.01393
Abstract
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody, and perhaps also of music, and continues to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; and (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
Affiliation(s)
- Piera Filippi
- Department of Artificial Intelligence, Vrije Universiteit Brussel, Brussels, Belgium.
14. Toro JM, Hoeschele M. Generalizing prosodic patterns by a non-vocal learning mammal. Anim Cogn 2016; 20:179-185. PMID: 27658675. PMCID: PMC5306188. DOI: 10.1007/s10071-016-1036-8
Abstract
Prosody, a salient aspect of speech that includes rhythm and intonation, has been shown to help infants acquire some aspects of syntax. Recent studies have shown that birds of two vocal learning species are able to categorize human speech stimuli based on prosody. In the current study, we found that the non-vocal learning rat could also discriminate human speech stimuli based on prosody. Not only that, but rats were able to generalize to novel stimuli they had not been trained with, which suggests that they had not simply memorized the properties of individual stimuli, but learned a prosodic rule. When tested with stimuli with either one or three out of the four prosodic cues removed, the rats did poorly, suggesting that all cues were necessary for the rats to solve the task. This result is in contrast to results with humans and budgerigars, both of which had previously been studied using the same paradigm. Humans and budgerigars both learned the task and generalized to novel items, but were also able to solve the task with some of the cues removed. In conclusion, rats appear to have some of the perceptual abilities necessary to generalize prosodic patterns, in a similar though not identical way to the vocal learning species that have been studied.
Affiliation(s)
- Juan M Toro
- ICREA, Pg. Lluis Companys 23, 08019 Barcelona, Spain.
- Center for Brain and Cognition, Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona, Spain.
- Marisa Hoeschele
- Department of Cognitive Biology, University of Vienna, Althanstrasse 14, 1090 Vienna, Austria.
15. Ravignani A, Fitch WT, Hanke FD, Heinrich T, Hurgitsch B, Kotz SA, Scharff C, Stoeger AS, de Boer B. What Pinnipeds Have to Say about Human Speech, Music, and the Evolution of Rhythm. Front Neurosci 2016; 10:274. PMID: 27378843. PMCID: PMC4913109. DOI: 10.3389/fnins.2016.00274
Abstract
Research on the evolution of human speech and music benefits from hypotheses and data generated in a number of disciplines. The purpose of this article is to illustrate the high relevance of pinniped research to the study of speech, musical rhythm, and their origins, bridging and complementing current research on primates and birds. We briefly discuss speech, vocal learning, and rhythm from an evolutionary and comparative perspective. We review the current state of the art on pinniped communication and behavior relevant to the evolution of human speech and music, noting interesting parallels to hypotheses on rhythmic behavior in early hominids. We suggest future research directions, both in terms of species to test and of the empirical data needed.
Affiliation(s)
- Andrea Ravignani
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.
- Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany.
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- Frederike D Hanke
- Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany.
- Tamara Heinrich
- Sensory and Cognitive Ecology, Institute for Biosciences, University of Rostock, Rostock, Germany.
- Sonja A Kotz
- Basic and Applied NeuroDynamics Lab, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands.
- Department of Neuropsychology, Max-Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Constance Scharff
- Department of Animal Behavior, Institute of Biology, Freie Universität Berlin, Berlin, Germany.
- Angela S Stoeger
- Department of Cognitive Biology, University of Vienna, Vienna, Austria.
- Bart de Boer
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium.