1. Cognitive and Neurophysiological Models of Brain Asymmetry. Symmetry (Basel) 2022. DOI: 10.3390/sym14050971

Abstract
Asymmetry is an inherent characteristic of brain organization in both humans and other vertebrate species, and is evident at the behavioral, neurophysiological, and structural levels. Brain asymmetry underlies the organization of several cognitive systems, such as emotion, communication, and spatial processing. Despite the ubiquity of asymmetries in the vertebrate brain, we are only beginning to understand the complex neuronal mechanisms underlying the interaction between hemispheric asymmetries and cognitive systems. Although empirical studies of brain asymmetries abound, theoretical models that aim to provide mechanistic explanations of hemispheric asymmetries remain sparse. This Special Issue therefore aims to highlight empirically grounded mechanistic models of brain asymmetry. Overall, six theoretical and four empirical articles were published in the Special Issue, covering a wide range of topics, from human handedness to auditory laterality in bats. Two key challenges for theoretical models of brain asymmetry are the integration of increasingly complex molecular data into testable models and the creation of theoretical models that are robust and testable across different species.
2. Unmasking the relevance of hemispheric asymmetries—Break on through (to the other side). Prog Neurobiol 2020; 192:101823. DOI: 10.1016/j.pneurobio.2020.101823

Abstract
Comparative studies on brain asymmetry date back to the 19th century but then largely disappeared due to the assumption that lateralization is uniquely human. Since the reemergence of this field in the 1970s, we have learned that left-right differences of brain and behavior exist throughout the animal kingdom and pay off in terms of sensory, cognitive, and motor efficiency. Ontogenetically, lateralization starts in many species with asymmetrical expression patterns of genes within the Nodal cascade that set the scene for later complex interactions of genetic, environmental, and epigenetic factors. These take effect at different time points of ontogeny and create asymmetries of neural networks in diverse species. As a result, depending on task demands, left- or right-hemispheric loops of feedforward or feedback projections are activated and can temporarily dominate a neural process. In addition, asymmetries of commissural transfer can shape lateralized processes in each hemisphere. It is still unclear whether interhemispheric interactions depend on an inhibition/excitation dichotomy or instead adjust the contralateral temporal neural structure to delay the other hemisphere or synchronize with it during joint action. As outlined in our review, novel animal models and approaches have been established over recent decades and have already produced a substantial increase in knowledge. Since there is practically no realm of human perception, cognition, emotion, or action that is not affected by our lateralized neural organization, insights from these comparative studies are crucial to understanding the functions and pathologies of our asymmetric brain.
Affiliation(s)
- Onur Güntürkün, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Felix Ströckens, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Sebastian Ocklenburg, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
4.

Abstract
Left-hemispheric language dominance is a well-known characteristic of the human language system. However, leftward language lateralization decreases dramatically when people communicate using whistles. Whistled languages transform a spoken language into whistles, enabling communication over great distances. To investigate the laterality of Silbo Gomero, a form of whistled Spanish, we used a vocal and a whistled dichotic listening task in a sample of 75 healthy Spanish speakers. Both individuals who were able to whistle and understand Silbo Gomero and a non-whistling control group showed a clear right-ear advantage for vocal dichotic listening. For whistled dichotic listening, the control group did not show any hemispheric asymmetries. In contrast, the whistlers' group showed a right-ear advantage for whistled stimuli. This right-ear advantage was, however, smaller than the right-ear advantage found for vocal dichotic listening. In line with a previous study on the lateralization of whistled Turkish, these findings suggest that whistled language processing is associated with a decrease in left-hemispheric and a relative increase in right-hemispheric processing. This shows that bihemispheric processing of whistled language stimuli occurs independently of the underlying language.
5. Proulx MJ, Brown DJ, Lloyd-Esenkaya T, Leveson JB, Todorov OS, Watson SH, de Sousa AA. Visual-to-auditory sensory substitution alters language asymmetry in both sighted novices and experienced visually impaired users. Appl Ergon 2020; 85:103072. PMID: 32174360. DOI: 10.1016/j.apergo.2020.103072
Abstract
Visual-to-auditory sensory substitution devices (SSDs) provide improved access to the visual environment for the visually impaired by converting images into auditory information. Research is lacking on the mechanisms involved in processing data that are perceived through one sensory modality but directly associated with a source in a different sensory modality. This matters because SSDs with auditory displays could use binaural presentation, requiring both ear canals, or monaural presentation, requiring only one, but which ear would be ideal? SSDs may be similar to reading, in that an image (a printed word) is converted into sound (when read aloud). Reading, and language more generally, is typically lateralised to the left cerebral hemisphere. Yet, unlike symbolic written language, SSDs convert images to sound based on visuospatial properties, and the right cerebral hemisphere potentially has a role in processing such visuospatial data. Here we investigated whether there is a hemispheric bias in the processing of visual-to-auditory sensory substitution information and whether that bias varies as a function of experience and visual ability. We assessed the lateralization of auditory processing with two tests: a standard dichotic listening test and a novel dichotic listening test created from the auditory information produced by an SSD, The vOICe. Participants were tested either in the lab or online with the same stimuli. We did not find a hemispheric bias in the processing of visual-to-auditory information in visually impaired, experienced vOICe users. Further, we found no difference between visually impaired, experienced vOICe users and sighted novices in the hemispheric lateralization of visual-to-auditory information processing. Although standard dichotic listening is lateralised to the left hemisphere, the auditory processing of images in SSDs is bilateral, possibly due to an increased influence of right-hemisphere processing. Auditory SSDs might therefore be equally effective with presentation to either ear if monaural, rather than binaural, presentation were necessary.
Affiliation(s)
- Michael J Proulx, Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- David J Brown, Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
- Tayfun Lloyd-Esenkaya, Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- Jack Barnett Leveson, Department of Psychology, University of Bath, Bath, BA2 7AY, UK; Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK
- Orlin S Todorov, School of Biological Sciences, The University of Queensland, St. Lucia, QLD, 4072, Australia
- Samuel H Watson, Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
- Alexandra A de Sousa, Crossmodal Cognition Laboratory, REVEAL Research Centre, University of Bath, Bath, BA2 7AY, UK; Centre for Health and Cognition, Bath Spa University, Bath, BA2 9BN, UK
6. Belyk M, Schultz BG, Correia J, Beal DS, Kotz SA. Whistling shares a common tongue with speech: bioacoustics from real-time MRI of the human vocal tract. Proc Biol Sci 2019; 286:20191116. PMID: 31551056. DOI: 10.1098/rspb.2019.1116
Abstract
Most human communication is carried by modulations of the voice. However, a wide range of cultures has developed alternative forms of communication that make use of a whistled sound source. For example, whistling is used as a highly salient signal for capturing attention, and can carry iconic cultural meanings such as the catcall, enact a formal code as in boatswain's calls, or stand as a proxy for speech in whistled languages. We used real-time magnetic resonance imaging to examine the muscular control of whistling, and we describe a strong association between the shape of the tongue and the whistled frequency. This bioacoustic profile parallels the use of the tongue in vowel production and is consistent with the role of whistled languages as proxies for spoken languages, in which one of the acoustical features of speech sounds is substituted with a frequency-modulated whistle. Furthermore, previous evidence that non-human apes may be capable of learning to whistle from humans suggests that these animals may have sensorimotor abilities similar to those that support speech in humans.
Affiliation(s)
- Michel Belyk, Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Canada; Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands
- Benjamin G Schultz, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands; Institute of Logic, Language, and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Joao Correia, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands; Basque Center on Cognition, Brain and Language, Donostia-San Sebastian, Spain; Centre for Biomedical Research (CBMR), Department of Psychology, Universidade do Algarve, Portugal
- Deryk S Beal, Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- Sonja A Kotz, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
7. Bender A. The Role of Culture and Evolution for Human Cognition. Top Cogn Sci 2019; 12:1403-1420. DOI: 10.1111/tops.12449
Affiliation(s)
- Andrea Bender, Department of Psychosocial Science & SFF Centre for Early Sapiens Behaviour (SapienCE), University of Bergen
8. Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing. Psychon Bull Rev 2018; 24:690-707. PMID: 27586002. PMCID: PMC5486865. DOI: 10.3758/s13423-016-1154-y

Abstract
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
9. Podlipniak P. The Role of the Baldwin Effect in the Evolution of Human Musicality. Front Neurosci 2017; 11:542. PMID: 29056895. PMCID: PMC5635050. DOI: 10.3389/fnins.2017.00542
Abstract
From a biological perspective, human musicality refers to the set of abilities that enable the recognition and production of music. Since music is a complex phenomenon whose features represent different stages in the evolution of human auditory abilities, the question concerning the evolutionary origin of music must focus mainly on music-specific properties and their possible biological function or functions. What usually differentiates music from other forms of human sound expression is a syntactically organized structure based on pitch classes and rhythmic units measured in reference to a musical pulse. This structure is an auditory (not acoustical) phenomenon, meaning that it is a human-specific interpretation of sounds achieved thanks to certain characteristics of the nervous system. The historical and cross-cultural diversity of this structure indicates that learning is an important part of the development of human musicality. However, the fact that there is no culture without music, the syntax of which is implicitly learned and easily recognizable, suggests that human musicality may be an adaptive phenomenon. If the use of a syntactically organized structure as a communicative phenomenon were adaptive, it would be so only in circumstances in which this structure is recognizable by more than one individual. It is therefore difficult to explain the adaptive value of an ability to recognize a syntactically organized structure that appeared accidentally, as the result of mutation or recombination, in an environment without such a structure. A possible solution is the Baldwin effect, in which a culturally invented trait is transformed into an instinctive trait by means of natural selection. It is proposed that, in the beginning, musical structure was invented and learned thanks to neural plasticity. Because structurally organized music proved adaptive (a phenotypic adaptation), e.g., as a tool of social consolidation, our predecessors began to spend considerable time and energy on music. In such circumstances, an individual was eventually born with the genetically controlled development of new neural circuitry that allowed him or her to learn music faster and with less energy use.
Affiliation(s)
- Piotr Podlipniak, Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
10. Meyer J, Dentel L, Meunier F. Categorization of Natural Whistled Vowels by Naïve Listeners of Different Language Background. Front Psychol 2017; 8:25. PMID: 28174545. PMCID: PMC5258750. DOI: 10.3389/fpsyg.2017.00025
Abstract
Whistled speech in a non-tonal language consists of the natural emulation of vocalic and consonantal qualities in a simple modulated whistled signal. This special speech register represents a natural telecommunication system that enables high levels of sentence intelligibility for trained speakers but is not directly intelligible to naïve listeners. Yet it is easily learned by speakers of the language that is being whistled, as attested by current efforts to revitalize whistled Spanish in the Canary Islands. To better understand the relation between whistled and spoken speech perception, we examined how native speakers of Spanish, French, and Standard Chinese, knowing nothing about whistled speech, categorized four Spanish whistled vowels. The results show that the listeners categorized the vowels differently depending on their native language. The Standard Chinese speakers performed worst on this task but were still able to associate a tonal whistle with vowel categories. Spanish speakers were the most accurate, and both Spanish and French participants were able to categorize the four vowels, although not as accurately as an expert whistler. These results attest that whistled speech can be used as a natural laboratory to test the perceptual processes of language.
Affiliation(s)
- Julien Meyer, Univ. Grenoble Alpes, CNRS, GIPSA-Lab, Grenoble, France; Centre National de la Recherche Scientifique, Laboratoire sur le Langage, le Cerveau et la Cognition, Bron, France
- Laure Dentel, The World Whistles Research Association, Paris, France
- Fanny Meunier, Centre National de la Recherche Scientifique, Laboratoire sur le Langage, le Cerveau et la Cognition, Bron, France; CNRS UMR7320, Laboratoire Bases, Corpus, Langage, Nice, France
12. Filippi P. Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Front Psychol 2016; 7:1393. PMID: 27733835. PMCID: PMC5039945. DOI: 10.3389/fpsyg.2016.01393
Abstract
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody, and perhaps also of music, and continues to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; and (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
Affiliation(s)
- Piera Filippi, Department of Artificial Intelligence, Vrije Universiteit Brussel, Brussels, Belgium