1. Akerman A, Etkovitch A, Kalanthroff E. Global-Local Processing in ADHD Is Not Limited to the Visuospatial Domain: Novel Evidence From the Auditory Domain. J Atten Disord 2023;27:822-829. PMID: 36779530. DOI: 10.1177/10870547231153952.
Abstract
OBJECTIVE: Global-local visuospatial processing has been widely investigated in both healthy and clinical populations. Recent studies have indicated that individuals with ADHD lack a global processing bias. However, the extant literature on global-local processing style focuses solely on the visual modality. METHODS: Participants with ADHD (N = 21) and typically developing (TD) controls (N = 24) completed an auditory global-local task in which they had to decide whether a melody was ascending or descending in global or local conditions. RESULTS: TD controls exhibited the classic global processing bias in the auditory task. The ADHD group exhibited no global processing bias, indicating similar processing of the global and local dimensions and implying that individuals with ADHD are distracted by incongruent information similarly in global and local conditions, in both visual and auditory tasks. CONCLUSION: The lack of a global processing bias in ADHD is not limited to the visuospatial modality and likely reflects a broader, more general processing style.
Affiliations:
- Aviv Akerman: The Hebrew University of Jerusalem, Jerusalem, Israel
2. Cariani P, Baker JM. Time Is of the Essence: Neural Codes, Synchronies, Oscillations, Architectures. Front Comput Neurosci 2022;16:898829. PMID: 35814343. PMCID: PMC9262106. DOI: 10.3389/fncom.2022.898829.
Abstract
Time is of the essence in how neural codes, synchronies, and oscillations might function in the encoding, representation, transmission, integration, storage, and retrieval of information in brains. This Hypothesis and Theory article examines observed and possible relations between codes, synchronies, oscillations, and the types of neural networks they require. Toward reverse-engineering informational functions in brains, prospective, alternative neural architectures incorporating principles from radio modulation and demodulation, active reverberant circuits, distributed content-addressable memory, signal-signal time-domain correlation and convolution operations, spike-correlation-based holography, and self-organizing, autoencoding anticipatory systems are outlined. Synchronies and oscillations are thought to subserve many possible functions: sensation, perception, action, cognition, motivation, affect, memory, attention, anticipation, and imagination. These include direct involvement in coding attributes of events and objects through phase-locking, as well as characteristic patterns of spike latency and oscillatory response. They are thought to be involved in segmentation and binding, working memory, attention, gating and routing of signals, temporal reset mechanisms, inter-regional coordination, time discretization, time-warping transformations, and support for temporal wave-interference-based operations. A high-level, partial taxonomy of neural codes consists of channel, temporal pattern, and spike latency codes. The functional roles of synchronies and oscillations in candidate neural codes, including oscillatory phase-offset codes, are outlined. Various forms of multiplexing neural signals are considered: time-division, frequency-division, code-division, oscillatory-phase, synchronized channels, oscillatory hierarchies, and polychronous ensembles. An expandable, annotative neural spike train framework for encoding low- and high-level attributes of events and objects is proposed. Coding schemes require appropriate neural architectures for their interpretation; time-delay, oscillatory, wave-interference, synfire chain, polychronous, and neural timing networks are discussed. Some novel concepts for formulating an alternative, more time-centric theory of brain function are also discussed. As in radio communication systems, brains can be regarded as networks of dynamic, adaptive transceivers that broadcast and selectively receive multiplexed, temporally patterned pulse signals. These signals enable complex signal interactions that select, reinforce, and bind common subpatterns and create emergent, lower-dimensional signals that propagate through spreading-activation interference networks. If memory traces share the same kinds of temporal pattern forms as active neuronal representations, then distributed, holograph-like content-addressable memories are made possible via temporal pattern resonances.
Affiliations:
- Peter Cariani: Hearing Research Center, Boston University, Boston, MA, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, USA
3. Foster NEV, Beffa L, Lehmann A. Accuracy of Tempo Judgments in Disk Jockeys Compared to Musicians and Untrained Individuals. Front Psychol 2021;12:709979. PMID: 34675835. PMCID: PMC8525396. DOI: 10.3389/fpsyg.2021.709979.
Abstract
Professional disk jockeys (DJs) are an under-studied population whose performance involves creating new musical experiences by combining existing musical materials with a high level of temporal precision. In contemporary electronic dance music, these materials have a stable tempo and are composed with the expectation of further transformation during performance by a DJ for the audience of dancers. Thus, a fundamental aspect of DJ performance is synchronizing the tempo and phase of multiple pieces of music so that, over seconds or even minutes, they may be layered and transitioned without disrupting the rhythmic pulse. This has traditionally been accomplished by manipulating the speed of individual music pieces “by ear,” without additional technological synchronization aids. However, the cumulative effect of this repeated practice on auditory tempo perception has not yet been evaluated. Well-known phenomena of experience-dependent plasticity in other populations, such as musicians, prompt the question of whether such effects exist in DJs in their domain of expertise. This pilot study examined auditory judgments of tempo in 10 professional DJs with experience mixing by ear, compared to 7 percussionists, 12 melodic instrumental musicians, and 11 untrained controls. Participants heard metronome sequences between 80 and 160 beats per minute (BPM) and estimated the tempo. In their most-trained tempo range, 120–139 BPM, DJs were more accurate (lower absolute percent error) than untrained participants. Within the DJ group, accuracy was greater for 120–139 BPM than for the slower ranges of 80–99 and 100–119 BPM. DJs did not differ in accuracy from percussionists or melodic musicians in any BPM range. Percussionists were more accurate than controls for 100–119 and 120–139 BPM. The results affirm the experience-dependent skill of professional DJs in temporal perception, with performance comparable to conventionally trained percussionists and instrumental musicians. Additionally, the pattern of results suggests a tempo-specific aspect to this training effect that may be more pronounced in DJs than in percussionists and musicians. As one of the first demonstrations of enhanced auditory perception in this unorthodox music-expert population, this work opens the way to testing whether DJs also have enhanced rhythmic production abilities and to investigating the neural substrates of this skill compared with conventional musicians.
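The accuracy measure used in this study, absolute percent error, is straightforward to compute. A minimal sketch in Python, with made-up example values rather than data from the paper:

    def absolute_percent_error(true_bpm, estimated_bpm):
        """Absolute error of a tempo judgment, as a percentage of the true tempo."""
        return abs(estimated_bpm - true_bpm) / true_bpm * 100.0

    # A 128-BPM metronome sequence judged as 120 BPM gives 6.25% error.
    print(absolute_percent_error(128.0, 120.0))  # 6.25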
Affiliations:
- Nicholas E. V. Foster: Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Lauriane Beffa: International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Alexandre Lehmann: Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
4. Notter MP, Hanke M, Murray MM, Geiser E. Encoding of Auditory Temporal Gestalt in the Human Brain. Cereb Cortex 2020;29:475-484. PMID: 29365070. DOI: 10.1093/cercor/bhx328.
Abstract
The perception of an acoustic rhythm is invariant to the absolute temporal intervals constituting a sound sequence. It is unknown where in the brain temporal Gestalt, the percept emerging from the relative temporal proximity between acoustic events, is encoded. Participants were presented with two different relative temporal patterns, each induced by three experimental conditions that used different absolute temporal patterns as the sensory basis. A linear support vector machine classifier was trained to differentiate the activation patterns in functional magnetic resonance imaging data evoked by the two percepts. The classifier decoded which percept was perceived across its sensory constituents. A searchlight analysis localized activation patterns specific to the temporal Gestalt bilaterally to the temporoparietal junction, including the planum temporale and supramarginal gyrus, and unilaterally to the right inferior frontal gyrus (pars opercularis). We show that auditory areas not only process absolute temporal intervals but also integrate them into Gestalt percepts, and that the encoding of these percepts persists in high-level associative areas. The findings extend existing knowledge of the processing of absolute temporal patterns to the processing of relative temporal patterns relevant to the sequential binding of perceptual elements into Gestalt.
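A schematic sketch of the decoding logic this abstract describes, assuming a scikit-learn workflow; the data shapes, labels, and leave-one-condition-out scheme are illustrative stand-ins, not the authors' actual pipeline:

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 120, 500
    X = rng.standard_normal((n_trials, n_voxels))  # trial-wise voxel patterns
    percept = rng.integers(0, 2, n_trials)         # temporal Gestalt A vs. B
    condition = rng.integers(0, 3, n_trials)       # absolute-timing variant

    # Holding out one absolute-timing condition at a time tests whether the
    # classifier decodes the percept across its different sensory constituents.
    scores = cross_val_score(LinearSVC(), X, percept,
                             groups=condition, cv=LeaveOneGroupOut())
    print(scores.mean())  # ~0.5 on this noise-only toy data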
Affiliations:
- Michael P. Notter: Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Michael Hanke: Institute of Psychology, Otto-von-Guericke-University; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Micah M. Murray: Department of Radiology; Neuropsychology and Neurorehabilitation Service; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Ophthalmology Department, University of Lausanne and Fondation Asile des Aveugles, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Eveline Geiser: Department of Radiology; Neuropsychology and Neurorehabilitation Service; McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA, USA
5. Lee JH, Wang X, Bendor D. The role of adaptation in generating monotonic rate codes in auditory cortex. PLoS Comput Biol 2020;16:e1007627. PMID: 32069272. PMCID: PMC7048304. DOI: 10.1371/journal.pcbi.1007627.
Abstract
In primary auditory cortex, slowly repeated acoustic events are represented temporally by the stimulus-locked activity of single neurons. Single-unit studies in awake marmosets (Callithrix jacchus) have shown that a sub-population of these neurons also monotonically increase or decrease their average discharge rate during stimulus presentation for higher repetition rates. Building on a computational single-neuron model that generates stimulus-locked responses with stimulus-evoked excitation followed by strong inhibition, we find that stimulus-evoked short-term depression is sufficient to produce synchronized monotonic positive and negative responses to slowly repeated stimuli. By exploring model robustness and comparing it to other models for adaptation to such stimuli, we conclude that short-term depression best explains our observations in single-unit recordings in awake marmosets. Together, our results show how a simple biophysical mechanism in single neurons can generate complementary neural codes for acoustic stimuli.
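A toy sketch of the short-term depression mechanism the authors favor, assuming a standard resource-depletion model; the parameters U and tau are illustrative, not values fitted to the marmoset data:

    import numpy as np

    def per_event_drive(rep_rate_hz, U=0.4, tau=0.5, n_events=20):
        """Mean synaptic drive per event when resources deplete and recover."""
        r, drives = 1.0, []
        interval = 1.0 / rep_rate_hz
        for _ in range(n_events):
            drives.append(r)                               # drive for this event
            r *= 1.0 - U                                   # depression consumes resources
            r = 1.0 - (1.0 - r) * np.exp(-interval / tau)  # recovery toward 1
        return np.mean(drives)

    for rate in (2, 4, 8, 16):
        print(rate, round(per_event_drive(rate), 2))
    # Faster repetition leaves less time for recovery, so per-event drive falls
    # monotonically with repetition rate; depending on how this drive feeds
    # excitation versus inhibition, the summed firing rate can then increase or
    # decrease with repetition rate, as in the positive and negative monotonic
    # responses described above.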
Affiliations:
- Jong Hoon Lee: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Daniel Bendor: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
6. Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia 2019;137:107284. PMID: 31783081. DOI: 10.1016/j.neuropsychologia.2019.107284.
Abstract
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged by a representative set of musical rhythm operations (rhythm, beat, and meter) and linguistic syntax operations (merge, movement, and reanalysis). Rhythm engaged a bilateral sensorimotor network consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, inferior parietal lobule, and putamen. By contrast, syntax mostly recruited a left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula, neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing a detailed anatomical account of the overlap in sensorimotor regions recruited for musical rhythm and linguistic syntax.
7. Qi W, Nakajima T, Sakamoto M, Kato K, Kawakami Y, Kanosue K. Walking and finger tapping can be done with independent rhythms. Sci Rep 2019;9:7620. PMID: 31110194. PMCID: PMC6527701. DOI: 10.1038/s41598-019-43824-0.
Abstract
Rhythmic movements occur in many aspects of daily life; examples include clapping the hands and walking. Producing two independent rhythms with multiple limbs is considered extremely difficult. In the present study we evaluated whether two different, independent rhythms involving finger tapping and walking could be produced. In Experiment I, twenty subjects with no experience of musical instrument training performed rhythmic finger tapping with the right index finger together with one of four lower limb movements: (1) self-paced walking, (2) given-paced walking, (3) alternating bilateral heel tapping from a sitting position, and (4) unilateral heel tapping, with the leg ipsilateral to the tapping finger, from a sitting position. The target intervals were 375 ms for finger tapping and 600 ms for heel strikes (walking steps or heel taps). An even distribution of relative phases between finger taps and heel strikes was taken as the criterion of independence of the two rhythms. In the self-paced and given-paced walking tasks, 16 of 20 subjects successfully performed finger tapping and walking with independent rhythms without any special practice. In contrast, in the bilateral and unilateral heel-tapping tasks, 19 subjects failed to perform the two movements independently, falling into interrelated rhythms with a ratio of mostly 2:1. In Experiment II, similar independence of finger tapping and walking at a given pace was observed for heel-strike intervals of 400, 600, and 800 ms, with finger-tapping intervals held constant at 375 ms. These results suggest that finger tapping and walking are controlled by separate neural mechanisms, presumably with a supra-spinal locus for finger tapping and a spinal locus for walking.
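A sketch of the independence criterion described above: express each finger tap as a phase within the ongoing step cycle and summarize the phase distribution with the mean resultant length, which is near 0 when phases are evenly distributed (independent rhythms) and near 1 when the rhythms lock. The event times below are synthetic stand-ins for recorded data:

    import numpy as np

    def relative_phases(tap_times, step_times):
        """Phase (0-1) of each tap within the enclosing heel-strike interval."""
        phases = []
        for t in tap_times:
            i = np.searchsorted(step_times, t) - 1
            if 0 <= i < len(step_times) - 1:
                phases.append((t - step_times[i]) / (step_times[i + 1] - step_times[i]))
        return np.array(phases)

    def mean_resultant_length(phases):
        """~0 for uniformly spread phases, ~1 for tight phase locking."""
        return np.abs(np.mean(np.exp(2j * np.pi * phases)))

    taps = np.arange(0.0, 12.0, 0.375)   # 375-ms tap intervals, as in the task
    steps = np.arange(0.0, 12.0, 0.600)  # 600-ms heel-strike intervals
    print(mean_resultant_length(relative_phases(taps, steps)))  # low: independent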
Affiliations:
- Weihuang Qi: Graduate School of Sport Sciences, Waseda University, Saitama, Japan
- Tsuyoshi Nakajima: Department of Integrative Physiology, Kyorin University School of Medicine, Tokyo, Japan
- Masanori Sakamoto: Faculty of Education, Department of Physical Education, Kumamoto University, Kumamoto, Japan
- Kouki Kato: Faculty of Sport Sciences, Waseda University, Saitama, Japan
- Yasuo Kawakami: Faculty of Sport Sciences, Waseda University, Saitama, Japan
9. Green B, Jääskeläinen IP, Sams M, Rauschecker JP. Distinct brain areas process novel and repeating tone sequences. Brain Lang 2018;187:104-114. PMID: 30278992. DOI: 10.1016/j.bandl.2018.09.006.
Abstract
The auditory dorsal stream has been implicated in sensorimotor integration and concatenation of sequential sound events, both being important for processing of speech and music. The auditory ventral stream, by contrast, is characterized as subserving sound identification and recognition. We studied the respective roles of the dorsal and ventral streams, including recruitment of basal ganglia and medial temporal lobe structures, in the processing of tone sequence elements. A sequence was presented incrementally across several runs during functional magnetic resonance imaging in humans, and we compared activation by sequence elements when heard for the first time ("novel") versus when the elements were repeating ("familiar"). Our results show a shift in tone-sequence-dependent activation from posterior-dorsal cortical areas and the basal ganglia during the processing of less familiar sequence elements towards anterior and ventral cortical areas and the medial temporal lobe after the encoding of highly familiar sequence elements into identifiable auditory objects.
Affiliations:
- Brannon Green: Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA
- Iiro P. Jääskeläinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO, Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, Finland
- Mikko Sams: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO, Espoo, Finland
- Josef P. Rauschecker: Laboratory of Integrative Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road NW, New Research Building-WP19, Washington, DC 20007, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 AALTO, Espoo, Finland; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
10. Rajendran VG, Teki S, Schnupp JWH. Temporal Processing in Audition: Insights from Music. Neuroscience 2018;389:4-18. PMID: 29108832. PMCID: PMC6371985. DOI: 10.1016/j.neuroscience.2017.10.041.
Abstract
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research.
Affiliations:
- Vani G. Rajendran: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Sundeep Teki: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Jan W. H. Schnupp: Department of Biomedical Sciences, City University of Hong Kong, 31 To Yuen Street, Kowloon Tong, Hong Kong
11. Zuk NJ, Carney LH, Lalor EC. Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat. Front Neurosci 2018;12:349. PMID: 29896080. PMCID: PMC5987030. DOI: 10.3389/fnins.2018.00349.
Abstract
Prior research has shown that musical beats are salient at the level of the cortex in humans, yet below the cortex there is considerable sub-cortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio-frequency bias for beat timing, could result from sub-cortical processing. Here, we used models of the auditory nerve and of midbrain-level amplitude-modulation filtering to simulate sub-cortical neural activity in response to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, sub-cortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo-identification algorithm that was optimized for simple stimuli often failed for recordings of music; for music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Bottom-up processes alone are thus insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification.
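The final step described here, scaling a synchronization frequency down to the beat, can be illustrated with a much-simplified stand-in for the model: a half-wave-rectified envelope in place of simulated sub-cortical activity, with all signal parameters invented:

    import numpy as np

    fs = 1000.0
    t = np.arange(0, 10, 1 / fs)
    beat_hz = 2.0                           # a 120-BPM beat
    # Toy "model activity" that synchronizes at twice the beat frequency:
    env = np.clip(np.cos(2 * np.pi * 2 * beat_hz * t), 0, None)

    spectrum = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    peak_hz = freqs[np.argmax(spectrum)]    # strongest synchronization: 4 Hz

    bpm = peak_hz * 60.0                    # 240 BPM, a multiple of the beat
    while bpm > 160.0:                      # fold into a plausible tempo range
        bpm /= 2.0
    print(peak_hz, bpm)                     # 4.0 Hz -> 120 BPM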
Affiliations:
- Nathaniel J. Zuk: Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Laurel H. Carney: Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA
- Edmund C. Lalor: Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Del Monte Institute for Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Trinity Centre for Bioengineering, Trinity College Dublin, Dublin, Ireland
12. Rajendran VG, Harper NS, Garcia-Lazaro JA, Lesica NA, Schnupp JWH. Midbrain adaptation may set the stage for the perception of musical beat. Proc Biol Sci 2017;284:20171455. PMID: 29118141. PMCID: PMC5698641. DOI: 10.1098/rspb.2017.1455.
Abstract
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
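A toy version of the firing-rate-adaptation account sketched in this abstract, assuming an exponentially decaying adaptation variable; the rhythm and parameters are illustrative, not the study's stimuli:

    import numpy as np

    def event_responses(event_times, tau=0.4, strength=0.6):
        """Response to each sound, scaled down by adaptation from recent sounds."""
        a, last, out = 0.0, None, []
        for t in event_times:
            if last is not None:
                a *= np.exp(-(t - last) / tau)  # adaptation decays over the gap
            out.append(1.0 - a)                 # adapted response to this event
            a = min(1.0, a + strength)          # each sound adds adaptation
            last = t
        return out

    rhythm = [0.0, 0.25, 0.5, 1.0, 1.25, 1.5, 2.0]  # groups starting on a 1-s beat
    for t, r in zip(rhythm, event_responses(rhythm)):
        print(t, round(r, 2))  # sounds after the longer gaps respond most strongly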
Affiliations:
- Vani G. Rajendran: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicol S. Harper: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicholas A. Lesica: UCL Ear Institute, 332 Grays Inn Rd, Kings Cross, London WC1X 8EE, UK
- Jan W. H. Schnupp: Department of Biomedical Sciences, City University of Hong Kong, 1/F, Block 1, To Yuen Building, 31 To Yuen Street, Hong Kong
13. Duarte F, Lemus L. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns. Front Integr Neurosci 2017;11:17. PMID: 28848406. PMCID: PMC5550683. DOI: 10.3389/fnint.2017.00017.
Abstract
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V), or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately; second, perceptual time compression occurs for AP visual stimuli; lastly, AV discriminations are determined by A dominance rather than by AV enhancement.
Affiliations:
- Fabiola Duarte: Primate Neurobiology Laboratory, Instituto de Fisiología Celular, Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Luis Lemus: Primate Neurobiology Laboratory, Instituto de Fisiología Celular, Neurociencia Cognitiva, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
14. Mol C, Chen A, Kager RWJ, Ter Haar SM. Prosody in birdsong: A review and perspective. Neurosci Biobehav Rev 2017;81:167-180. PMID: 28232050. DOI: 10.1016/j.neubiorev.2017.02.016.
Abstract
Birdsong shows striking parallels with human speech. Previous comparisons between birdsong and human vocalizations focused on syntax, phonology and phonetics. In this review, we propose that future comparative research should expand its focus to include prosody, i.e. the temporal and melodic properties that extend over larger units of song. To this end, we consider the similarities between birdsong structure and the prosodic hierarchy in human speech and between context-dependent acoustic variations in birdsong and the biological codes in human speech. Moreover, we discuss songbirds' sensitivity to prosody-like acoustic features and the role of such features in song segmentation and song learning in relation to infants' sensitivity to prosody and the role of prosody in early language acquisition. Finally, we make suggestions for future comparative birdsong research, including a framework of how prosody in birdsong can be studied. In particular, we propose to analyze birdsong as a multidimensional signal composed of specific acoustic features, and to assess whether these acoustic features are organized into prosody-like structures.
Affiliations:
- Carien Mol: Cognitive Neurobiology and Helmholtz Institute, Department of Psychology, Utrecht University, P.O. Box 80086, 3508 TB Utrecht, The Netherlands
- Aoju Chen: Utrecht Institute of Linguistics OTS, Department of Languages, Literature and Communication, Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands
- René W. J. Kager: Utrecht Institute of Linguistics OTS, Department of Languages, Literature and Communication, Utrecht University, Trans 10, 3512 JK Utrecht, The Netherlands
- Sita M. Ter Haar: Cognitive Neurobiology and Helmholtz Institute, Department of Psychology, Utrecht University, P.O. Box 80086, 3508 TB Utrecht, The Netherlands
15. Sameiro-Barbosa CM, Geiser E. Sensory Entrainment Mechanisms in Auditory Perception: Neural Synchronization Cortico-Striatal Activation. Front Neurosci 2016;10:361. PMID: 27559306. PMCID: PMC4978719. DOI: 10.3389/fnins.2016.00361.
Abstract
The auditory system displays modulations in sensitivity that can align with the temporal structure of the acoustic environment. This sensory entrainment can facilitate sensory perception and is particularly relevant for audition. Systems neuroscience is slowly uncovering the neural mechanisms underlying the behaviorally observed sensory entrainment effects in the human sensory system. The present article summarizes the prominent behavioral effects of sensory entrainment and reviews our current understanding of the neural basis of sensory entrainment, such as synchronized neural oscillations, and potentially, neural activation in the cortico-striatal system.
Affiliations:
- Catia M. Sameiro-Barbosa: Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland
- Eveline Geiser: Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; The Laboratory for Investigative Neurophysiology, Department of Radiology, Centre Hospitalier Universitaire Vaudois, Lausanne, Switzerland; Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
16. Spierings MJ, Ten Cate C. Zebra Finches As a Model Species to Understand the Roots of Rhythm. Front Neurosci 2016;10:345. PMID: 27499731. PMCID: PMC4956661. DOI: 10.3389/fnins.2016.00345.
Affiliations:
- Michelle J. Spierings: Behavioural Biology, Institute Biology Leiden, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Carel Ten Cate: Behavioural Biology, Institute Biology Leiden, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
17. Ten Cate C, Spierings M, Hubert J, Honing H. Can Birds Perceive Rhythmic Patterns? A Review and Experiments on a Songbird and a Parrot Species. Front Psychol 2016;7:730. PMID: 27242635. PMCID: PMC4872036. DOI: 10.3389/fpsyg.2016.00730.
Abstract
While humans can easily entrain their behavior with the beat in music, this ability is rare among animals. Yet comparative studies in non-human species are needed if we want to understand how and why this ability evolved. Entrainment requires two abilities: (1) recognizing the regularity in the auditory stimulus and (2) adjusting one's own motor output to the perceived pattern. It has been suggested that beat perception and entrainment are linked to the capacity for vocal learning. The existence of some bird species showing beat induction, and of both vocal-learning and vocal non-learning bird taxa, makes birds relevant models for comparative research on rhythm perception and its link to vocal learning. Also, some bird vocalizations show strong regularity in rhythmic structure, suggesting that birds might perceive rhythmic structures. In this paper we review the available experimental evidence for the perception of regularity and rhythms by birds, such as the ability to distinguish regular from irregular stimuli over tempo transformations, and we report data from new experiments. While some species show a limited ability to detect regularity, most evidence suggests that birds attend primarily to absolute rather than relative timing of patterns and to local features of stimuli. We conclude that, apart from some large parrot species, there is limited evidence for beat and regularity perception among birds and that the link to vocal learning is unclear. We next report new experiments in which zebra finches and budgerigars (both vocal learners) were first trained to distinguish a regular from an irregular pattern of beats and then tested on various tempo transformations of these stimuli. Both species showed reduced discrimination after tempo transformations. This suggests that, as in earlier studies, they attended mainly to local temporal features of the stimuli and not to their overall regularity. However, some individuals of both species showed an additional sensitivity to the more global pattern if some local features were left unchanged. Altogether, our study indicates both between- and within-species variation, with birds attending to a mixture of local and global rhythmic features.
Affiliations:
- Carel Ten Cate: Behavioural Biology, Institute of Biology Leiden and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Michelle Spierings: Behavioural Biology, Institute of Biology Leiden and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Jeroen Hubert: Behavioural Biology, Institute of Biology Leiden and Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Henkjan Honing: Amsterdam Brain and Cognition, Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
18. Brain responses in humans reveal ideal observer-like sensitivity to complex acoustic patterns. Proc Natl Acad Sci U S A 2016;113:E616-E625. PMID: 26787854. DOI: 10.1073/pnas.1508523113.
Abstract
We use behavioral methods, magnetoencephalography, and functional MRI to investigate how human listeners discover temporal patterns and statistical regularities in complex sound sequences. Sensitivity to patterns is fundamental to sensory processing, particularly in the auditory system, because most auditory signals only have meaning as successions over time. Previous evidence suggests that the brain is tuned to the statistics of sensory stimulation, but the process through which this arises has been elusive. We demonstrate that listeners are remarkably sensitive to the emergence of complex patterns within rapidly evolving sound sequences, performing on par with an ideal observer model. Brain responses reveal online processes of evidence accumulation: dynamic changes in tonic activity precisely correlate with the expected precision or predictability of the ongoing auditory input, both in terms of deterministic (first-order) structure and the entropy of random sequences. Source analysis demonstrates an interaction between primary auditory cortex, hippocampus, and inferior frontal gyrus in the process of discovering the regularity within the ongoing sound sequence. The results are consistent with precision-based predictive coding accounts of perceptual inference and provide compelling neurophysiological evidence of the brain's capacity to encode high-order temporal structure in sensory signals.
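One of the quantities tracked here, the predictability of the unfolding sequence, can be illustrated with a first-order (transition) entropy estimate; the tone sequences below are invented examples, not the study's stimuli:

    import numpy as np

    def transition_entropy(tones):
        """Entropy (bits) of the next tone given the current tone."""
        following = {}
        for a, b in zip(tones[:-1], tones[1:]):
            following.setdefault(a, []).append(b)
        n = len(tones) - 1
        h = 0.0
        for successors in following.values():
            _, counts = np.unique(successors, return_counts=True)
            p = counts / counts.sum()
            h += (len(successors) / n) * -np.sum(p * np.log2(p))
        return h

    pattern = [440, 554, 659] * 20  # deterministic (first-order) structure
    random_seq = list(np.random.default_rng(1).choice([440, 494, 554, 622, 659], 60))
    print(transition_entropy(pattern))     # 0.0: fully predictable
    print(transition_entropy(random_seq))  # near log2(5): high entropy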
19. Bendor D. The role of inhibition in a computational model of an auditory cortical neuron during the encoding of temporal information. PLoS Comput Biol 2015;11:e1004197. PMID: 25879843. PMCID: PMC4400160. DOI: 10.1371/journal.pcbi.1004197.
Abstract
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.
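A schematic sketch of the two regimes this model contrasts, with invented kernels, delays, and gains rather than the paper's fitted parameters:

    import numpy as np

    fs = 1000                                # samples per second
    t = np.arange(0, 0.5, 1 / fs)
    events = np.zeros_like(t)
    events[::50] = 1.0                       # acoustic events every 50 ms

    def kernel(tau):
        """Alpha-function postsynaptic kernel with peak value 1."""
        k = np.arange(0, 0.1, 1 / fs)
        return (k / tau) * np.exp(1 - k / tau)

    def net_drive(inh_delay_s, inh_gain):
        exc = np.convolve(events, kernel(0.005))[: t.size]
        inh = inh_gain * np.convolve(events, kernel(0.010))[: t.size]
        pad = np.zeros(int(inh_delay_s * fs))
        inh = np.concatenate([pad, inh])[: t.size]
        return np.clip(exc - inh, 0.0, None)

    locked = net_drive(0.010, 1.5)  # strong, delayed inhibition
    flat = net_drive(0.000, 1.0)    # coincident, balanced inhibition
    print(locked.max(), flat.max())
    # Delayed inhibition leaves large, brief stimulus-locked transients; the
    # coincident, balanced case largely cancels them, weakening the net drive.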
Affiliations:
- Daniel Bendor: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London, UK
20. Patel AD, Iversen JR. The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis. Front Syst Neurosci 2014;8:57. PMID: 24860439. PMCID: PMC4026735. DOI: 10.3389/fnsys.2014.00057.
Abstract
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This "action simulation for auditory prediction" (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
Affiliations:
- John R. Iversen: Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA