51
Kim CE, Kim YK, Chung G, Im HJ, Lee DS, Kim J, Kim SJ. Identifying neuropathic pain using 18F-FDG micro-PET: A multivariate pattern analysis. Neuroimage 2014; 86:311-6. [DOI: 10.1016/j.neuroimage.2013.10.001]
52
Klein ME, Zatorre RJ. Representations of Invariant Musical Categories Are Decodable by Pattern Analysis of Locally Distributed BOLD Responses in Superior Temporal and Intraparietal Sulci. Cereb Cortex 2014; 25:1947-57. [PMID: 24488957 DOI: 10.1093/cercor/bhu003]
Abstract
In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of left/right hemispheres; primary/nonprimary cortices; and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to 2-tone musical intervals, which they perceive categorically while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by frequency ratio between tones) or pitch height (perceived noncategorically, frequency ratios remain constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the non-CP of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech; espouse roles for 2 segregated processing streams; and support hierarchical processing models for CP.
Affiliation(s)
- Mike E Klein
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, Québec, Canada H3A 2B4; International Laboratory for Brain, Music and Sound Research, Montréal, Québec, Canada H3C 3J7
- Robert J Zatorre
- Cognitive Neuroscience Unit, Montréal Neurological Institute, McGill University, Montréal, Québec, Canada H3A 2B4; International Laboratory for Brain, Music and Sound Research, Montréal, Québec, Canada H3C 3J7
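As a reading aid for this entry: the "distributed activity patterns in spheres of voxels" analysis is a searchlight decoding procedure. Below is a toy one-dimensional sketch of that logic, not the authors' pipeline: a nearest-centroid decoder with leave-one-out cross-validation on synthetic data, where the voxel indices, signal strength, and window radius are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trials x voxels" data with two conditions (e.g. two interval
# qualities). Voxels 20-29 carry a condition-specific multi-voxel pattern
# (an assumption of this sketch); the remaining voxels are pure noise.
n_trials, n_voxels = 40, 60
labels = np.repeat([0, 1], n_trials // 2)
data = rng.normal(size=(n_trials, n_voxels))
data[labels == 1, 20:30] += 1.5 * rng.normal(size=10)

def loo_nearest_centroid(X, y):
    """Leave-one-out decoding accuracy with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

# 1-D "searchlight": decode within a sliding neighbourhood around each voxel,
# producing a map of where local patterns carry condition information.
radius = 5
acc_map = np.array([
    loo_nearest_centroid(data[:, max(0, v - radius):v + radius], labels)
    for v in range(n_voxels)
])
```

In a real analysis the neighbourhood is a 3-D sphere and the decoder is typically a linear classifier, but the map-of-local-decoding-accuracies structure is the same.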
53
Bulthé J, De Smedt B, Op de Beeck HP. Format-dependent representations of symbolic and non-symbolic numbers in the human cortex as revealed by multi-voxel pattern analyses. Neuroimage 2013; 87:311-22. [PMID: 24201011 DOI: 10.1016/j.neuroimage.2013.10.049]
Abstract
Neuroimaging studies in the last 20 years have tried to unravel the neural correlates of number processing across formats in humans and non-human primates. Results point to the intraparietal sulcus as the core area for an abstract representation of numerical quantity. On the other hand, there exist a variety of behavioral and neuroimaging data that are difficult to reconcile with the existence of such an abstract representation. In this study, we addressed this issue by applying multi-voxel pattern analysis (MVPA) to functional Magnetic Resonance Imaging (fMRI) data to unravel the neural representations of symbolic (digits) and non-symbolic (dots) numbers and their possible overlap on three different spatial scales (entire lobules, smaller regions of interest and a searchlight analysis with 2-voxel radius). Results showed that numbers in both formats are decodable in occipital, frontal, temporal and parietal regions. However, there were no overlapping representations between dots and digits on any of the spatial scales. These data suggest that the human brain does not contain an abstract representation of numerical magnitude.
Affiliation(s)
- J Bulthé
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Tiensestraat 102, B-3000 Leuven, Belgium; Parenting and Special Education Research Unit, University of Leuven (KU Leuven), Leopold Vanderkelenstraat 32, B-3000 Leuven, Belgium
- B De Smedt
- Parenting and Special Education Research Unit, University of Leuven (KU Leuven), Leopold Vanderkelenstraat 32, B-3000 Leuven, Belgium.
- H P Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Tiensestraat 102, B-3000 Leuven, Belgium.
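As a reading aid for this entry: one common way to test whether two formats share a representation is cross-format decoding, i.e. train a decoder on one format and test it on the other. The sketch below illustrates that logic under the abstract's conclusion (each format decodable, but via unrelated patterns); the data, decoder, and numbers are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_vox = 60, 40
numbers = np.tile([0, 1], n_trials // 2)  # two numerosities, e.g. "2" vs "8"
# Assumption of this sketch: each format is decodable, but through unrelated
# multi-voxel patterns (no shared, format-independent code).
pat_digits = rng.normal(size=(2, n_vox))
pat_dots = rng.normal(size=(2, n_vox))
digits = pat_digits[numbers] + rng.normal(size=(n_trials, n_vox))
dots = pat_dots[numbers] + rng.normal(size=(n_trials, n_vox))

def centroid_acc(X_train, y_train, X_test, y_test):
    """Train a nearest-centroid decoder on one set, test on another."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(int)
    return (pred == y_test).mean()

half = n_trials // 2
# Within-format: train and test on digit trials (held-out split) -> decodable.
within = centroid_acc(digits[:half], numbers[:half], digits[half:], numbers[half:])
# Cross-format: train on digits, test on dots -> near chance when no overlap.
cross = centroid_acc(digits, numbers, dots, numbers)
```

High within-format accuracy combined with chance-level cross-format accuracy is the signature of format-dependent representations that the abstract reports.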
54
Abstract
Music has existed in human societies since prehistory, perhaps because it allows expression and regulation of emotion and evokes pleasure. In this review, we present findings from cognitive neuroscience that bear on the question of how we get from perception of sound patterns to pleasurable responses. First, we identify some of the auditory cortical circuits that are responsible for encoding and storing tonal patterns and discuss evidence that cortical loops between auditory and frontal cortices are important for maintaining musical information in working memory and for the recognition of structural regularities in musical patterns, which then lead to expectancies. Second, we review evidence concerning the mesolimbic striatal system and its involvement in reward, motivation, and pleasure in other domains. Recent data indicate that this dopaminergic system mediates pleasure associated with music; specifically, reward value for music can be coded by activity levels in the nucleus accumbens, whose functional connectivity with auditory and frontal areas increases as a function of increasing musical reward. We propose that pleasure in music arises from interactions between cortical loops that enable predictions and expectancies to emerge from sound patterns and subcortical systems responsible for reward and valuation.
55
Seger CA, Spiering BJ, Sares AG, Quraini SI, Alpeter C, David J, Thaut MH. Corticostriatal contributions to musical expectancy perception. J Cogn Neurosci 2013; 25:1062-77. [PMID: 23410032 DOI: 10.1162/jocn_a_00371]
Abstract
This study investigates the functional neuroanatomy of harmonic music perception with fMRI. We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in the following four ways: Standard Cadence (expected resolution), Deceptive Cadence (moderate deviation from expectation), Modulated Cadence (strong deviation from expectation but remaining within the harmonic structure of Western tonal music), and Atonal Cadence (strongest deviation from expectation by leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the basal ganglia (BG) were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains. The posterior STG was not significantly modulated by expectancy. Granger causality mapping found functional connectivity between IFG, anterior STG, posterior STG, and the BG during music perception. Our results imply the IFG, anterior STG, and the BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.
56
Coutanche MN, Thompson-Schill SL. Informational connectivity: identifying synchronized discriminability of multi-voxel patterns across the brain. Front Hum Neurosci 2013; 7:15. [PMID: 23403700 PMCID: PMC3566529 DOI: 10.3389/fnhum.2013.00015]
Abstract
The fluctuations in a brain region's activation levels over a functional magnetic resonance imaging (fMRI) time-course are used in functional connectivity (FC) to identify networks with synchronous responses. It is increasingly recognized that multi-voxel activity patterns contain information that cannot be extracted from univariate activation levels. Here we present a novel analysis method that quantifies regions' synchrony in multi-voxel activity pattern discriminability, rather than univariate activation, across a timeseries. We introduce a measure of multi-voxel pattern discriminability at each time-point, which is then used to identify regions that share synchronous time-courses of condition-specific multi-voxel information. This method has the sensitivity and access to distributed information that multi-voxel pattern analysis enjoys, allowing it to be applied to data from conditions not separable by univariate responses. We demonstrate this by analyzing data collected while people viewed four different types of man-made objects (typically not separable by univariate analyses) using both FC and informational connectivity (IC) methods. IC reveals networks of object-processing regions that are not detectable using FC. The IC results support prior findings and hypotheses about object processing. This new method allows investigators to ask questions that are not addressable through typical FC, just as multi-voxel pattern analysis (MVPA) has added new research avenues to those addressable with the general linear model (GLM).
Affiliation(s)
- Marc N Coutanche
- Department of Psychology, University of Pennsylvania Philadelphia, PA, USA
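As a reading aid for this entry: the method's two steps are (1) score each region's multi-voxel pattern discriminability at every timepoint, then (2) correlate those discriminability time-courses between regions. A minimal numpy sketch of that logic follows; the prototype-correlation discriminability score and the synthetic "shared signal strength" are simplifying assumptions (in the published method the condition prototypes come from independent training data).

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy "regions" (20 voxels each), 100 timepoints, two conditions.
n_t, n_vox = 100, 20
conds = rng.integers(0, 2, size=n_t)
protos = {r: rng.normal(size=(2, n_vox)) for r in ("A", "B")}
# Assumption: a shared timepoint-by-timepoint signal strength makes the two
# regions' pattern information wax and wane together -- the situation
# informational connectivity (IC) is designed to detect.
strength = 1.5 * np.abs(rng.normal(size=n_t))
regions = {
    r: strength[:, None] * protos[r][conds] + rng.normal(size=(n_t, n_vox))
    for r in ("A", "B")
}

def discriminability(X, y, prototypes):
    """Per-timepoint discriminability: correlation of the current pattern with
    the correct condition's prototype minus correlation with the other's."""
    out = np.empty(len(y))
    for t in range(len(y)):
        r_same = np.corrcoef(X[t], prototypes[y[t]])[0, 1]
        r_other = np.corrcoef(X[t], prototypes[1 - y[t]])[0, 1]
        out[t] = r_same - r_other
    return out

d_A = discriminability(regions["A"], conds, protos["A"])
d_B = discriminability(regions["B"], conds, protos["B"])
ic = np.corrcoef(d_A, d_B)[0, 1]  # informational connectivity estimate
```

The key contrast with functional connectivity is that `ic` correlates time-courses of pattern *information*, not of univariate activation, so regions with matched mean responses across conditions can still show connectivity.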
57
Kragel PA, Carter RM, Huettel SA. What makes a pattern? Matching decoding methods to data in multivariate pattern analysis. Front Neurosci 2012; 6:162. [PMID: 23189035 PMCID: PMC3505006 DOI: 10.3389/fnins.2012.00162]
Abstract
Research in neuroscience faces the challenge of integrating information across different spatial scales of brain function. A promising technique for harnessing information at a range of spatial scales is multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. While the prevalence of MVPA has increased dramatically in recent years, its typical implementations for classification of mental states utilize only a subset of the information encoded in local fMRI signals. We review published studies employing multivariate pattern classification since the technique’s introduction, which reveal an extensive focus on the improved detection power that linear classifiers provide over traditional analysis techniques. We demonstrate using simulations and a searchlight approach, however, that non-linear classifiers are capable of extracting distinct information about interactions within a local region. We conclude that for spatially localized analyses, such as searchlight and region of interest, multiple classification approaches should be compared in order to match fMRI analyses to the properties of local circuits.
Affiliation(s)
- Philip A Kragel
- Department of Psychology and Neuroscience, Duke University Durham, NC, USA ; Center for Cognitive Neuroscience, Duke University Durham, NC, USA
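As a reading aid for this entry: the classic case where a non-linear classifier extracts interaction information that a linear one cannot is an XOR-style code, where the condition is carried by whether two voxels (de)activate together. The sketch below uses synthetic data with a nearest-centroid decoder (linear boundary) versus a one-nearest-neighbour decoder (non-linear boundary) as stand-ins for the classifier families the paper compares; all specifics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two voxels whose *interaction* codes the condition (XOR layout):
# condition 1 when both voxels go the same way, condition 0 otherwise.
n = 200
v1 = rng.choice([-1.0, 1.0], size=n)
v2 = rng.choice([-1.0, 1.0], size=n)
y = (v1 * v2 > 0).astype(int)
X = np.column_stack([v1, v2]) + rng.normal(scale=0.3, size=(n, 2))

def loo_accuracy(classify):
    """Leave-one-out accuracy for a classify(X_train, y_train, x) function."""
    preds = [classify(np.delete(X, i, 0), np.delete(y, i), X[i]) for i in range(n)]
    return np.mean(np.array(preds) == y)

def nearest_centroid(Xtr, ytr, x):   # linear decision boundary
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

def one_nn(Xtr, ytr, x):             # non-linear decision boundary
    return int(ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))])

acc_linear = loo_accuracy(nearest_centroid)
acc_nonlinear = loo_accuracy(one_nn)
```

The class centroids coincide in the XOR layout, so the linear decoder hovers at chance while the non-linear decoder succeeds: this is the "distinct information about interactions within a local region" the abstract refers to.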
58
Abstract
The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants with different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
Affiliation(s)
- Andreas Schindler
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
59
Abstract
Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions [1]. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures. Pitch changes are an integral part of both spoken language and song. Despite sharing some of the same psychological and neural mechanisms, the authors conclude there are fundamental differences between them.
Affiliation(s)
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada.
60
Abstract
Playing a musical instrument requires a complex skill set that depends on the brain's ability to quickly integrate information from multiple senses. It has been well documented that intensive musical training alters brain structure and function within and across multisensory brain regions, supporting the experience-dependent plasticity model. Here, we argue that this experience-dependent plasticity occurs because of the multisensory nature of the brain and may be an important contributing factor to musical learning. This review highlights key multisensory regions within the brain and discusses their role in the context of music learning and rehabilitation.
Affiliation(s)
- Emily Zimmerman
- Department of Newborn Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
61
Nemoto I. Evoked magnetoencephalographic responses to omission of a tone in a musical scale. J Acoust Soc Am 2012; 131:4770-84. [PMID: 22712949 DOI: 10.1121/1.4714916]
Abstract
The musical scale is a basis for melodies and can be a simple melody by itself. The present study investigated magnetoencephalographic (MEG) responses to omissions of one tone out of the C major scale. The tone preceding the omitted "target" tone was either prolonged or repeated. In another series, the tone after the target tone was repeated. In "normal" oddball experiments, the complete C major scale was presented more frequently than an incomplete scale lacking one tone, and in "reverse" oddball experiments, the roles were exchanged. In the normal oddball experiments, omission of any tone produced a response significantly different in amplitude from the standard response in the group of non-musicians, although the responses differed depending on the types of omission. The leading tone (B in the C major scale) was shown to elicit a large response when omitted and also when its presence was emphasized. The reverse oddball experiments showed that repeated presentation of an incomplete scale lacking one tone temporarily reduced the influence of the complete scale but could not, even temporarily, replace it as the "standard." In addition, an auxiliary study examined the possible influence of rhythmic variations.
Affiliation(s)
- Iku Nemoto
- Department of Information Environment, Tokyo Denki University, 2-1200 Muzai-gakuendai, Inzai Chiba, 270-1382, Japan.
62
Coutanche MN, Thompson-Schill SL. The advantage of brief fMRI acquisition runs for multi-voxel pattern detection across runs. Neuroimage 2012; 61:1113-9. [PMID: 22498658 DOI: 10.1016/j.neuroimage.2012.03.076]
Abstract
Functional magnetic resonance imaging (fMRI) studies are broken up into runs (or 'sessions'), frequently selected to be long to minimize across-run signal variations. For investigations that use multi-voxel pattern analysis (MVPA), however, employing many short runs might improve a classifier's ability to generalize across irrelevant pattern variations and detect condition-related activity patterns. We directly tested this hypothesis by scanning participants with both long and short runs and comparing MVPA performance using data from each set of runs. Every run included presentations of faces, places, man-made objects and fruit in a blocked 1-back design. MVPA performance significantly improved from using a large number of short runs, compared to several long runs, in across-run classifications with identical amounts of data. Superior classification was found across variations in the classifier employed, feature selection procedure and region of interest. Performance improvements also extended to an information brain mapping 'searchlight' procedure. These results suggest that investigators looking to maximize the detection of subtle multi-voxel patterns across runs might consider employing short fMRI runs.
Affiliation(s)
- Marc N Coutanche
- Department of Psychology, University of Pennsylvania, 3720 Walnut Street, Philadelphia, PA 19104, USA.
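As a reading aid for this entry: the "across-run classification" being compared is leave-one-run-out cross-validation, where the classifier must generalize over run-specific signal variation. A minimal sketch of that scheme on synthetic data follows; the nearest-centroid decoder, run counts, and offset magnitudes are illustrative assumptions, and the sketch does not attempt to reproduce the paper's short-versus-long-run comparison itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Trials grouped into runs; each run gets its own additive offset, the kind of
# session-specific variation a classifier must generalise across.
n_runs, trials_per_run, n_vox = 8, 10, 30
run_of = np.repeat(np.arange(n_runs), trials_per_run)
labels = np.tile([0, 1], n_runs * trials_per_run // 2)
pattern = rng.normal(size=n_vox)                        # condition pattern
offsets = rng.normal(scale=0.8, size=(n_runs, n_vox))   # run-specific nuisance
X = (labels[:, None] * pattern
     + offsets[run_of]
     + rng.normal(size=(len(labels), n_vox)))

def leave_one_run_out_acc(X, y, run_of):
    """Across-run decoding: train on all runs but one, test on the held-out run."""
    accs = []
    for r in np.unique(run_of):
        tr, te = run_of != r, run_of == r
        c0 = X[tr & (y == 0)].mean(axis=0)
        c1 = X[tr & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(X[te] - c1, axis=1)
                < np.linalg.norm(X[te] - c0, axis=1)).astype(int)
        accs.append((pred == y[te]).mean())
    return float(np.mean(accs))

acc = leave_one_run_out_acc(X, labels, run_of)
```

The paper's argument, in these terms, is that with total trial count held constant, splitting the experiment into more (shorter) runs exposes the classifier to more samples of the run-level nuisance distribution, improving generalization to held-out runs.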
63
Continuation tapping to triggered melodies: motor resonance effects of melodic motion. Exp Brain Res 2011; 216:51-60. [PMID: 22038717 DOI: 10.1007/s00221-011-2907-5]
Abstract
Common Coding theory predicts that perceived action should resonate in produced action to which it bears some resemblance. Here we show that the qualities of motion commonly attributed to melodies are instantiated in motor plans that control timed movements. Participants attempted to tap a steady beat. Each tap triggered a sounded tone, and successive tones were systematically varied in pitch to form short melodies. Tapping behavior was monitored with motion capture. Although instructed to ignore them, triggered tones systematically affected timing and finger movement. When slower melodic motion was implied by a contour change or a smaller pitch displacement, the inter-tap interval (ITI) was longer. When faster melodic motion was implied by a preserved pitch contour or a larger pitch displacement, ITI was shorter. Kinematic recordings suggested that ITI error arose from an initial failure to disambiguate perception (i.e., velocity implied by melodic motion) from action (i.e., finger velocity [FV]). Early in the tap trajectory, slower FV was associated with longer ITI and faster FV was associated with shorter ITI. These associations were reversed near mid-trajectory, suggesting a transition from execution of motor planning to online control (Glover et al. in Exp Brain Res 154:103-108, 2004).