1
Abstract
Across the millennia, and across a range of disciplines, there has been a widespread desire to connect, or translate between, the senses in a manner that is meaningful, rather than arbitrary. Early examples were often inspired by the vivid, yet mostly idiosyncratic, crossmodal matches expressed by synaesthetes, often exploited for aesthetic purposes by writers, artists, and composers. A separate approach comes from those academic commentators who have attempted to translate between structurally similar dimensions of perceptual experience (such as pitch and colour). However, neither approach has succeeded in delivering consensually agreed crossmodal matches. As such, an alternative approach to sensory translation is needed. In this narrative historical review, focusing on the translation between audition and vision, we attempt to shed light on the topic by addressing the following three questions: (1) How is the topic of sensory translation related to synaesthesia, multisensory integration, and crossmodal associations? (2) Are there common processing mechanisms across the senses that can help to guarantee the success of sensory translation, or, rather, is mapping among the senses mediated by allegedly universal (e.g., amodal) stimulus dimensions? (3) Is the term 'translation' in the context of cross-sensory mappings used metaphorically or literally? Given the general mechanisms and concepts discussed throughout the review, the answers we come to regarding the nature of audio-visual translation are likely to apply to the translation between other perhaps less-frequently studied modality pairings as well.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK.
- Department of Experimental Psychology, New Radcliffe House, University of Oxford, Oxford, OX2 6BW, UK.
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
2
Spence C, Di Stefano N. What, if anything, can be considered an amodal sensory dimension? Psychon Bull Rev 2024. [PMID: 38381301] [DOI: 10.3758/s13423-023-02447-3]
Abstract
The term 'amodal' is a key topic in several different research fields across experimental psychology and cognitive neuroscience, including in the areas of developmental and perception science. However, despite being regularly used in the literature, the term means something different to researchers working in different contexts. Many developmental scientists conceive of the term as referring to those perceptual qualities, such as, for example, the size and shape of an object, that can be picked up by multiple senses (e.g., vision and touch potentially providing information relevant to the same physical stimulus/property). However, the amodal label is also widely used in the case of those qualities that are not directly sensory, such as, for example, numerosity, rhythm, synchrony, etc. Cognitive neuroscientists, by contrast, tend to use the term amodal to refer to those central cognitive processes and brain areas that do not appear to be preferentially responsive to a particular sensory modality, or to those symbolic or formal representations that essentially lack any modality and that are assumed to play a role in the higher processing of sensory information. Finally, perception scientists sometimes refer to the phenomenon of 'amodal completion', the spontaneous completion of perceptual information that is missing when occluded objects are presented to observers. In this paper, we review the various ways in which the term 'amodal' has been used in the literature and the evidence supporting each use of the term. Moreover, we highlight some of the properties that have been suggested to be 'amodal' over the years. We then address some of the questions that arise from the reviewed evidence, such as: Do different uses of the term refer to different domains, for example, sensory information, perceptual processes, or perceptual representations? Are there any commonalities among the different uses of the term? To what extent is research on cross-modal associations (or correspondences) related to, or able to shed light on, amodality? And how is the notion of amodal related to multisensory integration? Based on the reviewed evidence, it is argued that there is, as yet, no convincing empirical evidence to support the claim that amodal sensory qualities exist. We thus suggest that the term amodal would be more meaningfully applied to abstract cognition than to sensory perception, the latter being more adequately explained/understood in terms of highly redundant cross-modal correspondences.
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, New Radcliffe House, University of Oxford, Oxford, OX2 6BW, UK.
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK.
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
3
Sailer U, Zucknick M, Laeng B. Caressed by music: Related preferences for velocity of touch and tempo of music? Front Psychol 2023; 14:1135988. [PMID: 36935986] [PMCID: PMC10017781] [DOI: 10.3389/fpsyg.2023.1135988]
Abstract
Given that both hearing and touch are 'mechanical senses' that respond to physical pressure or mechanical energy and that individuals appear to have a characteristic internal or spontaneous tempo, individual preferences in musical and touch rhythms might be related. We explored this in two experiments probing individual preferences for tempo in the tactile and auditory modalities. Study 1 collected ratings of received stroking on the forearm and measured the velocity the participants used for stroking a fur. Music tempo preferences were assessed as mean beats per minute of individually selected music pieces and via the adjustment of experimenter-selected music to a preferred tempo. Heart rate was recorded to measure levels of physiological arousal. We found that the preferred tempo of favorite (self-selected) music correlated positively with the velocity with which each individual liked to be touched. In Study 2, participants rated videos of repeated touch on someone else's arm and videos of a drummer playing with brushes on a snare drum, both at a variety of tempos. We found that participants with similar rating patterns for the different stroking speeds did not show similar rating patterns for the different music beats. The results suggest that there may be a correspondence between preferences for favorite music and felt touch, but this is either weak or it cannot be evoked effectively with vicarious touch and/or mere drum beats. Thus, if preferences for touch and music are related, this is likely to be dependent on the specific type of stimulation.
Affiliation(s)
- Uta Sailer
- Department of Behavioural Medicine, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Manuela Zucknick
- Department of Biostatistics, Faculty of Medicine, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
4
Aker SC, Innes-Brown H, Faulkner KF, Vatti M, Marozeau J. Effect of audio-tactile congruence on vibrotactile music enhancement. J Acoust Soc Am 2022; 152:3396. [PMID: 36586853] [DOI: 10.1121/10.0016444]
Abstract
Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal; understanding which of these audio-tactile congruences matter is therefore crucial. Participants were presented with a simple sine-wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against the incongruent versions based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the largest negative effect on ratings for the melody used. Future vibrotactile music enhancement devices can thus use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
Affiliation(s)
- Scott C Aker
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
- Jeremy Marozeau
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
5
Emotion Elicitation through Vibrotactile Stimulation as an Alternative for Deaf and Hard of Hearing People: An EEG Study. Electronics 2022. [DOI: 10.3390/electronics11142196]
Abstract
Despite technological and accessibility advances, the performing arts and their cultural offerings remain inaccessible to many people. By using vibrotactile stimulation as an alternative channel, we explored a different way to enhance the emotional processes produced while watching audiovisual media and, thus, to elicit a greater emotional reaction in hearing-impaired people. We recorded the brain activity of 35 participants with normal hearing and 8 participants with severe or total hearing loss. The results showed activation of the same areas both in participants with normal hearing while watching a video and in hearing-impaired participants while watching the same video with synchronized soft vibrotactile stimulation in both hands, delivered through a proprietary stimulation glove. These brain areas (bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum) have been reported as emotional and attentional areas. We conclude that vibrotactile stimulation can elicit the appropriate cortical activation while watching audiovisual media.
6
Spence C, Di Stefano N. Crossmodal Harmony: Looking for the Meaning of Harmony Beyond Hearing. Iperception 2022; 13:20416695211073817. [PMID: 35186248] [PMCID: PMC8850342] [DOI: 10.1177/20416695211073817]
Abstract
The notion of harmony was first developed in the context of metaphysics before being applied to the domain of music. However, in recent centuries, the term has often been used to describe especially pleasing combinations of colors by those working in the visual arts too. Similarly, the harmonization of flavors is nowadays often invoked as one of the guiding principles underpinning the deliberate pairing of food and drink. However, beyond the various uses of the term to describe and construct pleasurable unisensory perceptual experiences, it has also been suggested that music and painting may be combined harmoniously (e.g., see the literature on "color music"). Furthermore, those working in the area of "sonic seasoning" sometimes describe certain sonic compositions as harmonizing crossmodally with specific flavor sensations. In this review, we take a critical look at the putative meaning(s) of the term "harmony" when used in a crossmodal, or multisensory, context. Furthermore, we address the question of whether the term's use outside of a strictly unimodal auditory context should be considered literally or merely metaphorically (i.e., as a shorthand to describe those combinations of sensory stimuli that, for whatever reason, appear to go well together, and hence which can be processed especially fluently).
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK
- Nicola Di Stefano
- Institute for Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
7
Ooishi Y, Kobayashi M, Kashino M, Ueno K. Presence of Three-Dimensional Sound Field Facilitates Listeners' Mood, Felt Emotion, and Respiration Rate When Listening to Music. Front Psychol 2021; 12:650777. [PMID: 34867569] [PMCID: PMC8637927] [DOI: 10.3389/fpsyg.2021.650777]
Abstract
Many studies have investigated the effects of music listening from the viewpoint of music features such as tempo or key by measuring psychological or psychophysiological responses. In addition, technologies for three-dimensional sound field (3D-SF) reproduction and binaural recording have been developed to induce a realistic sensation of sound. However, it is still unclear whether music listened to in the presence of 3D-SF is more impressive than in the absence of it. We hypothesized that the presence of a 3D-SF when listening to music facilitates listeners' moods, emotions for music, and physiological activities such as respiration rate. Here, we examined this hypothesis by evaluating differences between a reproduction condition with headphones (HD condition) and one with a 3D-SF reproduction system (3D-SF condition). We used a 3D-SF reproduction system based on the boundary surface control principle (BoSC system) to reproduce a sound field of music in the 3D-SF condition. Music in the 3D-SF condition was binaurally recorded through a dummy head in the BoSC reproduction room and reproduced with headphones in the HD condition. Therefore, music in the HD condition was auditorily as rich in information as that in the 3D-SF condition, but the 3D-sound field surrounding listeners was absent. We measured the respiration rate and heart rate of participants listening to acousmonium and pipe organ music. The participants rated their felt moods before and after they listened to music, and after they listened, they also rated their felt emotion. We found that the increase in respiration rate, the degree of decrease in well-being, and unpleasantness for both pieces in the 3D-SF condition were greater than in the HD condition. These results suggest that the presence of 3D-SF enhances changes in mood, felt emotion for music, and respiration rate when listening to music.
Affiliation(s)
- Yuuki Ooishi
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
- Maori Kobayashi
- Faculty of Human Sciences, School of Human Sciences, Waseda University, Tokorozawa, Japan
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan
- Makio Kashino
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
- Kanako Ueno
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan
- Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST, JST), Tokyo, Japan
8
Lauzon AP, Russo FA, Harris LR. The influence of rhythm on detection of auditory and vibrotactile asynchrony. Exp Brain Res 2020; 238:825-832. [PMID: 32130431] [PMCID: PMC7181424] [DOI: 10.1007/s00221-019-05720-x]
Abstract
The perception of an event is strongly influenced by the context in which it occurs. Here, we examined the effect of a rhythmic context on detection of asynchrony in both the auditory and vibrotactile modalities. Using the method of constant stimuli and a two-alternative forced choice (2AFC), participants were presented with pairs of pure tones played either simultaneously or with various levels of stimulus onset asynchrony (SOA). Target stimuli in both modalities were nested within either (i) a regularly occurring, predictable rhythm, (ii) an irregular, unpredictable rhythm, or (iii) no rhythm at all. Vibrotactile asynchrony detection generally had higher thresholds and showed greater variability than auditory asynchrony detection. Asynchrony detection thresholds for auditory targets, but not vibrotactile targets, were significantly reduced when the target stimulus was embedded in a regular rhythm as compared to no rhythm. Embedding within an irregular rhythm produced no such improvement. The observed modality asymmetries are interpreted with regard to the superior temporal resolution of the auditory system and specialized brain circuitry supporting auditory-motor coupling.
Affiliation(s)
- Andrew P Lauzon
- Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada.
- Centre for Vision Research, York University, Toronto, ON, Canada.
- Frank A Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Laurence R Harris
- Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
9
Stanton TR, Spence C. The Influence of Auditory Cues on Bodily and Movement Perception. Front Psychol 2020; 10:3001. [PMID: 32010030] [PMCID: PMC6978806] [DOI: 10.3389/fpsyg.2019.03001]
Abstract
The sounds that result from our movement and that mark the outcome of our actions typically convey useful information concerning the state of our body and its movement, as well as providing pertinent information about the stimuli with which we are interacting. Here we review the rapidly growing literature investigating the influence of non-veridical auditory cues (i.e., inaccurate in terms of their context, timing, and/or spectral distribution) on multisensory body and action perception, and on motor behavior. Inaccurate auditory cues provide a unique opportunity to study cross-modal processes: the impact of each sense is easier to detect when the senses deliver slightly different messages. Additionally, given that similar cross-modal processes likely occur regardless of the accuracy or inaccuracy of sensory input, studying incongruent interactions is also likely to help us predict interactions between congruent inputs. The available research convincingly demonstrates that perceptions of the body, of movement, and of surface contact features (e.g., roughness) are influenced by the addition of non-veridical auditory cues. Moreover, auditory cues impact both motor behavior and emotional valence; the latter shows that sounds that are highly incongruent with the performed movement induce feelings of unpleasantness (perhaps associated with lower processing fluency). Such findings are relevant to the design of auditory cues associated with product interaction, and to the use of auditory cues in sport performance and therapeutic situations, given their impact on motor behavior.
Affiliation(s)
- Tasha R. Stanton
- Pain and Perception Lab, IIMPACT in Health, The University of South Australia, Adelaide, SA, Australia
- Neuroscience Research Australia, Randwick, NSW, Australia
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
10
Reybrouck M, Podlipniak P, Welch D. Music and Noise: Same or Different? What Our Body Tells Us. Front Psychol 2019; 10:1153. [PMID: 31293465] [PMCID: PMC6603256] [DOI: 10.3389/fpsyg.2019.01153]
Abstract
In this article, we consider music and noise in terms of vibrational and transferable energy, as well as in light of the evolutionary significance of the hearing system of Homo sapiens. Music and sound impinge upon our body and our mind, and we can react to both either positively or negatively. Much depends, in this regard, on the frequency spectrum and the level of the sound stimuli, which may sometimes make it possible to set music apart from noise. There are, however, two levels of description: the physical-acoustic description of the sound and the subjective-psychological reactions of the listeners. Starting from a vibrational approach to sound and music, we first investigate how sound may activate the sense of touch and the vestibular system of the inner ear besides the sense of hearing. We then touch upon distinct issues such as the relation between low-frequency sounds and annoyance, the harmful effect of loud sound and noise, the direct effects of overstimulation with sound, the indirect effects of unwanted sounds as related to auditory neurology, and the widespread phenomenon of liking loud sound and music, from both behavioral and psychological points of view.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, Faculty of Arts, KU Leuven-University of Leuven, Leuven, Belgium
- IPEM, Department of Art History, Musicology and Theatre Studies, Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
- David Welch
- Audiology Section, School of Population Health, University of Auckland, Auckland, New Zealand
11
Araneda R, Renier L, Ebner-Karestinos D, Dricot L, De Volder AG. Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream. Eur J Neurosci 2016; 45:1439-1450. [PMID: 27471102] [DOI: 10.1111/ejn.13349]
Abstract
Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, the same across the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they attended to the same rhythmic sequences (with and without a beat) in the auditory, visual and vibrotactile modalities. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while brain activity in the putamen covaried with beat detection speed. These results highlight the implication of the auditory dorsal stream in beat detection, confirm the important role played by the putamen, and indicate that the neural network for beat detection is mostly supramodal. This constitutes a new example of the convergence of the same functional attributes into one centralized representation in the brain.
Affiliation(s)
- Rodrigo Araneda
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200, Brussels, Belgium
- Laurent Renier
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200, Brussels, Belgium
- Laurence Dricot
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200, Brussels, Belgium
- Anne G De Volder
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200, Brussels, Belgium
12
Wu C, Stefanescu RA, Martel DT, Shore SE. Listening to another sense: somatosensory integration in the auditory system. Cell Tissue Res 2015; 361:233-50. [PMID: 25526698] [PMCID: PMC4475675] [DOI: 10.1007/s00441-014-2074-7]
Abstract
Conventionally, sensory systems are viewed as separate entities, each with its own physiological process serving a different purpose. However, many functions require integrative inputs from multiple sensory systems and sensory intersection and convergence occur throughout the central nervous system. The neural processes for hearing perception undergo significant modulation by the two other major sensory systems, vision and somatosensation. This synthesis occurs at every level of the ascending auditory pathway: the cochlear nucleus, inferior colliculus, medial geniculate body and the auditory cortex. In this review, we explore the process of multisensory integration from (1) anatomical (inputs and connections), (2) physiological (cellular responses), (3) functional and (4) pathological aspects. We focus on the convergence between auditory and somatosensory inputs in each ascending auditory station. This review highlights the intricacy of sensory processing and offers a multisensory perspective regarding the understanding of sensory disorders.
Affiliation(s)
- Calvin Wu
- Department of Otolaryngology, Kresge Hearing Research Institute, University of Michigan, Ann Arbor, MI, 48109, USA
13
Bravi R, Quarta E, Del Tongo C, Carbonaro N, Tognetti A, Minciacchi D. Music, clicks, and their imaginations favor differently the event-based timing component for rhythmic movements. Exp Brain Res 2015; 233:1945-61. [PMID: 25837726] [DOI: 10.1007/s00221-015-4267-z]
Affiliation(s)
- Riccardo Bravi
- Department of Experimental and Clinical Medicine, Physiological Sciences Section, University of Florence, Viale Morgagni 63, 50134, Florence, Italy
14
Lerens E, Araneda R, Renier L, De Volder AG. Improved beat asynchrony detection in early blind individuals. Perception 2014; 43:1083-96. [PMID: 25509685] [DOI: 10.1068/p7789]
Abstract
Although early blind (EB) individuals are thought to have a better musical sense than sighted individuals, no study has investigated musical rhythm and beat processing abilities in EB individuals. Using an adaptive 'up and down' procedure, we measured the beat asynchrony detection threshold and the duration discrimination threshold in the auditory and vibrotactile modalities in both EB and sighted control (SC) subjects matched for age, gender, and musical experience. We observed that EB subjects were better than SC at the beat asynchrony detection task; that is, they showed lower thresholds than SC in both the auditory and the vibrotactile modalities. In addition, EB subjects had a lower threshold than SC for duration discrimination in the vibrotactile modality only. These improved beat asynchrony detection abilities may contribute to the excellent musical abilities often observed in blind individuals.
15
Huang J, Gamble D, Sarnlertsophon K, Wang X, Hsiao S. Integration of auditory and tactile inputs in musical meter perception. Adv Exp Med Biol 2013; 787:453-61. [PMID: 23716252] [PMCID: PMC4324720] [DOI: 10.1007/978-1-4614-1590-9_50]
Abstract
Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.
Affiliation(s)
- Juan Huang
- The Solomon H. Snyder Department of Neuroscience, The Johns Hopkins University, Baltimore, MD 21205, USA.