1. Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024;27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a distinct function in spatial perception, and together they form a joint multisensory spatial representation. For instance, hearing enables localization throughout the entire 3D external space, whereas touch traditionally allows localization only of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' ability to interpret externalized 3D information through touch, a skill acquired neither during individual development nor in evolution. Four experiments demonstrate rapid learning and high accuracy in localizing motion from vibrotactile input to the fingertips, as well as successful audio-tactile integration in background noise. Subjective reports from some participants suggest spatial experiences involving visualization and the perception of tactile "moving" sources beyond reach. We discuss these findings with respect to developing new skills in the adult brain, including combining a newly acquired "sense" with an existing one, and computation-based brain organization.
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. Papatzikis E, Agapaki M, Selvan RN, Pandey V, Zeba F. Quality standards and recommendations for research in music and neuroplasticity. Ann N Y Acad Sci 2023;1520:20-33. PMID: 36478395; DOI: 10.1111/nyas.14944.
Abstract
Research on how music influences brain plasticity has gained momentum in recent years. Given the nonuniform methodological standards applied, however, findings are often nonreplicable and less generalizable. To address the need for a standardized baseline of research quality, we gathered all the studies in the music and neuroplasticity field in 2019 and appraised their methodological rigor systematically and critically. The aim was to establish a preliminary, minimally acceptable quality threshold, and with it a set of recommendations, as a basis for further discussion and development. Quality appraisal was performed on 89 articles by three independent raters following a standardized scoring system. The raters' scores were cross-referenced using an inter-rater reliability measure, and further examined through multiple rating comparisons and matrix analyses. Methodological quality was at a good level (quantitative articles: mean = 0.737, SD = 0.084; qualitative articles: mean = 0.677, SD = 0.144), with a moderate but statistically significant level of agreement between the raters (W = 0.44, χ² = 117.249, p = 0.020). We conclude that the standards for implementation and reporting are of high quality; however, certain improvements are needed to reach the stringent levels expected of such an influential interdisciplinary scientific field.
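The W reported here is Kendall's coefficient of concordance, the standard agreement statistic for multiple raters ranking the same items. A minimal sketch of how it is computed from a raters-by-items rank matrix (assuming no ties; the tie correction and the χ² significance test are omitted; function and variable names are illustrative, not from the study):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for an (m raters x n items)
    matrix of ranks, assuming no ties (tie correction omitted)."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                    # R_j, one sum per item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # spread of the rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Perfect agreement among 3 raters over 4 items gives W = 1;
# maximally dispersed rankings give W = 0.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```

W ranges from 0 (no agreement) to 1 (complete agreement), so the study's W = 0.44 sits in the moderate range the authors describe.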
Affiliation(s)
- Efthymios Papatzikis
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Maria Agapaki
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Rosari Naveena Selvan
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany
- Department of Psychology, University of Münster, Münster, Germany
- Fathima Zeba
- School of Humanities and Social Sciences, Manipal Academy of Higher Education Dubai, Dubai, United Arab Emirates
3. Aker SC, Innes-Brown H, Faulkner KF, Vatti M, Marozeau J. Effect of audio-tactile congruence on vibrotactile music enhancement. J Acoust Soc Am 2022;152:3396. PMID: 36586853; DOI: 10.1121/10.0016444.
Abstract
Music listening experiences can be enhanced with tactile vibrations. It is not known, however, which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations typically encode an acoustic signal into a congruent vibrotactile signal, so understanding which of these audio-tactile congruences matter is crucial. Participants were presented with a simple sine-wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against the incongruent versions based on preference. Making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the largest negative effect on ratings for the melody used. Future vibrotactile music-enhancement devices can therefore use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
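The "time alignment plus intensity congruence" baseline can be sketched as an audio-to-tactile coding step: extract the audio amplitude envelope and use it to modulate a fixed vibrotactile carrier. This is a minimal illustration, not the authors' implementation; the 250 Hz carrier, 20 ms smoothing window, and function names are assumptions:

```python
import numpy as np

def amplitude_envelope(audio, sr, win_ms=20.0):
    # Rectify-and-smooth envelope via a moving-average window
    # (window length is an illustrative choice).
    win = max(1, int(sr * win_ms / 1000.0))
    return np.convolve(np.abs(audio), np.ones(win) / win, mode="same")

def audio_to_vibration(audio, sr, carrier_hz=250.0):
    # Drive a fixed-frequency vibrotactile carrier with the audio envelope,
    # keeping the two modalities time-aligned and intensity-congruent.
    t = np.arange(len(audio)) / sr
    return amplitude_envelope(audio, sr) * np.sin(2.0 * np.pi * carrier_hz * t)
```

Because the envelope is computed sample-by-sample from the same signal, louder passages yield stronger vibration at the same moment, which is exactly the pair of congruences the study found most important.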
Affiliation(s)
- Scott C Aker
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
- Jeremy Marozeau
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
4. Hirano M, Furuya S. Multisensory interactions on auditory and somatosensory information in expert pianists. Sci Rep 2022;12:12503. PMID: 35869149; PMCID: PMC9307509; DOI: 10.1038/s41598-022-16618-0.
Abstract
Fine-tuned sensory functions typically characterize skilled individuals. Although numerous studies have demonstrated enhanced unimodal sensory functions at both neural and behavioral levels in skilled individuals, little is known about their multisensory interaction functions, especially multisensory integration and selective attention, which involve volitional control of information derived from multiple sensory organs. In the current study, expert pianists and musically untrained individuals performed five sets of intensity discrimination tasks in the auditory and somatosensory modalities under different conditions: (1) an auditory stimulus, (2) a somatosensory stimulus, (3) congruent auditory and somatosensory stimuli (i.e., multisensory integration), (4) auditory and task-irrelevant somatosensory stimuli, and (5) somatosensory and task-irrelevant auditory stimuli. In the fourth and fifth conditions, participants were instructed to ignore the task-irrelevant stimulus and pay attention to the task-relevant one (i.e., selective attention). Discrimination was superior in condition (3) to the better of the two unimodal conditions only in the pianists, whereas the task-irrelevant somatosensory stimulus worsened auditory discrimination more in the pianists than in the nonmusicians. These findings indicate unique multisensory interactions in expert pianists that enable efficient integration of auditory and somatosensory information but hamper top-down selective inhibition of somatosensory information during auditory processing.
5. Ooishi Y, Kobayashi M, Kashino M, Ueno K. Presence of Three-Dimensional Sound Field Facilitates Listeners' Mood, Felt Emotion, and Respiration Rate When Listening to Music. Front Psychol 2021;12:650777. PMID: 34867569; PMCID: PMC8637927; DOI: 10.3389/fpsyg.2021.650777.
Abstract
Many studies have investigated the effects of music listening from the viewpoint of music features such as tempo or key by measuring psychological or psychophysiological responses. In addition, technologies for three-dimensional sound field (3D-SF) reproduction and binaural recording have been developed to induce a realistic sensation of sound. However, it is still unclear whether music listened to in the presence of a 3D-SF is more impressive than in its absence. We hypothesized that the presence of a 3D-SF when listening to music facilitates listeners' moods, emotions for the music, and physiological activities such as respiration rate. Here, we examined this hypothesis by comparing a reproduction condition with headphones (HD condition) and one with a 3D-SF reproduction system (3D-SF condition). We used a 3D-SF reproduction system based on the boundary surface control principle (BoSC system) to reproduce the sound field of music in the 3D-SF condition. Music in the 3D-SF condition was binaurally recorded through a dummy head in the BoSC reproduction room and reproduced with headphones in the HD condition. Music in the HD condition was therefore auditorily as rich in information as that in the 3D-SF condition, but the 3D sound field surrounding listeners was absent. We measured the respiration rate and heart rate of participants listening to acousmonium and pipe organ music. Participants rated their felt moods before and after listening, and after listening they also rated their felt emotion. We found that the increase in respiration rate, the degree of decrease in well-being, and the unpleasantness for both pieces were greater in the 3D-SF condition than in the HD condition. These results suggest that the presence of a 3D-SF enhances changes in mood, felt emotion for music, and respiration rate when listening to music.
Affiliation(s)
- Yuuki Ooishi
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
- Maori Kobayashi
- Faculty of Human Sciences, School of Human Sciences, Waseda University, Tokorozawa, Japan
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan
- Makio Kashino
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
- Kanako Ueno
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan
- Core Research for Evolutional Science and Technology, Japan Science and Technology Agency (CREST, JST), Tokyo, Japan
6. Hollins M, Athans L. Perceptual amplification following sustained attention: implications for hypervigilance. Exp Brain Res 2020;239:279-288. PMID: 33151350; DOI: 10.1007/s00221-020-05910-y.
Abstract
It is known that attending to a cutaneous stimulus briefly increases its subjective intensity. The purpose of the present study was to determine whether an extended period of attention would produce a longer-lasting perceptual amplification. Eighty subjects were assigned alternately to experimental and control groups. Members of the two groups received identical series of tactile stimuli (near-threshold von Frey filaments applied to the forearm), but those in the experimental group carried out a two-interval forced-choice detection task that required attention to the filaments, while subjects in the control group attended instead to a video game. After this initial phase, all subjects gave magnitude estimates of the intensity of a wide range of von Frey filaments. The experimental group gave estimates 42% greater than those of the control group, both for filaments used in the initial phase and for others not previously presented; the perceptual amplification did not, however, transfer to a different type of pressure stimulus, a 5-mm-diameter rod applied to the skin. The aftereffect of sustained attention lasted for at least 15 min. This phenomenon, demonstrated in normal subjects, may have implications for the hypervigilance of some chronic pain patients, which is characterized by both heightened attention to pain and long-lasting perceptual amplification of noxious stimuli.
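Performance on a two-interval forced-choice (2IFC) detection task like the one used here is conventionally summarized as sensitivity d′ via the standard signal-detection conversion d′ = √2 · z(Pc), where Pc is the proportion of correct responses and z is the inverse normal CDF. This is textbook psychophysics rather than a computation reported in this abstract; a minimal sketch:

```python
from math import sqrt
from statistics import NormalDist

def dprime_2ifc(p_correct):
    """Convert proportion correct in a 2IFC task to sensitivity d',
    assuming an unbiased observer: d' = sqrt(2) * z(Pc)."""
    return sqrt(2.0) * NormalDist().inv_cdf(p_correct)

# Chance performance (Pc = 0.5) corresponds to d' = 0;
# Pc = 0.76 corresponds to d' of roughly 1.
print(round(dprime_2ifc(0.76), 2))
```

The √2 factor arises because the observer compares two noisy observations, one per interval, so the decision variable has twice the variance of a single observation.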
Affiliation(s)
- Mark Hollins
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Luke Athans
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA

7. Sharp A, Houde MS, Bacon BA, Champoux F. Musicians Show Better Auditory and Tactile Identification of Emotions in Music. Front Psychol 2019;10:1976. PMID: 31555172; PMCID: PMC6722200; DOI: 10.3389/fpsyg.2019.01976.
Abstract
Musicians are better at processing sensory information and at integrating multisensory information in detection and discrimination tasks, but whether these enhanced abilities extend to more complex processes is still unknown. Emotional appeal is a crucial part of musical experience, but whether musicians can better identify emotions in music across different sensory modalities has yet to be determined. The goal of the present study was to investigate the auditory, tactile, and audio-tactile identification of emotions in musicians. Melodies expressing happiness, sadness, fear/threat, and peacefulness were played, and participants rated each excerpt on a 10-point scale for each of the four emotions. Stimuli were presented through headphones and/or a glove with haptic audio exciters. The data suggest that musicians and controls are comparable in the identification of the most basic emotions (happiness and sadness). In the most difficult unisensory identification conditions (fear/threat and peacefulness), however, significant differences emerged between groups, suggesting that musical training enhances the identification of emotions in both the auditory and tactile domains. These results support the hypothesis that musical training has an impact at all hierarchical levels of sensory and cognitive processing.
Affiliation(s)
- Andréanne Sharp
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- Marie-Soleil Houde
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- François Champoux
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada