1. Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a different specific function in spatial perception, and together they form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally allows localization only of objects on the body (i.e., within the peripersonal space). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill not acquired during individual development or in evolution. Four experiments demonstrate quick learning and high accuracy in localizing motion from vibrotactile inputs on the fingertips, as well as successful audio-tactile integration in background noise. Subjective responses from some participants imply spatial experiences through visualization and the perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in the adult brain, including combining a newly acquired "sense" with an existing one, and computation-based brain organization.
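The abstract does not specify how the TMA renders a moving source across the fingertips. A generic technique for evoking apparent tactile motion is amplitude panning between adjacent actuators; the sketch below illustrates that idea only, with the actuator count and the linear panning law as assumptions, not the authors' algorithm.

```python
import numpy as np

def actuator_gains(position, n_actuators=4):
    """Map a source position on a normalized 0..1 axis to vibration
    amplitudes for a row of fingertip actuators, linearly panning
    between the two nearest actuators so a moving position produces
    smooth apparent motion. Illustrative sketch only -- NOT the
    authors' touch-motion algorithm (TMA), which the abstract does
    not describe at this level of detail."""
    centers = np.linspace(0.0, 1.0, n_actuators)   # actuator positions on the axis
    # Triangular gain around each actuator; neighbors share the load.
    gains = np.clip(1.0 - np.abs(centers - position) * (n_actuators - 1), 0.0, 1.0)
    return gains

# A source one third of the way along the axis sits exactly on the
# second fingertip; halfway along, the two middle fingertips share it.
g_on_finger = actuator_gains(1.0 / 3.0)
g_between = actuator_gains(0.5)
```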
Affiliations
- Adi Snir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2. Emotion Elicitation through Vibrotactile Stimulation as an Alternative for Deaf and Hard of Hearing People: An EEG Study. Electronics 2022. DOI: 10.3390/electronics11142196.
Abstract
Despite technological and accessibility advances, the performing arts and their cultural offerings remain inaccessible to many people. Using vibrotactile stimulation as an alternative channel, we explored a different way to enhance the emotional processes produced while watching audiovisual media and, thus, to elicit a greater emotional reaction in hearing-impaired people. We recorded the brain activity of 35 participants with normal hearing and 8 participants with severe or total hearing loss. The results showed activation of the same areas in participants with normal hearing while watching a video and in hearing-impaired participants while watching the same video with synchronized soft vibrotactile stimulation delivered to both hands through a proprietary stimulation glove. These brain areas (bilateral middle frontal orbitofrontal, bilateral superior frontal gyrus, and left cingulum) have been reported as emotional and attentional areas. We conclude that vibrotactile stimulation can elicit the appropriate cortical activation while watching audiovisual media.
3. Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. PMID: 35217676; PMCID: PMC8881456; DOI: 10.1038/s41598-022-06855-8.
Abstract
Understanding speech in background noise is challenging. Wearing face masks, imposed by the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup, including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we observed a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions, i.e., when the participants were asked to repeat sentences from hearing alone and when matching vibrations on the fingertips were present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of the perceived loudness. The number of sentence repetitions needed to complete both types of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training, most of the participants (70-80%) showed better performance (by a mean of 4-6 dB) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also the best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, i.e., when participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also the poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, which might enable a more proper use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, i.e., long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as in healthy individuals in suboptimal acoustic situations.
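The core transform this SSD performs, extracting low frequencies from speech and presenting them as fingertip vibration, can be sketched as follows. The 500 Hz cutoff, the brick-wall FFT filter, and the normalization are illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def speech_to_vibration(signal, fs, cutoff=500.0):
    """Keep only spectral content below `cutoff` Hz (an assumed value,
    roughly the band fingertip mechanoreceptors can follow) and
    normalize it for a vibrotactile actuator. Brick-wall FFT filtering
    is used here purely for brevity."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[freqs > cutoff] = 0.0            # zero everything above the cutoff
    vib = np.fft.irfft(spectrum, n=len(signal))
    peak = np.max(np.abs(vib))
    return vib / peak if peak > 0 else vib    # scale into actuator range [-1, 1]

# Example: a 220 Hz "voiced" component plus 3 kHz "fricative" energy;
# only the low-frequency component should survive in the vibration signal.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
vib = speech_to_vibration(speech, fs)
```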
Affiliations
- K Cieśla: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
4. Zhang R, Abbott JJ. Vibrotactile Display of Patterned Surface Textures With Kinesthetic Haptic Devices Using Balanced Impulses. IEEE Transactions on Haptics 2021; 14:776-791. PMID: 33844632; DOI: 10.1109/toh.2021.3072588.
Abstract
Kinesthetic haptic devices are designed primarily to display quasistatic and low-bandwidth forces and moments. Existing methods for vibrotactile display sometimes introduce haptic and/or audio artifacts. In this article, we propose a method to display vibrotactile stimulus signals of moderate to high frequency (20-500 Hz) using kinesthetic haptic devices with a standard 1 kHz haptic update rate. Our method combines symmetric square-wave signals whose periods are even multiples of the haptic update period with asymmetric square-wave signals whose periods are odd multiples of the haptic update period, while ensuring that the positive and negative impulses are balanced in both cases, and utilizing the just noticeable difference in frequency discrimination to avoid the need to display other frequencies. For frequencies at which the above method is insufficient, corresponding to a small band near 400 Hz for a 1 kHz update rate, we utilize a signal-mixing method. Our complete method is then extended to render haptic gratings by measuring scanning velocity, converting the local spatial frequency to its equivalent instantaneous temporal frequency, and displaying a single full-period vibration event. In a series of human-subject studies, we showed that our proposed method is preferred over existing methods for vibrotactile display of signals with relatively high-frequency content.
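The balanced-impulse idea above, a zero-mean square wave whose period is an integer number of haptic update ticks, can be sketched as follows. The amplitude-scaling scheme for odd periods is one possible construction chosen for illustration, not the paper's exact formulation.

```python
import numpy as np

def balanced_square_wave(period_samples, amplitude=1.0, n_periods=4):
    """One possible construction of a zero-mean ('balanced impulse')
    square wave whose period is `period_samples` haptic update ticks.
    Even periods give a symmetric wave; for odd periods the negative
    half is rescaled so positive and negative impulses still cancel.
    Illustrative sketch -- not the authors' exact amplitude scheme."""
    n_pos = period_samples // 2
    n_neg = period_samples - n_pos
    a_pos = amplitude
    a_neg = amplitude * n_pos / n_neg          # balance: a_pos*n_pos == a_neg*n_neg
    one_period = np.concatenate([np.full(n_pos, a_pos), np.full(n_neg, -a_neg)])
    return np.tile(one_period, n_periods)

# At a 1 kHz update rate: 250 Hz -> period of 4 ticks (even, symmetric);
# 200 Hz -> period of 5 ticks (odd, asymmetric but still zero-mean).
wave_even = balanced_square_wave(4)
wave_odd = balanced_square_wave(5)
```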
5. Mirzaei M, Kan P, Kaufmann H. EarVR: Using Ear Haptics in Virtual Reality for Deaf and Hard-of-Hearing People. IEEE Transactions on Visualization and Computer Graphics 2020; 26:2084-2093. PMID: 32070977; DOI: 10.1109/tvcg.2020.2973441.
Abstract
Virtual Reality (VR) has great potential to improve the skills of Deaf and Hard-of-Hearing (DHH) people. Most VR applications and devices are designed for persons without hearing problems, so DHH persons face many limitations when using VR. Adding special features to a VR environment, such as subtitles or haptic devices, can help them; previously, this required designing a special VR environment for DHH persons. We introduce and evaluate a new prototype called "EarVR" that can be mounted on any desktop or mobile VR head-mounted display (HMD). EarVR analyzes 3D sounds in a VR environment, locates the direction of the sound source closest to the user, and notifies the user of that direction using two vibro-motors placed on the user's ears. EarVR helps DHH persons complete sound-based VR tasks in any VR application with 3D audio and a mute option for background music; DHH persons can therefore use all VR applications with 3D audio, not only those designed for them. Our user study shows that DHH participants completed a simple VR task significantly faster with EarVR than without it, with completion times very close to those of participants without hearing problems. It also shows that DHH participants were able to finish a complex VR task with EarVR, while without it they could not finish the task even once. Finally, our qualitative and quantitative evaluation among DHH participants indicates that they preferred using EarVR and that it encouraged them to use VR technology more.
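The nearest-source-to-ear-motors mapping EarVR performs can be sketched as follows. The linear panning law, the 2D geometry, and the angle convention are assumptions for illustration; the paper does not publish this exact formula.

```python
import math

def ear_motor_levels(listener_pos, listener_yaw, sources):
    """Select the nearest sound source and convert its direction into
    left/right ear vibro-motor intensities in [0, 1]. Convention
    (assumed): yaw 0 faces +x, positive bearing is counter-clockwise,
    i.e. toward the listener's left. Illustrative sketch, not the
    EarVR implementation."""
    # Closest source by Euclidean distance in the horizontal plane.
    sx, sy = min(sources, key=lambda s: math.dist(listener_pos, s))
    # Bearing of the source relative to where the listener faces.
    bearing = math.atan2(sy - listener_pos[1], sx - listener_pos[0]) - listener_yaw
    pan = math.sin(bearing)                  # +1 fully left, -1 fully right
    left, right = (1 + pan) / 2, (1 - pan) / 2
    return left, right

# A source directly to the listener's left drives only the left motor,
# even when a farther source exists elsewhere.
left, right = ear_motor_levels((0.0, 0.0), 0.0, [(0.0, 1.0), (5.0, 5.0)])
```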
6. Sharp A, Bacon BA, Champoux F. Enhanced tactile identification of musical emotion in the deaf. Exp Brain Res 2020; 238:1229-1236. DOI: 10.1007/s00221-020-05789-9.
7. Sharp A, Houde MS, Bacon BA, Champoux F. Musicians Show Better Auditory and Tactile Identification of Emotions in Music. Front Psychol 2019; 10:1976. PMID: 31555172; PMCID: PMC6722200; DOI: 10.3389/fpsyg.2019.01976.
Abstract
Musicians are better at processing sensory information and at integrating multisensory information in detection and discrimination tasks, but whether these enhanced abilities extend to more complex processes is still unknown. Emotional appeal is a crucial part of the musical experience, but whether musicians can better identify emotions in music across different sensory modalities has yet to be determined. The goal of the present study was to investigate the auditory, tactile, and audiotactile identification of emotions in musicians. Melodies expressing happiness, sadness, fear/threat, and peacefulness were played, and participants had to rate each excerpt on a 10-point scale for each of the four emotions. Stimuli were presented through headphones and/or a glove with haptic audio exciters. The data suggest that musicians and controls are comparable in the identification of the most basic emotions (happiness and sadness). However, in the most difficult unisensory identification conditions (fear/threat and peacefulness), significant differences emerged between groups, suggesting that musical training enhances the identification of emotions in both the auditory and tactile domains. These results support the hypothesis that musical training has an impact at all hierarchical levels of sensory and cognitive processing.
Affiliations
- Andréanne Sharp: École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- Marie-Soleil Houde: École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- François Champoux: École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
8. Improved tactile frequency discrimination in musicians. Exp Brain Res 2019; 237:1575-1580. PMID: 30927044; DOI: 10.1007/s00221-019-05532-z.
Abstract
Music practice is a multisensory training that is of great interest to neuroscientists because of its implications for neural plasticity. Music-related modulation of sensory systems has been observed in neuroimaging data and supported by results in behavioral tasks. Some studies have shown that musicians react faster than non-musicians to visual, tactile, and auditory stimuli, but behavioral enhancement in more complex tasks has received considerably less attention. This study investigates unisensory and multisensory discrimination capabilities in musicians; more specifically, it examines auditory, tactile, and auditory-tactile discrimination. The literature suggesting better auditory and auditory-tactile discrimination in musicians is scarce, and no study to date has examined purely tactile discrimination capabilities in musicians. A two-alternative forced-choice frequency discrimination task was used in this experiment. The task was inspired by musical production, and participants were asked to identify whether a frequency was the same as or different from a standard stimulus of 160 Hz in three conditions: auditory only, auditory-tactile, and tactile only. Three waveforms were used to replicate the variability of pitch that can be found in music. Stimuli were presented through headphones for auditory stimulation and through a glove with haptic audio exciters for tactile stimulation. Results suggest that musicians have lower discrimination thresholds than non-musicians in the auditory-only and auditory-tactile conditions for all waveforms. The results also revealed that musicians have lower discrimination thresholds than non-musicians in the tactile condition for sine and square waveforms. Taken together, these results support the hypothesis that musical training can lead to better unisensory tactile discrimination, which is in itself a new and major finding.
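Discrimination thresholds like those above are commonly estimated with an adaptive staircase wrapped around the two-alternative forced-choice task. The sketch below shows a generic 2-down/1-up staircase around the 160 Hz standard; the step rule, starting difference, and trial count are illustrative assumptions, not the cited study's procedure.

```python
import random

def two_down_one_up_staircase(respond, start_delta=40.0, step=2.0, n_trials=40):
    """Minimal 2-down/1-up adaptive staircase for frequency
    discrimination (converges near 70.7% correct). `respond(delta)`
    returns True when the listener correctly tells the standard from
    standard+delta Hz. Parameters are illustrative, not those of the
    cited study."""
    delta, streak, reversals, last_dir = start_delta, 0, [], 0
    for _ in range(n_trials):
        if respond(delta):
            streak += 1
            if streak == 2:                   # two correct in a row -> harder
                streak = 0
                if last_dir == +1:
                    reversals.append(delta)   # direction change: record it
                delta, last_dir = max(delta / step, 0.1), -1
        else:                                 # one wrong -> easier
            streak = 0
            if last_dir == -1:
                reversals.append(delta)
            delta, last_dir = delta * step, +1
    # Threshold estimate: mean of the reversal points.
    return sum(reversals) / len(reversals) if reversals else delta

# Simulated listener: reliable above a 5 Hz difference, guessing below it.
random.seed(0)
thr = two_down_one_up_staircase(lambda d: d > 5.0 or random.random() < 0.5)
```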
9. Choi MH, Kim B, Kim HS, Jo JH, Chung SC. The use of natural language to communicate the perception of vibrotactile stimuli. Somatosens Mot Res 2019; 36:42-48. DOI: 10.1080/08990220.2019.1580568.
Affiliations
- Mi Hyun Choi: Department of Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence, Engineering College of Science & Technology, Konkuk University, Chungju, South Korea
- Boseong Kim: Department of Philosophical Counseling and Psychology, Dong-Eui University, Busan, South Korea
- Hyung-Sik Kim: Department of Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence, Engineering College of Science & Technology, Konkuk University, Chungju, South Korea
- Ji-Hun Jo: Department of Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence, Engineering College of Science & Technology, Konkuk University, Chungju, South Korea
- Soon-Cheol Chung: Department of Biomedical Engineering, Research Institute of Biomedical Engineering, School of ICT Convergence, Engineering College of Science & Technology, Konkuk University, Chungju, South Korea
10. Cieśla K, Wolak T, Lorens A, Heimler B, Skarżyński H, Amedi A. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci 2019; 37:155-166. PMID: 31006700; PMCID: PMC6598101; DOI: 10.3233/rnn-190898.
Abstract
BACKGROUND Hearing loss is becoming a serious social and health problem. Its prevalence in the elderly has reached epidemic proportions, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can contribute to the development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle with understanding speech in challenging acoustic environments, especially in the presence of a competing speaker. OBJECTIVES In the current proof-of-concept study we tested whether multisensory stimulation pairing audition with a minimal-size touch device would improve intelligibility of speech in noise. METHODS To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered on two fingertips. Based on the inverse effectiveness law, i.e., that multisensory enhancement is strongest when the unisensory signal-to-noise ratio is lowest, we embedded non-native language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch. RESULTS We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio) in the multisensory condition without any training, at the group level as well as in every participant. The reported group-level improvement of 6 dB was indeed major, considering that an increase of 10 dB represents a doubling of the perceived loudness. CONCLUSIONS These results are especially relevant when compared to previous SSD studies showing behavioral effects only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, whether or not they use HAs or CIs. We also discuss the potential application of such a setup for sense augmentation, such as when learning a new language.
Affiliations
- Katarzyna Cieśla: Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Tomasz Wolak: Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Artur Lorens: Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Benedetta Heimler: Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Henryk Skarżyński: Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Amir Amedi: Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
11. Fluckiger M, Grosshauser T, Troster G. Evaluation of Piano Key Vibrations Among Different Acoustic Pianos and Relevance to Vibration Sensation. IEEE Transactions on Haptics 2018; 11:212-219. PMID: 29911980; DOI: 10.1109/toh.2017.2773099.
Abstract
Recent studies suggest that vibration of piano keys affects the perceived quality of the instrument, as well as dynamic control and timing in piano playing. However, the time signals of piano key vibrations and their physical properties have not yet been analyzed and compared to the threshold of vibration sensation in a real-life playing situation. This study investigates piano key vibrations and explores the diversity of vibrations among different pianos with a laser Doppler vibrometer. A pianist performed single keystrokes, note sequences, and a music piece excerpt on four concert grand pianos, five grand pianos, and two upright pianos. The measurements showed peak displacement levels up to 80 μm, and the frequency spectrum of the vibrations is dominated by frequencies lower than 500 Hz. Finally, a frequency weighting filter is introduced to show that vibration displacement time signals exceed the threshold of human vibration sensation for all evaluated instruments when a note sequence is played in the bass-to-mid range with a single hand at forte level. The conducted experiments demonstrate that vibration characteristics vary distinctively among the investigated pianos.
12. Electro-Tactile Stimulation Enhances Cochlear Implant Speech Recognition in Noise. Sci Rep 2017; 7:2196. PMID: 28526871; PMCID: PMC5438362; DOI: 10.1038/s41598-017-02429-1.
Abstract
For cochlear implant users, combined electro-acoustic stimulation (EAS) significantly improves performance. However, many more users do not have any functional residual acoustic hearing at low frequencies. Because tactile sensation operates in the same low-frequency range (<500 Hz) as the acoustic hearing in EAS, we propose electro-tactile stimulation (ETS) to improve cochlear implant performance. In ten cochlear implant users, a tactile aid applied to the index finger converted the voice fundamental frequency into tactile vibrations. Speech recognition in noise was compared for cochlear implants alone and for the bimodal ETS condition. On average, ETS improved speech reception thresholds by 2.2 dB over cochlear implants alone. Nine of the ten subjects showed a positive ETS effect ranging from 0.3 to 7.0 dB, similar in magnitude to the previously reported EAS benefit. The comparable results suggest that similar neural mechanisms underlie both the ETS and EAS effects. These positive results suggest that complementary auditory and tactile modes could also be used to enhance performance for normal-hearing listeners and automatic speech recognition for machines.
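The tactile aid's central step, converting the voice fundamental frequency (F0) into a vibration frequency, could be sketched as below. The autocorrelation pitch estimator and the 80-400 Hz search range are generic illustrative choices; the study does not describe its aid at this level of detail.

```python
import numpy as np

def f0_to_vibration(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate the fundamental frequency of one audio frame by
    autocorrelation and return it as the frequency a fingertip
    vibrator should play. The method and the 80-400 Hz range are
    assumptions for illustration, not the study's implementation."""
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # lag range for plausible F0
    lag = lo + int(np.argmax(ac[lo:hi + 1]))  # strongest periodicity
    return fs / lag

# A 150 Hz "voiced" frame maps to a ~150 Hz vibration command.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs            # 40 ms analysis frame
vib_freq = f0_to_vibration(np.sin(2 * np.pi * 150 * t), fs)
```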