1. O'Dowd A, Hirst RJ, Seveso MA, McKenna EM, Newell FN. Generalisation to novel exemplars of learned shape categories based on visual and auditory spatial cues does not benefit from multisensory information. Psychon Bull Rev 2024. [PMID: 39103708] [DOI: 10.3758/s13423-024-02548-7]
Abstract
Although the integration of information across multiple senses can enhance object representations in memory, how multisensory information affects the formation of categories is uncertain. In particular, it is unclear to what extent categories formed from multisensory information benefit object recognition over unisensory inputs. Two experiments investigated the categorisation of novel auditory and visual objects, with categories defined by spatial similarity, and tested generalisation to novel exemplars. Participants learned to categorise exemplars based on visual-only (geometric shape), auditory-only (spatially defined soundscape) or audio-visual spatial cues. Categorisation to learned as well as novel exemplars was then tested under the same sensory learning conditions. For all learning modalities, categorisation generalised to novel exemplars. However, there was no evidence of enhanced categorisation performance for learned multisensory exemplars. At best, bimodal performance approximated that of the most accurate unimodal condition, although this was observed only for a subset of exemplars within a category. These findings provide insight into the perceptual processes involved in the formation of categories and have relevance for understanding the sensory nature of object representations underpinning these categories.
Affiliation(s)
- A O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- R J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- M A Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- E M McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- F N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates

2. Chang M, Suzuki S, Kurose T, Ibaraki T. Pretraining alpha rhythm enhancement by neurofeedback facilitates short-term perceptual learning and improves visual acuity by facilitated consolidation. Frontiers in Neuroergonomics 2024; 5:1399578. [PMID: 38894852] [PMCID: PMC11184131] [DOI: 10.3389/fnrgo.2024.1399578]
Abstract
Introduction: Perceptual learning through training with the Gabor patch (GP) has attracted attention as a new vision restoration technique for myopia and age-related deterioration of visual acuity (VA). However, the task itself is monotonous and painful and requires numerous training sessions and some time before becoming effective, which has limited its widespread application. One promising means of facilitating perceptual learning is enhancement of the EEG alpha rhythm in sensory cortex by neurofeedback (NF) before training; however, evidence for effects on VA is lacking. Methods: We investigated whether four 30-min sessions of GP training, conducted over 2 weeks with or without EEG NF to increase alpha power (NF and control group, respectively), can improve vision in myopic subjects. Contrast sensitivity (CS) and VA were measured before and after each GP training session. Results: The NF group showed an improvement in CS at the fourth training session that was not observed in the control group. In addition, VA improved only in the NF group at the third and fourth training sessions, which appears to reflect a consolidation effect (maintenance of the previous training effect). Participants who produced stronger alpha power during the third training session showed greater VA recovery during the fourth training session. Discussion: These results indicate that pretraining alpha enhancement strengthens the subsequent consolidation of perceptual learning and that even a short period of GP training can have a positive effect on VA recovery. This simple protocol may make vision-recovery training easier to apply.
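To make the training stimulus concrete: a Gabor patch is a sinusoidal grating windowed by a Gaussian envelope. The minimal sketch below generates one; the spatial frequency, size, contrast, and orientation values are illustrative assumptions, not the parameters used in this study.

```python
import numpy as np

def gabor_patch(size_px=256, spatial_freq_cpd=4.0, deg_per_image=2.0,
                sigma_deg=0.4, orientation_deg=45.0, contrast=0.5, phase=0.0):
    """Generate a Gabor patch: a sinusoidal grating windowed by a Gaussian.

    All stimulus parameters are illustrative defaults, not the values used
    in the cited study.
    """
    # Coordinate grid in degrees of visual angle, centered on the patch
    half = deg_per_image / 2.0
    coords = np.linspace(-half, half, size_px)
    x, y = np.meshgrid(coords, coords)

    # Rotate coordinates so the grating has the requested orientation
    theta = np.deg2rad(orientation_deg)
    x_rot = x * np.cos(theta) + y * np.sin(theta)

    # Sinusoidal carrier modulated by a circular Gaussian envelope
    carrier = np.cos(2.0 * np.pi * spatial_freq_cpd * x_rot + phase)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_deg ** 2))

    # Map to luminance around a mid-gray background (0.5), scaled by contrast
    return 0.5 + 0.5 * contrast * carrier * envelope

patch = gabor_patch()
print(patch.shape, patch.min().round(3), patch.max().round(3))
```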
Affiliation(s)
- Shuntaro Suzuki
- Vie, Inc., Kamakura, Japan
- NTT Data Institute of Management Consulting, Inc., Tokyo, Japan
- Takuya Ibaraki
- Vie, Inc., Kamakura, Japan
- NTT Data Institute of Management Consulting, Inc., Tokyo, Japan

3. Zhao Y, Liu J, Dosher BA, Lu ZL. Enabling identification of component processes in perceptual learning with nonparametric hierarchical Bayesian modeling. J Vis 2024; 24:8. [PMID: 38780934] [PMCID: PMC11131338] [DOI: 10.1167/jov.24.5.8]
Abstract
Perceptual learning is a multifaceted process, encompassing general learning, between-session forgetting or consolidation, and within-session fast relearning and deterioration. The learning curve constructed from threshold estimates in blocks or sessions, based on tens or hundreds of trials, may obscure these component processes; high temporal resolution is necessary. We developed two nonparametric inference procedures: a Bayesian inference procedure (BIP) that estimates the posterior distribution of contrast threshold in each learning block for each learner independently, and a hierarchical Bayesian model (HBM) that computes the joint posterior distribution of contrast threshold across all learning blocks at the population, subject, and test levels via the covariance of contrast thresholds across blocks. We applied the procedures to data from two studies that investigated the interaction between feedback and training accuracy in Gabor orientation identification over 1,920 trials across six sessions, and estimated the learning curves with block sizes of L = 10, 20, 40, 80, 160, and 320 trials. The HBM generated significantly better fits to the data, smaller standard deviations, and more precise estimates than the BIP across all block sizes. In addition, the HBM generated unbiased estimates, whereas the BIP generated unbiased estimates only with large block sizes and exhibited increased bias with small block sizes. With L = 10, 20, and 40, we were able to consistently identify general learning, between-session forgetting, and rapid relearning and adaptation within sessions. The nonparametric HBM provides a general framework for fine-grained assessment of the learning curve and enables identification of component processes in perceptual learning.
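As a rough illustration of the block-wise Bayesian inference procedure (BIP) described above, the sketch below computes a grid posterior over the contrast threshold for a single block of trials. It assumes a Weibull psychometric function with a fixed slope and lapse rate and a flat prior on log threshold, which may differ from the published model specification.

```python
import numpy as np

def block_posterior(contrasts, correct, guess_rate=0.5, slope=3.0,
                    lapse=0.02, grid=np.logspace(-2.5, 0, 200)):
    """Grid-based posterior over the contrast threshold in one learning block.

    Illustrative sketch only: Weibull psychometric function with fixed slope
    and lapse, flat prior on log threshold, independent Bernoulli trials.
    """
    contrasts = np.asarray(contrasts, dtype=float)
    correct = np.asarray(correct, dtype=bool)

    # Probability correct at each (threshold, contrast) pair
    c = contrasts[None, :] / grid[:, None]
    p = guess_rate + (1.0 - guess_rate - lapse) * (1.0 - np.exp(-c ** slope))

    # Bernoulli log likelihood of the observed responses, one row per threshold
    loglik = np.where(correct[None, :], np.log(p), np.log(1.0 - p)).sum(axis=1)

    # Flat prior on log threshold -> posterior proportional to the likelihood
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    return grid, post

# Example: 20 simulated trials from one block
rng = np.random.default_rng(0)
cs = rng.choice([0.02, 0.05, 0.1, 0.2], size=20)
resp = rng.random(20) < (0.5 + 0.48 * (1 - np.exp(-(cs / 0.06) ** 3)))
grid, post = block_posterior(cs, resp)
print("posterior mean threshold:", float((grid * post).sum()))
```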
Affiliation(s)
- Yukai Zhao
- Center for Neural Science, New York University, New York, NY, USA
- Jiajuan Liu
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Barbara Anne Dosher
- Department of Cognitive Sciences and Institute of Mathematical Behavioral Sciences, University of California, Irvine, CA, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, NYU Shanghai, Shanghai, China
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- NYU-ECNU Institute of Brain and Cognitive Neuroscience, Shanghai, China

4. Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024; 1437:153-172. [PMID: 38270859] [DOI: 10.1007/978-981-99-7611-9_10]
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration is not a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not appear all at once but develops gradually during early postnatal life (in cats, over roughly 4-12 weeks). Multisensory experience is critical for this developmental process: if animals are prevented from experiencing normal visual scenes or sounds (and are thereby deprived of the relevant multisensory experience), development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This chapter summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China

5. Plaza PL, Renier L, Rosemann S, De Volder AG, Rauschecker JP. Sound-encoded faces activate the left fusiform face area in the early blind. PLoS One 2023; 18:e0286512. [PMID: 37992062] [PMCID: PMC10664868] [DOI: 10.1371/journal.pone.0286512]
Abstract
Face perception in humans and nonhuman primates is accomplished by a patchwork of specialized cortical regions. How these regions develop has remained controversial. In sighted individuals, facial information is primarily conveyed via the visual modality. Early blind individuals, on the other hand, can recognize shapes using auditory and tactile cues. Here we demonstrate that such individuals can learn to distinguish faces from houses and other shapes by using a sensory substitution device (SSD) presenting schematic faces as sound-encoded stimuli in the auditory modality. Using functional MRI, we then asked whether a face-selective brain region like the fusiform face area (FFA) shows selectivity for faces in the same subjects, and indeed, we found evidence for preferential activation of the left FFA by sound-encoded faces. These results imply that FFA development does not depend on experience with visual faces per se but may instead depend on exposure to the geometry of facial configurations.
Affiliation(s)
- Paula L. Plaza
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Laurent Renier
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Stephanie Rosemann
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America
- Anne G. De Volder
- Neural Rehabilitation Laboratory, Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Josef P. Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States of America

6. Konagaya A, Gutmann G, Zhang Y. Co-creation environment with cloud virtual reality and real-time artificial intelligence toward the design of molecular robots. J Integr Bioinform 2023; 20:jib-2022-0017. [PMID: 36194394] [PMCID: PMC10063180] [DOI: 10.1515/jib-2022-0017]
Abstract
This paper describes the design philosophy behind our cloud-based virtual reality (VR) co-creation environment (CCE) for molecular modeling. Interactive VR simulation provides enhanced perspectives for molecular modeling, enabling intuitive live demonstration and experimentation in the CCE. In turn, the CCE can enhance knowledge creation by bringing people together to share and create ideas or knowledge that might not emerge otherwise. Our prototype CCE discussed here, which was developed to demonstrate this design philosophy, already enables multiple members to log in and touch virtual molecules running on a cloud server with no noticeable network latency, thanks to real-time artificial intelligence techniques. The CCE plays an essential role in the rational design of molecular robot parts, which consist of biomolecules such as DNA and protein molecules.
Affiliation(s)
- Akihiko Konagaya
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan
- Keisen University, 2-10-1, Minamino, Tama, Tokyo, Japan
- Gregory Gutmann
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan
- Yuhui Zhang
- Molecular Robotics Research Institute, Co., Ltd., 4259-3, Nagatsuta, Midori, Yokohama, Japan

7. Han Y, Lu Y, Zuo Y, Song H, Chou CH, Wang X, Li X, Li L, Niu CM, Hou W. Substitutive proprioception feedback of a prosthetic wrist by electrotactile stimulation. Front Neurosci 2023; 17:1135687. [PMID: 36895418] [PMCID: PMC9989268] [DOI: 10.3389/fnins.2023.1135687]
Abstract
Objective: Sensory feedback for upper-limb prostheses is widely desired and studied. As important components of proprioception, position and movement feedback help users control prostheses better. Among various feedback methods, electrotactile stimulation is a promising way to encode proprioceptive information from a prosthesis. This study was motivated by the need for proprioceptive information about a prosthetic wrist: flexion-extension (FE) position and movement information of the prosthetic wrist is transmitted back to the human body through multichannel electrotactile stimulation. Approach: We developed an electrotactile scheme to encode the FE position and movement of the prosthetic wrist and designed an integrated experimental platform. A preliminary experiment on the sensory threshold and discomfort threshold was performed. Then, two proprioceptive feedback experiments were performed: a position sense experiment (Exp 1) and a movement sense experiment (Exp 2), each including a learning session and a test session. The success rate (SR) and discrimination reaction time (DRT) were analyzed to evaluate recognition, and acceptance of the electrotactile scheme was evaluated by questionnaire. Main results: The average position SRs of five able-bodied subjects, amputee 1, and amputee 2 were 83.78%, 97.78%, and 84.44%, respectively. In the able-bodied subjects, the average movement SR and the average direction-and-range SR of wrist movement were 76.25% and 96.67%, respectively; amputee 1 and amputee 2 had movement SRs of 87.78% and 90.00% and direction-and-range SRs of 64.58% and 77.08%, respectively. The average DRT of the able-bodied subjects was less than 1.5 s, and that of the amputees was less than 3.5 s. Conclusion: The results indicate that, after a short period of learning, subjects can sense the position and movement of wrist FE. The proposed substitutive scheme has the potential to let amputees sense a prosthetic wrist, thereby enhancing human-machine interaction.
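The abstract does not give the details of the coding scheme, but a hypothetical spatial-coding example can illustrate the idea: the wrist flexion-extension angle is quantized onto a small number of electrode channels, and a movement is conveyed by the sequence of channels activated. The channel count and angle limits below are assumptions for illustration, not the study's parameters.

```python
def encode_wrist_position(angle_deg, n_channels=5,
                          flexion_limit=-60.0, extension_limit=60.0):
    """Map a wrist flexion-extension angle onto one of several electrode
    channels (hypothetical spatial coding; the study's actual scheme and
    parameters are not given in the abstract).
    """
    # Clamp to the assumed range of motion and normalize to [0, 1]
    angle = max(flexion_limit, min(extension_limit, angle_deg))
    norm = (angle - flexion_limit) / (extension_limit - flexion_limit)
    # Quantize to a channel index (0 = full flexion, n-1 = full extension)
    return min(int(norm * n_channels), n_channels - 1)


def encode_wrist_movement(angles_deg, n_channels=5):
    """Turn a trajectory of angles into the sequence of channels to pulse,
    so direction and range are conveyed by the order of activated sites."""
    channels = [encode_wrist_position(a, n_channels) for a in angles_deg]
    # Keep only channel transitions: repeated sites are not re-stimulated
    return [ch for i, ch in enumerate(channels) if i == 0 or ch != channels[i - 1]]


# Example: extension from -40 deg to +50 deg sweeps the channels in ascending order
print(encode_wrist_movement(range(-40, 51, 10)))
```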
Affiliation(s)
- Yichen Han
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China
- Yinping Lu
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China
- Yufeng Zuo
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China
- Hongliang Song
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China
- Chih-Hong Chou
- Laboratory of Neurorehabilitation Engineering, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xing Wang
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China
- Xiangxin Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, Guangdong, China
- Lei Li
- Department of Rehabilitation, Southwest Hospital, Army Medical University, Chongqing, China
- Chuanxin M Niu
- Department of Rehabilitation Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Wensheng Hou
- Biomedical Engineering Department, Bioengineering College, Chongqing University, Chongqing, China

8. Hu D, Wei Y, Qian R, Lin W, Song R, Wen JR. Class-Aware Sounding Objects Localization via Audiovisual Correspondence. IEEE Trans Pattern Anal Mach Intell 2022; 44:9844-9859. [PMID: 34941503] [DOI: 10.1109/tpami.2021.3137988]
Abstract
Audiovisual scenes are pervasive in our daily life. It is commonplace for humans to discriminatively localize different sounding objects but quite challenging for machines to achieve class-aware sounding objects localization without category annotations, i.e., localizing the sounding object and recognizing its category. To address this problem, we propose a two-stage step-by-step learning framework to localize and recognize sounding objects in complex audiovisual scenarios using only the correspondence between audio and vision. First, we propose to determine the sounding area via coarse-grained audiovisual correspondence in the single source cases. Then visual features in the sounding area are leveraged as candidate object representations to establish a category-representation object dictionary for expressive visual character extraction. We generate class-aware object localization maps in cocktail-party scenarios and use audiovisual correspondence to suppress silent areas by referring to this dictionary. Finally, we employ category-level audiovisual consistency as the supervision to achieve fine-grained audio and sounding object distribution alignment. Experiments on both realistic and synthesized videos show that our model is superior in localizing and recognizing objects as well as filtering out silent ones. We also transfer the learned audiovisual network into the unsupervised object detection task, obtaining reasonable performance.

9. Lu ZL, Dosher BA. Current directions in visual perceptual learning. Nat Rev Psychol 2022; 1:654-668. [PMID: 37274562] [PMCID: PMC10237053] [DOI: 10.1038/s44159-022-00107-2]
Abstract
The visual expertise of adult humans is jointly determined by evolution, visual development, and visual perceptual learning. Perceptual learning refers to performance improvements in perceptual tasks after practice or training in the task. It occurs in almost all visual tasks, ranging from simple feature detection to complex scene analysis. In this Review, we focus on key behavioral aspects of visual perceptual learning. We begin by describing visual perceptual learning tasks and manipulations that influence the magnitude of learning, and then discuss specificity of learning. Next, we present theories and computational models of learning and specificity. We then review applications of visual perceptual learning in visual rehabilitation. Finally, we summarize the general principles of visual perceptual learning, discuss the tension between plasticity and stability, and conclude with new research directions.
Affiliation(s)
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Center for Neural Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
- Institute of Brain and Cognitive Science, New York University - East China Normal University, Shanghai, China

10. Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: An activation likelihood estimation meta-analysis. Front Neurosci 2022; 16:1010354. [PMID: 36340755] [PMCID: PMC9630591] [DOI: 10.3389/fnins.2022.1010354]
Abstract
Even though vision is considered the best suited sensory modality to acquire spatial information, blind individuals can form spatial representations to navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations since sensory modalities other than vision contribute to the formation of spatial representations, independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, like touch and audition, in individuals with early and late onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks in the processing of non-visual spatial information and navigation, including the posterior parietal cortex, frontal eye fields, insula, and the hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis on neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
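For readers unfamiliar with the method, the toy sketch below illustrates the core of activation likelihood estimation: each reported focus is modeled as a 3D Gaussian, each experiment contributes a modeled-activation map, and the maps are combined as a probabilistic union. It uses a fixed kernel width and omits the sample-size-dependent kernels and permutation-based thresholding of a real ALE analysis.

```python
import numpy as np

def ale_map(experiments, shape=(20, 20, 20), voxel_mm=2.0, fwhm_mm=10.0):
    """Toy activation likelihood estimation (ALE) map.

    Simplified sketch: every focus is an isotropic 3D Gaussian with a fixed
    FWHM, each experiment yields a modeled-activation (MA) map as the voxelwise
    maximum over its foci, and ALE = 1 - prod(1 - MA) across experiments.
    """
    sigma_vox = fwhm_mm / (2.354820045 * voxel_mm)  # FWHM -> sigma, in voxels
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)

    ale = np.zeros(shape)
    for foci in experiments:                      # one list of (i, j, k) foci per study
        ma = np.zeros(shape)
        for focus in foci:
            d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=-1)
            kernel = np.exp(-d2 / (2.0 * sigma_vox ** 2))
            ma = np.maximum(ma, kernel)           # MA map: max over this study's foci
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)      # probabilistic union across studies
    return ale

# Two toy "studies" reporting nearby foci
studies = [[(10, 10, 10), (12, 9, 11)], [(11, 11, 10)]]
print(ale_map(studies).max().round(3))
```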
Affiliation(s)
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Correspondence: Maurice Ptito

11. Normandin ME, Garza MC, Ramos-Alvarez MM, Julian JB, Eresanara T, Punjaala N, Vasquez JH, Lopez MR, Muzzio IA. Navigable Space and Traversable Edges Differentially Influence Reorientation in Sighted and Blind Mice. Psychol Sci 2022; 33:925-947. [PMID: 35536866] [PMCID: PMC9343889] [DOI: 10.1177/09567976211055373]
Abstract
Reorientation enables navigators to regain their bearings after becoming lost. Disoriented individuals primarily reorient themselves using the geometry of a layout, even when other informative cues, such as landmarks, are present. Yet the specific strategies that animals use to determine geometry are unclear. Moreover, because vision allows subjects to rapidly form precise representations of objects and background, it is unknown whether it has a deterministic role in the use of geometry. In this study, we tested sighted and congenitally blind mice (Ns = 8-11) in various settings in which global shape parameters were manipulated. Results indicated that the navigational affordances of the context (the traversable space) promote sampling of boundaries, which determines the effective use of geometric strategies in both sighted and blind mice. However, blind animals can also effectively reorient themselves using 3D edges by extensively patrolling the borders, even when the traversable space is not limited by these boundaries.
Affiliation(s)
- Maria C Garza
- Department of Biology, The University of Texas at San Antonio
- Tuoyo Eresanara
- Department of Biology, The University of Texas at San Antonio
- Juan H Vasquez
- Department of Biology, The University of Texas at San Antonio
- Matthew R Lopez
- Department of Biology, The University of Texas at San Antonio
- Isabel A Muzzio
- Department of Biology, The University of Texas at San Antonio

12. Karagiorgis AT, Chalas N, Karagianni M, Papadelis G, Vivas AB, Bamidis P, Paraskevopoulos E. Computerized Music-Reading Intervention Improves Resistance to Unisensory Distraction Within a Multisensory Task, in Young and Older Adults. Front Hum Neurosci 2021; 15:742607. [PMID: 34566611] [PMCID: PMC8461100] [DOI: 10.3389/fnhum.2021.742607]
Abstract
Incoming information from multiple sensory channels compete for attention. Processing the relevant ones and ignoring distractors, while at the same time monitoring the environment for potential threats, is crucial for survival, throughout the lifespan. However, sensory and cognitive mechanisms often decline in aging populations, making them more susceptible to distraction. Previous interventions in older adults have successfully improved resistance to distraction, but the inclusion of multisensory integration, with its unique properties in attentional capture, in the training protocol is underexplored. Here, we studied whether, and how, a 4-week intervention, which targets audiovisual integration, affects the ability to deal with task-irrelevant unisensory deviants within a multisensory task. Musically naïve participants engaged in a computerized music reading game and were asked to detect audiovisual incongruences between the pitch of a song's melody and the position of a disk on the screen, similar to a simplistic music staff. The effects of the intervention were evaluated via behavioral and EEG measurements in young and older adults. Behavioral findings include the absence of age-related differences in distraction and the indirect improvement of performance due to the intervention, seen as an amelioration of response bias. An asymmetry between the effects of auditory and visual deviants was identified and attributed to modality dominance. The electroencephalographic results showed that both groups shared an increase in activation strength after training, when processing auditory deviants, located in the left dorsolateral prefrontal cortex. A functional connectivity analysis revealed that only young adults improved flow of information, in a network comprised of a fronto-parietal subnetwork and a multisensory temporal area. Overall, both behavioral measures and neurophysiological findings suggest that the intervention was indirectly successful, driving a shift in response strategy in the cognitive domain and higher-level or multisensory brain areas, and leaving lower level unisensory processing unaffected.
Affiliation(s)
- Alexandros T Karagiorgis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Ana B Vivas
- Department of Psychology, CITY College, University of York Europe Campus, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Department of Psychology, University of Cyprus, Nicosia, Cyprus

13. Jouybari AF, Franza M, Kannape OA, Hara M, Blanke O. Tactile spatial discrimination on the torso using vibrotactile and force stimulation. Exp Brain Res 2021; 239:3175-3188. [PMID: 34424361] [PMCID: PMC8541989] [DOI: 10.1007/s00221-021-06181-x]
Abstract
There is a steadily growing number of mobile communication systems that provide spatially encoded tactile information to the human torso. However, the increased use of such hands-off displays is currently not matched with or supported by systematic perceptual characterization of tactile spatial discrimination on the torso. Furthermore, there are currently no data testing spatial discrimination for dynamic force stimuli applied to the torso. In the present study, we measured tactile point localization (LOC) and tactile direction discrimination (DIR) on the thoracic spine using two unisex torso-worn tactile vests realized with arrays of 3 × 3 vibrotactile or force feedback actuators. We aimed, first, to evaluate and compare the spatial discrimination of vibrotactile and force stimulations on the thoracic spine and, second, to investigate the relationship between the LOC and DIR results across stimulations. Thirty-four healthy participants performed both tasks with both vests. Tactile accuracies for vibrotactile and force stimulations were 60.7% and 54.6% for the LOC task and 71.0% and 67.7% for the DIR task, respectively. Performance with the two stimulation types was positively correlated, although accuracies were higher for vibrotactile than for force stimulation across tasks, arguably due to specific properties of vibrotactile stimulation. We observed comparable directional anisotropies in the LOC results for both stimulations; however, anisotropies in the DIR task were only observed with vibrotactile stimulations. We discuss our findings with respect to tactile perception research as well as their implications for the design of high-resolution torso-mounted tactile displays for spatial cueing.
Affiliation(s)
- Atena Fadaei Jouybari
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Matteo Franza
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Oliver Alan Kannape
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Masayuki Hara
- Graduate School of Science and Engineering, Saitama University, Saitama, Japan
- Olaf Blanke
- Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Geneva, Switzerland
- Bertarelli Chair in Cognitive Neuroprosthetics, Center for Neuroprosthetics and Brain Mind Institute, School of Life Sciences, Campus Biotech, Swiss Federal Institute of Technology (EPFL), 1012, Geneva, Switzerland

14. Han X, Xu J, Chang S, Keniston L, Yu L. Multisensory-Guided Associative Learning Enhances Multisensory Representation in Primary Auditory Cortex. Cereb Cortex 2021; 32:1040-1054. [PMID: 34378017] [DOI: 10.1093/cercor/bhab264]
Abstract
Sensory cortices, classically considered to represent modality-specific sensory information, are also found to engage in multisensory processing. However, how sensory processing in sensory cortices is cross-modally modulated remains an open question. Specifically, we understand little about cross-modal representation in sensory cortices during perceptual tasks and how perceptual learning modifies this process. Here, we recorded neural responses in primary auditory cortex (A1) both while freely moving rats discriminated stimuli in Go/No-Go tasks and under anesthesia. Our data show that cross-modal representation in auditory cortex varies with task context. In a task in which an audiovisual cue was the target associated with a water reward, a significantly higher proportion of auditory neurons showed a visually evoked response. The vast majority of auditory neurons that processed auditory-visual interactions exhibited significant multisensory enhancement. However, when the rats performed tasks with unisensory cues as the target, cross-modal inhibition, rather than enhancement, predominated. In addition, multisensory associative learning appeared to leave a trace of plastic change in A1, as a larger proportion of A1 neurons showed multisensory enhancement under anesthesia. These findings indicate that multisensory processing in principal sensory cortices is not static, and that including cross-modal interaction in the task requirements can substantially enhance multisensory processing in sensory cortices.
Affiliation(s)
- Xiao Han
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Song Chang
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD 21853, USA
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai 200062, China
- Key Laboratory of Adolescent Health Assessment and Exercise Intervention of Ministry of Education, School of Life Sciences, East China Normal University, Shanghai 200062, China

15. Pesnot Lerousseau J, Arnold G, Auvray M. Training-induced plasticity enables visualizing sounds with a visual-to-auditory conversion device. Sci Rep 2021; 11:14762. [PMID: 34285265] [PMCID: PMC8292401] [DOI: 10.1038/s41598-021-94133-4]
Abstract
Sensory substitution devices aim at restoring visual functions by converting visual information into auditory or tactile stimuli. Although these devices show promise in the range of behavioral abilities they allow, the processes underlying their use remain underspecified. In particular, although an initial debate focused on whether sensory substitution is visual or auditory/tactile in nature, over the past decade the idea that it reflects a mixture of both has emerged. To investigate behaviorally the extent to which visual and auditory processes are involved, participants completed a Stroop-like crossmodal interference paradigm before and after being trained with a conversion device that translates visual images into sounds. In addition, participants' auditory abilities and their phenomenologies were measured. Our study revealed that, after training, processes shared with vision were involved when participants were asked to identify sounds, as their performance in sound identification was influenced by simultaneously presented visual distractors. In addition, participants' performance during training and their associated phenomenology depended on their auditory abilities, revealing that processing finds its roots in the input sensory modality. Our results pave the way for improving the design and learning of these devices by taking into account inter-individual differences in auditory and visual perceptual strategies.
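The abstract does not specify the conversion scheme, but many visual-to-auditory devices of this kind use a column-scan mapping in which time encodes horizontal position, pitch encodes vertical position, and loudness encodes brightness. The sketch below implements that generic mapping with illustrative parameters; it is not the algorithm of the particular device used in the study.

```python
import numpy as np

def image_to_sound(image, duration_s=1.0, sample_rate=22050,
                   f_min=200.0, f_max=5000.0):
    """Convert a grayscale image (2D array in [0, 1]) into a soundscape.

    Illustrative column-scan scheme: columns are played left to right over
    time, row position maps to pitch (top = high), and pixel brightness maps
    to loudness. Parameters are assumptions, not those of the cited device.
    """
    image = np.asarray(image, dtype=float)
    n_rows, n_cols = image.shape
    samples_per_col = int(duration_s * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate

    # Log-spaced frequencies, highest frequency for the top row
    freqs = np.geomspace(f_max, f_min, n_rows)

    sound = []
    for col in range(n_cols):
        # Sum one sinusoid per row, weighted by that pixel's brightness
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        chunk = (image[:, col][:, None] * tones).sum(axis=0)
        sound.append(chunk)
    sound = np.concatenate(sound)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound

# Example: a single bright diagonal produces a descending frequency sweep
img = np.eye(32)
waveform = image_to_sound(img)
print(waveform.shape)
```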
Affiliation(s)
- Malika Auvray
- Sorbonne Université, CNRS UMR 7222, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005, Paris, France

16. Visual Influences on Auditory Behavioral, Neural, and Perceptual Processes: A Review. J Assoc Res Otolaryngol 2021; 22:365-386. [PMID: 34014416] [PMCID: PMC8329114] [DOI: 10.1007/s10162-021-00789-0]
Abstract
In a naturalistic environment, auditory cues are often accompanied by information from other senses, which can be redundant with or complementary to the auditory information. Although the multisensory interactions derived from this combination of information and that shape auditory function are seen across all sensory modalities, our greatest body of knowledge to date centers on how vision influences audition. In this review, we attempt to capture the state of our understanding at this point in time regarding this topic. Following a general introduction, the review is divided into 5 sections. In the first section, we review the psychophysical evidence in humans regarding vision's influence in audition, making the distinction between vision's ability to enhance versus alter auditory performance and perception. Three examples are then described that serve to highlight vision's ability to modulate auditory processes: spatial ventriloquism, cross-modal dynamic capture, and the McGurk effect. The final part of this section discusses models that have been built based on available psychophysical data and that seek to provide greater mechanistic insights into how vision can impact audition. The second section reviews the extant neuroimaging and far-field imaging work on this topic, with a strong emphasis on the roles of feedforward and feedback processes, on imaging insights into the causal nature of audiovisual interactions, and on the limitations of current imaging-based approaches. These limitations point to a greater need for machine-learning-based decoding approaches toward understanding how auditory representations are shaped by vision. The third section reviews the wealth of neuroanatomical and neurophysiological data from animal models that highlights audiovisual interactions at the neuronal and circuit level in both subcortical and cortical structures. It also speaks to the functional significance of audiovisual interactions for two critically important facets of auditory perception-scene analysis and communication. The fourth section presents current evidence for alterations in audiovisual processes in three clinical conditions: autism, schizophrenia, and sensorineural hearing loss. These changes in audiovisual interactions are postulated to have cascading effects on higher-order domains of dysfunction in these conditions. The final section highlights ongoing work seeking to leverage our knowledge of audiovisual interactions to develop better remediation approaches to these sensory-based disorders, founded in concepts of perceptual plasticity in which vision has been shown to have the capacity to facilitate auditory learning.

17. Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446] [PMCID: PMC8078811] [DOI: 10.1371/journal.pone.0250281]
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to adopting SSDs in the everyday life of blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof of concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory versus unisensory as well as perceptual versus descriptive feedback approaches. To these ends, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups, differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual simultaneous and interleaved, and a control group that had no training. At baseline, before any EyeMusic training, participants' identification of SSD-encoded objects was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual versus descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences. Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning and suggest that for these initial stages, unisensory training, which is easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings may boost the use of SSDs for rehabilitation.
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel

18. Araneda R, Silva Moura S, Dricot L, De Volder AG. Beat Detection Recruits the Visual Cortex in Early Blind Subjects. Life (Basel) 2021; 11:296. [PMID: 33807372] [PMCID: PMC8066101] [DOI: 10.3390/life11040296]
Abstract
Using functional magnetic resonance imaging, here we monitored the brain activity in 12 early blind subjects and 12 blindfolded control subjects, matched for age, gender and musical experience, during a beat detection task. Subjects were required to discriminate regular ("beat") from irregular ("no beat") rhythmic sequences composed of sounds or vibrotactile stimulations. In both sensory modalities, the brain activity differences between the two groups involved heteromodal brain regions including parietal and frontal cortical areas and occipital brain areas, that were recruited in the early blind group only. Accordingly, early blindness induced brain plasticity changes in the cerebral pathways involved in rhythm perception, with a participation of the visually deprived occipital brain areas whatever the sensory modality for input. We conclude that the visually deprived cortex switches its input modality from vision to audition and vibrotactile sense to perform this temporal processing task, supporting the concept of a metamodal, multisensory organization of this cortex.
Affiliation(s)
- Rodrigo Araneda
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Sandra Silva Moura
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Laurence Dricot
- Institute of Neuroscience (IoNS; NEUR Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Anne G. De Volder
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Correspondence: Tel.: +32-2-764-54-82

19. Paré S, Bleau M, Djerourou I, Malotaux V, Kupers R, Ptito M. Spatial navigation with horizontally spatialized sounds in early and late blind individuals. PLoS One 2021; 16:e0247448. [PMID: 33635892] [PMCID: PMC7909643] [DOI: 10.1371/journal.pone.0247448]
Abstract
Blind individuals often report difficulties to navigate and to detect objects placed outside their peri-personal space. Although classical sensory substitution devices could be helpful in this respect, these devices often give a complex signal which requires intensive training to analyze. New devices that provide a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors to either detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance mode, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind and 24 blindfolded-sighted participants for their ability to detect obstacles and to navigate in an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early blind and late blind participants were faster than their sighted counterparts to navigate through the obstacle course. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.
Affiliation(s)
- Samuel Paré
- École d’Optométrie, Université de Montréal, Québec, Canada
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Québec, Canada
- Vincent Malotaux
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Ron Kupers
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience, Université Catholique de Louvain, Brussels, Belgium
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Québec, Canada
- Institute of Neuroscience and Pharmacology (INF), University of Copenhagen, Copenhagen, Denmark

20. Perrotta MV, Asgeirsdottir T, Eagleman DM. Deciphering Sounds Through Patterns of Vibration on the Skin. Neuroscience 2021; 458:77-86. [PMID: 33465416] [DOI: 10.1016/j.neuroscience.2021.01.008]
Abstract
Sensory substitution refers to the concept of feeding information to the brain via an atypical sensory pathway. We here examined the degree to which participants (deaf and hard of hearing) can learn to identify sounds that are algorithmically translated into spatiotemporal patterns of vibration on the skin of the wrist. In a three-alternative forced choice task, participants could determine the identity of up to 95% and on average 70% of the stimuli simply by the spatial pattern of vibrations on the skin. Performance improved significantly over the course of 1 month. Younger participants tended to score better, possibly because of higher brain plasticity, more sensitive skin, or better skills at playing digital games. Similar results were obtained with pattern discrimination, in which a pattern representing the sound of one word was presented to the skin, followed by that of a second word. Participants answered whether the word was the same or different. With minimal difference pairs (distinguished by only one phoneme, such as "house" and "mouse"), the best performance was 83% (average of 62%), while with non-minimal pairs (such as "house" and "zip") the best performance was 100% (average of 70%). Collectively, these results demonstrate that participants are capable of using the channel of the skin to interpret auditory stimuli, opening the way for low-cost, wearable sensory substitution for the deaf and hard of hearing communities.
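The published algorithm is not described in the abstract; the generic sketch below shows one common way a sound can be turned into a spatiotemporal vibration pattern: the magnitude spectrum of each short audio frame is split into log-spaced bands, and each band's energy drives one motor. Band edges, frame length, and motor count are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

def sound_to_vibration(audio, sample_rate=16000, n_motors=4,
                       frame_ms=20, f_min=100.0, f_max=8000.0):
    """Map an audio waveform to per-frame drive levels (0-1) for a few
    vibration motors on the wrist.

    Generic sketch, not the cited device's published algorithm.
    """
    audio = np.asarray(audio, dtype=float)
    frame_len = int(sample_rate * frame_ms / 1000)
    edges = np.geomspace(f_min, f_max, n_motors + 1)   # band boundaries in Hz

    drives = []
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = audio[start:start + frame_len]
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
        # Energy per frequency band -> one motor each
        band_energy = np.array([
            spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in zip(edges[:-1], edges[1:])
        ])
        peak = band_energy.max()
        drives.append(band_energy / peak if peak > 0 else band_energy)
    return np.array(drives)        # shape: (n_frames, n_motors)

# Example: a 200 Hz tone mostly drives the lowest-frequency motor
t = np.arange(16000) / 16000
pattern = sound_to_vibration(np.sin(2 * np.pi * 200 * t))
print(pattern.shape, pattern.mean(axis=0).round(2))
```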
Affiliation(s)
- David M Eagleman
- Neosensory, 4 West 4th Street, Suite 301, San Mateo, CA 94402, USA
- Department of Psychiatry and Behavioral Sciences, Stanford University, 401 Quarry Road, Stanford, CA 94304, USA

21. Paraskevopoulos E, Chalas N, Karagiorgis A, Karagianni M, Styliadis C, Papadelis G, Bamidis P. Aging Effects on the Neuroplastic Attributes of Multisensory Cortical Networks as Triggered by a Computerized Music Reading Training Intervention. Cereb Cortex 2021; 31:123-137. [PMID: 32794571] [DOI: 10.1093/cercor/bhaa213]
Abstract
The steady growth of the aging population is the result of a great expansion of life expectancy, but a smaller expansion of healthy cognitive and brain functioning diminishes the gains achieved by longevity. Music training, as a special case of multisensory learning, may induce restorative neuroplasticity in older ages. The current study aimed to explore aging effects on the cortical network supporting multisensory cognition and to define aging effects on the network's neuroplastic attributes. A computer-based music reading protocol was developed and evaluated via electroencephalography measurements pre- and post-training in young and older adults. Results revealed that multisensory integration is performed via diverse strategies in the two groups: older adults employ higher-order supramodal areas to a greater extent than lower-level perceptual regions, in contrast to younger adults, indicating an age-related shift in the weight of each processing strategy. Restorative neuroplasticity was revealed in the left inferior frontal gyrus and right medial temporal gyrus as a result of the training, while task-related reorganization of cortical connectivity was obstructed in the group of older adults, probably due to systemic maturation mechanisms. On the contrary, younger adults significantly increased functional connectivity among the regions supporting multisensory integration.
Affiliation(s)
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Nikolas Chalas
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; Institute for Biomagnetism and Biosignal Analysis, University of Münster, D-48149 Münster, Germany
- Alexandros Karagiorgis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Charis Styliadis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Georgios Papadelis
- School of Music Studies, Faculty of Fine Arts, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
22
Neugebauer A, Rifai K, Getzlaff M, Wahl S. Navigation aid for blind persons by visual-to-auditory sensory substitution: A pilot study. PLoS One 2020; 15:e0237344. [PMID: 32818953 PMCID: PMC7446825 DOI: 10.1371/journal.pone.0237344] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/23/2020] [Indexed: 11/19/2022] Open
Abstract
PURPOSE
In this study, we investigate to what degree augmented reality technology can be used to create and evaluate a visual-to-auditory sensory substitution device that improves the performance of blind persons in navigation and recognition tasks.
METHODS
A sensory substitution algorithm that translates 3D visual information into audio feedback was designed and integrated into an augmented-reality-based mobile phone application. Using the mobile device as a sensory substitution device, a study with blind participants (n = 7) was performed. The participants navigated through pseudo-randomized obstacle courses using either the sensory substitution device, a white cane, or a combination of both. In a second task, participants had to identify virtual 3D objects and structures using the same sensory substitution device.
RESULTS
The mobile application enabled participants to complete the navigation and object recognition tasks in an experimental environment within the first trials and without previous training, demonstrating the general feasibility and low entry barrier of the designed sensory substitution algorithm. In a direct comparison with the white cane over the ten-hour study, however, the sensory substitution device did not offer a statistically significant improvement in navigation.
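A simple way to picture the kind of 3D-to-audio translation described above is to map an obstacle's position in the camera frame onto a handful of audio parameters. The sketch below is purely illustrative; the coordinate convention, ranges, and pitch mapping are assumptions rather than the algorithm used in the study.

```python
import numpy as np

def obstacle_to_audio(obstacle_xyz, max_range=5.0):
    """Illustrative mapping from an obstacle position (metres, camera frame)
    to simple audio parameters; hypothetical, not the published algorithm.

    Azimuth is encoded as stereo panning, distance as loudness, and height
    as pitch, so nearer obstacles sound louder and higher ones sound higher.
    """
    x, y, z = obstacle_xyz                      # assumed: x right, y up, z forward
    distance = np.linalg.norm(obstacle_xyz)
    azimuth = np.arctan2(x, z)                  # radians, 0 = straight ahead
    pan = np.clip(azimuth / (np.pi / 2), -1.0, 1.0)       # -1 = full left, +1 = full right
    gain = np.clip(1.0 - distance / max_range, 0.0, 1.0)  # nearer = louder
    pitch_hz = 300.0 + 600.0 * np.clip((y + 1.0) / 2.0, 0.0, 1.0)  # assumes y in [-1, 1] m
    return {"pan": pan, "gain": gain, "pitch_hz": pitch_hz}

# Example: an obstacle 2 m ahead and slightly to the right.
print(obstacle_to_audio(np.array([0.5, 0.0, 2.0])))
```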
Affiliation(s)
- Alexander Neugebauer
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Katharina Rifai
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Mathias Getzlaff
- Institute for Applied Physics, Heinrich-Heine University Duesseldorf, Duesseldorf, Germany
- Siegfried Wahl
- ZEISS Vision Science Lab, Eberhard-Karls-University Tuebingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
23
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2020; 5:37. [PMID: 32770416 PMCID: PMC7415050 DOI: 10.1186/s41235-020-00240-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Accepted: 07/13/2020] [Indexed: 11/10/2022]
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
24
Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O'Neill E, Petrini K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front Psychol 2020; 11:1443. [PMID: 32754082 PMCID: PMC7381305 DOI: 10.3389/fpsyg.2020.01443] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 05/29/2020] [Indexed: 11/13/2022] Open
Abstract
Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs), such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we assessed The vOICe and BrainPort, separately and in combination, in an aerial maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, sighted participants' performance with The vOICe was almost as good as with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to the two in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe over trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in acquiring allocentric information.
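The vOICe referred to above converts camera images into "soundscapes" by scanning the image from left to right while mapping vertical position to pitch and brightness to loudness. The sketch below reproduces that general scheme in simplified form; the scan duration, frequency range, and sample rate are illustrative assumptions rather than the device's actual settings.

```python
import numpy as np

def image_to_soundscape(image, sr=22050, scan_s=1.0, f_lo=500.0, f_hi=5000.0):
    """Simplified vOICe-style image-to-sound conversion (illustrative only).

    The image is scanned column by column from left to right; each pixel row
    is assigned a fixed frequency (higher rows = higher pitch) and its
    brightness sets the amplitude of that frequency while the column plays.
    """
    n_rows, n_cols = image.shape
    col_len = int(sr * scan_s / n_cols)          # samples per image column
    t = np.arange(col_len) / sr
    freqs = np.linspace(f_hi, f_lo, n_rows)      # top row maps to the highest pitch
    sound = []
    for c in range(n_cols):
        col = image[:, c].astype(float)          # brightness values for this column
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        sound.append((col[:, None] * tones).sum(axis=0))
    sound = np.concatenate(sound)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound   # normalized mono waveform
```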
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, United Kingdom
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom
- Simon Lange-Smith
- School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, United Kingdom
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
25
Kirsch LP, Job X, Auvray M. Mixing up the Senses: Sensory Substitution Is Not a Form of Artificially Induced Synaesthesia. Multisens Res 2020; 34:297-322. [PMID: 33706280 DOI: 10.1163/22134808-bja10010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 05/26/2020] [Indexed: 11/19/2022]
Abstract
Sensory Substitution Devices (SSDs) are typically used to restore functionality of a sensory modality that has been lost, like vision for the blind, by recruiting another sensory modality such as touch or audition. Sensory substitution has given rise to many debates in psychology, neuroscience and philosophy regarding the nature of experience when using SSDs. Questions first arose as to whether the experience of sensory substitution is represented by the substituted information, the substituting information, or a multisensory combination of the two. More recently, parallels have been drawn between sensory substitution and synaesthesia, a rare condition in which individuals involuntarily experience a percept in one sensory or cognitive pathway when another one is stimulated. Here, we explore the efficacy of understanding sensory substitution as a form of 'artificial synaesthesia'. We identify several problems with previous suggestions for a link between these two phenomena. Furthermore, we find that sensory substitution does not fulfil the essential criteria that characterise synaesthesia. We conclude that sensory substitution and synaesthesia are independent of each other and thus, the 'artificial synaesthesia' view of sensory substitution should be rejected.
Affiliation(s)
- Louise P Kirsch
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
- Xavier Job
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
26
Auvray M. Multisensory and spatial processes in sensory substitution. Restor Neurol Neurosci 2019; 37:609-619. [DOI: 10.3233/rnn-190950] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Sorbonne Université, Paris, France
27
Cross-modal size-contrast illusion: Acoustic increases in intensity and bandwidth modulate haptic representation of object size. Sci Rep 2019; 9:14440. [PMID: 31595003 PMCID: PMC6783429 DOI: 10.1038/s41598-019-50912-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Accepted: 09/12/2019] [Indexed: 01/20/2023] Open
Abstract
Changes in the retinal size of stationary objects provide a cue to the observer's motion in the environment: increases indicate forward motion, and decreases indicate backward motion. In this study, a series of images, each comprising a pair of pine-tree figures, was translated into the auditory modality using sensory substitution software. The resulting auditory stimuli were presented in an ascending sequence (i.e., increasing in intensity and bandwidth, compatible with forward motion), a descending sequence (i.e., decreasing in intensity and bandwidth, compatible with backward motion), or in a scrambled order. During the presentation of stimuli, blindfolded participants estimated the lengths of wooden sticks by haptics. Results showed that those exposed to the stimuli compatible with forward motion underestimated the lengths of the sticks. This consistent underestimation may share some aspects with visual size-contrast effects such as the Ebbinghaus illusion. In contrast, participants in the other two conditions did not show such a magnitude of error in size estimation, which is consistent with the "adaptive perceptual bias" towards acoustic increases in intensity and bandwidth. In sum, we report a novel cross-modal size-contrast illusion, which reveals that auditory motion cues compatible with listeners' forward motion modulate haptic representations of object size.
28
Navigation Systems for the Blind and Visually Impaired: Past Work, Challenges, and Open Problems. SENSORS 2019; 19:s19153404. [PMID: 31382536 PMCID: PMC6696419 DOI: 10.3390/s19153404] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Revised: 07/30/2019] [Accepted: 07/30/2019] [Indexed: 11/16/2022]
Abstract
Over the last decades, the development of navigation devices capable of guiding the blind through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s objective is to provide an updated, holistic view of this research, in order to enable developers to exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will be briefly described and analyzed from a historical perspective, from the first “Electronic Travel Aids” and early research on sensory substitution or indoor/outdoor positioning, to recent systems based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including the main points of criticism of previous approaches. Finally, several technological achievements are highlighted as they could underpin future feasible designs. In line with this, smartphones and wearables with built-in cameras will then be indicated as potentially feasible options with which to support state-of-art computer vision solutions, thus allowing for both the positioning and monitoring of the user’s surrounding area. These functionalities could then be further boosted by means of remote resources, leading to cloud computing schemas or even remote sensing via urban infrastructure.
29
O'Connor MB, Bennie SJ, Deeks HM, Jamieson-Binnie A, Jones AJ, Shannon RJ, Walters R, Mitchell TJ, Mulholland AJ, Glowacki DR. Interactive molecular dynamics in virtual reality from quantum chemistry to drug binding: An open-source multi-person framework. J Chem Phys 2019; 150:220901. [PMID: 31202243 DOI: 10.1063/1.5092590] [Citation(s) in RCA: 49] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
As molecular scientists have made progress in their ability to engineer nanoscale molecular structure, we face new challenges in our ability to engineer molecular dynamics (MD) and flexibility. Dynamics at the molecular scale differs from the familiar mechanics of everyday objects because it involves a complicated, highly correlated, and three-dimensional many-body dynamical choreography which is often nonintuitive even for highly trained researchers. We recently described how interactive molecular dynamics in virtual reality (iMD-VR) can help to meet this challenge, enabling researchers to manipulate real-time MD simulations of flexible structures in 3D. In this article, we outline various efforts to extend immersive technologies to the molecular sciences, and we introduce "Narupa," a flexible, open-source, multiperson iMD-VR software framework which enables groups of researchers to simultaneously cohabit real-time simulation environments to interactively visualize and manipulate the dynamics of molecular structures with atomic-level precision. We outline several application domains where iMD-VR is facilitating research, communication, and creative approaches within the molecular sciences, including training machines to learn potential energy functions, biomolecular conformational sampling, protein-ligand binding, reaction discovery using "on-the-fly" quantum chemistry, and transport dynamics in materials. We touch on iMD-VR's various cognitive and perceptual affordances and outline how these provide research insight for molecular systems. By synergistically combining human spatial reasoning and design insight with computational automation, technologies such as iMD-VR have the potential to improve our ability to understand, engineer, and communicate microscopic dynamical behavior, offering the potential to usher in a new paradigm for engineering molecules and nano-architectures.
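The core idea behind interactive molecular dynamics is that a user-supplied force is added to the physical forces at every integration step, so the simulation responds to manipulation in real time. The toy velocity-Verlet step below illustrates only this principle; it is not the Narupa API, and the time step, mass, and force model are placeholder assumptions.

```python
import numpy as np

def verlet_step_with_user_force(pos, vel, forces_fn, user_force, dt=1e-3, mass=1.0):
    """One velocity-Verlet step of a toy MD simulation with an extra
    'interactive' force injected by a user (conceptual sketch only).
    """
    f_old = forces_fn(pos) + user_force
    pos_new = pos + vel * dt + 0.5 * (f_old / mass) * dt ** 2
    f_new = forces_fn(pos_new) + user_force
    vel_new = vel + 0.5 * ((f_old + f_new) / mass) * dt
    return pos_new, vel_new

# Example: a particle in a harmonic well, nudged by a constant user force.
pos, vel = np.array([1.0, 0.0, 0.0]), np.zeros(3)
harmonic = lambda x: -x
pos, vel = verlet_step_with_user_force(pos, vel, harmonic, user_force=np.array([0.0, 0.1, 0.0]))
```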
Affiliation(s)
- Michael B O'Connor
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Simon J Bennie
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Helen M Deeks
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Alexander Jamieson-Binnie
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Alex J Jones
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Robin J Shannon
- Centre for Computational Chemistry, School of Chemistry, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Rebecca Walters
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Thomas J Mitchell
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- Adrian J Mulholland
- Centre for Computational Chemistry, School of Chemistry, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
- David R Glowacki
- Intangible Realities Laboratory, University of Bristol, Cantock's Close, Bristol BS8 1TS, United Kingdom
30
Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities. Atten Percept Psychophys 2019; 80:999-1010. [PMID: 29473142 DOI: 10.3758/s13414-017-1481-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states how the multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli that are well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follow the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
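The sensitivity index d' used in this abstract is computed from hit and false-alarm rates as the difference of their z-transforms. A minimal implementation is sketched below; the correction applied to extreme rates is one common convention, not necessarily the one used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index from an oddball/yes-no task: d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 are nudged inward so the z-transform stays finite
    (one common correction; others exist).
    """
    def rate(numer, denom):
        r = numer / denom
        return min(max(r, 0.5 / denom), 1 - 0.5 / denom)
    hit_rate = rate(hits, hits + misses)
    fa_rate = rate(false_alarms, false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 45 hits / 5 misses and 8 false alarms / 42 correct rejections.
print(d_prime(45, 5, 8, 42))
```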
31
Hanneton S, Hoellinger T, Forma V, Roby-Brami A, Auvray M. Ears on the Hand: Reaching Three-Dimensional Targets With an Audio-Motor Device. Multisens Res 2019; 33:1-23. [PMID: 32092705 DOI: 10.1163/22134808-20191436] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2019] [Accepted: 09/18/2019] [Indexed: 11/19/2022]
Abstract
Understanding the processes underlying sensorimotor coupling with the environment is crucial for sensorimotor rehabilitation and sensory substitution. In doing so, devices which provide novel sensory feedback consequent to body movement may be optimized in order to enhance motor performance for particular tasks. The aim of the study reported here was to investigate audio-motor coupling when the auditory experience is linked to movements of the head or the hands. The participants had to localize and reach a virtual source with the dominant hand in response to sounds. An electromagnetic system recorded the position and orientation of the participants' head and hands. This system was connected to a 3D audio system that provided binaural auditory feedback on the position of the virtual listener located on the participants' body. The listener's position was computed either from the hands or from the head. For the hand condition, the virtual listener was placed on the dominant hand (the one used to reach the target) in Experiment 1 and on the non-dominant hand, which was constrained in order to have similar amplitude and degrees of freedom as that of the head, in Experiment 2. The results revealed that, in the two experiments, the participants were able to localize a source within the 3D auditory environment. Performance varied as a function of the effector's degrees of freedom and the spatial coincidence between sensor and effector. The results also allowed characterizing the kinematics of the hand and head and how they change with audio-motor coupling condition and practice.
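One way to picture the audio feedback in this kind of audio-motor task is to compute simple binaural cues from the geometry between the virtual listener (placed on the head or hand) and the sound source. The sketch below uses the Woodworth approximation for the interaural time difference and a crude level difference; it is an illustrative stand-in, not the 3D binaural engine used in the study.

```python
import numpy as np

def binaural_cues(listener_pos, listener_forward, source_pos, head_radius=0.09):
    """Rough interaural time and level differences for a virtual listener
    (illustrative sketch; parameters and formulas are simplifying assumptions).
    """
    to_source = source_pos - listener_pos
    to_source = to_source / np.linalg.norm(to_source)
    forward = listener_forward / np.linalg.norm(listener_forward)
    # Signed azimuth in the horizontal plane (assumes y is 'up').
    azimuth = np.arctan2(np.cross(forward, to_source)[1], np.dot(forward, to_source))
    c = 343.0                                              # speed of sound, m/s
    itd = (head_radius / c) * (azimuth + np.sin(azimuth))  # Woodworth approximation
    ild_db = 10.0 * np.sin(azimuth)                        # crude level difference
    return itd, ild_db

# Listener on the hand at the origin facing +z; source 1 m to the right.
print(binaural_cues(np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])))
```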
Affiliation(s)
- Sylvain Hanneton
- Institut des Sciences du Sport-Santé EA3625, Université Paris Descartes, Paris, France
- Thomas Hoellinger
- Laboratoire de Neurophysiologie et Biomécanique du mouvement, Faculté des Sciences de la motricité, Université Libre de Bruxelles, Brussel, Belgium
- Vincent Forma
- Laboratoire Psychologie de la Perception, CNRS UMR 8242, Université Paris Descartes, Paris, France
- Agnes Roby-Brami
- Institut des Systèmes Intelligents et de Robotique, ISIR, CNRS UMR 7222, Sorbonne Université, Paris, France
- Institut des Systèmes Intelligents et de Robotique, Equipe Agathe, INSERM U 1150, Paris, France
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, ISIR, CNRS UMR 7222, Sorbonne Université, Paris, France
32
Tactile recognition of visual stimuli: Specificity versus generalization of perceptual learning. Vision Res 2018; 152:40-50. [DOI: 10.1016/j.visres.2017.11.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2017] [Revised: 10/30/2017] [Accepted: 11/16/2017] [Indexed: 11/19/2022]
33
Sanders PJ, Thompson B, Corballis PM, Maslin M, Searchfield GD. A review of plasticity induced by auditory and visual tetanic stimulation in humans. Eur J Neurosci 2018; 48:2084-2097. [PMID: 30025183 DOI: 10.1111/ejn.14080] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Revised: 06/10/2018] [Accepted: 07/04/2018] [Indexed: 12/01/2022]
Abstract
Long-term potentiation is a form of synaptic plasticity thought to play an important role in learning and memory. Recently noninvasive methods have been developed to induce and measure activity similar to long-term potentiation in humans. Sensory tetani (trains of quickly repeating auditory or visual stimuli) alter the electroencephalogram in a manner similar to electrical stimulation that results in long-term potentiation. This review briefly covers the development of long-term potentiation research before focusing on in vivo human studies that produce long-term potentiation-like effects using auditory and visual stimulation. Similarities and differences between traditional (animal and brain tissue) long-term potentiation studies and human sensory tetanization studies will be discussed, as well as implications for perceptual learning. Although evidence for functional consequences of sensory tetanization remains scarce, studies involving clinical populations indicate that sensory induced plasticity paradigms may be developed into diagnostic and research tools in clinical settings. Individual differences in the effects of sensory tetanization are not well-understood and provide an interesting avenue for future research. Differences in effects found between research groups that have emerged as the field has progressed are also yet to be resolved.
Affiliation(s)
- Philip J Sanders
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Auckland, New Zealand
- Benjamin Thompson
- Centre for Brain Research, University of Auckland, Auckland, New Zealand; School of Optometry & Vision Science, University of Auckland, Auckland, New Zealand; School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
- Paul M Corballis
- Centre for Brain Research, University of Auckland, Auckland, New Zealand; Department of Psychology, University of Auckland, Auckland, New Zealand
- Grant D Searchfield
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, Auckland, New Zealand
34
Rinaldi L, Merabet LB, Vecchi T, Cattaneo Z. The spatial representation of number, time, and serial order following sensory deprivation: A systematic review. Neurosci Biobehav Rev 2018; 90:371-380. [PMID: 29746876 DOI: 10.1016/j.neubiorev.2018.04.021] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2017] [Revised: 03/15/2018] [Accepted: 04/27/2018] [Indexed: 11/16/2022]
Abstract
The spatial representation of numerical and temporal information is thought to be rooted in our multisensory experiences. Accordingly, we may expect visual or auditory deprivation to affect the way we represent numerical magnitude and time spatially. Here, we systematically review recent findings on how blind and deaf individuals represent abstract concepts such as magnitude and time (e.g., past/future, serial order of events) in a spatial format. Interestingly, available evidence suggests that sensory deprivation does not prevent the spatial "re-mapping" of abstract information, but differences compared to normally sighted and hearing individuals may emerge depending on the specific dimension considered (i.e., numerical magnitude, time as past/future, serial order). Herein we discuss how the study of sensory deprived populations may shed light on the specific, and possibly distinct, mechanisms subserving the spatial representation of these concepts. Furthermore, we pinpoint unresolved issues that need to be addressed by future studies to grasp a full understanding of the spatial representation of abstract information associated with visual and auditory deprivation.
Affiliation(s)
- Luca Rinaldi
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Lotfi B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, USA
- Tomaso Vecchi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; IRCCS Mondino Foundation, Pavia, Italy
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; IRCCS Mondino Foundation, Pavia, Italy
35
Henschke JU, Oelschlegel AM, Angenstein F, Ohl FW, Goldschmidt J, Kanold PO, Budinger E. Early sensory experience influences the development of multisensory thalamocortical and intracortical connections of primary sensory cortices. Brain Struct Funct 2018; 223:1165-1190. [PMID: 29094306 PMCID: PMC5871574 DOI: 10.1007/s00429-017-1549-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Accepted: 09/29/2017] [Indexed: 12/21/2022]
Abstract
The nervous system integrates information from multiple senses. This multisensory integration already occurs in primary sensory cortices via direct thalamocortical and corticocortical connections across modalities. In humans, sensory loss from birth results in functional recruitment of the deprived cortical territory by the spared senses but the underlying circuit changes are not well known. Using tracer injections into primary auditory, somatosensory, and visual cortex within the first postnatal month of life in a rodent model (Mongolian gerbil) we show that multisensory thalamocortical connections emerge before corticocortical connections but mostly disappear during development. Early auditory, somatosensory, or visual deprivation increases multisensory connections via axonal reorganization processes mediated by non-lemniscal thalamic nuclei and the primary areas themselves. Functional single-photon emission computed tomography of regional cerebral blood flow reveals altered stimulus-induced activity and higher functional connectivity specifically between primary areas in deprived animals. Together, we show that intracortical multisensory connections are formed as a consequence of sensory-driven multisensory thalamocortical activity and that spared senses functionally recruit deprived cortical areas by an altered development of sensory thalamocortical and corticocortical connections. The functional-anatomical changes after early sensory deprivation have translational implications for the therapy of developmental hearing loss, blindness, and sensory paralysis and might also underlie developmental synesthesia.
Affiliation(s)
- Julia U Henschke
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Institute of Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Anja M Oelschlegel
- Research Group Neuropharmacology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Anatomy, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Frank Angenstein
- Functional Neuroimaging Group, German Center for Neurodegenerative Diseases Within the Helmholtz Association, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Frank W Ohl
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Institute of Biology, Otto-von-Guericke-University Magdeburg, Leipziger Str. 44, 39120, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Jürgen Goldschmidt
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, MD, 20742, USA
- Eike Budinger
- Department Systems Physiology of Learning, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39120, Magdeburg, Germany
36
Dell'Erba S, Brown DJ, Proulx MJ. Synesthetic hallucinations induced by psychedelic drugs in a congenitally blind man. Conscious Cogn 2018; 60:127-132. [PMID: 29549713 DOI: 10.1016/j.concog.2018.02.008] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2017] [Revised: 02/22/2018] [Accepted: 02/23/2018] [Indexed: 10/17/2022]
Abstract
This case report offers rare insights into crossmodal responses to psychedelic drug use in a congenitally blind (CB) individual, BP, as a form of synthetic synesthesia. BP's personal experience provides us with a unique report on the psychological and sensory alterations induced by hallucinogenic drugs, including an account of the absence of visual hallucinations, and a compelling look at the relationship between LSD-induced synesthesia and crossmodal correspondences. The hallucinatory experiences reported by BP are of particular interest in light of the observation that rates of psychosis within the CB population are extremely low. The phenomenology of the induced hallucinations suggests that experiences acquired through other means might not give rise to "visual" experiences in the phenomenological sense, but instead give rise to novel experiences in the other functioning senses.
Affiliation(s)
- Sara Dell'Erba
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath BA2 7AY, UK
- David J Brown
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath BA2 7AY, UK
- Michael J Proulx
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath BA2 7AY, UK
37
Santaniello G, Sebastián M, Carretié L, Fernández-Folgueiras U, Hinojosa JA. Haptic recognition memory following short-term visual deprivation: Behavioral and neural correlates from ERPs and alpha band oscillations. Biol Psychol 2018; 133:18-29. [PMID: 29360562 DOI: 10.1016/j.biopsycho.2018.01.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2017] [Revised: 12/29/2017] [Accepted: 01/11/2018] [Indexed: 10/18/2022]
Abstract
In the current study, we investigated the effects of short-term visual deprivation (2 h) on a haptic recognition memory task with familiar objects. Behavioral data, as well as event-related potentials (ERPs) and induced event-related oscillations (EROs) were analyzed. At the behavioral level, deprived participants showed speeded reaction times to new stimuli. Analyses of ERPs indicated that starting from 1000 ms the recognition of old objects elicited enhanced positive amplitudes only for the visually deprived group. Visual deprivation also influenced EROs. In this sense, we observed reduced power in the lower-1 alpha band for the processing of new compared to old stimuli between 500 and 750 ms. Overall, our data showed improved haptic recognition memory after a short period of visual deprivation. These effects were thought to reflect a compensatory mechanism that might have developed as an adaptive strategy for dealing with the environment when visual information is not available.
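Induced alpha-band power of the kind analysed here can be estimated, for a single channel, by integrating a Welch power spectral density over the band of interest. The sketch below uses a fixed 8-10 Hz window as a stand-in for the study's "lower-1 alpha" band, which is conventionally defined relative to each participant's individual alpha frequency; the window length is also an assumption.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(eeg, sr, band=(8.0, 10.0)):
    """Power in a lower-alpha band for one EEG channel, via Welch's PSD.

    The fixed 8-10 Hz band and 2-s windows are illustrative choices, not the
    study's exact analysis parameters.
    """
    freqs, psd = welch(eeg, fs=sr, nperseg=int(2 * sr))   # 2-s windows
    mask = (freqs >= band[0]) & (freqs < band[1])
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df                            # integrated band power

# Example with 60 s of synthetic data sampled at 500 Hz.
sr = 500
signal = np.random.randn(60 * sr)
print(alpha_band_power(signal, sr))
```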
Affiliation(s)
- Gerardo Santaniello
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain
- Manuel Sebastián
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain; Facultad de Ciencias de la Salud, Universidad Católica San Antonio de Murcia, 30107 Guadalupe, Murcia, Spain
- Luis Carretié
- Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- José Antonio Hinojosa
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain; Facultad de Psicología, Universidad Complutense de Madrid, 28223 Pozuelo de Alarcón, Madrid, Spain
38
Abstract
Understanding the perception and aesthetic appeal of art and environmental objects (what is appreciated, liked, or preferred, and why) is of prime importance for improving the functional capacity of the blind and visually impaired and the ergonomic design of their environment; so far, however, such questions have been examined only in sighted individuals. This paper provides a general overview of the first experimental study of tactile aesthetics as a function of visual experience and level of visual deprivation, using both behavioral and brain imaging techniques. We investigated how blind people perceive and characterize 3D tactile objects, and whether tactile perception, tactile shape preference (liking or disliking), and tactile aesthetic appreciation (judging tactile qualities of an object, such as pleasantness and comfort) are affected by the level of visual experience. The study employed innovative behavioral measures, such as new forms of aesthetic preference-appreciation and perceptual discrimination questionnaires, in combination with advanced functional Magnetic Resonance Imaging (fMRI) techniques, and compared congenitally blind, late-onset blind and blindfolded (sighted) participants. Behavioral results demonstrated that both blind and blindfolded-sighted participants assessed curved or rounded 3D tactile objects as significantly more pleasing than sharp ones, and symmetric objects as significantly more pleasing than asymmetric ones. However, compared to the sighted, blind people showed better tactile discrimination, as demonstrated by both accuracy and speed. Functional MRI results demonstrated both a large overlap and characteristic differences between the aesthetic appreciation brain networks of the blind and the sighted. Both populations recruited somatosensory and motor areas of the brain, with stronger activations in the blind than in the sighted. Moreover, sighted people recruited more frontal regions, whereas blind people, in particular the congenitally blind, paradoxically recruited more 'visual' areas of the brain. These differences were more pronounced between the sighted and the congenitally blind than between the sighted and the late-onset blind, indicating the key influence of the onset time of visual deprivation. Understanding the underlying brain mechanisms should have a wide range of important implications for cross-sensory theory and practice in the rapidly evolving field of neuroaesthetics, as well as for rehabilitation technologies for the blind and the visually impaired.
Affiliation(s)
- A K M Rezaul Karim
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore St, San Francisco, CA 94115, USA; Envision Research Institute, 610 N Main St, Wichita, KS 67203, USA; Department of Psychology, University of Dhaka, Dhaka 1000, Bangladesh
- Lora T Likova
- The Smith-Kettlewell Eye Research Institute, 2318 Fillmore St, San Francisco, CA 94115, USA
39
Schumann F, O'Regan JK. Sensory augmentation: integration of an auditory compass signal into human perception of space. Sci Rep 2017; 7:42197. [PMID: 28195187 PMCID: PMC5307328 DOI: 10.1038/srep42197] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2016] [Accepted: 01/06/2017] [Indexed: 12/30/2022] Open
Abstract
Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences.
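The sensory augmentation described above keeps a "North" sound source fixed in the world while the head turns, mimicking the sensorimotor contingencies of a distal sound. The sketch below shows only that directional bookkeeping; the study itself rendered the signal binaurally via head-related transfer functions, which is not reproduced here.

```python
def north_sound_direction(head_yaw_deg, declination_deg=0.0):
    """Azimuth of a virtual 'North' sound source relative to the head,
    given the head's compass yaw in degrees (illustrative sketch only).

    The returned azimuth stays fixed in the world as the head turns, which is
    the sensorimotor contingency the feedback is meant to mimic.
    """
    world_bearing = 0.0 + declination_deg                       # bearing of the source
    azimuth = (world_bearing - head_yaw_deg + 180.0) % 360.0 - 180.0
    return azimuth                                               # 0 = ahead, +90 = right

# As the head turns 30 degrees to the right, the source shifts 30 degrees to the left.
print(north_sound_direction(30.0))  # -> -30.0
```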
Affiliation(s)
- Frank Schumann
- Laboratoire Psychologie de la Perception - CNRS UMR 8242, Université Paris Descartes, Paris, France
- J Kevin O'Regan
- Laboratoire Psychologie de la Perception - CNRS UMR 8242, Université Paris Descartes, Paris, France
40
Arnold G, Pesnot-Lerousseau J, Auvray M. Individual Differences in Sensory Substitution. Multisens Res 2017; 30:579-600. [DOI: 10.1163/22134808-00002561] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2016] [Accepted: 03/16/2017] [Indexed: 12/23/2022]
Abstract
Sensory substitution devices were developed in the context of perceptual rehabilitation and they aim at compensating one or several functions of a deficient sensory modality by converting stimuli that are normally accessed through this deficient sensory modality into stimuli accessible by another sensory modality. For instance, they can convert visual information into sounds or tactile stimuli. In this article, we review those studies that investigated the individual differences at the behavioural, neural, and phenomenological levels when using a sensory substitution device. We highlight how taking into account individual differences has consequences for the optimization and learning of sensory substitution devices. We also discuss the extent to which these studies allow a better understanding of the experience with sensory substitution devices, and in particular how the resulting experience is not akin to a single sensory modality. Rather, it should be conceived as a multisensory experience, involving both perceptual and cognitive processes, and emerging on each user’s pre-existing sensory and cognitive capacities.
Affiliation(s)
- Gabriel Arnold
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Jacques Pesnot-Lerousseau
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique, CNRS UMR 7222, Université Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France
41
Cecchetti L, Kupers R, Ptito M, Pietrini P, Ricciardi E. Are Supramodality and Cross-Modal Plasticity the Yin and Yang of Brain Development? From Blindness to Rehabilitation. Front Syst Neurosci 2016; 10:89. [PMID: 27877116 PMCID: PMC5099160 DOI: 10.3389/fnsys.2016.00089] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2016] [Accepted: 10/27/2016] [Indexed: 12/20/2022] Open
Abstract
Research in blind individuals has long focused primarily on the brain plastic reorganization that occurs in early visual areas. Only more recently have scientists developed innovative strategies to understand to what extent vision is truly a mandatory prerequisite for the brain's fine morphological architecture to develop and function. As a whole, the studies conducted to date in sighted and congenitally blind individuals have provided ample evidence that several "visual" cortical areas develop independently of visual experience and do process information content regardless of the sensory modality through which a particular stimulus is conveyed: a property named supramodality. At the same time, lack of vision leads to a structural and functional reorganization within "visual" brain areas, a phenomenon known as cross-modal plasticity. Cross-modal recruitment of the occipital cortex in visually deprived individuals represents an adaptive compensatory mechanism that mediates processing of non-visual inputs. Supramodality and cross-modal plasticity appear to be the "yin and yang" of brain development: supramodal is what takes place despite the lack of vision, whereas cross-modal is what happens because of the lack of vision. Here we provide a critical overview of the research in this field and discuss the implications that these novel findings have for the development of educational/rehabilitation approaches and sensory substitution devices (SSDs) in sensory-impaired individuals.
Affiliation(s)
- Luca Cecchetti
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa, Italy
- Ron Kupers
- BRAINlab, Department of Neuroscience and Pharmacology, Panum Institute, University of Copenhagen, Copenhagen, Denmark; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Maurice Ptito
- Laboratory of Neuropsychiatry, Psychiatric Centre Copenhagen, Copenhagen, Denmark; School of Optometry, Université de Montréal, Montréal, QC, Canada
- Emiliano Ricciardi
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; MOMILab, IMT School for Advanced Studies Lucca, Lucca, Italy
42
Proulx MJ, Gwinnutt J, Dell'Erba S, Levy-Tzedek S, de Sousa AA, Brown DJ. Other ways of seeing: From behavior to neural mechanisms in the online "visual" control of action with sensory substitution. Restor Neurol Neurosci 2016; 34:29-44. [PMID: 26599473 PMCID: PMC4927905 DOI: 10.3233/rnn-150541] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses to provide information that is normally sensed visually through sensory substitution to replace missing visual information. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control for action in the blind or blindfolded. It appears that the use of sensory substitution devices utilizes the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. Here we review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action.
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- James Gwinnutt
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Sara Dell'Erba
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Shelly Levy-Tzedek
- Cognition, Aging and Rehabilitation Lab, Recanati School for Community Health Professions, Department of Physical Therapy & Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Alexandra A de Sousa
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK; Department of Science, Bath Spa University, Bath, UK
- David J Brown
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
43
Kristjánsson Á, Moldoveanu A, Jóhannesson ÓI, Balan O, Spagnol S, Valgeirsdóttir VV, Unnthorsson R. Designing sensory-substitution devices: Principles, pitfalls and potential. Restor Neurol Neurosci 2016; 34:769-87. [PMID: 27567755 PMCID: PMC5044782 DOI: 10.3233/rnn-160647] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
An exciting possibility for compensating for loss of sensory function is to augment deficient senses by conveying missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training for the use of such devices, while highlighting potential pitfalls in their design. One example of a pitfall is how conveying extra information about the environment risks sensory overload. Related to this, the limits of attentional capacity make it important to focus on key information and avoid redundancies. Also, differences in processing characteristics and bandwidth between sensory systems severely constrain the information that can be conveyed. Furthermore, perception is a continuous process and does not involve a snapshot of the environment. Design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment and the most effective ways of conveying information should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability, and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, where proximal sensory stimulation is attributed to a distinct exterior object.
Affiliation(s)
- Árni Kristjánsson
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Alin Moldoveanu
- University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Ómar I. Jóhannesson
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Oana Balan
- University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Simone Spagnol
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
- Vigdís Vala Valgeirsdóttir
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Rúnar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
44
Araneda R, Renier LA, Rombaux P, Cuevas I, De Volder AG. Cortical Plasticity and Olfactory Function in Early Blindness. Front Syst Neurosci 2016; 10:75. [PMID: 27625596 PMCID: PMC5003898 DOI: 10.3389/fnsys.2016.00075] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2016] [Accepted: 08/17/2016] [Indexed: 11/13/2022] Open
Abstract
Over the last decade, functional brain imaging has provided insight to the maturation processes and has helped elucidate the pathophysiological mechanisms involved in brain plasticity in the absence of vision. In case of congenital blindness, drastic changes occur within the deafferented “visual” cortex that starts receiving and processing non visual inputs, including olfactory stimuli. This functional reorganization of the occipital cortex gives rise to compensatory perceptual and cognitive mechanisms that help blind persons achieve perceptual tasks, leading to superior olfactory abilities in these subjects. This view receives support from psychophysical testing, volumetric measurements and functional brain imaging studies in humans, which are presented here.
Affiliation(s)
- Rodrigo Araneda
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Laurent A Renier
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Philippe Rombaux
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium; Department of Otorhinolaryngology, Cliniques Universitaires Saint-Luc, Brussels, Belgium
- Isabel Cuevas
- Laboratorio de Neurociencias, Escuela de Kinesiología, Facultad de Ciencias, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
- Anne G De Volder
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
45
Pasqualotto A, Esenkaya T. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes. Front Behav Neurosci 2016; 10:79. [PMID: 27148000 PMCID: PMC4838627 DOI: 10.3389/fnbeh.2016.00079] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Accepted: 04/08/2016] [Indexed: 12/19/2022] Open
Abstract
Visual-to-auditory sensory substitution is used to convey visual information through audition; it was initially created to compensate for blindness and consists of software that converts the visual images captured by a video camera into equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial positions of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, although it has been widely used to investigate object recognition. Additionally, sensory substitution allowed us to study participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the positions of six images by using sensory substitution, and a judgment of relative direction (JRD) task was then used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, we surprisingly found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, supporting the notion that different sensory modalities produce equivalent spatial representations. Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs).
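To make the conversion described in this abstract concrete, the sketch below illustrates one common way a visual-to-auditory soundscape can be generated: the image is scanned column by column over time, each pixel's row is mapped to a pitch and its brightness to loudness. This is a minimal, hypothetical illustration in the spirit of devices such as the vOICe; it is not the code or the exact encoding used in the study above, and the frequency range and duration are assumptions.

```python
import numpy as np

def image_to_soundscape(img, duration=1.0, fs=44100, f_lo=200.0, f_hi=4000.0):
    """Convert a grayscale image (rows x cols, values 0..1) into a mono
    'soundscape': columns are scanned left to right over `duration` seconds,
    each bright pixel contributes a sinusoid whose frequency encodes its row
    (top = high pitch) and whose amplitude encodes its brightness.
    Illustrative sketch only."""
    rows, cols = img.shape
    # Log-spaced frequencies, top row mapped to the highest pitch.
    freqs = np.geomspace(f_hi, f_lo, rows)
    samples_per_col = int(duration * fs / cols)
    t = np.arange(samples_per_col) / fs
    out = []
    for c in range(cols):
        col = img[:, c]
        # Sum the sinusoids contributed by every pixel in this column.
        tones = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out.append(tones)
    sound = np.concatenate(out)
    peak = np.abs(sound).max()
    return sound / peak if peak > 0 else sound

# Example: a single bright diagonal is heard as a smooth descending sweep.
img = np.eye(32)
soundscape = image_to_soundscape(img)
```

Regularities of this kind (a diagonal becoming a pitch sweep, a vertical bar becoming a brief chord) are what participants learn to map back onto shape and spatial position.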
Collapse
Affiliation(s)
| | - Tayfun Esenkaya
- Faculty of Arts and Social Sciences, Sabanci University, Istanbul, Turkey; Department of Psychology, University of Bath, Bath, UK
| |
Collapse
|
46
|
Maidenbaum S, Buchs G, Abboud S, Lavi-Rotbain O, Amedi A. Perception of Graphical Virtual Environments by Blind Users via Sensory Substitution. PLoS One 2016; 11:e0147501. [PMID: 26882473 PMCID: PMC4755598 DOI: 10.1371/journal.pone.0147501] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2015] [Accepted: 01/05/2016] [Indexed: 12/20/2022] Open
Abstract
Graphical virtual environments are currently far from accessible to blind users, as their content is mostly visual. This is especially unfortunate as these environments hold great potential for this population for purposes such as safe orientation, education, and entertainment. Previous tools have increased accessibility, but there is still a long way to go. Visual-to-audio Sensory Substitution Devices (SSDs) can increase accessibility generically by sonifying on-screen content regardless of the specific environment, and offer increased accessibility without the use of expensive dedicated peripherals such as electrode/vibrator arrays. Using SSDs in virtual environments draws on skills similar to those needed in the real world, enabling both training on the device and training on environments virtually before real-world visits. This could enable more complex, standardized and autonomous SSD training and provide new insights into multisensory interaction and the visually deprived brain. However, whether congenitally blind users, who have never experienced virtual environments, will be able to use this information to perceive and interact within them successfully is currently unclear. We tested this using the EyeMusic SSD, which conveys whole-scene visual information, to perform virtual tasks otherwise impossible without vision. Congenitally blind users had to navigate virtual environments and find doors, differentiate between them based on their features (Experiment 1, Task 1) and surroundings (Experiment 1, Task 2), and walk through them; these tasks were accomplished with 95% and 97% success rates, respectively. We further explored the reactions of congenitally blind users during their first interaction with a virtual environment more complex than in the previous tasks: walking down a virtual street, recognizing different features of houses and trees, navigating to cross-walks, and so on. Users reacted enthusiastically and reported feeling immersed within the environment. They highlighted the potential usefulness of such environments for understanding what visual scenes are supposed to look like and their potential for complex training, and suggested many future environments they wished to experience.
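As a rough illustration of how an on-screen virtual environment could be exposed to an SSD user, the hypothetical sketch below rasterises the cells in front of the user into a small grayscale frame that could then be fed to a soundscape generator such as the one sketched earlier. The corridor layout, door encoding, and frame resolution are invented for illustration and are not taken from the EyeMusic experiments.

```python
import numpy as np

# Hypothetical 1-D corridor: 0 = wall, 1 = plain door, 2 = door with a window.
corridor = np.array([0, 0, 1, 0, 0, 2, 0, 0])

def render_view(corridor, position, width=5, height=8):
    """Rasterise the cells in front of `position` into a small grayscale
    frame (height x width). Doors are drawn as bright rectangles; a door
    with a window gets a brighter patch near the top, so the two door
    types produce audibly different soundscapes. Illustrative only."""
    frame = np.zeros((height, width))
    for i in range(width):
        cell = corridor[(position + i) % len(corridor)]
        if cell >= 1:
            frame[2:height - 1, i] = 0.6   # door body
        if cell == 2:
            frame[1:3, i] = 1.0            # window near the top of the door
    return frame

# Moving one step forward yields a new frame, and hence a new soundscape.
frame_here = render_view(corridor, position=0)
frame_next = render_view(corridor, position=1)
# soundscape = image_to_soundscape(frame_here)   # using the earlier sketch
```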
Collapse
Affiliation(s)
- Shachar Maidenbaum
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Galit Buchs
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Sami Abboud
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Ori Lavi-Rotbain
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
| | - Amir Amedi
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Sorbonne Universités, UPMC Univ Paris 06, Institut de la Vision, Paris, France
| |
Collapse
|
47
|
Crossmodal processing and sensory substitution: Is "seeing" with sound and touch a form of perception or cognition? Behav Brain Sci 2016; 39:e241. [PMID: 28355859 DOI: 10.1017/s0140525x1500268x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The brain has evolved in a multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.
Collapse
|
48
|
Buchs G, Maidenbaum S, Levy-Tzedek S, Amedi A. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach. Restor Neurol Neurosci 2016; 34:97-105. [PMID: 26518671 PMCID: PMC4927841 DOI: 10.3233/rnn-150592] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
PURPOSE
To perceive our surroundings visually we constantly move our eyes, focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (such as bionic eyes) and non-invasive (such as Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience?
METHODS
We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components, recognized via the EyeMusic's zooming mechanism.
RESULTS
After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartoon identities in 79 ± 18% of trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04).
CONCLUSIONS
These findings show that even users who lack any previous visual experience can integrate this visual information at increased resolution. This has potentially important practical implications for visual rehabilitation, for both invasive and non-invasive methods.
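The zooming-in idea can be made concrete with a small sketch: because an SSD conveys only a fixed, low "audio resolution", cropping a window around a region of interest before down-sampling devotes the same number of sonified pixels to a smaller area, so more detail survives. The resolutions and the nearest-neighbour resampling below are assumptions for illustration, not the EyeMusic's actual parameters.

```python
import numpy as np

def downsample(img, out_rows, out_cols):
    """Nearest-neighbour resampling to the fixed resolution an SSD can
    convey (illustrative only)."""
    rows, cols = img.shape
    r_idx = np.arange(out_rows) * rows // out_rows
    c_idx = np.arange(out_cols) * cols // out_cols
    return img[np.ix_(r_idx, c_idx)]

def zoom_in(img, center, zoom, out_rows=24, out_cols=40):
    """Crop a window around `center` whose size shrinks with `zoom`, then
    resample it to the device resolution. Higher zoom devotes the same
    number of 'audio pixels' to a smaller region, increasing perceivable
    detail."""
    rows, cols = img.shape
    h, w = max(1, rows // zoom), max(1, cols // zoom)
    r0 = int(np.clip(center[0] - h // 2, 0, rows - h))
    c0 = int(np.clip(center[1] - w // 2, 0, cols - w))
    return downsample(img[r0:r0 + h, c0:c0 + w], out_rows, out_cols)

# The whole face at device resolution vs. a zoomed view of one feature.
face = np.random.rand(480, 640)                 # stand-in for a cartoon face
whole_view = zoom_in(face, center=(240, 320), zoom=1)
eye_view   = zoom_in(face, center=(160, 220), zoom=4)
```

Identifying a face then becomes an active process: zoom into each feature in turn, recognize it from its soundscape, and integrate the parts into a whole, which is the integration ability the study tested.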
Collapse
Affiliation(s)
- Galit Buchs
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
| | - Shachar Maidenbaum
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
| | - Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
| | - Amir Amedi
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Sorbonne Universités, UPMC Univ Paris 06, Institut de la Vision, Paris, France
| |
Collapse
|
49
|
Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience. Neuropsychologia 2015; 83:149-160. [PMID: 26577136 DOI: 10.1016/j.neuropsychologia.2015.11.009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2015] [Revised: 11/03/2015] [Accepted: 11/09/2015] [Indexed: 12/17/2022]
Abstract
Cognitive neuroscience has long attempted to determine how cortical selectivity develops and the relative impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question, as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, chief among them the visual word form area (VWFA), which is selective for the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters, in contrast to other visual categories, when the information is provided via other senses such as touch or audition. Which of these processes dominates? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letters), while the later phase (words) showed VWFA and bilateral dorsal-intraparietal activation. This further supports the notion that many visual regions, even early visual areas, maintain a predilection for their task even when the input modality varies, in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA was recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, during an immediately subsequent semantic task, suggesting that despite only brief sensory substitution experience, orthographic processing can dominate semantic processing in the VWFA. On a wider scope, this implies that, at least in some cases, cross-modal plasticity that enables the recruitment of areas for new tasks may be dominated by sensory-independent, task-specific activation.
Collapse
|
50
|
Brown DJ, Simpson AJR, Proulx MJ. Auditory scene analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli? Front Psychol 2015; 6:1522. [PMID: 26528202 PMCID: PMC4602098 DOI: 10.3389/fpsyg.2015.01522] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2015] [Accepted: 09/22/2015] [Indexed: 11/22/2022] Open
Abstract
A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in this object integration stage, or whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.
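For readers unfamiliar with consonance, the sketch below shows one simple way a two-tone sonified stimulus could be classified as consonant or dissonant: check whether the frequency ratio of its components is close to a small-integer ratio. Intervals near such ratios tend to fuse into a single auditory object, which is the grouping effect the study suggests can work against recognising the underlying visual features. The ratio set and tolerance here are illustrative assumptions, not the authors' stimulus specification.

```python
# Simple-integer frequency ratios conventionally treated as consonant.
CONSONANT_RATIOS = [(1, 1), (2, 1), (3, 2), (4, 3), (5, 3), (5, 4), (6, 5), (8, 5)]

def is_consonant(f1, f2, tol=0.02):
    """Return True if the interval between two tone frequencies lies within
    `tol` (relative error) of a simple-integer 'consonant' ratio.
    Illustrative classification only."""
    hi, lo = max(f1, f2), min(f1, f2)
    ratio = hi / lo
    for n, d in CONSONANT_RATIOS:
        target = n / d
        if abs(ratio - target) / target <= tol:
            return True
    return False

# Two sonified image rows an octave apart fuse readily (consonant)...
print(is_consonant(440.0, 880.0))   # True
# ...whereas a tritone-like pair tends to segregate (dissonant).
print(is_consonant(440.0, 622.3))   # False
```

Under the study's interpretation, a consonant pair like the first example is more likely to be heard as one fused object, making it harder to recover the two separate visual features it encodes.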
Collapse
Affiliation(s)
- David J Brown
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK; Biological and Experimental Psychology Group, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
| | - Andrew J R Simpson
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
| | - Michael J Proulx
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
| |
Collapse
|