1
Romeni S, Toni L, Artoni F, Micera S. Decoding electroencephalographic responses to visual stimuli compatible with electrical stimulation. APL Bioeng 2024; 8:026123. PMID: 38894958; PMCID: PMC11184972; DOI: 10.1063/5.0195680.
Abstract
Electrical stimulation of the visual nervous system could improve the quality of life of patients affected by acquired blindness by restoring some visual sensations, but it requires careful optimization of stimulation parameters to produce useful perceptions. Neural correlates of the elicited perceptions could be used for fast automatic optimization, with electroencephalography a natural choice since it can be acquired non-invasively. Nonetheless, its low signal-to-noise ratio may hinder the discrimination of similar visual patterns, preventing its use in the optimization of electrical stimulation. Our work investigates for the first time the discriminability of electroencephalographic responses to visual stimuli compatible with electrical stimulation, employing a newly acquired dataset whose stimuli encompass the concurrent variation of several features, whereas neuroscience research tends to study the neural correlates of single visual features. We performed above-chance single-trial decoding of multiple features of our newly crafted visual stimuli using relatively simple machine learning algorithms. A decoding scheme employing the information from multiple stimulus presentations was implemented, substantially improving our decoding performance and suggesting that such methods should be used systematically in future applications. The significance of the present work lies in determining which visual features can be decoded from electroencephalographic responses to electrical stimulation-compatible stimuli and at which granularity they can be discriminated. Our methods pave the way to using electroencephalographic correlates to optimize electrical stimulation parameters, thus increasing the effectiveness of current visual neuroprostheses.
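The multi-presentation decoding scheme described in this abstract can be illustrated with a toy sketch. The data, the nearest-centroid classifier, and all parameter values below are stand-ins of our own, not the paper's actual EEG features or algorithms; the point is only that averaging repeated presentations of the same stimulus reduces noise by roughly the square root of the number of repeats, which raises decoding accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, sigma = 64, 2.0

# Two hypothetical "evoked response" templates, separated by distance 2.
base = rng.normal(size=n_channels)
direction = rng.normal(size=n_channels)
direction /= np.linalg.norm(direction)
templates = np.stack([base, base + 2.0 * direction])

def simulate(cls, n_trials):
    """Noisy single-trial responses to a stimulus of class `cls`."""
    return templates[cls] + sigma * rng.normal(size=(n_trials, n_channels))

# Fit a nearest-centroid decoder on noisy training trials.
centroids = np.stack([simulate(c, 200).mean(axis=0) for c in (0, 1)])

def decode(trials):
    """Classify the average over one or more presentations of one stimulus."""
    avg = np.atleast_2d(trials).mean(axis=0)
    return int(np.argmin(np.linalg.norm(centroids - avg, axis=1)))

def accuracy(n_presentations, n_stimuli=400):
    hits = 0
    for _ in range(n_stimuli):
        cls = int(rng.integers(2))
        hits += decode(simulate(cls, n_presentations)) == cls
    return hits / n_stimuli

acc_single = accuracy(1)   # single-trial decoding
acc_multi = accuracy(8)    # pooling 8 presentations of each stimulus
```

With this noise level, pooling eight presentations gives a markedly higher accuracy than single-trial decoding, mirroring the improvement the authors report from their multi-presentation scheme.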
Affiliation(s)
- Fiorenzo Artoni
- Department of Clinical Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
2
van der Grinten M, de Ruyter van Steveninck J, Lozano A, Pijnacker L, Rueckauer B, Roelfsema P, van Gerven M, van Wezel R, Güçlü U, Güçlütürk Y. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife 2024; 13:e85812. PMID: 38386406; PMCID: PMC10883675; DOI: 10.7554/elife.85812.
Abstract
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input to electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models by using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that can run in real time and uses differentiable operations to allow for gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator's suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
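The core rendering idea behind such a simulator can be sketched in a few lines. This is a minimal NumPy illustration of ours, not the authors' PyTorch implementation: each electrode evokes a Gaussian phosphene, and blob width grows with eccentricity as a crude stand-in for the inverse cortical magnification of peripheral vision (the `0.3 + 0.1 * ecc` scaling is an assumption of this sketch).

```python
import numpy as np

def render_phosphenes(positions_deg, amplitudes, fov=20.0, size=128):
    """Render phosphenes as Gaussian blobs on a fov x fov degree canvas.

    positions_deg: (N, 2) phosphene centers in degrees of visual angle.
    amplitudes:    (N,) brightness per phosphene (arbitrary units).
    Blob width grows with eccentricity -- an assumed, simplified stand-in
    for cortical magnification.
    """
    lin = np.linspace(-fov / 2, fov / 2, size)
    xx, yy = np.meshgrid(lin, lin)
    image = np.zeros((size, size))
    for (px, py), amp in zip(positions_deg, amplitudes):
        ecc = np.hypot(px, py)            # eccentricity in degrees
        width = 0.3 + 0.1 * ecc           # degrees; assumed scaling
        image += amp * np.exp(-((xx - px) ** 2 + (yy - py) ** 2)
                              / (2 * width ** 2))
    return image

# A foveal and a peripheral phosphene; the peripheral one renders larger.
img = render_phosphenes(np.array([[0.0, 0.0], [5.0, 5.0]]),
                        np.array([1.0, 1.0]))
```

Because every operation here is smooth in the amplitudes and positions, the same structure written in PyTorch supports the gradient-based optimization of encoding models that the paper targets.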
Affiliation(s)
- Antonio Lozano
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Laura Pijnacker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bodo Rueckauer
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Pieter Roelfsema
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Marcel van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Richard van Wezel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, Netherlands
- Umut Güçlü
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
3
Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023; 23:5. PMID: 37140911; PMCID: PMC10166121; DOI: 10.1167/jov.23.5.5.
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
4
5
Hoogsteen KM, Szpiro S, Kreiman G, Peli E. Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks. ACM Transactions on Accessible Computing 2022; 15. DOI: 10.1145/3522757.
Abstract
Blind people face difficulties with independent mobility, impacting employment prospects, social inclusion, and quality of life. Given the advancements in computer vision, with more efficient and effective automated information extraction from visual scenes, it is important to determine what information is worth conveying to blind travelers, especially since people have a limited capacity to receive and process sensory information. We aimed to investigate which objects in a street scene are useful to describe and how those objects should be described. Thirteen cane-using participants, five of whom were early blind, took part in two urban walking experiments. In the first experiment, participants were asked to voice their information needs in the form of questions to the experimenter. In the second experiment, participants were asked to score scene descriptions and navigation instructions, provided by the experimenter, in terms of their usefulness. The descriptions included a variety of objects with various annotations per object. Additionally, we asked participants to rank-order the objects and the different descriptions per object in terms of priority and to explain why the provided information is or is not useful to them. The results reveal differences between early and late blind participants. Late blind participants requested information more frequently and prioritized information about objects' locations. Our results illustrate how different factors, such as the level of detail, relative position, and the type of information provided when describing an object, affected the usefulness of scene descriptions. Participants explained how they (indirectly) used information, but they were frequently unable to explain their ratings. The results distinguish between various types of travel information, underscore the importance of featuring these types at multiple levels of abstraction, and highlight gaps in the current understanding of travel information needs. Elucidating the information needs of blind travelers is critical for the development of more useful assistive technologies.
Affiliation(s)
- Karst M.P. Hoogsteen
- Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
- Sarit Szpiro
- Department of Special Education, University of Haifa, Haifa, Israel
- Gabriel Kreiman
- Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, Cambridge, Massachusetts, United States of America
- Eli Peli
- Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
6
de Ruyter van Steveninck J, Güçlü U, van Wezel R, van Gerven M. End-to-end optimization of prosthetic vision. J Vis 2022; 22:20. PMID: 35703408; PMCID: PMC8899855; DOI: 10.1167/jov.22.2.20.
Abstract
Neural prosthetics may provide a promising solution to restore visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared to normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces a difficult challenge in finding a general and automated preprocessing strategy that can be tailored to specific tasks or user requirements. In this paper, we present a novel deep learning approach that explicitly addresses this issue by optimizing the entire process of phosphene generation in an end-to-end fashion. The proposed model is based on a deep auto-encoder architecture and includes a highly adjustable simulation module of prosthetic vision. In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol. The results of these proof-of-principle experiments illustrate the potential of end-to-end optimization for prosthetic vision. The presented approach is highly modular and could be extended to automated, dynamic optimization of prosthetic vision for everyday tasks, given any specific constraints, accommodating the individual requirements of the end user.
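The essence of end-to-end optimization through a simulation module can be shown with a deliberately tiny sketch. Instead of the paper's deep auto-encoder, we use a linear "renderer" (each electrode evokes a fixed Gaussian blob, so the simulated percept is `A @ s`) and optimize the stimulation vector `s` by gradient descent on the reconstruction error; all shapes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
size, n_elec = 32, 64

# Differentiable (here: linear) phosphene renderer: percept = A @ s,
# where column i of A is the Gaussian blob evoked by electrode i.
lin = np.linspace(-1, 1, size)
xx, yy = np.meshgrid(lin, lin)
centers = rng.uniform(-1, 1, size=(n_elec, 2))
A = np.stack([np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.02).ravel()
              for cx, cy in centers], axis=1)        # (size*size, n_elec)

# Toy target percept: a bright disk.
target = (xx ** 2 + yy ** 2 < 0.5).astype(float).ravel()

# Safe step size for the quadratic loss ||A s - target||^2.
smax = np.linalg.norm(A, 2)          # largest singular value of A
lr = 1.0 / (2.0 * smax ** 2)

s = np.zeros(n_elec)
losses = []
for _ in range(300):
    resid = A @ s - target
    losses.append(float(resid @ resid))
    s -= lr * 2.0 * (A.T @ resid)    # gradient step through the renderer
```

In the paper, the renderer is a richer differentiable simulator and the stimulation protocol is produced by a trained encoder network rather than optimized per image, but the mechanism, backpropagating a perceptual loss through the simulation module, is the same.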
Affiliation(s)
- Jaap de Ruyter van Steveninck
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Umut Güçlü
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Richard van Wezel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, The Netherlands
- Marcel van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
7
Pio-Lopez L, Poulkouras R, Depannemaecker D. Visual cortical prosthesis: an electrical perspective. J Med Eng Technol 2021; 45:394-407. PMID: 33843427; DOI: 10.1080/03091902.2021.1907468.
Abstract
The electrical stimulation of the visual cortices has the potential to restore vision to blind individuals. To date, the results of visual cortical prosthetics have been limited, as no prosthesis has restored fully functional vision, but the field has seen renewed interest in recent years thanks to wireless and other technological advances. However, several scientific and technical challenges remain open to achieving the therapeutic benefit expected from these new devices. One of the main challenges is the electrical stimulation of the brain itself. In this review, we analyse the results of electrode-based visual cortical prosthetics from the electrical point of view. We first describe what is known about the electrode-tissue interface and the safety of electrical stimulation. Then we focus on the psychophysics of prosthetic vision and the state of the art regarding the interplay between electrical stimulation of the visual cortex and phosphene perception. Lastly, we discuss the challenges and perspectives of visual cortex electrical stimulation and electrode array design for developing the next generation of implantable cortical visual prostheses.
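One concrete tool from the stimulation-safety literature this review touches on is the Shannon (1992) criterion, which bounds the combination of charge per phase and charge density per phase. A small sketch (the electrode geometry and stimulation values are illustrative, and note that the criterion was derived for macroelectrodes; microelectrode safety limits are often assessed differently):

```python
import math

def shannon_safe(current_ma, pulse_width_ms, electrode_area_cm2, k=1.85):
    """Check the Shannon (1992) criterion: log10(D) <= k - log10(Q),
    with Q the charge per phase (uC) and D the charge density per phase
    (uC/cm^2). k = 1.85 is a commonly cited conservative limit."""
    q = current_ma * pulse_width_ms      # uC per phase (mA * ms = uC)
    d = q / electrode_area_cm2           # uC/cm^2 per phase
    return math.log10(d) + math.log10(q) <= k

# Illustrative values: 100 uA, 0.2 ms phase on a 2000 um^2 microelectrode.
area_cm2 = 2000e-8                       # 2000 um^2 expressed in cm^2
within_limit = shannon_safe(0.1, 0.2, area_cm2)
```

Doubling pulse amplitude or width raises both Q and D, so the left-hand side grows quickly; this is one reason parameter optimization for cortical prostheses is tightly coupled to electrode size.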
Affiliation(s)
- Romanos Poulkouras
- Department of Bioelectronics, Ecole Nationale Supérieure des Mines, CMP-EMSE, Gardanne, France
- Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille, France
- Damien Depannemaecker
- Department of Integrative and Computational Neuroscience, Paris-Saclay Institute of Neuroscience, Centre National de la Recherche Scientifique, Gif-sur-Yvette, France
8
Sanchez-Garcia M, Martinez-Cantin R, Bermudez-Cameo J, Guerrero JJ. Influence of field of view in visual prostheses design: Analysis with a VR system. J Neural Eng 2020; 17:056002. PMID: 32947270; DOI: 10.1088/1741-2552/abb9be.
Abstract
OBJECTIVE Visual prostheses are designed to restore partial functional vision in patients with total vision loss. Retinal visual prostheses provide limited capabilities as a result of low resolution, limited field of view, and poor dynamic range. Understanding the influence of these parameters on perception results can guide prosthesis research and design. APPROACH In this work, we evaluate the influence of field of view with respect to spatial resolution in visual prostheses, measuring accuracy and response time in a search-and-recognition task. Twenty-four normally sighted participants were asked to find and recognize usual objects, such as furniture and home appliances, in indoor room scenes. For the experiment, we use a new simulated prosthetic vision system that allows simple and effective experimentation. Our system uses a virtual-reality environment based on panoramic scenes. The simulator employs a head-mounted display which allows users to feel immersed in the scene by perceiving the entire scene all around them. Our experiments use public image datasets and a commercial head-mounted display. We have also released the virtual-reality software for replicating and extending the experimentation. MAIN RESULTS Results show that accuracy and response time decrease as the field of view is increased. Furthermore, performance appears to be correlated with angular resolution, but shows diminishing returns even at a resolution of less than 2.3 phosphenes per degree. SIGNIFICANCE Our results seem to indicate that, for the design of retinal prostheses, it is better to concentrate the phosphenes in a small area to maximize angular resolution, even if that implies sacrificing field of view.
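The trade-off the study measures follows from simple arithmetic: for a fixed phosphene count, widening the field of view dilutes angular resolution. A back-of-the-envelope sketch (the square-grid layout is our simplifying assumption, not the paper's exact phosphene arrangement):

```python
import math

def angular_resolution(n_phosphenes, fov_deg):
    """Phosphenes per degree for n phosphenes laid out on a square grid
    spanning a square field of view of fov_deg x fov_deg degrees."""
    return math.sqrt(n_phosphenes) / fov_deg

# With 1000 phosphenes, shrinking the field of view raises angular resolution:
for fov in (40, 20, 10):
    print(f"{fov:>3} deg FOV -> {angular_resolution(1000, fov):.2f} phosphenes/deg")
```

Halving the field of view doubles phosphenes per degree, which is why concentrating a fixed electrode budget in a smaller area improved task performance in these experiments.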
Affiliation(s)
- Melani Sanchez-Garcia
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
9
Sanchez-Garcia M, Martinez-Cantin R, Guerrero JJ. Semantic and structural image segmentation for prosthetic vision. PLoS One 2020; 15:e0227677. PMID: 31995568; PMCID: PMC6988941; DOI: 10.1371/journal.pone.0227677.
Abstract
Prosthetic vision is being applied to partially recover the retinal stimulation of visually impaired people. However, the phosphenic images produced by the implants have very limited information bandwidth due to poor resolution and a lack of color or contrast. The ability to recognize objects and understand scenes in real environments is severely restricted for prosthetic users. Computer vision can play a key role in overcoming these limitations and optimizing the visual information presented in prosthetic vision, improving the amount of information that is conveyed. We present a new approach to building a schematic representation of indoor environments for simulated phosphene images. The proposed method combines a variety of convolutional neural networks for extracting and conveying relevant information about the scene, such as structural informative edges of the environment and silhouettes of segmented objects. Experiments were conducted with normally sighted subjects using a simulated prosthetic vision system. The results show good accuracy for object recognition and room identification tasks on indoor scenes using the proposed approach, compared to other image processing methods.
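The general pipeline, a dense label map reduced to a schematic of contours and then sampled at phosphene resolution, can be sketched with a toy label map standing in for the paper's CNN outputs (the boundary extraction and max-pool downsampling below are our simplifications, not the authors' networks):

```python
import numpy as np

def schematic_edges(labels):
    """Mark pixels where the semantic label changes (object/structure contours)."""
    edges = np.zeros_like(labels, dtype=float)
    edges[:-1, :] += labels[:-1, :] != labels[1:, :]   # vertical transitions
    edges[:, :-1] += labels[:, :-1] != labels[:, 1:]   # horizontal transitions
    return (edges > 0).astype(float)

def to_phosphene_grid(image, grid=32):
    """Downsample a schematic image to a coarse phosphene activation grid
    by max-pooling over blocks."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    crop = image[: grid * bh, : grid * bw]
    return crop.reshape(grid, bh, grid, bw).max(axis=(1, 3))

# Toy "segmentation": one rectangular object (label 1) on background (label 0).
labels = np.zeros((128, 128), dtype=int)
labels[40:90, 30:100] = 1
phosphenes = to_phosphene_grid(schematic_edges(labels))
```

Only the object's outline survives into the phosphene grid, which is the point: contours and silhouettes spend the scarce phosphene bandwidth on the most informative structure.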
Affiliation(s)
- Melani Sanchez-Garcia
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Ruben Martinez-Cantin
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Jose J. Guerrero
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
10
Abstract
In this Editor's Review, articles published in 2017 are organized by category and summarized. We provide a brief reflection of the research and progress in artificial organs intended to advance and better human life while providing insight for continued application of these technologies and methods. Artificial Organs continues in the original mission of its founders "to foster communications in the field of artificial organs on an international level." Artificial Organs continues to publish developments and clinical applications of artificial organ technologies in this broad and expanding field of organ Replacement, Recovery, and Regeneration from all over the world. Peer-reviewed Special Issues this year included contributions from the 12th International Conference on Pediatric Mechanical Circulatory Support Systems and Pediatric Cardiopulmonary Perfusion edited by Dr. Akif Undar, Artificial Oxygen Carriers edited by Drs. Akira Kawaguchi and Jan Simoni, the 24th Congress of the International Society for Mechanical Circulatory Support edited by Dr. Toru Masuzawa, Challenges in the Field of Biomedical Devices: A Multidisciplinary Perspective edited by Dr. Vincenzo Piemonte and colleagues, and Functional Electrical Stimulation edited by Dr. Winfried Mayr and colleagues. We take this time also to express our gratitude to our authors for offering their work to this journal. We offer our very special thanks to our reviewers, who give so generously of time and expertise to review, critique, and especially provide meaningful suggestions to the authors' work, whether eventually accepted or rejected. Without these excellent and dedicated reviewers, the quality expected from such a journal would not be possible. We also express our special thanks to our Publisher, John Wiley & Sons, for their expert attention and support in the production and marketing of Artificial Organs. We look forward to reporting further advances in the coming years.