1. Holiel HA, Fawzi SA, Al-Atabany W. Pre-processing visual scenes for retinal prosthesis systems: A comprehensive review. Artif Organs 2024. PMID: 39023279. DOI: 10.1111/aor.14824.
Abstract
BACKGROUND: Retinal prostheses offer hope for individuals with degenerative retinal diseases by stimulating the remaining retinal cells to partially restore vision. This review examines current advancements in retinal prosthesis technology, with special emphasis on the pivotal role that image processing and machine learning techniques play in this evolution. METHODS: We provide a comprehensive analysis of existing implantable devices and optogenetic strategies, delineating their advantages, limitations, and challenges in addressing complex visual tasks. The review covers the image processing algorithms and deep learning architectures that have been implemented to enhance the functionality of retinal prosthetic devices, and summarizes testing results from clinical trials and from Simulated Prosthetic Vision (SPV) with phosphene simulations, a critical method for modeling the visual perception of retinal prosthesis users. RESULTS: The review highlights significant progress in retinal prosthesis technology, particularly its capacity to augment visual perception in the visually impaired. It discusses the integration of image processing and deep learning and their impact on how users interact with and navigate their environment, drawing on clinical trials. It also identifies limitations of some techniques when applied to current devices: several approaches have only been evaluated in simulation, often with normally sighted participants, or rely on qualitative analysis, and only some employ realistic phosphene perception models. CONCLUSION: This interdisciplinary field holds promise for the future of retinal prostheses, with the potential to significantly enhance the quality of life of implant recipients. Future research should focus on optimizing phosphene simulations for SPV approaches, accounting for the distorted and confusing nature of phosphene perception, thereby enriching the visual perception these prosthetic devices provide. This will not only improve navigational independence but also facilitate more immersive interaction with the environment.
Affiliation(s)
- Heidi Ahmed Holiel
- Medical Imaging and Image Processing Research Group, Center for Informatics Science, Nile University, Sheikh Zayed City, Egypt
- Sahar Ali Fawzi
- Medical Imaging and Image Processing Research Group, Center for Informatics Science, Nile University, Sheikh Zayed City, Egypt
- Systems and Biomedical Engineering Department, Cairo University, Giza, Egypt
- Walid Al-Atabany
- Medical Imaging and Image Processing Research Group, Center for Informatics Science, Nile University, Sheikh Zayed City, Egypt
- Biomedical Engineering Department, Helwan University, Helwan, Egypt
2. van der Grinten M, de Ruyter van Steveninck J, Lozano A, Pijnacker L, Rueckauer B, Roelfsema P, van Gerven M, van Wezel R, Güçlü U, Güçlütürk Y. Towards biologically plausible phosphene simulation for the differentiable optimization of visual cortical prostheses. eLife 2024; 13:e85812. PMID: 38386406. PMCID: PMC10883675. DOI: 10.7554/elife.85812.
Abstract
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input into electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or 'phosphenes') has limited resolution, and a great portion of the field's research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects or with computational models using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that can run in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence from humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex, and incorporates the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics. Our results demonstrate the simulator's suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular, open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
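As a rough, self-contained illustration of the core idea (not the authors' open-source implementation; all parameter values are arbitrary placeholders), the sketch below renders a phosphene percept from per-electrode stimulation amplitudes, growing phosphene size with eccentricity to mimic inverse cortical magnification:

```python
import numpy as np

def render_phosphenes(stim, ecc, ang, size=128, fov=20.0, k=0.3):
    """Render a phosphene image from per-electrode stimulation amplitudes.

    stim : (n,) stimulation amplitudes in [0, 1]
    ecc, ang : (n,) polar visual-field position of each phosphene
               (eccentricity in degrees, angle in radians)
    Phosphene width grows linearly with eccentricity, a common
    simplification of inverse cortical magnification.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    # pixel grid in degrees of visual angle, centered on fixation
    xs = (xs / (size - 1) - 0.5) * fov
    ys = (ys / (size - 1) - 0.5) * fov
    img = np.zeros((size, size))
    for a, e, th in zip(stim, ecc, ang):
        cx, cy = e * np.cos(th), e * np.sin(th)
        sigma = 0.5 + k * e  # larger phosphenes in the periphery
        img += a * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)
```

Written with differentiable operations in PyTorch, as in the paper, the same computation would let gradients flow from a perceptual loss back to the stimulation vector, which is what enables end-to-end optimization of phosphene encoding models.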
Affiliation(s)
- Antonio Lozano
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Laura Pijnacker
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Bodo Rueckauer
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Pieter Roelfsema
- Netherlands Institute for Neuroscience, Vrije Universiteit, Amsterdam, Netherlands
- Marcel van Gerven
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Richard van Wezel
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Biomedical Signals and Systems Group, University of Twente, Enschede, Netherlands
- Umut Güçlü
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Yağmur Güçlütürk
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
3. Rafiei MH, Gauthier LV, Adeli H, Takabi D. Self-Supervised Learning for Electroencephalography. IEEE Trans Neural Netw Learn Syst 2024; 35:1457-1471. PMID: 35867362. DOI: 10.1109/tnnls.2022.3190448.
Abstract
Decades of research have shown that machine learning outperforms conventional statistical techniques at discovering highly nonlinear patterns embedded in electroencephalography (EEG) records. However, even the most advanced machine learning techniques require relatively large, labeled EEG repositories, and EEG data collection and labeling are costly. Moreover, combining available datasets to achieve a large data volume is usually infeasible due to inconsistent experimental paradigms across trials. Self-supervised learning (SSL) addresses these challenges because it enables learning from EEG records across trials with variable experimental paradigms, even when the trials explore different phenomena. It aggregates multiple EEG repositories to increase accuracy, reduce bias, and mitigate overfitting in machine learning training. In addition, SSL can be employed where labeled training data are limited and manual labeling is costly. This article (1) provides a brief introduction to SSL; (2) describes some SSL techniques employed in recent studies, including EEG research; (3) proposes current and potential SSL techniques for future investigation in EEG studies; (4) discusses the pros and cons of different SSL techniques; and (5) offers holistic implementation tips and potential future directions for EEG SSL practice.
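One concrete example of an SSL pretext task that has been applied to EEG is relative positioning: unlabeled recordings are turned into labeled window pairs, with nearby windows labeled positive and distant windows negative. The sketch below is a generic illustration, not code from the article, and all thresholds are placeholders:

```python
import numpy as np

def relative_positioning_pairs(eeg, win, tau_pos, tau_neg, n_pairs, seed=0):
    """Build a self-supervised pretext dataset from one unlabeled recording.

    eeg : (channels, samples) raw signal
    win : window length in samples
    tau_pos : onset gaps <= tau_pos are labeled 1 ('close in time')
    tau_neg : onset gaps >= tau_neg are labeled 0 ('far apart')
    """
    rng = np.random.default_rng(seed)
    last_onset = eeg.shape[1] - win
    pairs, labels = [], []
    while len(labels) < n_pairs:
        i, j = rng.integers(0, last_onset, size=2)
        gap = abs(int(i) - int(j))
        if gap <= tau_pos:
            label = 1
        elif gap >= tau_neg:
            label = 0
        else:
            continue  # ambiguous gap: discard the draw
        pairs.append(np.stack([eeg[:, i:i + win], eeg[:, j:j + win]]))
        labels.append(label)
    return np.array(pairs), np.array(labels)
```

A network trained to predict these labels learns temporal EEG structure without any manual annotation, which is the point of the pretext stage.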
4. Leong F, Rahmani B, Psaltis D, Moser C, Ghezzi D. An actor-model framework for visual sensory encoding. Nat Commun 2024; 15:808. PMID: 38280912. PMCID: PMC10821921. DOI: 10.1038/s41467-024-45105-5.
Abstract
A fundamental challenge in neuroengineering is determining a proper artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices that restore sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is to downsample images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We validated a learning-based approach (the actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images eliciting neuronal responses in silico and ex vivo with higher neuronal reliability than those produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel's weights. This methodological approach may guide future artificial image encoding strategies for visual prostheses, and the framework could ultimately be applied to encoding strategies in other sensory prostheses, such as cochlear implants or limb prostheses.
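The actor-model framework itself is learned from retinal recordings, but the learning-free baselines it is compared against can be sketched directly. Below, a plain block-averaging downsampler is contrasted with a crude retina-inspired variant that applies a difference-of-Gaussians (center-surround) filter before downsampling; the kernel widths are arbitrary placeholders, and neither function reproduces the paper's pipeline:

```python
import numpy as np

def block_average_downsample(img, factor):
    """Learning-free baseline: average non-overlapping factor x factor blocks."""
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def center_surround_downsample(img, factor, sigma_c=1.0, sigma_s=3.0):
    """Retina-inspired variant: ON-center difference-of-Gaussians filtering
    before downsampling, crudely mimicking ganglion-cell receptive fields.
    Kernel widths are arbitrary placeholders."""
    r = int(3 * sigma_s)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    gauss = lambda s: np.exp(-(x ** 2 + y ** 2) / (2 * s ** 2)) / (2 * np.pi * s ** 2)
    dog = gauss(sigma_c) - gauss(sigma_s)  # center minus surround
    pad = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + 2 * r + 1, j:j + 2 * r + 1] * dog)
    return block_average_downsample(out, factor)
```

The center-surround version preserves contrast edges that plain averaging blurs away, which is the intuition behind replacing fixed filters with a network trained on measured retinal responses.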
Affiliation(s)
- Franklin Leong
- Medtronic Chair in Neuroengineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Babak Rahmani
- Laboratory of Applied Photonics Devices, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Microsoft Research, Cambridge, UK
- Demetri Psaltis
- Optics Laboratory, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Christophe Moser
- Laboratory of Applied Photonics Devices, Institute of Electrical and Micro Engineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Diego Ghezzi
- Medtronic Chair in Neuroengineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, École Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Ophthalmic and Neural Technologies Laboratory, Department of Ophthalmology, University of Lausanne, Hôpital ophtalmique Jules-Gonin, Fondation Asile des Aveugles, Lausanne, Switzerland
5. Wang HZ, Wong YT. A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision. J Neural Eng 2023; 20:046027. PMID: 37531948. PMCID: PMC10594539. DOI: 10.1088/1741-2552/aceca2.
Abstract
Objective. We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether visual performance can be improved using a novel clustering algorithm. Approach. Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture the phosphene map distributions generated in implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and the placement of implants on cortices from magnetic resonance imaging scans. Five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning, and a density-based clustering algorithm was then used to relocate the centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and an object recognition task. Main results. Performance on control maps was significantly better than on retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed potential for improving visual ability across multiple sessions in the object recognition task. Significance. The current paradigm is the first to simulate the experience of cortical prosthetic vision based on brain scans and implant placement, capturing the spatial distribution of phosphenes more realistically. Using evenly distributed maps may overestimate the performance that visual prostheses can restore. This simulation paradigm could be used in clinical practice when planning where best to implant cortical visual prostheses.
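The density-based relocation step can be illustrated generically: cluster the scattered phosphene coordinates, then move a stimulus centroid to the center of the densest cluster. The sketch below uses a minimal DBSCAN-style grouping as a stand-in, not the authors' algorithm; `eps` and `min_pts` are placeholder values:

```python
import numpy as np

def density_clusters(points, eps, min_pts):
    """Minimal DBSCAN-style clustering of 2D phosphene coordinates.
    Returns one label per point; -1 marks unclustered noise."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neigh = np.where(dist[i] <= eps)[0]
        if len(neigh) < min_pts:
            continue  # not a core point (may still join a cluster later)
        labels[i] = cluster
        stack = list(neigh)
        while stack:  # grow the cluster through density-connected points
            j = stack.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            jn = np.where(dist[j] <= eps)[0]
            if len(jn) >= min_pts:
                stack.extend(jn)
        cluster += 1
    return labels

def densest_cluster_center(points, labels):
    """Relocate a stimulus centroid to the center of the largest cluster."""
    ids, counts = np.unique(labels[labels >= 0], return_counts=True)
    return points[labels == ids[np.argmax(counts)]].mean(axis=0)
```

Snapping stimuli toward dense phosphene regions increases the chance that a rendered target actually falls on perceivable phosphenes, which is the motivation the paper gives for the clustering step.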
Affiliation(s)
- Haozhe Zac Wang
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Yan Tat Wong
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Department of Physiology, Monash University, Melbourne, Australia
6. Selcuk Nogay H, Adeli H. Diagnostic of autism spectrum disorder based on structural brain MRI images using grid search optimization and convolutional neural networks. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104234.
7. Wang J, Zhang L, Zhang Y. Mixture 2D Convolutions for 3D Medical Image Segmentation. Int J Neural Syst 2023; 33:2250059. PMID: 36328969. DOI: 10.1142/s0129065722500599.
Abstract
Three-dimensional (3D) medical image segmentation plays a crucial role in medical care applications. Although various two-dimensional (2D) and 3D neural network models have been applied to 3D medical image segmentation and achieved impressive results, a trade-off remains between efficiency and accuracy. To address this issue, a novel mixture convolutional network (MixConvNet) is proposed, in which traditional 2D/3D convolutional blocks are replaced with novel MixConv blocks. In the MixConv block, 3D convolution is decomposed into a mixture of 2D convolutions from different views. The MixConv block therefore fully utilizes the advantages of 2D convolution while maintaining the learning ability of 3D convolution: it acts as a 3D convolution, so it can process volumetric input directly and learn inter-slice features, which are absent in the traditional 2D convolutional block. By contrast, because the proposed MixConv block contains only 2D convolutions, it has significantly fewer trainable parameters and a smaller computation budget than a block containing 3D convolutions. Furthermore, the proposed MixConvNet is pre-trained with small input patches and fine-tuned with large input patches to further improve segmentation performance. In experiments on the Decathlon Heart dataset and the Sliver07 dataset, the proposed MixConvNet outperformed state-of-the-art methods such as UNet3D, VNet, and nnUnet.
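The parameter saving can be made concrete: a dense k x k x k kernel holds k^3 weights, whereas three 2D kernels, one per orthogonal view, hold 3k^2. The sketch below applies one 2D kernel slice-wise along each axis of a volume and averages the results; it is a simplified reading of the MixConv idea, not the paper's exact block:

```python
import numpy as np

def conv2d_same(img, kern):
    """Naive 'same'-size 2D cross-correlation with zero padding."""
    k = kern.shape[0]
    r = k // 2
    pad = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + k, j:j + k] * kern)
    return out

def mixconv3d(vol, k_axial, k_sagittal, k_coronal):
    """Mixture-of-2D-convolutions over a 3D volume: apply one 2D kernel
    slice-wise along each of the three axes and average the results, so
    the output keeps the input's 3D shape while using only 2D weights
    (3 * k^2 parameters instead of k^3 for a dense 3D kernel)."""
    outs = []
    for axis, kern in zip(range(3), (k_axial, k_sagittal, k_coronal)):
        v = np.moveaxis(vol, axis, 0)
        o = np.stack([conv2d_same(s, kern) for s in v])
        outs.append(np.moveaxis(o, 0, axis))
    return sum(outs) / 3.0
```

Because each of the three passes scans a different orientation, the averaged output mixes in-plane and cross-plane context, which is how the block retains 3D-like receptive fields from purely 2D weights.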
Affiliation(s)
- Jianyong Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
- Yi Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, Sichuan, P. R. China
8. Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors (Basel) 2022; 22:6544. PMID: 36081002. PMCID: PMC9460383. DOI: 10.3390/s22176544.
Abstract
Visual prostheses, used to help restore functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns delivered by implanted microelectrodes to induce phosphenes and, eventually, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field. Along with the development of prosthetic device design and stimulus encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is used to optimize artificial visual information and improve the restoration of various important visual functions in implant recipients, allowing them to better meet their daily demands. This paper first reviews recent clinical implantations of different types of visual prostheses and summarizes the artificial visual perception of implant recipients, focusing especially on its irregularities, such as dropout and distorted phosphenes. It then reviews the important aspects of computer vision in the optimization of visual information processing and discusses the possibilities and shortcomings of these solutions. Finally, it summarizes development directions and key issues for improving the performance of visual prosthesis devices.
Affiliation(s)
- Jing Wang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Peitong Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Zhiqiang Fang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Qianqian Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yanling Han
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Ruyan Zhou
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yun Zhang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
9. Hoogsteen KM, Szpiro S, Kreiman G, Peli E. Beyond the Cane: Describing Urban Scenes to Blind People for Mobility Tasks. ACM Trans Access Comput 2022; 15. DOI: 10.1145/3522757.
Abstract
Blind people face difficulties with independent mobility, impacting employment prospects, social inclusion, and quality of life. Given the advancements in computer vision, with more efficient and effective automated information extraction from visual scenes, it is important to determine what information is worth conveying to blind travelers, especially since people have a limited capacity to receive and process sensory information. We aimed to investigate which objects in a street scene are useful to describe and how those objects should be described. Thirteen cane-using participants, five of whom were early blind, took part in two urban walking experiments. In the first experiment, participants were asked to voice their information needs in the form of questions to the experimenter. In the second experiment, participants were asked to score scene descriptions and navigation instructions, provided by the experimenter, in terms of their usefulness. The descriptions included a variety of objects with various annotations per object. Additionally, we asked participants to rank-order the objects and the different descriptions per object in terms of priority and explain why the provided information is or is not useful to them. The results reveal differences between early and late blind participants. Late blind participants requested information more frequently and prioritized information about objects' locations. Our results illustrate how different factors, such as the level of detail, relative position, and what type of information is provided when describing an object, affected the usefulness of scene descriptions. Participants explained how they (indirectly) used information, but they were frequently unable to explain their ratings. The results distinguish between various types of travel information, underscore the importance of featuring these types at multiple levels of abstraction, and highlight gaps in current understanding of travel information needs. Elucidating the information needs of blind travelers is critical for the development of more useful assistive technologies.
Affiliation(s)
- Karst M.P. Hoogsteen
- Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
- Sarit Szpiro
- Department of Special Education, University of Haifa, Haifa, Israel
- Gabriel Kreiman
- Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, Cambridge, Massachusetts, United States of America
- Eli Peli
- Schepens Eye Research Institute, Mass Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States of America
10. de Ruyter van Steveninck J, Güçlü U, van Wezel R, van Gerven M. End-to-end optimization of prosthetic vision. J Vis 2022; 22:20. PMID: 35703408. PMCID: PMC8899855. DOI: 10.1167/jov.22.2.20.
Abstract
Neural prosthetics may provide a promising solution for restoring visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared with normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces the difficult challenge of finding a general and automated preprocessing strategy that can be tailored to specific tasks or user requirements. In this paper, we present a novel deep learning approach that explicitly addresses this issue by optimizing the entire process of phosphene generation in an end-to-end fashion. The proposed model is based on a deep auto-encoder architecture and includes a highly adjustable simulation module of prosthetic vision. In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol. The results of these proof-of-principle experiments illustrate the potential of end-to-end optimization for prosthetic vision. The presented approach is highly modular and could be extended to automated dynamic optimization of prosthetic vision for everyday tasks, given any specific constraints, accommodating the individual requirements of the end user.
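The three-stage pipeline (encoder, phosphene simulator, decoder, trained against a reconstruction loss) can be sketched as plain function composition. The toy below uses linear maps and a trivial simulator purely to show the data flow; the paper's stages are deep networks, and an autodiff framework would make the whole chain trainable:

```python
import numpy as np

def encoder(img, w_enc):
    """Toy 'encoder': linear map from pixels to electrode amplitudes,
    squashed into [0, 1] stimulation intensities."""
    return 1.0 / (1.0 + np.exp(-(w_enc @ img.ravel())))

def simulator(stim, grid=10):
    """Toy stand-in for the adjustable phosphene simulation module:
    one 'phosphene' per electrode on a coarse grid."""
    return stim.reshape(grid, grid)

def decoder(percept, w_dec, shape):
    """Toy 'decoder': linear reconstruction of the input from the percept."""
    return (w_dec @ percept.ravel()).reshape(shape)

def end_to_end_loss(img, w_enc, w_dec):
    """Reconstruction objective over the whole chain; with automatic
    differentiation, gradients would flow through all three stages at once,
    so the encoder learns a stimulation protocol tuned to the task."""
    recon = decoder(simulator(encoder(img, w_enc)), w_dec, img.shape)
    return float(np.mean((img - recon) ** 2))
```

The key design point is that the simulator sits inside the loss: any change to its parameters (phosphene count, dropout, distortions) immediately reshapes what the encoder is optimized to produce.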
Affiliation(s)
- Jaap de Ruyter van Steveninck
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Umut Güçlü
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Richard van Wezel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, The Netherlands
- Marcel van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
11. Hua Y, Shu X, Wang Z, Zhang L. Uncertainty-Guided Voxel-Level Supervised Contrastive Learning for Semi-Supervised Medical Image Segmentation. Int J Neural Syst 2022; 32:2250016. DOI: 10.1142/s0129065722500162.
Abstract
Semi-supervised learning reduces overfitting and facilitates medical image segmentation by regularizing the learning of limited well-annotated data with the knowledge provided by a large amount of unlabeled data. However, conventional semi-supervised methods often misuse or underutilize their data: on the one hand, the model deviates from the empirical distribution when trained on numerous unlabeled data; on the other hand, the model treats labeled and unlabeled data differently and does not consider inter-data information. In this paper, a semi-supervised method is proposed that exploits unlabeled data to further narrow the gap between the semi-supervised model and its fully supervised counterpart. Specifically, the architecture of the proposed method is based on the mean-teacher framework, and the uncertainty estimation module is improved to impose consistency constraints and guide the selection of feature representation vectors. Notably, a voxel-level supervised contrastive learning module is devised to establish contrastive relationships between feature representation vectors, whether from labeled or unlabeled data. The supervised manner ensures that the network learns correct knowledge, and the dense contrastive relationships further extract information from unlabeled data. Together, these components overcome the data misuse and underutilization of semi-supervised frameworks. Moreover, they favor feature representations with intra-class compactness and inter-class separability, yielding extra performance. Extensive experimental results on the left atrium dataset from the Atrial Segmentation Challenge demonstrate that the proposed method outperforms state-of-the-art methods.
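The voxel-level supervised contrastive term can be illustrated with the standard supervised contrastive loss over feature vectors (a generic formulation applied to plain vectors here; the paper applies it voxel-wise under uncertainty guidance):

```python
import numpy as np

def sup_contrastive_loss(feats, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized feature vectors:
    same-label vectors are pulled together, different-label vectors
    pushed apart. tau is the softmax temperature."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    logits = f @ f.T / tau
    n = len(f)
    mask = ~np.eye(n, dtype=bool)                      # exclude self-pairs
    pos = (labels[:, None] == labels[None, :]) & mask  # same-label pairs
    logits = np.where(mask, logits, -np.inf)
    # numerically stable log-softmax over each anchor's non-self similarities
    m = logits.max(axis=1, keepdims=True)
    logp = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    # mean negative log-probability of the positives for each anchor
    per_anchor = [-logp[i, pos[i]].mean() for i in range(n) if pos[i].any()]
    return float(np.mean(per_anchor))
```

Because the pairing is driven by labels (here class labels; in the paper, voxel pseudo-labels), the loss applies uniformly to labeled and unlabeled samples, which is what lets the contrastive module treat both data sources alike.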
Affiliation(s)
- Yu Hua
- College of Computer Science, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan 610065, P. R. China
- Xin Shu
- College of Computer Science, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan 610065, P. R. China
- Zizhou Wang
- College of Computer Science, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan 610065, P. R. China
- Lei Zhang
- College of Computer Science, Sichuan University, Section 4, Southern 1st Ring Rd, Chengdu, Sichuan 610065, P. R. China
12. Machine learning techniques for diagnosis of Alzheimer disease, mild cognitive disorder, and other types of dementia. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103293.
13. de Ruyter van Steveninck J, van Gestel T, Koenders P, van der Ham G, Vereecken F, Güçlü U, van Gerven M, Güçlütürk Y, van Wezel R. Real-world indoor mobility with simulated prosthetic vision: The benefits and feasibility of contour-based scene simplification at different phosphene resolutions. J Vis 2022; 22:1. PMID: 35103758. PMCID: PMC8819280. DOI: 10.1167/jov.22.2.1.
Abstract
Neuroprosthetic implants are a promising technology for restoring some form of vision in people with visual impairments via electrical neurostimulation in the visual pathway. Although an artificially generated prosthetic percept is relatively limited compared with normal vision, it may provide some elementary perception of the surroundings, re-enabling daily living functionality. For mobility in particular, various studies have investigated the benefits of visual neuroprosthetics in a simulated prosthetic vision paradigm with varying outcomes. The previous literature suggests that scene simplification via image processing, and particularly contour extraction, may potentially improve the mobility performance in a virtual environment. In the current simulation study with sighted participants, we explore both the theoretically attainable benefits of strict scene simplification in an indoor environment by controlling the environmental complexity, as well as the practically achieved improvement with a deep learning-based surface boundary detection implementation compared with traditional edge detection. A simulated electrode resolution of 26 × 26 was found to provide sufficient information for mobility in a simple environment. Our results suggest that, for a lower number of implanted electrodes, the removal of background textures and within-surface gradients may be beneficial in theory. However, the deep learning-based implementation for surface boundary detection did not improve mobility performance in the current study. Furthermore, our findings indicate that, for a greater number of electrodes, the removal of within-surface gradients and background textures may deteriorate, rather than improve, mobility. Therefore, finding a balanced amount of scene simplification requires a careful tradeoff between informativity and interpretability that may depend on the number of implanted electrodes.
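The traditional edge-detection condition can be approximated with a Sobel gradient followed by thresholding and reduction to the simulated electrode grid. The sketch below is a generic stand-in (the study's deep-learning surface-boundary model is not reproduced, and the threshold is a placeholder) that produces a 26 x 26 on/off phosphene pattern:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via Sobel cross-correlation (edge padding)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

def contour_phosphene_pattern(img, grid=26, thresh=0.5):
    """Reduce an edge map to a grid x grid on/off phosphene pattern:
    a cell is 'on' if any of its pixels exceeds the edge threshold."""
    e = sobel_edges(img)
    e = e / (e.max() + 1e-9)
    h, w = e.shape
    out = np.zeros((grid, grid), dtype=bool)
    for gi in range(grid):
        for gj in range(grid):
            cell = e[gi * h // grid:(gi + 1) * h // grid,
                     gj * w // grid:(gj + 1) * w // grid]
            out[gi, gj] = cell.size > 0 and bool(cell.max() > thresh)
    return out
```

The study's tradeoff is visible even in this toy: thresholding discards within-surface gradients and textures, which helps at low electrode counts but removes information that higher-resolution maps could have conveyed.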
Affiliation(s)
- Jaap de Ruyter van Steveninck
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Tom van Gestel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Paula Koenders
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Guus van der Ham
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Floris Vereecken
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Umut Güçlü
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Marcel van Gerven
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Yagmur Güçlütürk
- Department of Artificial Intelligence, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Richard van Wezel
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Biomedical Signal and Systems, MIRA Institute for Biomedical Technology and Technical Medicine, University of Twente, Enschede, the Netherlands
Collapse
|
14
|
Arco JE, Ortiz A, Ramírez J, Zhang YD, Górriz JM. Tiled Sparse Coding in Eigenspaces for Image Classification. Int J Neural Syst 2021; 32:2250007. [PMID: 34967705 DOI: 10.1142/s0129065722500071] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The automation of medical image diagnosis is currently a challenging task. Computer-Aided Diagnosis (CAD) systems can be a powerful tool for clinicians, especially when hospitals are overwhelmed. These tools are usually based on artificial intelligence (AI), a field recently revolutionized by deep learning approaches. These alternatives usually achieve high performance through complex solutions, leading to a high computational cost and the need for large databases. In this work, we propose a classification framework based on sparse coding. Images are first partitioned into different tiles, and a dictionary is built after applying PCA to these tiles. The original signals are then transformed into a linear combination of the elements of the dictionary. They are then reconstructed by iteratively deactivating the elements associated with each component. Classification is finally performed using the subsequent reconstruction errors as features. Performance is evaluated in a real four-class context: control versus bacterial pneumonia versus viral pneumonia versus COVID-19. Our system differentiates between pneumonia patients and controls with an accuracy of 97.74%, whereas in the four-class context the accuracy is 86.73%. The excellent results and the pioneering use of sparse coding in this scenario show that our proposal can assist clinicians when their workload is high.
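The core of the method just summarized — a PCA dictionary built from tiles, with reconstruction errors from deactivated components used as features — can be sketched as follows. All function names are illustrative; the published system adds the tiling, classifier, and evaluation machinery omitted here.

```python
import numpy as np

def pca_dictionary(tiles, n_atoms):
    """Build a dictionary whose atoms are the top principal components
    of the (flattened) image tiles."""
    centered = tiles - tiles.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_atoms]                     # one orthonormal atom per row

def reconstruction_error_features(signal, dictionary):
    """Express the signal as a linear combination of the atoms, then
    deactivate each atom in turn; the resulting reconstruction errors
    form the feature vector used for classification."""
    coeffs = dictionary @ signal
    feats = np.empty(dictionary.shape[0])
    for k in range(dictionary.shape[0]):
        c = coeffs.copy()
        c[k] = 0.0                          # switch off one component
        recon = dictionary.T @ c
        feats[k] = np.linalg.norm(signal - recon)
    return feats
```

Because the atoms are orthonormal, deactivating the component with the largest coefficient produces the largest error, which is exactly why these errors are informative as features.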
Affiliation(s)
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Andalusian Research Institute in Data Science and Computational Intelligence, Spain
- Andrés Ortiz
- Department of Communications Engineering, University of Malaga, 29010, Spain; Andalusian Research Institute in Data Science and Computational Intelligence, Spain
- Javier Ramírez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Andalusian Research Institute in Data Science and Computational Intelligence, Spain
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Juan M Górriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Andalusian Research Institute in Data Science and Computational Intelligence, Spain

15
Karakullukcu N, Yilmaz B. Detection of Movement Intention in EEG-Based Brain-Computer Interfaces Using Fourier-Based Synchrosqueezing Transform. Int J Neural Syst 2021; 32:2150059. [PMID: 34806939 DOI: 10.1142/s0129065721500593] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Patients with motor impairments need caregivers' help to initiate the operation of brain-computer interfaces (BCIs). This study aims to identify and characterize movement intention using multichannel electroencephalography (EEG) signals as a means to initiate BCI systems without extra accessories or methodologies. We propose to discriminate the resting and motor imagery (MI) states with high accuracy using the Fourier-based synchrosqueezing transform (FSST) as a feature extractor. FSST has been investigated and compared with other popular approaches in 28 healthy subjects for a total of 6657 trials. The accuracy and F-measure were 99.8% and 0.99, respectively, when FSST was used as the feature extractor, singular value decomposition (SVD) as the feature selection method, and support vector machines as the classifier. Moreover, this study investigated the use of data containing a certain amount of noise, without any preprocessing, in addition to the clean counterparts. Furthermore, statistical analysis of the EEG channels that best discriminate the resting and MI states demonstrated that the F4-Fz-C3-Cz-C4-Pz channels and several statistical features had significance levels (p-values) less than 0.05. This study showed that the preparation of movement can be detected in real time employing the FSST-SVD combination and several channels with minimal pre-processing effort.
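The rest-versus-MI pipeline (time-frequency features, SVD reduction, classification) can be sketched as below. This is a hedged simplification: a plain STFT stands in for the FSST (the FSST additionally reassigns spectral energy along frequency to sharpen ridges and needs a dedicated implementation), and a nearest-centroid rule stands in for the SVM; all names are illustrative.

```python
import numpy as np

def stft_features(x, win=64, hop=32):
    """Time-frequency magnitude features from a plain STFT (simplified
    stand-in for the paper's Fourier-based synchrosqueezing transform)."""
    frames = np.array([x[i:i + win] * np.hanning(win)
                       for i in range(0, len(x) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)).ravel()

def svd_components(X, k):
    """Top-k right singular vectors of the centered feature matrix,
    used here as a simple SVD-based feature reduction."""
    _, _, vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return vt[:k]

def nearest_centroid(train_X, train_y, test_X):
    """Minimal classifier standing in for the SVM used in the study."""
    centroids = {c: train_X[train_y == c].mean(axis=0)
                 for c in np.unique(train_y)}
    return np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                     for x in test_X])
```

On synthetic trials where the MI class shows an attenuated 10 Hz rhythm (event-related desynchronization), even this simplified chain separates the two states cleanly.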
Affiliation(s)
- Nedime Karakullukcu
- Electrical and Computer Engineering Department, Graduate School of Engineering and Sciences, Abdullah Gul University, 38080 Kayseri, Turkey; Biomedical Instrumentation and Signal Analysis Laboratory (BISA-Lab), School of Engineering, Abdullah Gul University, 38080 Kayseri, Turkey
- Bülent Yilmaz
- Electrical and Computer Engineering Department, Graduate School of Engineering and Sciences, Abdullah Gul University, 38080 Kayseri, Turkey; Electrical-Electronics Engineering Department, School of Engineering, Abdullah Gul University, 38080 Kayseri, Turkey; Biomedical Instrumentation and Signal Analysis Laboratory (BISA-Lab), School of Engineering, Abdullah Gul University, 38080 Kayseri, Turkey

16
Fernández E, Alfaro A, Soto-Sánchez C, González-López P, Lozano Ortega AM, Peña S, Grima MD, Rodil A, Gómez B, Chen X, Roelfsema PR, Rolston JD, Davis TS, Normann RA. Visual percepts evoked with an Intracortical 96-channel microelectrode array inserted in human occipital cortex. J Clin Invest 2021; 131:151331. [PMID: 34665780 DOI: 10.1172/jci151331] [Citation(s) in RCA: 64] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Accepted: 09/28/2021] [Indexed: 01/11/2023] Open
Abstract
BACKGROUND A long-held dream of scientists is to transfer information directly to the visual cortex of blind individuals, thereby restoring a rudimentary form of sight. However, no clinically available cortical visual prosthesis yet exists. METHODS We implanted an intracortical microelectrode array consisting of 96 electrodes in the visual cortex of a 57-year-old person with complete blindness for a six-month period. We measured thresholds and the characteristics of the visual percepts elicited by intracortical microstimulation. RESULTS Implantation and subsequent explantation of the intracortical microelectrodes were carried out without complications. The mean stimulation threshold for single electrodes was 66.8 ± 36.5 μA. We consistently obtained high-quality recordings from visually deprived neurons, and the stimulation parameters remained stable over time. Simultaneous stimulation via multiple electrodes was associated with a significant reduction in thresholds (p < 0.001, ANOVA) and evoked discriminable phosphene percepts, allowing the blind participant to identify some letters and recognize object boundaries. Furthermore, we observed a learning process that helped the subject to recognize complex patterns over time. CONCLUSIONS Our results demonstrate the safety and efficacy of chronic intracortical microstimulation via a large number of electrodes in human visual cortex, showing its high potential for restoring functional vision in the blind. TRIAL REGISTRATION ClinicalTrials.gov identifier NCT02983370. FUNDING Funding was provided by grant RTI2018-098969-B-100 from the Spanish Ministerio de Ciencia, Innovación y Universidades, by grant PROMETEO/2019/119 from the Generalitat Valenciana (Spain), by the Bidons Egara Research Chair of the University Miguel Hernández (Spain), and by the John Moran Eye Center of the University of Utah (US).
Affiliation(s)
- Arantxa Alfaro
- Servicio de Neurología, Hospital Vega Baja, Elche, Spain
- Pablo González-López
- Servicio de Neurología, Hospital General Universitario de Alicante, Alicante, Spain
- Sebastian Peña
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Alfonso Rodil
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Bernardeta Gómez
- Bioengineering Institute, University Miguel Hernandez, Elche, Spain
- Xing Chen
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- Pieter R Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neuroscience, Amsterdam, Netherlands
- John D Rolston
- Department of Neurosurgery and Biomedical Engineering, University of Utah, Salt Lake City, United States of America
- Tyler S Davis
- Department of Neurosurgery and Biomedical Engineering, University of Utah, Salt Lake City, United States of America
- Richard A Normann
- John Moran Eye Center and Biomedical Engineering, University of Utah, Salt Lake City, United States of America

17
Amodeo M, Arpaia P, Buzio M, Di Capua V, Donnarumma F. Hysteresis Modeling in Iron-Dominated Magnets Based on a Multi-Layered Narx Neural Network Approach. Int J Neural Syst 2021; 31:2150033. [PMID: 34296651 DOI: 10.1142/s0129065721500337] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
A full-fledged neural network modeling approach, based on a Multi-layered Nonlinear Autoregressive Exogenous (NARX) architecture, is proposed for quasi-static and dynamic hysteresis loops, one of the most challenging topics in computational magnetism. This approach overcomes the drawbacks of classical and recent approaches for accelerator magnets, which hybridize standard hysteretic models with neural network architectures yet struggle to attain better-than-percent-level accuracy. By means of an incremental procedure, different deep neural network architectures are selected, fine-tuned, and tested in order to predict magnetic hysteresis in the context of electromagnets. Tests and results show that the proposed NARX architecture best fits the measured magnetic field behavior of a reference quadrupole at CERN. In particular, the proposed modeling framework leads to a percent error below 0.02% for the magnetic field prediction, thus outperforming state-of-the-art approaches and paving a promising way for future real-time applications.
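The NARX regressor structure underlying the model — predicting the field from lagged excitation inputs and lagged field outputs — can be illustrated with a deliberately simplified *linear* ARX fit; the paper's network stacks nonlinear hidden layers on the same lagged regressors. Function names and lag orders here are illustrative.

```python
import numpy as np

def lagged_design(u, y, nu=3, ny=3):
    """Regressor matrix of past inputs u and past outputs y: the ARX
    structure on which a multi-layer NARX network is built."""
    start = max(nu, ny)
    rows = [np.concatenate([u[t - nu:t], y[t - ny:t]])
            for t in range(start, len(u))]
    return np.array(rows), y[start:]

def fit_arx(u, y, nu=3, ny=3):
    """Least-squares fit of a linear ARX model (a simplified stand-in
    for training the NARX network on excitation/field data)."""
    X, target = lagged_design(u, y, nu, ny)
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w
```

For real hysteresis prediction, the linear map would be replaced by a neural network over the same regressors, and validation would use free-run simulation (feeding predictions back as past outputs) rather than one-step-ahead error.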
Affiliation(s)
- Maria Amodeo
- Department of Electronics and Telecommunications (DET), Polytechnic University of Turin, Turin 10129, Italy; Instrumentation and Measurement for Particle Accelerators Laboratory (IMPALab), Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples 80100, Italy; Technology Department, CERN - European Organization for Nuclear Research, 1211 Meyrin, Switzerland
- Pasquale Arpaia
- Instrumentation and Measurement for Particle Accelerators Laboratory (IMPALab), Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples 80100, Italy; Technology Department, CERN - European Organization for Nuclear Research, 1211 Meyrin, Switzerland
- Marco Buzio
- Technology Department, CERN - European Organization for Nuclear Research, 1211 Meyrin, Switzerland
- Vincenzo Di Capua
- Instrumentation and Measurement for Particle Accelerators Laboratory (IMPALab), Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, Naples 80100, Italy; Technology Department, CERN - European Organization for Nuclear Research, 1211 Meyrin, Switzerland
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR), Via San Martino della Battaglia 44, Rome 00185, Italy

18
Jin J, Fang H, Daly I, Xiao R, Miao Y, Wang X, Cichocki A. Optimization of Model Training Based on Iterative Minimum Covariance Determinant In Motor-Imagery BCI. Int J Neural Syst 2021; 31:2150030. [PMID: 34176450 DOI: 10.1142/s0129065721500301] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The common spatial patterns (CSP) algorithm is one of the most frequently used and effective spatial filtering methods for extracting relevant features for use in motor imagery brain-computer interfaces (MI-BCIs). However, an inherent defect of the traditional CSP algorithm is that it is highly sensitive to potential outliers, which adversely affects its performance in practical applications. In this work, we propose a novel feature optimization and outlier detection method for the CSP algorithm. Specifically, we use the minimum covariance determinant (MCD) to detect and remove outliers in the dataset, then use the Fisher score to evaluate and select features. In addition, in order to prevent the emergence of new outliers, we propose an iterative minimum covariance determinant (IMCD) algorithm. We evaluate our proposed algorithm in terms of iteration times, classification accuracy, and feature distribution using two BCI competition datasets. The experimental results show that the average classification performance of our proposed method is 12% and 22.9% higher than that of the traditional CSP method on the two datasets ([Formula: see text]), and our proposed method obtains better performance in comparison with other competing methods. The results show that our method improves the performance of MI-BCI systems.
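The two ingredients named above — covariance-based outlier removal and Fisher-score feature ranking — can be sketched as follows. Note the hedge: the exact MCD estimator searches for the trial subset whose covariance has minimum determinant; the single Mahalanobis-distance trimming pass below is only a simplified stand-in (the paper iterates the MCD step to catch outliers that emerge after removal).

```python
import numpy as np

def mahalanobis_trim(X, keep=0.9):
    """Drop the trials with the largest Mahalanobis distance to the mean
    (a one-pass simplification of the MCD-based outlier removal)."""
    mu = X.mean(axis=0)
    cov = np.cov(X.T) + 1e-9 * np.eye(X.shape[1])  # regularize for inversion
    inv = np.linalg.inv(cov)
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared distances
    return np.sort(np.argsort(d2)[:int(keep * len(X))])

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter over within-class
    scatter; higher scores mark more discriminative features."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)
```

In the full method these steps operate on CSP log-variance features rather than raw trials.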
Affiliation(s)
- Jing Jin
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
- Hua Fang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
- Ian Daly
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex CO4 3SQ, UK
- Ruocheng Xiao
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
- Yangyang Miao
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
- Xingyu Wang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
- Andrzej Cichocki
- Skolkovo Institute of Science and Technology (SKOLTECH), 143026 Moscow, Russia; Systems Research Institute of Polish Academy of Science, 01-447 Warsaw, Poland; Department of Informatics, Nicolaus Copernicus University, 87-100 Torun, Poland; College of Computer Science, Hangzhou Dianzi University, 310018 Hangzhou, P. R. China

19
Tao Q, Si Y, Li F, Li P, Li Y, Zhang S, Wan F, Yao D, Xu P. Decision-Feedback Stages Revealed by Hidden Markov Modeling of EEG. Int J Neural Syst 2021; 31:2150031. [PMID: 34167448 DOI: 10.1142/s0129065721500313] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Decision response and feedback in gambling are interrelated. Different decisions lead to different ranges of feedback, which in turn influence subsequent decisions. However, the mechanism underlying this continuous decision-feedback process remains unclear. To fill this gap, we applied a hidden Markov model (HMM) to gambling electroencephalogram (EEG) data to characterize the dynamics of this process. Furthermore, we explored the differences between distinct decision responses (i.e., choosing large or small bets) and distinct feedback (i.e., win or loss outcomes) in the corresponding phases. We demonstrated that the processing stages of the decision-feedback process, including strategy adjustment and visual information processing, can be characterized by distinct brain networks. Moreover, time-varying networks showed that, after the decision response, large bets recruited more resources from the right frontal and right central cortices, while small bets were more related to activation of the left frontal lobe. Concerning feedback, networks for win feedback showed a strong right frontal and right central pattern, while an information flow originating from the left frontal lobe to the middle frontal lobe was observed for loss feedback. Taken together, these findings shed light on general principles of natural decision-feedback and may contribute to the design of biologically inspired, participant-independent decision-feedback systems.
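The HMM machinery used to segment the EEG into processing stages rests on the forward recursion, which yields the probability of each hidden state at every time step. A minimal scaled forward pass for a Gaussian-emission HMM is sketched below; the study's model operates on multichannel EEG features rather than the 1-D observations assumed here, and all parameter values in the usage are illustrative.

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    """Univariate normal density, vectorized over states."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def forward_filter(obs, pi, A, means, variances):
    """Scaled HMM forward pass: filtered probability of each hidden state
    (e.g. a processing stage) at every time step."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    alpha[0] = pi * gaussian_pdf(obs[0], means, variances)
    alpha[0] /= alpha[0].sum()              # normalize to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * gaussian_pdf(obs[t], means, variances)
        alpha[t] /= alpha[t].sum()
    return alpha
```

Fitting the transition matrix and emission parameters would use Baum-Welch (EM); here only the inference step that assigns time points to stages is shown.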
Affiliation(s)
- Qin Tao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China
- Yajing Si
- School of Psychology, Xinxiang Medical University, Henan, 453000, P. R. China
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China
- Peiyang Li
- School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, 400065, P. R. China
- Yuqin Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China
- Shu Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China
- Feng Wan
- Faculty of Science and Technology, University of Macau, 999078, Macau
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, P. R. China

20
Pio-Lopez L, Poulkouras R, Depannemaecker D. Visual cortical prosthesis: an electrical perspective. J Med Eng Technol 2021; 45:394-407. [PMID: 33843427 DOI: 10.1080/03091902.2021.1907468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Electrical stimulation of the visual cortices has the potential to restore vision to blind individuals. Until now, the results of visual cortical prosthetics have been limited, as no prosthesis has yet restored full functional vision, but the field has seen renewed interest in recent years thanks to wireless and other technological advances. However, several scientific and technical challenges remain open to achieve the therapeutic benefit expected from these new devices. One of the main challenges is the electrical stimulation of the brain itself. In this review, we analyse the results of electrode-based visual cortical prosthetics from the electrical point of view. We first describe what is known about the electrode-tissue interface and the safety of electrical stimulation. We then focus on the psychophysics of prosthetic vision and the state of the art on the interplay between electrical stimulation of the visual cortex and phosphene perception. Lastly, we discuss the challenges and perspectives of visual cortex electrical stimulation and electrode array design for developing the next generation of implantable cortical visual prostheses.
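A worked example of the safety bookkeeping such reviews discuss: charge per phase and charge density, combined in the Shannon (1992) criterion k = log10(D) + log10(Q), with k below roughly 1.85 conventionally taken as the tissue-safety boundary. The 66.8 μA current reuses the mean threshold from the Fernández et al. trial above, but the 170 μs pulse width and 2000 μm² electrode area in the usage are illustrative assumptions, and the Shannon limit was derived largely from macroelectrode data, so this is a rough guide only.

```python
import math

def charge_per_phase_uC(current_uA, pulse_width_us):
    """Charge per phase Q = I * t; uA times us gives pC, scaled to uC."""
    return current_uA * pulse_width_us * 1e-6

def charge_density_uC_cm2(q_uC, area_cm2):
    """Charge density per phase D = Q / geometric electrode area."""
    return q_uC / area_cm2

def shannon_k(q_uC, d_uC_cm2):
    """Shannon criterion k = log10(D) + log10(Q); values below about
    1.85 are conventionally regarded as within the safety boundary."""
    return math.log10(d_uC_cm2) + math.log10(q_uC)
```

With the hypothetical values above (I = 66.8 μA, t = 170 μs, A = 2000 μm² = 2 × 10⁻⁵ cm²), Q is about 0.011 μC/phase and k is about 0.81, well under the 1.85 boundary.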
Affiliation(s)
- Romanos Poulkouras
- Department of Bioelectronics, Ecole Nationale Supérieure des Mines, CMP-EMSE, Gardanne, France; Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille, France
- Damien Depannemaecker
- Department of Integrative and Computational Neuroscience, Paris-Saclay Institute of Neuroscience, Centre National de la Recherche Scientifique, Gif-sur-Yvette, France