1
Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors (Basel) 2022; 22:6544. [PMID: 36081002] [PMCID: PMC9460383] [DOI: 10.3390/s22176544]
Abstract
Visual prostheses, used to assist in restoring functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns that are delivered by implanted microelectrodes to induce phosphenes and, ultimately, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Along with the development of prosthetic device design and stimulus encoding methods, researchers have explored the application of computer vision by simulating visual perception under prosthetic vision. Effective image processing is used to optimize artificial visual information and improve the ability to restore important visual functions in implant recipients, allowing them to better meet their daily needs. This paper first reviews recent clinical implantations of different types of visual prostheses, summarizes the artificial visual perception of implant recipients, and focuses in particular on its irregularities, such as dropout and distorted phosphenes. The important aspects of computer vision in the optimization of visual information processing are then reviewed, and the possibilities and shortcomings of these solutions are discussed. Finally, development directions and key issues for improving the performance of visual prosthesis devices are summarized.
Affiliation(s)
- Jing Wang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Peitong Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Zhiqiang Fang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Qianqian Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yanling Han
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Ruyan Zhou
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yun Zhang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
2
Abstract
Retinal prostheses aim to restore vision for blind individuals who suffer from outer retinal degenerative diseases, such as retinitis pigmentosa and age-related macular degeneration. Perception through retinal prostheses is very limited, but it can be improved by applying object isolation. We used an object isolation algorithm based on integral imaging to isolate objects of interest according to their depth from the camera and applied image-processing manipulations to the isolated-object images. Subsequently, we applied a spatial prosthetic vision simulation that converted the isolated-object images to phosphene images. We compared the phosphene images for two types of input, the original image (before object isolation) and the isolated-object image, to illustrate the effects of object isolation on simulated prosthetic vision, both with and without multiple spatial variations of phosphenes, such as size and shape variations, spatial shifts, and dropout rate. The results show an improvement in the perceived shape, contrast, and dynamic range (number of gray levels) of objects in the phosphene image.
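The conversion step this abstract describes, downsampling an input image onto a phosphene grid and degrading it with dropout, can be sketched as follows. This is an illustrative sketch only: the function name, grid size, and dropout rate are our assumptions, and it omits the phosphene size/shape variations and spatial shifts the study also modeled.

```python
import numpy as np

def simulate_phosphenes(image, grid=(8, 8), dropout=0.1, seed=0):
    """Average-pool a grayscale image onto a phosphene grid, then drop
    a random fraction of phosphenes to mimic electrode dropout.
    Hypothetical sketch, not the study's actual simulator."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    gh, gw = grid
    # Crop so the image tiles evenly, then average each tile
    # (one tile per phosphene).
    cropped = image[: h - h % gh, : w - w % gw]
    pooled = cropped.reshape(gh, cropped.shape[0] // gh,
                             gw, cropped.shape[1] // gw).mean(axis=(1, 3))
    # Dropout: a dead electrode renders its phosphene dark.
    alive = rng.random(grid) >= dropout
    return pooled * alive
```

The same pooling applied to an isolated-object image instead of the raw frame is what the paper compares against.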
3
Sanchez-Garcia M, Martinez-Cantin R, Bermudez-Cameo J, Guerrero JJ. Influence of field of view in visual prostheses design: Analysis with a VR system. J Neural Eng 2020; 17:056002. [PMID: 32947270] [DOI: 10.1088/1741-2552/abb9be]
Abstract
OBJECTIVE Visual prostheses are designed to restore partial functional vision in patients with total vision loss. Retinal visual prostheses provide limited capabilities as a result of low resolution, a limited field of view, and poor dynamic range. Understanding the influence of these parameters on perception can guide prosthesis research and design. APPROACH In this work, we evaluate the influence of field of view relative to spatial resolution in visual prostheses, measuring accuracy and response time in a search-and-recognition task. Twenty-four normally sighted participants were asked to find and recognize everyday objects, such as furniture and home appliances, in indoor room scenes. For the experiment, we used a new simulated prosthetic vision system that allows simple and effective experimentation. Our system uses a virtual-reality environment based on panoramic scenes. The simulator employs a head-mounted display that lets users feel immersed in the scene by perceiving it all around them. Our experiments use public image datasets and a commercial head-mounted display, and we have released the virtual-reality software for replicating and extending the experiments. MAIN RESULTS Results show that accuracy and response time decrease as the field of view increases. Furthermore, performance appears to be correlated with angular resolution, but shows diminishing returns even at resolutions below 2.3 phosphenes per degree. SIGNIFICANCE Our results indicate that, for the design of retinal prostheses, it is better to concentrate the phosphenes in a small area to maximize angular resolution, even if that implies sacrificing field of view.
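The "phosphenes per degree" figure in this abstract can be made concrete with a back-of-envelope helper. The definition below (a square array spread uniformly over a square field of view) is our assumption for illustration, not the formula used in the paper.

```python
import math

def phosphenes_per_degree(n_phosphenes, fov_deg):
    """Angular resolution of a square phosphene array spread evenly
    over a square field of view (illustrative definition)."""
    return math.sqrt(n_phosphenes) / fov_deg

# A 32 x 32 array over a 20 degree field gives 1.6 phosphenes/degree;
# spreading the same array over 40 degrees halves that to 0.8.
```

Under this definition, shrinking the field of view with a fixed electrode count raises angular resolution, which is the trade-off the paper's conclusion favors.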
Affiliation(s)
- Melani Sanchez-Garcia
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Spain
4
Spencer MJ, Kameneva T, Grayden DB, Meffin H, Burkitt AN. Global activity shaping strategies for a retinal implant. J Neural Eng 2019; 16:026008. [DOI: 10.1088/1741-2552/aaf071]
5
Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses. Sensors (Basel) 2017; 17:2439. [PMID: 29073735] [PMCID: PMC5677288] [DOI: 10.3390/s17102439]
Abstract
Most retinal prostheses use a head-fixed camera and a video processing unit. Several studies have proposed image processing methods to improve visual perception for patients, but previous work focused only on spatial information. The present study proposes a spatiotemporal pixelization method, mimicking fixational eye movements, that generates stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled at a resolution four times higher than the number of pixels in the array. We subsampled this image to generate four different phosphene images. We then evaluated character recognition scores by presenting the phosphene images sequentially, varying the pixel array size (6 × 6, 8 × 8, and 10 × 10) and the stimulus frame rate (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz and significantly improved the recognition of complex characters. This method provides a new way to increase effective resolution beyond the restricted spatial resolution by distributing a higher-resolution image across successive stimulus frames.
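The subsampling scheme this abstract describes, where an oversampled input yields four interleaved phosphene frames presented in sequence, can be sketched as follows. The function name and the 2x-per-axis reading of "four times higher" resolution are our assumptions, inferred from the abstract.

```python
import numpy as np

def spatiotemporal_frames(image, array_size):
    """Split an image sampled at twice the array resolution per axis
    into four offset sub-frames, mimicking fixational eye movements.
    Illustrative sketch, not the authors' implementation."""
    n = array_size
    assert image.shape == (2 * n, 2 * n), "expected a 2x-oversampled input"
    # Four interleaved sub-grids: each keeps every second sample,
    # shifted by (dy, dx) in {0, 1} x {0, 1}.
    return [image[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
```

Cycling through the four frames at, say, 20 Hz then delivers the higher-resolution content through a lower-resolution array over time.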
6
Ge C, Kasabov N, Liu Z, Yang J. A spiking neural network model for obstacle avoidance in simulated prosthetic vision. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.03.006]
7
Guo BB, Zheng XL, Lu ZG, Wang X, Yin ZQ, Hou WS, Meng M. Decoding brain responses to pixelized images in the primary visual cortex: implications for visual cortical prostheses. Neural Regen Res 2015; 10:1622-7. [PMID: 26692860] [PMCID: PMC4660756] [DOI: 10.4103/1673-5374.167761]
Abstract
Visual cortical prostheses have the potential to restore partial vision. Limited by the low-resolution visual percepts these devices provide, implant wearers can currently only "see" pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment with normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. Each activation pattern comprised 100 voxels selected from the primary visual cortex, with a voxel size of 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test whether these 18 brain activation patterns were specific, with a linear support vector machine (LSVM) as the classifier. The classification accuracies for the different activation patterns were significantly above chance level, indicating that the classifier can successfully distinguish them. Our results suggest that specific brain activation patterns for different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern.
Affiliation(s)
- Bing-Bing Guo
- Department of Biomedical Engineering, Chongqing University, Chongqing, China; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Xiao-Lin Zheng
- Department of Biomedical Engineering, Chongqing University, Chongqing, China
- Zhen-Gang Lu
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Xing Wang
- Department of Biomedical Engineering, Chongqing University, Chongqing, China
- Zheng-Qin Yin
- Key Lab of Visual Damage and Regeneration & Restoration, Third Military Medical University, Chongqing, China
- Wen-Sheng Hou
- Department of Biomedical Engineering, Chongqing University, Chongqing, China
- Ming Meng
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
8
Yang K, Liu S, Wang H, Liu W, Wu Y. Effect of Pixel’s Spatial Characteristics on Recognition of Isolated Pixelized Chinese Character. Open Biomed Eng J 2015; 9:234-9. [PMID: 26628934] [PMCID: PMC4645899] [DOI: 10.2174/1874120701509010234]
Abstract
The influence of a pixel’s spatial characteristics on the recognition of isolated Chinese characters was investigated using simulated prosthetic vision. The accuracy of Chinese character recognition was tested through a head-mounted display (HMD) with four pixel numbers (6 × 6, 8 × 8, 10 × 10, and 12 × 12 pixel arrays), three pixel shapes (square, dot, and Gaussian), and different pixel spacings. Captured images of Chinese characters in Hei font were pixelized with square, dot, and Gaussian pixels. Results showed that pixel number was the most important factor affecting recognition of isolated pixelized Chinese characters, and recognition accuracy increased with pixel number; a 10 × 10 pixel array provided enough information to recognize an isolated Chinese character. At low resolution (6 × 6 and 8 × 8 arrays), there was little difference in recognition accuracy between pixel shapes and pixel spacings, while at high resolution (10 × 10 and 12 × 12 arrays), variations in pixel shape and spacing did not affect recognition performance.
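The three pixel shapes compared in this study can be illustrated by the brightness kernel used to render a single pixelized dot. The kernel size and Gaussian sigma below are our assumptions for illustration, not the study's parameters.

```python
import numpy as np

def phosphene_kernel(shape="square", size=9, sigma=2.0):
    """Return a size x size brightness kernel for one rendered pixel:
    'square' fills the tile, 'dot' is a filled disc, and 'gaussian'
    is a peak-normalized Gaussian blob. Parameters are illustrative."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    r2 = (x - c) ** 2 + (y - c) ** 2  # squared distance from tile center
    if shape == "square":
        return np.ones((size, size))
    if shape == "dot":
        return (r2 <= c ** 2).astype(float)
    if shape == "gaussian":
        g = np.exp(-r2 / (2.0 * sigma ** 2))
        return g / g.max()
    raise ValueError(shape)
```

Tiling one of these kernels per sampled gray level produces the pixelized character images used in such simulations.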
9
Xia P, Hu J, Peng Y. Adaptation to Phosphene Parameters Based on Multi-Object Recognition Using Simulated Prosthetic Vision. Artif Organs 2015; 39:1038-45. [DOI: 10.1111/aor.12504]
Affiliation(s)
- Peng Xia
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jie Hu
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yinghong Peng
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
10
Denis G, Jouffrais C, Mailhes C, Mace MJM. Simulated prosthetic vision: improving text accessibility with retinal prostheses. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:1719-22. [PMID: 25570307] [DOI: 10.1109/embc.2014.6943939]
Abstract
Image processing can significantly improve the everyday life of blind people wearing current and upcoming retinal prostheses that rely on an external camera. We propose using a real-time text localization algorithm to improve text accessibility. An augmented text-specific rendering based on automatic text localization was developed and evaluated against the classical rendering in a Simulated Prosthetic Vision (SPV) experiment with 16 subjects. Subjects were able to detect text in natural scenes much faster and from further away with the augmented rendering than with the control rendering. Our results show that current and next-generation low-resolution retinal prostheses may benefit from real-time text detection algorithms.
11
Moving object recognition under simulated prosthetic vision using background-subtraction-based image processing strategies. Inf Sci (N Y) 2014. [DOI: 10.1016/j.ins.2014.02.136]
12
Hu J, Xia P, Gu C, Qi J, Li S, Peng Y. Recognition of Similar Objects Using Simulated Prosthetic Vision. Artif Organs 2013; 38:159-67. [PMID: 24033534] [DOI: 10.1111/aor.12147]
Affiliation(s)
- Jie Hu
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| | - Peng Xia
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| | - Chaochen Gu
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| | - Jin Qi
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| | - Sheng Li
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| | - Yinghong Peng
- School of Mechanical Engineering; Shanghai Jiao Tong University; Shanghai China
| |
13
Li S, Hu J, Chai X, Peng Y. Image Recognition With a Limited Number of Pixels for Visual Prostheses Design. Artif Organs 2011; 36:266-74. [DOI: 10.1111/j.1525-1594.2011.01347.x]
14
Guo H, Wang Y, Yang Y, Tong S, Zhu Y, Qiu Y. Object recognition under distorted prosthetic vision. Artif Organs 2011; 34:846-56. [PMID: 20545671] [DOI: 10.1111/j.1525-1594.2009.00976.x]
Abstract
Psychophysical studies have reported the efficacy of phosphene-based prosthetic vision in partly recovering the visual function of blind individuals. However, results thus far have been based on evenly aligned phosphene arrays, which neglect the complicated visuotopy of the visual prosthesis system. In this study, we investigated how objects were recognized under stimuli with distorted phosphene arrays, simulated by transformations of barrel distortion, rotation, or translation. The results revealed that distortions significantly decreased the accuracy of categorization (CA) and showed distinct interactive effects with object category and phosphene array density. Moreover, the CA changed differently as distortion levels increased. Regression analysis suggested that a phosphene array of at least 10 × 10 is essential for achieving a CA above the threshold value (CA(t) = 62.5%) under distorted prosthetic vision. It is recommended that discriminative features be extracted to improve the performance of prosthetic vision.
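One of the distortions this study simulated can be illustrated with a standard radial distortion of phosphene coordinates. The coefficient and sign convention below are our assumptions for illustration, not the study's parameters: a negative k pulls peripheral phosphenes inward, giving a barrel-like pattern.

```python
import numpy as np

def radial_distort(points, k=-0.2):
    """Radially distort phosphene centers given as (x, y) pairs
    normalized to [-1, 1]; k < 0 compresses the periphery.
    Hypothetical coefficient, for illustration only."""
    pts = np.asarray(points, dtype=float)
    r2 = (pts ** 2).sum(axis=1, keepdims=True)  # squared distance from center
    return pts * (1.0 + k * r2)  # scale each point by its radius-dependent factor
```

Rotation and translation, the study's other two transformations, are just a rotation matrix product and a constant offset applied to the same coordinate array.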
Affiliation(s)
- Hong Guo
- Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
15
Dai C, Lu M, Zhao Y, Lu Y, Zhou C, Chen Y, Ren Q, Chai X. Correction for Chinese character patterns formed by simulated irregular phosphene map. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2010:5887-90. [PMID: 21096931] [DOI: 10.1109/iembs.2010.5627528]
Abstract
To reduce the unfavorable influence of phosphene array irregularity on the form of Chinese character patterns, and thereby improve recognition accuracy in visual prostheses, two correction methods were put forward. One method generates the phosphene closest to each target point of the regular array, using a weighted nearest-neighbor search. The other generates phosphenes whose centers are located in the region covered by dilated characters. Based on a simulation system, Chinese character recognition tests were given to fifteen normally sighted subjects under five degrees of array irregularity (0.2, 0.4, 0.6, 0.8, 1.0) without correction. Recognition accuracy decreased as irregularity increased. When recognition accuracy dropped below 80%, the two correction methods were applied and their effects evaluated. As array irregularity increased, the benefit of both methods to recognition accuracy grew considerably. Comparison between the two methods revealed that the former afforded higher recognition accuracy, while the latter applied only to phosphene maps with serious irregularity.
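The first correction method above, lighting the available phosphene nearest to each target point, can be sketched as a plain nearest-neighbor search. The names and the unweighted simplification are ours; the paper uses a weighted variant.

```python
import numpy as np

def nearest_phosphene_correction(targets, centers):
    """For each 'on' target point of a character rendered on a regular
    grid, activate the nearest irregular phosphene center. Unweighted
    simplification of the paper's weighted nearest-neighbor search."""
    targets = np.asarray(targets, dtype=float)   # (m, 2) points to display
    centers = np.asarray(centers, dtype=float)   # (k, 2) jittered centers
    lit = np.zeros(len(centers), dtype=bool)
    for t in targets:
        d = np.linalg.norm(centers - t, axis=1)  # distance to every center
        lit[np.argmin(d)] = True                 # nearest center stands in
    return lit
```

The returned boolean mask selects which electrodes to drive so the displayed pattern best approximates the intended character despite the jittered map.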
Affiliation(s)
- Cong Dai
- Department of Biomedical Engineering, School of Life Sciences & Biotechnology, Shanghai Jiao Tong University, 200240, China
16
Zhao Y, Lu Y, Tian Y, Li L, Ren Q, Chai X. Image processing based recognition of images with a limited number of pixels using simulated prosthetic vision. Inf Sci (N Y) 2010. [DOI: 10.1016/j.ins.2010.04.021]
17
Guo H, Qin R, Qiu Y, Zhu Y, Tong S. Configuration-Based Processing of Phosphene Pattern Recognition for Simulated Prosthetic Vision. Artif Organs 2010; 34:324-30. [DOI: 10.1111/j.1525-1594.2009.00863.x]
18
Yang K, Zhou C, Ren Q, Fan J, Zhang L, Chai X. Complexity Analysis Based on Image-Processing Method and Pixelized Recognition of Chinese Characters Using Simulated Prosthetic Vision. Artif Organs 2010; 34:28-36. [DOI: 10.1111/j.1525-1594.2009.00778.x]
19
Chen SC, Suaning GJ, Morley JW, Lovell NH. Simulating prosthetic vision: II. Measuring functional capacity. Vision Res 2009; 49:2329-43. [DOI: 10.1016/j.visres.2009.07.003]
20
Tsai D, Morley JW, Suaning GJ, Lovell NH. A wearable real-time image processor for a vision prosthesis. Comput Methods Programs Biomed 2009; 95:258-69. [PMID: 19394713] [DOI: 10.1016/j.cmpb.2009.03.009]
Abstract
Rapid progress in recent years has made implantable retinal prostheses a promising therapeutic option in the near future for patients with macular degeneration or retinitis pigmentosa. Yet little work on devices that encode visual images into electrical stimuli has been reported to date. This paper presents a wearable image processor for use as the external module of a vision prosthesis. It is based on a dual-core microprocessor architecture and runs the Linux operating system. A set of image-processing algorithms executes on the digital signal processor of the device, which can be controlled remotely from a standard desktop computer. The results indicate that a highly flexible and configurable image processor can be built with the dual-core architecture, and that, depending on the image-processing requirements, general-purpose embedded microprocessors alone may be inadequate for implementing the image-processing strategies required by retinal prostheses.
Affiliation(s)
- D Tsai
- Graduate School of Biomedical Engineering, University of New South Wales, Sydney, NSW 2052, Australia
21
22
Sui X, Li L, Chai X, Wu K, Zhou C, Sun X, Xu X, Li X, Ren Q. Visual Prosthesis for Optic Nerve Stimulation. In: Biological and Medical Physics, Biomedical Engineering. 2009. [DOI: 10.1007/978-0-387-77261-5_2]
23
Chai X, Li L, Wu K, Zhou C, Cao P, Ren Q. C-sight visual prostheses for the blind. IEEE Eng Med Biol Mag 2008; 27:20-8. [PMID: 18799386] [DOI: 10.1109/memb.2008.923959]
Affiliation(s)
- Xinyu Chai
- Shanghai Jiao Tong University, Shanghai 200240, China
24
Malchesky PS. Artificial Organs 2007: A Year in Review. Artif Organs 2008. [DOI: 10.1111/j.1525-1594.2007.00536.x]