1
Wang HZ, Wong YT. A novel simulation paradigm utilising MRI-derived phosphene maps for cortical prosthetic vision. J Neural Eng 2023; 20:046027. PMID: 37531948; PMCID: PMC10594539; DOI: 10.1088/1741-2552/aceca2.
Abstract
Objective. We developed a realistic simulation paradigm for cortical prosthetic vision and investigated whether visual performance could be improved using a novel clustering algorithm. Approach. Cortical visual prostheses have been developed to restore sight by stimulating the visual cortex. To investigate the visual experience, previous studies have used uniform phosphene maps, which may not accurately capture the phosphene map distributions generated in implant recipients. The current simulation paradigm was based on the Human Connectome Project retinotopy dataset and on implant placements derived from magnetic resonance imaging scans of the cortex; five unique retinotopic maps were derived using this method. To improve performance on these retinotopic maps, we enabled head scanning, and a density-based clustering algorithm was then used to relocate the centroids of visual stimuli. The impact of these improvements on visual detection performance was tested. Using spatially evenly distributed maps as a control, we recruited ten subjects and evaluated their performance across five sessions on the Berkeley Rudimentary Visual Acuity test and an object recognition task. Main results. Performance on the control maps was significantly better than on the retinotopic maps in both tasks. Both head scanning and the clustering algorithm showed potential to improve visual ability across multiple sessions in the object recognition task. Significance. The current paradigm is the first to simulate the experience of cortical prosthetic vision based on brain scans and implant placement, capturing the spatial distribution of phosphenes more realistically. Using evenly distributed maps may overestimate the performance that visual prostheses can restore. This simulation paradigm could be used in clinical practice when planning where best to implant cortical visual prostheses.
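The abstract does not specify the clustering algorithm beyond "density-based", so the following is only a sketch of the general idea: cluster the phosphene locations with a minimal DBSCAN and snap each stimulus centroid to the nearest cluster centre. All function names, parameter values, and coordinates are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dbscan(points, eps=0.05, min_pts=3):
    """Naive DBSCAN over 2-D points; returns a cluster label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    cluster = -1
    for i in range(n):
        if labels[i] != -1:
            continue
        nbrs = np.flatnonzero(np.linalg.norm(points - points[i], axis=1) <= eps)
        if len(nbrs) < min_pts:
            continue  # not a core point (may still be claimed as a border point later)
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:  # expand the cluster through density-reachable points
            j = seeds.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            jn = np.flatnonzero(np.linalg.norm(points - points[j], axis=1) <= eps)
            if len(jn) >= min_pts:
                seeds.extend(jn)
    return labels

def relocate(stim_centroids, phosphenes, eps=0.05, min_pts=3):
    """Move each stimulus centroid onto the nearest phosphene-cluster centre."""
    labels = dbscan(phosphenes, eps, min_pts)
    centres = np.array([phosphenes[labels == c].mean(axis=0)
                        for c in range(labels.max() + 1)])
    return np.array([centres[np.argmin(np.linalg.norm(centres - p, axis=1))]
                     for p in stim_centroids])
```

With two tight phosphene clumps, a stimulus centroid placed between them would be snapped onto the centre of the nearer clump.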
Affiliation(s)
- Haozhe Zac Wang
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Yan Tat Wong
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
- Department of Physiology, Monash University, Melbourne, Australia
2
Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023; 23:5. PMID: 37140911; PMCID: PMC10166121; DOI: 10.1167/jov.23.5.5.
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
3
Wang C, Fang C, Zou Y, Yang J, Sawan M. Artificial intelligence techniques for retinal prostheses: a comprehensive review and future direction. J Neural Eng 2023; 20. PMID: 36634357; DOI: 10.1088/1741-2552/acb295.
Abstract
Objective. Retinal prostheses are promising devices for restoring vision to patients with severe age-related macular degeneration or retinitis pigmentosa. The visual processing mechanism embodied in retinal prostheses plays an important role in the restoration effect; its performance depends on our understanding of the retina's working mechanism and on the evolution of computer vision models. Recently, remarkable progress has been made in processing algorithms for retinal prostheses, combining new discoveries about the retina's working principles with state-of-the-art computer vision models. Approach. We investigated the research on artificial intelligence techniques for retinal prostheses. The processing algorithms in these studies fall into three types: computer vision-related methods, biophysical models, and deep learning models. Main results. In this review, we first illustrate the structure and function of the normal and degenerated retina and demonstrate the vision rehabilitation mechanisms of three representative retinal prostheses. We then summarize the computational frameworks abstracted from the normal retina, together with the development and features of the three types of processing algorithms. Finally, we analyze the bottlenecks in existing algorithms and propose future directions for improving the restoration effect. Significance. This review systematically summarizes existing processing models for predicting the response of the retina to external stimuli, and its suggestions for future directions may inspire researchers in this field to design better algorithms for retinal prostheses.
Affiliation(s)
- Chuanqing Wang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Chaoming Fang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Yong Zou
- Beijing Institute of Radiation Medicine, Beijing, People's Republic of China
- Jie Yang
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
- Mohamad Sawan
- Center of Excellence in Biomedical Research on Advanced Integrated-on-chips Neurotechnologies, School of Engineering, Westlake University, Hangzhou 310030, People's Republic of China
4
Elnabawy RH, Abdennadher S, Hellwich O, Eldawlatly S. Object recognition and localization enhancement in visual prostheses: a real-time mixed reality simulation. Biomed Eng Online 2022; 21:91. PMID: 36566183; DOI: 10.1186/s12938-022-01059-7.
Abstract
Blindness severely restricts the daily activities of those it affects. Visual prostheses have been introduced to provide artificial vision to the blind, with the aim of helping them regain confidence and independence. In this article, we propose an approach that combines four image enhancement techniques to facilitate object recognition and localization for visual prosthesis users: clip art representation of objects, edge sharpening, corner enhancement, and electrode dropout handling. The proposed techniques were tested in a real-time mixed reality simulation environment that mimics the vision perceived by visual prosthesis users. Twelve experiments, involving single objects, multiple objects, and navigation, measured participants' performance in object recognition and localization. Recognition performance was evaluated by recognition time, recognition accuracy, and confidence level; localization performance by two metrics, grasping attempt time and grasping accuracy. The results demonstrate that applying all enhancement techniques simultaneously yields higher accuracy, higher confidence, and shorter recognition and grasping times than applying no enhancement or pairwise combinations of the techniques. Visual prostheses could benefit from the proposed approach to provide users with enhanced perception.
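The abstract does not detail how any of the four enhancements are implemented. As one illustrative example, edge sharpening is commonly done by unsharp masking; the sketch below shows that generic technique in plain NumPy (kernel size and amount are assumed values, not the authors' settings).

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter with edge padding; k must be odd."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpen(img, amount=1.0, k=3):
    """Unsharp masking: boost the difference between the image and its blur."""
    return np.clip(img + amount * (img - box_blur(img, k)), 0.0, 1.0)
```

On a soft step edge, this darkens the dark side and brightens the bright side while leaving flat regions untouched, which is the contrast boost the technique is named for.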
Affiliation(s)
- Reham H Elnabawy
- Digital Media Engineering and Technology Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt
- Slim Abdennadher
- Computer Science and Engineering Department, Faculty of Media Engineering and Technology, German University in Cairo, Cairo, Egypt; Computer Science Department, Faculty of Informatics and Computer Science, German International University, New Administrative Capital, Egypt
- Olaf Hellwich
- Chair of Computer Vision and Remote Sensing, Technische Universität Berlin, Berlin, Germany
- Seif Eldawlatly
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, 1 El-Sarayat St., Abbassia, Cairo, Egypt; Computer Science and Engineering Department, The American University in Cairo, Cairo, Egypt
5
6
Wang J, Zhao R, Li P, Fang Z, Li Q, Han Y, Zhou R, Zhang Y. Clinical progress and optimization of information processing in artificial visual prostheses. Sensors (Basel) 2022; 22:6544. PMID: 36081002; PMCID: PMC9460383; DOI: 10.3390/s22176544.
Abstract
Visual prostheses, which assist in restoring functional vision to the visually impaired, convert captured external images into corresponding electrical stimulation patterns delivered by implanted microelectrodes to induce phosphenes and, eventually, visual perception. Detecting and providing useful visual information to the prosthesis wearer under limited artificial vision has been an important concern in the field of visual prostheses. Alongside developments in prosthetic device design and stimulus encoding methods, researchers have explored applying computer vision by simulating perception under prosthetic vision. Effective image processing is performed to optimize artificial visual information and improve the restoration of important visual functions in implant recipients, allowing them to better meet their daily demands. This paper first reviews recent clinical implantations of different types of visual prostheses and summarizes the artificial visual perception of implant recipients, focusing especially on its irregularities, such as dropout and distorted phosphenes. It then reviews the important aspects of computer vision in optimizing visual information processing and discusses the possibilities and shortcomings of these solutions. Finally, it summarizes development directions and key issues for improving the performance of visual prosthesis devices.
Affiliation(s)
- Jing Wang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Key Laboratory of Fishery Information, Ministry of Agriculture, Shanghai 200335, China
- Rongfeng Zhao
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Peitong Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Zhiqiang Fang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Qianqian Li
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yanling Han
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Ruyan Zhou
- School of Information, Shanghai Ocean University, Shanghai 201306, China
- Yun Zhang
- School of Information, Shanghai Ocean University, Shanghai 201306, China
7
Furl N, Begum F, Ferrarese FP, Jans S, Woolley C, Sulik J. Caricatured facial movements enhance perception of emotional facial expressions. Perception 2022; 51:313-343. PMID: 35341407; PMCID: PMC9017061; DOI: 10.1177/03010066221086452.
Abstract
Although faces "in the wild" constantly undergo complicated movements, humans adeptly perceive facial identity and expression. Previous studies, focusing mainly on identity, used photographic caricature to show that distinctive form increases perceived dissimilarity. We tested whether distinctive facial movements showed similar effects, and we focussed on perception of both expression and identity. We caricatured the movements of an animated computer head, using physical motion metrics extracted from videos. We verified that these "ground truth" metrics showed the expected effects: caricature increased physical dissimilarity between faces differing in expression and those differing in identity. Like the ground truth dissimilarity, participants' dissimilarity perception was increased by caricature when faces differed in expression, and we found these perceived dissimilarities to reflect the "representational geometry" of the ground truth. However, neither of these findings held for faces differing in identity. These findings replicated across two paradigms: pairwise ratings and multiarrangement. In a final study, motion caricature did not improve recognition memory for identity, whether manipulated at study or test. We report several forms of converging evidence for spatiotemporal caricature effects on dissimilarity perception of different expressions. However, more work needs to be done to discover what identity-specific movements can enhance face identification.
Affiliation(s)
- Sarah Jans
- Royal Holloway, University of London, UK
- Justin Sulik
- Royal Holloway, University of London, UK; Cognition, Values & Behavior, Ludwig Maximilian University of Munich, Germany
8
Meikle SJ, Wong YT. Neurophysiological considerations for visual implants. Brain Struct Funct 2021; 227:1523-1543. PMID: 34773502; DOI: 10.1007/s00429-021-02417-2.
Abstract
Neural implants have the potential to restore visual capabilities in blind individuals by electrically stimulating neurons of the visual system. This stimulation can produce visual percepts known as phosphenes. The ideal location of electrical stimulation for achieving vision restoration is widely debated and depends on the physiological properties of the targeted tissue. Here, the neurophysiology of several potential target structures within the visual system is explored with regard to their benefits and drawbacks for producing phosphenes: the lateral geniculate nucleus, primary visual cortex, visual area 2, visual area 3, visual area 4, and the middle temporal area. Given the existing engineering limitations of neural prostheses, we anticipate that electrical stimulation of any single brain region will be incapable of achieving high-resolution naturalistic perception including color, texture, shape, and motion. As improvements in visual acuity facilitate improvements in quality of life, emulating naturalistic vision should be one of the ultimate goals of visual prostheses. To achieve this goal, we propose that multiple brain areas will need to be targeted in unison, enabling different aspects of vision to be recreated.
Affiliation(s)
- Sabrina J Meikle
- Department of Electrical and Computer Systems Engineering, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
- Department of Physiology and Biomedicine Discovery Institute, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
- Monash Vision Group, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
- Yan T Wong
- Department of Electrical and Computer Systems Engineering, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
- Department of Physiology and Biomedicine Discovery Institute, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
- Monash Vision Group, Monash University, 14 Alliance Lane, Clayton, Vic, 3800, Australia
9
Pio-Lopez L, Poulkouras R, Depannemaecker D. Visual cortical prosthesis: an electrical perspective. J Med Eng Technol 2021; 45:394-407. PMID: 33843427; DOI: 10.1080/03091902.2021.1907468.
Abstract
Electrical stimulation of the visual cortices has the potential to restore vision to blind individuals. Until now, results in visual cortical prosthetics have been limited, as no prosthesis has restored full working vision, but the field has seen renewed interest in recent years thanks to wireless and other technological advances. However, several scientific and technical challenges remain open before these new devices achieve their expected therapeutic benefit. One of the main challenges is the electrical stimulation of the brain itself. In this review, we analyse the results of electrode-based visual cortical prosthetics from the electrical point of view. We first describe what is known about the electrode-tissue interface and the safety of electrical stimulation. We then focus on the psychophysics of prosthetic vision and the state of the art on the interplay between electrical stimulation of the visual cortex and phosphene perception. Lastly, we discuss the challenges and perspectives of visual cortex electrical stimulation and electrode array design for developing the next generation of implantable cortical visual prostheses.
Affiliation(s)
- Romanos Poulkouras
- Department of Bioelectronics, Ecole Nationale Supérieure des Mines, CMP-EMSE, Gardanne, France; Institut de Neurosciences de la Timone, UMR 7289, CNRS, Aix-Marseille Université, Marseille, France
- Damien Depannemaecker
- Department of Integrative and Computational Neuroscience, Paris-Saclay Institute of Neuroscience, Centre National de la Recherche Scientifique, Gif-sur-Yvette, France
10
Stoney C, Robbins RA, McKone E. A stimulus set of people famous to current generation Australian undergraduates, with recognition norms for face images and names. Australian Journal of Psychology 2021. DOI: 10.1111/ajpy.12295.
Affiliation(s)
- Corinne Stoney
- Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Rachel A. Robbins
- Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Elinor McKone
- Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
11
Thorn JT, Migliorini E, Ghezzi D. Virtual reality simulation of epiretinal stimulation highlights the relevance of the visual angle in prosthetic vision. J Neural Eng 2020; 17:056019. DOI: 10.1088/1741-2552/abb5bc.
12
Yin Z, Wang H, Wang J, Tang J, Wang W. Defense against adversarial attacks by low-level image transformations. Int J Intell Syst 2020. DOI: 10.1002/int.22258.
Affiliation(s)
- Zhaoxia Yin
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Hua Wang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Jie Wang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Jin Tang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Wenzhong Wang
- Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
13
Sanchez-Garcia M, Martinez-Cantin R, Guerrero JJ. Semantic and structural image segmentation for prosthetic vision. PLoS One 2020; 15:e0227677. PMID: 31995568; PMCID: PMC6988941; DOI: 10.1371/journal.pone.0227677.
Abstract
Prosthetic vision is being applied to partially restore retinal stimulation in visually impaired people. However, the phosphene images produced by implants have very limited information bandwidth owing to poor resolution and a lack of color and contrast, so prosthesis users' ability to recognize objects and understand scenes in real environments is severely restricted. Computer vision can play a key role in overcoming these limitations and optimizing the visual information presented in prosthetic vision, improving the amount of information conveyed. We present a new approach to building a schematic representation of indoor environments for simulated phosphene images. The proposed method combines several convolutional neural networks to extract and convey relevant information about the scene, such as structurally informative edges of the environment and silhouettes of segmented objects. Experiments were conducted with normally sighted subjects using a simulated prosthetic vision system. The results show good accuracy on object recognition and room identification tasks for indoor scenes with the proposed approach, compared with other image processing methods.
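Simulated phosphene images of the kind used in this study are commonly rendered as a grid of Gaussian blobs whose brightness follows the processed scene representation. The sketch below shows that generic rendering step only; the grid size, blob width, and value ranges are assumptions, not the authors' exact simulator.

```python
import numpy as np

def phosphene_render(intensity, out_size=64, sigma=2.0):
    """Render a (gh, gw) grid of phosphene intensities in [0, 1] as Gaussian blobs."""
    gh, gw = intensity.shape
    # place blob centres on an evenly spaced grid, inset from the borders
    ys = np.linspace(sigma * 2, out_size - sigma * 2, gh)
    xs = np.linspace(sigma * 2, out_size - sigma * 2, gw)
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    img = np.zeros((out_size, out_size))
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            if intensity[i, j] > 0:  # skip dark electrodes
                img += intensity[i, j] * np.exp(
                    -((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(img, 0.0, 1.0)
```

Feeding this renderer a binary edge or silhouette map downsampled to the grid would give the kind of schematic phosphene view the abstract describes.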
Affiliation(s)
- Melani Sanchez-Garcia
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Ruben Martinez-Cantin
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
- Jose J. Guerrero
- Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, Zaragoza, Spain
14
Ho E, Boffa J, Palanker D. Performance of complex visual tasks using simulated prosthetic vision via augmented-reality glasses. J Vis 2019; 19:22. PMID: 31770773; PMCID: PMC6880846; DOI: 10.1167/19.13.22.
Abstract
The photovoltaic subretinal prosthesis is designed for restoration of central vision in patients with age-related macular degeneration (AMD). We investigated the utility of prosthetic central vision for complex visual tasks using augmented-reality (AR) glasses simulating reduced acuity, contrast, and visual field. The AR glasses, with the central 20° of the visual field blocked, included an integrated video camera and software that adjusts image quality according to three user-defined parameters: resolution, corresponding to the equivalent pixel size of an implant; field of view, corresponding to implant size; and number of grayscale levels. The real-time processed video was streamed on a screen in front of the right eye. Nineteen healthy participants were recruited to complete visual tasks including vision charts, sentence reading, and face recognition. On the vision charts, letter acuity exceeded the pixel-sampling limit by 0.2 logMAR. Reading speed decreased with increasing pixel size and with reduced field of view (7°-12°). In the face recognition task (four-way forced choice, 5° angular size), participants identified faces at >75% accuracy even with 100 μm pixels and only two grayscale levels; with 60 μm pixels and eight grayscale levels, accuracy exceeded 97%. Subjects with simulated prosthetic vision thus performed slightly better than the sampling limit on letter acuity tasks and were highly accurate at recognizing faces even at 100 μm/pixel resolution. These results indicate the feasibility of reading and face recognition using prosthetic central vision even with 100 μm pixels, with performance improving further with smaller pixels.
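Two of the three simulation parameters described (equivalent pixel size and number of grayscale levels) amount to block-averaging followed by intensity quantization. The sketch below illustrates those two steps under that assumption; the function and parameter names are illustrative, and the real software additionally crops to the simulated field of view.

```python
import numpy as np

def simulate_ar_view(img, px_per_phosphene=8, levels=2):
    """Reduce an image (values in [0, 1]) to coarse pixels and few gray levels."""
    h, w = img.shape
    h -= h % px_per_phosphene  # trim so the image tiles exactly
    w -= w % px_per_phosphene
    # block-average onto the simulated implant resolution
    block = img[:h, :w].reshape(h // px_per_phosphene, px_per_phosphene,
                                w // px_per_phosphene, px_per_phosphene).mean(axis=(1, 3))
    # quantize to the requested number of grayscale levels in [0, 1]
    return np.round(block * (levels - 1)) / (levels - 1)
```

With `levels=2` every simulated pixel becomes pure black or white, matching the two-grayscale-level condition of the face recognition task.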
Affiliation(s)
- Elton Ho
- Department of Physics, Stanford University, Stanford, CA, USA
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, USA
- Jack Boffa
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, USA
- Daniel Palanker
- Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA, USA
- Department of Ophthalmology, Stanford University, Stanford, CA, USA
15
Lane J, Robbins RA, Rohan EMF, Crookes K, Essex RW, Maddess T, Sabeti F, Mazlin JL, Irons J, Gradden T, Dawel A, Barnes N, He X, Smithson M, McKone E. Caricaturing can improve facial expression recognition in low-resolution images and age-related macular degeneration. J Vis 2019; 19:18. DOI: 10.1167/19.6.18.
Affiliation(s)
- Jo Lane
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Rachel A. Robbins
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Emilie M. F. Rohan
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Kate Crookes
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Rohan W. Essex
- Academic Unit of Ophthalmology, Medical School, The Australian National University, Canberra, ACT, Australia
- Ted Maddess
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Faran Sabeti
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Discipline of Optometry and Vision Science, The University of Canberra, Bruce, ACT, Australia
- Collaborative Research in Bioactives and Biomarkers (CRIBB) Group, Canberra, ACT, Australia
- Jamie-Lee Mazlin
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Jessica Irons
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Tamara Gradden
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Amy Dawel
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Nick Barnes
- Research School of Engineering, The Australian National University and Data61, Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, Australia
- Xuming He
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Michael Smithson
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Elinor McKone
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
16
Impacts of impaired face perception on social interactions and quality of life in age-related macular degeneration: a qualitative study and new community resources. PLoS One 2018; 13:e0209218. PMID: 30596660; PMCID: PMC6312296; DOI: 10.1371/journal.pone.0209218.
Abstract
Aims: Previous studies and community information about everyday difficulties in age-related macular degeneration (AMD) have focussed on domains such as reading and driving. Here, we provide the first in-depth examination of how impaired face perception impacts social interactions and quality of life in AMD. We also develop a Faces and Social Life in AMD brochure and information sheet, plus an accompanying conversation starter, aimed at AMD patients and those who interact with them (family, friends, nursing home staff). Method: Semi-structured face-to-face interviews were conducted with 21 AMD patients covering the full range from mild vision loss to legally blind. Thematic analysis was used to explore the range of patient experiences. Results: Patients reported that faces appeared blurred and/or distorted. They described recurrent failures to recognise others' identity, facial expressions, and emotional states, plus failures of alternative non-face strategies (e.g., hairstyle, voice). They reported failures to follow social nuances (e.g., to pick up that someone was joking) and feelings of missing out ('I can't join in'). Concern about offending others (e.g., by unintentionally ignoring them) was common, as were concerns about appearing fraudulent ('Other people don't understand'). Many reported social disengagement, and many reported specifically face-perception-related reductions in social life, confidence, and quality of life. All effects were observed even with only mild vision loss. Patients endorsed the value of our Faces and Social Life in AMD information sheet, developed from the interview results, and supported future technological assistance (digital image enhancement). Conclusion: Poor face perception in AMD is an important domain contributing to impaired social interactions and quality of life. This domain should be directly assessed in quantitative quality-of-life measures and in resources designed to improve community understanding. The identity-related social difficulties mirror those in prosopagnosia, which is of cortical rather than retinal origin, implying the findings may generalise to all low-vision disorders.
17
Lane J, Rohan EMF, Sabeti F, Essex RW, Maddess T, Barnes N, He X, Robbins RA, Gradden T, McKone E. Improving face identity perception in age-related macular degeneration via caricaturing. Sci Rep 2018; 8:15205. [PMID: 30315188 PMCID: PMC6185956 DOI: 10.1038/s41598-018-33543-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 06/12/2018] [Accepted: 09/26/2018] [Indexed: 11/09/2022]
Abstract
Patients with age-related macular degeneration (AMD) have difficulty recognising people's faces. We tested whether this could be improved using caricaturing: an image enhancement procedure derived from cortical coding in a perceptual 'face-space'. Caricaturing exaggerates the distinctive ways in which an individual's face shape differs from the average. We tested 19 AMD-affected eyes (from 12 patients; ages 66-93 years) monocularly, selected to cover the full range of vision loss. Patients rated how different in identity people's faces appeared when compared in pairs (e.g., two young men, both Caucasian), at four caricature strengths (0, 20, 40, 60% exaggeration). This task gives data reliable enough to analyse statistically at the individual-eye level. All 9 eyes with mild vision loss (acuity ≥ 6/18) showed significant improvement in identity discrimination (higher dissimilarity ratings) with caricaturing. The size of improvement matched that in normal-vision young adults. The caricature benefit became less stable as visual acuity further decreased, but caricaturing was still effective in half the eyes with moderate and severe vision loss (significant improvement in 5 of 10 eyes, at acuities from 6/24 to poorer than 6/360). We conclude caricaturing has the potential to help many AMD patients recognise faces.
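The caricaturing procedure described in this abstract amounts to scaling each landmark point's offset from the corresponding point on the average face. A minimal illustrative sketch, not the authors' implementation (the function and variable names are hypothetical):

```python
def caricature(landmarks, average, strength):
    """Exaggerate a face's landmark points away from the average face.

    landmarks, average: lists of (x, y) tuples, one per landmark point.
    strength: 0.0 returns the veridical (unchanged) shape; 0.6 corresponds
    to the 60% exaggeration level used in the study.
    """
    return [
        (ax + (1.0 + strength) * (x - ax), ay + (1.0 + strength) * (y - ay))
        for (x, y), (ax, ay) in zip(landmarks, average)
    ]
```

At strength = 0.6, a landmark 1 unit away from its average-face position moves to 1.6 units away, making the face's distinctive geometry more pronounced while strength = 0.0 leaves it unchanged.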
Affiliation(s)
- Jo Lane
- Research School of Psychology, and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Emilie M F Rohan
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Faran Sabeti
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Discipline of Optometry and Vision Science, The University of Canberra, Bruce, ACT, Australia
- Rohan W Essex
- Academic Unit of Ophthalmology, The Australian National University, Canberra, ACT, Australia
- Ted Maddess
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Nick Barnes
- Research School of Engineering, The Australian National University, and Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, ACT, Australia
- Xuming He
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Rachel A Robbins
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Tamara Gradden
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Elinor McKone
- Research School of Psychology, and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia.
18
McKone E, Robbins RA, He X, Barnes N. Caricaturing faces to improve identity recognition in low vision simulations: How effective is current-generation automatic assignment of landmark points? PLoS One 2018; 13:e0204361. [PMID: 30286112 PMCID: PMC6171855 DOI: 10.1371/journal.pone.0204361] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 05/24/2018] [Accepted: 09/05/2018] [Indexed: 11/25/2022]
Abstract
PURPOSE: Previous behavioural studies demonstrate that face caricaturing can provide an effective image-enhancement method for improving poor face identity perception in low-vision simulations (e.g., age-related macular degeneration, bionic eye). To translate caricaturing usefully to patients, assignment of the multiple face landmark points needed to produce the caricatures needs to be fully automated. Recent developments in computer science allow automatic detection of 68 face landmark points in real time and across multiple viewpoints. However, previous demonstrations of the behavioural effectiveness of caricaturing used higher-precision caricatures with 147 landmark points per face, assigned by hand. Here, we test the effectiveness of the auto-assigned 68-point caricatures and compare them to the hand-assigned 147-point caricatures.
METHOD: We assessed human perception of how different in identity pairs of faces appear when veridical (uncaricatured), caricatured with 68 points, and caricatured with 147 points. Across two experiments, we tested two types of low-vision images: a simulation of blur, as experienced in macular degeneration (at two blur levels), and a simulation of the phosphenised images seen in prosthetic vision (at three resolutions).
RESULTS: The 68-point caricatures produced significant improvements in identity discrimination relative to veridical, and were approximately 50% as effective as the 147-point caricatures.
CONCLUSION: Realistic translation to patients (e.g., via real-time caricaturing with the enhanced signal sent to smart glasses or a visual prosthetic) is approaching feasibility. For maximum effectiveness, the software needs to assign landmark points tracing out all details of feature and face shape, to produce high-precision caricatures.
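The phosphenised-image simulation mentioned in this abstract reduces a picture to a coarse grid of discrete bright spots, with resolution set by the number of simulated phosphenes. A toy sketch of one common approach, block-averaging a grayscale image down to an N x N phosphene grid; this is an illustrative assumption, not the renderer used in the paper:

```python
def phosphenise(image, grid):
    """Crude prosthetic-vision simulation: reduce a grayscale image
    (a list of rows of floats) to a grid x grid array of mean block
    intensities, each cell standing in for one phosphene's brightness."""
    h, w = len(image), len(image[0])
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            ys = range(gy * h // grid, (gy + 1) * h // grid)
            xs = range(gx * w // grid, (gx + 1) * w // grid)
            vals = [image[y][x] for y in ys for x in xs]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

Varying `grid` reproduces the idea of testing caricature effectiveness at several phosphene resolutions: smaller grids discard more facial detail, which is why landmark exaggeration can matter more at low resolution.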
Affiliation(s)
- Elinor McKone
- Research School of Psychology, and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, Australian Capital Territory, Australia
- Rachel A. Robbins
- Research School of Psychology, The Australian National University, Canberra, Australian Capital Territory, Australia
- Xuming He
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Nick Barnes
- Research School of Engineering, Australian National University, Canberra, Australian Capital Territory, Australia
- Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Canberra, Australian Capital Territory, Australia
- Bionic Vision Australia, Carlton, Victoria, Australia