1. Mawatari G, Hiwatashi S, Motani T, Nagatomo S, Ando E, Kuwahata T, Ishizu M, Ikeda Y. Efficacy of a wearable night-vision aid in patients with concentric peripheral visual field loss: a randomized, crossover trial. Jpn J Ophthalmol 2024. PMID: 38795195; DOI: 10.1007/s10384-024-01068-0.
Abstract
PURPOSE To investigate the efficacy of our wearable night-vision aid in patients with concentric peripheral visual field loss. STUDY DESIGN Prospective, single-blind, three-group, three-period crossover clinical study. METHODS The study included patients with concentric peripheral visual field loss, a best-corrected visual acuity (decimal visual acuity) of 0.1 or higher in the better eye, and a preserved central visual field. The HOYA MW10 HiKARI® (HOYA Corporation), our original wearable night-vision aid, was used as the test device with three types of camera lenses (standard-, middle-, and wide-angle). Under both bright and dark conditions, the angle of the horizontal visual field was measured with each of the three lens types in each group. The baseline angle was measured while each participant wore the night-vision aid powered off. RESULTS The study included 21 participants. Under the bright condition, the perceived horizontal visual field was significantly wider with the standard-angle lens ("the standard lens") than at baseline; significantly wider with the middle-angle lens ("the middle lens") than at baseline and with the standard lens; and significantly wider with the wide-angle lens ("the wide lens") than with either of the other lenses. Under the dark condition, the perceived horizontal visual field was again significantly wider with the middle lens than at baseline and with the standard lens, and wider with the wide lens than with the other lenses. The baseline (control) visual field was significantly wider in the bright condition than in the dark condition (p < 0.001), whereas the visual field with the standard lens was significantly wider in the dark condition than in the bright condition (p = 0.05). For the middle and wide lenses, no statistically significant difference emerged between the illumination conditions.
CONCLUSION Our wearable night-vision aid with a middle-angle or wide-angle lens appears to provide wider visual field images in patients with concentric peripheral visual field loss, regardless of whether the illumination conditions are bright or dark.
Affiliation(s)
- Go Mawatari
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Shogo Hiwatashi
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Tsubasa Motani
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Saori Nagatomo
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Eri Ando
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Toshiki Kuwahata
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Masataka Ishizu
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, Miyazaki, Japan
- Yasuhiro Ikeda
- Department of Ophthalmology, Faculty of Medicine, University of Miyazaki, 5200 Kihara, Kiyotake, Miyazaki 889-1692, Japan
2. Alonso JR, Fernández A, Javidi B. Spatial perception in stereoscopic augmented reality based on multifocus sensing. Opt Express 2024;32:5943-5955. PMID: 38439309; DOI: 10.1364/oe.510688.
Abstract
In many areas, ranging from medical imaging to visual entertainment, 3D information acquisition and display is a key task. In multifocus computational imaging, stacks of images of a 3D scene are acquired under different focus configurations and later combined by post-capture algorithms based on an image formation model in order to synthesize images with novel viewpoints of the scene. Stereoscopic augmented reality devices, through which it is possible to simultaneously visualize the three-dimensional real world along with an overlaid digital stereoscopic image pair, could benefit from the binocular content enabled by multifocus computational imaging. Spatial perception of the displayed stereo pairs can be controlled by synthesizing the desired point of view of each image of the stereo pair along with its parallax setting. The proposed method has the potential to alleviate the accommodation-convergence conflict and make stereoscopic augmented reality devices less vulnerable to visual fatigue.
3. Kasowski J, Johnson BA, Neydavood R, Akkaraju A, Beyeler M. A systematic review of extended reality (XR) for understanding and augmenting vision loss. J Vis 2023;23(5):5. PMID: 37140911; PMCID: PMC10166121; DOI: 10.1167/jov.23.5.5.
Abstract
Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.
Affiliation(s)
- Justin Kasowski
- Graduate Program in Dynamical Neuroscience, University of California, Santa Barbara, CA, USA
- Byron A Johnson
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Ryan Neydavood
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Anvitha Akkaraju
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Michael Beyeler
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, CA, USA
- Department of Computer Science, University of California, Santa Barbara, CA, USA
4. Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023;85:102757. PMID: 36706637; DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 through 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore, and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, for whom AR-enhanced medical simulators are emerging as a promising technology. While concerns about human-computer interaction, usability, and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, paving the way for novel, innovative directions and translation into routine medical practice.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
5. Fox DR, Ahmadzada A, Wang CT, Azenkot S, Chu MA, Manduchi R, Cooper EA. Using augmented reality to cue obstacles for people with low vision. Opt Express 2023;31:6827-6848. PMID: 36823931; DOI: 10.1364/oe.479258.
Abstract
Detecting and avoiding obstacles while navigating can pose a challenge for people with low vision, but augmented reality (AR) has the potential to assist by enhancing obstacle visibility. Perceptual and user experience research is needed to understand how to craft effective AR visuals for this purpose. We developed a prototype AR application capable of displaying multiple kinds of visual cues for obstacles on an optical see-through head-mounted display. We assessed the usability of these cues via a study in which participants with low vision navigated an obstacle course. The results suggest that 3D world-locked AR cues were superior to directional heads-up cues for most participants during this activity.
6. Pur DR, Lee-Wing N, Bona MD. The use of augmented reality and virtual reality for visual field expansion and visual acuity improvement in low vision rehabilitation: a systematic review. Graefes Arch Clin Exp Ophthalmol 2023;261:1743-1755. PMID: 36633669; DOI: 10.1007/s00417-022-05972-4.
Abstract
INTRODUCTION Developments in image processing techniques and display technology have led to the emergence of augmented reality (AR)- and virtual reality (VR)-based low vision devices (LVDs). However, their promise and limitations in low vision rehabilitation are poorly understood. The objective of this systematic review is to appraise the application of AR/VR LVDs aimed at visual field expansion and visual acuity improvement in low vision rehabilitation. METHODS A systematic search of the literature was performed using MEDLINE, Embase, PsycINFO, HealthSTAR, and the National Library of Medicine (PubMed) from inception to March 6, 2022. Articles were eligible if they included an AR or VR LVD tested on a sample of individuals with low vision and provided visual outcomes such as visual acuity, visual fields, and object recognition. RESULTS Of the 652 articles identified, 16 studies comprising 382 individuals, with a mean age of 52.17 (SD = 18.30) years and heterogeneous low vision etiologies (i.e., glaucoma, age-related macular degeneration, retinitis pigmentosa), were included in this systematic review. Most articles used AR (53%) or VR (40%), and one article used both. The main visual outcomes evaluated were visual fields (67%), visual acuity (65%), and contrast sensitivity (27%). Various visual enhancement techniques were employed, including variable magnification using digital zoom (67%), contrast enhancement (53%), and minification (27%). AR LVDs were reported to expand the visual field threefold to ninefold. On average, individuals using AR/VR LVDs experienced an improvement in visual acuity from 0.9 to 0.2 logMAR. Ten articles were classified as being at high or moderate risk of bias. CONCLUSION AR/VR LVDs were found to afford visual field expansion and visual acuity improvement in low vision populations.
Although the results of this review are promising, the lack of controlled studies with well-defined populations, the use of small convenience samples, and incomplete reporting of inclusion and exclusion criteria among the included studies make it challenging to judge the true impact of these devices. Future studies should address these limitations and compare various AR/VR LVDs to determine the ideal combination of LVD type and vision enhancement based on the user's level of visual ability and lifestyle.
Affiliation(s)
- Daiana R Pur
- Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Nathan Lee-Wing
- Max Rady College of Medicine, University of Manitoba, Winnipeg, MB, Canada
- Mark D Bona
- Department of Ophthalmology, Queen's University and Hotel Dieu Hospital, Kingston, ON, Canada
7. Jeganathan VSE, Kumagai A, Shergill H, Fetters MD, Moroi SE, Gosbee J, Kim DS, Weiland JD, Ehrlich JR. Design of smart head-mounted display technology: a convergent mixed-methods study. J Vis Impair Blind 2022. DOI: 10.1177/0145482X221130068.
Abstract
Introduction: The purpose of this study was to characterize functional impairments and human factors considerations that affect perceptions of and preferences for head-mounted display (HMD) technology among adults with low vision and chronic eye disease. Methods: Through a convergent mixed-methods design, participants with visual impairments (age-related macular degeneration, diabetic retinopathy, glaucoma, or retinitis pigmentosa) were recruited. Participants completed the Impact of Vision Impairment (IVI) questionnaire, used commercially available HMDs (eSight, NuEyes, and Epson Moverio), and were interviewed. The IVI was used to identify groups with low, moderate, and high vision-related quality of life (VRQOL). Transcribed interviews were analyzed using a thematic approach. The survey and qualitative findings were integrated using mixed-methods joint display analysis. Results: Twenty-one participants were enrolled (mean age, 58.2 years; 57% male; median Snellen acuity, 20/40 [range: 20/20 to hand movement]). An equal number (n = 9) expressed a preference for eSight and for NuEyes, while the remaining three (n = 3) preferred the Moverio. Participants emphasized ease of use, including HMD controls and screen, as common reasons for their preference. Those with low IVI well-being scores preferred eSight because of its vision improvement; those with moderate IVI well-being scores preferred NuEyes because of its comfort and size; and those with high IVI well-being scores cited usability as the most important feature. Discussion: User preferences for HMD features were associated with VRQOL. A mixed-methods approach explained how varying degrees of visual impairment and HMD preferences were qualitatively related to usability at the individual level. Implications for Practitioners: To increase acceptance, new HMD development for low vision should focus on performance, usability, and human factors engineering.
Although HMD technology can benefit individuals with low vision, device features and functions vary in meaningful ways based on vision parameters. Practitioners should be aware of how patient and device variations influence preferences when they recommend wearable systems, and should optimize training to harness these systems.
Affiliation(s)
- V. Swetha E. Jeganathan
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Abigail Kumagai
- School of Medicine, Wayne State University, Detroit, MI, USA
- Harleen Shergill
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Michael D. Fetters
- Mixed Methods Program, Department of Family Medicine, University of Michigan, Ann Arbor, MI, USA
- Sayoko E. Moroi
- Department of Ophthalmology and Visual Sciences, The Ohio State University, Columbus, OH, USA
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- John Gosbee
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Departments of Graduate Medical Education, University of Michigan, Ann Arbor, MI, USA
- Dae Shik Kim
- Department of Blindness and Low Vision Studies, Western Michigan University, Kalamazoo, MI, USA
- James D. Weiland
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
- Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
- Joshua R. Ehrlich
- Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, MI, USA
8. Nguyen JD, Tan SM, Azenkot S, Chu MA, Cooper EA. Longitudinal trends in case histories and rehabilitative device assessments at low vision examinations. Optom Vis Sci 2022;99:817-829. PMID: 36301592; PMCID: PMC9704812; DOI: 10.1097/OPX.0000000000001953.
Abstract
SIGNIFICANCE Understanding longitudinal changes in why individuals frequent low-vision clinics is crucial for ensuring that patient care keeps current with changing technology and changing lifestyles. Among other findings, our results suggest that reading remains a prevailing patient complaint, with shifting priorities toward technology-related topics. PURPOSE This study aimed to understand changes in patient priorities and patient care in low vision over the past decade. METHODS We conducted a retrospective study of examination records (2009 to 2019; 3470 examinations) from two U.S. low-vision clinics. Automated word searches summarized two properties of the records: topics discussed during the case history and types of rehabilitative devices assessed. Logistic regression was used to model the effects of examination year, patient age, patient sex, and level of visual impairment. RESULTS Collapsing across all years, the most common topic discussed was reading (78%), followed by light-related topics (71%) and technology (59%). Whereas the odds of discussing reading trended downward over the decade (odds ratio, 0.57; P = .03), technology, social interaction, mobility, and driving trended upward (odds ratios, 4.53, 3.31, 2.71, and 1.95; all P < .001). The most frequently assessed devices were tinted lenses (95%). Over time, video magnifier and spectacle assessments trended downward (odds ratios, 0.64 and 0.72; P = .004 and .04), whereas assessments of other optical aids increased. The data indicate several consistent differences among patient demographics. CONCLUSIONS Reading is likely to remain a prevailing patient complaint, but an increase in technology-related topics suggests shifting priorities, particularly in younger demographics. "Low-tech" optical aids have remained prominent in low-vision care even as "high-tech" assistive devices in the marketplace continue to advance.
Affiliation(s)
- Jacqueline D. Nguyen
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan
- Steven M. Tan
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Shiri Azenkot
- Information Science, Jacobs Technion-Cornell Institute, Cornell Tech, Cornell University, New York, New York
- Marlena A. Chu
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Emily A. Cooper
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, California
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California
9. Li Y, Kim K, Erickson A, Norouzi N, Jules J, Bruder G, Welch GF. A scoping review of assistance and therapy with head-mounted displays for people who are visually impaired. ACM Trans Access Comput 2022. DOI: 10.1145/3522693.
Abstract
Given the inherent visual affordances of head-mounted displays (HMDs) used for virtual and augmented reality (VR/AR), they have been actively used over many years as assistive and therapeutic devices for people who are visually impaired. In this paper, we report on a scoping review of literature describing the use of HMDs in these areas. Our high-level objectives included detailed reviews and quantitative analyses of the literature, and the development of insights related to emerging trends and future research directions.
Our review began with a pool of 1251 papers collected through a variety of mechanisms. Through a structured screening process, we identified 61 English-language research papers employing HMDs to enhance the visual sense of people with visual impairments, which we analyzed in greater detail. Our analyses reveal that there is an increasing amount of HMD-based research on visual assistance and therapy, and that there are trends in the approaches associated with the research objectives; for example, AR is most often used for assistive purposes, whereas VR is used for therapeutic purposes. We report on eight existing survey papers and present detailed analyses of the 61 research papers, examining the mitigation objectives of the researchers (assistive versus therapeutic), the approaches used, the types of HMDs, the targeted visual conditions, and the inclusion of user studies. In addition to our detailed reviews and analyses of these characteristics, we present observations related to apparent emerging trends and future research directions.
Affiliation(s)
- Yifan Li
- University of Central Florida, USA
- Kangsoo Kim
- University of Central Florida, USA and University of Calgary, Canada
10. Zhao X, Jiang X, Han A, Mao T, He W, Chen Q. Photon-efficient 3D reconstruction employing an edge enhancement method. Opt Express 2022;30:1555-1569. PMID: 35209313; DOI: 10.1364/oe.446369.
Abstract
Photon-efficient 3D reconstruction under sparse-photon conditions remains challenging. Especially at scene edge locations, light scattering results in a weaker echo signal than at non-edge locations. Depth images can be viewed as smooth regions stitched together by edge segmentation, yet none of the existing methods focus on how to improve the accuracy of edge reconstruction when performing 3D reconstruction. Moreover, the impact of edge reconstruction on overall depth reconstruction has not been investigated. In this paper, we explore how to improve edge reconstruction accuracy from several directions, including improving the network structure, employing hybrid loss functions, and taking advantage of the non-local correlation of SPAD measurements. Meanwhile, we investigate the correlation between edge reconstruction accuracy and overall depth reconstruction accuracy using quantitative metrics. The experimental results show that the proposed method achieves superior performance in both edge reconstruction and overall depth reconstruction compared with other state-of-the-art methods. They also show that improving edge reconstruction accuracy promotes the reconstruction accuracy of the depth map.
11. Li T, Li C, Zhang X, Liang W, Chen Y, Ye Y, Lin H. Augmented reality in ophthalmology: applications and challenges. Front Med (Lausanne) 2021;8:733241. PMID: 34957138; PMCID: PMC8703032; DOI: 10.3389/fmed.2021.733241.
Abstract
Augmented reality (AR) has developed rapidly and has been implemented in many fields, such as medicine, maintenance, and cultural heritage. Unlike other specialties, ophthalmology connects closely with AR, since most AR systems are based on vision systems. Here we summarize the applications and challenges of AR in ophthalmology and provide insights for further research. First, we illustrate the structure of a standard AR system and present its essential hardware. Second, we systematically introduce applications of AR in ophthalmology, including therapy, education, and clinical assistance. To conclude, there is still considerable room for development, which will require further research effort; applications in diagnosis and protection may be worth exploring. Although hardware limitations currently restrict the development of AR in ophthalmology, AR will realize its potential and play an important role in ophthalmology in the future, given rapidly developing technology and more in-depth research.
Affiliation(s)
- Tongkeng Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Chenghao Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wenting Liang
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yongxin Chen
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Yunpeng Ye
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China; Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China
12. Papadopoulos N, Melanitis N, Lozano A, Soto-Sanchez C, Fernandez E, Nikita KS. Machine learning method for functional assessment of retinal models. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:4293-4296. PMID: 34892171; DOI: 10.1109/embc46164.2021.9629599.
Abstract
Challenges in the field of retinal prostheses motivate the development of retinal models that accurately simulate retinal ganglion cell (RGC) responses. The goal of retinal prostheses is to enable blind individuals to solve complex, real-life visual tasks. In this paper, we introduce the functional assessment (FA) of retinal models, which describes the concept of evaluating the performance of retinal models on visual understanding tasks. We present a machine learning method for FA: we feed traditional machine learning classifiers with RGC responses generated by retinal models to solve object and digit recognition tasks (CIFAR-10, MNIST, Fashion MNIST, Imagenette). We examined critical aspects of FA, including how its performance depends on the task, how to optimally feed RGC responses to the classifiers, and how the number of output neurons correlates with the model's accuracy. To increase the number of output neurons, we manipulated the input images by splitting them before feeding them to the retinal model, and found that image splitting does not significantly improve the model's accuracy. We also show that differences in the structure of the datasets result in largely divergent performance of the retinal model (MNIST and Fashion MNIST exceeded 80% accuracy, while CIFAR-10 and Imagenette achieved ~40%). Furthermore, retinal models that perform better in standard evaluation, i.e., that more accurately predict RGC responses, perform better in FA as well. However, unlike standard evaluation results, FA results can be straightforwardly interpreted in the context of comparing the quality of visual perception.
13
Candy TR, Cormack LK. Recent understanding of binocular vision in the natural environment with clinical implications. Prog Retin Eye Res 2021; 88:101014. [PMID: 34624515 PMCID: PMC8983798 DOI: 10.1016/j.preteyeres.2021.101014] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 09/26/2021] [Accepted: 09/29/2021] [Indexed: 10/20/2022]
Abstract
Technological advances in recent decades have allowed us to measure both the information available to the visual system in the natural environment and the rich array of behaviors that the visual system supports. This review highlights the tasks undertaken by the binocular visual system in particular and how, for much of human activity, these tasks differ from those considered when an observer fixates a static target on the midline. The everyday motor and perceptual challenges involved in generating a stable, useful binocular percept of the environment are discussed, together with how these challenges are but minimally addressed by much of current clinical interpretation of binocular function. The implications for new technology, such as virtual reality, are also highlighted in terms of clinical and basic research application.
Affiliation(s)
- T Rowan Candy
- School of Optometry, Programs in Vision Science, Neuroscience and Cognitive Science, Indiana University, 800 East Atwater Avenue, Bloomington, IN, 47405, USA.
- Lawrence K Cormack
- Department of Psychology, Institute for Neuroscience, and Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, 78712, USA.
14
Zang Z, Xiao D, Day-Uei Li D. Non-fusion time-resolved depth image reconstruction using a highly efficient neural network architecture. Opt Express 2021; 29:19278-19291. [PMID: 34266040 DOI: 10.1364/oe.425917] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 05/21/2021] [Indexed: 06/13/2023]
Abstract
Single-photon avalanche diodes (SPADs) are powerful sensors for 3D light detection and ranging (LiDAR) in low-light scenarios owing to their single-photon sensitivity. However, accurately retrieving ranging information from noisy time-of-arrival (ToA) point clouds remains a challenge. This paper proposes a photon-efficient, non-fusion neural network architecture that directly reconstructs high-fidelity depth images from ToA data without relying on other guiding images. In addition, the architecture was compressed via a low-bit quantization scheme so that it can be implemented on embedded hardware platforms. The proposed quantized architecture achieves superior reconstruction accuracy with fewer parameters than previously reported networks.
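The low-bit quantization step mentioned above can be sketched generically. The symmetric, per-tensor uniform scheme below is a common compression approach chosen purely for illustration; the paper's exact scheme is not reproduced here, and the weight values are made up:

```python
# Minimal sketch of uniform low-bit weight quantization, the kind of scheme
# used to shrink a network for embedded hardware deployment.

def quantize(weights, bits=4):
    """Map floats to signed integers in [-(2**(bits-1)-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax  # one scale for the tensor
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and the scale."""
    return [qi * scale for qi in q]

w = [0.42, -0.7, 0.06, 0.31, -0.12]
q, s = quantize(w, bits=4)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, max_err)  # 4-bit integers plus a bounded reconstruction error
```

At 4 bits each weight needs only a small integer plus one shared scale, which is where the parameter-storage savings on embedded platforms come from; the trade-off is the quantization error bounded by half a step.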
15
Neugebauer A, Stingl K, Ivanov I, Wahl S. Influence of Systematic Gaze Patterns in Navigation and Search Tasks with Simulated Retinitis Pigmentosa. Brain Sci 2021; 11:brainsci11020223. [PMID: 33673036 PMCID: PMC7917782 DOI: 10.3390/brainsci11020223] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 02/08/2021] [Accepted: 02/09/2021] [Indexed: 12/29/2022] Open
Abstract
People living with a degenerative retinal disease such as retinitis pigmentosa often face difficulties navigating crowded places and avoiding obstacles because of their severely limited field of view. This study assessed the potential of different patterns of eye movement (scanning patterns) to (i) increase the effective area of perception of participants with a simulated retinitis pigmentosa scotoma and (ii) maintain or improve performance in visual tasks. Using a virtual reality headset with eye tracking, we simulated tunnel vision of 20° in diameter in visually healthy participants (n = 9). With this setup, we investigated how different scanning patterns influence the participants' dynamic field of view (the average area covered by the field of view over time) in an obstacle avoidance task and in a search task. One of the two tested scanning patterns yielded a significant improvement in both dynamic field of view (navigation 11%, search 7%) and collision avoidance (33%) compared with trials without the suggested scanning pattern. However, participants took significantly longer (31%) to finish the navigation task when applying this scanning pattern. No significant improvements in search task performance were found when applying scanning patterns.
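The dynamic-field-of-view metric lends itself to a small numerical illustration: the fraction of the visual field covered by the union of a moving tunnel-vision window's positions. The gaze traces, grid resolution, and field extent below are illustrative assumptions, not the study's measured data:

```python
# Rough sketch of "dynamic field of view": area covered over time by a
# restricted (tunnel-vision) field of view as gaze moves across the scene.
import math

def coverage(gaze_trace, fov_radius_deg, half_extent=60, step=1):
    """Fraction of a square visual field (degrees) covered by the union
    of circular FOV disks centered on the gaze positions."""
    covered = total = 0
    for x in range(-half_extent, half_extent + 1, step):
        for y in range(-half_extent, half_extent + 1, step):
            total += 1
            if any(math.hypot(x - gx, y - gy) <= fov_radius_deg
                   for gx, gy in gaze_trace):
                covered += 1
    return covered / total

fixation = [(0, 0)]                     # no scanning: gaze stays central
scanning = [(-20, 0), (0, 0), (20, 0)]  # toy horizontal scanning pattern

fov = 10  # radius in degrees, i.e. the 20-degree-diameter simulated tunnel
print(coverage(fixation, fov), coverage(scanning, fov))
```

Even this crude union-of-disks model shows why a scanning pattern enlarges the dynamic field of view relative to central fixation, at the cost of the extra time the eye movements take.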
Affiliation(s)
- Alexander Neugebauer
- ZEISS Vision Science Lab., Institute for Ophthalmic Research, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany;
- Katarina Stingl
- Center for Ophthalmology, University Eye Hospital, Eberhard Karls University Tuebingen, 72076 Tuebingen, Germany;
- Center for Rare Eye Diseases, Eberhard Karls University Tuebingen, 72076 Tuebingen, Germany
- Iliya Ivanov
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany;
- Siegfried Wahl
- ZEISS Vision Science Lab., Institute for Ophthalmic Research, Eberhard-Karls-University Tuebingen, 72076 Tuebingen, Germany;
- Carl Zeiss Vision International GmbH, 73430 Aalen, Germany;
16
Aydındoğan G, Kavaklı K, Şahin A, Artal P, Ürey H. Applications of augmented reality in ophthalmology [Invited]. Biomed Opt Express 2021; 12:511-538. [PMID: 33659087 PMCID: PMC7899512 DOI: 10.1364/boe.405026] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 12/08/2020] [Accepted: 12/10/2020] [Indexed: 05/21/2023]
Abstract
Throughout the last decade, augmented reality (AR) head-mounted displays (HMDs) have gradually become a substantial part of modern life, with increasing applications ranging from gaming and driver assistance to medical training. Owing to the tremendous progress in miniaturized displays, cameras, and sensors, HMDs are now used for the diagnosis, treatment, and follow-up of several eye diseases. In this review, we discuss the current state of the art as well as potential uses of AR in ophthalmology. This review includes the following topics: (i) underlying optical technologies, displays and trackers, holography, and adaptive optics; (ii) accommodation, 3D vision, and related problems such as presbyopia, amblyopia, strabismus, and refractive errors; (iii) AR technologies in lens and corneal disorders, in particular cataract and keratoconus; (iv) AR technologies in retinal disorders including age-related macular degeneration (AMD), glaucoma, color blindness, and vision simulators developed for other types of low-vision patients.
Affiliation(s)
- Güneş Aydındoğan
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Koray Kavaklı
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Afsun Şahin
- Koç University, School of Medicine and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey
- Pablo Artal
- Laboratorio de Óptica, Instituto Universitario de Investigación en Óptica y Nanofísica, Universidad de Murcia, Campus de Espinardo, E-30100 Murcia, Spain
- Hakan Ürey
- Koç University, Department of Electrical Engineering and Translational Medicine Research Center (KUTTAM), Istanbul 34450, Turkey