1
Hibbard PB. Virtual Reality for Vision Science. Curr Top Behav Neurosci 2023;65:131-159. PMID: 36723780. DOI: 10.1007/7854_2023_416.
Abstract
Virtual reality (VR) allows us to create visual stimuli that are both immersive and reactive. VR provides many new opportunities in vision science. In particular, it allows us to present wide field-of-view, immersive visual stimuli; for observers to actively explore the environments that we create; and for us to understand how visual information is used in the control of behaviour. In contrast with traditional psychophysical experiments, VR provides much greater flexibility in creating environments and tasks that are more closely aligned with our everyday experience. These benefits of VR are of particular value in developing our theories of the behavioural goals of the visual system and explaining how visual information is processed to achieve these goals. The use of VR in vision science presents a number of technical challenges, relating to how the available software and hardware limit our ability to accurately specify the visual information that defines our virtual environments and the interpretation of data gathered in experiments with a freely moving observer in a responsive environment.
Affiliation(s)
- Paul B Hibbard
- Department of Psychology, University of Essex, Colchester, UK.
2
Zhao C, Kim AS, Beams R, Badano A. Spatiotemporal image quality of virtual reality head mounted displays. Sci Rep 2022;12:20235. PMID: 36424434. PMCID: PMC9691731. DOI: 10.1038/s41598-022-24345-9.
Abstract
Virtual reality (VR) head mounted displays (HMDs) require both high spatial resolution and fast temporal response. However, methods to quantify the VR image quality in the spatiotemporal domain when motion exists are not yet standardized. In this study, we characterize the spatiotemporal capabilities of three VR devices: the HTC VIVE, VIVE Pro, and VIVE Pro 2 during smooth pursuit. A spatiotemporal model for VR HMDs is presented using measured spatial and temporal characteristics. Among the three evaluated headsets, the VIVE Pro 2 improves the display temporal performance using a fast 120 Hz refresh rate and pulsed emission with a small duty cycle of 5%. In combination with a high pixel resolution beyond 2k × 2k per eye, the VIVE Pro 2 achieves an improved spatiotemporal performance compared to the VIVE and VIVE Pro in the high spatial frequency range above 8 cycles per degree during smooth pursuit. The result demonstrates that reducing the display emission duty cycle to less than 20% is beneficial to mitigate motion blur in VR HMDs. Frame rate reduction (e.g., to below 60 Hz) of the input signal compared to the display refresh rate of 120 Hz yields replicated shadow images that can affect the image quality under motion. This work supports the regulatory science research efforts in development of testing methods to characterize the spatiotemporal performance of VR devices for medical use.
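The duty-cycle finding can be illustrated with a small back-of-envelope calculation (a sketch of the standard persistence-blur argument, not code or numbers from the paper): during smooth pursuit, the eye sweeps across the fixed pixel image while it is emitting, so perceived smear scales with the emission (hold) time per frame. The function name and example values are illustrative.

```python
# Illustrative arithmetic: smear extent during smooth pursuit on a pulsed
# display is roughly pursuit velocity x emission time per frame.

def motion_smear_deg(refresh_hz: float, duty_cycle: float,
                     pursuit_deg_per_s: float) -> float:
    """Eye travel (degrees) during one emission pulse."""
    hold_time_s = duty_cycle / refresh_hz  # emission duration per frame
    return pursuit_deg_per_s * hold_time_s

# VIVE Pro 2-like settings: 120 Hz, 5% duty cycle, 20 deg/s pursuit
smear = motion_smear_deg(120, 0.05, 20)
print(f"{smear * 60:.2f} arcmin of smear")  # shorter pulses -> less blur
```

At a fixed refresh rate, halving the duty cycle halves the smear, which is consistent with the abstract's observation that small duty cycles mitigate motion blur.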
Affiliation(s)
- Chumin Zhao
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
- Andrea S. Kim
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
- Ryan Beams
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
- Aldo Badano
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993, USA
3
Li Z, Pestourie R, Park JS, Huang YW, Johnson SG, Capasso F. Inverse design enables large-scale high-performance meta-optics reshaping virtual reality. Nat Commun 2022;13:2409. PMID: 35504864. PMCID: PMC9064995. DOI: 10.1038/s41467-022-29973-3.
Abstract
Meta-optics has achieved major breakthroughs in the past decade; however, conventional forward design faces challenges as functionality complexity and device size scale up. Inverse design aims to optimize meta-optics but has so far been limited by expensive brute-force numerical solvers to small devices, which are also difficult to realize experimentally. Here, we present a general inverse-design framework for aperiodic large-scale (20k × 20k λ²) complex meta-optics in three dimensions, which alleviates the computational cost of both simulation and optimization via a fast approximate solver and an adjoint method, respectively. Our framework naturally accounts for fabrication constraints via a surrogate model. In experiments, we demonstrate aberration-corrected metalenses working in the visible with high numerical aperture, polychromatic focusing, and large diameter up to the centimeter scale. Such large-scale meta-optics opens a new paradigm for applications, and we demonstrate its potential for future virtual-reality platforms by using a meta-eyepiece and a laser back-illuminated micro-liquid crystal display.
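The quoted device scale gives a sense of why brute-force solvers fail; a rough count of design unknowns (with an assumed discretization density, not the authors' numbers) makes the point:

```python
# Back-of-envelope (assumed numbers): design variables for a metasurface
# spanning 20k x 20k wavelengths, discretized at a chosen density.

def n_design_variables(side_wavelengths: float,
                       cells_per_wavelength: float) -> int:
    """Number of 2D design cells on the metasurface aperture."""
    cells_per_side = side_wavelengths * cells_per_wavelength
    return int(cells_per_side ** 2)

# e.g. 10 design cells per wavelength in each direction
n = n_design_variables(20_000, 10)
print(f"{n:.2e} unknowns")  # tens of billions: out of reach for brute force
```

Tens of billions of coupled unknowns is far beyond what a full 3D solve per optimization step can handle, which is why the paper pairs a fast approximate solver with adjoint gradients (one extra solve per step, independent of the number of design variables).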
Affiliation(s)
- Zhaoyi Li
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA.
- Raphaël Pestourie
- Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Joon-Suh Park
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Nanophotonics Research Center, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Yao-Wei Huang
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Department of Photonics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Steven G Johnson
- Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Federico Capasso
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
4
Beams R, Brown E, Cheng WC, Joyner JS, Kim AS, Kontson K, Amiras D, Baeuerle T, Greenleaf W, Grossmann RJ, Gupta A, Hamilton C, Hua H, Huynh TT, Leuze C, Murthi SB, Penczek J, Silva J, Spiegel B, Varshney A, Badano A. Evaluation Challenges for the Application of Extended Reality Devices in Medicine. J Digit Imaging 2022;35:1409-1418. PMID: 35469355. PMCID: PMC9582055. DOI: 10.1007/s10278-022-00622-x.
Abstract
Augmented and virtual reality devices are being actively investigated and implemented for a wide range of medical uses. However, significant gaps in the evaluation of these medical devices and applications hinder their regulatory evaluation. Addressing these gaps is critical to demonstrating the devices' safety and effectiveness. We outline the key technical and clinical evaluation challenges discussed during the US Food and Drug Administration's public workshop, "Medical Extended Reality: Toward Best Evaluation Practices for Virtual and Augmented Reality in Medicine" and future directions for evaluation method development. Evaluation challenges were categorized into several key technical and clinical areas. Finally, we highlight current efforts in the standards communities and illustrate connections between the evaluation challenges and the intended uses of the medical extended reality (MXR) devices. Participants concluded that additional research is needed to assess the safety and effectiveness of MXR devices across the use cases.
Affiliation(s)
- Ryan Beams
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Ellenor Brown
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Wei-Chung Cheng
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Janell S Joyner
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Andrea S Kim
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Kimberly Kontson
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
- Dimitri Amiras
- Department of Imaging, Imperial College Healthcare NHS Trust, London, UK
- Walter Greenleaf
- Stanford University Virtual Human Interaction Lab, Stanford University, Stanford, CA, USA
- Hong Hua
- James C. Wyant College of Optical Sciences, University of Arizona, Tucson, AZ, USA
- Christoph Leuze
- Department of Radiology, Stanford University, Stanford, CA, USA
- Sarah B Murthi
- R Adams Cowley Shock Trauma Center, University of Maryland Baltimore, Baltimore, MD, USA
- John Penczek
- NIST, Boulder, CO, USA
- University of Colorado, Boulder, CO, USA
- Jennifer Silva
- SentiAR, Inc., St Louis, MO, USA
- School of Medicine, Division of Pediatric Cardiology, Washington University, St Louis, MO, USA
- Brennan Spiegel
- Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Amitabh Varshney
- Department of Computer Science, University of Maryland, College Park, MD, USA
- Aldo Badano
- Center for Devices and Radiological Health, Food and Drug Administration, Silver Spring, MD, USA
5
Zhang Q, Song W, Hu X, Hu K, Weng D, Liu Y, Wang Y. Design of a near-eye display measurement system using an anthropomorphic vision imaging method. Opt Express 2021;29:13204-13218. PMID: 33985060. DOI: 10.1364/oe.421920.
Abstract
We developed a new near-eye display measurement system using anthropomorphic vision imaging to measure the key parameters of near-eye displays, including field-of-view (FOV), angular resolution, eye box, and virtual image depth. The characteristics of the human eye, such as pupil position, pupil size variation, accommodation function, and the high resolution of the fovea, are imitated by the proposed measurement system. A FOV scanning structure, together with a non-vignetting image-telecentric lens system, captures the virtual image from the near-eye display by imitating human eye function. As a proof-of-concept, a prototype device was used to obtain large-range, high-resolution measurements for key parameters of near-eye displays.
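Two of the parameters measured above, FOV and angular resolution, are linked by a standard display rule of thumb; a minimal sketch with hypothetical headset numbers (not values from the paper):

```python
# Sketch: average angular resolution of a near-eye display in pixels per
# degree (PPD), from pixel count and field of view along one axis.

def pixels_per_degree(pixels_across: int, fov_degrees: float) -> float:
    """Average PPD along one axis (ignores lens distortion)."""
    return pixels_across / fov_degrees

# Hypothetical headset: 2448 horizontal pixels over a 100-degree FOV
print(f"{pixels_per_degree(2448, 100):.1f} PPD")  # ~24.5 pixels per degree
```

This average hides the spatial variation the anthropomorphic system is built to capture: lens distortion concentrates pixels near the center, which is why the paper scans across the FOV rather than assuming uniform resolution.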
6
Penczek J, Boynton PA, Beams R, Sriram RD. Measurement Challenges for Medical Image Display Devices. J Digit Imaging 2021;34:458-472. PMID: 33846889. DOI: 10.1007/s10278-021-00438-1.
Abstract
Visual information is a critical component in the evaluation and communication of patient medical information. As display technologies have evolved, the medical community has sought to take advantage of advances in wider color gamuts, greater display portability, and more immersive imagery. These image quality enhancements have shown improvements in the quality of healthcare through greater efficiency, higher diagnostic accuracy, added functionality, enhanced training, and better health records. However, the display technology advances typically introduce greater complexity in the image workflow and display evaluation. This paper highlights some of the optical measurement challenges created by these new display technologies and offers possible pathways to address them.
Affiliation(s)
- J Penczek
- National Institute of Standards and Technology, Boulder, CO 80305, USA
- University of Colorado, Boulder, CO, USA
- P A Boynton
- National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
- R Beams
- Food and Drug Administration, Silver Spring, MD 20993, USA
- R D Sriram
- National Institute of Standards and Technology, Gaithersburg, MD 20899, USA
7
Li Z, Lin P, Huang YW, Park JS, Chen WT, Shi Z, Qiu CW, Cheng JX, Capasso F. Meta-optics achieves RGB-achromatic focusing for virtual reality. Sci Adv 2021;7:eabe4458. PMID: 33571130. PMCID: PMC7840120. DOI: 10.1126/sciadv.abe4458.
Abstract
Virtual and augmented realities are rapidly developing technologies, but their large-scale penetration will require lightweight optical components with small aberrations. We demonstrate millimeter-scale diameter, high-NA, submicron-thin, metasurface-based lenses that achieve diffraction-limited achromatic focusing of the primary colors by exploiting constructive interference of light from multiple zones and dispersion engineering. To illustrate the potential of this approach, we demonstrate a virtual reality system based on a home-built fiber scanning near-eye display.
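For context on "diffraction-limited" focusing, the classical Airy-radius formula (standard optics; the NA and wavelength below are illustrative, not taken from the paper) gives the best-case spot size achievable at a given numerical aperture:

```python
# Illustrative: diffraction-limited Airy spot radius r = 0.61 * lambda / NA
# for a lens of numerical aperture NA at wavelength lambda.

def airy_radius_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """First-zero radius of the Airy pattern, in nanometers."""
    return 0.61 * wavelength_nm / numerical_aperture

# Green light through a hypothetical NA = 0.7 metalens
print(f"{airy_radius_nm(532, 0.7):.0f} nm")  # sub-micron focal spot
```

"Diffraction-limited achromatic" thus means the measured focal spots for red, green, and blue each approach this physical lower bound at a common focal plane, rather than drifting apart due to chromatic dispersion.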
Affiliation(s)
- Zhaoyi Li
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA.
- Peng Lin
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Yao-Wei Huang
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Joon-Suh Park
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Nanophotonics Research Center, Korea Institute of Science and Technology, Seoul, Republic of Korea
- Wei Ting Chen
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Zhujun Shi
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Department of Physics, Harvard University, Cambridge, MA 02138, USA
- Cheng-Wei Qiu
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
- Ji-Xin Cheng
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Federico Capasso
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
8
Southworth MK, Silva JNA, Blume WM, Van Hare GF, Dalal AS, Silva JR. Performance Evaluation of Mixed Reality Display for Guidance During Transcatheter Cardiac Mapping and Ablation. IEEE J Transl Eng Health Med 2020;8:1900810. PMID: 32742821. PMCID: PMC7390021. DOI: 10.1109/jtehm.2020.3007031.
Abstract
Cardiac electrophysiology procedures present the physician with a wealth of 3D information, typically presented on fixed 2D monitors. New developments in wearable mixed reality displays offer the potential to simplify and enhance 3D visualization while providing hands-free, dynamic control of devices within the procedure room. Objective: This work aims to evaluate the performance and quality of a mixed reality system designed for intraprocedural use in cardiac electrophysiology. Methods: The performance criteria of the Enhanced Electrophysiology Visualization and Interaction System (ĒLVIS) mixed reality system, including image quality, hardware performance, and usability, were evaluated using existing display validation procedures adapted to the electrophysiology-specific use case. Additional performance and user validation were performed through a 10-patient, in-human observational study, the Engineering ĒLVIS (E2) Study. Results: The ĒLVIS system achieved acceptable frame rate, latency, and battery runtime with acceptable dynamic range and depth distortion as well as minimal geometric distortion. Bench testing results corresponded with physician feedback in the observational study, and potential improvements in geometric understanding were noted. Conclusion: The ĒLVIS system, based on current commercially available mixed reality hardware, is capable of meeting the hardware performance, image quality, and usability requirements of the electroanatomic mapping display for intraprocedural, real-time use in electrophysiology procedures. Verifying off-the-shelf mixed reality hardware for specific clinical use can accelerate the adoption of this transformative technology and provide novel visualization, understanding, and control of clinically relevant data in real time.
Affiliation(s)
- Jennifer N. Avari Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- George F. Van Hare
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Aarti S. Dalal
- Department of Pediatrics, School of Medicine, Washington University in St. Louis, St. Louis, MO 63130, USA
- Jonathan R. Silva
- SentiAR, Inc., St. Louis, MO 63108, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA