1
Mergen M, Graf N, Meyerheim M. Reviewing the current state of virtual reality integration in medical education - a scoping review. BMC Medical Education 2024; 24:788. PMID: 39044186; PMCID: PMC11267750; DOI: 10.1186/s12909-024-05777-5. Received 08/14/2023; accepted 07/15/2024.
Abstract
BACKGROUND In medical education, new technologies like Virtual Reality (VR) are increasingly integrated to enhance digital learning. Originally used to train surgical procedures, VR use cases now also cover emergency scenarios and non-technical skills such as clinical decision-making. This scoping review aims to provide an overview of VR in medical education, including requirements, advantages, disadvantages, as well as evaluation methods and respective study results, to establish a foundation for future VR integration into medical curricula. METHODS This review follows the updated JBI methodology for scoping reviews and adheres to the respective PRISMA extension. We included reviews in English or German published from 2012 to March 2022 that examine the use of VR in education for medical and nursing students, registered nurses, and qualified physicians. Data extraction focused on medical specialties, subjects, curricula, technical/didactic requirements, evaluation methods and study outcomes, as well as advantages and disadvantages of VR. RESULTS A total of 763 records were identified. After eligibility assessment, 69 studies were included. Nearly half of them were published between 2021 and 2022, predominantly from high-income countries. Most reviews focused on surgical training in laparoscopic and minimally invasive procedures (43.5%) and included studies with qualified physicians as participants (43.5%). Technical, didactic and organisational requirements were highlighted, and evaluations covering performance time and quality, skills acquisition and validity often showed positive outcomes. Accessibility, repeatability, cost-effectiveness, and improved skill development were reported as advantages, while financial challenges, technical limitations, lack of scientific evidence, and potential user discomfort were cited as disadvantages.
DISCUSSION Despite a high potential of VR in medical education, there are mandatory requirements for its integration into medical curricula addressing challenges related to finances, technical limitations, and didactic aspects. The reported lack of standardised and validated guidelines for evaluating VR training must be overcome to enable high-quality evidence for VR usage in medical education. Interdisciplinary teams of software developers, AI experts, designers, medical didactics experts and end users are required to design useful VR courses. Technical issues and compromised realism can be mitigated by further technological advancements.
Affiliation(s)
- Marvin Mergen
- Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
- Norbert Graf
- Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
- Marcel Meyerheim
- Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
2
Lastrucci A, Wandael Y, Barra A, Ricci R, Maccioni G, Pirrera A, Giansanti D. Exploring Augmented Reality Integration in Diagnostic Imaging: Myth or Reality? Diagnostics (Basel) 2024; 14:1333. PMID: 39001224; PMCID: PMC11240696; DOI: 10.3390/diagnostics14131333. Received 05/14/2024; revised 06/06/2024; accepted 06/18/2024.
Abstract
This study delves into the transformative potential of integrating augmented reality (AR) within imaging technologies, shedding light on this evolving landscape. Through a comprehensive narrative review, this research uncovers a wealth of literature exploring the intersection between AR and medical imaging, highlighting its growing prominence in healthcare. AR's integration offers a host of potential opportunities to enhance surgical precision, bolster patient engagement, and customize medical interventions. Moreover, when combined with technologies like virtual reality (VR), artificial intelligence (AI), and robotics, AR opens up new avenues for innovation in clinical practice, education, and training. However, amidst these promising prospects lie numerous unanswered questions and areas ripe for exploration. This study emphasizes the need for rigorous research to elucidate the clinical efficacy of AR-integrated interventions, optimize surgical workflows, and address technological challenges. As the healthcare landscape continues to evolve, sustained research efforts are crucial to fully realizing AR's transformative impact in medical imaging. Systematic reviews on AR in healthcare also overlook regulatory and developmental factors, particularly in regard to medical devices. These include compliance with standards, safety regulations, risk management, clinical validation, and developmental processes. Addressing these aspects will provide a comprehensive understanding of the challenges and opportunities in integrating AR into clinical settings, informing stakeholders about crucial regulatory and developmental considerations for successful implementation. Moreover, navigating the regulatory approval process requires substantial financial resources and expertise, presenting barriers to entry for smaller innovators. 
Collaboration across disciplines and concerted efforts to overcome barriers will be essential in navigating this frontier and harnessing the potential of AR to revolutionize healthcare delivery.
Affiliation(s)
- Andrea Lastrucci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Yannick Wandael
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Angelo Barra
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Renzo Ricci
- Department of Allied Health Professions, Azienda Ospedaliero-Universitaria Careggi, 50134 Florence, Italy
- Antonia Pirrera
- Centre TISP, Istituto Superiore di Sanità, 00161 Roma, Italy
3
Li Y, Gunasekeran DV, RaviChandran N, Tan TF, Ong JCL, Thirunavukarasu AJ, Polascik BW, Habash R, Khaderi K, Ting DSW. The next generation of healthcare ecosystem in the metaverse. Biomed J 2024; 47:100679. PMID: 38048990; PMCID: PMC11245972; DOI: 10.1016/j.bj.2023.100679. Received 08/08/2023; revised 11/04/2023; accepted 11/19/2023.
Abstract
The Metaverse has gained wide attention as the application interface for the next generation of the Internet. The potential of the Metaverse continues to grow as Web 3.0 development and adoption advance medicine and healthcare. We define the next generation of the interoperable healthcare ecosystem in the Metaverse. We examine the existing literature regarding the Metaverse, explain the technology framework for delivering an immersive experience, and provide a technical comparison of legacy and novel Metaverse platforms that are publicly released and in active use. The potential applications of different features of the Metaverse, including avatar-based meetings, immersive simulations, and social interactions, are examined for different roles, from patients to healthcare providers and healthcare organizations. Present challenges in the development of the Metaverse healthcare ecosystem are discussed, along with potential solutions, including capabilities requiring technological innovation, use cases requiring regulatory supervision, and sound governance. This proposed concept and framework of the Metaverse could potentially redefine the traditional healthcare system and enhance digital transformation in healthcare. Similar to AI technology at the beginning of this decade, real-world development and implementation of these capabilities are relatively nascent. Further pragmatic research is needed for the development of an interoperable healthcare ecosystem in the Metaverse.
Affiliation(s)
- Yong Li
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore; The Ophthalmology & Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore, Singapore
- Dinesh Visva Gunasekeran
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore; The Ophthalmology & Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ting Fang Tan
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Bryce W Polascik
- Wake Forest University School of Medicine, Winston-Salem, North Carolina, USA
- Ranya Habash
- Bascom Palmer Eye Institute, University of Miami, Florida, USA
- Khizer Khaderi
- Department of Ophthalmology, Stanford University, California, USA
- Daniel S W Ting
- Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore; The Ophthalmology & Visual Sciences Academic Clinical Programme, Duke-NUS Medical School, Singapore, Singapore; Department of Ophthalmology, Stanford University, California, USA
4
Chou DW, Annadata V, Willson G, Gray M, Rosenberg J. Augmented and Virtual Reality Applications in Facial Plastic Surgery: A Scoping Review. Laryngoscope 2024; 134:2568-2577. PMID: 37947302; DOI: 10.1002/lary.31178. Received 07/23/2023; revised 10/05/2023; accepted 10/27/2023.
Abstract
OBJECTIVES Augmented reality (AR) and virtual reality (VR) are emerging technologies with wide potential applications in health care. We performed a scoping review of the current literature on the application of AR and VR in the field of facial plastic and reconstructive surgery (FPRS). DATA SOURCES PubMed and Web of Science. REVIEW METHODS According to PRISMA guidelines, PubMed and Web of Science were used to perform a scoping review of literature regarding the utilization of AR and/or VR relevant to FPRS. RESULTS Fifty-eight articles spanning 1997-2023 met the criteria for review. Five overarching categories of AR and/or VR applications were identified across the articles: preoperative, intraoperative, training/education, feasibility, and technical. The following clinical areas were identified: burn, craniomaxillofacial surgery (CMF), face transplant, face lift, facial analysis, facial palsy, free flaps, head and neck surgery, injectables, locoregional flaps, mandible reconstruction, mandibuloplasty, microtia, skin cancer, oculoplastic surgery, rhinology, rhinoplasty, and trauma. CONCLUSION AR and VR have broad applications in FPRS. AR for surgical navigation may have the most emerging potential in CMF surgery and free flap harvest. VR is useful as distraction analgesia for patients and as an immersive training tool for surgeons. More data on these technologies' direct impact on objective clinical outcomes are still needed. LEVEL OF EVIDENCE N/A. Laryngoscope, 134:2568-2577, 2024.
Affiliation(s)
- David W Chou
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Emory University School of Medicine, Atlanta, Georgia, USA
- Vivek Annadata
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Gloria Willson
- Education and Research Services, Levy Library, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Mingyang Gray
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Joshua Rosenberg
- Division of Facial Plastic and Reconstructive Surgery, Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
5
Cho JS, Jotwani R, Chan S, Thaker DM, On JD, Yong RJ, Hao D. Extended reality navigation for pain procedures: a narrative review. Reg Anesth Pain Med 2024: rapm-2024-105352. PMID: 38754990; DOI: 10.1136/rapm-2024-105352. Received 01/29/2024; accepted 05/01/2024.
Abstract
BACKGROUND Extended reality (XR) technology, encompassing virtual reality, augmented reality, and mixed reality, has been widely studied for procedural navigation in surgical specialties. Similar to how ultrasound transformed regional anesthesia, XR has the potential to reshape how anesthesiologists and pain physicians perform procedures to relieve pain. OBJECTIVE This narrative review examines the clinical benefits of XR for navigation in various pain procedures. It defines key terms and concepts related to XR technology and explores characteristics of procedures that are most amenable to XR-based navigation. Finally, it suggests best practices for developing XR navigation systems and discusses the role of emerging technology in the future of XR in regional anesthesia and pain medicine. EVIDENCE REVIEW A search was performed across PubMed, Embase, and Cochrane Central Register of Controlled Trials for primary literature investigating the clinical benefits of XR navigation for pain procedures. FINDINGS Thirteen studies using XR for procedural navigation are included. The evidence includes randomized controlled trials, retrospective studies, and case series. CONCLUSIONS Early randomized controlled trials show potential for XR to improve procedural efficiency, but more comprehensive research is needed to determine if there are significant clinical benefits. Case reports demonstrate XR's utility in generating patient-specific navigation plans when difficult anatomy is encountered. Procedures that facilitate the generation and registration of XR images are most conducive to XR navigation, whereas those that rely on frequent re-imaging will continue to depend on traditional modes of navigation.
Affiliation(s)
- James Sungjai Cho
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- Rohan Jotwani
- Department of Anesthesiology, Weill Cornell Medicine, New York, New York, USA
- Devaunsh Manish Thaker
- Department of Anesthesiology, Perioperative Care & Pain Medicine, NYU Langone Health, New York, New York, USA
- Jungmin Daniel On
- Department of Anesthesiology, Rush University Medical Center, Chicago, Illinois, USA
- R Jason Yong
- Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
- David Hao
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
6
Petrella F, Rizzo SMR, Rampinelli C, Casiraghi M, Bagnardi V, Frassoni S, Pozzi S, Pappalardo O, Pravettoni G, Spaggiari L. Assessment of pulmonary vascular anatomy: comparing augmented reality by holograms versus standard CT images/reconstructions using surgical findings as reference standard. Eur Radiol Exp 2024; 8:57. PMID: 38724831; PMCID: PMC11082107; DOI: 10.1186/s41747-024-00458-w. Received 12/19/2023; accepted 03/07/2024.
Abstract
BACKGROUND We compared computed tomography (CT) images and holograms (HG) to assess the number of arteries of the lung lobes undergoing lobectomy and assessed ease of interpretation by radiologists and thoracic surgeons with both techniques. METHODS Patients scheduled for lobectomy for lung cancer were prospectively included and underwent CT for staging. A patient-specific three-dimensional model was generated and visualized in an augmented reality setting. One radiologist and one thoracic surgeon evaluated CT images and holograms to count lobar arteries, with the number of arteries recorded at surgery as the reference standard. Ease of vessel identification was graded according to a Likert scale. Wilcoxon signed-rank test and κ statistics were used. RESULTS Fifty-two patients were prospectively included. The two doctors detected the same number of arteries in 44/52 images (85%) and in 51/52 holograms (98%). The mean difference between the number of artery branches detected by surgery and CT images was 0.31 ± 0.98, whereas it was 0.09 ± 0.37 between surgery and HGs (p = 0.433). In particular, the mean difference in the number of arteries detected in the upper lobes was 0.67 ± 1.08 between surgery and CT images and 0.17 ± 0.46 between surgery and holograms (p = 0.029). Both the radiologist and the surgeon showed a higher agreement for holograms (κ = 0.99) than for CT (κ = 0.81) and found holograms easier to evaluate than CTs (p < 0.001). CONCLUSIONS Augmented reality by holograms is an effective tool for preoperative vascular anatomy assessment of the lungs, especially when evaluating the upper lobes, which are more prone to anatomical variations. TRIAL REGISTRATION ClinicalTrials.gov, NCT04227444. RELEVANCE STATEMENT: Preoperative evaluation of the lung lobe arteries through augmented reality may help thoracic surgeons carefully plan a lobectomy, thus contributing to optimized patient outcomes.
KEY POINTS • Preoperative assessment of the lung arteries may help surgical planning. • Lung artery detection by augmented reality was more accurate than that by CT images, particularly for the upper lobes. • The assessment of the lung arterial vessels was easier by using holograms than CT images.
Affiliation(s)
- Francesco Petrella
- Department of Thoracic Surgery, IRCCS European Institute of Oncology, Via Ripamonti 435, 20141, Milan, Italy
- Department of Oncology and Hemato-oncology, University of Milan, Via Festa del Perdono 7, 20122, Milan, Italy
- Department of Thoracic Surgery, Fondazione IRCCS San Gerardo dei Tintori, Via G. B. Pergolesi, 33, 20900, Monza, Italy
- Stefania Maria Rita Rizzo
- Clinic of Radiology, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale (EOC), Via Tesserete 46, 6900, Lugano, Switzerland
- Facoltà di Scienze biomediche, Università della Svizzera italiana (USI), Via Buffi 13, 6900, Lugano, Switzerland
- Cristiano Rampinelli
- Division of Radiology, IRCCS European Institute of Oncology, Via Ripamonti 435, 20141, Milan, Italy
- Monica Casiraghi
- Department of Thoracic Surgery, IRCCS European Institute of Oncology, Via Ripamonti 435, 20141, Milan, Italy
- Department of Oncology and Hemato-oncology, University of Milan, Via Festa del Perdono 7, 20122, Milan, Italy
- Vincenzo Bagnardi
- Department of Statistics and Quantitative Methods, University of Milano-Bicocca, 20126, Milan, Italy
- Samuele Frassoni
- Department of Statistics and Quantitative Methods, University of Milano-Bicocca, 20126, Milan, Italy
- Silvia Pozzi
- Artiness srl, Viale Cassala 57, 20143, Milan, Italy
- Gabriella Pravettoni
- Department of Oncology and Hemato-oncology, University of Milan, Via Festa del Perdono 7, 20122, Milan, Italy
- Lorenzo Spaggiari
- Department of Thoracic Surgery, IRCCS European Institute of Oncology, Via Ripamonti 435, 20141, Milan, Italy
- Department of Oncology and Hemato-oncology, University of Milan, Via Festa del Perdono 7, 20122, Milan, Italy
7
Waisberg E, Ong J, Masalkhi M, Zaman N, Sarker P, Lee AG, Tavakkoli A. Apple Vision Pro and why extended reality will revolutionize the future of medicine. Ir J Med Sci 2024; 193:531-532. PMID: 37365445; DOI: 10.1007/s11845-023-03437-z. Received 06/17/2023; accepted 06/19/2023.
Abstract
Apple unveiled its highly anticipated mixed-reality headset, the Apple Vision Pro, on June 5, 2023. The primary user interface relies on eye tracking, hand gestures, cameras, and sensors, eliminating the need for physical controllers such as keyboards or touch screens. The refined capabilities of this technology can be utilized for diverse purposes, including but not limited to medical and surgical education and remote medical consultations. All things considered, virtual reality is a highly promising area for the future of medicine, from improving medical education and vision screening to physical and psychological rehabilitation. We look forward to further innovations in this exciting area for years to come.
Affiliation(s)
- Ethan Waisberg
- University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Joshua Ong
- Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Mouayad Masalkhi
- University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Nasif Zaman
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Prithul Sarker
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Andrew G Lee
- Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
- Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
- The Houston Methodist Research Institute, Houston Methodist Hospital, Houston, TX, USA
- Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
- Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
- University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Texas A&M College of Medicine, Bryan, TX, USA
- Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
- Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
8
Ahmed Y, Reddy M, Mederos J, McDermott KC, Varma DK, Ludwig CA, Ahmed IK, Khaderi KR. Democratizing Health Care in the Metaverse: How Video Games can Monitor Eye Conditions Using the Vision Performance Index: A Pilot Study. Ophthalmology Science 2024; 4:100349. PMID: 37869021; PMCID: PMC10587622; DOI: 10.1016/j.xops.2023.100349. Received 10/23/2022; revised 05/23/2023; accepted 05/30/2023.
Abstract
Objective In a world where digital media is deeply ingrained in our everyday lives, there lies an opportunity to leverage interactions with technology for health and wellness. The Vision Performance Index (VPI) leverages natural human-technology interaction to evaluate visual function using visual, cognitive, and motor psychometric data over 5 domains: field of view, accuracy, multitracking, endurance, and detection. The purpose of this study was to describe a novel method of evaluating holistic visual function through video game-derived VPI score data in patients with specific ocular pathology. Design Prospective comparative analysis. Participants Patients with dry eye, glaucoma, cataract, diabetic retinopathy (DR), age-related macular degeneration, and healthy individuals. Methods The Vizzario Inc software development kit was integrated into 2 video game applications, Balloon Pop and Picture Perfect, which allowed for generation of VPI scores. Study participants were instructed to play rounds of each video game, from which a VPI score was compiled. Main Outcome Measures The primary outcome was the VPI overall score in each comparison group. VPI component scores, subcomponent scores, and psychophysical inputs were also compared. Results VPI scores were generated from 93 patients with macular degeneration (n = 10), cataract (n = 10), DR (n = 15), dry eye (n = 15), glaucoma (n = 16), and no ocular disease (n = 27). The VPI overall score was not significantly different across comparison groups. The VPI subcomponent "reaction accuracy" score was significantly greater in DR patients (106 ± 13.2) versus controls (96.9 ± 11.5), P = 0.0220. The VPI subcomponent "color detection" score was significantly lower in patients with DR (96.8 ± 2.5; P = 0.0217) and glaucoma (98.5 ± 6.3; P = 0.0093) compared with controls (101 ± 11). Several psychophysical measures differed significantly from controls: proportion correct (lower in DR and age-related macular degeneration), contrast errors (higher in cataract and DR), and saturation errors (higher in dry eye). Conclusions VPI scores can be generated from interactions of an ocular disease population with video games. The VPI may offer utility in monitoring select ocular diseases through evaluation of subcomponent and psychophysical input scores; however, future larger-scale studies must evaluate the validity of this tool. Financial Disclosures Proprietary or commercial disclosure may be found after the references.
Affiliation(s)
- Yusuf Ahmed
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Mohan Reddy
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Jacob Mederos
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Kyle C. McDermott
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
- Devesh K. Varma
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Prism Eye Institute, Oakville, Ontario, Canada
- Cassie A. Ludwig
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
- Iqbal K. Ahmed
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Prism Eye Institute, Oakville, Ontario, Canada
- Moran Eye Centre, University of Utah School of Medicine, Salt Lake City, Utah
- Khizer R. Khaderi
- Spencer Center for Vision Research, Byers Eye Institute, Stanford University, Palo Alto, California
- Vizzario, Inc, Venice, California
9
Woodall WJ, Chang EH, Toy S, Lee DR, Sherman JH. Does Extended Reality Simulation Improve Surgical/Procedural Learning and Patient Outcomes When Compared With Standard Training Methods?: A Systematic Review. Simul Healthc 2024; 19:S98-S111. PMID: 38240622; DOI: 10.1097/sih.0000000000000767.
Abstract
INTRODUCTION The use of extended reality (XR) technologies, including virtual, augmented, and mixed reality, has increased within surgical and procedural training programs. Few studies have assessed experiential learning- and patient-based outcomes using XR compared with standard training methods. METHODS As a working group for the Society for Simulation in Healthcare, we used Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines and a PICO strategy to perform a systematic review of 4238 articles to assess the effectiveness of XR technologies compared with standard training methods. Outcomes were grouped into knowledge, time-to-completion, technical proficiency, reactions, and patient outcomes. Because of study heterogeneity, a meta-analysis was not feasible. RESULTS Thirty-two studies met eligibility criteria: 18 randomized controlled trials, 7 comparative studies, and 7 systematic reviews. Outcomes of most studies included Kirkpatrick levels of evidence I-III (reactions, knowledge, and behavior), while few reported level IV outcomes (patient). The overall risk of bias was low. With few exceptions, included studies showed XR technology to be more effective than standard training methods in improving objective skills and performance, shortening procedure time, and receiving more positive learner ratings. However, XR use did not show significant differences in gained knowledge. CONCLUSIONS Surgical or procedural XR training may improve technical skill development among trainees and is generally favored over standard training methods. However, there should be an additional focus on how skill development translates to clinically relevant outcomes. We recommend longitudinal studies to examine retention and transfer of training to clinical settings, methods to improve timely, adaptive feedback for deliberate practice, and cost analyses.
Affiliation(s)
- William J Woodall
- Medical College of Georgia (W.J.W.), Augusta, GA; Department of Otolaryngology (E.H.C.), University of Arizona, Tucson, AZ; Departments of Basic Science Education and Health Systems & Implementation Science (S.T.), Virginia Tech Carilion School of Medicine, Roanoke, VA; University of Michigan School of Nursing (D.R.L.), Ann Arbor, MI; WVU Rockefeller Neuroscience Institute (J.H.S.), Morgantown, WV
10
Masalkhi M, Waisberg E, Ong J, Zaman N, Sarker P, Lee AG, Tavakkoli A. Apple Vision Pro for Ophthalmology and Medicine. Ann Biomed Eng 2023; 51:2643-2646. PMID: 37332003; DOI: 10.1007/s10439-023-03283-1. Received 06/09/2023; accepted 06/13/2023.
Abstract
The emergence of new technologies continues to break barriers and transform the way we perceive and interact with the world. In this scientific article, we explore the potential impact of the new Apple XR headset on revolutionizing accessibility for individuals with visual deficits. With its rumored exceptional 4K displays per eye and 5000 nits of brightness, this headset has the potential to enhance the visual experience and provide a new level of accessibility for users with visual impairments. We delve into the technical specifications, discuss the implications for accessibility, and envision how this groundbreaking technology could open up new possibilities for individuals with visual deficits.
Affiliation(s)
- Mouayad Masalkhi
  - University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Ethan Waisberg
  - University College Dublin School of Medicine, Belfield, Dublin 4, Ireland
- Joshua Ong
  - Michigan Medicine, University of Michigan, Ann Arbor, MI, USA
- Nasif Zaman
  - Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Prithul Sarker
  - Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
- Andrew G Lee
  - Center for Space Medicine, Baylor College of Medicine, Houston, TX, USA
  - Department of Ophthalmology, Blanton Eye Institute, Houston Methodist Hospital, Houston, TX, USA
  - Houston Methodist Hospital, The Houston Methodist Research Institute, Houston, TX, USA
  - Departments of Ophthalmology, Neurology, and Neurosurgery, Weill Cornell Medicine, New York, NY, USA
  - Department of Ophthalmology, University of Texas Medical Branch, Galveston, TX, USA
  - University of Texas MD Anderson Cancer Center, Houston, TX, USA
  - Texas A&M College of Medicine, Bryan, TX, USA
  - Department of Ophthalmology, The University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Alireza Tavakkoli
  - Human-Machine Perception Laboratory, Department of Computer Science and Engineering, University of Nevada, Reno, Reno, NV, USA
11
Wong KA, Ang BCH, Gunasekeran DV, Husain R, Boon J, Vikneson K, Tan ZPQ, Tan GSW, Wong TY, Agrawal R. Remote Perimetry in a Virtual Reality Metaverse Environment for Out-of-Hospital Functional Eye Screening Compared Against the Gold Standard Humphrey Visual Fields Perimeter: Proof-of-Concept Pilot Study. J Med Internet Res 2023; 25:e45044. [PMID: 37856179 PMCID: PMC10623222 DOI: 10.2196/45044] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 04/01/2023] [Accepted: 05/31/2023] [Indexed: 10/20/2023] Open
Abstract
BACKGROUND The growing global burden of visual impairment necessitates better population eye screening for early detection of eye diseases. However, accessibility to testing is often limited and centralized at in-hospital settings. Furthermore, many eye screening programs were disrupted by the COVID-19 pandemic, presenting an urgent need for out-of-hospital solutions. OBJECTIVE This study investigates the performance of a novel remote perimetry application designed in a virtual reality metaverse environment to enable functional testing in community-based and primary care settings. METHODS This was a prospective observational study investigating the performance of a novel remote perimetry solution in comparison with the gold standard Humphrey visual field (HVF) perimeter. Subjects received a comprehensive ophthalmologic assessment, HVF perimetry, and remote perimetry testing. The primary outcome measure was the agreement in the classification of overall perimetry result normality by the HVF (Swedish interactive threshold algorithm-fast) and testing with the novel algorithm. Secondary outcome measures included concordance of individual testing points and perimetry topographic maps. RESULTS We recruited 10 subjects with an average age of 59.6 (range 28-81) years. Of these, 7 (70%) were male and 3 (30%) were female. The agreement in the classification of overall perimetry results was high (9/10, 90%). The pointwise concordance in the automated classification of individual test points was 83.3% (8.2%; range 75%-100%). In addition, there was good perimetry topographic concordance with the HVF in all subjects. CONCLUSIONS Remote perimetry in a metaverse environment showed good concordance with the gold standard HVF perimeter and could potentially enable functional eye screening in out-of-hospital settings.
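The "pointwise concordance" secondary outcome above is simply the fraction of individual test points that both perimeters classify the same way. A minimal sketch of that calculation follows; the helper `pointwise_concordance` and the point classifications are invented for illustration and are not taken from the study.

```python
# Illustrative sketch of pointwise concordance between two perimetry
# tests: the fraction of individual test points that both devices
# classify identically (1 = normal, 0 = abnormal). Data are invented.

def pointwise_concordance(hvf_points, vr_points):
    """Return the fraction of test points on which both perimeters agree."""
    if len(hvf_points) != len(vr_points):
        raise ValueError("test grids must have the same number of points")
    agree = sum(a == b for a, b in zip(hvf_points, vr_points))
    return agree / len(hvf_points)

hvf = [1, 1, 0, 1, 0, 1, 1, 1]  # gold-standard HVF classifications
vr  = [1, 1, 0, 0, 0, 1, 1, 1]  # remote VR perimeter classifications
concordance = pointwise_concordance(hvf, vr)  # 7 of 8 points agree
```

Averaging this per-subject fraction across participants would yield a summary figure of the kind the abstract reports.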
Affiliation(s)
- Kang-An Wong
  - National University of Singapore, Singapore, Singapore
  - National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Bryan Chin Hou Ang
  - National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Dinesh Visva Gunasekeran
  - National University of Singapore, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Raffles Medical Group, Singapore, Singapore
  - Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
- Rahat Husain
  - Singapore Eye Research Institute, Singapore, Singapore
  - Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
  - School of Medicine, University of New South Wales, Sydney, Australia
- Joewee Boon
  - National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Krishna Vikneson
  - School of Medicine, University of New South Wales, Sydney, Australia
- Zyna Pei Qi Tan
  - National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
- Gavin Siew Wei Tan
  - Singapore Eye Research Institute, Singapore, Singapore
  - Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
  - Singapore National Eye Center, Singapore General Hospital, Singapore, Singapore
- Tien Yin Wong
  - Singapore Eye Research Institute, Singapore, Singapore
  - Eye-ACP, Duke-NUS Medical School, Singapore, Singapore
  - Singapore National Eye Center, Singapore General Hospital, Singapore, Singapore
  - Tsinghua Medicine, Tsinghua University, Beijing, China
- Rupesh Agrawal
  - National Healthcare Group Eye Institute, Tan Tock Seng Hospital, Singapore, Singapore
  - Singapore Eye Research Institute, Singapore, Singapore
  - Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
12
Tan TF, Thirunavukarasu AJ, Jin L, Lim J, Poh S, Teo ZL, Ang M, Chan RVP, Ong J, Turner A, Karlström J, Wong TY, Stern J, Ting DSW. Artificial intelligence and digital health in global eye health: opportunities and challenges. Lancet Glob Health 2023; 11:e1432-e1443. [PMID: 37591589 DOI: 10.1016/s2214-109x(23)00323-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 06/26/2023] [Accepted: 07/04/2023] [Indexed: 08/19/2023]
Abstract
Global eye health is defined as the degree to which vision, ocular health, and function are maximised worldwide, thereby optimising overall wellbeing and quality of life. Improving eye health is a global priority as a key to unlocking human potential by reducing the morbidity burden of disease, increasing productivity, and supporting access to education. Although extraordinary progress fuelled by global eye health initiatives has been made over the last decade, there remain substantial challenges impeding further progress. The accelerated development of digital health and artificial intelligence (AI) applications provides an opportunity to transform eye health, from facilitating and increasing access to eye care to supporting clinical decision making with an objective, data-driven approach. Here, we explore the opportunities and challenges presented by digital health and AI in global eye health and describe how these technologies could be leveraged to improve global eye health. AI, telehealth, and emerging technologies have great potential, but require specific work to overcome barriers to implementation. We suggest that a global digital eye health task force could facilitate coordination of funding, infrastructural development, and democratisation of AI and digital health to drive progress forwards in this domain.
Affiliation(s)
- Ting Fang Tan
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Arun J Thirunavukarasu
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Corpus Christi College, University of Cambridge, Cambridge, UK; School of Clinical Medicine, University of Cambridge, Cambridge, UK
- Liyuan Jin
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Joshua Lim
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Stanley Poh
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Zhen Ling Teo
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore
- Marcus Ang
  - Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- R V Paul Chan
  - Illinois Eye and Ear Infirmary, University of Illinois College of Medicine, Urbana-Champaign, IL, USA
- Jasmine Ong
  - Pharmacy Department, Singapore General Hospital, Singapore
- Angus Turner
  - Lions Eye Institute, University of Western Australia, Nedlands, WA, Australia
- Jonas Karlström
  - Duke-NUS Medical School, National University of Singapore, Singapore
- Tien Yin Wong
  - Singapore National Eye Centre, Singapore General Hospital, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China
- Jude Stern
  - The International Agency for the Prevention of Blindness, London, UK
- Daniel Shu-Wei Ting
  - Artificial Intelligence and Digital Innovation Research Group, Singapore Eye Research Institute, Singapore; Singapore National Eye Centre, Singapore General Hospital, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
13
Csoba I, Kunkli R. Rendering algorithms for aberrated human vision simulation. Vis Comput Ind Biomed Art 2023; 6:5. [PMID: 36930412 PMCID: PMC10023823 DOI: 10.1186/s42492-023-00132-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 03/06/2023] [Indexed: 03/18/2023] Open
Abstract
Vision-simulated imagery, the process of generating images that mimic the human visual system, is a valuable tool with a wide spectrum of possible applications, including visual acuity measurements, personalized planning of corrective lenses and surgeries, vision-correcting displays, vision-related hardware development, and extended reality discomfort reduction. A critical property of human vision is that it is imperfect because of highly influential wavefront aberrations that vary from person to person. This study provides an overview of existing computational image generation techniques that properly simulate human vision in the presence of wavefront aberrations. These algorithms typically apply ray tracing with a detailed description of the simulated eye, or utilize the point-spread function of the eye to perform a convolution on the input image. Based on this description of the vision simulation techniques, several of their characteristic features have been evaluated and some potential application areas and research directions have been outlined.
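The second family of techniques mentioned above, PSF-based simulation, amounts to convolving the input image with the eye's point-spread function. A minimal sketch, assuming a tiny grayscale image and a hypothetical normalized 3x3 blur kernel standing in for a measured human PSF:

```python
# Minimal sketch of PSF-convolution vision simulation: blur an input
# image by convolving it with the eye's point-spread function (PSF).
# The 3x3 kernel below is a hypothetical normalized blur, NOT a
# measured human PSF; real simulators use much larger measured kernels.

def convolve2d(image, psf):
    """Convolve a 2-D image (list of rows) with a PSF kernel,
    using replicate-edge padding at the borders."""
    h, w = len(image), len(image[0])
    kh, kw = len(psf), len(psf[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(kh):
                for i in range(kw):
                    sy = min(max(y + j - oy, 0), h - 1)  # clamp to edge
                    sx = min(max(x + i - ox, 0), w - 1)
                    acc += image[sy][sx] * psf[j][i]
            out[y][x] = acc
    return out

# Hypothetical normalized blur kernel (weights sum to 1).
PSF = [[1/16, 2/16, 1/16],
       [2/16, 4/16, 2/16],
       [1/16, 2/16, 1/16]]

sharp = [[0.0] * 5 for _ in range(5)]
sharp[2][2] = 1.0                 # a single bright point
blurred = convolve2d(sharp, PSF)  # the point spreads into the PSF shape
```

Real simulators use wavelength- and depth-dependent PSFs; the replicate-edge padding here is just one common boundary choice.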
Affiliation(s)
- István Csoba
  - Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
  - Doctoral School of Informatics, University of Debrecen, Debrecen 4028, Hungary
- Roland Kunkli
  - Faculty of Informatics, University of Debrecen, Debrecen 4028, Hungary
14
Tlili A, Huang R, Kinshuk. Metaverse for climbing the ladder toward ‘Industry 5.0’ and ‘Society 5.0’? SERVICE INDUSTRIES JOURNAL 2023. [DOI: 10.1080/02642069.2023.2178644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
Affiliation(s)
- Ahmed Tlili
  - Smart Learning Institute of Beijing Normal University, Beijing, People’s Republic of China
- Ronghuai Huang
  - Smart Learning Institute of Beijing Normal University, Beijing, People’s Republic of China
- Kinshuk
  - College of Information, University of North Texas, Denton, TX, USA
15
Abstract
PURPOSE OF THE REVIEW Neuro-ophthalmologists rapidly adopted telehealth during the COVID-19 pandemic to minimize disruption to patient care. This article reviews recent research on tele-neuro-ophthalmology adoption, current limitations, and potential use beyond the pandemic. The review considers how digital transformation, including machine learning and augmented reality, may be applied to future iterations of tele-neuro-ophthalmology. RECENT FINDINGS Telehealth utilization has been sustained among neuro-ophthalmologists throughout the pandemic. Adoption of tele-neuro-ophthalmology may provide solutions to subspecialty workforce shortage, patient access, physician wellness, and trainee educational needs within the field of neuro-ophthalmology. Digital transformation technologies have the potential to augment tele-neuro-ophthalmology care delivery by providing automated workflow solutions, home-based visual testing and therapies, and trainee education via simulators. Tele-neuro-ophthalmology use has and will continue beyond the COVID-19 pandemic. Digital transformation technologies, when applied to telehealth, will drive and revolutionize the next phase of tele-neuro-ophthalmology adoption and use in the years to come.
Affiliation(s)
- Kevin E Lai
  - Department of Ophthalmology, Indiana University School of Medicine, Indianapolis, IN, USA
  - Ophthalmology Service, Richard L. Roudebush Veterans Administration Medical Center, Indianapolis, IN, USA
  - Neuro-Ophthalmology Service, Midwest Eye Institute, Carmel, IN, USA
- Melissa W Ko
  - Department of Ophthalmology, Indiana University School of Medicine, Indianapolis, IN, USA
  - Departments of Neurology and Neurosurgery, Indiana University School of Medicine, Indianapolis, IN, USA
16
Xin N, Wu X, Chen Z, Wei R, Saito Y, Lachkar S, Salvicchi A, Fumimoto S, Drevet G, Xu Z, Huang K, Tang H. A new preoperative localization of pulmonary nodules guided by mixed reality: a pilot study of an animal model. Transl Lung Cancer Res 2023; 12:150-157. [PMID: 36762064 PMCID: PMC9903086 DOI: 10.21037/tlcr-22-884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Accepted: 01/03/2023] [Indexed: 01/12/2023]
Abstract
Background With the popularity of high-resolution computed tomography (HRCT), more and more pulmonary nodules are being discovered. Video-assisted thoracoscopic surgery (VATS) has become the first choice for surgical treatment of pulmonary nodules, and accurate preoperative localization is crucial for successful resection in VATS. Many preoperative localization methods exist at present, but each has certain disadvantages. This study aimed to evaluate the feasibility and safety of mixed reality (MR)-guided pulmonary nodule localization, a new method that could benefit patients to a greater extent. Methods In an animal model of pulmonary nodule localization, 28 pulmonary nodules were located under MR guidance. We recorded the localization accuracy, localization time, number of insertion attempts, and incidence of localization-related complications. Results All 28 nodules were successfully located: the deviation of MR-guided localization was 5.71±2.59 mm, the localization time was 8.07±1.44 min, and the number of insertion attempts was 1. Pneumothorax and localizer dislodgement each occurred in one case. Conclusions Since preoperative localization is critical for VATS resection of pulmonary nodules, we investigated a new localization method. Our study indicates that MR-guided localization of pulmonary nodules is feasible and safe and is worthy of further research and promotion. We have also registered corresponding clinical trials to further investigate this technique and improve our understanding of it.
Affiliation(s)
- Ning Xin
  - Department of Thoracic Surgery, PLA 960th Hospital, Jinan, China
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Xiaoyu Wu
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Zihao Chen
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Rongqiang Wei
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Yuichi Saito
  - Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
- Samy Lachkar
  - Department of Pulmonology, Thoracic Oncology and Respiratory Intensive Care, Hôpital Charles Nicolle, CHU de Rouen, Rouen Cedex, France
- Satoshi Fumimoto
  - Department of Thoracic and Cardiovascular Surgery, Osaka Medical and Pharmaceutical University, Osaka, Japan
- Gabrielle Drevet
  - Department of Thoracic Surgery, Lung and Heart-Lung Transplantation, Louis Pradel Hospital, Hospices Civils de Lyon, Lyon, France
- Zhifei Xu
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Kenan Huang
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
- Hua Tang
  - Department of Thoracic Surgery, Shanghai Changzheng Hospital, Navy Military Medical University, Shanghai, China
17
Ramesh PV, Joshua T, Ray P, Devadas AK, Raj PM, Ramesh SV, Ramesh MK, Rajasekaran R. Holographic elysium of a 4D ophthalmic anatomical and pathological metaverse with extended reality/mixed reality. Indian J Ophthalmol 2022; 70:3116-3121. [PMID: 35918983 DOI: 10.4103/ijo.ijo_120_22] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Extended reality is one of the leading cutting-edge technologies that has not yet fully set foot into the field of ophthalmology. The use of extended reality technology, especially in ophthalmic education and counseling, could revolutionize teaching and counseling on a whole new level. We have used this novel technology to create a holographic museum of various anatomical structures, such as the eyeball, cerebral venous system, cerebral arterial system, cranial nerves, and various parts of the brain, in fine detail. These four-dimensional (4D) ophthalmic holograms created by us (patent pending) are cost-effectively constructed from TrueColor confocal images to serve as a new-age immersive 4D pedagogical and counseling tool for gameful learning and counseling, respectively. To our knowledge, this concept has not been reported in the literature before.
Affiliation(s)
- Prasanna V Ramesh
  - Medical Officer, Department of Glaucoma and Research, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Tensingh Joshua
  - Head of the Department, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Prajnya Ray
  - Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Aji K Devadas
  - Consultant Optometrist, Department of Optometry and Visual Science, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Pragash M Raj
  - Multimedia Consultant, Mahathma Centre of Moving Images, Trichy, Tamil Nadu, India
- Shruthy V Ramesh
  - Medical Officer, Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Meena K Ramesh
  - Head of the Department of Cataract and Refractive Surgery, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
- Ramesh Rajasekaran
  - Chief Medical Officer, Mahathma Eye Hospital Private Limited, Trichy, Tamil Nadu, India
18
Tan TF, Li Y, Lim JS, Gunasekeran DV, Teo ZL, Ng WY, Ting DS. Metaverse and Virtual Health Care in Ophthalmology: Opportunities and Challenges. Asia Pac J Ophthalmol (Phila) 2022; 11:237-246. [PMID: 35772084 DOI: 10.1097/apo.0000000000000537] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
The outbreak of the coronavirus disease 2019 has further increased the urgent need for digital transformation within health care settings, with the use of artificial intelligence/deep learning, the internet of things, telecommunication networks/virtual platforms, and blockchain. The recent advent of the metaverse, an interconnected online universe combining the augmented, virtual, and mixed reality described several years ago, presents a new era of immersive, real-time experiences that enhance human-to-human social interaction and connection. In health care and ophthalmology, the creation of virtual environments with three-dimensional (3D) space and avatars could be particularly useful in patient-fronting platforms (eg, telemedicine platforms), operational uses (eg, meeting organization), digital education (eg, simulated medical and surgical education), diagnostics, and therapeutics. On the other hand, the implementation and adoption of these emerging virtual health care technologies will require multipronged approaches to ensure interoperability with real-world clinical settings, user-friendliness of the technologies, and clinical efficiency, while complying with clinical, health economics, regulatory, and cybersecurity standards. To serve this urgent need, it is important for the eye community to continue to innovate, invent, adapt, and harness the unique abilities of virtual health care technology to provide better eye care worldwide.
Affiliation(s)
- Ting Fang Tan
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Yong Li
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
- Jane Sujuan Lim
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Zhen Ling Teo
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
- Daniel Sw Ting
  - Singapore National Eye Centre, Singapore Eye Research Institute, Singapore, Singapore
  - Duke-NUS Medical School, Singapore, Singapore
19
Abstract
Ophthalmology is a medical profession with a teaching tradition that has developed throughout history. Although ophthalmologists are sometimes assumed merely to prescribe contact lenses, they in fact handle more than half of eye-related enhancements, diagnoses, and treatments. The training of qualified ophthalmologists is generally carried out in traditional settings, with a supervisor and a student, and is based on the use of animal eyes or artificial eye models. These models have significant disadvantages: they are not immersive and are extremely expensive and difficult to acquire. Consequently, technologies related to Augmented Reality (AR) and Virtual Reality (VR) are rapidly and prominently positioning themselves in the medical sector, and their use in ophthalmology is growing exponentially, both for the training of professionals and for the assistance and recovery of patients. At the same time, it is necessary to highlight and analyze the developments that have made use of game technologies for the teaching of ophthalmology and the results that have been obtained. This systematic review investigates software and hardware applications developed exclusively for educational environments related to ophthalmology and provides an analysis of other related tools. In addition, the advantages and disadvantages, limitations, and challenges involved in the use of virtual reality, augmented reality, and game technologies in this field are presented.
20
The Role of Technology in Ophthalmic Surgical Education During COVID-19. CURRENT SURGERY REPORTS 2022; 10:239-245. [PMID: 36404795 PMCID: PMC9662128 DOI: 10.1007/s40137-022-00334-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/26/2022] [Indexed: 11/16/2022]
Abstract
Purpose of Review To describe the effect of COVID-19 on ophthalmic training programs and to review the various roles of technology in ophthalmology surgical education, including virtual platforms, novel remote learning curricula, and the use of surgical simulators. Recent Findings COVID-19 caused significant disruption to in-person clinical and surgical patient encounters. Ophthalmology trainees worldwide faced surgical training challenges due to social distancing restrictions, trainee redeployment, and reduction in surgical case volume. Virtual platforms, such as Zoom and Microsoft Teams, were widely used during the pandemic to conduct remote teaching sessions. Novel virtual wet lab and dry lab curricula were developed. Training programs found utility in virtual reality surgical simulators, such as the Eyesi, to substitute for experience lost from live patient surgical cases. Summary Although several of these technologies were incorporated into ophthalmology surgical training programs prior to COVID-19, the pandemic highlighted the importance of developing a formal surgical curriculum that can be delivered virtually. Novel telementoring, collaboration between training institutions, and hybrid formats of didactic and practical training sessions should be continued. Future research should investigate the utility of augmented reality and artificial intelligence for trainee learning.
21
Li T, Li C, Zhang X, Liang W, Chen Y, Ye Y, Lin H. Augmented Reality in Ophthalmology: Applications and Challenges. Front Med (Lausanne) 2021; 8:733241. [PMID: 34957138 PMCID: PMC8703032 DOI: 10.3389/fmed.2021.733241] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 11/19/2021] [Indexed: 12/16/2022] Open
Abstract
Augmented reality (AR) has developed rapidly and has been implemented in many fields, such as medicine, maintenance, and cultural heritage. Unlike other specialties, ophthalmology connects closely with AR, since most AR systems are based on vision systems. Here we summarize the applications and challenges of AR in ophthalmology and provide insights for further research. First, we illustrate the structure of a standard AR system and present the essential hardware. Second, we systematically introduce applications of AR in ophthalmology, including therapy, education, and clinical assistance. In conclusion, there is still substantial room for development, which will require further research effort; applications in diagnosis and protection might be worth exploring. Although hardware limitations currently restrict the development of AR in ophthalmology, AR will realize its potential and play an important role in ophthalmology in the future as the technology rapidly develops and research deepens.
Affiliation(s)
- Tongkeng Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Chenghao Li
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xiayin Zhang
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Guangdong Eye Institute, Department of Ophthalmology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Wenting Liang
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Yongxin Chen
  - School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, China
- Yunpeng Ye
  - Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Haotian Lin
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
  - Center for Precision Medicine, Sun Yat-sen University, Guangzhou, China