1
Sun W, Jiang X, Dong X, Yu G, Feng Z, Shuai L. The evolution of simulation-based medical education research: From traditional to virtual simulations. Heliyon 2024; 10:e35627. PMID: 39170203; PMCID: PMC11337719; DOI: 10.1016/j.heliyon.2024.e35627.
Abstract
Background Simulation-based medical education (SBME) is a widely used method in medical education. This study aims to analyze publications on SBME in terms of countries, institutions, journals, authors, and keyword co-occurrence, and to identify trends in SBME research. Methods We retrieved publications on SBME from the Web of Science Core Collection (WoSCC) database from its inception to January 27, 2024. Microsoft Excel 2019, CiteSpace, and VOSviewer were used to identify the distribution of countries, journals, and authors, and to determine the research hotspots. Results We retrieved a total of 11,272 publications from the WoSCC. The number of documents published in 2022 was the highest in recent decades. The USA, the UK, and Canada were the three key contributors to this field. The University of Toronto, Stanford University, and Harvard Medical School were the leading institutions by publication count. Konge, Lars was the most productive author, while McGaghie, William C was the most highly cited author. BMC Medical Education had the highest number of publications among journals. The foundational themes of SBME are "patient simulation," "extended reality," and "surgical skills." Conclusions SBME has attracted considerable attention in medical education. The research hotspot is gradually shifting from traditional simulations with real people or mannequins to virtual, digitally based simulations and online education. Further studies are needed to elucidate the mechanisms of SBME and to make its utilization more rational.
Affiliation(s)
- Weiming Sun
  - Department of Rehabilitation Medicine, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - Postdoctoral Innovation Practice Base, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - The First Clinical Medical College, Jiangxi Medical College, Nanchang University, Nanchang 330031, China
- Xing Jiang
  - Department of Rehabilitation Medicine, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - The First Clinical Medical College, Jiangxi Medical College, Nanchang University, Nanchang 330031, China
- Xiangli Dong
  - Department of Psychosomatic Medicine, The Second Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
- Guohua Yu
  - Department of Rehabilitation Medicine, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - The First Clinical Medical College, Jiangxi Medical College, Nanchang University, Nanchang 330031, China
- Zhen Feng
  - Postdoctoral Innovation Practice Base, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - The First Clinical Medical College, Jiangxi Medical College, Nanchang University, Nanchang 330031, China
- Lang Shuai
  - Department of Rehabilitation Medicine, The First Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang 330006, China
  - The First Clinical Medical College, Jiangxi Medical College, Nanchang University, Nanchang 330031, China
2
Leiderman YI, Gerber MJ, Hubschman JP, Yi D. Artificial intelligence applications in ophthalmic surgery. Curr Opin Ophthalmol 2024:00055735-990000000-00187. PMID: 39145488; DOI: 10.1097/icu.0000000000001033.
Abstract
PURPOSE OF REVIEW Healthcare technologies incorporating artificial intelligence (AI) tools are experiencing rapid growth in static-image-based applications such as diagnostic imaging. Given the proliferation of AI technologies created for video-based imaging, ophthalmic microsurgery is likely to benefit significantly from the application of emerging technologies to multiple facets of the care of the surgical patient. RECENT FINDINGS Proof-of-concept research and early-phase clinical trials are in progress for AI-based surgical technologies that aim to provide preoperative planning and decision support, intraoperative image enhancement, surgical guidance, surgical decision-making support, tactical assistive technologies, enhanced surgical training and assessment of trainee progress, and semi-autonomous tool control or autonomous elements of surgical procedures. SUMMARY The proliferation of AI-based technologies in static imaging in clinical ophthalmology, the continued refinement of AI tools designed for video-based applications, and the development of AI-based digital tools in allied surgical fields suggest that ophthalmic surgery is poised for the integration of AI into the microsurgical paradigm.
Affiliation(s)
- Yannek I Leiderman
  - Departments of Ophthalmology and Bioengineering, University of Illinois Chicago
- Matthew J Gerber
  - Department of Ophthalmology, University of California at Los Angeles, Los Angeles, California, USA
- Jean-Pierre Hubschman
  - Department of Ophthalmology, University of California at Los Angeles, Los Angeles, California, USA
- Darvin Yi
  - Departments of Ophthalmology and Bioengineering, University of Illinois Chicago
3
Mergen M, Graf N, Meyerheim M. Reviewing the current state of virtual reality integration in medical education - a scoping review. BMC Medical Education 2024; 24:788. PMID: 39044186; PMCID: PMC11267750; DOI: 10.1186/s12909-024-05777-5.
Abstract
BACKGROUND In medical education, new technologies like Virtual Reality (VR) are increasingly integrated to enhance digital learning. Originally used to train surgical procedures, use cases now also cover emergency scenarios and non-technical skills like clinical decision-making. This scoping review aims to provide an overview of VR in medical education, including requirements, advantages, disadvantages, and evaluation methods with their respective study results, to establish a foundation for future VR integration into medical curricula. METHODS This review follows the updated JBI methodology for scoping reviews and adheres to the respective PRISMA extension. We included reviews in English or German, published from 2012 to March 2022, that examine the use of VR in education for medical and nursing students, registered nurses, and qualified physicians. Data extraction focused on medical specialties, subjects, curricula, technical/didactic requirements, evaluation methods, and study outcomes, as well as advantages and disadvantages of VR. RESULTS A total of 763 records were identified. After eligibility assessment, 69 studies were included. Nearly half of them were published between 2021 and 2022, predominantly from high-income countries. Most reviews focused on surgical training in laparoscopic and minimally invasive procedures (43.5%) and included studies with qualified physicians as participants (43.5%). Technical, didactic, and organisational requirements were highlighted, and evaluations covering performance time and quality, skills acquisition, and validity often showed positive outcomes. Accessibility, repeatability, cost-effectiveness, and improved skill development were reported as advantages, while financial challenges, technical limitations, lack of scientific evidence, and potential user discomfort were cited as disadvantages.
DISCUSSION Despite a high potential of VR in medical education, there are mandatory requirements for its integration into medical curricula addressing challenges related to finances, technical limitations, and didactic aspects. The reported lack of standardised and validated guidelines for evaluating VR training must be overcome to enable high-quality evidence for VR usage in medical education. Interdisciplinary teams of software developers, AI experts, designers, medical didactics experts and end users are required to design useful VR courses. Technical issues and compromised realism can be mitigated by further technological advancements.
Affiliation(s)
- Marvin Mergen
  - Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
- Norbert Graf
  - Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
- Marcel Meyerheim
  - Department of Pediatric Oncology and Hematology, Faculty of Medicine, Saarland University, Building 9, Kirrberger Strasse 100, 66421, Homburg, Germany
4
Lim JI, Rachitskaya AV, Hallak JA, Gholami S, Alam MN. Artificial intelligence for retinal diseases. Asia Pac J Ophthalmol (Phila) 2024; 13:100096. PMID: 39209215; DOI: 10.1016/j.apjo.2024.100096.
Abstract
PURPOSE To discuss the worldwide applications and potential impact of artificial intelligence (AI) for the diagnosis, management and analysis of treatment outcomes of common retinal diseases. METHODS We performed an online literature review, using PubMed Central (PMC), of AI applications to evaluate and manage retinal diseases. Search terms included AI for screening, diagnosis, monitoring, management, and treatment outcomes for age-related macular degeneration (AMD), diabetic retinopathy (DR), retinal surgery, retinal vascular disease, retinopathy of prematurity (ROP) and sickle cell retinopathy (SCR). Additional search terms included AI and color fundus photographs, optical coherence tomography (OCT), and OCT angiography (OCTA). We included original research articles and review articles. RESULTS Research studies have investigated and shown the utility of AI for screening for diseases such as DR, AMD, ROP, and SCR. Research studies using validated and labeled datasets confirmed AI algorithms could predict disease progression and response to treatment. Studies showed AI facilitated rapid and quantitative interpretation of retinal biomarkers seen on OCT and OCTA imaging. Research articles suggest AI may be useful for planning and performing robotic surgery. Studies suggest AI holds the potential to help lessen the impact of socioeconomic disparities on the outcomes of retinal diseases. CONCLUSIONS AI applications for retinal diseases can assist the clinician, not only by disease screening and monitoring for disease recurrence but also in quantitative analysis of treatment outcomes and prediction of treatment response. The public health impact on the prevention of blindness from DR, AMD, and other retinal vascular diseases remains to be determined.
Affiliation(s)
- Jennifer I Lim
  - University of Illinois at Chicago, College of Medicine, Department of Ophthalmology and Visual Sciences, Chicago, IL, United States
- Aleksandra V Rachitskaya
  - Department of Ophthalmology at Case Western Reserve University, Cleveland Clinic Lerner College of Medicine, Cleveland Clinic Cole Eye Institute, United States
- Joelle A Hallak
  - University of Illinois at Chicago, College of Medicine, Department of Ophthalmology and Visual Sciences, Chicago, IL, United States
- Sina Gholami
  - University of North Carolina at Charlotte, United States
- Minhaj N Alam
  - University of North Carolina at Charlotte, United States
5
Wang N, Yang S, Gao Q, Jin X. Immersive teaching using virtual reality technology to improve ophthalmic surgical skills for medical postgraduate students. Postgrad Med 2024; 136:487-495. PMID: 38819302; DOI: 10.1080/00325481.2024.2363171.
Abstract
Medical education is primarily based on practical schooling and the accumulation of experience and skills, which are important for the growth and development of young ophthalmic surgeons. However, present learning and refresher methods are constrained by several factors. Virtual reality (VR) technology has contributed considerably to medical training worldwide, providing convenient and practical auxiliary value for students' choice of sub-specialty. Moreover, it offers previously inaccessible surgical step training, scenario simulations, and immersive evaluation exams. This paper outlines the current applications of VR immersive teaching methods for ophthalmic surgery interns.
Affiliation(s)
- Ning Wang
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Shuo Yang
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Qi Gao
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
- Xiuming Jin
  - Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
6
Gordon M, Daniel M, Ajiboye A, Uraiby H, Xu NY, Bartlett R, Hanson J, Haas M, Spadafore M, Grafton-Clarke C, Gasiea RY, Michie C, Corral J, Kwan B, Dolmans D, Thammasitboon S. A scoping review of artificial intelligence in medical education: BEME Guide No. 84. Medical Teacher 2024; 46:446-470. PMID: 38423127; DOI: 10.1080/0142159x.2024.2314198.
Abstract
BACKGROUND Artificial Intelligence (AI) is rapidly transforming healthcare, and there is a critical need for a nuanced understanding of how AI is reshaping teaching, learning, and educational practice in medical education. This review aimed to map the literature regarding AI applications in medical education, core areas of findings, potential candidates for formal systematic review, and gaps for future research. METHODS This rapid scoping review, conducted over 16 weeks, employed Arksey and O'Malley's framework and adhered to STORIES and BEME guidelines. A systematic and comprehensive search across PubMed/MEDLINE, EMBASE, and MedEdPublish was conducted without date or language restrictions. Publications included in the review spanned undergraduate, graduate, and continuing medical education, encompassing both original studies and perspective pieces. Data were charted by multiple author pairs and synthesized into various thematic maps and charts, ensuring a broad and detailed representation of the current landscape. RESULTS The review synthesized 278 publications, with a majority (68%) from North American and European regions. The studies covered diverse AI applications in medical education, such as AI for admissions, teaching, assessment, and clinical reasoning. The review highlighted AI's varied roles, from augmenting traditional educational methods to introducing innovative practices, and underscored the urgent need for ethical guidelines in AI's application in medical education. CONCLUSION The current literature has been charted. The findings underscore the need for ongoing research to explore uncharted areas and address potential risks associated with AI use in medical education. This work serves as a foundational resource for educators, policymakers, and researchers in navigating AI's evolving role in medical education. A framework to support future high-utility reporting, the FACETS framework, is proposed.
Affiliation(s)
- Morris Gordon
  - School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
  - Blackpool Hospitals NHS Foundation Trust, Blackpool, UK
- Michelle Daniel
  - School of Medicine, University of California, San Diego, San Diego, CA, USA
- Aderonke Ajiboye
  - School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Hussein Uraiby
  - Department of Cellular Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Nicole Y Xu
  - School of Medicine, University of California, San Diego, San Diego, CA, USA
- Rangana Bartlett
  - Department of Cognitive Science, University of California, San Diego, CA, USA
- Janice Hanson
  - Department of Medicine and Office of Education, School of Medicine, Washington University in Saint Louis, Saint Louis, MO, USA
- Mary Haas
  - Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Maxwell Spadafore
  - Department of Emergency Medicine, University of Michigan Medical School, Ann Arbor, MI, USA
- Colin Michie
  - School of Medicine and Dentistry, University of Central Lancashire, Preston, UK
- Janet Corral
  - Department of Medicine, University of Nevada Reno, School of Medicine, Reno, NV, USA
- Brian Kwan
  - School of Medicine, University of California, San Diego, San Diego, CA, USA
- Diana Dolmans
  - School of Health Professions Education, Faculty of Health, Maastricht University, Maastricht, The Netherlands
- Satid Thammasitboon
  - Center for Research, Innovation and Scholarship in Health Professions Education, Baylor College of Medicine, Houston, TX, USA
7
Zhu Y, Salowe R, Chow C, Li S, Bastani O, O'Brien JM. Advancing Glaucoma Care: Integrating Artificial Intelligence in Diagnosis, Management, and Progression Detection. Bioengineering (Basel) 2024; 11:122. PMID: 38391608; PMCID: PMC10886285; DOI: 10.3390/bioengineering11020122.
Abstract
Glaucoma, the leading cause of irreversible blindness worldwide, comprises a group of progressive optic neuropathies requiring early detection and lifelong treatment to preserve vision. Artificial intelligence (AI) technologies are now demonstrating transformative potential across the spectrum of clinical glaucoma care. This review summarizes current capabilities, future outlooks, and practical translation considerations. For enhanced screening, algorithms analyzing retinal photographs and machine learning models synthesizing risk factors can identify high-risk patients needing diagnostic workup and close follow-up. To augment definitive diagnosis, deep learning techniques detect characteristic glaucomatous patterns by interpreting results from optical coherence tomography, visual field testing, fundus photography, and other ocular imaging. AI-powered platforms also enable continuous monitoring, with algorithms that analyze longitudinal data alerting physicians about rapid disease progression. By integrating predictive analytics with patient-specific parameters, AI can also guide precision medicine for individualized glaucoma treatment selections. Advances in robotic surgery and computer-based guidance demonstrate AI's potential to improve surgical outcomes and surgical training. Beyond the clinic, AI chatbots and reminder systems could provide patient education and counseling to promote medication adherence. However, thoughtful approaches to clinical integration, usability, diversity, and ethical implications remain critical to successfully implementing these emerging technologies. This review highlights AI's vast capabilities to transform glaucoma care while summarizing key achievements, future prospects, and practical considerations to progress from bench to bedside.
Affiliation(s)
- Yan Zhu
  - Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe
  - Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Caven Chow
  - Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Shuo Li
  - Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Osbert Bastani
  - Department of Computer & Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M O'Brien
  - Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
8
Tsoutsanis P, Tsoutsanis A. Evaluation of Large language model performance on the Multi-Specialty Recruitment Assessment (MSRA) exam. Comput Biol Med 2024; 168:107794. PMID: 38043471; DOI: 10.1016/j.compbiomed.2023.107794.
Abstract
INTRODUCTION AI-powered platforms have gained prominence in medical education and training, offering diverse applications from surgical performance assessment to exam preparation. This research paper examines the capabilities of Large Language Models (LLMs), including Llama 2, Google Bard, Bing Chat, and ChatGPT-3.5, in answering multiple-choice questions from the Clinical Problem Solving (CPS) paper of the Multi-Specialty Recruitment Assessment (MSRA) exam. METHODS Using a dataset of 100 CPS questions from ten subject categories, we assessed the LLMs' performance against medical doctors preparing for the exam. RESULTS Bing Chat outperformed all other LLMs and even surpassed human users from the Qbank question bank. Conversely, Llama 2's performance was inferior to that of human users. Google Bard and ChatGPT-3.5 did not exhibit statistically significant differences in correct response rates compared to human candidates. Pairwise comparisons demonstrated Bing Chat's significant superiority over Llama 2, Google Bard, and ChatGPT-3.5. However, no significant differences were found between Llama 2 and Google Bard, between Llama 2 and ChatGPT-3.5, or between Google Bard and ChatGPT-3.5. DISCUSSION Freely available LLMs have already demonstrated that they can match or even outperform human users in answering MSRA exam questions. Bing Chat emerged as a particularly strong performer. The study also highlights the potential for enhancing LLMs' medical knowledge acquisition through tailored fine-tuning; medical-knowledge-tailored LLMs, such as Med-PaLM, have already shown promising results. CONCLUSION We provide valuable insights into LLMs' competence in answering medical MCQs and their potential integration into medical education and assessment processes.
Affiliation(s)
- Panagiotis Tsoutsanis
  - Northern Care Alliance NHS Foundation Trust, Rochdale Eye Unit, Rochdale Infirmary, Greater Manchester, UK
  - Department of Education, University of Oxford, Oxford, UK
9
Masoumian Hosseini M, Sadat Manzari Z, Gazerani A, Masoumian Hosseini ST, Gazerani A, Rohaninasab M. Can gamified surgical sets improve surgical instrument recognition and student performance retention in the operating room? A multi-institutional experimental crossover study. BMC Medical Education 2023; 23:907. PMID: 38031011; PMCID: PMC10688061; DOI: 10.1186/s12909-023-04868-z.
Abstract
INTRODUCTION Surgery requires a high degree of precision, speed, and concentration, requirements that traditional teaching methods cannot fully meet. Therefore, in this study, we investigated students' ability to recognize surgical instruments in the operating room using gamified surgical sets and a crossover design. METHODS The study was a multi-institutional quasi-experimental crossover design involving a three-arm intervention (with gender-specific block randomisation: Groups A, B, and C), a pre-test, and three post-tests. A total of 90 students were divided into three groups of 30 participants each. The surgical sets were taught for one semester through game-based instruction and traditional teaching, after which three OSCE tests were administered at different times and locations. Using one-way ANOVA, OSCE results were compared across the game, traditional, and control groups. The effectiveness of the intervention was tested in each group by repeated measures. RESULTS The pre-test scores of the three groups did not differ significantly. In the OSCE tests, Groups A and B performed similarly; however, there was a significant difference between training through games and training in the traditional way. There was no significant difference between OSCE tests 2 and 3 in the game-based training group, indicating that what was learned was retained, while in the traditional training group, OSCE 3 test scores declined significantly. Furthermore, repeated measures confirmed the effectiveness of game-based training. CONCLUSION In this study, gamification proved highly effective in helping learners acquire practical skills and led to more sustainable learning.
Affiliation(s)
- Mohsen Masoumian Hosseini
  - Department of E-Learning in Medical Science, Tehran University of Medical Sciences, Tehran, Iran
  - CyberPatient Research Affiliate, Interactive Health International, Department of Surgery, University of British Columbia, Vancouver, Canada
- Zahra Sadat Manzari
  - Nursing and Midwifery Care Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
- Azam Gazerani
  - Department of Nursing, Neyshabur University of Medical Sciences, Neyshabur, Iran
- Seyedeh Toktam Masoumian Hosseini
  - CyberPatient Research Affiliate, Interactive Health International, Department of Surgery, University of British Columbia, Vancouver, Canada
  - Department of Nursing, School of Nursing and Midwifery, Torbat Heydariyeh University of Medical Sciences, Torbat Heydariyeh, Iran
- Akram Gazerani
  - Student Research Committee, School of Nursing and Midwifery, Mashhad University of Medical Sciences, Mashhad, Iran
- Mehrdad Rohaninasab
  - Department of Operating Room, Neyshabur University of Medical Sciences, Neyshabur, Iran
10
Felsenreich DM, Yang W, Taskin HE, Abdelbaki T, Shahabi S, Zakeri R, Talishinskiy T, Gero D, Neimark A, Chiappetta S. Young-IFSO Bariatric/Metabolic Surgery Training and Education Survey. Obes Surg 2023; 33:2816-2830. PMID: 37505341; DOI: 10.1007/s11695-023-06751-8.
Abstract
BACKGROUND This international Young-IFSO survey aims to address variations, trends, and obstacles in bariatric/metabolic surgery (BMS) training globally, since expectations and resources differ among young surgeons. METHODS The Young-IFSO scientific team designed a confidential online questionnaire with 50 questions analyzing individual BMS training. The survey link was sent to all IFSO/ASMBS members and shared on social media. All Young-IFSO members (aged up to 45 years) were invited to participate between 16 December 2022 and 4 February 2023. RESULTS A total of 240 respondents from 61 countries took the survey. Most respondents (70.24%) described their current position as consultant surgeon, with an average of 5.43 years' experience in BMS, and 55% work in a bariatric center of excellence. More than 50% of the respondents performed fewer than 10 BMS procedures, or none at all, during residency. Preparation of the stomach and stapling during sleeve gastrectomy (SG) were the first steps performed, and SG was the first BMS completed as first operating surgeon by most respondents (74%). In total, 201 (84.45%) surgeons reported performing scientific work. Most respondents (90.13%) reported that surgical mentorship had improved their surgical skills. CONCLUSION This international survey underlines the lack of a standardized global surgical curriculum for BMS during residency and shows that SG is the single most performed procedure by young surgeons. These data underline the importance of advancing surgical education in BMS; accredited fellowship programs should be offered globally to maintain and raise the quality of BMS.
Collapse
Affiliation(s)
- Daniel M Felsenreich: Division of Visceral Surgery, Department of General Surgery, Medical University of Vienna, Vienna, Austria
- Wah Yang: Department of Metabolic and Bariatric Surgery, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Halit E Taskin: Bariatric Surgery Center, Department of Surgery, Cerrahpasa Faculty of Medicine, Istanbul University-Cerrahpasa, Istanbul, Turkey
- Tamer Abdelbaki: General Surgery Department, Alexandria University, Faculty of Medicine, Alexandria, Egypt
- Shahab Shahabi: Division of Minimally Invasive and Bariatric Surgery, Department of Surgery, Minimally Invasive Surgery Research Center, Rasool-E Akram Hospital, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Roxanna Zakeri: Department of Upper GI Surgery, University College London Hospital NHS Foundation Trust, London, UK
- Toghrul Talishinskiy: Bariatric Surgery Minimally Invasive and Robotic Surgery, St. Joseph's University Medical Center, Paterson, USA
- Daniel Gero: Department of Surgery and Transplantation, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Alexandr Neimark: Almazov National Medical Research Centre, Saint-Petersburg, Russia
- Sonja Chiappetta: Bariatric and Metabolic Surgery Unit, Department of General Surgery, Ospedale Evangelico Betania, Via Argine 604, 80147, Naples, Italy
11
Iqbal J, Cortés Jaimes DC, Makineni P, Subramani S, Hemaida S, Thugu TR, Butt AN, Sikto JT, Kaur P, Lak MA, Augustine M, Shahzad R, Arain M. Reimagining Healthcare: Unleashing the Power of Artificial Intelligence in Medicine. Cureus 2023; 15:e44658. [PMID: 37799217 PMCID: PMC10549955 DOI: 10.7759/cureus.44658] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/04/2023] [Indexed: 10/07/2023] Open
Abstract
Artificial intelligence (AI) has opened new medical avenues and revolutionized diagnostic and therapeutic practices, allowing healthcare providers to overcome significant challenges associated with cost, disease management, accessibility, and treatment optimization. Prominent AI technologies such as machine learning (ML) and deep learning (DL) have immensely influenced diagnostics, patient monitoring, novel pharmaceutical discoveries, drug development, and telemedicine. Significant innovations and improvements in disease identification and early intervention have been made using AI-generated algorithms for clinical decision support systems and disease prediction models. AI has remarkably impacted clinical drug trials by amplifying research into drug efficacy, adverse events, and candidate molecular design. AI's precision in analyzing patients' genetic, environmental, and lifestyle factors has led to individualized treatment strategies. During the COVID-19 pandemic, AI-assisted telemedicine set a precedent for remote healthcare delivery and patient follow-up. Moreover, AI-generated applications and wearable devices have allowed ambulatory monitoring of vital signs. However, apart from being immensely transformative, AI's contribution to healthcare is subject to ethical and regulatory concerns. Data protection and algorithm transparency in AI systems should adhere strictly to ethical principles. Rigorous governance frameworks should be in place before incorporating AI into mental health interventions through AI-operated chatbots, medical education enhancements, and virtual reality-based training. The role of AI in medical decision-making has certain limitations, underscoring the importance of hands-on experience. Therefore, reaching an optimal balance between AI's capabilities and ethical considerations, to ensure impartial and neutral performance in healthcare applications, is crucial. This narrative review focuses on AI's impact on healthcare and the importance of ethical and balanced incorporation to realize its full potential.
Affiliation(s)
- Diana Carolina Cortés Jaimes: Epidemiology, Universidad Autónoma de Bucaramanga, Bucaramanga, COL; Medicine, Pontificia Universidad Javeriana, Bogotá, COL
- Pallavi Makineni: Medicine, All India Institute of Medical Sciences, Bhubaneswar, IND
- Sachin Subramani: Medicine and Surgery, Employees' State Insurance Corporation (ESIC) Medical College, Gulbarga, IND
- Sarah Hemaida: Internal Medicine, Istanbul Okan University, Istanbul, TUR
- Thanmai Reddy Thugu: Internal Medicine, Sri Padmavathi Medical College for Women, Sri Venkateswara Institute of Medical Sciences (SVIMS), Tirupati, IND
- Amna Naveed Butt: Medicine/Internal Medicine, Allama Iqbal Medical College, Lahore, PAK
- Pareena Kaur: Medicine, Punjab Institute of Medical Sciences, Jalandhar, IND
- Roheen Shahzad: Medicine, Combined Military Hospital (CMH) Lahore Medical College and Institute of Dentistry, Lahore, PAK
- Mustafa Arain: Internal Medicine, Civil Hospital Karachi, Karachi, PAK
12
Sindal MD, Ratra A, Ratra D. Enhancing surgical training - Role of simulators and mentors. Indian J Ophthalmol 2023; 71:3260-3261. [PMID: 37602619 PMCID: PMC10565920 DOI: 10.4103/ijo.ijo_1798_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/22/2023] Open
Affiliation(s)
- Manavi D Sindal: Vitreoretina Services, Aravind Eye Hospital, Pondicherry, India
- Aashna Ratra: Department of Ophthalmology, Stanley Medical College, Chennai, Tamil Nadu, India
- Dhanashree Ratra: Department of Vitreoretinal Diseases, Medical Research Foundation, Sankara Nethralaya, Chennai, Tamil Nadu, India
13
Pan-Doh N, Sikder S, Woreta FA, Handa JT. Using the language of surgery to enhance ophthalmology surgical education. Surg Open Sci 2023; 14:52-59. [PMID: 37528917 PMCID: PMC10387608 DOI: 10.1016/j.sopen.2023.07.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2023] [Accepted: 07/09/2023] [Indexed: 08/03/2023] Open
Abstract
Background Currently, surgical education utilizes a combination of the apprentice model, wet-lab training, and simulation, but due to reliance on subjective data, the quality of teaching and assessment can be variable. The "language of surgery," an established concept in engineering literature whose incorporation into surgical education has been limited, is defined as the description of each surgical maneuver using quantifiable metrics. This concept is different from the traditional notion of surgical language, generally thought of as the qualitative definitions and terminology used by surgeons. Methods A literature search was conducted through April 2023 using MEDLINE/PubMed using search terms to investigate wet-lab, virtual simulators, and robotics in ophthalmology, along with the language of surgery and surgical education. Articles published before 2005 were mostly excluded, although a few were included on a case-by-case basis. Results Surgical maneuvers can be quantified by leveraging technological advances in virtual simulators, video recordings, and surgical robots to create a language of surgery. By measuring and describing maneuver metrics, the learning surgeon can adjust surgical movements in an appropriately graded fashion that is based on objective and standardized data. The main contribution is outlining a structured education framework that details how surgical education could be improved by incorporating the language of surgery, using ophthalmology surgical education as an example. Conclusion By describing each surgical maneuver in quantifiable, objective, and standardized terminology, a language of surgery can be created that can be used to learn, teach, and assess surgical technical skill with an approach that minimizes bias. Key message The "language of surgery," defined as the quantification of each surgical movement's characteristics, is an established concept in the engineering literature. 
Using ophthalmology surgical education as an example, we describe a structured education framework based on the language of surgery to improve surgical education. Classifications Surgical education, robotic surgery, ophthalmology, education standardization, computerized assessment, simulations in teaching. Competencies Practice-Based Learning and Improvement.
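The quantification the authors call the "language of surgery" can be illustrated with a short sketch. The function below is hypothetical (not from the paper): it assumes instrument-tip positions tracked at a fixed sampling interval and computes three commonly used maneuver metrics, path length, mean speed, and an integrated-squared-jerk smoothness proxy.

```python
import numpy as np

def maneuver_metrics(positions, dt):
    """Summarize one surgical maneuver from tracked instrument-tip positions.

    positions: (N, 3) array of tip coordinates sampled every dt seconds.
    Returns path length, mean speed, and a jerk-based smoothness score
    (a smaller score indicates smoother motion).
    """
    p = np.asarray(positions, dtype=float)
    steps = np.diff(p, axis=0)                      # displacement per sample
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    mean_speed = path_length / (dt * (len(p) - 1))
    jerk = np.diff(p, n=3, axis=0) / dt**3          # third time derivative
    smoothness = float(np.sqrt((jerk ** 2).sum()))  # integrated squared jerk proxy
    return {"path_length": path_length,
            "mean_speed": mean_speed,
            "smoothness": smoothness}
```

Metrics like these, logged per maneuver from a simulator, video analysis, or a surgical robot, are the kind of objective, standardized data the framework proposes for graded feedback.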
Affiliation(s)
- Nathan Pan-Doh, Shameema Sikder, Fasika A. Woreta, James T. Handa: Wilmer Eye Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
14
Curran VR, Xu X, Aydin MY, Meruvia-Pastor O. Use of Extended Reality in Medical Education: An Integrative Review. MEDICAL SCIENCE EDUCATOR 2023; 33:275-286. [PMID: 36569366 PMCID: PMC9761044 DOI: 10.1007/s40670-022-01698-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 11/28/2022] [Indexed: 06/17/2023]
Abstract
UNLABELLED Extended reality (XR) has emerged as an innovative simulation-based learning modality. An integrative review was undertaken to explore the nature of evidence, usage, and effectiveness of XR modalities in medical education. One hundred and thirty-three (N = 133) studies and articles were reviewed. XR technologies are commonly reported in surgical and anatomical education, and the evidence suggests XR may be as effective as traditional medical education teaching methods and, potentially, a more cost-effective means of curriculum delivery. Further research comparing different variations of XR technologies and their best applications in medical education and training is required to advance the field. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s40670-022-01698-4.
Affiliation(s)
- Vernon R. Curran: Office of Professional and Educational Development, Faculty of Medicine, Health Sciences Centre, Memorial University of Newfoundland, Room H2982, St. John’s, NL A1B 3V6, Canada
- Xiaolin Xu: Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Mustafa Yalin Aydin: Department of Computer Sciences, Memorial University of Newfoundland, St. John’s, NL, Canada
- Oscar Meruvia-Pastor: Department of Computer Sciences, Memorial University of Newfoundland, St. John’s, NL, Canada
15
Park JJ, Tiefenbach J, Demetriades AK. The role of artificial intelligence in surgical simulation. FRONTIERS IN MEDICAL TECHNOLOGY 2022; 4:1076755. [PMID: 36590155 PMCID: PMC9794840 DOI: 10.3389/fmedt.2022.1076755] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 11/21/2022] [Indexed: 12/15/2022] Open
Abstract
Artificial Intelligence (AI) plays an integral role in enhancing the quality of surgical simulation, which is increasingly becoming a popular tool for enriching the training experience of a surgeon. This spans the spectrum from facilitating preoperative planning to intraoperative visualisation and guidance, ultimately with the aim of improving patient safety. Although arguably still in the early stages of widespread clinical application, AI technology enables personal evaluation and provides personalised feedback in surgical training simulations. Several forms of surgical visualisation technologies currently in use for anatomical education and presurgical assessment rely on different AI algorithms. However, while it is promising to see clinical examples and technological reports attesting to the efficacy of AI-supported surgical simulators, barriers to widespread commercialisation of such devices and software remain complex and multifactorial. High implementation and production costs, scarcity of reports evidencing the superiority of such technology, and intrinsic technological limitations remain at the forefront. As AI technology is key to driving the future of surgical simulation, this paper reviews the literature delineating its current state, challenges, and prospects. In addition, a consolidated list of FDA/CE-approved AI-powered medical devices for surgical simulation is presented, to shed light on the existing gap between academic achievements and the universal commercialisation of AI-enabled simulators. We call for further clinical assessment of AI-supported surgical simulators to support novel regulatory-body-approved devices and usher in a new era of surgical education.
Affiliation(s)
- Jay J. Park: Department of General Surgery, Norfolk and Norwich University Hospital, Norwich, United Kingdom; Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom
- Jakov Tiefenbach: Neurological Institute, Cleveland Clinic, Cleveland, OH, United States
- Andreas K. Demetriades: Edinburgh Medical School, University of Edinburgh, Edinburgh, United Kingdom; Department of Neurosurgery, Royal Infirmary of Edinburgh, Edinburgh, United Kingdom
16

17
Application of Artificial Intelligence within Virtual Reality for Production of Digital Media Art. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:3781750. [PMID: 35990155 PMCID: PMC9385317 DOI: 10.1155/2022/3781750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Revised: 07/16/2022] [Accepted: 07/19/2022] [Indexed: 11/28/2022]
Abstract
As technology advances, virtual reality generates realistic images through computer graphics and provides users with an immersive experience through various interactive means. In the context of digitalization, applying VR to digital media art creation has become a normalized method. Today's digital media art creation is closely tied to the vigorous technological innovation behind it, so the influence of modern technology is inevitable. Virtual reality and artificial intelligence have gradually become the main technical means aligned with the development aims of digital media art creation. This work proposes AODNET, an AI-based art object detection method for virtual reality digital media art creation. Addressing the particularities of object detection in this setting, it adopts a detection strategy based on a residual network and a clustering approach. First, it uses ResNet50 as the backbone, which deepens the network and improves its feature extraction ability. Second, it applies the K-means++ algorithm to cluster the sizes of the ground-truth annotation boxes in the dataset, yielding appropriate hyperparameters for the preset candidate boxes and enhancing the algorithm's tolerance to target size. Third, it replaces the ROI pooling algorithm with ROI Align to eliminate the error introduced by quantization of candidate-region features. Fourth, to reduce the missed-detection rate for overlapping targets, the soft-NMS algorithm is used instead of NMS to post-process the candidate boxes. Finally, extensive experiments verify the superiority of AODNET for object detection in virtual reality digital media art creation.
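The K-means++ anchor-clustering step described above can be sketched as follows. This is an illustrative reimplementation under assumed inputs (an array of ground-truth box widths and heights), not the authors' code; it seeds centres with the k-means++ rule and then runs standard Lloyd iterations.

```python
import numpy as np

def kmeans_anchors(box_sizes, k=9, iters=50, seed=0):
    """Cluster (width, height) pairs of annotated boxes into k anchor sizes.

    box_sizes: (N, 2) array of ground-truth box widths and heights.
    Returns a (k, 2) array of anchor sizes, sorted by area.
    """
    rng = np.random.default_rng(seed)
    boxes = np.asarray(box_sizes, dtype=float)

    # k-means++ seeding: each new centre is drawn with probability
    # proportional to its squared distance from the nearest chosen centre.
    centres = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((boxes - c) ** 2, axis=1) for c in centres], axis=0)
        centres.append(boxes[rng.choice(len(boxes), p=d2 / d2.sum())])
    centres = np.array(centres)

    # Standard Lloyd iterations: assign, then recompute cluster means.
    for _ in range(iters):
        labels = np.argmin(
            ((boxes[:, None, :] - centres[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = boxes[labels == j].mean(axis=0)

    return centres[np.argsort(centres.prod(axis=1))]
```

The resulting anchor sizes would then serve as the preset candidate-box hyperparameters the abstract mentions.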
18
Du R, Xie S, Fang Y, Hagino S, Yamamoto S, Moriyama M, Yoshida T, Igarashi-Yokoi T, Takahashi H, Nagaoka N, Uramoto K, Onishi Y, Watanabe T, Nakao N, Takahashi T, Kaneko Y, Azuma T, Hatake R, Nomura T, Sakura T, Yana M, Xiong J, Chen C, Ohno-Matsui K. Validation of Soft Labels in Developing Deep Learning Algorithms for Detecting Lesions of Myopic Maculopathy From Optical Coherence Tomographic Images. Asia Pac J Ophthalmol (Phila) 2022; 11:227-236. [PMID: 34937047 DOI: 10.1097/apo.0000000000000466] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
PURPOSE It is common for physicians to be uncertain when examining some images. Models trained with human uncertainty could help physicians diagnose pathologic myopia. DESIGN This is a hospital-based study that included 9176 images from 1327 patients collected between October 2015 and March 2019. METHODS All collected images were graded by 21 myopia specialists according to the presence of myopic neovascularization (MNV), myopic traction maculopathy (MTM), and dome-shaped macula (DSM). Hard labels were assigned by the rule of major wins, while soft labels were probabilities calculated from the full grading results across graders. The area under the receiver operating characteristic curve (AUC), the area under the precision-recall curve (AUPR), F-score, and least-squares errors were used to evaluate the performance of the models. RESULTS The AUC values of the models trained with soft labels for MNV, MTM, and DSM were 0.985, 0.946, and 0.978, and the AUPR values were 0.908, 0.876, and 0.653, respectively. However, 0.56% of MNV "negative" cases were classified as "positive" with high certainty by the hard-label model, whereas no case was graded with extreme errors by the soft-label model. The same pattern was found for the MTM (0.95% vs none) and DSM (0.43% vs 0.09%) models. CONCLUSIONS The probabilities predicted by the models trained with soft labels were close to the judgments made by myopia specialists. These findings could inspire novel uses of deep learning models in the medical field.
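The labeling scheme can be sketched as follows; the function names and the binary-grading simplification are illustrative assumptions, not the authors' implementation. Soft labels are the fraction of graders calling an image positive, hard labels follow the majority ("major wins") rule, and training against soft labels uses cross-entropy with probabilistic targets.

```python
import numpy as np

def make_labels(grades):
    """grades: (n_images, n_graders) matrix of binary positive/negative calls.

    Returns (hard, soft): hard labels by the majority ("major wins") rule,
    soft labels as the fraction of graders calling each image positive.
    """
    grades = np.asarray(grades)
    soft = grades.mean(axis=1)          # grader agreement as a probability
    hard = (soft >= 0.5).astype(int)    # majority vote
    return hard, soft

def soft_bce(pred, soft_target, eps=1e-7):
    """Binary cross-entropy against soft (probabilistic) targets."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(soft_target * np.log(p)
                          + (1 - soft_target) * np.log(1 - p)))
```

A model trained with `soft_bce` is penalized less for hedging on images the graders themselves disagreed about, which is consistent with the abstract's finding that the soft-label models avoided high-certainty extreme errors.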
Affiliation(s)
- Ran Du, Shiqi Xie, Yuxin Fang, Muka Moriyama, Takeshi Yoshida, Tae Igarashi-Yokoi, Hiroyuki Takahashi, Natsuko Nagaoka, Kengo Uramoto, Yuka Onishi, Takashi Watanabe, Noriko Nakao, Tomonari Takahashi, Yuichiro Kaneko, Takeshi Azuma, Ryoma Hatake, Takuhei Nomura, Tatsuro Sakura, Mariko Yana, Jianping Xiong, Changyu Chen, Kyoko Ohno-Matsui: Department of Ophthalmology and Visual Science, Tokyo Medical and Dental University, Tokyo, Japan
- Yuxin Fang (additional affiliation): Beijing Tongren Eye Center, Beijing Key Laboratory of Ophthalmology and Visual Science, Beijing Tongren Hospital, Capital Medical University, Beijing, China
19
Al-Khaled T, Acaba-Berrocal L, Cole E, Ting DSW, Chiang MF, Chan RVP. Digital Education in Ophthalmology. Asia Pac J Ophthalmol (Phila) 2022; 11:267-272. [PMID: 34966034 PMCID: PMC9240107 DOI: 10.1097/apo.0000000000000484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
ABSTRACT Accessibility of the Internet and computer systems has prompted a gravitation towards digital learning in medicine, including ophthalmology. Using the PubMed database and the Google search engine, current initiatives in ophthalmology that serve as alternatives to traditional in-person learning, with the purpose of enhancing clinical and surgical training, were reviewed. These include the development of tele-education modules, the construction of libraries of clinical and surgical videos, the delivery of didactics via video communication, and the incorporation of simulators and intelligent tutoring systems into clinical and surgical training programs. In this age of digital communication, teleophthalmology programs, virtual ophthalmological society meetings, and online examinations have become necessary for conducting clinical work and educational training in ophthalmology, especially in light of recent global events that have prevented large gatherings, as well as the rural location of various populations. Looking forward, web-based modules and resources, artificial intelligence-based systems, and telemedicine programs will augment current curricula for ophthalmology trainees.
Affiliation(s)
- Tala Al-Khaled, Luis Acaba-Berrocal, Emily Cole, R V Paul Chan: Department of Ophthalmology and Visual Sciences, Illinois Eye and Ear Infirmary, University of Illinois at Chicago, Chicago, IL, US
- Daniel S W Ting: Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Medical School, National University of Singapore, Singapore
- Michael F Chiang: National Eye Institute, National Institutes of Health, Bethesda, MD, US
20
Abstract
Ophthalmology is a medical profession with a tradition of teaching that has developed throughout history. Although ophthalmologists are often assumed to do little more than prescribe contact lenses, they handle more than half of eye-related enhancements, diagnoses, and treatments. The training of qualified ophthalmologists is generally carried out in traditional settings, with a supervisor and a student, and is based on the use of animal eyes or artificial eye models. These models have significant disadvantages: they are not immersive, and they are extremely expensive and difficult to acquire. Consequently, technologies related to Augmented Reality (AR) and Virtual Reality (VR) are rapidly and prominently positioning themselves in the medical sector, and their use in ophthalmology is growing exponentially, both for the training of professionals and for the assistance and recovery of patients. At the same time, it is necessary to highlight and analyze the developments that have used game technologies for teaching ophthalmology and the results that have been obtained. This systematic review investigates software and hardware applications developed specifically for educational environments related to ophthalmology and provides an analysis of other related tools. In addition, the advantages, disadvantages, limitations, and challenges involved in the use of virtual reality, augmented reality, and game technologies in this field are presented.
21
Surgical Competency Assessment in Ophthalmology Residency. CURRENT SURGERY REPORTS 2022. [DOI: 10.1007/s40137-022-00309-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
22
The Role of Technology in Ophthalmic Surgical Education During COVID-19. CURRENT SURGERY REPORTS 2022; 10:239-245. [PMID: 36404795 PMCID: PMC9662128 DOI: 10.1007/s40137-022-00334-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/26/2022] [Indexed: 11/16/2022]
Abstract
Purpose of Review To describe the effect of COVID-19 on ophthalmic training programs and to review the various roles of technology in ophthalmology surgical education including virtual platforms, novel remote learning curricula, and the use of surgical simulators. Recent Findings COVID-19 caused significant disruption to in-person clinical and surgical patient encounters. Ophthalmology trainees worldwide faced surgical training challenges due to social distancing restrictions, trainee redeployment, and reduction in surgical case volume. Virtual platforms, such as Zoom and Microsoft Teams, were widely used during the pandemic to conduct remote teaching sessions. Novel virtual wet lab and dry lab curricula were developed. Training programs found utility in virtual reality surgical simulators, such as the Eyesi, to substitute experience lost from live patient surgical cases. Summary Although several of these described technologies were incorporated into ophthalmology surgical training programs prior to COVID-19, the pandemic highlighted the importance of developing a formal surgical curriculum that can be delivered virtually. Novel telementoring, collaboration between training institutions, and hybrid formats of didactic and practical training sessions should be continued. Future research should investigate the utility of augmented reality and artificial intelligence for trainee learning.
23
Nasiri M, Eslami J, Rashidi N, Paim CPP, Akbari F, Torabizadeh C, Havaeji FS, Goldmeier S, Abbasi M. "Playing with Surgical Instruments (PlaSurIn)" game to train operating room novices how to set up basic surgical instruments: A validation study. NURSE EDUCATION TODAY 2021; 105:105047. [PMID: 34242904 DOI: 10.1016/j.nedt.2021.105047] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 06/01/2021] [Accepted: 06/29/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND Game-based training has been considered an alternative to traditional training in different perioperative nursing fields. OBJECTIVES To describe the adaptation and validation process of "Playing with Tweezers", a Portuguese game developed to teach novices to set up basic surgical instruments on the Mayo stand or a back table. DESIGN A validation study with three phases: translation, reconciliation, and evaluation (face, content, and construct validity). SETTINGS Several medical universities in Iran. PARTICIPANTS Twelve students in a pilot translation test, 18 experts in the reconciliation phase, 20 experts in the face and content validity stages, and 120 students (72 novices, 26 intermediates, and 22 experts) in the construct validity stage. METHODS Following "forward-backward" translation from Portuguese to English, the English version of the game was appraised in the reconciliation phase using a 57-item questionnaire. To test the face and content validity of the final version of the game, a 30-item questionnaire addressing different aspects of the game was completed. The students' game performance (time remaining at game completion, score obtained, and errors made) was compared to assess construct validity. RESULTS Minor differences were detected and resolved during the translation process. The English version of the game was reconciled in two sequential steps, and the final game, called "Playing with Surgical Instruments (PlaSurIn)", was developed. All items regarding face validity received 80-100% positive responses. Moreover, regarding content validity, all evaluated items obtained a content validity index of 0.90-1.0. Compared with the novices, the experts and intermediates achieved higher scores (p < 0.001 in both cases) and made fewer errors (p < 0.001 and p = 0.007). The time remaining at game completion was significantly longer for experts than for novices (p = 0.011). CONCLUSIONS PlaSurIn, as a virtual training strategy, can prepare novices to set up basic surgical instruments.
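The item-level content validity index reported in the results is conventionally the proportion of experts rating an item as relevant; a minimal sketch, assuming the common 4-point relevance scale where ratings of 3 or 4 count as relevant:

```python
def content_validity_index(ratings, relevant=(3, 4)):
    """Item-level CVI: the fraction of expert ratings that fall in the
    'relevant' categories (by convention, 3 or 4 on a 4-point scale)."""
    return sum(r in relevant for r in ratings) / len(ratings)
```

Under this convention, an item rated relevant by 18 of 20 experts has a CVI of 0.90, the lower bound of the range the abstract reports.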
Affiliation(s)
- Morteza Nasiri: Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran; Department of Operating Room Nursing, School of Nursing and Midwifery, Shiraz University of Medical Sciences, Shiraz, Iran
- Jamshid Eslami: Department of Operating Room Nursing, School of Nursing and Midwifery, Shiraz University of Medical Sciences, Shiraz, Iran
- Neda Rashidi: Department of Operating Room, School of Paramedical Sciences, Dezful University of Medical Science, Dezful, Iran
- Crislaine Pires Padilha Paim: Department of Graduate Nursing Program, Institute of Cardiology of Rio Grande do Sul, University Foundation of Cardiology, Porto Alegre, Brazil
- Fakhridokht Akbari: Department of Nursing, Behbahan Faculty of Medical Sciences, Behbahan, Iran
- Camellia Torabizadeh: Department of Nursing, School of Nursing and Midwifery, Shiraz University of Medical Sciences, Shiraz, Iran
- Fahimeh Sadat Havaeji: Department of Operating Room, School of Paramedical Sciences, Qom University of Medical Sciences, Qom, Iran
- Silvia Goldmeier: Department of Post-Graduate Program Research and Innovation Processes in Health, Institute of Cardiology of Rio Grande do Sul, University Foundation of Cardiology, Porto Alegre, Brazil
- Mohammad Abbasi: Department of Nursing, School of Nursing and Midwifery, Qom University of Medical Sciences, Qom, Iran
24
Abstract
PURPOSE OF REVIEW Artificial intelligence and deep learning have become important tools in extracting data from ophthalmic surgery to evaluate, teach, and aid the surgeon in all phases of surgical management. The purpose of this review is to highlight the ever-increasing intersection of computer vision, machine learning, and ophthalmic microsurgery. RECENT FINDINGS Deep learning algorithms are being applied to help evaluate and teach surgical trainees. Artificial intelligence tools are improving real-time surgical instrument tracking, phase segmentation, as well as enhancing the safety of robotic-assisted vitreoretinal surgery. SUMMARY Similar to strides appreciated in ophthalmic medical disease, artificial intelligence will continue to become an important part of surgical management of ocular conditions. Machine learning applications will help push the boundaries of what surgeons can accomplish to improve patient outcomes.
Affiliation(s)
- Kapil Mishra: Department of Ophthalmology, Byers Eye Institute at Stanford, Stanford University School of Medicine, Palo Alto, California, USA