1
Kowalski B, Huang X, Dubra A. Embedded CPU-GPU pupil tracking. Biomed Opt Express 2024; 15:6799-6815. [PMID: 39679407; PMCID: PMC11640584; DOI: 10.1364/boe.541421]
Abstract
We explore camera-based pupil tracking using high-level programming in computing platforms with end-user discrete and integrated central processing units (CPUs) and graphics processing units (GPUs), seeking low calculation latencies previously achieved with specialized hardware and programming (Kowalski et al., Biomed. Opt. Express 12, 6496 (2021), doi:10.1364/BOE.433766). Various desktop and embedded computers were tested, some with two operating systems, using the traditional sequential pupil tracking paradigm, in which the processing of the camera image only starts after it is fully downloaded to the computer. The pupil tracking was demonstrated using two Scheimpflug optical setups, telecentric in both image and object spaces, with different optical magnifications and nominal diffraction-limited performance over an ∼18 mm full field of view illuminated with 940 nm light. Eye images from subjects with different iris and skin pigmentation captured at this wavelength suggest that the proposed pupil tracking does not suffer from ethnic bias. The optical axis of the setups is tilted at 45° to facilitate integration with other instruments without the need for beam splitting. Tracking with ∼0.9-4.4 µm precision and safe light levels was demonstrated using two complementary metal-oxide-semiconductor cameras with global shutter, operating at 438 and 1,045 fps with an ∼500 × 420 pixel region of interest (ROI), and at 633 and 1,897 fps with ∼315 × 280 pixel ROI. For these image sizes, the desktop computers achieved calculation times as low as 0.5 ms, while low-cost embedded computers delivered calculation times in the 0.8-1.3 ms range.
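The abstract describes the conventional sequential paradigm, in which the per-frame pupil calculation begins only after the full camera image is on the host. As a hedged illustration of the kind of per-frame computation whose latency is measured, not the authors' implementation, the sketch below estimates a dark-pupil center by thresholding and taking a centroid; the threshold value and the synthetic frame are assumptions for the example.

```python
import numpy as np

def pupil_center(frame: np.ndarray, threshold: int = 40):
    """Estimate the pupil center of a dark-pupil IR image (8-bit grayscale).

    Illustrative only: thresholds the image and returns the centroid of the
    dark pixels; real trackers add filtering, ROI handling and ellipse fits.
    """
    mask = frame < threshold                 # pupil appears dark under IR illumination
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # no pupil candidate found
    return float(xs.mean()), float(ys.mean())

# Example on a synthetic 420 x 500 frame with a dark disk as the "pupil"
frame = np.full((420, 500), 200, dtype=np.uint8)
yy, xx = np.mgrid[:420, :500]
frame[(xx - 250) ** 2 + (yy - 210) ** 2 < 50 ** 2] = 10
print(pupil_center(frame))                   # ~ (250.0, 210.0)
```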
Affiliation(s)
- Xiaojing Huang
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
- Alfredo Dubra
- Department of Ophthalmology, Stanford University, Palo Alto, CA 94303, USA
2
Gao L, Wang C, Wu G. Wearable Biosensor Smart Glasses Based on Augmented Reality and Eye Tracking. Sensors (Basel) 2024; 24:6740. [PMID: 39460220; PMCID: PMC11511461; DOI: 10.3390/s24206740]
Abstract
With the rapid development of wearable biosensor technology, the combination of head-mounted displays and augmented reality (AR) technology has shown great potential for health monitoring and biomedical diagnosis applications. However, further optimizing its performance and improving data interaction accuracy remain crucial issues that must be addressed. In this study, we develop smart glasses based on augmented reality and eye tracking technology. Through real-time information interaction with the server, the smart glasses realize accurate scene perception and analysis of the user's intention and combine with mixed-reality display technology to provide dynamic and real-time intelligent interaction services. A multi-level hardware architecture and an optimized data processing pipeline are adopted to enhance the system's real-time accuracy. Meanwhile, combining the deep learning method with the geometric model significantly improves the system's ability to perceive user behavior and environmental information in complex environments. The experimental results show that when the distance between the subject and the display is 1 m, the eye tracking accuracy of the smart glasses can reach 1.0° with an error of no more than ±0.1°. This study demonstrates that the effective integration of AR and eye tracking technology dramatically improves the functional performance of smart glasses in multiple scenarios. Future research will further optimize the smart glasses' algorithms and hardware performance, enhance their application potential in daily health monitoring and medical diagnosis, and provide more possibilities for the innovative development of wearable devices in medical and health management.
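The reported 1.0° accuracy at a 1 m viewing distance can be translated into an on-screen displacement with simple trigonometry; the snippet below is a worked conversion only, using the ±0.1° bounds quoted in the abstract.

```python
import math

# Convert the reported angular accuracy into on-screen displacement at the
# stated 1 m viewing distance (purely illustrative arithmetic).
distance_m = 1.0
for deg in (1.0, 1.1, 0.9):                     # nominal accuracy and the ±0.1° bounds
    offset_mm = math.tan(math.radians(deg)) * distance_m * 1000.0
    print(f"{deg:.1f} deg -> {offset_mm:.1f} mm on the display")
# 1.0 deg corresponds to roughly 17.5 mm at 1 m
```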
Affiliation(s)
- Lina Gao
- School of Opto-Electronical Engineering, Xi’an Technological University, Xi’an 710021, China; (L.G.); (G.W.)
- Changyuan Wang
- School of Computer Science, Xi’an Technological University, Xi’an 710021, China
- Gongpu Wu
- School of Opto-Electronical Engineering, Xi’an Technological University, Xi’an 710021, China; (L.G.); (G.W.)
3
Mokatren M, Kuflik T, Shimshoni I. Calibration-Free Mobile Eye-Tracking Using Corneal Imaging. Sensors (Basel) 2024; 24:1237. [PMID: 38400392; PMCID: PMC10892865; DOI: 10.3390/s24041237]
Abstract
In this paper, we present and evaluate a calibration-free mobile eye-tracking system. The system's mobile device consists of three cameras: an IR eye camera, an RGB eye camera, and a front-scene RGB camera. The three cameras build a reliable corneal imaging system that is used to estimate the user's point of gaze continuously and reliably. The system auto-calibrates the device unobtrusively. Since the user is not required to follow any special instructions to calibrate the system, they can simply put on the eye tracker and start moving around using it. Deep learning algorithms together with 3D geometric computations were used to auto-calibrate the system per user. Once the model is built, a point-to-point transformation from the eye camera to the front camera is computed automatically by matching corneal and scene images, which allows the gaze point in the scene image to be estimated. The system was evaluated by users in real-life scenarios, indoors and outdoors. The average gaze error was 1.6° indoors and 1.69° outdoors, which is considered very good compared to state-of-the-art approaches.
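To make the corneal-to-scene mapping step concrete, the hedged sketch below transfers a gaze point from the eye-camera image into the front-camera image using ORB feature matching and a RANSAC homography. This is a stand-in for the paper's pipeline, which combines deep learning with 3D geometric computation; the function name and parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def map_gaze_to_scene(corneal_img, scene_img, gaze_xy):
    """Map a gaze point from the corneal (eye-camera) image into the scene image.

    Hedged sketch: both inputs are 8-bit grayscale images; ORB features and a
    RANSAC homography stand in for the published corneal-to-scene matching.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(corneal_img, None)
    k2, d2 = orb.detectAndCompute(scene_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    pt = np.float32([[gaze_xy]])                       # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]       # gaze in scene coordinates
```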
Affiliation(s)
- Ilan Shimshoni
- The Department of Information Systems, University of Haifa, Haifa 3498838, Israel; (M.M.); (T.K.)
4
Guo W, Forte V, Davies JC, Kahrs LA. An interactive augmented reality software for facial reconstructive surgeries. Comput Methods Programs Biomed 2024; 244:107970. [PMID: 38101087; DOI: 10.1016/j.cmpb.2023.107970]
Abstract
BACKGROUND AND OBJECTIVE Surgical trainees need a lot of training and practice before being able to operate independently. The current approach of surgical education mainly involves didactic teaching and psychomotor training through physical models or real tissue. Due to the unavailability of physical resources and lack of objective ways of evaluation, there is a demand for developing alternative training methods for surgeons. In this paper, we present an application that provides additional training opportunities to surgical trainees in the field of facial reconstructive surgeries. METHODS We built a mobile augmented reality application that helps the user to visualize important concepts and experiment with different surgical plans for facial reconstructive surgeries. The application can overlay relaxed skin tension lines on a live video input or a patient's photo, which serve as bases for aligning a skin flap. A surgical trainee can interactively compare different skin flap design choices with estimated final scars on a photo of a patient. Data collection capability is also added to the application, and we performed a Monte Carlo experiment with simulated users (five classes of 100 users each) as an example of objectively measuring user performance. RESULTS The application can overlay relaxed skin tension lines on a patient's face in real time on a modern mobile device. Depending on the method, accurate overlays were achieved in over 91%, 84%, and 88% of the 263 generated face images. Visual comparisons of the three overlay methods are presented on sample faces from different population groups. From the Monte Carlo experiment, we see that user actions in each class follow a normal distribution with a distinct set of parameters. CONCLUSIONS This application can serve as a basis for teaching surgical trainees the fundamentals of different facial reconstructive procedures, especially concepts related to relaxed skin tension lines and skin flaps. It can objectively evaluate the performance of surgical trainees in a course. This setup focuses on illustrating the relationship between the orientation of skin flaps and relaxed skin tension lines, which is a prerequisite of minimizing scarring in patients in addition to good surgical techniques.
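The Monte Carlo experiment described above draws simulated user behaviour from class-specific normal distributions; the hedged sketch below mimics that setup with five classes of 100 simulated users each. The class names, means, and standard deviations are invented for illustration and are not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five classes of 100 simulated users each; a per-class score (e.g., a
# flap-alignment error) is drawn from a normal distribution with
# class-specific parameters. All numbers below are illustrative assumptions.
classes = {
    "novice":       (20.0, 8.0),
    "junior":       (15.0, 6.0),
    "intermediate": (10.0, 5.0),
    "senior":       (6.0, 3.0),
    "expert":       (3.0, 1.5),
}

for name, (mu, sigma) in classes.items():
    scores = rng.normal(mu, sigma, size=100)
    print(f"{name:>12}: mean={scores.mean():5.1f}, sd={scores.std(ddof=1):4.1f}")
```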
Affiliation(s)
- Wenzhangzhi Guo
- Department of Computer Science, University of Toronto, Toronto, ON, Canada; Wilfred and Joyce Posluns Centre for Image Guided Innovation and Therapeutic Intervention, The Hospital for Sick Children, Toronto, ON, Canada.
- Vito Forte
- Wilfred and Joyce Posluns Centre for Image Guided Innovation and Therapeutic Intervention, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology - Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Joel C Davies
- Department of Otolaryngology - Head and Neck Surgery, Sinai Health System, University of Toronto, Toronto, ON, Canada
- Lueder A Kahrs
- Department of Computer Science, University of Toronto, Toronto, ON, Canada; Department of Mathematical and Computational Sciences, University of Toronto Mississauga, Mississauga, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
5
Liu L, Yu B, Xu L, Wang S, Zhao L, Wu H. Comparison of stereopsis thresholds measured with conventional methods and a new eye tracking method. PLoS One 2023; 18:e0293735. [PMID: 37917615; PMCID: PMC10621823; DOI: 10.1371/journal.pone.0293735]
Abstract
PURPOSE Stereopsis is the ability to perceive depth using the slightly different views from two eyes. This study aims to conduct innovative stereopsis tests using the objective data outputted by eye tracking technology. METHODS A laptop and an eye tracker were used to establish the test system. Anaglyphic glasses were employed to execute the stereopsis assessment. The test symbol employed was devised to emulate the quantitative measurement component of the Random Dot 3 Stereo Acuity Test. Sub-pixel technology was used to increase the disparity accuracy of test pages. The tested disparities were: 160″, 100″, 63″, 50″, 40″, 32″, 25″, 20″, 16″, and 12.5″. The test was conducted at a distance of 0.65 m. Conventional and eye tracking stereopsis assessments were conducted on 120 subjects. The Wilcoxon signed-rank test was used to test the difference, while the Bland-Altman method was used to test the consistency between the two methods. RESULTS The Wilcoxon signed-rank test showed no significant difference between conventional and eye tracking thresholds of stereopsis (Z = -1.497, P = 0.134). There was a high level of agreement between the two methods using Bland-Altman statistical analysis (the 95% limits of agreement were -0.40 to 0.47 log arcsec). CONCLUSIONS Stereoacuity can be evaluated utilizing an innovative stereopsis measurement system grounded in eye tracking technology.
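The statistical comparison in this abstract, a Wilcoxon signed-rank test plus Bland-Altman limits of agreement on log arcsec thresholds, can be reproduced in outline as follows; the data here are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)

# Synthetic paired log-arcsec thresholds for 120 subjects: the conventional
# test draws from the tested disparity levels, the eye-tracking test adds
# small measurement noise. Values are illustrative only.
levels = np.log10([160, 100, 63, 50, 40, 32, 25, 20, 16, 12.5])
conventional = rng.choice(levels, 120)
eye_tracking = conventional + rng.normal(0.0, 0.2, 120)

stat, p = wilcoxon(conventional, eye_tracking)          # paired difference test
diff = eye_tracking - conventional
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"Wilcoxon p = {p:.3f}; bias = {bias:.2f} log arcsec; "
      f"95% LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```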
Affiliation(s)
- Lu Liu
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
- Bo Yu
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
- Lingxian Xu
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
- Shiyi Wang
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
- Lingzhi Zhao
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
- Huang Wu
- Department of Optometry, The Second Hospital of Jilin University, Changchun, China
6
Khoong YM, Luo S, Huang X, Li M, Gu S, Jiang T, Liang H, Liu Y, Zan T. The application of augmented reality in plastic surgery training and education: A narrative review. J Plast Reconstr Aesthet Surg 2023; 82:255-263. [PMID: 37207439; DOI: 10.1016/j.bjps.2023.04.033]
Abstract
Continuing problems with fewer training opportunities and a greater awareness of patient safety have led to a constant search for an alternative technique to bridge the existing theory-practice gap in plastic surgery training and education. The current COVID-19 epidemic has aggravated the situation, making it urgent to implement breakthrough technological initiatives currently underway to improve surgical education. The cutting edge of technological development, augmented reality (AR), has already been applied in numerous facets of plastic surgery training, and it is capable of realizing the aims of education and training in this field. In this article, we will take a look at some of the most important ways that AR is now being used in plastic surgery education and training, as well as offer an exciting glimpse into the potential future of this field thanks to technological advancements.
Affiliation(s)
- Yi Min Khoong
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Shenying Luo
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Xin Huang
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Minxiong Li
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Shuchen Gu
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Taoran Jiang
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Hsin Liang
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Yunhan Liu
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China
- Tao Zan
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, PR China.
7
Li Y, Reed A, Kavoussi N, Wu JY. Eye gaze metrics for skill assessment and feedback in kidney stone surgery. Int J Comput Assist Radiol Surg 2023. [PMID: 37202714; DOI: 10.1007/s11548-023-02901-6]
Abstract
PURPOSE Surgical skill assessment is essential for safe operations. In endoscopic kidney stone surgery, surgeons must perform a highly skill-dependent mental mapping from the pre-operative scan to the intraoperative endoscope image. Poor mental mapping can lead to incomplete exploration of the kidney and high reoperation rates. Yet there are few objective ways to evaluate competency. We propose to use unobtrusive eye-gaze measurements in the task space to evaluate skill and provide feedback. METHODS We capture the surgeons' eye gaze on the surgical monitor with the Microsoft Hololens 2. To enable stable and accurate gaze detection, we develop a calibration algorithm to refine the eye tracking of the Hololens. In addition, we use a QR code to locate the eye gaze on the surgical monitor. We then run a user study with three expert and three novice surgeons. Each surgeon is tasked to locate three needles representing kidney stones in three different kidney phantoms. RESULTS We find that experts have more focused gaze patterns. They complete the task faster, have a smaller total gaze area, and gaze outside the area of interest fewer times. While the fixation to non-fixation ratio did not show a significant difference in our findings, tracking the ratio over time shows different patterns between novices and experts. CONCLUSION We show that a non-negligible difference holds between novice and expert surgeons' gaze metrics in kidney stone identification in phantoms. Expert surgeons demonstrate more targeted gaze throughout a trial, indicating their higher level of proficiency. To improve the skill acquisition process for novice surgeons, we suggest providing sub-task specific feedback. This approach presents an objective and non-invasive method to assess surgical competence.
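As a hedged illustration of two of the gaze metrics mentioned, total gaze area and gazes outside the area of interest, the sketch below computes a convex-hull gaze area and an out-of-AOI sample count from synthetic gaze traces; the paper's exact metric definitions may differ.

```python
import numpy as np
from scipy.spatial import ConvexHull

def gaze_metrics(gaze_xy: np.ndarray, aoi: tuple):
    """Simple gaze metrics: convex-hull gaze area and samples outside a
    rectangular area of interest (AOI). Illustrative definitions only."""
    x0, y0, x1, y1 = aoi
    outside = ((gaze_xy[:, 0] < x0) | (gaze_xy[:, 0] > x1) |
               (gaze_xy[:, 1] < y0) | (gaze_xy[:, 1] > y1)).sum()
    area = ConvexHull(gaze_xy).volume        # for 2D points, .volume is the area
    return {"gaze_area_px2": area, "samples_outside_aoi": int(outside)}

rng = np.random.default_rng(2)
expert = rng.normal([960, 540], [60, 40], size=(500, 2))     # tightly clustered gaze
novice = rng.normal([960, 540], [250, 180], size=(500, 2))   # more dispersed gaze
aoi = (760, 390, 1160, 690)                                  # 400 x 300 px box
print(gaze_metrics(expert, aoi))
print(gaze_metrics(novice, aoi))
```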
Affiliation(s)
- Yizhou Li
- Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN, 37240, USA.
- Amy Reed
- Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN, 37232, USA
- Nicholas Kavoussi
- Department of Urology, Vanderbilt University Medical Center, 1211 Medical Center Dr, Nashville, TN, 37232, USA
- Jie Ying Wu
- Department of Computer Science, Vanderbilt University, 2301 Vanderbilt Pl, Nashville, TN, 37240, USA.
8
Sanders JJ, Blanch-Hartigan D, Ericson J, Tarbi E, Rizzo D, Gramling R, van Vliet L. Methodological innovations to strengthen evidence-based serious illness communication. Patient Educ Couns 2023; 114:107790. [PMID: 37207565; DOI: 10.1016/j.pec.2023.107790]
Abstract
BACKGROUND/OBJECTIVE A growing population of those affected by serious illness, prognostic uncertainty, patient diversity, and healthcare digitalization pose challenges for the future of serious illness communication. Yet, there is a paucity of evidence to support serious illness communication behaviors among clinicians. Herein, we propose three methodological innovations to advance the basic science of serious illness communication. RESULTS First, advanced computation techniques - e.g. machine-learning techniques and natural language processing - offer the possibility to measure the characteristics and complex patterns of audible serious illness communication in large datasets. Second, immersive technologies - e.g., virtual- and augmented reality - allow for experimentally manipulating and testing the effects of specific communication strategies, and interactional and environmental aspects of serious illness communication. Third, digital-health technologies - e.g., shared notes and videoconferences - can be used to unobtrusively observe and manipulate communication, and compare in-person to digitally-mediated communication elements and effects. Immersive and digital health technologies allow integration of physiological measurement (e.g. synchrony or gaze) that may advance our understanding of patient experience. CONCLUSION/PRACTICE IMPLICATIONS New technologies and measurement approaches, while imperfect, will help advance our understanding of the epidemiology and quality of serious illness communication in an evolving healthcare environment.
Affiliation(s)
- Justin J Sanders
- Department of Family Medicine, McGill University, Montreal, QC, Canada.
- Jonathan Ericson
- Department of Information Design and Corporate Communication, Bentley University, Waltham, MA, USA.
- Elise Tarbi
- Department of Nursing, University of Vermont, Burlington, VT, USA.
- Donna Rizzo
- Department of Civil & Environmental Engineering, University of Vermont, Burlington, VT, USA.
- Robert Gramling
- Department of Family Medicine, University of Vermont, Burlington, VT, USA.
- Liesbeth van Vliet
- Department of Health and Medical Psychology, University of Leiden, Netherlands
9
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637; DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016, until the year of 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, nevertheless only few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
10
Curran VR, Xu X, Aydin MY, Meruvia-Pastor O. Use of Extended Reality in Medical Education: An Integrative Review. Med Sci Educ 2023; 33:275-286. [PMID: 36569366; PMCID: PMC9761044; DOI: 10.1007/s40670-022-01698-4]
Abstract
Extended reality (XR) has emerged as an innovative simulation-based learning modality. An integrative review was undertaken to explore the nature of evidence, usage, and effectiveness of XR modalities in medical education. One hundred and thirty-three (N = 133) studies and articles were reviewed. XR technologies are commonly reported in surgical and anatomical education, and the evidence suggests XR may be as effective as traditional medical education teaching methods and, potentially, a more cost-effective means of curriculum delivery. Further research to compare different variations of XR technologies and best applications in medical education and training are required to advance the field. Supplementary Information The online version contains supplementary material available at 10.1007/s40670-022-01698-4.
Affiliation(s)
- Vernon R. Curran
- Office of Professional and Educational Development, Faculty of Medicine, Health Sciences Centre, Memorial University of Newfoundland, Room H2982, St. John’s, NL A1B 3V6 Canada
- Xiaolin Xu
- Faculty of Health Sciences, Queen’s University, Kingston, ON, Canada
- Mustafa Yalin Aydin
- Department of Computer Sciences, Memorial University of Newfoundland, St. John’s, NL, Canada
- Oscar Meruvia-Pastor
- Department of Computer Sciences, Memorial University of Newfoundland, St. John’s, NL, Canada
11
Minty I, Lawson J, Guha P, Luo X, Malik R, Cerneviciute R, Kinross J, Martin G. The use of mixed reality technology for the objective assessment of clinical skills: a validation study. BMC Med Educ 2022; 22:639. [PMID: 35999532; PMCID: PMC9395785; DOI: 10.1186/s12909-022-03701-3]
Abstract
BACKGROUND Mixed Reality technology may provide many advantages over traditional teaching methods. Despite its potential, the technology has yet to be used for the formal assessment of clinical competency. This study sought to collect validity evidence and assess the feasibility of using the HoloLens 2 mixed reality headset for the conduct and augmentation of Objective Structured Clinical Examinations (OSCEs). METHODS A prospective cohort study was conducted to compare the assessment of undergraduate medical students undertaking OSCEs via HoloLens 2 live (HLL) and recorded (HLR), and gold-standard in-person (IP) methods. An augmented mixed reality scenario was also assessed. RESULTS Thirteen undergraduate participants completed a total of 65 OSCE stations. Overall inter-modality correlation was 0.81 (p = 0.01), 0.98 (p = 0.01) and 0.82 (p = 0.01) for IP vs. HLL, HLL vs. HLR and IP vs. HLR respectively. Skill-based correlations for IP vs. HLR were assessed for history taking (0.82, p = 0.01), clinical examination (0.81, p = 0.01), procedural (0.88, p = 0.01) and clinical skills (0.92, p = 0.01), and assessment of a virtual mixed reality patient (0.74, p = 0.01). The HoloLens device was deemed to be usable and practical (System Usability Scale (SUS) score = 51.5), and the technology was thought to deliver greater flexibility and convenience, and have the potential to expand and enhance assessment opportunities. CONCLUSIONS HoloLens 2 is comparable to traditional in-person examination of undergraduate medical students for both live and recorded assessments, and therefore is a valid and robust method for objectively assessing performance. The technology is in its infancy, and users need to develop confidence in its usability and reliability as an assessment tool. However, the potential to integrate additional functionality including holographic content, automated tracking and data analysis, and to facilitate remote assessment may allow the technology to enhance, expand and standardise examinations across a range of educational contexts.
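The inter-modality correlations reported above are paired correlations over station scores; the following hedged sketch shows the computation on synthetic in-person versus HoloLens-live scores. The score range and noise level are assumptions for illustration, not study data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Synthetic paired station scores for 65 stations: in-person (IP) assessment
# and HoloLens-live (HLL) assessment of the same performances.
ip = rng.uniform(12, 30, 65)
hll = np.clip(ip + rng.normal(0, 2.0, 65), 0, 30)   # HLL tracks IP with some noise

r, p = pearsonr(ip, hll)
print(f"IP vs HLL correlation: r = {r:.2f}, p = {p:.3g}")
```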
Affiliation(s)
- Iona Minty
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Jason Lawson
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Payal Guha
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Xun Luo
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Rukhnoor Malik
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Raminta Cerneviciute
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- James Kinross
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK
- Guy Martin
- Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, 10th Floor QEQM Building, London, W2 1NY, UK.
12
Nagayo Y, Saito T, Oyama H. Augmented reality self-training system for suturing in open surgery: A randomized controlled trial. Int J Surg 2022; 102:106650. [PMID: 35525415; DOI: 10.1016/j.ijsu.2022.106650]
Abstract
BACKGROUND Existing self-training materials are insufficient to learn open surgical procedures, and a new self-training system that provides three-dimensional procedural information is needed. The effectiveness and usability of a self-training system providing three-dimensional information by augmented reality (AR) were compared to those of an existing self-training method, an instructional video, in self-learning of suturing in open surgery. MATERIALS AND METHODS This was a prospective, evaluator-blinded, randomized, controlled study. Medical students who were suturing novices were randomized into 2 groups: practice with the AR training system (AR group) or an instructional video (video group). Participants were instructed in the subcuticular interrupted suture and in each training system, and watched the instructional video once. They then completed a pretest performing the suture on a skin pad. Participants in each group practiced the procedure 10 times using each training system, followed by a posttest. The pretest and posttest were video-recorded and graded by blinded evaluators using a validated scoring form composed of global rating (GR) and task-specific (TS) subscales. Students completed a post-study questionnaire assessing system usability, each system's usefulness, and their confidence and interest in surgery. RESULTS Nineteen participants in each group completed the trial. No significant difference was found between the AR and video groups on the improvement of the scores from pretest to posttest (GR: p = 0.54, TS: p = 0.91). The posttest scores of both GR and TS improved significantly from pretest in both groups (GR: both p < 0.001, TS: both p < 0.001). There was no significant difference between the groups in the system usability scale scores (p = 0.38). The motion provided in the AR system was more helpful for manipulating surgical instruments than the video (p = 0.02). CONCLUSION The AR system was considered as understandable and easy to use as the instructional video in learning suture technique in open surgery for novices.
Affiliation(s)
- Yuri Nagayo
- Department of Clinical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-0033, Japan.
- Toki Saito
- Department of Clinical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-0033, Japan.
- Hiroshi Oyama
- Department of Clinical Information Engineering, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-0033, Japan.
13
Liu X, Sanchez Perdomo YP, Zheng B, Duan X, Zhang Z, Zhang D. When medical trainees encountering a performance difficulty: evidence from pupillary responses. BMC Med Educ 2022; 22:191. [PMID: 35305623; PMCID: PMC8934497; DOI: 10.1186/s12909-022-03256-3]
Abstract
BACKGROUND Medical trainees are required to learn many procedures following instructions to improve their skills. This study aims to investigate the pupillary response of trainees when they encounter moments of performance difficulty (MPD) during skill learning. Detecting the moment of performance difficulty is essential for educators to assist trainees when they need it. METHODS Eye motions were recorded while trainees practiced the thoracostomy procedure in the simulation model. To make pupillary data comparable among trainees, we proposed the adjusted pupil size (APS), which normalizes pupil dilation for each trainee over their entire procedure. APS variables including APS, maxAPS, minAPS, meanAPS, medianAPS, and max interval indices were compared between easy and difficult subtasks; the APSs were compared among the three different performance situations: the moment of normal performance (MNP), MPD, and the moment of seeking help (MSH). RESULTS The mixed ANOVA revealed that the adjusted pupil size variables, such as the maxAPS, the minAPS, the meanAPS, and the medianAPS, had significant differences between performance situations. Compared to MPD and MNP, pupil size was reduced during MSH. Trainees displayed a smaller cumulative frequency of APS during difficult subtasks than during easy subtasks. CONCLUSIONS Results from this project suggest that pupil responses can be a good behavioral indicator. This study is a part of our research aiming to create an artificial intelligent system for medical trainees with automatic detection of their performance difficulty and delivering instructional messages using augmented reality technology.
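The adjusted pupil size (APS) is described as a per-trainee normalization of pupil dilation over the whole procedure; the sketch below shows one plausible min-max version of such a normalization. The exact formula is an assumption, included only to make the idea concrete.

```python
import numpy as np

def adjusted_pupil_size(pupil_mm: np.ndarray) -> np.ndarray:
    """Normalize a trainee's pupil-size trace over the entire procedure.

    Hedged sketch of the APS idea: rescale each trainee's pupil diameters to
    [0, 1] using their own min/max so values are comparable across trainees.
    The paper's exact normalization may differ.
    """
    p = np.asarray(pupil_mm, dtype=float)
    return (p - p.min()) / (p.max() - p.min())

trace = np.array([3.1, 3.4, 3.9, 4.2, 3.8, 3.3])   # illustrative pupil diameters (mm)
aps = adjusted_pupil_size(trace)
print(aps.round(2), "maxAPS:", aps.max(), "meanAPS:", round(aps.mean(), 2))
```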
Affiliation(s)
- Xin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
- Yerly Paola Sanchez Perdomo
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Bin Zheng
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada.
- Department of Surgery, Faculty of Medicine and Dentistry, 162 Heritage Medical Research Centre, University of Alberta, 8440 112 St. NW, Edmonton, Alberta, T6G 2E1, Canada.
- Xiaoqin Duan
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Rehabilitation Medicine, Second Hospital of Jilin University, Changchun, Jilin, 130041, China
- Zhongshi Zhang
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Department of Biological Sciences, University of Alberta, Edmonton, AB, T6G 2E9, Canada
- Dezheng Zhang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, China
- Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, 100083, China
14
Pereira D, De Pra Y, Tiberi E, Monaco V, Dario P, Ciuti G. Flipping food during grilling tasks, a dataset of utensils kinematics and dynamics, food pose and subject gaze. Sci Data 2022; 9:5. [PMID: 35022437; PMCID: PMC8755801; DOI: 10.1038/s41597-021-01101-8]
Abstract
This paper presents a multivariate dataset of 2866 food flipping movements, performed by 4 chefs and 5 home cooks, with different grilled food and two utensils (spatula and tweezers). The 3D trajectories of strategic points in the utensils were tracked using optoelectronic motion capture. The pinching force of the tweezers and the bending force and torsion torque of the spatula were also recorded, as well as videos and the subjects' gaze. These data were collected using a custom experimental setup that allowed the execution of flipping movements with freshly cooked food, without having the sensors near the dangerous cooking area. In addition, the 2D position of the food was computed from the videos. The action of flipping food is, indeed, gaining the attention of both researchers and manufacturers of foodservice technology. The reported dataset contains valuable measurements (1) to characterize and model flipping movements as performed by humans, (2) to develop bio-inspired methods to control a cooking robot, or (3) to study new algorithms for human actions recognition.
Affiliation(s)
- Débora Pereira
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 56127, Italy.
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, 56127, Italy.
- The Research Hub by Electrolux Professional SpA, AD&T, Pordenone, 33170, Italy.
- Yuri De Pra
- The Research Hub by Electrolux Professional SpA, AD&T, Pordenone, 33170, Italy
- University of Udine, Department of Computer Science, Mathematics and Physics, Udine, 33100, Italy
- Emidio Tiberi
- The Research Hub by Electrolux Professional SpA, AD&T, Pordenone, 33170, Italy
- Vito Monaco
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 56127, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, 56127, Italy
- Paolo Dario
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 56127, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, 56127, Italy
- Gastone Ciuti
- The BioRobotics Institute, Scuola Superiore Sant'Anna, Pisa, 56127, Italy.
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, 56127, Italy.
15
Xin L, Bin Z, Xiaoqin D, Wenjing H, Yuandong L, Jinyu Z, Chen Z, Lin W. Detecting Task Difficulty of Learners in Colonoscopy: Evidence from Eye-Tracking. J Eye Mov Res 2021; 14. [PMID: 34345375; PMCID: PMC8327395; DOI: 10.16910/jemr.14.2.5]
Abstract
Eye-tracking can help decode the intricate control mechanism in human performance. In healthcare, physicians-in-training require extensive practice to improve their healthcare skills. When a trainee encounters any difficulty in the practice, they will need feedback from experts to improve their performance. Personal feedback is time-consuming and subject to bias. In this study, we tracked the eye movements of trainees during their colonoscopic performance in simulation. We examined changes in eye movement behavior during the moments of navigation loss (MNL), a signature sign of task difficulty during colonoscopy, and tested whether deep learning algorithms can detect the MNL by feeding data from eye-tracking. Human eye gaze and pupil characteristics were learned and verified by the deep convolutional generative adversarial networks (DCGANs); the generated data were fed to the Long Short-Term Memory (LSTM) networks with three different data feeding strategies to classify MNLs from the entire colonoscopic procedure. Outputs from deep learning were compared to the expert's judgment on the MNLs based on colonoscopic videos. The best classification outcome was achieved when we fed human eye data with 1000 synthesized eye data, where accuracy (91.80%), sensitivity (90.91%), and specificity (94.12%) were optimized. This study built an important foundation for our work of developing an education system for training healthcare skills using simulation.
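The accuracy, sensitivity, and specificity quoted above are standard confusion-matrix rates for the binary MNL-versus-non-MNL decision; the sketch below shows how such figures are typically computed, on synthetic labels rather than the study's data.

```python
import numpy as np

def binary_rates(y_true: np.ndarray, y_pred: np.ndarray):
    """Accuracy, sensitivity and specificity for a binary MNL classifier."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Synthetic ground-truth and predicted labels (1 = MNL, 0 = normal navigation)
y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1, 0, 1])
print(binary_rates(y_true, y_pred))
```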
Affiliation(s)
- Liu Xin
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, Alberta, Canada
- Zheng Bin
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, Alberta, Canada
- Duan Xiaoqin
- Department of Rehabilitation Medicine, Jilin University Second Hospital, Changchun, Jilin, China; Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, Alberta, Canada
- He Wenjing
- Department of Surgery, University of Manitoba, Winnipeg, Manitoba, Canada
- Li Yuandong
- Department of Surgery, Shanxi Bethune Hospital, Taiyuan, Shanxi, China
- Zhao Jinyu
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, Alberta, Canada
- Zhao Chen
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing, China
- Wang Lin
- Surgical Simulation Research Lab, Department of Surgery, University of Alberta, Edmonton, Alberta, Canada
16
A Novel Suture Training System for Open Surgery Replicating Procedures Performed by Experts Using Augmented Reality. J Med Syst 2021; 45:60. [PMID: 33829327; PMCID: PMC8026441; DOI: 10.1007/s10916-021-01735-6]
Abstract
The surgical education environment has been changing significantly due to restricted work hours, limited resources, and increasing public concern for safety and quality, leading to the evolution of simulation-based training in surgery. Of the various simulators, low-fidelity simulators are widely used to practice surgical skills such as sutures because they are portable, inexpensive, and easy to use without requiring complicated settings. However, since low-fidelity simulators do not offer any teaching information, trainees do self-practice with them, referring to textbooks or videos, which are insufficient to learn open surgical procedures. This study aimed to develop a new suture training system for open surgery that provides trainees with the three-dimensional information of exemplary procedures performed by experts and allows them to observe and imitate the procedures during self-practice. The proposed system consists of a motion capture system of surgical instruments and a three-dimensional replication system of captured procedures on the surgical field. Motion capture of surgical instruments was achieved inexpensively by using cylindrical augmented reality (AR) markers, and replication of captured procedures was realized by visualizing them three-dimensionally at the same position and orientation as captured, using an AR device. For subcuticular interrupted suture, it was confirmed that the proposed system enabled users to observe experts' procedures from any angle and imitate them by manipulating the actual surgical instruments during self-practice. We expect that this training system will contribute to developing a novel surgical training method that enables trainees to learn surgical skills by themselves in the absence of experts.
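As a hedged sketch of the marker-based motion capture this system relies on, the code below detects a fiducial marker and recovers its pose with OpenCV's ArUco module, assuming opencv-contrib-python with the pre-4.7 aruco API. The paper uses custom cylindrical AR markers, so this square-marker example only approximates the approach; camera_matrix and dist_coeffs are assumed to come from a prior calibration.

```python
import cv2
import numpy as np

# Predefined square-marker dictionary used for the illustrative detection
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def instrument_pose(gray: np.ndarray, camera_matrix, dist_coeffs, marker_len_m=0.02):
    """Return the rotation matrix and translation of the first detected marker.

    Hedged sketch only: the published system tracks custom cylindrical AR
    markers attached to surgical instruments and later replays the captured
    poses in an AR headset.
    """
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None
    # One rotation vector and translation vector per detected marker
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len_m, camera_matrix, dist_coeffs)
    R, _ = cv2.Rodrigues(rvecs[0])           # 3x3 rotation of the first marker
    return R, tvecs[0].reshape(3)            # pose used to replay the motion in AR
```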
17
Caruso TJ, Hess O, Roy K, Wang E, Rodriguez S, Palivathukal C, Haber N. Integrated eye tracking on Magic Leap One during augmented reality medical simulation: a technical report. BMJ Simul Technol Enhanc Learn 2021; 7:431-434. [DOI: 10.1136/bmjstel-2020-000782]
Abstract
Augmented reality (AR) has been studied as a clinical teaching tool; however, eye-tracking capabilities integrated within AR medical simulators have received limited research attention. The recently developed Chariot Augmented Reality Medical (CHARM) simulator integrates real-time communication into a portable medical simulator. The purpose of this project was to refine the gaze-tracking capabilities of the CHARM simulator on the Magic Leap One (ML1). Adults aged 18 years and older were recruited using convenience sampling. Participants were provided with an ML1 headset that projected a hologram of a patient, bed and monitor. They were instructed via audio recording to gaze at variables in this scenario. The participant gaze targets from the ML1 output were compared with the specified gaze points from the audio recording. A priori, the investigators planned iterative modifications of the eye-tracking software until a capture rate of 80% was achieved. Two consecutive participants with a capture rate less than 80% triggered software modifications and the project concluded after three consecutive participants' capture rates were greater than 80%. Thirteen participants were included in the study. Eye-tracking concordance was less than 80% reliable in the first 10 participants. The investigators hypothesised that the eye movement detection threshold was too sensitive, thus the algorithm was adjusted to reduce noise. The project concluded after the final three participants' gaze capture rates were 80%, 80% and 80.1%, respectively. This report suggests that eye-tracking technology can be reliably used with the ML1 enabled with CHARM simulator software.
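The capture rate driving the stopping rule above is simply the fraction of prompted gaze targets that the headset's eye tracker resolved correctly; the sketch below illustrates that computation with made-up prompt and detection labels, not data from the report.

```python
# Hedged sketch of the gaze capture-rate computation: compare the target each
# audio prompt asked the participant to look at with the target the headset's
# eye tracker actually resolved (synthetic labels for illustration).
prompted = ["patient", "monitor", "bed", "patient", "monitor",
            "bed", "patient", "monitor", "bed", "patient"]
detected = ["patient", "monitor", "bed", "patient", "bed",
            "bed", "patient", "monitor", "bed", "patient"]

matches = sum(p == d for p, d in zip(prompted, detected))
capture_rate = 100.0 * matches / len(prompted)
print(f"capture rate = {capture_rate:.1f}%")   # 80%+ was the study's acceptance threshold
```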