1. Golomingi R, Dobay A, Franckenberg S, Ebert L, Sieberth T. Augmented reality in forensics and forensic medicine - current status and future prospects. Sci Justice 2023;63:451-455. PMID: 37453776. DOI: 10.1016/j.scijus.2023.04.009.
Abstract
Forensic investigations require a wide range of knowledge and expertise from each specialist involved. With increasing digitization and advancing technical possibilities, the traditional setup of a computer with a screen for visualization and a mouse and keyboard for interaction has limitations, especially when content must be visualized in relation to the real world. Augmented reality (AR) can be used in such instances to support investigators in various tasks at the scene as well as later in the investigation process. In this article, we present current applications of AR in forensics and forensic medicine, the technological basics of AR, and the advantages that AR brings to forensic investigations. We also take a brief look at other fields of application and at future developments of AR in forensics.
Affiliation(s)
- Raffael Golomingi
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Akos Dobay
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Sabine Franckenberg
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland; Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland.
- Lars Ebert
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Till Sieberth
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
2. Esfandiari H, Troxler P, Hodel S, Suter D, Farshad M, Cavalcanti N, Wetzel O, Mania S, Cornaz F, Selman F, Kabelitz M, Zindel C, Weber S, Haupt S, Fürnstahl P. Introducing a brain-computer interface to facilitate intraoperative medical imaging control - a feasibility study. BMC Musculoskelet Disord 2022;23:701. PMID: 35869451. PMCID: PMC9306028. DOI: 10.1186/s12891-022-05384-9.
Abstract
Background Safe and accurate execution of surgeries to date relies mainly on preoperative plans generated from preoperative imaging. Frequent intraoperative interaction with such patient images is needed during the intervention, which is currently a cumbersome process given that the images are generally displayed on peripheral two-dimensional (2D) monitors and controlled through interface devices outside the sterile field. This study proposes a new medical image control concept based on a brain-computer interface (BCI) that allows hands-free, direct image manipulation without relying on gesture recognition methods or voice commands. Method A software environment was designed for displaying three-dimensional (3D) patient images on external monitors, with hands-free image manipulation driven by the user's brain signals detected by the BCI device (i.e., visually evoked signals). In a user study, ten orthopedic surgeons completed a series of standardized image manipulation tasks to navigate to and locate predefined 3D points in a computed tomography (CT) image using the developed interface. Accuracy was assessed as the mean error between the predefined locations (ground truth) and the locations navigated to by the surgeons. All surgeons rated the performance and potential intraoperative usability in a standardized survey using a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). Results With the developed interface, the mean image control error was 15.51 mm (SD: 9.57). User acceptance was rated with a Likert score of 4.07 (SD: 0.96), while the overall impression of the interface was rated 3.77 (SD: 1.02). We observed a significant correlation between the users' overall impression and the calibration score they achieved.
Conclusions The developed BCI, which allowed purely brain-guided medical image control, yielded promising results and showed potential for future intraoperative applications. The main limitation to overcome is the interaction delay. Supplementary Information The online version contains supplementary material available at 10.1186/s12891-022-05384-9.
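The accuracy metric described above (mean error between predefined 3D target points and the locations the surgeons navigated to) can be sketched as a per-point Euclidean distance; the function name and the coordinates below are illustrative assumptions, not data from the study:

```python
import math

def mean_navigation_error(ground_truth, navigated):
    """Mean and sample SD of the per-point 3D Euclidean distance
    (e.g. in mm) between predefined targets and navigated locations."""
    errors = [math.dist(g, n) for g, n in zip(ground_truth, navigated)]
    n = len(errors)
    mean = sum(errors) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return mean, sd

# Illustrative coordinates in mm (not from the study):
targets = [(10.0, 20.0, 30.0), (40.0, 50.0, 60.0), (70.0, 80.0, 90.0)]
reached = [(13.0, 24.0, 30.0), (40.0, 50.0, 72.0), (70.0, 80.0, 90.0)]
mean_err, sd_err = mean_navigation_error(targets, reached)
```

Averaging the distances (rather than, say, per-axis errors) matches the abstract's description of accuracy as a single mean error in millimeters.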
3. Implementation Details for Controlling Contactless 3D Virtual Endoscopy. Appl Sci (Basel) 2022. DOI: 10.3390/app12115757.
Abstract
In the medical world, the innovative application of medical informatics makes possible many aspects of surgery that could not be addressed before. One of these is contactless surgery planning and control of medical data visualization. In our approach to contactless surgery, we adopted a new framework for hand and motion detection based on augmented reality. We developed a contactless interface with which a surgeon controls the visualization options of our DICOM (Digital Imaging and Communications in Medicine) viewer platform; it uses a stereo camera as the sensor input, tracks hand and finger motions in contactless mode, and was applied to 3D virtual endoscopy. In this paper, we present our proposal for defining motion parameters in contactless, incisionless surgeries. The system enables a better experience for the surgeon, more precise surgery, real-time feedback, depth motion tracking, and contactless control of visualization, which gives the surgeon freedom during the operation. We implemented motion tracking using stereo cameras with depth resolution and precise shutter sensors for depth streaming. Our solution provides contactless control at a range of up to 2–3 m, which enables its application in the operating room.
4. Kim JT, Cha YH, Yoo JI, Park CH. Touchless Control of Picture Archiving and Communication System in Operating Room Environment: A Comparative Study of Input Methods. Clin Orthop Surg 2021;13:436-446. PMID: 34484637. PMCID: PMC8380534. DOI: 10.4055/cios20004.
Abstract
Background The advancement of computer information technology could reach its full potential in operating rooms through touchless input devices. Three methods of controlling a picture archiving and communication system (PACS) were compared: a touchless input device (LMC-GW), relaying verbal instructions to another person controlling a mouse, and directly controlling a mouse. Methods Participants (n = 34; mean age, 29.6 years) were prospectively enrolled and given nine scenarios to compare the three methods. Each scenario consisted of eight tasks, which required six essential functions of PACS. Elapsed time and measurement values were recorded for objective evaluation, while subjective evaluation was conducted with a questionnaire. Results In all eight tasks, manipulation using the mouse took significantly less time than the other methods (all p < 0.05). Study selection, panning, zooming, scrolling, distance measuring, and leg length measurement took significantly less time with the LMC-GW than with relaying to another person (all p < 0.01), whereas there were no significant differences in the time required for measuring angles and windowing. Although the touchless input device provided higher accessibility and lower contamination risk, it was more difficult to handle than the other input methods (all p < 0.01). Conclusions The touchless input device provided performance superior or equal to verbal instruction in the operating room environment. Surgeons agreed that the device would be helpful for manipulating PACS in operating rooms, with less contamination risk and less disturbance of workflow. The touchless input device can become an alternative to direct manipulation of a mouse in operating rooms.
Affiliation(s)
- Jung-Taek Kim
- Department of Orthopedic Surgery, Ajou University School of Medicine, Ajou Medical Center, Suwon, Korea
- Yong-Han Cha
- Department of Orthopedic Surgery, Eulji University Hospital, Daejeon, Korea
- Jun-Il Yoo
- Department of Orthopaedic Surgery, Gyeongsang National University Hospital, Jinju, Korea
- Chan-Ho Park
- Department of Orthopedic Surgery, Yeungnam University Hospital, Daegu, Korea
5. Ezzat A, Kogkas A, Holt J, Thakkar R, Darzi A, Mylonas G. An eye-tracking based robotic scrub nurse: proof of concept. Surg Endosc 2021;35:5381-5391. PMID: 34101012. PMCID: PMC8186017. DOI: 10.1007/s00464-021-08569-w.
Abstract
BACKGROUND Within surgery, assistive robotic devices (ARD) have been reported to improve patient outcomes. An ARD can offer the surgical team a "third hand" to perform a wider range of tasks with more degrees of motion than conventional laparoscopy. We tested an eye-tracking based robotic scrub nurse (RSN) in a simulated operating room, based on a novel real-time framework for theatre-wide 3D gaze localization in a mobile fashion. METHODS Surgeons performed segmental resection of pig colon and handsewn end-to-end anastomosis while wearing eye-tracking glasses (ETG) assisted by distributed RGB-D motion sensors. To select instruments, surgeons (ST) fixed their gaze on a screen, prompting the RSN to pick up and transfer the item. The task performed with the assistance of a human scrub nurse (HSNt) was compared with the task performed with the assistance of both robotic and human scrub nurses (R&HSNt). Task load (NASA-TLX), technology acceptance (Van der Laan's), metric performance data, and team communication were measured. RESULTS Overall, 10 ST participated. NASA-TLX feedback for ST on HSNt vs R&HSNt usage revealed no significant difference in mental, physical, or temporal demands and no change in task performance. ST reported a significantly higher frustration score with R&HSNt. Van der Laan's scores showed positive usefulness and satisfaction ratings for the RSN. No significant difference in operating time was observed. CONCLUSIONS We report initial findings of our eye-tracking based RSN. It enables mobile, unrestricted, hands-free human-robot interaction intraoperatively. Importantly, the platform was deemed non-inferior to HSNt and was accepted by ST and HSN test users.
Affiliation(s)
- Ahmed Ezzat
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK.
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK.
- Alexandros Kogkas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- Ara Darzi
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
- George Mylonas
- Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK
- Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London, SW7 2AZ, UK
6. Nishihori M, Izumi T, Nagano Y, Sato M, Tsukada T, Kropp AE, Wakabayashi T. Development and clinical evaluation of a contactless operating interface for three-dimensional image-guided navigation for endovascular neurosurgery. Int J Comput Assist Radiol Surg 2021;16:663-671. PMID: 33709240. PMCID: PMC7951120. DOI: 10.1007/s11548-021-02330-3.
Abstract
Purpose In endovascular neurosurgery, the operator often acquires three-dimensional (3D) images of the cerebral vessels. Although the workstation sometimes needs to be operated again during treatment, doing so leads to time loss because sterile conditions cannot be maintained and treatment must be temporarily interrupted. A system for operating the workstation while maintaining sterility is therefore required. Methods A contactless operating interface using Kinect to control 3D images via gesture recognition was developed for endovascular neurosurgery and applied to 3D volume rendering technique (VRT) images reconstructed at the workstation. The left-hand movement determines the assigned function, whereas the right hand is moved like a computer mouse to pan and zoom in/out. In addition to the gesture interface, voice commands were assigned to digital operations such as image view changes and mode signal changes. Results The system was used during actual endovascular treatment of cerebral aneurysms and cerebral arteriovenous malformations. The operator and gestures were recognized without any problems. Using voice operation, the VRT image could be expeditiously set back to the reference angle. Gesture operations, including mouse-like operation, could be finely adjusted, and treatment was completed while maintaining sterile conditions. Conclusion A contactless operating interface was developed by combining the existing workstation system with Kinect and voice recognition software, allowing surgeons to perform a series of operations normally performed in a console room while maintaining sterile conditions. Supplementary Information The online version contains supplementary material available at 10.1007/s11548-021-02330-3.
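The control scheme in the abstract above (left hand selects the function, right hand acts like a mouse for pan/zoom, voice commands handle discrete actions such as returning to the reference angle) can be illustrated with a minimal event dispatcher. The event format, names, and zoom factor here are assumptions for illustration, not details of the published system:

```python
def make_controller():
    """Toy dispatcher for a contactless viewer: left-hand events pick
    a mode, right-hand events drive the current mode, voice events
    trigger discrete commands. Returns a handler closure."""
    state = {"mode": "pan", "view": [0.0, 0.0], "zoom": 1.0}

    def handle(event):
        kind, payload = event
        if kind == "left_hand":            # mode-selection gesture
            state["mode"] = payload        # e.g. "pan" or "zoom"
        elif kind == "right_hand":         # mouse-like 2D motion (dx, dy)
            dx, dy = payload
            if state["mode"] == "pan":
                state["view"][0] += dx
                state["view"][1] += dy
            elif state["mode"] == "zoom":
                state["zoom"] *= 1.0 + 0.01 * dy
        elif kind == "voice" and payload == "reset view":
            state["view"] = [0.0, 0.0]     # back to the reference angle
            state["zoom"] = 1.0
        return dict(state, view=list(state["view"]))  # snapshot

    return handle

handle = make_controller()
panned = handle(("right_hand", (3.0, 4.0)))   # pan is the default mode
handle(("left_hand", "zoom"))                 # left hand switches mode
zoomed = handle(("right_hand", (0.0, 50.0)))  # right hand now zooms
reset = handle(("voice", "reset view"))       # voice resets the view
```

Separating mode selection (left hand) from continuous motion (right hand) mirrors the division of labor the abstract describes, while voice handles commands that are awkward to express as gestures.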
Affiliation(s)
- Masahiro Nishihori
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan.
- Takashi Izumi
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Yoshitaka Nagano
- Department of Electronic Control and Robot Engineering, Aichi University of Technology, Gamagori, Aichi, Japan
- Masaki Sato
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Tetsuya Tsukada
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Asuka Elisabeth Kropp
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Toshihiko Wakabayashi
- Department of Neurosurgery, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
7. Bulliard J, Eggert S, Ampanozi G, Affolter R, Gascho D, Sieberth T, Thali MJ, Ebert LC. Preliminary testing of an augmented reality headset as a DICOM viewer during autopsy. Forensic Imaging 2020. DOI: 10.1016/j.fri.2020.200417.
8. Sato M, Takahashi M, Hoshino H, Terashita T, Hayashi N, Watanabe H, Ogura T. Development of an Eye-Tracking Image Manipulation System for Angiography: A Comparative Study. Acad Radiol 2020;29:1196-1205. PMID: 33158704. DOI: 10.1016/j.acra.2020.09.027.
Abstract
RATIONALE AND OBJECTIVES During interventional radiology, image manipulation on angiographic image display systems is performed by radiological technologists and/or nurses following instructions from radiologists. However, the appropriate images may not be displayed because of communication errors. We therefore developed a manipulation system that uses an eye tracker. This study aimed to determine whether an angiographic image display system can be manipulated as well with an eye tracker as with a mouse. MATERIALS AND METHODS An angiographic image display system was developed that uses an eye tracker to calculate the gaze position on the screen and the state of fixation. Fourteen radiological technologists participated in an observer study, manipulating 10 images for each of 5 typical cases frequently encountered in angiography: renal tumor, cerebral aneurysm, liver tumor, uterine bleeding, and hypersplenism. We measured the time from start to end of manipulating a series of images with the eye tracker and with a conventional mouse. Statistical processing was performed using Excel, R, and RStudio. RESULTS The average time required by all observers to complete all cases was significantly shorter with the eye tracker than with the mouse (10.4 ± 2.1 s vs 16.9 ± 2.6 s; p < 0.001 by paired t-test). CONCLUSION Radiologists were able to manipulate an angiographic image display system directly with the newly developed eye tracker system, without touching contact devices such as a mouse or the angiography console. Communication errors could therefore be avoided.
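The reported comparison (10.4 ± 2.1 s with the eye tracker vs 16.9 ± 2.6 s with the mouse, paired t-test) follows the standard paired-differences form, since each observer was measured under both input methods. A minimal sketch, with invented per-observer times rather than the study's raw data:

```python
import math
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for per-observer
    differences a[i] - b[i] (same observer under both conditions)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Invented per-observer completion times in seconds (not study data):
eye_tracker = [9.8, 11.2, 8.9, 12.0, 10.5]
mouse       = [16.1, 17.8, 15.2, 18.4, 16.9]
t, df = paired_t(eye_tracker, mouse)  # large negative t: eye tracker faster
```

The t statistic is then compared against the t distribution with n - 1 degrees of freedom to obtain the p value.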
Affiliation(s)
- Mitsuru Sato
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, 323-1 Kamioki-cho, Maebashi, Gunma, 371-0052, Japan; Department of Radiology, Japan Red Cross Society Maebashi Hospital, Gunma, Japan.
- Minoru Takahashi
- Department of Radiology, Japan Red Cross Society Maebashi Hospital, Gunma, Japan
- Hiromitsu Hoshino
- Department of Radiology, Japan Red Cross Society Maebashi Hospital, Gunma, Japan
- Takayoshi Terashita
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, 323-1 Kamioki-cho, Maebashi, Gunma, 371-0052, Japan
- Norio Hayashi
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, 323-1 Kamioki-cho, Maebashi, Gunma, 371-0052, Japan
- Haruyuki Watanabe
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, 323-1 Kamioki-cho, Maebashi, Gunma, 371-0052, Japan
- Toshihiro Ogura
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, 323-1 Kamioki-cho, Maebashi, Gunma, 371-0052, Japan
9. Alcaraz-Mateos E, Turic I, Nieto-Olivares A, Pérez-Ramos M, Poblet E. Head-tracking as an interface device for image control in digital pathology: a comparative study. Rev Esp Patol 2020;53:213-217. PMID: 33012490. PMCID: PMC7343653. DOI: 10.1016/j.patol.2020.05.007.
Abstract
BACKGROUND Since the conventional mouse is not an ideal input device for digital pathology, the aim of this study was to evaluate alternative systems with the goal of identifying a natural user interface (NUI) for controlling whole slide images (WSI). DESIGN Four pathologists evaluated three webcam-based, head-tracking mouse emulators: Enable Viacam (eViacam, CREA Software), Nouse (JLG Health Solutions Inc), and Camera Mouse (CM Solutions Inc). Twenty dermatopathological WSI cases were randomly selected and examined with Image Viewer (Ventana, AZ, USA). The NASA-TLX was used to rate the perceived workload of using these systems, and time was recorded. In addition, a satisfaction survey was administered. RESULTS The mean total time needed for diagnosis with Camera Mouse, eViacam, and Nouse was 18 min 57 s, 19 min 37 s, and 22 min 32 s, respectively (57, 59, and 68 seconds per case). The NASA-TLX workload score, where lower scores are better, was 42.1 for eViacam, 53.3 for Nouse, and 60.62 for Camera Mouse. This correlated with the pathologists' degree of satisfaction on a scale of 1-5: 3.4 for eViacam, 3 for Nouse, and 2 for Camera Mouse (p < 0.05). CONCLUSIONS Head-tracking systems enable pathologists to control the computer cursor and virtual slides without their hands, using only a webcam as an input device. Of the three software solutions examined, eViacam performed best, followed by Nouse and, finally, Camera Mouse. Further studies integrating other systems, in conjunction with software development, should be performed to identify the ideal device for digital pathology.
Affiliation(s)
- Eduardo Alcaraz-Mateos
- Department of Pathology, Hospital Universitario Morales Meseguer, Av. Marqués de los Vélez s/n, 30008 Murcia, Spain.
- Iva Turic
- Faculty of Medicine, University of Split, Poljička cesta 35, 21000 Split, Croatia
- Andrés Nieto-Olivares
- Department of Pathology, Hospital Universitario Morales Meseguer, Av. Marqués de los Vélez s/n, 30008 Murcia, Spain
- Miguel Pérez-Ramos
- Department of Pathology, Hospital Universitario Morales Meseguer, Av. Marqués de los Vélez s/n, 30008 Murcia, Spain
- Enrique Poblet
- Department of Pathology, Hospital Universitario Reina Sofía, Av. Intendente Jorge Palacios 1, 30003 Murcia, Spain
10.
Abstract
This work presents a novel design of a 3D user interface for an immersive virtual reality desktop and an empirical analysis of the proposed interface under three interaction modes. The novel dual-layer 3D user interface allows users to interact with multiple screens portrayed within a curved 360-degree effective field of view. A downward gaze lets the user raise the interaction layer, which facilitates several traditional desktop tasks. The interface is analyzed using three different interaction modes: point-and-click, controller-based direct manipulation, and a gesture-based user interface. A comprehensive user study within a mixed-methods approach covers the usability and user-experience analysis of all three interaction modes. Each interaction mode is quantitatively and qualitatively analyzed for simple and compound tasks in both standing and seated positions. The mixed approach crafted for this study allows data to be collected and the viability of the new 3D user interface to be evaluated and validated. The results are used to draw conclusions about the suitability of the interaction modes for a variety of tasks in an immersive virtual reality 3D desktop environment.
11. Artificial Intelligence in Interventional Radiology: A Literature Review and Future Perspectives. J Oncol 2019;2019:6153041. PMID: 31781215. PMCID: PMC6874978. DOI: 10.1155/2019/6153041.
Abstract
The term "artificial intelligence" (AI) covers computational algorithms that can perform tasks considered typical of human intelligence, with partial to complete autonomy, to produce new beneficial outputs from specific inputs. The development of AI is largely based on artificial neural networks (ANN), which introduced the concepts of "computational learning models," machine learning (ML), and deep learning (DL). AI applications appear promising for radiology, potentially improving lesion detection, segmentation, and interpretation, with recent applications also in interventional radiology (IR) practice, including the ability of AI to offer prognostic information to both patients and physicians about interventional oncology procedures. This article integrates evidence reported in the literature and experience-based perceptions to assist not only residents and fellows training in interventional radiology but also practicing colleagues approaching locoregional minimally invasive treatments.
12. Massaroni C, Giurazza F, Tesei M, Schena E, Corvino F, Meneo M, Corletti L, Niola R, Setola R. A touchless system for image visualization during surgery: preliminary experience in clinical settings. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:5794-5797. PMID: 30441652. DOI: 10.1109/embc.2018.8513631.
Abstract
Today clinicians can access large medical datasets, but very few systems have been designed to allow practical and efficient data exploration directly in critical medical environments such as operating rooms (OR). This work assesses a Surgery Touchless System (STS) in laboratory tests and in clinical settings. The system allows clinicians to interact with medical images using two approaches: gesture recognition and voice recognition, based on a Microsoft Kinect and a selective microphone, respectively. The STS allows navigation of a specifically designed interface to perform several tasks, among others manipulating biomedical images. We assessed both recognition approaches in the laboratory with 5 users. In addition, the STS was tested with only the voice-based approach in clinical settings; this assessment was performed during three procedures by two interventional radiologists. The five volunteers and the two radiologists filled in two questionnaires to assess the system. Usability was evaluated positively in the laboratory tests. The clinical trials showed that both radiologists considered the STS safe and useful: they used the system an average of 10 and 15 times per patient, respectively, and found it useful. These promising results suggest that the system can provide information not otherwise accessible and limit the impact of human error during the operation. Future work will focus on using the STS on a larger number and a wider variety of procedures.
13. Preference elicitation: Obtaining gestural guidelines for PACS in neurosurgery. Int J Med Inform 2019;130:103934. PMID: 31437619. DOI: 10.1016/j.ijmedinf.2019.07.013.
Abstract
OBJECTIVE Accessing medical records is an integral part of neurosurgical procedures in the operating room (OR). Gestural interfaces can help reduce infection risks by allowing the surgical staff to browse Picture Archiving and Communication Systems (PACS) without touch. The main objectives of this work are to: a) elicit gestures from neurosurgeons to analyze their preferences, b) develop heuristics for gestural interfaces, and c) produce a lexicon that maximizes surgeons' preferences. MATERIALS AND METHODS A gesture elicitation study was conducted with nine neurosurgeons. Initially, subjects were asked to outline a gesture on a drawing board for each of the PACS commands. Next, the subjects performed one of three imaging tasks using gestures instead of the keyboard and mouse. Each gesture was annotated with respect to the presence or absence of gesture descriptors, and a K-nearest-neighbor approach was then used to obtain the final lexicon that complies with the preferred/popular descriptors. RESULTS The elicitation study resulted in nine gesture lexicons, each comprising 28 gestures. A paired t-test showed that the popularity of the top three descriptors was significantly higher than that of the overall gesture (59.7%-89.5% vs 19.4%, p < 0.001), meaning more than half of the subjects agreed on these descriptors. Gesture heuristics were then generated for each command using the popular descriptors. Lastly, we developed a lexicon that complies with the surgeons' preferences. CONCLUSIONS Neurosurgeons do agree on fundamental characteristics of gestures for image manipulation tasks. The proposed heuristics could guide the development of future gesture-based interaction with PACS in the OR.
14. Alvarez-Lopez F, Maina MF, Saigí-Rubió F. Use of Commercial Off-The-Shelf Devices for the Detection of Manual Gestures in Surgery: Systematic Literature Review. J Med Internet Res 2019;21:e11925. PMID: 31066679. PMCID: PMC6533048. DOI: 10.2196/11925.
Abstract
Background The increasingly pervasive presence of technology in the operating room raises the need to study the interaction between the surgeon and computer system. A new generation of tools known as commercial off-the-shelf (COTS) devices enabling touchless gesture–based human-computer interaction is currently being explored as a solution in surgical environments. Objective The aim of this systematic literature review was to provide an account of the state of the art of COTS devices in the detection of manual gestures in surgery and to identify their use as a simulation tool for motor skills teaching in minimally invasive surgery (MIS). Methods For this systematic literature review, a search was conducted in PubMed, Excerpta Medica dataBASE, ScienceDirect, Espacenet, OpenGrey, and the Institute of Electrical and Electronics Engineers databases. Articles published between January 2000 and December 2017 on the use of COTS devices for gesture detection in surgical environments and in simulation for surgical skills learning in MIS were evaluated and selected. Results A total of 3180 studies were identified, 86 of which met the search selection criteria. Microsoft Kinect (Microsoft Corp) and the Leap Motion Controller (Leap Motion Inc) were the most widely used COTS devices. The most common intervention was image manipulation in surgical and interventional radiology environments, followed by interaction with virtual reality environments for educational or interventional purposes. The possibility of using this technology to develop portable low-cost simulators for skills learning in MIS was also examined. 
As most of the articles identified in this systematic review were proof-of-concept, prototype user testing, or feasibility testing studies, we concluded that the field was still in the exploratory phase in areas requiring touchless manipulation within environments and settings that must adhere to asepsis and antisepsis protocols, such as angiography suites and operating rooms. Conclusions COTS devices applied to hand and instrument gesture-based interfaces in the field of simulation for skills learning and training in MIS could open up a promising avenue toward ubiquitous training and presurgical warm-up.
Affiliation(s)
- Fernando Alvarez-Lopez
- Faculty of Health Sciences, Universitat Oberta de Catalunya, Barcelona, Spain; Faculty of Health Sciences, Universidad de Manizales, Caldas, Colombia
- Marcelo Fabián Maina
- Faculty of Psychology and Education Sciences, Universitat Oberta de Catalunya, Barcelona, Spain
15
Bachmann D, Weichert F, Rinkenauer G. Review of Three-Dimensional Human-Computer Interaction with Focus on the Leap Motion Controller. Sensors (Basel) 2018; 18:2194. [PMID: 29986517] [PMCID: PMC6068627] [DOI: 10.3390/s18072194]
Abstract
Modern hardware and software development has led to an evolution of user interfaces from command-line to natural user interfaces for virtual immersive environments. Gestures imitating real-world interaction tasks increasingly replace classical two-dimensional interfaces based on Windows/Icons/Menus/Pointers (WIMP) or touch metaphors. The purpose of this paper is therefore to survey state-of-the-art Human-Computer Interaction (HCI) techniques with a focus on the special field of three-dimensional interaction. This includes an overview of currently available interaction devices, their fields of application and the underlying methods for gesture design and recognition. The focus is on interfaces based on the Leap Motion Controller (LMC) and corresponding methods of gesture design and recognition. Finally, a review of evaluation methods for the proposed natural user interfaces is given.
Affiliation(s)
- Daniel Bachmann
- Department of Computer Science VII, TU Dortmund University, 44221 Dortmund, Germany
- Frank Weichert
- Department of Computer Science VII, TU Dortmund University, 44221 Dortmund, Germany
- Gerhard Rinkenauer
- Leibniz Research Centre for Working Environment and Human Factors, 44139 Dortmund, Germany
16
Madapana N, Gonzalez G, Rodgers R, Zhang L, Wachs JP. Gestures for Picture Archiving and Communication Systems (PACS) operation in the operating room: Is there any standard? PLoS One 2018; 13:e0198092. [PMID: 29894481] [PMCID: PMC5997313] [DOI: 10.1371/journal.pone.0198092]
Abstract
Objective Gestural interfaces allow accessing and manipulating Electronic Medical Records (EMR) in hospitals while maintaining a completely sterile environment. In particular, in the Operating Room (OR), these interfaces enable surgeons to browse the Picture Archiving and Communication System (PACS) without delegating functions to the surgical staff. Existing gesture-based medical interfaces rely on a suboptimal and arbitrarily small set of gestures that are mapped to a few commands available in PACS software. The objective of this work is to discuss a method to determine the most suitable set of gestures based on surgeons' acceptability. To achieve this goal, the paper introduces two key innovations: (a) a novel methodology to incorporate gestures' semantic properties into the agreement analysis, and (b) a new agreement metric to determine the most suitable gesture set for a PACS. Materials and methods Three neurosurgical diagnostic tasks were conducted by nine neurosurgeons. The set of commands and gesture lexicons were determined using a Wizard of Oz paradigm. The gestures were decomposed into a set of 55 semantic properties based on the motion trajectory, orientation and pose of the surgeons' hands, and their ground-truth values were manually annotated. Finally, a new agreement metric was developed, using the well-known Jaccard similarity to measure consensus among users over a gesture set. Results A set of 34 PACS commands was found to be a sufficient number of actions for PACS manipulation. In addition, a level of agreement of 0.29 was found among the surgeons over the gestures. Two statistical tests, a paired t-test and the Mann-Whitney-Wilcoxon test, were conducted between the proposed metric and the traditional agreement metric. The agreement values computed using the former metric were found to be significantly higher (p < 0.001) for both tests.
Conclusions This study reveals that the level of agreement among surgeons over the best gestures for PACS operation is higher than the previously reported metric (0.29 vs 0.13). This observation is based on the fact that the agreement focuses on main features of the gestures rather than the gestures themselves. The level of agreement is not very high, yet indicates a majority preference, and is better than using gestures based on authoritarian or arbitrary approaches. The methods described in this paper provide a guiding framework for the design of future gesture based PACS systems for the OR.
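The property-based agreement idea in this abstract can be sketched as a mean pairwise Jaccard similarity over the surgeons' per-command property sets. This is an illustrative reconstruction, not the authors' code: the property names, the example sets, and the simple averaging over pairs are assumptions.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets of semantic gesture properties."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def agreement(gestures_per_surgeon):
    """Mean pairwise Jaccard similarity over all surgeon pairs for one command.

    gestures_per_surgeon: one set of semantic properties (motion trajectory,
    orientation, hand pose, ...) per surgeon.
    """
    pairs = list(combinations(gestures_per_surgeon, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical property sets for a "zoom in" command from three surgeons.
zoom_in = [
    {"two_hands", "palms_facing", "move_apart"},
    {"two_hands", "palms_facing", "move_apart", "fast"},
    {"one_hand", "pinch_open"},
]
score = agreement(zoom_in)
```

Because agreement is computed on shared properties rather than on identical whole gestures, two gestures that differ only in speed still contribute partial agreement, which is why a property-based metric tends to score higher than a traditional exact-match metric.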
Affiliation(s)
- Naveen Madapana
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, United States of America
- Glebys Gonzalez
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, United States of America
- Richard Rodgers
- Goodman Campbell Brain and Spine, Indianapolis, Indiana, United States of America
- Lingsong Zhang
- Department of Statistics, Purdue University, West Lafayette, Indiana, United States of America
- Juan P. Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, United States of America
17
Abstract
The widespread use of technology in hospitals and the difficulty of sterilising computer controls has increased opportunities for the spread of pathogens. This leads to an interest in touchless user interfaces for computer systems. We present a review of touchless interaction with computer equipment in the hospital environment, based on a systematic search of the literature. Sterility provides an implied theme and motivation for the field as a whole, but other advantages, such as hands-busy settings, are also proposed. Overcoming hardware restrictions has been a major theme, but in recent research, technical difficulties have receded. Image navigation is the most frequently considered task and the operating room the most frequently considered environment. Gestures have been implemented for input, system and content control. Most of the studies found have small sample sizes and focus on feasibility, acceptability or gesture-recognition accuracy. We conclude this article with an agenda for future work.
18
Sánchez-Margallo FM, Sánchez-Margallo JA, Moyano-Cuevas JL, Pérez EM, Maestre J. Use of natural user interfaces for image navigation during laparoscopic surgery: initial experience. Minim Invasive Ther Allied Technol 2017; 26:253-261. [PMID: 28349758] [DOI: 10.1080/13645706.2017.1304964]
Abstract
BACKGROUND Surgical environments require special aseptic conditions, which constrain direct interaction with the preoperative images. We aim to test the feasibility of using a set of gesture-control sensors combined with voice control to interact in a sterile manner with preoperative information and an integrated operating room (OR) during laparoscopic surgery. MATERIAL AND METHODS Two hepatectomies and two partial nephrectomies were performed by three experienced surgeons in a porcine model. The Kinect, Leap Motion, and MYO armband in combination with voice control were used as natural user interfaces (NUIs). After surgery, the surgeons completed a questionnaire about their experience. RESULTS Surgeons required <10 min training with each NUI. They stated that the NUIs improved access to preoperative patient information and kept them more focused on the surgical site. The Kinect system was reported as the most physically demanding NUI, and the MYO armband in combination with voice commands as the most intuitive and accurate. The need to release one of the laparoscopic instruments in order to use the NUIs was identified as the main limitation. CONCLUSIONS The presented NUIs make it feasible to interact directly, in a more intuitive and sterile manner, with the preoperative images and the integrated OR functionalities during laparoscopic surgery.
Affiliation(s)
- Juan A Sánchez-Margallo
- Bioengineering and Health Technologies Unit, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
- José L Moyano-Cuevas
- Bioengineering and Health Technologies Unit, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
- Eva María Pérez
- Department of Surgery, University of Extremadura, Cáceres, Spain
- Juan Maestre
- General Surgery Unit, Jesús Usón Minimally Invasive Surgery Centre, Cáceres, Spain
19
Appleby R, Zur Linden A, Sears W. Intraoperative image navigation: experimental study of the feasibility and surgeon preference between a sterile encased Nintendo Wii remote and standard wireless computer mouse. Vet Radiol Ultrasound 2017; 58:266-272. [PMID: 28176448] [DOI: 10.1111/vru.12479]
Abstract
Diagnostic imaging plays an important role in the operating room, providing surgeons with a reference and a surgical plan. Surgeon autonomy in the operating room has been suggested to decrease errors that stem from communication mistakes. In this prospective survey study, a standard computer mouse was compared to a wireless remote-control-style game console controller (Wiimote) for the navigation of diagnostic imaging studies by sterile personnel. Participants were recruited from a cohort of residents and faculty who use the surgical suites at our institution. Outcome assessments were based on survey data completed by study participants following each use of either the mouse or the Wiimote, and compared using an analysis of variance. The mouse was significantly preferred by the study participants in the categories of handling, accuracy and efficiency, and overall satisfaction (P < 0.05). The mouse was preferred to both the Wiimote and to no device when participants were asked to rank options for image navigation. This indicates the need for the implementation of intraoperative image navigation devices to increase surgeon autonomy in the operating room.
Affiliation(s)
- Ryan Appleby
- Department of Clinical Studies, Ontario Veterinary College, Guelph, ON N1G 2W1, Canada
- Alex Zur Linden
- Department of Clinical Studies, Ontario Veterinary College, Guelph, ON N1G 2W1, Canada
- William Sears
- Department of Population Medicine, Ontario Veterinary College, Guelph, ON N1G 2W1, Canada
20
Hennersperger C, Fuerst B, Virga S, Zettinig O, Frisch B, Neff T, Navab N. Towards MRI-Based Autonomous Robotic US Acquisitions: A First Feasibility Study. IEEE Trans Med Imaging 2017; 36:538-548. [PMID: 27831861] [DOI: 10.1109/tmi.2016.2620723]
Abstract
Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured-light scanners, the initially planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and of 0.97 mm after an online update of the calibration based on closed-loop registration.
21
Mewes A, Hensen B, Wacker F, Hansen C. Touchless interaction with software in interventional radiology and surgery: a systematic literature review. Int J Comput Assist Radiol Surg 2016; 12:291-305. [PMID: 27647327] [DOI: 10.1007/s11548-016-1480-6]
Abstract
PURPOSE In this article, we systematically examine the current state of research on systems that focus on touchless human-computer interaction in operating rooms and interventional radiology suites. We further discuss the drawbacks of current solutions and underline promising technologies for future development. METHODS A systematic literature search was performed for scientific papers that deal with touchless control of medical software in the immediate environment of the operating room and interventional radiology suite. This includes methods for touchless gesture interaction, voice control and eye tracking. RESULTS Fifty-five research papers were identified and analyzed in detail, including 33 journal publications. Most of the identified literature (62 %) deals with the control of medical image viewers. The others present interaction techniques for laparoscopic assistance (13 %), telerobotic assistance and operating room control (9 % each), as well as robotic operating room assistance and intraoperative registration (3.5 % each). Only 8 systems (14.5 %) were tested in a real clinical environment, and 7 (12.7 %) were not evaluated at all. CONCLUSION In the last 10 years, many advancements have led to robust touchless interaction approaches. However, only a few have been systematically evaluated in real operating room settings. Further research is required to overcome the current limitations of touchless software interfaces in clinical environments. The main challenges for future research are improving and evaluating the usability and intuitiveness of touchless human-computer interaction, full integration into productive systems, reducing the number of necessary interaction steps, and further developing hands-free interaction.
Affiliation(s)
- André Mewes
- Faculty of Computer Science, University of Magdeburg, Magdeburg, Germany
- Bennet Hensen
- Institute for Diagnostic and Interventional Radiology, Medical School Hanover, Hanover, Germany
- Frank Wacker
- Institute for Diagnostic and Interventional Radiology, Medical School Hanover, Hanover, Germany
- Christian Hansen
- Faculty of Computer Science, University of Magdeburg, Magdeburg, Germany
22
Wan Hassan WN, Abu Kassim NL, Jhawar A, Shurkri NM, Kamarul Baharin NA, Chan CS. User acceptance of a touchless sterile system to control virtual orthodontic study models. Am J Orthod Dentofacial Orthop 2016; 149:567-78. [PMID: 27021461] [DOI: 10.1016/j.ajodo.2015.10.018]
Abstract
INTRODUCTION In this article, we present an evaluation of user acceptance of our innovative hand-gesture-based touchless sterile system for interaction with and control of a set of 3-dimensional digitized orthodontic study models using the Kinect motion-capture sensor (Microsoft, Redmond, Wash). METHODS The system was tested on a cohort of 201 participants. Using our validated questionnaire, the participants evaluated 7 hand-gesture-based commands that allowed the user to adjust the model in size, position, and aspect and to switch the image on the screen to view the maxillary arch, the mandibular arch, or the models in occlusion. Participants' responses were assessed using Rasch analysis so that their perceptions of the usefulness of the hand gestures for the commands could be directly referenced against their acceptance of the gestures. Their perceptions of the potential value of this system for cross-infection control were also evaluated. RESULTS Most participants endorsed these commands as accurate. Our designated hand gestures for these commands were generally accepted. We also found a positive and significant correlation between our participants' level of awareness of cross-infection and their endorsement of using this system in clinical practice. CONCLUSIONS This study supports the adoption of this promising development for a sterile, touch-free patient record-management system.
Affiliation(s)
- Wan Nurazreena Wan Hassan
- Senior lecturer, Department of Paediatric Dentistry and Orthodontics and Clinical Craniofacial Dentistry Research Group, Faculty of Dentistry, University of Malaya, Kuala Lumpur, Malaysia
- Noor Lide Abu Kassim
- Associate professor, Department of Community Health and Health Care for Mass Gathering, Umm Al Qura University, Mecca, Saudi Arabia; and Kuliyyah of Dentistry, International Islamic University Malaysia, Kuantan Campus, Pahang Darul Makmur, Malaysia
- Abhishek Jhawar
- Research assistant, Department of Paediatric Dentistry and Orthodontics, Faculty of Dentistry and Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia
- Norsyafiqah Mohd Shurkri
- Research student, Department of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, University of Malaya, Kuala Lumpur, Malaysia
- Nur Azreen Kamarul Baharin
- Research student, Department of Paediatric Dentistry and Orthodontics, Faculty of Dentistry, University of Malaya, Kuala Lumpur, Malaysia
- Chee Seng Chan
- Senior lecturer, Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia
23
Di Tommaso L, Aubry S, Godard J, Katranji H, Pauchot J. [A new human machine interface in neurosurgery: the Leap Motion®. Technical note regarding a new touchless interface]. Neurochirurgie 2016; 62:178-81. [PMID: 27234915] [DOI: 10.1016/j.neuchi.2016.01.006]
Abstract
Currently, cross-sectional imaging viewing is used in routine practice, whereas the surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). This type of contact entails a risk of breaking asepsis and causes loss of time. The recent appearance of devices such as the Leap Motion® (Leap Motion Inc., San Francisco, USA), a sensor that enables interaction with the computer without any physical contact, is of major interest in the field of surgery. However, its configuration and ergonomics pose key challenges in adapting it to the practitioner's requirements, the imaging software and the surgical environment. This article aims to suggest an easy configuration of the Leap Motion® in neurosurgery on a PC for optimized use with Carestream® Vue PACS v11.3.4 (Carestream Health, Inc., Rochester, USA) using a plug-in (to download at: https://drive.google.com/?usp=chrome_app#folders/0B_F4eBeBQc3ybElEeEhqME5DQkU) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk).
Affiliation(s)
- L Di Tommaso
- Neurosurgery Department, CHU Jean-Minjoz, 25030 Besançon, France
- S Aubry
- Musculoskeletal Imaging Department, CHU Jean-Minjoz, 25030 Besançon, France; EA 4268I4S IFR 133 Inserm, research unit, 25030 Besançon, France; Université de Franche-Comté, 25000 Besançon, France
- J Godard
- Neurosurgery Department, CHU Jean-Minjoz, 25030 Besançon, France
- H Katranji
- Neurosurgery Department, CHU Jean-Minjoz, 25030 Besançon, France
- J Pauchot
- EA 4268I4S IFR 133 Inserm, research unit, 25030 Besançon, France; Université de Franche-Comté, 25000 Besançon, France; Department of Orthopedic, Trauma, Plastic, Aesthetic, Reconstructive and Hand Surgery, CHU Jean-Minjoz, 25030 Besançon, France
24
Wipfli R, Dubois-Ferrière V, Budry S, Hoffmeyer P, Lovis C. Gesture-Controlled Image Management for Operating Room: A Randomized Crossover Study to Compare Interaction Using Gestures, Mouse, and Third Person Relaying. PLoS One 2016; 11:e0153596. [PMID: 27082758] [PMCID: PMC4833285] [DOI: 10.1371/journal.pone.0153596]
Abstract
OBJECTIVE In this work, we aim to formally compare three different interaction modes for image manipulation that are usable in a surgical setting: 1) a gesture-controlled approach using the Kinect®; 2) oral instructions to a third party dedicated to manipulating the images; and 3) direct manipulation using a mouse. MATERIALS AND METHODS Each participant used the radiology image viewer Weasis with the three interaction modes. In a crossover randomized controlled trial, participants were assigned blockwise to six experimental groups. For each group, the order for testing the three modes was randomly assigned. Nine standardized scenarios were used. RESULTS 30 physicians and senior medical students participated in the experiment. Efficiency, measured as the time needed to complete a scenario, was best with the mouse (M = 109.10 s, SD = 25.96), followed by gesture control (M = 214.97 s, SD = 46.29) and oral instructions (M = 246.33 s, SD = 76.50). Satisfaction, measured by a questionnaire, was rated highest for the mouse (M = 6.63, SD = 0.56), followed by gesture control (M = 5.77, SD = 0.93) and oral instructions (M = 4.40, SD = 1.71). Differences in efficiency and satisfaction ratings were significant. No significant difference in effectiveness, measured as error rates, was found. DISCUSSION The study shows, with a formal evaluation, that the use of gestures is advantageous over instructions to a third person. In particular, the use of gestures is more efficient than verbalizing instructions. The given gestures could be learned easily, and the reliability of the tested gesture-control system is good. CONCLUSION Under the premise that a mouse cannot be used directly during surgery, gesture-controlled approaches prove superior to oral instructions for image manipulation.
Affiliation(s)
- Rolf Wipfli
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Victor Dubois-Ferrière
- Division of Orthopaedics and Trauma Surgery, Geneva University Hospitals, Geneva, Switzerland
- Sylvain Budry
- University of Geneva, Faculty of Medicine, Geneva, Switzerland
- Pierre Hoffmeyer
- Division of Orthopaedics and Trauma Surgery, Geneva University Hospitals, Geneva, Switzerland; University of Geneva, Faculty of Medicine, Geneva, Switzerland
- Christian Lovis
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland; University of Geneva, Faculty of Medicine, Geneva, Switzerland
25
Alvarez-Lopez F, Maina MF, Saigí-Rubió F. Natural User Interfaces: Is It a Solution to Accomplish Ubiquitous Training in Minimally Invasive Surgery? Surg Innov 2016; 23:429-30. [PMID: 27009688] [DOI: 10.1177/1553350616639145]
Affiliation(s)
- Fernando Alvarez-Lopez
- Universitat Oberta de Catalunya, Barcelona, Spain; Universidad de Manizales, Manizales, Colombia
26
Device- and system-independent personal touchless user interface for operating rooms: one personal UI to control all displays in an operating room. Int J Comput Assist Radiol Surg 2016; 11:853-61. [PMID: 26984551] [DOI: 10.1007/s11548-016-1375-6]
Abstract
INTRODUCTION In the modern-day operating room, the surgeon performs surgery with the support of different medical systems that display patient information, physiological data, and medical images. Numerous interactions must typically be performed by the surgical team to control the corresponding medical system and retrieve the desired information. Joysticks and physical keys are still present in the operating room due to the disadvantages of mice, and surgeons often communicate instructions to the surgical team when requiring information from a specific medical system. In this paper, a novel user interface is developed that allows the surgeon to personally perform touchless interaction with the various medical systems and to switch effortlessly among them, all without modifying the systems' software or hardware. METHODS To achieve this, a wearable RGB-D sensor is mounted on the surgeon's head for inside-out tracking of his/her finger relative to any of the medical systems' displays. Android devices running a special application are connected to the computers on which the medical systems run, simulating a normal USB mouse and keyboard. When the surgeon interacts using pointing gestures, the desired cursor position on the targeted medical system display and the gestures are transformed into generic events and sent to the corresponding Android device. Finally, the application running on the Android device generates the corresponding mouse or keyboard events for the targeted medical system. RESULTS AND CONCLUSION To simulate an operating room setting, the user interface was tested by seven medical participants who performed several interactions with visualizations of CT, MRI, and fluoroscopy images at varying distances. Results from the System Usability Scale and the NASA-TLX workload index indicated strong acceptance of the proposed user interface.
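The core of the event-forwarding pipeline described in this abstract is mapping a tracked pointing position on a display to a device-independent event that the receiving side can translate into native mouse input. A minimal sketch, in which the normalized (u, v) coordinates, the `GenericEvent` type, and the clamping behavior are all assumptions rather than details from the paper:

```python
from dataclasses import dataclass

@dataclass
class GenericEvent:
    """Display-independent event, later translated into a native mouse event."""
    kind: str   # e.g. "move" or "click"
    x_px: int
    y_px: int

def pointing_to_event(u: float, v: float, width_px: int, height_px: int,
                      kind: str = "move") -> GenericEvent:
    """Map a normalized pointing position (u, v) in [0, 1]^2 on the targeted
    display to pixel coordinates, clamping out-of-range values."""
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return GenericEvent(kind,
                        round(u * (width_px - 1)),
                        round(v * (height_px - 1)))

# A pointing gesture at the center-top quarter of a hypothetical 1920x1080 display.
event = pointing_to_event(0.5, 0.25, 1920, 1080, kind="click")
```

Keeping the forwarded event generic is what makes the approach display- and system-independent: only the final translation into platform-native mouse or keyboard input needs to know about the targeted system.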
27
Pauchot J, Di Tommaso L, Lounis A, Benassarou M, Mathieu P, Bernot D, Aubry S. Leap Motion Gesture Control With Carestream Software in the Operating Room to Control Imaging: Installation Guide and Discussion. Surg Innov 2015; 22:615-20. [PMID: 26002115] [DOI: 10.1177/1553350615587992]
Abstract
Nowadays, routine cross-sectional imaging viewing during a surgical procedure requires physical contact with an interface (mouse or touch-sensitive screen). Such contact risks breaking asepsis and causes loss of time. Devices such as the recently introduced Leap Motion (Leap Motion Inc., San Francisco, CA), which enables interaction with the computer without any physical contact, are of wide interest in the field of surgery, but configuration and ergonomics are key challenges for the practitioner, the imaging software, and the surgical environment. This article aims to suggest an easy configuration of the Leap Motion on a PC for optimized use with Carestream Vue PACS v11.3.4 (Carestream Health, Inc, Rochester, NY) using a plug-in (to download at https://drive.google.com/open?id=0B_F4eBeBQc3yNENvTXlnY09qS00&authuser=0) and a video tutorial (https://www.youtube.com/watch?v=yVPTgxg-SIk). Videos of the surgical procedure and a discussion of this innovative gesture-control technology and its various configurations are also provided.
Affiliation(s)
- Julien Pauchot
- Orthopedic, Traumatology, Aesthetic, Plastic, Reconstructive and Hand Surgery Unit, University Hospital of Besançon, Besançon, France
- Laetitia Di Tommaso
- Neurosurgery Department, University Hospital of Besançon, University of Franche-Comté, Besançon, France
- Ahmed Lounis
- Department of Musculoskeletal Imaging, University Hospital of Besançon, University of Franche-Comté, Besançon, France
- Mourad Benassarou
- Maxillofacial and Stomatology Department, University Hospital of Besançon, University of Franche-Comté, Besançon, France
- Pierre Mathieu
- Liver Transplantation and Digestive Surgery Unit, University Hospital of Besançon, University of Franche-Comté, Besançon, France
- Dominique Bernot
- Informatics Department, University Hospital of Besançon, University of Franche-Comté, Besançon, France
- Sébastien Aubry
- Department of Musculoskeletal Imaging, University Hospital of Besançon, University of Franche-Comté, Besançon, France
28
Mewes A, Saalfeld P, Riabikin O, Skalej M, Hansen C. A gesture-controlled projection display for CT-guided interventions. Int J Comput Assist Radiol Surg 2015; 11:157-64. [DOI: 10.1007/s11548-015-1215-0]
29
Chao C, Tan J, Castillo EM, Zawaideh M, Roberts AC, Kinney TB. Comparative efficacy of new interfaces for intra-procedural imaging review: the Microsoft Kinect, Hillcrest Labs Loop Pointer, and the Apple iPad. J Digit Imaging 2015; 27:463-9. [PMID: 24706159] [DOI: 10.1007/s10278-014-9687-y]
Abstract
We adapted and evaluated the Microsoft Kinect (touchless interface), the Hillcrest Labs Loop Pointer (gyroscopic mouse), and the Apple iPad (multi-touch tablet) for intra-procedural imaging review efficacy in a simulation using MIM Software DICOM viewers. Using each device, 29 radiologists executed five basic interactions to complete the overall task of measuring an 8.1-cm hepatic lesion: scroll, window, zoom, pan, and measure. For each interaction, participants assessed the devices on a 3-point subjective scale (3 = highest usability score). The five individual scores were summed to calculate a subjective composite usability score (maximum 15 points). Overall task time to completion was recorded. Each user also assessed each device for its potential to jeopardize a sterile field. The composite usability scores were as follows: Kinect 9.9 (out of 15.0; SD = 2.8), Loop Pointer 12.9 (SD = 13.5), and iPad 13.5 (SD = 1.8). Mean task completion times were as follows: Kinect 156.7 s (SD = 86.5), Loop Pointer 51.5 s (SD = 30.6), and iPad 41.1 s (SD = 25.3). The mean hepatic lesion measurements were as follows: Kinect 7.3 cm (SD = 0.9), Loop Pointer 7.8 cm (SD = 1.1), and iPad 8.2 cm (SD = 1.2). The mean deviations from the true hepatic lesion measurement were as follows: Kinect 1.0 cm, and for both the Loop Pointer and iPad, 0.9 cm (SD = 0.7). The Kinect raised the least and the iPad the most subjective concern about compromising the sterile field. A new intra-operative imaging review interface may be near. Most of those surveyed foresee these devices as useful in procedures, and most do not anticipate problems with maintaining a sterile field. An ideal device would combine the iPad's usability and accuracy with the Kinect's touchless aspect.
Affiliation: Cherng Chao, UCSD Medical Center, 200 West Arbor Drive, San Diego, 92103-8756, USA.
30. Bergen T, Wittenberg T. Stitching and Surface Reconstruction From Endoscopic Image Sequences: A Review of Applications and Methods. IEEE J Biomed Health Inform 2014;20:304-21. [PMID: 25532214] [DOI: 10.1109/jbhi.2014.2384134]
Abstract
Endoscopic procedures form part of routine clinical practice for minimally invasive examinations and interventions. While they are beneficial for the patient, reducing surgical trauma and making convalescence times shorter, they make orientation and manipulation more challenging for the physician, due to the limited field of view through the endoscope. However, this drawback can be reduced by means of medical image processing and computer vision, using image stitching and surface reconstruction methods to expand the field of view. This paper provides a comprehensive overview of the current state of the art in endoscopic image stitching and surface reconstruction. The literature in the relevant fields of application and algorithmic approaches is surveyed. The technological maturity of the methods and current challenges and trends are analyzed.
31. Mobile markerless augmented reality and its application in forensic medicine. Int J Comput Assist Radiol Surg 2014;10:573-86. [PMID: 25149272] [DOI: 10.1007/s11548-014-1106-9]
Abstract
PURPOSE: During autopsy, forensic pathologists today mostly rely on visible indication, tactile perception and experience to determine the cause of death. Although computed tomography (CT) data is often available for the bodies under examination, these data are rarely used due to the lack of radiological workstations in the pathological suite. The data may prevent the forensic pathologist from damaging evidence by allowing him to associate, for example, external wounds with internal injuries. To facilitate this, we propose a new multimodal approach for intuitive visualization of forensic data and evaluate its feasibility.
METHODS: A range camera is mounted on a tablet computer and positioned in such a way that the camera simultaneously captures depth and color information of the body. A server estimates the camera pose based on surface registration of CT and depth data to allow for augmented reality visualization of the internal anatomy directly on the tablet. Additionally, projection of color information onto the CT surface is implemented.
RESULTS: We validated the system in a postmortem pilot study using fiducials attached to the skin, quantifying a mean target registration error of [Formula: see text] mm.
CONCLUSIONS: The system is mobile, markerless, intuitive and real-time capable with sufficient accuracy. It can support the forensic pathologist during autopsy with augmented reality and textured surfaces. Furthermore, the system enables multimodal documentation for presentation in court. Despite its preliminary prototype status, it has high potential due to its low price and simplicity.
32. Gong RH, Güler Ö, Kürklüoglu M, Lovejoy J, Yaniv Z. Interactive initialization of 2D/3D rigid registration. Med Phys 2014;40:121911. [PMID: 24320522] [DOI: 10.1118/1.4830428]
Abstract
PURPOSE: Registration is one of the key technical components of an image-guided navigation system. A large number of 2D/3D registration algorithms have been proposed, but they have not transitioned into clinical practice. The authors identify the primary reason for the lack of adoption as the prerequisite of a sufficiently accurate initial transformation, i.e., a mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting.
METHODS: The authors developed two interactive registration methods based on visual alignment of a preoperative image, MR or CT, to intraoperative x-rays. In the first approach, the operator uses a gesture-based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation.
RESULTS: In the authors' experiments, for x-ray/MR registration the gesture-based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had an mTRE of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture-based method resulted in an mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had an mTRE of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s.
CONCLUSIONS: Based on this evaluation, the authors conclude that both registration approaches are sufficiently accurate for initializing 2D/3D registration in the OR setting, both when a tracking system is not in use (gesture-based approach) and when one is already in use (AR-based approach).
Affiliation: Ren Hui Gong, The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010.
33. O’Hara K, Gonzalez G, Penney G, Sellen A, Corish R, Mentis H, Varnavas A, Criminisi A, Rouncefield M, Dastur N, Carrell T. Interactional Order and Constructed Ways of Seeing with Touchless Imaging Systems in Surgery. Comput Support Coop Work 2014. [DOI: 10.1007/s10606-014-9203-4]
34. Bizzotto N, Costanzo A, Bizzotto L, Regis D, Sandri A, Magnan B. Leap Motion Gesture Control With OsiriX in the Operating Room to Control Imaging. Surg Innov 2014;21:655-6. [PMID: 24742500] [DOI: 10.1177/1553350614528384]
Affiliation: Dario Regis, Andrea Sandri, Bruno Magnan, Azienda Ospedaliera Universitaria Integrata, Verona, Italy.
35. Flach PM, Gascho D, Schweitzer W, Ruder TD, Berger N, Ross SG, Thali MJ, Ampanozi G. Imaging in forensic radiology: an illustrated guide for postmortem computed tomography technique and protocols. Forensic Sci Med Pathol 2014;10:583-606. [DOI: 10.1007/s12024-014-9555-6]
36. Iannessi A, Marcy PY, Clatz O, Fillard P, Ayache N. Touchless intra-operative display for interventional radiologist. Diagn Interv Imaging 2014;95:333-7. [DOI: 10.1016/j.diii.2013.09.007]
37. Farhadi-Niaki F, Etemad SA, Arya A. Design and Usability Analysis of Gesture-Based Control for Common Desktop Tasks. Human-Computer Interaction: Interaction Modalities and Techniques 2013. [DOI: 10.1007/978-3-642-39330-3_23]
38. Bhatia A, Patel S, Pantol G, Wu YY, Plitnikas M, Hancock C. Intra and Inter-Observer Reliability of Mobile Tablet PACS Viewer System vs. Standard PACS Viewing Station - Diagnosis of Acute Central Nervous System Events. Open J Radiol 2013. [DOI: 10.4236/ojrad.2013.32014]
39.
40. Tan JH, Chao C, Zawaideh M, Roberts AC, Kinney TB. Informatics in Radiology: developing a touchless user interface for intraoperative image control during interventional radiology procedures. Radiographics 2012;33:E61-70. [PMID: 23264282] [DOI: 10.1148/rg.332125101]
Abstract
Review of prior and real-time patient images is critical during an interventional radiology procedure; however, it often poses the challenge of efficiently reviewing images while maintaining a sterile field. Although interventional radiologists can "scrub out" of the procedure, use sterile console covers, or verbally relay directions to an assistant, the ability of the interventionalist to directly control the images without having to touch the console could offer potential gains in terms of sterility, procedure efficiency, and radiation reduction. The authors investigated a potential solution with a low-cost, touch-free motion-tracking device that was originally designed as a video game controller. The device tracks a person's skeletal frame and its motions, a capacity that was adapted to allow manipulation of medical images by means of hand gestures. A custom software program called the Touchless Radiology Imaging Control System translates motion information obtained with the motion-tracking device into commands to review images on a workstation. To evaluate this system, 29 radiologists at the authors' institution were asked to perform a set of standardized tasks during a routine abdominal computed tomographic study. Participants evaluated the device for its efficacy as well as its possible advantages and disadvantages. The majority (69%) of those surveyed believed that the device could be useful in an interventional radiology practice and did not foresee problems with maintaining a sterile field. This proof-of-concept prototype and study demonstrate the potential utility of the motion-tracking device for enhancing imaging-guided treatment in the interventional radiology suite while maintaining a sterile field. Supplemental material available at http://radiographics.rsna.org/lookup/suppl/doi:10.1148/rg.332125101/-/DC1.
Affiliation: Justin H Tan, Department of Radiology, University of California San Diego Medical Center, 200 W Arbor Dr, San Diego, CA 92103-8756, USA.
41. Jacob MG, Wachs JP, Packer RA. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images. J Am Med Inform Assoc 2012;20:e183-6. [PMID: 23250787] [DOI: 10.1136/amiajnl-2012-001212]
Abstract
This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand-gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and that this modality could therefore replace keyboard- and mouse-based interfaces.
Affiliation: Mithun George Jacob, School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907, USA.
42. Kranzfelder M, Staub C, Fiolka A, Schneider A, Gillen S, Wilhelm D, Friess H, Knoll A, Feussner H. Toward increased autonomy in the surgical OR: needs, requests, and expectations. Surg Endosc 2012;27:1681-8. [PMID: 23239307] [DOI: 10.1007/s00464-012-2656-y]
Affiliation: Michael Kranzfelder, Department of Surgery, Klinikum Rechts der Isar, Technische Universität München, 81675 München, Germany.
43. Feasibility of touch-less control of operating room lights. Int J Comput Assist Radiol Surg 2012;8:259-68. [PMID: 22806717] [DOI: 10.1007/s11548-012-0778-2]
Abstract
PURPOSE: Today's highly technical operating rooms lead to fairly complex surgical workflows in which the surgeon has to interact with a number of devices, including the operating room light. Ideally, the surgeon could direct the light without major disruption of his work. We studied whether gesture-tracking-based control of an automated operating room light is feasible.
METHODS: So far, there has been little research on control approaches for operating lights. We implemented an exemplary setup to mimic an automated light controlled by a gesture tracking system. The setup includes an articulated arm to position the light source and an off-the-shelf RGBD camera to detect the user interaction. We assessed the tracking performance using a robot-mounted hand phantom and ran a number of tests with 18 volunteers to evaluate the potential of touch-less light control.
RESULTS: All test persons were comfortable with using the gesture-based system and quickly learned how to move a light spot on a flat surface. The hand tracking error is direction-dependent and in the range of several centimeters, with a standard deviation of less than 1 mm orthogonal and up to 3.5 mm parallel to the finger orientation. However, the subjects had no problems following even more complex paths with a width of less than 10 cm. The average speed was 0.15 m/s, and even initially slow subjects improved over time. Gestures to initiate control can be performed in approximately 2 s. Two-thirds of the subjects considered gesture control to be simple, and a majority considered it to be rather efficient.
CONCLUSIONS: Implementation of an automated operating room light with touch-less control using an RGBD camera for gesture tracking is feasible. The remaining tracking error does not prevent smooth control, and the use of the system is intuitive even for inexperienced users.
44. Intention, Context and Gesture Recognition for Sterile MRI Navigation in the Operating Room. Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications 2012. [DOI: 10.1007/978-3-642-33275-3_27]