1
Matinfar S, Dehghani S, Salehi M, Sommersperger M, Navab N, Faridpooya K, Fairhurst M, Navab N. From tissue to sound: A new paradigm for medical sonic interaction design. Med Image Anal 2025; 103:103571. [PMID: 40222195] [DOI: 10.1016/j.media.2025.103571]
Abstract
Medical imaging maps tissue characteristics into image intensity values, enhancing human perception. However, comprehending this data, especially in high-stakes scenarios such as surgery, is prone to errors. Additionally, current multimodal methods do not fully leverage this valuable data in their design. We introduce "From Tissue to Sound," a new paradigm for medical sonic interaction design. This paradigm establishes a comprehensive framework for mapping tissue characteristics to auditory displays, providing dynamic and intuitive access to medical images that complement visual data, thereby enhancing multimodal perception. "From Tissue to Sound" provides an advanced and adaptable framework for the interactive sonification of multimodal medical imaging data. This framework employs a physics-based sound model composed of a network of multiple oscillators, whose mechanical properties, such as friction and stiffness, are defined by tissue characteristics extracted from imaging data. This approach enables the representation of anatomical structures and the creation of unique acoustic profiles in response to excitations of the sound model. This method allows users to explore data at a fundamental level, identifying tissue characteristics ranging from rigid to soft, dense to sparse, and structured to scattered. It facilitates intuitive discovery of both general and detailed patterns with minimal preprocessing. Unlike conventional methods that transform low-dimensional data into global sound features through a parametric approach, this method utilizes model-based unsupervised mapping between data and an anatomical sound model, enabling high-dimensional data processing. The versatility of this method is demonstrated through feasibility experiments confirming the generation of perceptually discernible acoustic signals. Furthermore, we present a novel application developed based on this framework for retinal surgery. This new paradigm opens up possibilities for designing multisensory applications for multimodal imaging data. It also facilitates the creation of interactive sonification models with various auditory causality approaches, enhancing both directness and richness.
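The core mapping idea, tissue properties driving the mechanical parameters of a bank of oscillators whose summed impulse response forms the acoustic profile, can be sketched as follows. This is a minimal illustration rather than the authors' implementation; the intensity-to-stiffness and intensity-to-friction mappings and all numeric ranges are assumptions chosen for demonstration.

```python
import numpy as np

def tissue_to_oscillator_bank(intensities, fs=44100, duration=0.5):
    """Map normalized image intensities (0..1) to damped oscillators and render an impulse response.

    Each intensity value drives one mass-spring-damper: brighter (denser) tissue is
    mapped to a stiffer spring (higher pitch) and lower friction (longer ring).
    These mappings are illustrative assumptions, not the published model.
    """
    t = np.arange(int(fs * duration)) / fs
    out = np.zeros_like(t)
    for x in np.clip(intensities, 0.0, 1.0):
        stiffness = 200.0 + 1800.0 * x          # natural frequency in Hz (assumed range)
        friction = 8.0 * (1.0 - x) + 1.0        # exponential decay rate (assumed range)
        out += np.exp(-friction * t) * np.sin(2 * np.pi * stiffness * t)
    return out / max(len(intensities), 1)

# Example: a small "tissue patch" of voxel intensities excited by a single impulse.
patch = np.array([0.1, 0.4, 0.8, 0.9])
signal = tissue_to_oscillator_bank(patch)
```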
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany.
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Michael Sommersperger
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, Canada
- Merle Fairhurst
- Centre for Tactile Internet with Human-in-the-Loop, Technical University of Dresden, Dresden, Germany
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, Munich, Germany
2
Egger J, Gsaxner C, Luijten G, Chen J, Chen X, Bian J, Kleesiek J, Puladi B. Is the Apple Vision Pro the Ultimate Display? A First Perspective and Survey on Entering the Wonderland of Precision Medicine. JMIR Serious Games 2024; 12:e52785. [PMID: 39292499] [PMCID: PMC11447423] [DOI: 10.2196/52785]
Abstract
At the Worldwide Developers Conference in June 2023, Apple introduced the Vision Pro. The Apple Vision Pro (AVP) is a mixed reality headset; more specifically, it is a virtual reality device with an additional video see-through capability. The video see-through capability turns the AVP into an augmented reality (AR) device. The AR feature is enabled by streaming the real world via cameras on the (virtual reality) screens in front of the user's eyes. This is, of course, not unique and is similar to other devices, such as the Varjo XR-3 (Varjo Technologies Oy). Nevertheless, the AVP has some interesting features, such as an inside-out screen that can show the headset wearer's eyes to "outsiders," and a button on the top, called the "digital crown," that allows a seamless blend of digital content with the user's physical space by turning it. In addition, it is untethered, except for the cable to the battery, which makes the headset more agile, compared to the Varjo XR-3. This could actually come closer to "The Ultimate Display," which Ivan Sutherland had already sketched in 1965. After a great response from the media and social networks to the release, we were able to test and review the new AVP ourselves in March 2024. Including an expert survey with 13 of our colleagues after testing the AVP in our institute, this Viewpoint explores whether the AVP can overcome clinical challenges that AR especially still faces in the medical domain; we also go beyond this and discuss whether the AVP could support clinicians in essential tasks to allow them to spend more time with their patients.
Affiliation(s)
- Jan Egger
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Center for Virtual and Extended Reality in Medicine (ZvRM), Essen University Hospital (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- Christina Gsaxner
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Department of Oral and Maxillofacial Surgery & Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Gijs Luijten
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften (ISAS), Dortmund, Germany
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Jiang Bian
- Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, United States
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine, Essen University Hospital (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), University Medicine Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK), Partner Site Essen, Essen, Germany
- Department of Physics, TU Dortmund University, Dortmund, Germany
- Behrus Puladi
- Department of Oral and Maxillofacial Surgery & Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
- Institute of Medical Informatics, University Hospital RWTH Aachen, Aachen, Germany
3
Spiller M, Esmaeili N, Sühn T, Boese A, Turial S, Gumbs AA, Croner R, Friebe M, Illanes A. Enhancing Veress Needle Entry with Proximal Vibroacoustic Sensing for Automatic Identification of Peritoneum Puncture. Diagnostics (Basel) 2024; 14:1698. [PMID: 39125574] [PMCID: PMC11311580] [DOI: 10.3390/diagnostics14151698]
Abstract
Laparoscopic access, a critical yet challenging step in surgical procedures, often leads to complications. Existing systems, such as improved Veress needles and optical trocars, offer limited safety benefits but come with elevated costs. In this study, a prototype of a novel technology for guiding needle interventions based on vibroacoustic signals is evaluated in porcine cadavers. The prototype consistently detected successful abdominal cavity entry in 100% of cases during 193 insertions across eight porcine cadavers. The high signal quality allowed for the precise identification of all Veress needle insertion phases, including peritoneum puncture. The findings suggest that this vibroacoustic-based guidance technology could enhance surgeons' situational awareness and provide valuable support during laparoscopic access. Unlike existing solutions, this technology does not require sensing elements in the instrument's tip and remains compatible with medical instruments from various manufacturers.
Affiliation(s)
- Moritz Spiller
- SURAG Medical GmbH, 04229 Leipzig, Germany
- Nazila Esmaeili
- SURAG Medical GmbH, 04229 Leipzig, Germany
- Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University of Munich, 85748 Munich, Germany
- Thomas Sühn
- SURAG Medical GmbH, 04229 Leipzig, Germany
- Department of Orthopaedic Surgery, Otto-von-Guericke University Magdeburg, 39106 Magdeburg, Germany
- Axel Boese
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39106 Magdeburg, Germany
- Salmai Turial
- Department of Pediatric Surgery and Pediatric Traumatology, University Clinic for General, Visceral, Vascular and Transplant Surgery, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Andrew A. Gumbs
- University Clinic for General, Visceral, Vascular and Transplant Surgery, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Advanced & Minimally Invasive Surgery Excellence Center, American Hospital Tbilisi, 0102 Tbilisi, Georgia
- Roland Croner
- University Clinic for General, Visceral, Vascular and Transplant Surgery, University Hospital Magdeburg, 39120 Magdeburg, Germany
- Michael Friebe
- INKA—Innovation Laboratory for Image Guided Therapy, Otto-von-Guericke University Magdeburg, 39106 Magdeburg, Germany
- Faculty of Computer Science, AGH University of Science and Technology, 30-059 Krakow, Poland
- Center for Innovation, Business Development & Entrepreneurship, FOM University of Applied Sciences, 45141 Essen, Germany
- Alfredo Illanes
- SURAG Medical GmbH, 04229 Leipzig, Germany
4
Remschmidt B, Rieder M, Gsaxner C, Gaessler J, Payer M, Wallner J. Augmented Reality-Guided Apicoectomy Based on Maxillofacial CBCT Scans. Diagnostics (Basel) 2023; 13:3037. [PMID: 37835780] [PMCID: PMC10572956] [DOI: 10.3390/diagnostics13193037]
Abstract
Implementation of augmented reality (AR) image guidance systems using preoperative cone beam computed tomography (CBCT) scans in apicoectomies promises to help surgeons overcome iatrogenic complications associated with this procedure. This study aims to evaluate the intraoperative feasibility and usability of HoloLens 2, an established AR image guidance device, in the context of apicoectomies. Three experienced surgeons carried out four AR-guided apicoectomies each on human cadaver head specimens. Preparation and operating times of each procedure, as well as the subjective usability of HoloLens for AR image guidance in apicoectomies using the System Usability Scale (SUS), were measured. In total, twelve AR-guided apicoectomies on six human cadaver head specimens were performed (n = 12). The average preparation time amounted to 162 (±34) s. The surgical procedure itself took on average 9 (±2) min. There was no statistically significant difference between the three surgeons. Quantification of the usability of HoloLens revealed a mean SUS score of 80.4 (±6.8), indicating an "excellent" usability level. In conclusion, this study implies the suitability, practicality, and simplicity of AR image guidance systems such as the HoloLens in apicoectomies and advocates their routine implementation.
Affiliation(s)
- Bernhard Remschmidt
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, 8010 Graz, Austria
- Marcus Rieder
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, 8010 Graz, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria
- Jan Gaessler
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
- Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, 8010 Graz, Austria
- Michael Payer
- Division of Oral Surgery and Orthodontics, Department of Dental Medicine and Oral Health, Medical University of Graz, 8010 Graz, Austria
- Juergen Wallner
- Division of Oral and Maxillofacial Surgery, Department of Dental Medicine and Oral Health, Medical University of Graz, 8036 Graz, Austria
5
Matinfar S, Salehi M, Suter D, Seibold M, Dehghani S, Navab N, Wanivenhaus F, Fürnstahl P, Farshad M, Navab N. Sonification as a reliable alternative to conventional visual surgical navigation. Sci Rep 2023; 13:5930. [PMID: 37045878] [PMCID: PMC10097653] [DOI: 10.1038/s41598-023-32778-z]
Abstract
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and their integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel four-DOF sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a sonification solution for alignment tasks in four degrees of freedom, based on frequency modulation synthesis. We compared the resulting accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while decreasing the surgeon's need to focus on visual navigation displays instead of the natural focus on surgical tools and targeted anatomy during task execution.
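As a rough illustration of how alignment errors in four degrees of freedom could drive frequency modulation synthesis, the sketch below maps each normalized error to one FM parameter (carrier frequency, modulation frequency, modulation index, amplitude). The specific mappings and ranges are assumptions for demonstration; the paper's actual parameterization is not reproduced here.

```python
import numpy as np

def fm_alignment_tone(errors, fs=44100, duration=0.3):
    """Render one FM tone from four normalized alignment errors (0 = aligned, 1 = far off).

    Assumed illustrative mapping: translational errors drive carrier and modulator
    frequencies, angular errors drive modulation index and amplitude.
    """
    e1, e2, e3, e4 = [float(np.clip(e, 0.0, 1.0)) for e in errors]
    t = np.arange(int(fs * duration)) / fs
    carrier = 440.0 * (1.0 + e1)            # carrier frequency rises with error 1
    modulator = 2.0 + 60.0 * e2             # modulation frequency from error 2
    index = 0.5 + 8.0 * e3                  # modulation index (brightness) from error 3
    amplitude = 0.2 + 0.8 * e4              # loudness from error 4
    return amplitude * np.sin(2 * np.pi * carrier * t + index * np.sin(2 * np.pi * modulator * t))

# Example: mostly aligned except for a moderate error in the second degree of freedom.
tone = fm_alignment_tone([0.1, 0.5, 0.3, 0.2])
```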
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany.
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany.
- Mehrdad Salehi
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Daniel Suter
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Matthias Seibold
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Shervin Dehghani
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
- Nuklearmedizin rechts der Isar, Technical University of Munich, 81675, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, H3G 2W1, Canada
- Florian Wanivenhaus
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Philipp Fürnstahl
- Research in Orthopedic Computer Science (ROCS), Balgrist University Hospital, University of Zurich, Balgrist Campus, 8008, Zurich, Switzerland
- Mazda Farshad
- Department of Orthopaedics, Balgrist University Hospital, 8008, Zurich, Switzerland
- Nassir Navab
- Computer Aided Medical Procedures (CAMP), Technical University of Munich, 85748, Munich, Germany
6
Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. [PMID: 36706637] [DOI: 10.1016/j.media.2023.102757]
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main player in the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until the year 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy including use case, technical methodology for registration and tracking, data sources, visualization as well as validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, where AR-enhanced medical simulators emerge as a promising technology. While concerns about human-computer interactions, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications; nevertheless, only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature, to pave the way for novel, innovative directions and translation into the medical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
7
Usevitch DE, Bronheim RS, Reyes MC, Babilonia C, Margalit A, Jain A, Armand M. Review of Enhanced Handheld Surgical Drills. Crit Rev Biomed Eng 2023; 51:29-50. [PMID: 37824333] [PMCID: PMC10874117] [DOI: 10.1615/critrevbiomedeng.2023049106]
Abstract
The handheld drill has been used as a conventional surgical tool for centuries. Alongside the recent successes of surgical robots, the development of new and enhanced medical drills has improved surgeon ability without requiring the high cost and time-consuming setup that plague medical robot systems. This work provides an overview of enhanced handheld surgical drill research, focusing on systems that include some form of image guidance and do not require additional hardware that physically supports or guides drilling. The reviewed systems are grouped by their main contribution into audio-, visual-, and hardware-enhanced drills. A vision for future work to enhance handheld drilling systems is also discussed.
Affiliation(s)
- David E. Usevitch
- Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Rachel S. Bronheim
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Miguel C. Reyes
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Carlos Babilonia
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Adam Margalit
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Amit Jain
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
- Mehran Armand
- Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD, United States
- Department of Orthopedic Surgery, Johns Hopkins University, Baltimore, MD, United States
8
Ziemer T, Schultheis H. PAMPAS: A PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification. Front Neurosci 2022; 16:930944. [PMID: 36277997] [PMCID: PMC9583394] [DOI: 10.3389/fnins.2022.930944]
Abstract
The sonification of data to communicate information to a user is a relatively new approach that established itself around the 1990s. To date, many researchers have designed their individual sonification from scratch. There are no standards in sonification design and evaluation. But researchers and practitioners have formulated several requirements and established several methods. There is a wide consensus that psychoacoustics could play an important role in the sonification design and evaluation phase. But this requires a) an adaptation of psychoacoustic methods to the signal types of sonification and b) a preparation of the sonification for the psychoacoustic experiment procedure. In this method paper, we present a PsychoAcoustical Method for the Perceptual Analysis of multidimensional Sonification (PAMPAS) dedicated to the researchers of sonification. A well-defined and well-established, efficient, reliable, and replicable just noticeable difference (JND) experiment using the maximum likelihood procedure (MLP) serves as the basis to achieve perceptual linearity of parameter mapping during the sonification design stage and to identify and quantify perceptual effects during the sonification evaluation stage, namely the perceptual resolution, hysteresis effects and perceptual interferences. The experiment results are scores from a standardized data space and a standardized procedure. These scores can serve to compare multiple sonification designs of a single researcher or even among different research groups. This method can supplement other sonification design and evaluation methods from a perceptual viewpoint.
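The abstract's central tool is an adaptive JND experiment. The sketch below uses a simple 2-down/1-up staircase as a simplified stand-in for the maximum likelihood procedure (MLP) named in the paper; the step size, starting difference, stopping rule, and the toy listener model are all assumptions for illustration.

```python
import random

def staircase_jnd(respond, start=0.2, step=0.02, floor=0.0, reversals_needed=8, max_trials=500):
    """Estimate a just noticeable difference with a 2-down/1-up adaptive staircase.

    Simplified stand-in for the MLP described in the paper. `respond(delta)` must
    return True when the listener correctly detects a parameter difference `delta`.
    """
    delta, correct_in_row, reversals, last_direction, trials = start, 0, [], None, 0
    while len(reversals) < reversals_needed and trials < max_trials:
        trials += 1
        if respond(delta):
            correct_in_row += 1
            if correct_in_row == 2:                       # two correct in a row -> harder
                correct_in_row = 0
                if last_direction == "up":
                    reversals.append(delta)               # direction flip = reversal point
                delta, last_direction = max(floor, delta - step), "down"
        else:                                             # one wrong -> easier
            correct_in_row = 0
            if last_direction == "down":
                reversals.append(delta)
            delta, last_direction = delta + step, "up"
    return sum(reversals) / len(reversals) if reversals else delta

# Toy listener whose detection probability grows with the presented difference.
jnd = staircase_jnd(lambda d: random.random() < min(0.99, 0.5 + 2.5 * d))
```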
9
10
Heinrich F, Joeres F, Lawonn K, Hansen C. Comparison of Projective Augmented Reality Concepts to Support Medical Needle Insertion. IEEE Trans Vis Comput Graph 2019; 25:2157-2167. [PMID: 30892210] [DOI: 10.1109/tvcg.2019.2903942]
Abstract
Augmented reality (AR) is a promising tool to improve instrument navigation in needle-based interventions. Limited research has been conducted regarding suitable navigation visualizations. In this work, three navigation concepts based on existing approaches were compared in a user study using a projective AR setup. Each concept was implemented with three different scales for accuracy-to-color mapping and two methods of navigation indicator scaling. Participants were asked to perform simulated needle insertion tasks with each of the resulting 18 prototypes. Insertion angle and insertion depth accuracies were measured and analyzed, as well as task completion time and participants' subjectively perceived task difficulty. Results show a clear ranking of visualization concepts across variables. Less consistent results were obtained for the color and indicator scaling factors. Results suggest that logarithmic indicator scaling achieved better accuracy, but participants perceived it to be more difficult than linear scaling. With specific results for angle and depth accuracy, our study contributes to the future composition of improved navigation support and systems for precise needle insertion or similar applications.
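The linear versus logarithmic indicator scaling compared in this study can be illustrated with a small mapping function; the error range, normalization, and scaling constants below are assumptions rather than the study's actual values.

```python
import math

def indicator_size(error_mm, max_error_mm=50.0, mode="linear"):
    """Map a navigation error to a normalized indicator size in [0, 1].

    Linear scaling changes the indicator proportionally to the error, whereas
    logarithmic scaling expands small errors, giving finer resolution near the
    target (the study found log scaling more accurate but perceived as harder).
    """
    e = min(max(error_mm, 0.0), max_error_mm) / max_error_mm
    if mode == "linear":
        return e
    return math.log1p(9.0 * e) / math.log(10.0)   # log curve passing through (0, 0) and (1, 1)

# Compare how both scalings react to small and large errors.
for err in (1, 5, 25, 50):
    print(err, round(indicator_size(err, mode="linear"), 3), round(indicator_size(err, mode="log"), 3))
```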
11
Plazak J, DiGiovanni DA, Collins DL, Kersten-Oertel M. Cognitive load associations when utilizing auditory display within image-guided neurosurgery. Int J Comput Assist Radiol Surg 2019; 14:1431-1438. [PMID: 30997635] [DOI: 10.1007/s11548-019-01970-w]
Abstract
PURPOSE The combination of data visualization and auditory display (e.g., sonification) has been shown to increase accuracy, and reduce perceived difficulty, within 3D navigation tasks. While accuracy within such tasks can be measured in real time, subjective impressions about the difficulty of a task are more elusive to obtain. Prior work utilizing electroencephalography (EEG) has found robust support that cognitive load and working memory can be monitored in real time using EEG data. METHODS In this study, we replicated a 3D navigation task (within the context of image-guided surgery) while recording data pertaining to participants' cognitive load through the use of EEG relative alpha-band weighting data. Specifically, 13 subjects navigated a tracked surgical tool to randomly placed 3D virtual locations on a CT cerebral angiography volume while being aided by visual, aural, or both visual and aural feedback. During the study EEG data were captured from the participants, and after the study a NASA TLX questionnaire was filled out by the subjects. In addition to replicating an existing experimental design on auditory display within image-guided neurosurgery, our primary aim sought to determine whether EEG-based markers of cognitive load mirrored subjective ratings of task difficulty. RESULTS Similar to existing literature, our study found evidence consistent with the hypothesis that auditory display can increase the accuracy of navigating to a specified target. We also found significant differences in cognitive working load across different feedback modalities, though none of these differences supported the experiment's hypotheses. Finally, we found mixed results regarding the relationship between real-time measurements of cognitive workload and a posteriori subjective impressions of task difficulty. CONCLUSIONS Although we did not find a significant correlation between the subjective and physiological measurements, differences in cognitive working load were found. As well, our study further supports the use of auditory display in image-guided surgery.
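A common way to obtain the relative alpha-band weighting used here as a cognitive-load marker is to divide alpha-band power by broadband power estimated from the EEG spectrum; the sketch below does this with a Welch periodogram. The band limits, window length, and single-channel handling are generic assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def relative_alpha_power(eeg, fs=256, alpha=(8.0, 12.0), broadband=(1.0, 40.0)):
    """Compute relative alpha power for one EEG channel (1-D array of samples).

    Relative alpha = power in the alpha band divided by power in a broad band;
    increased working load is typically associated with reduced relative alpha.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)        # 2-second Welch windows
    alpha_mask = (freqs >= alpha[0]) & (freqs <= alpha[1])
    broad_mask = (freqs >= broadband[0]) & (freqs <= broadband[1])
    return np.trapz(psd[alpha_mask], freqs[alpha_mask]) / np.trapz(psd[broad_mask], freqs[broad_mask])

# Example on synthetic data: a 10 Hz rhythm buried in noise.
fs = 256
t = np.arange(30 * fs) / fs
channel = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(relative_alpha_power(channel, fs=fs))
```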
Affiliation(s)
- Joseph Plazak
- Gina Cody School of Engineering and Computer Science, Concordia University, EV 3.301, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada
- D Louis Collins
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Marta Kersten-Oertel
- Gina Cody School of Engineering and Computer Science, Concordia University, EV 3.301, 1455 De Maisonneuve Blvd. W., Montreal, QC, H3G 1M8, Canada
12
Matinfar S, Nasseri MA, Eck U, Kowalsky M, Roodaki H, Navab N, Lohmann CP, Maier M, Navab N. Surgical soundtracks: automatic acoustic augmentation of surgical procedures. Int J Comput Assist Radiol Surg 2018; 13:1345-1355. [PMID: 30054775] [DOI: 10.1007/s11548-018-1827-2]
Abstract
PURPOSE Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich with many sources of information all presented through the same channel pose the risk of overstimulation and missing crucial information. The augmentation of the cognitive field by additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, to avoid alarm fatigue. METHODS In this work, without loss of generality to other procedures, we propose a method for aural augmentation of medical procedures via automatic modification of musical pieces. RESULTS Evaluations of this concept regarding recognizability of the conveyed information along with qualitative aesthetics show the potential of our method. CONCLUSION In this paper, we proposed a novel sonification method for automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to successfully convey useful information. This work opens a path for advanced sonification techniques in the operating room, in order to complement traditional visual displays and convey information more efficiently.
Affiliation(s)
- Sasan Matinfar
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany.
- M Ali Nasseri
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Augenklinik rechts der Isar, Technische Universität München, Munich, Germany
- Ulrich Eck
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Michael Kowalsky
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Hessam Roodaki
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Navid Navab
- Topological Media Lab, Concordia University, Montreal, Canada
- Chris P Lohmann
- Augenklinik rechts der Isar, Technische Universität München, Munich, Germany
- Mathias Maier
- Augenklinik rechts der Isar, Technische Universität München, Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technische Universität München, Munich, Germany
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
13
Plazak J, Drouin S, Collins L, Kersten-Oertel M. Distance sonification in image-guided neurosurgery. Healthc Technol Lett 2017; 4:199-203. [PMID: 29184665] [PMCID: PMC5683246] [DOI: 10.1049/htl.2017.0074]
Abstract
Image-guided neurosurgery, or neuronavigation, has been used to visualise the location of a surgical probe by mapping the probe location to pre-operative models of a patient's anatomy. One common limitation of this approach is that it requires the surgeon to divert their attention away from the patient and towards the neuronavigation system. In order to improve this type of application, the authors designed a system that sonifies (i.e. provides audible feedback of) distance information between a surgical probe and the location of the anatomy of interest. A user study (n = 15) was completed to determine the utility of sonified distance information within an existing neuronavigation platform (Intraoperative Brain Imaging System (IBIS) Neuronav). The authors' results were consistent with the idea that combining auditory distance cues with existing visual information from image-guided surgery systems may result in greater accuracy when locating specified points on a pre-operative scan, thereby potentially reducing the extent of the required surgical openings, as well as potentially increasing the precision of individual surgical tasks. Further, the authors' results were also consistent with the hypothesis that combining auditory and visual information reduces the perceived difficulty in locating a target location within a three-dimensional volume.
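A minimal version of distance sonification, mapping the probe-to-target distance to the pitch of a short repeating tone, could look like the sketch below; the frequency range, distance cap, and update scheme are illustrative assumptions rather than the IBIS implementation.

```python
import numpy as np

def distance_to_tone(distance_mm, max_distance_mm=60.0, fs=44100, duration=0.15):
    """Render a short tone whose pitch rises as the probe approaches the target.

    Distance is clamped to [0, max_distance_mm] and mapped inversely to frequency,
    so small distances produce high, urgent-sounding pitches (assumed mapping).
    """
    proximity = 1.0 - min(max(distance_mm, 0.0), max_distance_mm) / max_distance_mm
    frequency = 220.0 + proximity * (1760.0 - 220.0)     # 220 Hz (far) .. 1760 Hz (at target)
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * frequency * t) * np.hanning(t.size)   # fade in/out to avoid clicks

# Example: tones for a probe moving from 50 mm to 5 mm away from the target.
tones = [distance_to_tone(d) for d in (50, 30, 15, 5)]
```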
Affiliation(s)
- Joseph Plazak
- Department of Computer Science and Software Engineering & PERFORM Centre, Concordia University, Montréal, Canada
- School of Music, Illinois Wesleyan University, Bloomington, IL, USA
- Simon Drouin
- McConnell Brain Imaging Centre, Montreal Neuro, McGill University, Montréal, Canada
- Louis Collins
- McConnell Brain Imaging Centre, Montreal Neuro, McGill University, Montréal, Canada
- Marta Kersten-Oertel
- Department of Computer Science and Software Engineering & PERFORM Centre, Concordia University, Montréal, Canada
14
Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction. Int J Comput Assist Radiol Surg 2017; 13:37-45. [PMID: 29079993] [DOI: 10.1007/s11548-017-1677-3]
Abstract
PURPOSE The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. METHODS An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. RESULTS When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. CONCLUSION Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
15
Black D, Hansen C, Nabavi A, Kikinis R, Hahn H. A Survey of auditory display in image-guided interventions. Int J Comput Assist Radiol Surg 2017; 12:1665-1676. [PMID: 28275890] [PMCID: PMC5591070] [DOI: 10.1007/s11548-017-1547-z]
Abstract
PURPOSE This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation. METHODS We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia. RESULTS We identified 15 publications in the course of the search. Most of the literature (60%) investigates the use of auditory display to convey distance of a tracked instrument to an object using proximity or safety margins. The remainder discuss continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, 11 present laboratory evaluations, and 3 present informal evaluation (2 present both laboratory and clinical evaluations). CONCLUSION Auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.
Affiliation(s)
- David Black
- Medical Image Computing, University of Bremen, Bremen, Germany.
- Jacobs University, Bremen, Germany.
- Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany.
- Arya Nabavi
- International Neuroscience Institute, Hannover, Germany
- Ron Kikinis
- Medical Image Computing, University of Bremen, Bremen, Germany
- Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Horst Hahn
- Jacobs University, Bremen, Germany
- Fraunhofer Institute for Medical Image Computing MEVIS, Bremen, Germany
16
Black D, Hahn HK, Kikinis R, Wårdell K, Haj-Hosseini N. Auditory display for fluorescence-guided open brain tumor surgery. Int J Comput Assist Radiol Surg 2017; 13:25-35. [PMID: 28929305] [DOI: 10.1007/s11548-017-1667-5]
Abstract
PURPOSE Protoporphyrin IX (PpIX) fluorescence allows discrimination of tumor and normal brain tissue during neurosurgery. A handheld fluorescence (HHF) probe can be used for spectroscopic measurement of 5-ALA-induced PpIX to enable objective detection compared to visual evaluation of fluorescence. However, current technology requires that the surgeon either views the measured values on a screen or employs an assistant to verbally relay the values. An auditory feedback system was developed and evaluated for communicating measured fluorescence intensity values directly to the surgeon. METHODS The auditory display was programmed to map the values measured by the HHF probe to the playback of tones that represented three fluorescence intensity ranges and one error signal. Ten persons with no previous knowledge of the application took part in a laboratory evaluation. After a brief training period, participants performed measurements on a tray of 96 wells of liquid fluorescence phantom and verbally stated the perceived measurement values for each well. The latency and accuracy of the participants' verbal responses were recorded. The long-term memorization of sound function was evaluated in a second set of 10 participants 2-3 and 7-12 days after training. RESULTS The participants identified the played tone accurately for 98% of measurements after training. The median response time to verbally identify the played tones was 2 pulses. No correlation was found between the latency and accuracy of the responses, and no significant correlation between the responses and the participants' musical proficiency was observed. Responses for the memory test were 100% accurate. CONCLUSION The employed auditory display was shown to be intuitive, easy to learn and remember, fast to recognize, and accurate in providing users with measurements of fluorescence intensity or error signal. The results of this work establish a basis for implementing and further evaluating auditory displays in clinical scenarios involving fluorescence guidance and other areas for which categorized auditory display could be useful.
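The categorical mapping described, three fluorescence-intensity ranges plus an error signal, each tied to a distinct tone, can be sketched as a simple lookup; the thresholds, frequencies, and pulse counts below are placeholders, not the values used with the handheld probe.

```python
def fluorescence_to_tone(intensity, thresholds=(0.2, 0.6)):
    """Map a normalized PpIX fluorescence reading to one of three tone categories,
    or to an error signal when the reading is invalid (assumed thresholds and tones).
    """
    if intensity is None or intensity < 0.0 or intensity > 1.0:
        return {"category": "error", "frequency_hz": 150, "pulses": 4}
    if intensity < thresholds[0]:
        return {"category": "low", "frequency_hz": 330, "pulses": 1}
    if intensity < thresholds[1]:
        return {"category": "medium", "frequency_hz": 660, "pulses": 2}
    return {"category": "high", "frequency_hz": 990, "pulses": 3}

# Example readings, including one invalid measurement that triggers the error signal.
for reading in (0.05, 0.4, 0.9, None):
    print(reading, fluorescence_to_tone(reading))
```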
Affiliation(s)
- David Black
- Medical Image Computing, University of Bremen, Bremen, Germany.
- Jacobs University, Bremen, Germany.
- Fraunhofer MEVIS, Bremen, Germany.
- Horst K Hahn
- Jacobs University, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- Ron Kikinis
- Medical Image Computing, University of Bremen, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Karin Wårdell
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
- Neda Haj-Hosseini
- Department of Biomedical Engineering, Linköping University, Linköping, Sweden
17
Instrument-mounted displays for reducing cognitive load during surgical navigation. Int J Comput Assist Radiol Surg 2017; 12:1599-1605. [PMID: 28233166] [PMCID: PMC5568989] [DOI: 10.1007/s11548-017-1540-6]
Abstract
PURPOSE Surgical navigation systems rely on a monitor placed in the operating room to relay information. Optimal monitor placement can be challenging in crowded rooms, and it is often not possible to place the monitor directly beside the situs. The operator must split attention between the navigation system and the situs. We present an approach for needle-based interventions to provide navigational feedback directly on the instrument and close to the situs by mounting a small display onto the needle. METHODS By mounting a small and lightweight smartwatch display directly onto the instrument, we are able to provide navigational guidance close to the situs and directly in the operator's field of view, thereby reducing the need to switch the focus of view between the situs and the navigation system. We devise a specific variant of the established crosshair metaphor suitable for the very limited screen space. We conduct an empirical user study comparing our approach to using a monitor and a combination of both. RESULTS Results from the empirical user study show significant benefits for cognitive load, user preference, and general usability for the instrument-mounted display, while achieving the same level of performance in terms of time and accuracy compared to using a monitor. CONCLUSION We successfully demonstrate the feasibility of our approach and potential benefits. With ongoing technological advancements, instrument-mounted displays might complement standard monitor setups for surgical navigation in order to lower cognitive demands and for improved usability of such systems.
18
Black D, Hettig J, Luz M, Hansen C, Kikinis R, Hahn H. Auditory feedback to support image-guided medical needle placement. Int J Comput Assist Radiol Surg 2017; 12:1655-1663. [PMID: 28213646] [DOI: 10.1007/s11548-017-1537-1]
Abstract
PURPOSE During medical needle placement using image-guided navigation systems, the clinician must concentrate on a screen. To reduce the clinician's visual reliance on the screen, this work proposes an auditory feedback method as a stand-alone method or to support visual feedback for placing the navigated medical instrument, in this case a needle. METHODS An auditory synthesis model using pitch comparison and stereo panning parameter mapping was developed to augment or replace visual feedback for navigated needle placement. In contrast to existing approaches which augment but still require a visual display, this method allows view-free needle placement. An evaluation with 12 novice participants compared both auditory and combined audiovisual feedback against existing visual methods. RESULTS Using combined audiovisual display, participants show similar task completion times and report similar subjective workload and accuracy while viewing the screen less compared to using the conventional visual method. The auditory feedback leads to higher task completion times and subjective workload compared to both combined and visual feedback. CONCLUSION Audiovisual feedback shows promising results and establishes a basis for applying auditory feedback as a supplement to visual information to other navigated interventions, especially those for which viewing a patient is beneficial or necessary.
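The two mapping ideas named in the abstract, pitch comparison against a reference tone and stereo panning, can be sketched as below, with one tone pair encoding depth error and the left/right balance encoding lateral offset. All numeric ranges, the octave mapping, and the equal-power pan law are assumptions for illustration, not the paper's synthesis model.

```python
import numpy as np

def needle_feedback(depth_error_mm, lateral_error_mm, fs=44100, duration=0.2):
    """Render a stereo cue: pitch offset from a 440 Hz reference encodes depth error,
    left/right balance encodes lateral error (illustrative ranges, not the paper's).
    """
    t = np.arange(int(fs * duration)) / fs
    reference = np.sin(2 * np.pi * 440.0 * t)                        # target pitch to compare against
    offset = np.clip(depth_error_mm, -20.0, 20.0) / 20.0             # +/- 20 mm mapped to +/- 1 octave
    probe = np.sin(2 * np.pi * 440.0 * (2.0 ** offset) * t)          # tone for the current depth
    pan = (np.clip(lateral_error_mm, -20.0, 20.0) / 20.0 + 1.0) / 2  # 0 = full left, 1 = full right
    left = np.cos(pan * np.pi / 2) * (reference + probe) / 2         # equal-power panning
    right = np.sin(pan * np.pi / 2) * (reference + probe) / 2
    return np.stack([left, right], axis=0)

# Example: needle 5 mm too deep and 10 mm off to the left.
stereo = needle_feedback(depth_error_mm=5.0, lateral_error_mm=-10.0)
```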
Affiliation(s)
- David Black
- Jacobs University, Bremen, Germany.
- Medical Image Computing, University of Bremen, Bremen, Germany.
- Fraunhofer MEVIS, Bremen, Germany.
- Julian Hettig
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Maria Luz
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Christian Hansen
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
- Ron Kikinis
- Medical Image Computing, University of Bremen, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
- Surgical Planning Laboratory, Brigham and Women's Hospital, Boston, MA, USA
- Horst Hahn
- Jacobs University, Bremen, Germany
- Fraunhofer MEVIS, Bremen, Germany
19
Howard T, Szewczyk J. Improving Precision in Navigating Laparoscopic Surgery Instruments toward a Planar Target Using Haptic and Visual Feedback. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00037]