1. Kim YC, Park CU, Lee SJ, Jeong WS, Na SW, Choi JW. Application of augmented reality using automatic markerless registration for facial plastic and reconstructive surgery. J Craniomaxillofac Surg 2024; 52:246-251. PMID: 38199944. DOI: 10.1016/j.jcms.2023.12.009.
Abstract
This study aimed to present a novel markerless augmented reality (AR) system that uses automatic registration based on machine-learning algorithms to visualize the facial region and provide an intraoperative guide for facial plastic and reconstructive surgeries. The study prospectively enrolled 20 patients scheduled for facial plastic and reconstructive surgery. The AR system visualizes computed tomography (CT) data in three-dimensional (3D) space by aligning it with point clouds captured by a 3D camera. Point cloud registration consists of two stages: a preliminary registration that gives an initial estimate of the transformation using landmark detection, followed by a precise registration using the Iterative Closest Point (ICP) algorithm. The AR system can display the CT data as two-dimensional slice images or as 3D images. The AR registration error was defined as the cloud-to-cloud distance between the surface data obtained from the CT scan and from the 3D camera, and was calculated for each facial territory (upper, middle, and lower face) while patients were awake and while they were orally intubated. The mean registration errors were 1.490 ± 0.384 mm while patients were awake and 1.948 ± 0.638 mm while they were orally intubated. When stratified by facial territory, there was a significant difference in the lower-face error between the awake (1.502 ± 0.480 mm) and orally intubated (2.325 ± 0.971 mm) conditions (p = 0.006). The markerless AR system can accurately visualize the facial region with a mean overall registration error of 1-2 mm, with a slight increase in the lower face due to errors arising from oral intubation.
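The two-stage registration described above pairs a landmark-based initial estimate with Iterative Closest Point refinement. As a rough illustration of the ICP stage and of a cloud-to-cloud error metric, here is a minimal NumPy sketch; it is not the authors' implementation, and the brute-force nearest-neighbour search and fixed iteration budget are simplifying assumptions:

```python
import numpy as np

def best_fit_transform(P, Q):
    """Least-squares rigid transform (Kabsch/SVD) mapping points P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp_refine(src, dst, iters=30):
    """Iteratively align src to dst; returns moved cloud and mean cloud-to-cloud error."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        nn = dst[((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)]
        R, t = best_fit_transform(cur, nn)
        cur = cur @ R.T + t
    err = np.sqrt(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).min(1)).mean()
    return cur, err
```

A real system would replace the brute-force search with a k-d tree and seed `src` with the landmark-based preliminary transform before refinement.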
Affiliation(s)
- Young Chul Kim
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Seok Joon Lee
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Woo Shik Jeong
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea
- Jong Woo Choi
- Department of Plastic and Reconstructive Surgery, Ulsan University College of Medicine, Asan Medical Center, Seoul, South Korea.
2. Golomingi R, Dobay A, Franckenberg S, Ebert L, Sieberth T. Augmented reality in forensics and forensic medicine - Current status and future prospects. Sci Justice 2023; 63:451-455. PMID: 37453776. DOI: 10.1016/j.scijus.2023.04.009.
Abstract
Forensic investigations require a wide variety of knowledge and expertise from each specialist involved. With increasing digitization and advanced technical possibilities, the traditional use of a computer with a screen for visualization and a mouse and keyboard for interaction has limitations, especially when visualizing content in relation to the real world. Augmented reality (AR) can be used in such instances to support investigators in various tasks at the scene as well as later in the investigation process. In this article, we present current applications of AR in forensics and forensic medicine, the technological basics of AR, and the advantages that AR brings for forensic investigations. Furthermore, we briefly look at other fields of application and at future developments of AR in forensics.
Affiliation(s)
- Raffael Golomingi
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Akos Dobay
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Sabine Franckenberg
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland; Diagnostic and Interventional Radiology, University Hospital Zurich, 8091 Zurich, Switzerland.
- Lars Ebert
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
- Till Sieberth
- 3D Center Zurich, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland.
3. Gsaxner C, Li J, Pepe A, Jin Y, Kleesiek J, Schmalstieg D, Egger J. The HoloLens in medicine: A systematic review and taxonomy. Med Image Anal 2023; 85:102757. PMID: 36706637. DOI: 10.1016/j.media.2023.102757.
Abstract
The HoloLens (Microsoft Corp., Redmond, WA), a head-worn, optically see-through augmented reality (AR) display, is the main driver of the recent boost in medical AR research. In this systematic review, we provide a comprehensive overview of the usage of the first-generation HoloLens within the medical domain, from its release in March 2016 until 2021. We identified 217 relevant publications through a systematic search of the PubMed, Scopus, IEEE Xplore and SpringerLink databases. We propose a new taxonomy covering use case, technical methodology for registration and tracking, data sources, visualization, and validation and evaluation, and analyze the retrieved publications accordingly. We find that the bulk of research focuses on supporting physicians during interventions, where the HoloLens is promising for procedures usually performed without image guidance. However, the consensus is that accuracy and reliability are still too low to replace conventional guidance systems. Medical students are the second most common target group, for whom AR-enhanced medical simulators are emerging as a promising technology. While concerns about human-computer interaction, usability and perception are frequently mentioned, hardly any concepts to overcome these issues have been proposed. Instead, registration and tracking lie at the core of most reviewed publications, yet only a few of them propose innovative concepts in this direction. Finally, we find that the validation of HoloLens applications suffers from a lack of standardized and rigorous evaluation protocols. We hope that this review can advance medical AR research by identifying gaps in the current literature and pave the way for novel, innovative directions and translation into clinical routine.
Affiliation(s)
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria.
- Jianning Li
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Research Center for Connected Healthcare Big Data, Zhejiang Lab, Hangzhou, 311121 Zhejiang, China
- Jens Kleesiek
- Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
- Dieter Schmalstieg
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; BioTechMed, 8010 Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, 8010 Graz, Austria; Institute of AI in Medicine, University Medicine Essen, 45131 Essen, Germany; BioTechMed, 8010 Graz, Austria; Cancer Research Center Cologne Essen, University Medicine Essen, 45147 Essen, Germany
4. Roopa D, Bose S. A Rapid Dual Feature Tracking Method for Medical Equipments Assembly and Disassembly in Markerless Augmented Reality. Journal of Medical Imaging and Health Informatics 2022. DOI: 10.1166/jmihi.2022.3944.
Abstract
Markerless Augmented Reality (MAR) is a technology currently used by medical device assemblers to aid design, assembly, disassembly and maintenance operations. The medical assembler assembles equipment based on the doctor's requirements and also maintains the quality and sanitation of the equipment. The major research challenges in MAR are establishing automatic registration of parts, finding and tracking the orientation of parts, and coping with a lack of depth and visual features. This work proposes a rapid dual feature tracking method, a combination of Visual Simultaneous Localization and Mapping (SLAM) and Matched Pairs Selection (MAPSEL), with the aim of attaining high tracking accuracy. Because depth images are noisy due to dynamically changing environmental factors that affect tracking accuracy, a Graph-Based Joint Bilateral with Sharpening Filter (GRB-JBF with SF) is proposed to obtain a good depth image map. The best feature points are then obtained for matching using Oriented FAST and Rotated BRIEF (ORB) as the feature detector, Fast Retina Keypoint with Histogram of Gradients (FREAK-HoG) as the feature descriptor, and feature matching using Rajsk's distance. Finally, the virtual object is rendered based on 3D affine and projection transformations. Performance is computed in terms of tracking accuracy, tracking time, and rotation error for different distances using MATLAB R2017b. The observed results show that the proposed method attained the lowest position error, about 0.1 cm to 0.3 cm. The rotation error is between 2.4° and 3.1°, with an average of 2.714°. Further, the proposed combination consumes less time per frame compared with other combinations and obtained a higher tracking accuracy of about 95.14% for 180 tracked points. These outcomes show superior performance compared with existing methods.
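Binary descriptors of the ORB/FREAK family are compared by Hamming distance over their packed bits. The sketch below shows only this generic brute-force matching stage with a mutual-consistency cross-check; it is not the paper's SLAM/MAPSEL pipeline, and the `max_dist` threshold is an arbitrary illustrative choice:

```python
import numpy as np

def match_binary(desc_a, desc_b, max_dist=64):
    """Brute-force Hamming matching of packed binary descriptors (ORB/FREAK-style).

    desc_a: (Na, nbytes) uint8, desc_b: (Nb, nbytes) uint8.
    Returns (i, j) pairs that are mutual nearest neighbours within max_dist bits.
    """
    # Hamming distance = number of differing bits between the packed bytes
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(2)          # (Na, Nb) bit counts
    ab = dist.argmin(1)                               # best b for each a
    ba = dist.argmin(0)                               # best a for each b
    return [(i, j) for i, j in enumerate(ab)
            if ba[j] == i and dist[i, j] <= max_dist]
```

The cross-check (keeping only mutual nearest neighbours) is a standard way to suppress ambiguous matches before pose estimation.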
Affiliation(s)
- D. Roopa
- Department of Computer Science and Engineering, Sri Sai Ram Institute of Technology, Anna University, Chennai, 600044, Tamil Nadu, India
- S. Bose
- Department of Computer Science and Engineering, Anna University, Chennai, 600025, Tamil Nadu, India
5. Image segmentation of post-mortem computed tomography data in forensic imaging: Methods and applications. Forensic Imaging 2022. DOI: 10.1016/j.fri.2021.200483.
6. Li J, Deng Z, Shen N, He Z, Feng L, Li Y, Yao J. A fully automatic surgical registration method for percutaneous abdominal puncture surgical navigation. Comput Biol Med 2021; 136:104663. PMID: 34375903. DOI: 10.1016/j.compbiomed.2021.104663.
Abstract
Surgical registration, which maps surgical space onto image space, plays an important role in surgical navigation. Accurate surgical registration can help surgeons efficiently locate surgical instruments. Marker-based surgical registration is highly accurate but complicated and time-consuming. Therefore, a markerless surgical registration method with high precision and high efficiency, requiring no human intervention, is proposed. Firstly, the surgical navigation system, based on a multi-vision system, is calibrated using a specially designed calibration board. When extracting the abdominal point cloud acquired by the structured light vision system, a constraint constructed from the computed tomography (CT) image is used to filter out points in irrelevant areas and improve computational efficiency. The Coherent Point Drift (CPD) algorithm, based on a Gaussian Mixture Model (GMM), is applied to register the abdominal point cloud, which lacks distinctive surface features. To enhance the efficiency of the CPD algorithm, the system calibration result is first used for rough registration of the point cloud, and the appropriate point cloud pretreatment method and its parameters are then determined through experiments. Finally, puncture simulation experiments were carried out on an abdominal phantom. The experimental results show that the proposed surgical registration method has high accuracy and efficiency and has potential clinical application value.
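Unlike ICP's hard nearest-neighbour correspondences, CPD treats the model points as GMM centroids and uses soft, probability-weighted correspondences. The following is a much-simplified rigid sketch of that idea, not the authors' implementation: the single annealed Gaussian width and the omission of CPD's uniform outlier term are simplifying assumptions.

```python
import numpy as np

def cpd_rigid(X, Y, sigma2=0.1, iters=60, anneal=0.9):
    """Rigidly align model points Y (M,3) to scene points X (N,3) using
    Gaussian soft correspondences (simplified CPD, no outlier term)."""
    Yc = Y.copy()
    for _ in range(iters):
        # E-step: P[m, n] = probability that scene point n arose from model point m
        d2 = ((X[None, :, :] - Yc[:, None, :]) ** 2).sum(-1)   # (M, N) squared distances
        P = np.exp(-d2 / (2.0 * sigma2))
        P /= P.sum(axis=0, keepdims=True) + 1e-12
        # M-step: weighted rigid update (Kabsch on probability-weighted centroids)
        Np = P.sum()
        mu_x = (P.sum(axis=0) @ X) / Np
        mu_y = (P.sum(axis=1) @ Yc) / Np
        A = (X - mu_x).T @ P.T @ (Yc - mu_y)
        U, _, Vt = np.linalg.svd(A)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
        R = U @ D @ Vt
        Yc = (Yc - mu_y) @ R.T + mu_x
        sigma2 *= anneal                                         # anneal the GMM width
    return Yc
```

Annealing the width makes the assignment progressively harder, so early iterations behave like a coarse global pull and late iterations like ICP; in the paper's pipeline the calibration-based rough registration plays the role of the coarse stage.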
Affiliation(s)
- Jing Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Zongqian Deng
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Nanyan Shen
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China.
- Zhou He
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Lanyun Feng
- Department of Integrative Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yingjie Li
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
- Jia Yao
- Shanghai Key Laboratory of Intelligent Manufacturing and Robotics, School of Mechatronic Engineering and Automation, Shanghai University, Shanghai, China
7. Pham Dang N, Chandelon K, Barthélémy I, Devoize L, Bartoli A. A proof-of-concept augmented reality system in oral and maxillofacial surgery. J Stomatol Oral Maxillofac Surg 2021; 122:338-342. PMID: 34087435. DOI: 10.1016/j.jormas.2021.05.012.
Abstract
BACKGROUND The advent of digital medical imaging, medical image analysis and computer vision has opened new horizons for surgeons, with the possibility of adding virtual information to the real operative field. For oral and maxillofacial surgeons, overlaying anatomical structures to protect (such as teeth, sinus floors, and inferior and superior alveolar nerves) or to remove (such as cysts, tumours, and impacted teeth) presents a real clinical interest. MATERIAL AND METHODS In this work, we propose a proof-of-concept markerless augmented reality system for oral and maxillofacial surgery, in which a virtual scene is generated preoperatively and mixed with reality to reveal the location of hidden anatomical structures intraoperatively. We devised computer software to process still video frames of the operating field and display them on the operating room screens. RESULTS Firstly, we describe the proposed system, in which virtuality aligns with reality without artificial markers. Analysis of the dental occlusion plane and cusp detection allows us to initialise the alignment process. Secondly, we validate feasibility experimentally on a 3D-printed jaw phantom and an ex vivo pig jaw. Thirdly, we evaluate the potential clinical benefit on a patient. CONCLUSION This proof-of-concept highlights the feasibility and interest of augmented reality for visualising hidden anatomical structures without artificial markers.
Affiliation(s)
- Nathalie Pham Dang
- Department of Oral and Maxillofacial Surgery, NHE - CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France; EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France; UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France.
- Kilian Chandelon
- EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France
- Isabelle Barthélémy
- Department of Oral and Maxillofacial Surgery, NHE - CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France; UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France
- Laurent Devoize
- UMR Inserm/UdA, U1107, Neuro-Dol, Trigeminal Pain and Migraine, Université d'Auvergne, Clermont-Ferrand 63003, France; Department of Odontology, CHU de Clermont-Ferrand, Université d'Auvergne, Clermont-Ferrand 63003, France
- Adrien Bartoli
- EnCoV, Institut Pascal, UMR 6602, CNRS/UBP/SIGMA, 63000 Clermont-Ferrand, France
8. A review of visualization techniques of post-mortem computed tomography data for forensic death investigations. Int J Legal Med 2021; 135:1855-1867. PMID: 33931808. PMCID: PMC8354982. DOI: 10.1007/s00414-021-02581-4.
Abstract
Postmortem computed tomography (PMCT) is a standard image modality used in forensic death investigations. Case- and audience-specific visualizations are vital for identifying relevant findings and communicating them appropriately. Different data types and visualization methods exist in 2D and 3D, and all of these types have specific applications. 2D visualizations are more suited for the radiological assessment of PMCT data because they allow the depiction of subtle details. 3D visualizations are better suited for creating visualizations for medical laypersons, such as state attorneys, because they maintain the anatomical context. Visualizations can be refined by using additional techniques, such as annotation or layering. Specialized methods such as 3D printing and virtual and augmented reality often require data conversion. The resulting data can also be used to combine PMCT data with other 3D data such as crime scene laser scans to create crime scene reconstructions. Knowledge of these techniques is essential for the successful handling of PMCT data in a forensic setting. In this review, we present an overview of current visualization techniques for PMCT.
9. A Systematic Investigation of Models for Color Image Processing in Wound Size Estimation. Computers 2021. DOI: 10.3390/computers10040043.
Abstract
In recent years, research into tracking and assessing wound severity using computerized image processing has increased. With the emergence of mobile devices, powerful functionalities and processing capabilities have provided multiple non-invasive wound evaluation opportunities in both clinical and non-clinical settings. With current imaging technologies, objective and reliable techniques provide qualitative information that can be further processed into quantitative information on the size, structure, and color characteristics of wounds. These efficient image analysis algorithms help determine the injury features and the progress of healing in a short time. This paper presents a systematic investigation of articles that specifically address the measurement of wound size with image processing techniques, promoting the connection between computer science and health. Of the 208 studies identified by searching electronic databases, 20 were included in the review. From the perspective of image processing color models, the most dominant model was the hue, saturation, and value (HSV) color space. We propose that a method for measuring wound area must implement several stages: conversion to grayscale, thresholding, and a segmentation method that measures the wound area as a number of pixels for subsequent conversion to metric units. Regarding devices, mobile technology is shown to have reached a level of reliable accuracy.
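The staged measurement just outlined (grayscale conversion, thresholding, pixel counting, conversion to metric units) can be sketched in a few lines. The luminance weights below are the standard Rec. 601 coefficients, but the threshold value, the darker-than-skin assumption, and the pixel size are illustrative placeholders rather than values from the reviewed studies:

```python
import numpy as np

def wound_area_cm2(rgb, threshold=128, pixel_size_mm=0.2):
    """Estimate wound area: grayscale -> threshold -> pixel count -> metric units.

    rgb: (H, W, 3) float array; pixel_size_mm: edge length of one pixel in mm.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])      # Rec. 601 luminance conversion
    mask = gray < threshold                           # wound assumed darker than skin
    return mask.sum() * (pixel_size_mm ** 2) / 100.0  # mm^2 per pixel -> cm^2
```

In practice the pixel size comes from a calibration reference (a ruler or marker of known size in the frame), and a real pipeline would segment the wound with a proper method (e.g. HSV-based clustering) rather than a fixed global threshold.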
10. Bulliard J, Eggert S, Ampanozi G, Affolter R, Gascho D, Sieberth T, Thali MJ, Ebert LC. Preliminary testing of an augmented reality headset as a DICOM viewer during autopsy. Forensic Imaging 2020. DOI: 10.1016/j.fri.2020.200417.
11. Singh P, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S, Alrubaie A. A novel augmented reality to visualize the hidden organs and internal structure in surgeries. Int J Med Robot 2020; 16:e2055. DOI: 10.1002/rcs.2055.
Affiliation(s)
- P. Singh
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, New South Wales, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
- Ahmad Alrubaie
- Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
12. Bayrak M, Alsadoon A, Prasad P, Venkata HS, Ali RS, Haddad S. A novel rotation invariant and Manhattan metric-based pose refinement: Augmented reality-based oral and maxillofacial surgery. Int J Med Robot 2020; 16:e2077. DOI: 10.1002/rcs.2077.
Affiliation(s)
- Mucahit Bayrak
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- P.W.C. Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, New South Wales, Australia
- Rasha S. Ali
- Department of Computer Techniques Engineering, AL Nisour University College, Baghdad, Iraq
- Sami Haddad
- Department of Oral and Maxillofacial Services, Greater Western Sydney Area Health Services, Mount Druitt, New South Wales, Australia
- Department of Oral and Maxillofacial Services, Central Coast Area Health, Gosford, New South Wales, Australia
13. Parry NMA, Stoll A. The rise of veterinary forensics. Forensic Sci Int 2019; 306:110069. PMID: 31830618. DOI: 10.1016/j.forsciint.2019.110069.
Abstract
Veterinary forensics is rapidly emerging as a distinct branch of veterinary medicine, especially because of increasing awareness of animal cruelty and of the link between acts of cruelty to animals and violence toward humans. Nevertheless, the application of forensic sciences in veterinary cases lags behind its application in medical cases. Although gaps persist in veterinarians' knowledge of forensics and of how to apply this field to medicolegal cases involving animals, continued research and publication in veterinary forensics are rapidly developing the evidence base in this area. Additionally, educational opportunities in veterinary forensics are increasing at both undergraduate and postgraduate levels. Together, these changes will continue to improve veterinarians' ability to investigate cases involving animals. To further strengthen these investigations, veterinarians should also collaborate with the appropriate experts in the different disciplines of forensic science.
Affiliation(s)
- Alexander Stoll
- Veterinary Pathology Centre, School of Veterinary Medicine, University of Surrey, Francis Crick Road, GU2 7YW, United Kingdom
14. Turner OC, Aeffner F, Bangari DS, High W, Knight B, Forest T, Cossic B, Himmel LE, Rudmann DG, Bawa B, Muthuswamy A, Aina OH, Edmondson EF, Saravanan C, Brown DL, Sing T, Sebastian MM. Society of Toxicologic Pathology Digital Pathology and Image Analysis Special Interest Group Article*: Opinion on the Application of Artificial Intelligence and Machine Learning to Digital Toxicologic Pathology. Toxicol Pathol 2019; 48:277-294. DOI: 10.1177/0192623319881401.
Abstract
Toxicologic pathology is transitioning from analog to digital methods. This transition seems inevitable due to a host of ongoing social and medical technological forces. Of these, artificial intelligence (AI) and in particular machine learning (ML) are globally disruptive, rapidly growing sectors of technology whose impact on the long-established field of histopathology is quickly being realized. The development of increasing numbers of algorithms, peering ever deeper into the histopathological space, has demonstrated to the scientific community that AI pathology platforms are now poised to truly impact the future of precision and personalized medicine. However, as with all great technological advances, there are implementation and adoption challenges. This review aims to define common and relevant AI and ML terminology, describe data generation and interpretation, outline current and potential future business cases, discuss validation and regulatory hurdles, and most importantly, propose how overcoming the challenges of this burgeoning technology may shape toxicologic pathology for years to come, enabling pathologists to contribute even more effectively to answering scientific questions and solving global health issues. [Box: see text]
Affiliation(s)
- Oliver C. Turner
- Novartis, Novartis Institutes for Biomedical Research, Preclinical Safety, East Hanover, NJ, USA
- Famke Aeffner
- Amgen Inc, Research, Comparative Biology and Safety Sciences, San Francisco, CA, USA
- Wanda High
- High Preclinical Pathology Consulting, Rochester, NY, USA
- Brian Knight
- Boehringer Ingelheim Pharmaceuticals Incorporated, Nonclinical Drug Safety, Ridgefield, CT, USA
- Brieuc Cossic
- Roche, Pharmaceutical Research and Early Development (pRED), Roche Innovation Center, Basel, Switzerland
- Lauren E. Himmel
- Division of Animal Care, Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Elijah F. Edmondson
- Pathology/Histotechnology Laboratory, Frederick National Laboratory for Cancer Research, NIH, Frederick, MD, USA
- Chandrassegar Saravanan
- Novartis, Novartis Institutes for Biomedical Research, Preclinical Safety, Cambridge, MA, USA
- Tobias Sing
- Novartis, Novartis Institutes for Biomedical Research, NIBR Informatics, Basel, Switzerland
- Manu M. Sebastian
- Department of Epigenetics and Molecular Carcinogenesis, The University of Texas MD Anderson Cancer Center, Smithville, TX, USA
15. Comparison of Two Innovative Strategies Using Augmented Reality for Communication in Aesthetic Dentistry: A Pilot Study. J Healthc Eng 2019; 2019:7019046. PMID: 31073394. PMCID: PMC6470451. DOI: 10.1155/2019/7019046.
Abstract
During dental prosthetic rehabilitation, communication and conception are achieved using rigorous methodologies such as smile design protocols. The aim of the present pilot study was to compare two innovative strategies that use augmented reality for communication in dentistry. These strategies enable the user to instantly try a virtual smile proposition, either by taking a set of pictures from different points of view or by using an iPad as an enhanced mirror. Sixth-year dental students (n=18, women=13, men=5, mean age=23.8) were included in this pilot study and asked to answer a 5-question questionnaire on the user experience using a visual analog scale (VAS). Answers were converted into numerical results ranging from 0 to 100 for statistical analysis. Participants did not report a difference between the two strategies in terms of handling of the device (p=0.45), quality of the reconstruction (p=0.73), or fluidity of the software (p=0.67). Although the participants' experience with the enhanced mirror was more often reported as immersive and more likely to be integrated into daily dental office practice, no significant difference was found (p=0.15 and p=0.07). Further investigations are required to evaluate time and cost savings in daily practice. Software accuracy is also a major point to investigate in order to go further with clinical applications.
16. Accuracy assessment for the co-registration between optical and VIVE head-mounted display tracking. Int J Comput Assist Radiol Surg 2019; 14:1207-1215. PMID: 31069642. DOI: 10.1007/s11548-019-01992-4.
Abstract
PURPOSE We report on the development and accuracy assessment of a hybrid tracking system that integrates optical spatial tracking into a video pass-through head-mounted display. METHODS The hybrid system uses a dual-tracked co-calibration apparatus to provide a co-registration between the origins of an optical dynamic reference frame and the VIVE Pro controller through a point-based registration. This registration provides the location of optically tracked tools with respect to the VIVE controller's origin and thus the VIVE's tracking system. RESULTS The positional accuracy was assessed using a CNC machine to collect a grid of points with 25 samples per location. The positional trueness and precision for the hybrid tracking system were [Formula: see text] and [Formula: see text], respectively. The rotational accuracy was assessed by inserting a stylus tracked by all three systems into a hemispherical phantom with cylindrical openings at known angles and collecting 25 samples per cylinder for each system. The rotational trueness and precision for the hybrid tracking system were [Formula: see text] and [Formula: see text], respectively. The differences in positional and rotational trueness between the OTS and the hybrid tracking system were [Formula: see text] and [Formula: see text], respectively. CONCLUSIONS We developed a hybrid tracking system that allows the pose of optically tracked surgical instruments to be known within a first-person HMD visualization system, achieving submillimeter accuracy. This research validated the positional and rotational accuracy of the hybrid tracking system and, subsequently, of the optical tracking and VIVE tracking systems. This work provides a method to determine the position of an optically tracked surgical tool with surgically acceptable accuracy within a low-cost commercial-grade video pass-through HMD. The hybrid tracking system provides the foundation for the continued development of virtual reality or augmented virtuality surgical navigation systems for training or practicing surgical techniques.
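The point-based registration described in the entry above — estimating the rigid transform between two tracking coordinate frames from corresponding point pairs — is classically solved in closed form with the Kabsch/SVD method. The following is a minimal sketch of that closed-form solution, not the authors' actual implementation; the function name and API are illustrative:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of corresponding points (N >= 3).
    Returns R (3x3 rotation) and t (3,) such that dst ~ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With exact correspondences (as collected by the dual-tracked co-calibration apparatus), this recovers the transform up to measurement noise; the residual after applying (R, t) is what registration-error metrics such as fiducial registration error quantify.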
|
17
|
Effective Application of Mixed Reality Device HoloLens: Simple Manual Alignment of Surgical Field and Holograms. Plast Reconstr Surg 2019; 143:647-651. [PMID: 30688914 DOI: 10.1097/prs.0000000000005215] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Indexed: 11/26/2022]
Abstract
Technology that adds information to the real visual field is defined as augmented reality technology; augmented reality technology that allows the displayed information to be manipulated interactively is called mixed reality technology. Microsoft's HoloLens, a head-mounted mixed reality device released in 2016, can stably display a precise three-dimensional model as a hologram over the real visual field. If the position and orientation of the hologram could be accurately superimposed on the surgical field, navigation-like use in surgery could be expected; however, HoloLens provides no such built-in function. The authors devised a method that aligns the surgical field and holograms precisely, within a short time, using a simple manual operation. The mechanism matches three points on the hologram to corresponding marking points on the body surface. Because any of the three points can be selected as the pivot or axis of the hologram's rotational movement, manual alignment becomes very easy. Alignment between the surgical field and the hologram was good and thus contributed to objective intraoperative judgment. This method should expand the clinical usefulness of the mixed reality device HoloLens.
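The pivot-based rotational movement described in the entry above — rotating the hologram about one of the three selected alignment points — amounts to translating the pivot to the origin, rotating, and translating back. A simplified sketch (using Rodrigues' rotation formula; the function and its signature are illustrative, not the authors' code):

```python
import numpy as np

def rotate_about_pivot(points, pivot, axis, angle):
    """Rotate an (N, 3) point set about an arbitrary pivot point.

    Translates the pivot to the origin, rotates by `angle` (radians)
    about `axis` via Rodrigues' formula, then translates back, so the
    pivot itself stays fixed during the alignment gesture.
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    # skew-symmetric cross-product matrix of the unit axis
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    return (points - pivot) @ R.T + pivot
```

Keeping the selected point fixed is what makes the manual operation tractable: each gesture adjusts one degree of freedom without disturbing the correspondences already established.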
|
18
|
|
19
|
Casas S, Portalés C, Vera L, Riera JV. Virtual and Augmented Reality Mirrors for Mental Health Treatment. Advances in Psychology, Mental Health, and Behavioral Studies 2019. [DOI: 10.4018/978-1-5225-7168-1.ch007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Indexed: 12/11/2022]
Abstract
Virtual and augmented reality are technologies widely used in a variety of areas, including the medical sector. Regular mirrors, meanwhile, have traditionally been used as tools to aid in the treatment of a variety of mental health diseases and disorders. Although it is possible to build virtual and augmented reality experiences based on mirror metaphors, there are very few contributions of this kind in the medical sector. This chapter addresses the benefits that regular mirrors have brought to mental health treatment, reviews the state of the art in mirror-based virtual and augmented reality applications, and highlights the potential benefits these enhanced mirrors could bring to mental health treatment.
|
20
|
Basnet BR, Alsadoon A, Withana C, Deva A, Paul M. A novel noise filtered and occlusion removal: navigational accuracy in augmented reality-based constructive jaw surgery. Oral Maxillofac Surg 2018; 22:385-401. [PMID: 30206745 DOI: 10.1007/s10006-018-0719-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Received: 03/16/2018] [Accepted: 08/28/2018] [Indexed: 06/08/2023]
Abstract
PURPOSE Augmented reality (AR)-based jaw surgery has faced various limitations, such as noise in real-time images, navigational error of the implant and jaw, image overlay error, and occlusion handling, which have limited the implementation of AR in corrective jaw surgery. This research aimed to improve navigational accuracy, through noise and occlusion removal, while positioning an implant relative to the jaw bone to be cut or drilled. METHOD The proposed system consists of a weighting-based de-noising filter and depth mapping-based occlusion removal that eliminates occluding objects such as surgical tools, the surgeon's body parts, and blood. RESULTS Results on maxillary (upper jaw) and mandibular (lower jaw) bone samples show that the proposed method achieves an image overlay error (video accuracy) of 0.23-0.35 mm and a processing rate of 8-12 frames per second, compared with 0.35-0.45 mm and 6-11 frames per second for the best existing system. CONCLUSION The proposed system concentrates on removing noise and occlusion from the real-time video frames, giving surgeons an acceptable accuracy range and processing time for a smooth surgical workflow.
Affiliation(s)
- Bijaya Raj Basnet
  School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Abeer Alsadoon
  School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Chandana Withana
  School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
- Anand Deva
  Faculty of Medicine and Health Sciences, Macquarie University, Sydney, Australia
- Manoranjan Paul
  School of Computing and Mathematics, Charles Sturt University, Sydney Campus, Sydney, Australia
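The depth mapping-based occlusion removal described in the jaw-surgery entry above can be illustrated by a per-pixel depth test: the virtual overlay is drawn only where the rendered model is closer to the camera than the real surface, so tools or hands in front of the anatomy are not painted over. This is a simplified sketch of the general technique under assumed inputs (an RGB-D camera frame and a rendered depth buffer), not the paper's actual filter pipeline:

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth, alpha=0.6):
    """Blend a rendered overlay into a camera frame with depth occlusion.

    real_depth / virt_depth: (H, W) depths in mm; 0 or inf marks "no data".
    The overlay is drawn only where the virtual surface is closer to the
    camera than the real surface, so foreground objects (surgical tools,
    the surgeon's hands) correctly occlude the virtual model.
    """
    virt_valid = np.isfinite(virt_depth) & (virt_depth > 0)
    real_valid = np.isfinite(real_depth) & (real_depth > 0)
    # virtual pixel is visible if there is no real measurement there,
    # or the virtual surface is nearer than the real one
    visible = virt_valid & (~real_valid | (virt_depth < real_depth))
    out = real_rgb.astype(np.float32).copy()
    out[visible] = (1 - alpha) * out[visible] + alpha * virt_rgb[visible].astype(np.float32)
    return out.astype(np.uint8)
```

In practice the depth maps would first be de-noised (the paper uses a weighting-based filter), since raw depth noise flickers the visibility mask at object boundaries.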
|
21
|
Fida B, Cutolo F, di Franco G, Ferrari M, Ferrari V. Augmented reality in open surgery. Updates Surg 2018; 70:389-400. [PMID: 30006832 DOI: 10.1007/s13304-018-0567-8] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Received: 05/30/2018] [Accepted: 07/08/2018] [Indexed: 12/17/2022]
Abstract
Augmented reality (AR) has successfully provided surgeons with extensive visual information about surgical anatomy to assist them throughout a procedure, allowing them to view the surgical field through a superimposed 3D virtual model of anatomical details. Open surgery, however, presents its own challenges. This study provides a comprehensive overview of the available literature on the use of AR in open surgery, in both clinical and simulated settings, with the aim of analyzing current trends and solutions to help developers and end users understand the benefits and shortcomings of these systems. We performed a PubMed search of the available literature, updated to January 2018, using the terms (1) "augmented reality" AND "open surgery", (2) "augmented reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic", (3) "mixed reality" AND "open surgery", (4) "mixed reality" AND "surgery" NOT "laparoscopic" NOT "laparoscope" NOT "robotic". The aspects evaluated were: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and screening abstracts, 13 relevant studies were chosen. In 1 of the 13 studies, in vitro experiments were performed, while the rest were carried out in clinical settings including pancreatic, hepatobiliary, and urogenital surgery. AR in open surgery appears to be a versatile and reliable tool in the operating room, but some technological limitations need to be addressed before it can be implemented in routine practice.
Affiliation(s)
- Benish Fida
  Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Fabrizio Cutolo
  Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Gregorio di Franco
  General Surgery Unit, Department of Surgery, Translational and New Technologies, University of Pisa, Pisa, Italy
- Mauro Ferrari
  EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Vascular Surgery Unit, Cisanello University Hospital AOUP, Pisa, Italy
- Vincenzo Ferrari
  Department of Information Engineering, University of Pisa, Pisa, Italy; EndoCAS Center, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
|
22
|
Bornik A, Urschler M, Schmalstieg D, Bischof H, Krauskopf A, Schwark T, Scheurer E, Yen K. Integrated computer-aided forensic case analysis, presentation, and documentation based on multimodal 3D data. Forensic Sci Int 2018; 287:12-24. [DOI: 10.1016/j.forsciint.2018.03.031] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Received: 10/20/2017] [Revised: 03/13/2018] [Accepted: 03/15/2018] [Indexed: 11/24/2022]
|
23
|
Recent Development of Augmented Reality in Surgery: A Review. J Healthc Eng 2017; 2017:4574172. [PMID: 29065604 PMCID: PMC5585624 DOI: 10.1155/2017/4574172] [Citation(s) in RCA: 156] [Impact Index Per Article: 22.3] [Received: 02/12/2017] [Accepted: 07/03/2017] [Indexed: 12/11/2022]
Abstract
Introduction The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost, and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. Methods We reviewed the available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms “augmented reality” and “surgery.” Results The initial search yielded 808 studies. After removing duplicates and including only journal articles, 417 studies were identified. By reading abstracts, 91 relevant studies were chosen for inclusion; 11 further references were gathered by cross-referencing, for a total of 102 studies included in this review. Conclusions The literature suggests increasing interest among surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality can be implemented in routine practice.
|
24
|
Intraoperative Evaluation of Body Surface Improvement by an Augmented Reality System That a Clinician Can Modify. Plast Reconstr Surg Glob Open 2017; 5:e1432. [PMID: 28894655 PMCID: PMC5585428 DOI: 10.1097/gox.0000000000001432] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Received: 05/24/2017] [Accepted: 06/09/2017] [Indexed: 01/01/2023]
Abstract
BACKGROUND Augmented reality (AR) technology, which combines computer-generated images with a real scene, has recently been reported in the medical field. We devised an AR system for evaluating improvements of the body surface, which is important in plastic surgery. METHODS We constructed an AR system that is easy to modify by combining existing devices and free software. We superimposed three-dimensional (3D) images of the body surface and bone (obtained from VECTRA H1 and CT) onto the actual surgical field using Moverio BT-200 smart glasses and evaluated body-surface improvements in 8 cases. RESULTS In all cases, the 3D image was successfully projected onto the surgical field. Improving the display method made it easier to distinguish the shapes in the 3D image from those in the surgical field, facilitating comparison. In a patient with fibrous dysplasia, the symmetrized body-surface image was useful for confirming improvement of the real body surface. In a patient with a complex facial fracture, the simulated bone image was a useful reference for reduction. In a patient with an osteoma of the forehead, simultaneously displayed images of the body surface and bone made their positional relationship easier to understand. CONCLUSIONS This study confirmed that AR technology is helpful for evaluating the body surface in several clinical applications. Our findings are useful not only for body-surface evaluation but also for the effective use of AR technology in plastic surgery.
|
25
|
MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions. Int J Comput Assist Radiol Surg 2016; 12:351-361. [DOI: 10.1007/s11548-016-1488-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Received: 05/17/2016] [Accepted: 09/08/2016] [Indexed: 11/26/2022]
|
26
|
Xiao D, Luo H, Jia F, Zhang Y, Li Y, Guo X, Cai W, Fang C, Fan Y, Zheng H, Hu Q. A Kinect™ camera based navigation system for percutaneous abdominal puncture. Phys Med Biol 2016; 61:5687-5705. [DOI: 10.1088/0031-9155/61/15/5687] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Indexed: 11/11/2022]
|
27
|
Loy Rodas N, Barrera F, Padoy N. See It With Your Own Eyes: Markerless Mobile Augmented Reality for Radiation Awareness in the Hybrid Room. IEEE Trans Biomed Eng 2016; 64:429-440. [PMID: 27164565 DOI: 10.1109/tbme.2016.2560761] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Indexed: 11/09/2022]
Abstract
GOAL We present an approach for providing awareness of the harmful ionizing radiation generated during X-ray-guided minimally invasive procedures. METHODS A hand-held screen displays radiation-safety information directly in the user's view in a mobile augmented reality (AR) manner. Instead of using markers, we propose a method to track the observer's viewpoint that relies on multiple RGB-D sensors and combines equipment detection for tracking initialization with a KinectFusion-like approach for frame-to-frame tracking. Two of the sensors are ceiling-mounted and a third is attached to the hand-held screen. The ceiling cameras maintain an updated model of the room's layout, which is used to exploit context information and improve the relocalization procedure. RESULTS The system is evaluated on a multicamera dataset generated inside an operating room (OR) and containing ground-truth poses of the AR display. This dataset includes a wide variety of sequences with different scene configurations, occlusions, motion in the scene, and abrupt viewpoint changes. Qualitative results illustrating the system's different AR visualization modes for radiation awareness are also presented. CONCLUSION Our approach gives the user a large AR visualization area and permits recovery from tracking failures caused by large motion or changes in the scene simply by looking at a piece of equipment. SIGNIFICANCE The system enables the user to see, through his or her own eyes, the 3D propagation of radiation, the medical staff's exposure, and/or the doses deposited on the patient's surface.
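Frame-to-frame tracking of the kind mentioned above, and the Iterative Closest Point refinement used in the study at the head of this page, alternate two steps: match each point in the current cloud to its nearest neighbor in the reference cloud, then solve the closed-form rigid fit on those matches. A toy point-to-point ICP sketch (brute-force nearest neighbors, illustrative only; real systems use KD-trees or projective association and robust outlier rejection):

```python
import numpy as np

def icp(src, dst, iters=20):
    """Point-to-point ICP: refine a rigid alignment of src onto dst.

    Each iteration matches every source point to its nearest destination
    point (brute force, fine for small clouds) and solves the closed-form
    Kabsch problem on the matches. Returns the aligned copy of src and
    the final RMS nearest-neighbor residual.
    """
    cur = src.copy()
    for _ in range(iters):
        # data association: nearest destination point for each source point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid fit (Kabsch/SVD) on the matched pairs
        cc = cur - cur.mean(axis=0)
        mc = matched - matched.mean(axis=0)
        U, _, Vt = np.linalg.svd(cc.T @ mc)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = matched.mean(axis=0) - R @ cur.mean(axis=0)
        cur = cur @ R.T + t
    # final residual against nearest neighbors
    d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    rms = float(np.sqrt(d2.min(axis=1).mean()))
    return cur, rms
```

Because nearest-neighbor matching is only locally correct, ICP needs a reasonable initial estimate; this is exactly why the head study precedes it with a landmark-based preliminary registration, and why the radiation-awareness system above uses equipment detection for tracking initialization.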
|
28
|
Towards markerless navigation for percutaneous needle insertions. Int J Comput Assist Radiol Surg 2015; 11:107-117. [PMID: 26018847 DOI: 10.1007/s11548-015-1156-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Received: 11/12/2014] [Accepted: 01/26/2015] [Indexed: 10/23/2022]
Abstract
PURPOSE Percutaneous needle insertions are increasingly used for the diagnosis and treatment of abdominal lesions. The challenging part of computed tomography (CT)-guided punctures is transferring the insertion trajectory planned in the CT image to the patient; conventionally, this often requires several needle repositionings and control CT scans. To address this issue, several navigation systems for percutaneous needle insertions have been presented, yet none has so far become widely accepted in clinical routine: their benefit for the patient has not outweighed the additional costs and the increased complexity of bulky tracking systems and specialized markers for registration and tracking. METHODS We present the first markerless and trackerless navigation concept for real-time patient localization and instrument guidance. It has been designed specifically to integrate smoothly into the clinical workflow and requires neither markers nor an external tracking system. The main idea is the use of a range imaging device that allows contactless and radiation-free acquisition of both range and color information, used for patient localization and instrument guidance. RESULTS A first feasibility study in phantom and porcine models yielded median targeting accuracies of 6.9 and 19.4 mm, respectively. CONCLUSIONS Although system performance must still improve for clinical use, expected advances in camera technology, together with consideration of respiratory motion and automation of the individual steps, will make this approach an interesting alternative for guiding percutaneous needle insertions.
|