1
Schmidt A, Morales-Álvarez P, Cooper LA, Newberg LA, Enquobahrie A, Molina R, Katsaggelos AK. Focused active learning for histopathological image classification. Med Image Anal 2024;95:103162. [PMID: 38593644] [DOI: 10.1016/j.media.2024.103162]
Abstract
Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution (OoD) detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world Panda dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts which harm the performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. For both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled Panda data.
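The kind of acquisition score the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function name, the BALD-style decomposition of uncertainty over Monte Carlo forward passes, and the linear combination of the three terms are all assumptions.

```python
import numpy as np

def focal_style_acquisition(mc_probs, class_weights, ood_scores, lam=1.0):
    """Score unlabeled samples for acquisition.

    mc_probs:      (T, N, C) softmax outputs from T stochastic forward passes
                   of a Bayesian (e.g. MC-dropout) network.
    class_weights: (C,) weights, e.g. inverse class frequency, to favor rare classes.
    ood_scores:    (N,) out-of-distribution scores (high = likely artifact).
    """
    mean_p = mc_probs.mean(axis=0)                                       # (N, C) predictive mean
    total = -(mean_p * np.log(mean_p + 1e-12)).sum(-1)                   # predictive entropy
    aleatoric = -(mc_probs * np.log(mc_probs + 1e-12)).sum(-1).mean(0)   # expected entropy
    epistemic = total - aleatoric                                        # mutual information (BALD)
    w = class_weights[mean_p.argmax(-1)]                                 # upweight rare predicted classes
    # High score = informative (epistemic), unambiguous (low aleatoric), in-distribution (low OoD).
    return w * epistemic - aleatoric - lam * ood_scores
```

Samples with high weighted epistemic uncertainty rise to the top of the acquisition ranking, while ambiguous (high aleatoric) and artifact-like (high OoD) images are pushed down.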
Affiliation(s)
- Arne Schmidt
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, 18010, Spain.
- Pablo Morales-Álvarez
- Department of Statistics and Operations Research, University of Granada, Granada, 18010, Spain.
- Lee AD Cooper
- Department of Pathology, Northwestern University, Chicago, IL, 60611, USA.
- Rafael Molina
- Department of Computer Science and Artificial Intelligence, University of Granada, Granada, 18010, Spain.
- Aggelos K Katsaggelos
- Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, 60208, USA.
2
Gerber S, Niethammer M, Ebrahim E, Piven J, Dager SR, Styner M, Aylward S, Enquobahrie A. Optimal transport features for morphometric population analysis. Med Image Anal 2023;84:102696. [PMID: 36495600] [PMCID: PMC9829456] [DOI: 10.1016/j.media.2022.102696]
Abstract
Brain pathologies often manifest as partial or complete loss of tissue. The goal of many neuroimaging studies is to capture the location and amount of tissue changes with respect to a clinical variable of interest, such as disease progression. Morphometric analysis approaches capture local differences in the distribution of tissue or other quantities of interest in relation to a clinical variable. We propose to augment morphometric analysis with an additional feature extraction step based on unbalanced optimal transport. The optimal transport feature extraction step increases statistical power for pathologies that cause spatially dispersed tissue loss, minimizes sensitivity to shifts due to spatial misalignment or differences in brain topology, and separates changes due to volume differences from changes due to tissue location. We demonstrate the proposed optimal transport feature extraction step in the context of a volumetric morphometric analysis of the OASIS-1 study for Alzheimer's disease. The results demonstrate that the proposed approach can identify tissue changes and differences that are not otherwise measurable.
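The feature-extraction idea can be illustrated with a minimal entropic unbalanced-transport solver over discretized tissue mass. This is a toy sketch, not the paper's implementation: the update rule follows the standard KL-penalized Sinkhorn iteration, and all names and parameter choices are assumptions.

```python
import numpy as np

def sinkhorn_unbalanced(a, b, M, reg=0.1, reg_m=10.0, n_iter=500):
    """Entropic unbalanced OT plan between mass vectors a and b with cost matrix M.

    reg:   entropic regularization strength.
    reg_m: KL penalty on marginal deviations (large -> nearly balanced OT).
    """
    K = np.exp(-M / reg)
    fi = reg_m / (reg_m + reg)          # exponent induced by the KL marginal penalty
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    return u[:, None] * K * v[None, :]  # transport plan

def transport_features(a, b, M, **kw):
    """Separate differences into transported mass vs. created/destroyed mass."""
    plan = sinkhorn_unbalanced(a, b, M, **kw)
    destroyed = a - plan.sum(axis=1)    # mass in a not matched in b (e.g. tissue loss)
    created = b - plan.sum(axis=0)      # mass in b not explained by a
    return plan, created, destroyed
```

The created/destroyed components capture volume change, while the plan itself captures mass that merely moved, which is the separation of location change from volume change that the abstract describes.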
Affiliation(s)
- Joseph Piven
- University of North Carolina, Chapel Hill, NC, USA
3
Tu L, Porras AR, Enquobahrie A, Buck GC, Tsering D, Horvath S, Keating R, Oh AK, Rogers GF, Linguraru MG. Automated Measurement of Intracranial Volume Using Three-Dimensional Photography. Plast Reconstr Surg 2020;146:314e-323e. [PMID: 32459727] [DOI: 10.1097/prs.0000000000007066]
Abstract
BACKGROUND Current methods to analyze three-dimensional photography do not quantify intracranial volume, an important metric of development. This study presents the first noninvasive, radiation-free, accurate, and reproducible method to quantify intracranial volume from three-dimensional photography. METHODS In this retrospective study, cranial bones and head skin were automatically segmented from computed tomographic images of 575 subjects without cranial abnormality (average age, 5 ± 5 years; range, 0 to 16 years). The intracranial volume and the head volume were measured at the cranial vault region, and their relation was modeled by polynomial regression, also accounting for age and sex. Then, the regression model was used to estimate the intracranial volume of 30 independent pediatric patients from their head volume measured using three-dimensional photography. Evaluation was performed by comparing the estimated intracranial volume with the true intracranial volume of these patients computed from paired computed tomographic images; two growth models were used to compensate for the time gap between computed tomographic and three-dimensional photography. RESULTS The regression model estimated the intracranial volume of the normative population from the head volume calculated from computed tomographic images with an average error of 3.81 ± 3.15 percent (p = 0.93) and a correlation (R) of 0.96. The authors obtained an average error of 4.07 ± 3.01 percent (p = 0.57) in estimating the intracranial volume of the patients from three-dimensional photography using the regression model. CONCLUSION Three-dimensional photography with image analysis provides measurement of intracranial volume with clinically acceptable accuracy, thus offering a noninvasive, precise, and reproducible method to evaluate normal and abnormal brain development in young children. CLINICAL QUESTION/LEVEL OF EVIDENCE Diagnostic, V.
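The regression step can be sketched in a simplified form, ignoring the age and sex covariates that the study's full model includes; the function names and the quadratic degree are assumptions for illustration.

```python
import numpy as np

def fit_icv_model(head_vol, icv, degree=2):
    """Fit a polynomial mapping head volume (cranial vault region) to intracranial volume,
    using paired measurements from a normative CT population."""
    return np.polyfit(head_vol, icv, degree)

def estimate_icv(coeffs, head_vol):
    """Apply the fitted model to head volumes measured from 3D photography."""
    return np.polyval(coeffs, head_vol)
```

Once fitted on the normative CT data, the model needs only the radiation-free head-volume measurement to produce an ICV estimate for a new patient.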
Affiliation(s)
- Liyun Tu
- Antonio R Porras
- Andinet Enquobahrie
- Graham C Buck, BS
- Deki Tsering, MS
- Samantha Horvath
- Robert Keating
- Albert K Oh
- Gary F Rogers
- Marius George Linguraru
- All authors: From the Sheikh Zayed Institute for Pediatric Surgical Innovation, the Division of Neurosurgery, and the Division of Plastic and Reconstructive Surgery, Children's National Hospital; Kitware, Inc.; and the Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University
4
Enquobahrie A, Horvath S, Arikatla S, Rosenberg A, Cleary K, Sharma K. Development and face validation of ultrasound-guided renal biopsy virtual trainer. Healthc Technol Lett 2019;6:210-213. [PMID: 32038859] [PMCID: PMC6952253] [DOI: 10.1049/htl.2019.0081]
Abstract
The overall prevalence of chronic kidney disease in the general population is approximately 14%, with more than 661,000 Americans having kidney failure. Ultrasound (US)-guided renal biopsy is a critically important tool in the evaluation and management of renal pathologies. This Letter presents KBVTrainer, a virtual simulator that the authors developed to train clinicians to improve procedural skill competence in US-guided renal biopsy. The simulator was built using low-cost hardware components and open source software libraries. The authors conducted a face validation study with five experts who were either adult/pediatric nephrologists or interventional/diagnostic radiologists. The trainer was rated very highly (>4.4) for the usefulness of the real US images (highest at 4.8), potential usefulness of the trainer in training for needle visualization, tracking, steadiness and hand-eye coordination, and overall promise of the trainer to be useful for training US-guided needle biopsies. The lowest score of 2.4 was received for the look and feel of the US probe and needle compared to clinical practice. The force feedback received a moderate score of 3.0. The clinical experts provided abundant verbal and written subjective feedback and were highly enthusiastic about using the trainer as a valuable tool for future trainees.
Affiliation(s)
- Sam Horvath
- Medical Computing, Kitware Inc, Carrboro, NC, USA
- Avi Rosenberg
- School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Kevin Cleary
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Karun Sharma
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
5
Horvath S, Arikatla S, Cleary K, Sharma K, Rosenberg A, Enquobahrie A. Towards an Advanced Virtual Ultrasound-guided Renal Biopsy Trainer. Proc SPIE Int Soc Opt Eng 2019;10951. [PMID: 31474785] [DOI: 10.1117/12.2512871]
Abstract
Ultrasound (US)-guided renal biopsy is a critically important tool in the evaluation and management of non-malignant renal pathologies with diagnostic and prognostic significance. It requires a good biopsy technique and skill to safely and consistently obtain high yield biopsy samples for tissue analysis. This project aims to develop a virtual trainer to help clinicians to improve procedural skill competence in real-time ultrasound-guided renal biopsy. This paper presents a cost-effective, high-fidelity trainer built using low-cost hardware components and open source visualization and interactive simulation libraries: interactive medical simulation toolkit (iMSTK) and 3D Slicer. We used a physical mannequin to simulate the tactile feedback that trainees experience while scanning a real patient and to provide trainees with spatial awareness of the US scanning plane with respect to the patient's anatomy. The ultrasound probe and biopsy needle were modeled using commonly used clinical tools and were instrumented to communicate with the simulator. 3D Slicer was used to visualize an image sliced from a pre-acquired 3-D ultrasound volume based on the location of the probe, with a realistic needle rendering. The simulation engine in iMSTK modeled the interaction between the needle and the virtual tissue to generate visual deformations on the tissue and tactile forces on the needle which are transmitted to the needle that the user holds. Initial testing has shown promising results with respect to quality of simulated images and system responsiveness. Further evaluation by clinicians is planned for the next stage.
6
Tu L, Porras AR, Oh A, Lepore N, Buck GC, Tsering D, Enquobahrie A, Keating R, Rogers GF, Linguraru MG. Quantitative evaluation of local head malformations from three-dimensional photography: application to craniosynostosis. Proc SPIE Int Soc Opt Eng 2019;10950:1095035. [PMID: 31379402] [PMCID: PMC6677125] [DOI: 10.1117/12.2512272]
Abstract
The evaluation of head malformations plays an essential role in the early diagnosis, the decision to perform surgery and the assessment of the surgical outcome of patients with craniosynostosis. Clinicians rely on two metrics to evaluate the head shape: head circumference (HC) and cephalic index (CI). However, these metrics show high inter-observer variability and do not take into account the location of the head abnormalities. In this study, we present an automated framework to objectively quantify the head malformations, HC, and CI from three-dimensional (3D) photography, a radiation-free, fast and non-invasive imaging modality. Our method automatically extracts the head shape using a set of landmarks identified by registering the head surface of a patient to a reference template in which the position of the landmarks is known. Then, we quantify head malformations as the local distances between the patient's head and its closest normal from a normative statistical head shape multi-atlas. We calculated cranial malformations, HC, and CI for 28 patients with craniosynostosis, and we compared them with those computed from the normative population. Malformation differences between the two populations were statistically significant (p<0.05) at the head regions with abnormal development due to suture fusion. We also trained a support vector machine classifier using the malformations calculated and we obtained an improved accuracy of 91.03% in the detection of craniosynostosis, compared to 78.21% obtained with HC or CI. This method has the potential to assist in the longitudinal evaluation of cranial malformations after surgical treatment of craniosynostosis.
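The malformation metric (per-landmark distance from a patient's head surface to its closest shape in the normative multi-atlas) can be sketched as follows, assuming landmark correspondence has already been established by the registration step; the names are illustrative, not from the paper's code.

```python
import numpy as np

def malformation_scores(patient_pts, atlas_shapes):
    """patient_pts:  (N, 3) landmark positions on the patient's head surface.
    atlas_shapes:    (K, N, 3) normative shapes in point-to-point correspondence.
    Returns per-landmark distances to the closest (on average) atlas shape."""
    d = np.linalg.norm(atlas_shapes - patient_pts[None], axis=-1)  # (K, N) distances
    closest = d.mean(axis=1).argmin()                              # nearest normative shape
    return d[closest]
```

These per-landmark distances would then feed a classifier such as the support vector machine reported in the abstract.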
Affiliation(s)
- Liyun Tu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Antonio R. Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Albert Oh
- Division of Plastic and Reconstructive Surgery, Children’s National Health System, Washington DC, USA
- Natasha Lepore
- CIBORG Lab, Children’s Hospital Los Angeles and University of Southern California, Los Angeles, CA, USA
- Graham C. Buck
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Deki Tsering
- Division of Neurosurgery, Children’s National Health System, Washington DC, USA
- Robert Keating
- Division of Neurosurgery, Children’s National Health System, Washington DC, USA
- Gary F. Rogers
- Division of Plastic and Reconstructive Surgery, Children’s National Health System, Washington DC, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington DC, USA
7
Fu Y, Cavuoto L, Qi D, Panneerselvam K, Yang G, Arikatla VS, Enquobahrie A, De S, Schwaitzberg SD. Correction to: Validation of a virtual intracorporeal suturing simulator. Surg Endosc 2018;33:2473-2474. [PMID: 30519884] [DOI: 10.1007/s00464-018-06615-8]
Abstract
The surname of Sreekanth Arikatla incorrectly appeared as Sreekanth Artikala.
Affiliation(s)
- Yaoyu Fu
- Department of Industrial and Systems Engineering, University at Buffalo, Buffalo, NY, 14260, USA
- Lora Cavuoto
- Department of Industrial and Systems Engineering, University at Buffalo, Buffalo, NY, 14260, USA
- Di Qi
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA
- Karthikeyan Panneerselvam
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA
- Gene Yang
- Department of Surgery, University at Buffalo, Buffalo, NY, USA
- Suvranu De
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA
8
Clipp RB, Vicory J, Horvath S, Mitran S, Kimbell JS, Rhee JS, Enquobahrie A. An Interactive, Patient-Specific Virtual Surgical Planning System for Upper Airway Obstruction Treatments. Annu Int Conf IEEE Eng Med Biol Soc 2018;2018:5802-5805. [PMID: 30441654] [DOI: 10.1109/embc.2018.8513672]
Abstract
Upper airway obstructions leading to difficulty breathing are significant problems that often require surgery to improve patient quality of life. However, these surgeries often have poor outcomes with little symptom improvement. This paper outlines the design of an interactive, patient-specific virtual surgical planning system that uses patient CT scans to generate three-dimensional representations of the airways and incorporates computational fluid dynamics (CFD) as a part of the surgical planning process. Individualized virtual surgeries can be performed by editing these models, which are then analyzed using CFD to compare pre- and post-surgery flow characteristics to assess patient symptom improvement. The prototype system shows significant promise: it is intuitive and interactive, with a fast flow solver that can provide near real-time feedback to the clinician.
9
Abstract
Multicenter clinical trials that use positron emission tomography (PET) imaging frequently rely on stable bias in imaging biomarkers to assess drug effectiveness. Many well-documented factors cause variability in PET intensity values. Two of the largest scanner-dependent errors are scanner calibration and reconstructed image resolution variations. For clinical trials, an increase in measurement error significantly increases the number of patient scans needed. We aim to provide a robust quality assurance system using portable PET/computed tomography “pocket” phantoms and automated image analysis algorithms with the goal of reducing PET measurement variability. A set of the “pocket” phantoms was scanned with patients, affixed to the underside of a patient bed. Our software analyzed the obtained images and estimated the image parameters. The analysis consisted of 2 steps, automated phantom detection and estimation of PET image resolution and global bias. Performance of the algorithm was tested under variations in image bias, resolution, noise, and errors in the expected sphere size. A web-based application was implemented to deploy the image analysis pipeline in a cloud-based infrastructure to support multicenter data acquisition, under a Software-as-a-Service (SaaS) model. The automated detection algorithm localized the phantom reliably. Simulation results showed stable behavior when image properties and input parameters were varied. The PET “pocket” phantom has the potential to reduce and/or check for standardized uptake value measurement errors.
Collapse
Affiliation(s)
- Darrin W Byrd
- Department of Radiology, University of Washington, Seattle, WA
- Paul E Kinahan
- Department of Radiology, University of Washington, Seattle, WA
10
Arikatla V, Horvath S, Fu Y, Cavuoto L, De S, Schwaitzberg S, Enquobahrie A. Development and face validation of a virtual camera navigation task trainer. Surg Endosc 2018;33:1927-1937. [PMID: 30324462] [DOI: 10.1007/s00464-018-6476-6]
Abstract
BACKGROUND The fundamentals of laparoscopic surgery (FLS) trainer box, which is now established as a standard for evaluating minimally invasive surgical skills, consists of five tasks: peg transfer, pattern cutting, ligation, and intra- and extracorporeal suturing. Virtual simulators of these tasks have been developed and validated as part of the Virtual Basic Laparoscopic Skill Trainer (VBLaST) (Arikatla et al. in Int J Med Robot Comput Assist Surg 10:344-355, 2014; Zhang et al. in Surg Endosc 27(10):3603-3615, 2013; Sankaranarayanan et al. in J Laparoendosc Adv Surg Tech 20(2):153-157, 2010; Qi et al. J Biomed Inform 75:48-62, 2017). The virtual task trainers have many advantages including automatic real-time objective scoring, reduced costs, and eliminating human proctors. In this paper, we extend VBLaST by adding two camera navigation system tasks: (a) pattern matching and (b) path tracing. METHODS A comprehensive camera navigation simulator with two virtual tasks, a simplified and cheaper hardware interface (compared to the prior version of VBLaST), a graphical user interface, and automated metrics has been designed and developed. Face validity of the system was tested with medical students and residents from the University at Buffalo's medical school. RESULTS The subjects rated the simulator highly in all aspects including its usefulness in training to center the target and to teach sizing skills. The quality and usefulness of the force feedback scored the lowest at 2.62.
Affiliation(s)
- Venkata Arikatla
- Medical Computing Team, Kitware Inc., 101 E Weaver Street, Suite G4, Carrboro, NC, 27510, USA
- Sam Horvath
- Medical Computing Team, Kitware Inc., 101 E Weaver Street, Suite G4, Carrboro, NC, 27510, USA
- Yaoyu Fu
- School of Engineering and Applied Sciences, University at Buffalo, Buffalo, NY, USA
- Lora Cavuoto
- School of Engineering and Applied Sciences, University at Buffalo, Buffalo, NY, USA
- Suvranu De
- Center for Modeling, Simulation and Imaging in Medicine, RPI, Troy, NY, USA
- Andinet Enquobahrie
- Medical Computing Team, Kitware Inc., 101 E Weaver Street, Suite G4, Carrboro, NC, 27510, USA
11
Arikatla VS, Tyagi M, Enquobahrie A, Nguyen T, Blakey GH, White R, Paniagua B. High Fidelity Virtual Reality Orthognathic Surgery Simulator. Proc SPIE Int Soc Opt Eng 2018;10576. [PMID: 29977103] [DOI: 10.1117/12.2293690]
Abstract
Surgical simulators are powerful tools that assist in providing advanced training for complex craniofacial surgical procedures and objective skills assessment such as the ones needed to perform Bilateral Sagittal Split Osteotomy (BSSO). One of the crucial steps in simulating BSSO is accurately cutting the mandible in a specific area of the jaw, where surgeons rely on high fidelity visual and haptic cues. In this paper, we present methods to simulate drilling and cutting of the bone using the burr and the motorized oscillating saw respectively. Our method allows low computational cost bone drilling or cutting while providing high fidelity haptic feedback that is suitable for real-time virtual surgery simulation.
Affiliation(s)
- Tung Nguyen
- Department of Orthodontics, School of Dentistry, UNC, Chapel Hill, NC
- George H Blakey
- Department of Oral and Maxillofacial Surgery, School of Dentistry, UNC, Chapel Hill, NC
- Ray White
- Department of Oral and Maxillofacial Surgery, School of Dentistry, UNC, Chapel Hill, NC
12
Porras AR, Paniagua B, Ensel S, Keating R, Rogers GF, Enquobahrie A, Linguraru MG. Locally Affine Diffeomorphic Surface Registration and Its Application to Surgical Planning of Fronto-Orbital Advancement. IEEE Trans Med Imaging 2018;37:1690-1700. [PMID: 29969419] [PMCID: PMC6085886] [DOI: 10.1109/tmi.2018.2816402]
Abstract
Metopic craniosynostosis is a condition caused by the premature fusion of the metopic cranial suture. If untreated, it can result in brain growth restriction, increased intra-cranial pressure, visual impairment, and cognitive delay. Fronto-orbital advancement is the widely accepted surgical approach to correct cranial shape abnormalities in patients with metopic craniosynostosis, but the outcome of the surgery remains very dependent on the expertise of the surgeon because of the lack of objective and personalized cranial shape metrics to target during the intervention. We propose in this paper a locally affine diffeomorphic surface registration framework to create an optimal interventional plan personalized to each patient. Our method calculates the optimal surgical plan by minimizing cranial shape abnormalities, which are quantified using objective metrics based on a normative model of cranial shapes built from 198 healthy cases. It is guided by clinical osteotomy templates for fronto-orbital advancement, and it automatically calculates how much and in which direction each bone piece needs to be translated, rotated, and/or bent. Our locally affine framework models separately the transformation of each bone piece while ensuring the consistency of the global transformation. We used our method to calculate the optimal surgical plan for 23 patients, obtaining a significant reduction of malformations (p < 0.001) between 40.38% and 50.85% in the simulated outcome of the surgery using different osteotomy templates. In addition, malformation values were within healthy ranges (p > 0.01).
13
Horvath S, Paniagua B, Andruejol J, Porras AR, Linguraru MG, Enquobahrie A. Osteotomy Planner: An open-source tool for osteotomy simulation. Proc SPIE Int Soc Opt Eng 2018;10576:105762R. [PMID: 36246427] [PMCID: PMC9563370] [DOI: 10.1117/12.2293649]
Abstract
There has been a recent emphasis in surgical science on supplementing surgical training outside of the Operating Room (OR). Combining simulation training with the current surgical apprenticeship enhances surgical skills in the OR, without increasing the time spent in the OR practicing. Computer-assisted surgical (CAS) planning consists of performing operative techniques virtually using three-dimensional (3D) computer-based models reconstructed from 3D cross-sectional imaging. The purpose of this paper is to present a CAS system to rehearse, visualize and quantify osteotomies, and demonstrate its usefulness in two different osteotomy surgical procedures, cranial vault reconstruction and femoral osteotomy. We found that the system could sufficiently simulate these two procedures. Our system takes advantage of the high-quality visualizations possible with 3D Slicer, as well as implements new infrastructure to allow for direct 3D interaction (cutting and positioning) with the bone models. We see the proposed osteotomy planner tool evolving towards incorporating different cutting templates to help depict several surgical scenarios, help 'trained' surgeons maintain operating skills, help rehearse a surgical sequence before heading to the OR, or even to help surgical planning for specific patient cases.
Affiliation(s)
- Antonio R. Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington, DC, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington, DC, USA
14
Tu L, Porras AR, Oh A, Lepore N, Mastromanolis M, Tsering D, Paniagua B, Enquobahrie A, Keating R, Rogers GF, Linguraru MG. Radiation-free quantification of head malformations in craniosynostosis patients from 3D photography. Proc SPIE Int Soc Opt Eng 2018;10575:105751U. [PMID: 31379400] [PMCID: PMC6679651] [DOI: 10.1117/12.2295374]
Abstract
The evaluation of cranial malformations plays an essential role both in the early diagnosis and in the decision to perform surgical treatment for craniosynostosis. In clinical practice, both cranial shape and suture fusion are evaluated using CT images, which involve the use of harmful radiation on children. Three-dimensional (3D) photography offers non-invasive, radiation-free, and anesthetic-free evaluation of craniofacial morphology. The aim of this study is to develop an automated framework to objectively quantify cranial malformations in patients with craniosynostosis from 3D photography. We propose a new method that automatically extracts the cranial shape by identifying a set of landmarks from a 3D photograph. Specifically, it registers the 3D photograph of a patient to a reference template in which the position of the landmarks is known. Then, the method finds the closest cranial shape to that of the patient from a normative statistical shape multi-atlas built from 3D photographs of healthy cases, and uses it to objectively quantify cranial malformations. We calculated the cranial malformations of 17 craniosynostosis patients and compared them with those of the normative population used to build the multi-atlas. The average malformation of the craniosynostosis cases was 2.68 ± 0.75 mm, significantly higher (p<0.001) than the average of 1.70 ± 0.41 mm obtained from the normative cases. Our approach can support the quantitative assessment of surgical procedures for cranial vault reconstruction without exposing pediatric patients to harmful radiation.
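The core of the malformation measure described above, finding the closest shape in the normative multi-atlas and scoring point-wise deviation from it, can be sketched as follows. This is a minimal NumPy illustration: `closest_atlas_malformation` and the toy landmark arrays are hypothetical, and the real method first registers the 3D photograph to a template to obtain corresponding landmarks.

```python
import numpy as np

def closest_atlas_malformation(patient_pts, atlas_shapes):
    """Malformation score: mean point-wise distance from the patient's
    cranial points to the closest shape in a normative multi-atlas.
    patient_pts: (N, 3) array; atlas_shapes: list of (N, 3) arrays with
    point correspondence assumed to come from a prior registration."""
    dists = [np.linalg.norm(patient_pts - s, axis=1).mean() for s in atlas_shapes]
    best = int(np.argmin(dists))
    return best, dists[best]

# toy example: two atlas shapes; the patient is a perturbed copy of the first
atlas = [np.zeros((4, 3)), np.ones((4, 3))]
patient = np.zeros((4, 3)) + 0.1
idx, score = closest_atlas_malformation(patient, atlas)
```

With this toy input the first atlas shape is selected and the score is the uniform 0.1 offset along each axis, i.e. 0.1·√3 mm per landmark.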
Affiliation(s)
- Liyun Tu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Antonio R. Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Albert Oh
- Division of Plastic and Reconstructive Surgery, Children’s National Health System, Washington DC, USA
- Natasha Lepore
- CIBORG Lab, Children’s Hospital Los Angeles and University of Southern California, Los Angeles, CA, USA
- Manuel Mastromanolis
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Deki Tsering
- Division of Neurosurgery, Children’s National Health System, Washington DC, USA
- Robert Keating
- Division of Neurosurgery, Children’s National Health System, Washington DC, USA
- Gary F. Rogers
- Division of Plastic and Reconstructive Surgery, Children’s National Health System, Washington DC, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Health System, Washington DC, USA
- Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington DC, USA
15
Qi D, Panneerselvam K, Ahn W, Arikatla V, Enquobahrie A, De S. Virtual interactive suturing for the Fundamentals of Laparoscopic Surgery (FLS). J Biomed Inform 2017; 75:48-62. [PMID: 28951209 PMCID: PMC5685933 DOI: 10.1016/j.jbi.2017.09.010] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Revised: 09/07/2017] [Accepted: 09/20/2017] [Indexed: 11/22/2022]
Abstract
BACKGROUND Suturing with intracorporeal knot-tying is one of the five tasks of the Fundamentals of Laparoscopic Surgery (FLS), which is a prerequisite for board certification in general surgery. This task involves placing a short suture through two marks in a Penrose drain and then tying a double-throw knot followed by two single-throw knots using two needle graspers operated by both hands. A virtual basic laparoscopic skill trainer (VBLaST©) is being developed to represent the virtual versions of the FLS tasks, including automated, real-time performance measurement and feedback. In this paper, we present the development of a VBLaST suturing simulator (VBLaST-SS©). Developing such a simulator involves solving multiple challenges associated with fast collision detection, response, and force feedback. METHODS In this paper, we present a novel projection-intersection based knot detection method, which can identify the validity of different types of knots at haptic update rates. A simple and robust edge-edge based collision detection algorithm is introduced to support interactive knot-tying and needle insertion operations. A bimanual hardware interface integrates actual surgical instruments with haptic devices, enabling not only interactive rendering of force feedback but also a realistic sensation of needle grasping, which realizes an immersive surgical suturing environment. RESULTS Experiments on performing the FLS intracorporeal suturing task show that the simulator is able to run on a standard personal computer at interactive rates. CONCLUSIONS VBLaST-SS© is a computer-based interactive virtual simulation system for the FLS intracorporeal knot-tying suturing task that can provide real-time objective assessment of the user's performance.
Affiliation(s)
- Di Qi
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA
- Karthikeyan Panneerselvam
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA
- Woojin Ahn
- Intuitive Surgical Inc., Sunnyvale, CA, USA
- Suvranu De
- Center for Modeling, Simulation and Imaging in Medicine (CeMSIM), Rensselaer Polytechnic Institute, Troy, NY, USA.
16
Dangi S, Shah H, Porras AR, Paniagua B, Linte CA, Linguraru M, Enquobahrie A. Robust head CT image registration pipeline for craniosynostosis skull correction surgery. Healthc Technol Lett 2017; 4:174-178. [PMID: 29184660 PMCID: PMC5683203 DOI: 10.1049/htl.2017.0067] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 07/31/2017] [Indexed: 11/19/2022] Open
Abstract
Craniosynostosis is a congenital malformation of the infant skull typically treated via corrective surgery. To accurately quantify the extent of deformation and identify the optimal correction strategy, the patient-specific skull model extracted from a pre-surgical computed tomography (CT) image needs to be registered to an atlas of head CT images representative of normal subjects. Here, the authors present a robust multi-stage, multi-resolution registration pipeline to map a patient-specific CT image to the atlas space of normal CT images. The proposed registration pipeline first performs an initial optimisation at very low resolution to yield a good initial alignment that is subsequently refined at high resolution. They demonstrate the robustness of the proposed method by evaluating its performance on 560 head CT images of 320 normal subjects and 240 craniosynostosis patients and show success rates of 92.8% and 94.2%, respectively. Their method achieved a mean surface-to-surface distance between the patient and template skull of <2.5 mm in the targeted skull region across both the normal subjects and patients.
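The multi-stage, multi-resolution idea above, obtain a rough alignment cheaply at low resolution and then refine it at full resolution, can be sketched for a pure-translation 2D case. This is a simplified NumPy illustration with hypothetical helper names; the paper's pipeline operates on 3D CT volumes with a proper optimiser, not an exhaustive search.

```python
import numpy as np

def downsample(img, f):
    # average-pool the image by a factor f (a crude low-resolution level)
    h, w = img.shape
    return img[:h // f * f, :w // f * f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def best_shift(fixed, moving, center, radius):
    # exhaustive SSD search for an integer translation near `center`
    best, best_err = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = ((fixed - shifted) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def multires_register(fixed, moving, factor=4):
    # stage 1: coarse alignment at low resolution (cheap, large capture range)
    cy, cx = best_shift(downsample(fixed, factor), downsample(moving, factor), (0, 0), 3)
    # stage 2: refinement at full resolution around the upscaled coarse estimate
    return best_shift(fixed, moving, (cy * factor, cx * factor), factor)

fixed = np.zeros((32, 32))
fixed[8:16, 8:16] = 1.0                                    # synthetic "skull" blob
moving = np.roll(np.roll(fixed, -5, axis=0), -6, axis=1)   # misaligned copy
dy, dx = multires_register(fixed, moving)                  # recovers the (5, 6) shift
```

The coarse stage only searches a small window on an image 16× smaller, which is what makes the initial alignment robust and cheap; the fine stage then needs only a local search.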
Affiliation(s)
- Shusil Dangi
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA
- Antonio R Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA
- Cristian A Linte
- Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, Rochester, NY, USA; Biomedical Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Marius Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington, DC, USA; School of Medicine and Health Sciences, George Washington University, Washington, DC, USA
17
Tu L, Porras AR, Ensel S, Tsering D, Paniagua B, Enquobahrie A, Oh A, Keating R, Rogers GF, Linguraru MG. Intracranial Volume Quantification from 3D Photography. Comput Assist Robot Endosc Clin Image Based Proced (2017) 2017; 10550:116-123. [PMID: 29167840 DOI: 10.1007/978-3-319-67543-5_11] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
3D photography offers non-invasive, radiation-free, and anesthetic-free evaluation of craniofacial morphology. However, current non-invasive imaging systems cannot quantify intracranial volume (ICV), which is needed to evaluate brain development in children with cranial pathology. The aim of this study is to develop an automated, radiation-free framework to estimate ICV. Pairs of computed tomography (CT) images and 3D photographs were aligned using registration. We used the real ICV calculated from the CTs and the head volumes from their corresponding 3D photographs to create a regression model. Then, a template 3D photograph was selected as a reference from the data, and a set of landmarks defining the cranial vault were detected automatically on that template. Given the 3D photograph of a new patient, it was registered to the template to estimate the cranial vault area. After obtaining the head volume, the regression model was then used to estimate the ICV. Experiments showed that our volume regression model predicted ICV from head volumes with an average error of 5.81 ± 3.07% and a correlation (R2) of 0.96. We also demonstrated that our automated framework quantified ICV from 3D photography with an average error of 7.02 ± 7.76%, a correlation (R2) of 0.94, and an average estimation error for the position of the cranial base landmarks of 11.39 ± 4.3 mm.
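The regression step above, predicting ICV from the head volume measured on a 3D photograph, can be illustrated with ordinary least squares. The numbers below are made-up training pairs for illustration only, not the paper's data.

```python
import numpy as np

# hypothetical training pairs: head volume from 3D photography (cm^3)
# paired with the true ICV computed from the corresponding CT (cm^3)
head_vol = np.array([1150.0, 1300.0, 1420.0, 1560.0, 1700.0])
icv      = np.array([ 980.0, 1100.0, 1196.0, 1308.0, 1420.0])

# fit ICV ~ a * head_volume + b by ordinary least squares
a, b = np.polyfit(head_vol, icv, deg=1)

def predict_icv(v):
    # predicted intracranial volume for a new head-volume measurement
    return a * v + b
```

A new patient's 3D photograph then yields a head volume, and `predict_icv` maps it to an ICV estimate without any CT acquisition.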
Affiliation(s)
- Liyun Tu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington DC, USA
- Antonio R Porras
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington DC, USA
- Scott Ensel
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington DC, USA
- Deki Tsering
- Division of Neurosurgery, Children's National Health System, Washington DC, USA
- Albert Oh
- Division of Plastic and Reconstructive Surgery, Children's National Health System, Washington DC, USA
- Robert Keating
- Division of Neurosurgery, Children's National Health System, Washington DC, USA
- Gary F Rogers
- Division of Plastic and Reconstructive Surgery, Children's National Health System, Washington DC, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, Washington DC, USA
- School of Medicine and Health Sciences, George Washington University, Washington DC, USA
18
Hasan A, Kolahdouz EM, Enquobahrie A, Caranasos TG, Vavalle JP, Griffith BE. Image-based immersed boundary model of the aortic root. Med Eng Phys 2017; 47:72-84. [PMID: 28778565 PMCID: PMC5599309 DOI: 10.1016/j.medengphy.2017.05.007] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2017] [Revised: 05/04/2017] [Accepted: 05/24/2017] [Indexed: 10/19/2022]
Abstract
Each year, approximately 300,000 heart valve repair or replacement procedures are performed worldwide, including approximately 70,000 aortic valve replacement surgeries in the United States alone. Computational platforms for simulating cardiovascular devices such as prosthetic heart valves promise to improve device design and assist in treatment planning, including patient-specific device selection. This paper describes progress in constructing anatomically and physiologically realistic immersed boundary (IB) models of the dynamics of the aortic root and ascending aorta. This work builds on earlier IB models of fluid-structure interaction (FSI) in the aortic root, which previously achieved realistic hemodynamics over multiple cardiac cycles, but which also were limited to simplified aortic geometries and idealized descriptions of the biomechanics of the aortic valve cusps. By contrast, the model described herein uses an anatomical geometry reconstructed from patient-specific computed tomography angiography (CTA) data, and employs a description of the elasticity of the aortic valve leaflets based on a fiber-reinforced constitutive model fit to experimental tensile test data. The resulting model generates physiological pressures in both systole and diastole, and yields realistic cardiac output and stroke volume at physiological Reynolds numbers. Contact between the valve leaflets during diastole is handled automatically by the IB method, yielding a fully competent valve model that supports a physiological diastolic pressure load without regurgitation. Numerical tests show that the model is able to resolve the leaflet biomechanics in diastole and early systole at practical grid spacings. The model is also used to examine differences in the mechanics and fluid dynamics yielded by fresh valve leaflets and glutaraldehyde-fixed leaflets similar to those used in bioprosthetic heart valves. Although there are large differences in the leaflet deformations during diastole, the differences in the open configurations of the valve models are relatively small, and nearly identical hemodynamics are obtained in all cases considered.
Affiliation(s)
- Ali Hasan
- Department of Mathematics, University of North Carolina, Chapel Hill, NC, USA
- Ebrahim M Kolahdouz
- Department of Mathematics, University of North Carolina, Chapel Hill, NC, USA
- Thomas G Caranasos
- Division of Cardiothoracic Surgery, Department of Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- John P Vavalle
- Division of Cardiology, Department of Medicine, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Boyce E Griffith
- Department of Mathematics and McAllister Heart Institute, University of North Carolina, Chapel Hill, NC, USA.
19
Arikatla VS, Ortiz R, Thompson D, Sasaki-Adams D, Enquobahrie A, De S. A HYBRID APPROACH TO SIMULATE TISSUE BEHAVIOR DURING SURGICAL SIMULATION. Int Conf Comput Math Biomed Eng 2015; 2015:332-335. [PMID: 29657923 PMCID: PMC5898825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Modeling interaction between deformable and rigid objects efficiently and accurately is one of the most important tasks for interactive surgical simulation. The Finite Element Method (FEM) has become very popular in this context due to its versatility in representing elastic bodies with irregular geometric features and diverse material properties. In this work we propose a hybrid FEM approach to simulating realistic tissue behavior that uses a non-linear formulation in the vicinity of the interaction while employing a less accurate and inexpensive linear formulation elsewhere. A semi-implicit time stepping is used for the non-linear portion of the domain. This avoids expensive domain decomposition strategies required to maintain consistency at the interface and allows for regular system assembly using a non-overlapping interface and single solver for both domains. This study demonstrates the advantages of our novel approach, especially for the case of real-time surgical simulation.
Affiliation(s)
- Venkata S Arikatla
- Center for Modeling, Simulation and Imaging in Medicine, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180
- Suvranu De
- Center for Modeling, Simulation and Imaging in Medicine, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180
20
Olasky J, Sankaranarayanan G, Seymour NE, Magee JH, Enquobahrie A, Lin MC, Aggarwal R, Brunt LM, Schwaitzberg SD, Cao CGL, De S, Jones DB. Identifying Opportunities for Virtual Reality Simulation in Surgical Education: A Review of the Proceedings from the Innovation, Design, and Emerging Alliances in Surgery (IDEAS) Conference: VR Surgery. Surg Innov 2015; 22:514-21. [PMID: 25925424 DOI: 10.1177/1553350615583559] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
OBJECTIVES To conduct a review of the state of virtual reality (VR) simulation technology, to identify areas of surgical education that have the greatest potential to benefit from it, and to identify challenges to implementation. BACKGROUND DATA Simulation is an increasingly important part of surgical training. VR is a developing platform for using simulation to teach technical skills, behavioral skills, and entire procedures to trainees and practicing surgeons worldwide. Questions exist regarding the science behind the technology and most effective usage of VR simulation. A symposium was held to address these issues. METHODS Engineers, educators, and surgeons held a conference in November 2013 both to review the background science behind simulation technology and to create guidelines for its use in teaching and credentialing trainees and surgeons in practice. RESULTS Several technologic challenges were identified that must be overcome in order for VR simulation to be useful in surgery. Specific areas of student, resident, and practicing surgeon training and testing that would likely benefit from VR were identified: technical skills, team training and decision-making skills, and patient safety, such as in use of electrosurgical equipment. CONCLUSIONS VR simulation has the potential to become an essential piece of surgical education curriculum but depends heavily on the establishment of an agreed upon set of goals. Researchers and clinicians must collaborate to allocate funding toward projects that help achieve these goals. The recommendations outlined here should guide further study and implementation of VR simulation.
Affiliation(s)
- Jaisa Olasky
- Mount Auburn Hospital, Harvard Medical School, Cambridge, MA, USA
- Neal E Seymour
- Tufts University School of Medicine, Springfield, MA, USA
- J Harvey Magee
- University of Maryland Medical Center, Baltimore, MD, USA
- Ming C Lin
- The University of North Carolina at Chapel Hill, NC, USA
- Rajesh Aggarwal
- University of Pennsylvania Medical School, Philadelphia, PA, USA
- Suvranu De
- Rensselaer Polytechnic Institute, Troy, NY, USA
- Daniel B Jones
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
21
He L, Ortiz R, Enquobahrie A, Manocha D. Interactive Continuous Collision Detection for Topology Changing Models Using Dynamic Clustering. Proc ACM SIGGRAPH Symp Interact 3D Graph Games 2015; 2015:47-54. [PMID: 26191116 DOI: 10.1145/2699276.2699286] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
We present a fast algorithm for continuous collision detection between deformable models. Our approach performs no precomputation and can handle general triangulated models undergoing topological changes. We present a fast decomposition algorithm that represents the mesh boundary using hierarchical clusters and only needs to perform inter-cluster collision checks. The key idea is to compute such clusters quickly and merge them to generate a dynamic bounding volume hierarchy. The overall approach reduces the overhead of computing the hierarchy and also reduces the number of false positives. We highlight the algorithm's performance on many complex benchmarks generated from medical simulations and crash analysis. In practice, we observe a 1.4 to 5 times speedup over prior CCD algorithms for deformable models in our benchmarks.
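The cluster-level culling idea above, only pairs of clusters whose bounds overlap proceed to the exact per-triangle tests, can be sketched with axis-aligned bounding boxes. This is a simplified NumPy illustration with hypothetical names; the paper builds a dynamic hierarchy over the clusters rather than testing all pairs.

```python
import numpy as np

def aabb(points):
    # axis-aligned bounding box of a cluster's vertices
    return points.min(axis=0), points.max(axis=0)

def aabb_overlap(a, b):
    (amin, amax), (bmin, bmax) = a, b
    return bool(np.all(amin <= bmax) and np.all(bmin <= amax))

def cluster_collision_candidates(clusters_a, clusters_b):
    """Index pairs of clusters whose boxes overlap; only these need the
    expensive triangle-level continuous collision tests."""
    boxes_a = [aabb(c) for c in clusters_a]
    boxes_b = [aabb(c) for c in clusters_b]
    return [(i, j)
            for i, ba in enumerate(boxes_a)
            for j, bb in enumerate(boxes_b)
            if aabb_overlap(ba, bb)]

# two clusters per mesh; only the first pair is close enough to overlap
mesh_a = [np.array([[0.0, 0, 0], [1, 1, 1]]), np.array([[10.0, 10, 10], [11, 11, 11]])]
mesh_b = [np.array([[0.5, 0.5, 0.5], [1.5, 1.5, 1.5]]), np.array([[20.0, 20, 20], [21, 21, 21]])]
pairs = cluster_collision_candidates(mesh_a, mesh_b)
```

Here only one of the four cluster pairs survives the broad phase, which is the source of the speedup: the narrow-phase continuous tests run on a small fraction of the triangle pairs.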
Affiliation(s)
- Liang He
- University of North Carolina at Chapel Hill
22
Liu Y, Kot A, Drakopoulos F, Yao C, Fedorov A, Enquobahrie A, Clatz O, Chrisochoides NP. An ITK implementation of a physics-based non-rigid registration method for brain deformation in image-guided neurosurgery. Front Neuroinform 2014; 8:33. [PMID: 24778613 PMCID: PMC3985035 DOI: 10.3389/fninf.2014.00033] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2013] [Accepted: 03/18/2014] [Indexed: 11/13/2022] Open
Abstract
As part of the ITK v4 project efforts, we have developed ITK filters for physics-based non-rigid registration (PBNRR), which satisfies the following requirements: account for tissue properties in the registration, improve accuracy compared to rigid registration, and reduce execution time using GPU and multi-core accelerators. The implementation has three main components: (1) Feature Point Selection, (2) Block Matching (mapped to both multi-core and GPU processors), and (3) a Robust Finite Element Solver. The use of multi-core and GPU accelerators in ITK v4 provides substantial performance improvements. For example, for the non-rigid registration of brain MRIs, the block matching filter is on average about 10 times faster when 12 hyperthreaded cores are used and about 83 times faster when an NVIDIA Tesla GPU is used in a Dell workstation.
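The block matching component can be illustrated in its simplest serial form: for a selected feature point, exhaustively search a window for the best-matching block and report its displacement. The `match_block` helper below is a hypothetical NumPy sketch; the ITK filters parallelize this search over many feature points on multi-core and GPU backends.

```python
import numpy as np

def match_block(fixed, moving, y, x, bs=3, search=4):
    """Displacement of the block centered at (y, x) in `fixed`, found by
    exhaustive SSD search over a (2*search+1)^2 window in `moving`."""
    ref = fixed[y - bs:y + bs + 1, x - bs:x + bs + 1]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[y + dy - bs:y + dy + bs + 1, x + dx - bs:x + dx + bs + 1]
            err = ((ref - cand) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(42)
fixed = rng.standard_normal((20, 20))
# moving = fixed translated by (2, -1); the block match should recover it
moving = np.roll(np.roll(fixed, 2, axis=0), -1, axis=1)
disp = match_block(fixed, moving, 10, 10)   # (2, -1)
```

The per-point matches found this way become the sparse displacement estimates that the finite element solver then regularizes into a dense deformation.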
Affiliation(s)
- Yixun Liu
- CRTC Lab and Computer Science, Old Dominion University, Norfolk, VA, USA; Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD, USA
- Andriy Kot
- CRTC Lab and Computer Science, Old Dominion University, Norfolk, VA, USA
- Fotis Drakopoulos
- CRTC Lab and Computer Science, Old Dominion University, Norfolk, VA, USA
- Chengjun Yao
- Neurosurgery Department, Huashan Hospital, Shanghai, China
- Andriy Fedorov
- CRTC Lab and Computer Science, Old Dominion University, Norfolk, VA, USA; Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, MA, USA
- Olivier Clatz
- Asclepios Research Laboratory, INRIA Sophia Antipolis, Sophia Antipolis Cedex, France
23
Khare R, Sala G, Kinahan P, Esposito G, Banovac F, Cleary K, Enquobahrie A. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy. IEEE Nucl Sci Symp Conf Rec (1997) 2013; 2013. [PMID: 25717283 DOI: 10.1109/nssmic.2013.6829037] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Positron emission tomography-computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and errors in attenuation correction. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are BSpline and symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point; these deformations are then composed to find the deformation with respect to a reference time point. We evaluate the two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that, overall, the BSpline registration algorithm with the single-reference optimization approach gives the best results.
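The second approach, composing adjacent-phase deformations into a deformation to the reference phase, can be sketched in 1D. This is a minimal NumPy illustration with a hypothetical `compose_displacements` helper; the actual fields are 3D BSpline or Demons deformations.

```python
import numpy as np

def compose_displacements(u1, u2, xs):
    """Compose 1D displacement fields sampled at positions `xs`: a point x
    first moves by u1(x), then by u2 evaluated (by linear interpolation)
    at the warped position x + u1(x)."""
    return u1 + np.interp(xs + u1, xs, u2)

xs = np.linspace(0.0, 9.0, 10)
u1 = np.full(10, 0.5)                     # phase 0 -> phase 1 displacement
u2 = np.full(10, 0.25)                    # phase 1 -> phase 2 displacement
u02 = compose_displacements(u1, u2, xs)   # phase 0 -> phase 2 displacement
```

For these constant fields the composition is simply the 0.75 sum everywhere; for realistic spatially varying fields the interpolation at the warped positions is what distinguishes composition from naive addition.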
Affiliation(s)
- Rahul Khare
- Children's National Medical Center, Washington, DC 20010 USA
- Kevin Cleary
- Children's National Medical Center, Washington, DC 20010 USA
24
Wu X, Yao J, Enquobahrie A, Lee HP, Audette MA. Integration of a Multigrid ODE solver into an open medical simulation framework. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2012:3090-3. [PMID: 23366578 DOI: 10.1109/embc.2012.6346617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this paper, we present the implementation of a Multigrid ODE solver in the SOFA framework. By combining the stability advantage of coarse meshes with the transient detail-preserving virtue of fine meshes, a Multigrid ODE solver computes more efficiently than classic ODE solvers based on a single-level discretization. With the ever wider adoption of the SOFA framework in many surgical simulation projects, introducing this Multigrid ODE solver into SOFA's pool of ODE solvers should benefit the entire community. This contribution potentially has broad ramifications in the surgical simulation research community, given that in a single-resolution system, a constitutively realistic interactive tissue response, which presupposes large elements, is in direct conflict with the need to represent clinically relevant critical tissues in the simulation, which are typically composed of small elements.
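The two-level idea behind a multigrid solve, smooth on the fine grid, solve the remaining smooth error on a coarse grid, then correct and re-smooth, can be sketched on a 1D Poisson model problem. This is a standalone NumPy sketch, not SOFA code; restriction by injection and linear prolongation are deliberate simplifications.

```python
import numpy as np

def jacobi(u, f, h, iters=3, w=2/3):
    # weighted Jacobi smoothing for -u'' = f with Dirichlet boundaries
    for _ in range(iters):
        un = u.copy()
        un[1:-1] = (1 - w) * u[1:-1] + (w / 2) * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = un
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2 * u[1:-1] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h)                         # pre-smooth on the fine grid
    rc = residual(u, f, h)[::2]                 # restrict the residual (injection)
    hc, n = 2 * h, rc.size - 2
    # solve the coarse error equation exactly (small tridiagonal system)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / (hc * hc)
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
    return jacobi(u + e, f, h)                  # correct and post-smooth

# model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
N = 17
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(N)
for _ in range(12):
    u = two_grid(u, f, h)
err = np.abs(u - np.sin(np.pi * x)).max()       # down to discretization-level error
```

The coarse solve damps exactly the smooth error components that Jacobi iteration handles poorly, which is why each two-grid cycle contracts the error by a resolution-independent factor, the property a time stepper built on such a solver exploits.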
25
Vaccarella A, Enquobahrie A, Ferrigno G, Momi ED. Modular multiple sensors information management for computer-integrated surgery. Int J Med Robot 2012; 8:253-60. [PMID: 22407822 DOI: 10.1002/rcs.1412] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/07/2011] [Indexed: 12/17/2022]
Abstract
BACKGROUND In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensors (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. METHODS In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: the Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of surgical intervention, was used as the benchmark. An experimental protocol was designed to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. RESULTS Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. CONCLUSION The proposed centralized approach proved adequate for neurosurgery requirements. The latency introduced by the architecture does not affect tracking system performance in terms of frame rate and limits the US image frame rate to 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR.
Affiliation(s)
- Alberto Vaccarella
- NearLab, Dipartimento di Bioingegneria, Politecnico di Milano, Milano, Italy.
26
Enquobahrie A, Bowers M, Ibanez L, Finet J, Audette M, Kolasny A. Enabling ITK-based processing and 3D Slicer MRML scene management in ParaView. Insight J 2012; 2012:1-10. [PMID: 25285311 PMCID: PMC4181673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper documents ongoing work to facilitate ITK-based processing and 3D Slicer scene management in ParaView. We believe this will broaden the use of ParaView for high-performance computing and visualization in the medical imaging research community. The effort is focused on developing ParaView plug-ins for managing VTK structures from 3D Slicer MRML scenes and encapsulating ITK filters for deployment in ParaView. In this paper, we present KWScene, an open-source cross-platform library that is being developed to support the implementation of these types of plug-ins. We describe the overall design of the library, provide implementation details, and conclude by presenting a concrete example that demonstrates the use of the KWScene library in computational anatomy research at the Johns Hopkins Center for Imaging Science.
Affiliation(s)
- Michel Audette
- Department of Modeling, Simulation and Visualization Engineering, Old Dominion University
27
Lee HP, Audette M, Joldes GR, Enquobahrie A. Neurosurgery Simulation Using Non-linear Finite Element Modeling and Haptic Interaction. Proc SPIE Int Soc Opt Eng 2012; 8316:83160H. [PMID: 24465116 PMCID: PMC3898833 DOI: 10.1117/12.911987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
Affiliation(s)
- Huai-Ping Lee
- Kitware Inc., Clifton Park, NY 12065, USA; Dept. of Computer Science, Univ. of North Carolina, Chapel Hill, NC 27599, USA
- Michel Audette
- Dept. of MSVE, Old Dominion University, Norfolk, VA 23529, USA
- Grand Roman Joldes
- School of Mechanical Engineering, The Univ. of Western Australia, Perth, Australia
28
Vaccarella A, Comparetti MD, Enquobahrie A, Ferrigno G, De Momi E. Sensors management in robotic neurosurgery: the ROBOCAST project. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:2119-22. [PMID: 22254756 DOI: 10.1109/iembs.2011.6090395] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Robot and computer-aided surgery platforms bring a variety of sensors into the operating room. These sensors generate information that must be synchronized and merged to improve the accuracy and safety of the surgical procedure for both patients and operators. In this paper, we present our work on the development of a sensor management architecture used to gather and fuse data from localization systems, such as optical and electromagnetic trackers, and from ultrasound imaging devices. The architecture follows a modular client-server approach and was implemented within the EU-funded project ROBOCAST (FP7 ICT 215190). Furthermore, it is based on well-maintained open-source libraries such as OpenCV and the Image-Guided Surgery Toolkit (IGSTK), which are supported by a worldwide community of developers and allow a significant reduction in software costs. We conducted experiments to evaluate the performance of the sensor manager module. We measured the response time for a client to receive tracking data or video images, and the time lag between synchronous acquisitions with an optical tracker and an ultrasound machine. Results showed a median delay of 1.9 ms for a client request of tracking data and about 40 ms for US images; these values are compatible with the data generation rate (20-30 Hz for the tracking system and 25 fps for PAL video). Simultaneous acquisitions were performed with an optical tracking system and a US imaging device: the data were aligned according to the timestamp associated with each sample, and the delay was estimated with a cross-correlation study. A median delay of 230 ms was calculated, showing that real-time 3D reconstruction is not feasible without an offline temporal calibration, although a slow exploration is possible. In conclusion, as far as asleep-patient neurosurgery is concerned, the proposed setup is indeed useful for registration error correction, because brain shift occurs with a time constant of a few tens of minutes.
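The temporal-calibration step described above, aligning timestamped streams and locating the correlation peak, can be sketched as follows. This is an illustrative reimplementation, not the ROBOCAST code; the 20 Hz rate, 0.5 Hz motion, and 5-sample delay are made-up values:

```python
import math

def estimate_delay(ref, delayed, max_lag):
    """Return the lag (in samples) that maximizes the cross-correlation
    between a reference stream and a delayed stream, after the two have
    been aligned on a common sample grid via their timestamps."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(r * d for r, d in zip(ref, delayed[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

fs = 20.0  # tracker rate in Hz, within the paper's 20-30 Hz range
ref = [math.sin(2.0 * math.pi * 0.5 * n / fs) for n in range(200)]
delay_samples = 5  # 5 samples at 20 Hz = 250 ms
delayed = [0.0] * delay_samples + ref[:-delay_samples]

lag = estimate_delay(ref, delayed, max_lag=10)
delay_ms = 1000.0 * lag / fs
```

In practice the recovered offset (230 ms in the paper) is then subtracted from one stream's timestamps before fusing the tracker and ultrasound data.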
Affiliation(s)
- Alberto Vaccarella
- Politecnico di Milano, Bioengineering Department, Neuroengineering and Medical Robotics Laboratory, Piazza Leonardo da Vinci 32, 20133 Milano, Italy.
29
Gary K, Enquobahrie A, Ibanez L, Cheng P, Yaniv Z, Cleary K, Kokoori S, Muffih B, Heidenreich J. Agile Methods for Open Source Safety-Critical Software. Softw Pract Exp 2011; 41:945-962. [PMID: 21799545 PMCID: PMC3142956 DOI: 10.1002/spe.1075] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore, if safety-critical systems require greater emphasis on activities such as formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models do. We present our experiences on the Image-Guided Surgery Toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project that has employed agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and added process elements only as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested, almost a decade ago, that they are not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion.
Affiliation(s)
- Kevin Gary
- Department of Engineering, Arizona State University, Mesa, Arizona, 85212, USA
- Patrick Cheng
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
- Ziv Yaniv
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
- Kevin Cleary
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
- Shylaja Kokoori
- Department of Engineering, Arizona State University, Mesa, Arizona, 85212, USA
- Benjamin Muffih
- Department of Engineering, Arizona State University, Mesa, Arizona, 85212, USA
- John Heidenreich
- Department of Engineering, Arizona State University, Mesa, Arizona, 85212, USA
30
Pace DF, Enquobahrie A, Yang H, Aylward SR, Niethammer M. Deformable Image Registration of Sliding Organs Using Anisotropic Diffusive Regularization. Proc IEEE Int Symp Biomed Imaging 2011:407-413. [PMID: 21785755 DOI: 10.1109/isbi.2011.5872434] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Traditional deformable image registration imposes a uniform smoothness constraint on the deformation field. This is not appropriate when registering images of organs that slide relative to each other, and it therefore leads to registration inaccuracies. In this paper, we present a deformation-field regularization term that is based on anisotropic diffusion and accommodates the deformation-field discontinuities that are expected when considering sliding motion. The registration algorithm was assessed first using artificial images of geometric objects. In a second validation, monomodal chest images depicting both respiratory and cardiac motion were generated using an anatomically realistic software phantom and then registered. Registration accuracy was assessed based on the distances between corresponding segmented organ surfaces. Compared to an established diffusive regularization approach, the anisotropic diffusive regularization gave deformation fields that represented more plausible image correspondences, while giving rise to similar transformed moving images and comparable registration accuracy.
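The core idea, diffusive smoothing whose strength is cut across a sliding interface so the discontinuity survives, can be illustrated in one dimension. This is a toy sketch, not the paper's method (which operates on 3-D deformation fields using organ-boundary normals); the node count, weights, and iteration count are arbitrary:

```python
def regularize(field, weights, iters, tau=0.25):
    """Explicitly diffuse a 1-D deformation field. weights[i] scales the
    flux across the interface between nodes i and i+1: all ones gives
    uniform (isotropic) smoothing, while a near-zero weight at a sliding
    interface preserves the discontinuity (the anisotropic idea)."""
    f = list(field)
    for _ in range(iters):
        new = f[:]
        for i in range(1, len(f) - 1):
            flux_right = weights[i] * (f[i + 1] - f[i])
            flux_left = weights[i - 1] * (f[i] - f[i - 1])
            new[i] = f[i] + tau * (flux_right - flux_left)
        f = new
    return f

# two "organs" sliding past each other: a step at the middle interface
field = [0.0] * 5 + [1.0] * 5
uniform = regularize(field, [1.0] * 9, iters=50)
anisotropic = regularize(field, [1.0] * 4 + [0.0] + [1.0] * 4, iters=50)
```

With uniform weights the step between the two "organs" is smeared out; zeroing the single weight at the interface leaves the sliding discontinuity intact while still allowing smoothing within each region.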
31
Enquobahrie A, Gobbi D, Turek M, Cheng P, Yaniv Z, Lindseth F, Cleary K. Designing Tracking Software for Image-Guided Surgery Applications: IGSTK Experience. Int J Comput Assist Radiol Surg 2008; 3:395-403. [PMID: 20037671 PMCID: PMC2796844 DOI: 10.1007/s11548-008-0243-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
OBJECTIVE: Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. METHODS: IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture, with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component, which handles communication between a control computer and a navigation device to gather pose measurements of the surgical instruments present in the surgical scene. Representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. RESULTS: The initial version of the IGSTK toolkit has been released in the public domain, and several trackers are supported. The toolkit and related information are available at www.igstk.org. CONCLUSION: With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality, safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community.
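The overlay step the abstract mentions, superimposing a tracked instrument on anatomical images, boils down to applying the tracker-reported rigid pose to points in tool coordinates. A minimal sketch with a hypothetical pose (this is not IGSTK's spatial-object API):

```python
def apply_pose(pose, point):
    """Map a 3-D point from tool coordinates into image coordinates
    using a 4x4 rigid-body pose (row-major nested lists), the kind of
    measurement a tracker reports for each surgical instrument."""
    p = (point[0], point[1], point[2], 1.0)
    return tuple(sum(pose[r][c] * p[c] for c in range(4)) for r in range(3))

# hypothetical pose: 90-degree rotation about z plus a translation
pose = [
    [0.0, -1.0, 0.0, 10.0],
    [1.0,  0.0, 0.0, 20.0],
    [0.0,  0.0, 1.0, 30.0],
    [0.0,  0.0, 0.0,  1.0],
]
tip_in_image = apply_pose(pose, (1.0, 0.0, 0.0))  # tool tip 1 unit along x
```

In a navigation system this transform is re-applied at the tracker's update rate so the rendered instrument follows the physical one.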
Affiliation(s)
- David Gobbi
- School of Computing, Queen's University, Kingston, ON, K7L 3N6, Canada
- Matt Turek
- Kitware Inc., Clifton Park, NY, 12065, USA
- Patrick Cheng
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
- Ziv Yaniv
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
- Frank Lindseth
- SINTEF Health Research and the National Center for 3D Ultrasound in Surgery, Trondheim, Norway
- Kevin Cleary
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC, 20007, USA
32
Enquobahrie A, Cheng P, Gary K, Ibanez L, Gobbi D, Lindseth F, Yaniv Z, Aylward S, Jomier J, Cleary K. The image-guided surgery toolkit IGSTK: an open source C++ software toolkit. J Digit Imaging 2007; 20 Suppl 1:21-33. [PMID: 17703338 PMCID: PMC2039836 DOI: 10.1007/s10278-007-9054-3] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2007] [Revised: 07/11/2007] [Accepted: 07/12/2007] [Indexed: 11/30/2022] Open
Abstract
This paper presents an overview of the Image-Guided Surgery Toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state-machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (an application programming interface reference document and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
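The state-machine discipline the abstract describes can be sketched in a few lines: every request is checked against a transition table, so the component can never be driven into an invalid state. The states and events below are illustrative, not IGSTK's actual API:

```python
class TrackerStateMachine:
    """Minimal sketch of a state-machine-governed component: requests
    not listed in the transition table are rejected without changing
    state, so the component always remains in a valid state."""
    TRANSITIONS = {
        ("Idle", "Open"): "Communicating",
        ("Communicating", "StartTracking"): "Tracking",
        ("Tracking", "StopTracking"): "Communicating",
        ("Communicating", "Close"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def request(self, event):
        next_state = self.TRANSITIONS.get((self.state, event))
        if next_state is None:
            return False  # invalid request: state unchanged, no crash
        self.state = next_state
        return True

sm = TrackerStateMachine()
sm.request("Open")
ok = sm.request("StartTracking")   # valid: Communicating -> Tracking
bad = sm.request("Open")           # invalid from Tracking: rejected
```

Rejecting rather than raising on an invalid request is one way to keep a safety-critical component deterministic; a production toolkit would typically also log or report the rejected transition.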
Affiliation(s)
- Patrick Cheng
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC 20007 USA
- Kevin Gary
- Division of Computing Studies, Arizona State University, Mesa, AZ 85212 USA
- Frank Lindseth
- SINTEF Health Research and the National Center for 3D Ultrasound in Surgery, Trondheim, Norway
- Ziv Yaniv
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC 20007 USA
- Kevin Cleary
- Imaging Science and Information Systems (ISIS) Center, Department of Radiology, Georgetown University Medical Center, Washington, DC 20007 USA