1. Youssef H, Demirer M, Middlebrooks EH, Anisetti B, Meschia JF, Lin MP. Framingham Stroke Risk Profile Score and White Matter Disease Progression. Neurologist 2024:00127893-990000000-00143. [PMID: 38867496] [DOI: 10.1097/nrl.0000000000000567]
Abstract
OBJECTIVES To evaluate the relationship between the Framingham Stroke Risk Profile (FSRP) score and the rate of white matter hyperintensity (WMH) progression and cognition.
METHODS Consecutive patients enrolled in the Mayo Clinic Florida Familial Cerebrovascular Diseases Registry (2011-2020) with two brain MRI scans at least 1 year apart were included. The primary outcome was annual change in WMH volume (cm3/year), stratified as fast versus slow (above vs. below the median). Cognition was assessed using the Mini-Mental State Examination (MMSE, 0-30). The FSRP score (0 to 8) was calculated by summing the presence of age 65 years or older, smoking, systolic blood pressure greater than 130 mmHg, diabetes, coronary disease, atrial fibrillation, left ventricular hypertrophy, and antihypertensive medication use. Linear and logistic regression analyses were performed to examine the association between FSRP and WMH progression and cognition.
RESULTS In all, 207 patients were included, with a mean age of 60±16 years; 54.6% were female. The FSRP score distribution was: 31.9% scored 0 to 1, 36.7% scored 2 to 3, and 31.4% scored ≥4. The baseline WMH volume was 9.6 cm3 (IQR: 3.3-28.4 cm3), and the annual rate of WMH progression was 0.9 cm3/year (IQR: 0.1-3.1 cm3/year). A higher FSRP score was associated with fast WMH progression (odds ratio, 1.45; 95% CI: 1.22-1.72; P<0.001) and a lower MMSE score (23.6 vs. 27.1; P<0.001). There was a dose-dependent relationship between higher FSRP score and fast WMH progression (odds ratios 2.20, 4.64, 7.86, and 8.03 for FSRP scores 1, 2, 3, and ≥4, respectively; trend P<0.001).
CONCLUSIONS This study demonstrated an association between higher FSRP scores and accelerated WMH progression, as well as lower cognition.
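A reader's note: the eight-item score above is a simple sum of binary risk-factor indicators. The sketch below restates that rule in code; the function and variable names are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch of the 8-item FSRP score described above (range 0 to 8).
# Names and argument coding are assumptions for illustration only.
def fsrp_score(age, smoker, systolic_bp_mmhg, diabetes, coronary_disease,
               atrial_fibrillation, lvh, on_antihypertensives):
    """Sum of eight binary risk-factor indicators."""
    return sum([
        age >= 65,
        bool(smoker),
        systolic_bp_mmhg > 130,
        bool(diabetes),
        bool(coronary_disease),
        bool(atrial_fibrillation),
        bool(lvh),                    # left ventricular hypertrophy
        bool(on_antihypertensives),
    ])

# Example: 70 years old, non-smoker, SBP 142 mmHg, on antihypertensives -> 3
print(fsrp_score(70, False, 142, False, False, False, False, True))
```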
Affiliation(s)
- Mutlu Demirer
- Department of Radiology, Mayo Clinic, Jacksonville, FL
2. Mayer C, Pepe A, Hossain S, Karner B, Arnreiter M, Kleesiek J, Schmid J, Janisch M, Deutschmann H, Fuchsjäger M, Zimpfer D, Egger J, Mächler H. Type B Aortic Dissection CTA Collection with True and False Lumen Expert Annotations for the Development of AI-based Algorithms. Sci Data 2024; 11:596. [PMID: 38844767] [PMCID: PMC11156948] [DOI: 10.1038/s41597-024-03284-2]
Abstract
Aortic dissections (ADs) are serious conditions of the main artery of the human body, in which a tear in the inner layer of the aortic wall leads to the formation of a new blood-flow channel, the so-called false lumen. ADs affecting the aorta distal to the left subclavian artery are classified as Stanford type B aortic dissections (type B AD). Type B AD is linked to substantial morbidity and mortality; however, the course of the disease in the individual case is often unpredictable. Computed tomography angiography (CTA) is the gold standard for the diagnosis of type B AD. To advance the tools available for the analysis of CTA scans, we provide a CTA collection of 40 type B AD cases from clinical routine with corresponding expert segmentations of the true and false lumina. Segmented CTA scans might aid clinicians in decision making, especially if the process can be fully automated. The data collection is therefore meant to be used to develop, train and test algorithms.
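For orientation only, a collection of this kind could be consumed roughly as follows. The file name and the label convention (1 = true lumen, 2 = false lumen) are assumptions for illustration, not the published specification of this dataset.

```python
# Hypothetical usage sketch: load one case's lumen label map and report
# true/false lumen volumes. File name and label values are assumptions.
import numpy as np
import SimpleITK as sitk

mask = sitk.ReadImage("case_01_lumen_labels.nrrd")
labels = sitk.GetArrayFromImage(mask)            # (z, y, x) integer labels
voxel_ml = np.prod(mask.GetSpacing()) / 1000.0   # voxel volume: mm^3 -> ml

for value, name in [(1, "true lumen"), (2, "false lumen")]:
    volume = np.count_nonzero(labels == value) * voxel_ml
    print(f"{name}: {volume:.1f} ml")
```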
Affiliation(s)
- Christian Mayer
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Antonio Pepe
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010, Graz, Austria
- Sophie Hossain
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Barbara Karner
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Melanie Arnreiter
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), AI-guided Therapies (AIT), Essen University Hospital (AöR), Girardetstraße 2, 45131, Essen, Germany
- Johannes Schmid
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Michael Janisch
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Hannes Deutschmann
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Michael Fuchsjäger
- Division of General Radiology, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, 8036, Graz, Austria
- Daniel Zimpfer
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
- Jan Egger
- Institute of Computer Graphics and Vision (ICG), Graz University of Technology, Inffeldgasse 16/II, 8010, Graz, Austria
- Institute for Artificial Intelligence in Medicine (IKIM), AI-guided Therapies (AIT), Essen University Hospital (AöR), Girardetstraße 2, 45131, Essen, Germany
- Heinrich Mächler
- Division of Cardiac Surgery, Department of Surgery, Medical University of Graz, Auenbruggerplatz 29, 8036, Graz, Austria
3. Frodl A, Lange T, Siegel M, Meine H, Taghizadeh E, Schmal H, Izadpanah K. Individual Influence of Trochlear Dysplasia on Patellofemoral Kinematics after Isolated MPFL Reconstruction. J Pers Med 2022; 12:2049. [PMID: 36556269] [PMCID: PMC9786691] [DOI: 10.3390/jpm12122049]
Abstract
INTRODUCTION The influence of the MPFL graft in cases of patellar instability with a dysplastic trochlea is a controversial topic. The effect of MPFL reconstruction as a single therapy is under investigation, especially for severely dysplastic trochleae (Dejour types C and D). The purpose of this study was to evaluate the impact of trochlear dysplasia on patellar kinematics in patients suffering from low-flexion patellar instability under weight-bearing conditions after isolated MPFL reconstruction.
MATERIAL AND METHODS Thirteen patients were included in this study, among them eight patients with mild dysplasia (Dejour types A and B) and five patients with severe dysplasia (Dejour types C and D). By performing knee MRI with in situ loading, patellar kinematics and the patellofemoral cartilage contact area could be measured under activation of the quadriceps musculature at knee flexion angles of 0°, 15° and 30°. To mitigate MRI motion artefacts, prospective motion correction based on optical tracking was applied. Bone and cartilage segmentation were performed semi-automatically for further data analysis. Cartilage contact area (CCA) and patellar tilt were the main outcome measures for this study. Pre- and post-surgery measures were compared for each group.
RESULTS The data showed a trend toward lower patellar tilt after MPFL graft installation in both groups and at all knee flexion angles. There were no significant changes in patellar tilt at 0° (unloaded pre-surgery: 22.6 ± 15.2; post-surgery: 17.7 ± 14.3; p = 0.110) and unloaded 15° flexion (pre-surgery: 18.9 ± 12.7; post-surgery: 12.2 ± 13.0; p = 0.052) of the knee in patients with mild dysplasia; likewise, in patients with severe dysplasia of the trochlea, the results were not significant at the same angles with loading of 5 kg (0° flexion pre-surgery: 34.4 ± 12.1; post-surgery: 31.2 ± 16.1; p = 0.5; 15° flexion pre-surgery: 33.3 ± 6.1; post-surgery: 23.4 ± 8.6; p = 0.068). CCA increased at every flexion angle and in every group, but a significant increase was seen only between 0° and 15° (unloaded and loaded) in mild dysplasia of the trochlea, whereas in the Dejour type C and D group a significant increase was seen with unloaded full extension of the knee (0° flexion) and at 30° flexion (unloaded and loaded).
CONCLUSION This study demonstrates a significant effect of the MPFL graft on cartilage contact area, as well as an improvement of patellar tilt in patients with mild dysplasia of the trochlea. Thus, MPFL reconstruction can be used as a single treatment for patients with Dejour type A and B dysplasia. However, in patients with severe dysplasia the MPFL graft alone does not significantly increase CCA.
Affiliation(s)
- Andreas Frodl
- Department of Orthopedics and Traumatology, Freiburg University Hospital, 79106 Freiburg, Germany
- Thomas Lange
- Department of Radiology, Medical Physics, Freiburg University Hospital, 79106 Freiburg, Germany
- Markus Siegel
- Department of Orthopedics and Traumatology, Freiburg University Hospital, 79106 Freiburg, Germany
- Hans Meine
- Fraunhofer Institute for Digital Medicine, 28359 Bremen, Germany
- Elham Taghizadeh
- Fraunhofer Institute for Digital Medicine, 28359 Bremen, Germany
- Hagen Schmal
- Department of Orthopedic Surgery, University Hospital Odense, Sdr. Boulevard 29, 5000 Odense, Denmark
- Kaywan Izadpanah
- Department of Orthopedics and Traumatology, Freiburg University Hospital, 79106 Freiburg, Germany
4. Durrani S, Onyedimma C, Jarrah R, Bhatti A, Nathani KR, Bhandarkar AR, Mualem W, Ghaith AK, Zamanian C, Michalopoulos GD, Alexander AY, Jean W, Bydon M. The Virtual Vision of Neurosurgery: How Augmented Reality and Virtual Reality are Transforming the Neurosurgical Operating Room. World Neurosurg 2022; 168:190-201. [DOI: 10.1016/j.wneu.2022.10.002]
5. MeVisLab-OpenVR prototyping platform for virtual reality medical applications. Int J Comput Assist Radiol Surg 2022; 17:2065-2069. [PMID: 35674999] [DOI: 10.1007/s11548-022-02678-0]
Abstract
PURPOSE Virtual reality (VR) can provide added value for diagnosis and/or intervention planning. Several VR software implementations have been proposed, but they are often application-dependent. Previous attempts at a more generic solution incorporating VR into medical prototyping software (MeVisLab) still lacked functionality, precluding easy and flexible development.
METHODS We propose an alternative solution that renders to a graphical processing unit (GPU) texture to enable rendering of arbitrary Open Inventor scenes in a VR context. It facilitates flexible development of user interaction and rendering of more complex scenes involving multiple objects. We tested the platform in planning a transcatheter cardiac stent placement procedure.
RESULTS This approach enabled the development of a particular implementation that facilitates planning of percutaneous treatment of a sinus venosus atrial septal defect. The implementation showed that it is intuitive to plan and verify the procedure using VR.
CONCLUSION An alternative implementation for linking OpenVR with MeVisLab is provided that offers more flexible development of VR prototypes, which can facilitate further clinical validation of this technology in various medical disciplines.
6. Chen Z, Wang Y, Li X, Wang K, Li Z, Yang P. An automatic measurement system of distal femur morphological parameters using 3D slicer software. Bone 2022; 156:116300. [PMID: 34958998] [DOI: 10.1016/j.bone.2021.116300]
Abstract
In the field of joint surgery, the computer-aided design of knee prostheses suitable for the Chinese population requires a large quantity of anatomical knee data. In this study, we propose a new method that uses 3D Slicer software to automatically measure the morphological parameters of the distal femur. First, 141 femur samples were segmented from CT data to establish the femoral shape library. Next, balanced iterative reducing and clustering using hierarchies (BIRCH), combined with iterative closest point (ICP) and generalised Procrustes analysis (GPA), was used to achieve fast registration of the femur samples. The statistical model was automatically calculated from the registered femur samples, and an orthopaedic surgeon marked the points on the statistical model. Finally, we developed an automatic measurement system using 3D Slicer software, and a deformable model-matching method was applied to establish the point correspondence between the statistical model and the other samples. By matching points on the statistical model to corresponding points in the other samples, we measured all other samples. We marked six points and measured eight parameters. We evaluated the performance of automatic matching by comparing the points marked manually with those matched automatically, and verified the accuracy of the system by comparing the manual and automatic measurement results. The results indicated that the average error of the automatic matching points was 1.03 mm, and the average length error and average angle error measured automatically by the system were 0.37 mm and 0.63°, respectively. These errors were smaller than the intra-rater and inter-rater errors measured manually by two different surgeons, which showed that the accuracy of our automatic method was high. Taken together, this study established an accurate and automatic measurement system for the distal femur, based on the secondary development of 3D Slicer software, to assist orthopaedic surgeons in completing large-scale measurements and further promote the improved design of Chinese-specific knee prostheses.
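For readers unfamiliar with the registration step, the sketch below shows a minimal rigid iterative-closest-point (ICP) loop: nearest-neighbour correspondences followed by a Kabsch alignment. It is a didactic simplification, not the authors' BIRCH/ICP/GPA pipeline.

```python
# Minimal rigid ICP sketch (nearest neighbours + Kabsch), for illustration.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Rotation R and translation t minimising ||src @ R.T + t - dst||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=30):
    """Align point cloud src (N, 3) to dst (M, 3); returns the moved points."""
    tree, moved = cKDTree(dst), src.copy()
    for _ in range(iterations):
        _, idx = tree.query(moved)                 # closest-point matches
        R, t = best_rigid_transform(moved, dst[idx])
        moved = moved @ R.T + t
    return moved
```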
Affiliation(s)
- Zhen Chen
- College of Computer Science, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, PR China
- Yagang Wang
- College of Computer Science, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121, PR China
- Xinghua Li
- Department of Radiology, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Kunzheng Wang
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Zhe Li
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
- Pei Yang
- Department of Bone and Joint Surgery, The Second Affiliated Hospital of Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi 710004, PR China
7. Egger J, Wild D, Weber M, Bedoya CAR, Karner F, Prutsch A, Schmied M, Dionysio C, Krobath D, Jin Y, Gsaxner C, Li J, Pepe A. Studierfenster: an Open Science Cloud-Based Medical Imaging Analysis Platform. J Digit Imaging 2022; 35:340-355. [PMID: 35064372] [PMCID: PMC8782222] [DOI: 10.1007/s10278-021-00574-8]
Abstract
Imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI) are widely used in diagnostics, clinical studies, and treatment planning. Automatic algorithms for image analysis have thus become an invaluable tool in medicine. Examples of this are two- and three-dimensional visualizations, image segmentation, and the registration of all types of anatomical structures and pathologies. In this context, we introduce Studierfenster (www.studierfenster.at): a free, non-commercial open science client-server framework for (bio-)medical image analysis. Studierfenster offers a wide range of capabilities, including the visualization of medical data (CT, MRI, etc.) in two-dimensional (2D) and three-dimensional (3D) space in common web browsers, such as Google Chrome, Mozilla Firefox, Safari, or Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images, manual placing of (anatomical) landmarks in medical imaging data, visualization of medical data in virtual reality (VR), and facial reconstruction and registration of medical data for augmented reality (AR). More sophisticated features include automatic cranial implant design with a convolutional neural network (CNN), inpainting of aortic dissections with a generative adversarial network, and a CNN for automatic aortic landmark detection in CT angiography images. A user study with medical and non-medical experts in medical image analysis was performed to evaluate the usability and the manual functionalities of Studierfenster. When participants were asked about their overall impression of Studierfenster in an ISO-standard (ISO-Norm) questionnaire, a mean of 6.3 out of 7.0 possible points was achieved. The evaluation also provided insights into the results achievable with Studierfenster in practice, by comparing these with two ground-truth segmentations performed by a physician of the Medical University of Graz in Austria. In this contribution, we presented an online environment for (bio-)medical image analysis. In doing so, we established a client-server-based architecture, which is able to process medical data, especially 3D volumes. Our online environment is not limited to medical applications for humans. Rather, its underlying concept could be interesting for researchers from other fields when applying the already existing functionalities or future implementations of further image-processing applications. An example could be the processing of medical acquisitions like CT or MRI from animals [Clinical Pharmacology & Therapeutics, 84(4):448-456], which are becoming more common as veterinary clinics and centers are increasingly equipped with such imaging devices. Furthermore, applications in entirely non-medical research in which images/volumes need to be processed are also conceivable, such as those in optical measuring techniques, astronomy, or archaeology.
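As an illustration of the simpler of the two metrics mentioned above, the Dice score of two binary masks takes only a few lines; this is a generic sketch, not Studierfenster's actual code.

```python
# Generic Dice similarity coefficient for two equally shaped binary volumes.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```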
Affiliation(s)
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Daniel Wild
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Maximilian Weber
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christopher A Ramirez Bedoya
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Florian Karner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Alexander Prutsch
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Michael Schmied
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Christina Dionysio
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Dominik Krobath
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Yuan Jin
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Research Center for Connected Healthcare Big Data, ZhejiangLab, 311121, Hangzhou, Zhejiang, China
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jianning Li
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Artificial Intelligence in Medicine, AI-guided Therapies, University Hospital Essen, Girardetstraße 2, 45131, Essen, Germany
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
8. Prieto Prada JD, Im J, Oh H, Song C. Enhanced location tracking in sensor fusion-assisted virtual reality micro-manipulation environments. PLoS One 2021; 16:e0261933. [PMID: 34962969] [PMCID: PMC8714085] [DOI: 10.1371/journal.pone.0261933]
Abstract
Virtual reality (VR) technology plays a significant role in many biomedical applications. These VR scenarios provide valuable experience for tasks requiring great accuracy with human subjects. Unfortunately, commercial VR controllers exhibit large positioning errors in micro-manipulation tasks. Here, we propose a VR-based framework along with a sensor fusion algorithm to improve the micro-position tracking performance of a microsurgical tool. To the best of our knowledge, this is the first application of a Kalman filter in a millimeter-scale VR environment, using the position data from the VR controller and an inertial measuring device. This study builds and tests two cases: (1) location tracking without sensor fusion and (2) location tracking with active sensor fusion. The static and dynamic experiments demonstrate that the Kalman filter can provide greater precision during micro-manipulation in small-scale VR scenarios.
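The core idea, fusing a smooth but drifting inertial signal with an absolute but noisy controller position, fits a textbook Kalman filter. The one-dimensional sketch below is a simplified illustration under assumed noise values, not the paper's implementation.

```python
# 1-D Kalman filter sketch: predict with an IMU-derived velocity, correct
# with the VR-controller position. Noise variances q and r are assumptions.
def kalman_track(vr_positions, imu_velocities, dt=0.01, q=1e-4, r=4e-2):
    x, p = vr_positions[0], 1.0              # state estimate and variance
    estimates = []
    for z, v in zip(vr_positions, imu_velocities):
        x, p = x + v * dt, p + q             # predict step (IMU velocity)
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p  # update step (VR measurement)
        estimates.append(x)
    return estimates
```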
Affiliation(s)
- Jintaek Im
- Department of Robotics Engineering, DGIST, Daegu, South Korea
- Hyondong Oh
- School of Mechanical, Aerospace and Nuclear Engineering, UNIST, Ulsan, South Korea
- Cheol Song
- Department of Robotics Engineering, DGIST, Daegu, South Korea
9. Deng S, Wheeler G, Toussaint N, Munroe L, Bhattacharya S, Sajith G, Lin E, Singh E, Chu KYK, Kabir S, Pushparajah K, Simpson JM, Schnabel JA, Gomez A. A Virtual Reality System for Improved Image-Based Planning of Complex Cardiac Procedures. J Imaging 2021; 7:151. [PMID: 34460787] [PMCID: PMC8404926] [DOI: 10.3390/jimaging7080151]
Abstract
The intricate nature of congenital heart disease means that successful surgical and interventional procedures require an understanding of the complex, patient-specific three-dimensional dynamic anatomy of the heart, derived from imaging data such as three-dimensional echocardiography. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines a full understanding of the three-dimensional dynamic data. Additionally, controlling a three-dimensional visualisation with two-dimensional tools is often difficult, so it is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping of visualisations such as volume rendering, multiplanar reformatting and flow visualisation, and of advanced interactions such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interactions. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.
Affiliation(s)
- Shujie Deng
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Gavin Wheeler
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Nicolas Toussaint
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Lindsay Munroe
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Suryava Bhattacharya
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Gina Sajith
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Ei Lin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Eeshar Singh
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Ka Yee Kelly Chu
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Saleha Kabir
- Department of Congenital Heart Disease, Evelina London Children’s Hospital, Guy’s and St Thomas’ National Health Service Foundation Trust, London SE1 7EH, UK
- Kuberan Pushparajah
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Department of Congenital Heart Disease, Evelina London Children’s Hospital, Guy’s and St Thomas’ National Health Service Foundation Trust, London SE1 7EH, UK
- John M. Simpson
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Department of Congenital Heart Disease, Evelina London Children’s Hospital, Guy’s and St Thomas’ National Health Service Foundation Trust, London SE1 7EH, UK
- Julia A. Schnabel
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
- Department of Informatics, Technische Universität München, 85748 Garching, Germany
- Helmholtz Zentrum München—German Research Center for Environmental Health, 85764 Neuherberg, Germany
- Alberto Gomez
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EU, UK
10. Memon AR, Li J, Egger J, Chen X. A review on patient-specific facial and cranial implant design using Artificial Intelligence (AI) techniques. Expert Rev Med Devices 2021; 18:985-994. [PMID: 34404280] [DOI: 10.1080/17434440.2021.1969914]
Abstract
INTRODUCTION Researchers and engineers have become increasingly important to the healthcare industry, including through recent advances in patient-specific implant (PSI) design. CAD/CAM technology plays an important role in the design and development of Artificial Intelligence (AI)-based implants. Interest across the globe is focused on the design and manufacturing of AI-based implants, since their use in everyday professional practice can decrease costs, improve patients' health, and increase efficiency; many implant designers and manufacturers therefore adopt these techniques.
AREAS COVERED The broader goal of this field is to build smart systems that can interact with the world as people do, understand their language, and learn to improve from real-life examples. Machine learning can be guided by large data sets and algorithms that improve its ability to learn to perform a task. In this review, artificial intelligence (AI), deep learning, and machine-learning techniques for the design of biomedical implants are studied.
EXPERT OPINION The main purpose of this article was to highlight important AI techniques for designing PSIs. These are automatic techniques that help designers create patient-specific implants using AI algorithms such as deep learning, machine learning, and other automatic methods.
Affiliation(s)
- Afaque Rafique Memon
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Bio-medical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianning Li
- Faculty of Computer Science and Biomedical Engineering, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- The Laboratory of Computer Algorithm for Medicine, Medical University of Graz, Graz, Austria
- Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Jan Egger
- Faculty of Computer Science and Biomedical Engineering, Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- The Laboratory of Computer Algorithm for Medicine, Medical University of Graz, Graz, Austria
- Department of Neurosurgery, Medical University of Graz, Graz, Austria
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Institute of Bio-medical Manufacturing and Life Quality Engineering, Shanghai Jiao Tong University, Shanghai, China
11. Pepe A, Trotta GF, Mohr-Ziak P, Gsaxner C, Wallner J, Bevilacqua V, Egger J. A Marker-Less Registration Approach for Mixed Reality-Aided Maxillofacial Surgery: a Pilot Evaluation. J Digit Imaging 2019; 32:1008-1018. [PMID: 31485953] [DOI: 10.1007/s10278-019-00272-6]
Abstract
As is common routine in tumor resections, surgeons rely on local examination of the removed tissues and on the swiftly made microscopy findings of the pathologist, which are based on intraoperatively taken tissue probes. This approach may imply an extended duration of the operation, increased effort for the medical staff, and longer occupancy of the operating room (OR). Mixed reality technologies, and particularly augmented reality, have already been applied in surgical scenarios with positive initial outcomes. Nonetheless, these methods have used manual or marker-based registration. In this work, we design an application for marker-less registration of PET-CT information for a patient. The algorithm combines facial landmarks extracted from an RGB video stream with the so-called Spatial-Mapping API provided by the HMD Microsoft HoloLens. The accuracy of the system is compared with a marker-based approach, and the opinions of field specialists were collected during a demonstration. A survey based on the standard ISO-9241/110 was designed for this purpose. The measurements show an average positioning error along the three axes of (x, y, z) = (3.3 ± 2.3, -4.5 ± 2.9, -9.3 ± 6.1) mm. Compared with the marker-based approach, this represents an increase in positioning error of approximately 3 mm along two axes (x, y), which might be due to the absence of explicit markers. The application was positively evaluated by the specialists; they showed interest in continued further work and contributed to the development process with constructive criticism.
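For intuition, treating the reported per-axis mean offsets as a single displacement vector (and ignoring the reported standard deviations, so this is only a rough aggregate) gives an overall translation magnitude of:

```latex
\lVert \mathbf{e} \rVert = \sqrt{3.3^2 + (-4.5)^2 + (-9.3)^2}\,\text{mm}
                         = \sqrt{117.63}\,\text{mm} \approx 10.8\,\text{mm}
```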
Affiliation(s)
- Antonio Pepe
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Gianpaolo Francesco Trotta
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Via Orabona, 4, Bari, Italy
- Peter Mohr-Ziak
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- VRVis-Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, 1220, Vienna, Austria
- Christina Gsaxner
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Jürgen Wallner
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036, Graz, Styria, Austria
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering, Polytechnic University of Bari, Via Orabona, 4, Bari, Italy
- Jan Egger
- Institute of Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, 8036, Graz, Styria, Austria
12. Steiniger BS, Pfeffer H, Guthe M, Lobachev O. Exploring human splenic red pulp vasculature in virtual reality: details of sheathed capillaries and the open capillary network. Histochem Cell Biol 2021; 155:341-354. [PMID: 33074357] [PMCID: PMC8021519] [DOI: 10.1007/s00418-020-01924-3]
Abstract
We reconstructed serial sections of a representative adult human spleen to clarify the unknown arrangement of the splenic microvasculature, such as terminal arterioles, sheathed capillaries, the red pulp capillary network and venules. The resulting 3D model was evaluated in virtual reality (VR). Capillary sheaths often occurred after the second or third branching of a terminal arteriole and covered its capillary side or end branches. The sheaths started directly after the final smooth muscle cells of the arteriole and consisted of cuboidal CD271++ stromal sheath cells surrounded and infiltrated by B lymphocytes and macrophages. Some sheaths covered up to four sequential capillary bifurcations thus forming bizarre elongated structures. Each sheath had a unique form. Apart from symmetric dichotomous branchings inside the sheath, sheathed capillaries also gave off side branches, which crossed the sheath and freely ended at its surface. These side branches are likely to distribute materials from the incoming blood to sheath-associated B lymphocytes and macrophages and thus represent the first location for recognition of blood-borne antigens in the spleen. A few non-sheathed bypasses from terminal arterioles to the red pulp capillary network also exist. Red pulp venules are primarily supplied by sinuses, but they also exhibit a few connections to the capillary network. Thus, the human splenic red pulp harbors a primarily open microcirculation with a very minor closed part.
Affiliation(s)
- Birte S Steiniger
- Institute of Anatomy and Cell Biology, University of Marburg, Robert-Koch-Str. 8, 35037, Marburg, Germany
- Henriette Pfeffer
- Institute of Anatomy and Cell Biology, University of Marburg, Robert-Koch-Str. 8, 35037, Marburg, Germany
- Michael Guthe
- Visual Computing, Institute of Computer Science, University of Bayreuth, 95440, Bayreuth, Germany
- Oleg Lobachev
- Visual Computing, Institute of Computer Science, University of Bayreuth, 95440, Bayreuth, Germany
- Institute of Functional and Applied Anatomy, Hannover Medical School, 30625, Hannover, Germany
- Leibniz-Fachhochschule School of Business, 30539, Hannover, Germany
13. Cai H, Lin T, Chen L, Weng H, Zhu R, Chen Y, Cai G. Evaluating the effect of immersive virtual reality technology on gait rehabilitation in stroke patients: a study protocol for a randomized controlled trial. Trials 2021; 22:91. [PMID: 33494805] [PMCID: PMC7836462] [DOI: 10.1186/s13063-021-05031-z]
Abstract
BACKGROUND The high incidence of cerebral apoplexy makes it one of the most important causes of adult disability. Gait disorder is one of the hallmark symptoms among the sequelae of cerebral apoplexy. The recovery of walking ability is critical for improving patients' quality of life. Innovative virtual reality (VR) technology has been widely used in post-stroke rehabilitation, and its effectiveness and safety have been widely verified. To date, however, there are few studies evaluating the effect of immersive virtual reality on stroke-related gait rehabilitation. This study outlines the application of immersive VR-assisted rehabilitation for gait rehabilitation of stroke patients, for comparative evaluation against traditional rehabilitation.
METHODS The study describes a prospective, randomized controlled clinical trial. Thirty-six stroke patients will be screened and enrolled as subjects within 1 month of initial stroke and randomized into two groups. The VRT group (n = 18) will receive VR-assisted training (30 min) 5 days/week for 3 weeks. The non-VRT group (n = 18) will receive functional gait rehabilitation training (30 min) 5 days/week for 3 weeks. The primary and secondary outcomes will be assessed before the intervention, 3 weeks after the intervention, and 6 months after the intervention. The primary outcome will be the timed "up & go" test (TUGT). The secondary outcomes will include the MMT muscle strength grading standard (MMT), the Fugl-Meyer assessment (FMA), the motor function assessment scale (MAS), the modified Barthel index (ADL), step with maximum knee angle, total support time, step frequency, step length, pace, and stride length.
DISCUSSION Virtual reality is an innovative technology with broad current and prospective applications. Immersive VR-assisted rehabilitation, presenting patients with vivid treatment scenarios in the form of virtual games, will stimulate patients' interest through active participation. The feedback of VR games can also give patients performance awareness and outcome feedback, which can be motivating. This study may reveal an improved method of stroke rehabilitation, which can be helpful for clinical decision-making and future practice.
TRIAL REGISTRATION Chinese Clinical Trial Registry ChiCTR1900025375. Registered on 25 August 2019.
Affiliation(s)
- Huihui Cai
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Department of Clinical Medicine, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Tao Lin
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Department of Clinical Medicine, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Lina Chen
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Huidan Weng
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Ruihan Zhu
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Ying Chen
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
- Guoen Cai
- Department of Neurology, Fujian Medical University Union Hospital, Institute of Clinical Neurology, Fujian Medical University, Fuzhou, 350001, Fujian, China
14. Lungu AJ, Swinkels W, Claesen L, Tu P, Egger J, Chen X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: an extension to different kinds of surgery. Expert Rev Med Devices 2021; 18:47-62. [PMID: 33283563] [DOI: 10.1080/17434440.2021.1860750]
Abstract
Background: Research proves that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying virtual reality (VR), augmented reality (AR) and mixed reality (MR) in surgical simulators increases the fidelity, level of immersion and overall experience of these simulators.
Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR and MR across distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed.
Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact in the same way as in the physical world. The key components for the application of AR and MR in surgical simulators include the tracking system as well as the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.
Affiliation(s)
- Abel J Lungu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Wout Swinkels
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Luc Claesen
- Computational Sensing Systems, Department of Engineering Technology, Hasselt University, Diepenbeek, Belgium
- Puxun Tu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
15. Yang L, Pan Y, Lin J, Liu Y, Shang Y, Yang S, Cao H. Automatic Guidance Method for Laser Tracker Based on Rotary-Laser Scanning Angle Measurement. Sensors (Basel) 2020; 20:4168. [PMID: 32727122] [PMCID: PMC7436201] [DOI: 10.3390/s20154168]
Abstract
Laser-tracking measurement systems (laser trackers) play a critical role in large-scale, high-precision 3D coordinate measurement. However, the existing visual guidance of laser trackers is still limited by operator dependence, a small-angle field of view, and a time-consuming laser-guided process. This paper presents an automatic guidance method for laser trackers based on rotary-laser scanning angle measurement technology. In this method, a special target consisting of six photoelectric receivers and a retroreflector is integrated into the rotary-laser scanning transmitter's coordinate system. Real-time constraints calculated by the proposed method provide the coordinates of the target in the laser tracker's coordinate system for guidance. Finally, the experimental results verified that automatic re-establishment of the sightline can be realized over a 360° horizontal field within tens of arc-seconds, and that the method is robust against fast movement of the target.
Affiliation(s)
- Jiarui Lin
- Correspondence: Tel.: +86-022-2740-6643
16. Memon AR, Wang E, Hu J, Egger J, Chen X. A review on computer-aided design and manufacturing of patient-specific maxillofacial implants. Expert Rev Med Devices 2020; 17:345-356. [PMID: 32105159] [PMCID: PMC7175472] [DOI: 10.1080/17434440.2020.1736040]
Abstract
Introduction: Various prefabricated maxillofacial implants are used in the clinical routine for the surgical treatment of patients. In addition to these prefabricated implants, customized CAD/CAM implants are becoming increasingly important for a more precise replacement of damaged anatomical structures. This paper reviews the design and manufacturing of patient-specific implants for the maxillofacial area.
Areas covered: The contribution of this publication is to give a state-of-the-art overview of the usage of customized facial implants. Moreover, it provides future perspectives, including 3D printing technologies, for the manufacturing of patient-individual facial implants based on patient data acquisitions, like computed tomography (CT) or magnetic resonance imaging (MRI).
Expert opinion: The main target of this review is to present the various design software packages and 3D manufacturing technologies that have been applied to fabricate facial implants. In doing so, different CAD design software packages, based on various methods and implemented and evaluated by researchers, are discussed. Finally, recent 3D printing technologies that have been applied to manufacture patient-individual implants are introduced and discussed.
Affiliation(s)
- Afaque Rafique Memon
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Enpeng Wang
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Junlei Hu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jan Egger
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Austria
- The Laboratory of Computer Algorithms for Medicine, Medical University of Graz, Graz, Austria
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
17. Movement Estimation Using Soft Sensors Based on Bi-LSTM and Two-Layer LSTM for Human Motion Capture. Sensors (Basel) 2020; 20:1801. [PMID: 32214039] [PMCID: PMC7146561] [DOI: 10.3390/s20061801]
Abstract
The importance of estimating human movement has increased in the field of human motion capture. The HTC VIVE is a popular device that provides a convenient way of capturing human motions using several sensors. Recently, only the motion of users’ hands has typically been captured, greatly reducing the range of motion captured. This paper proposes a framework to estimate single-arm orientations using soft sensors, mainly by combining a bidirectional long short-term memory (Bi-LSTM) network and a two-layer LSTM. The positions of the two hands are measured using an HTC VIVE set, and the orientations of a single arm, including its corresponding upper arm and forearm, are estimated by the proposed framework based on the estimated positions of the two hands. Given that the proposed framework is meant for a single arm, if the orientations of two arms are required, the estimation is performed twice. To obtain the ground truth of single-arm orientations, two Myo gesture-control sensory armbands are worn on the arm: one on the upper arm and the other on the forearm. The proposed framework analyzes the contextual features of consecutive sensory arm movements, which provides an efficient way to improve the accuracy of arm movement estimation. In comparison with the ground truth, the proposed method estimated arm movements with a dynamic time warping (DTW) distance that was on average 73.90% smaller than that of a conventional Bayesian framework. A distinct feature of the proposed framework is that the number of sensors attached to end-users is reduced. Additionally, with this framework, arm orientations can be estimated with any soft sensor while good estimation accuracy is ensured. Another contribution is the proposed combination of the Bi-LSTM and two-layer LSTM.
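Since the evaluation metric above may be unfamiliar, the sketch below gives the standard O(nm) dynamic-programming form of the dynamic time warping (DTW) distance; it is a generic illustration, not the paper's code.

```python
# Generic DTW distance between two sequences via dynamic programming.
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

print(dtw([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # 0.0 -- warping absorbs the repeat
```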
18. Weber M, Wild D, Wallner J, Egger J. A Client/Server based Online Environment for the Calculation of Medical Segmentation Scores. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:3463-3467. [PMID: 31946624] [DOI: 10.1109/embc.2019.8856481]
Abstract
Image segmentation plays a major role in medical imaging. Especially in radiology, the detection of tumors and other diseases, and the monitoring of their development, can be supported by image segmentation applications. Tools that provide image segmentation and the calculation of segmentation scores are not available at all times on every device, due to their size and the scope of functionalities they offer. These tools require large periodic updates and do not work properly on older or low-powered systems. However, medical use cases often require fast and accurate results; complex and slow software can lead to additional stress and thus unnecessary errors. The aim of this contribution is the development of a cross-platform tool for medical segmentation use cases. The goal is a device-independent, always-available option for medical imaging, including manual segmentation and metric calculation. The result is Studierfenster (studierfenster.at), a web tool for manual segmentation and segmentation metric calculation. In this contribution, the focus lies on the segmentation metric calculation part of the tool. It provides functionalities for calculating directed and undirected Hausdorff Distance (HD) and Dice Similarity Coefficient (DSC) scores for two uploaded volumes, filtering for specific values, searching for specific values in the calculated metrics, and exporting filtered metric lists in different file formats.
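The directed/undirected distinction mentioned above is easy to state: the directed Hausdorff distance from A to B is the largest distance from any point of A to its nearest point in B, and the undirected distance is the maximum of the two directions. A sketch using SciPy follows (point sets as coordinate arrays; not the tool's server-side code):

```python
# Directed and undirected Hausdorff distances between two point sets.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

a = np.random.rand(100, 3)   # e.g. surface voxel coordinates of volume A
b = np.random.rand(120, 3)   # e.g. surface voxel coordinates of volume B

d_ab = directed_hausdorff(a, b)[0]   # directed: max over a of min over b
d_ba = directed_hausdorff(b, a)[0]
hd = max(d_ab, d_ba)                 # undirected (symmetric) Hausdorff
print(d_ab, d_ba, hd)
```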
Collapse
|
19
|
Chen L, Zhang F, Zhan W, Gan M, Sun L. Optimization of virtual and real registration technology based on augmented reality in a surgical navigation system. Biomed Eng Online 2020; 19:1. [PMID: 31915014 PMCID: PMC6950982 DOI: 10.1186/s12938-019-0745-z] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 12/30/2019] [Indexed: 12/19/2022] Open
Abstract
Background The traditional navigation interface was intended only for two-dimensional observation by doctors and therefore does not display the full spatial information of the lesion area. Surgical navigation systems have become essential tools that enable doctors to perform complex operations accurately and safely. However, the image navigation interface is separated from the operating area, so the doctor must switch the field of vision between the screen and the patient’s lesion area. In this paper, augmented reality (AR) technology was applied to spinal surgery to provide more intuitive information to surgeons. The accuracy of virtual-to-real registration was improved through research on AR technology, so that during the operation the doctor could observe the AR image and the true shape of the internal spine through the skin. Methods To improve the accuracy of virtual-to-real registration, a registration technique based on an improved identification method and a robot-assisted method was proposed. The experimental procedure was optimized using the improved identification method, and X-ray images were used to verify the effectiveness of the punctures performed by the robot. Results The final experimental results show that the average accuracy of virtual-to-real registration based on the general identification method was 9.73 ± 0.46 mm (range 8.90–10.23 mm), while the improved identification method achieved 3.54 ± 0.13 mm (range 3.36–3.73 mm), an improvement of approximately 65%. The best accuracy of virtual-to-real registration based on the robot-assisted method was 2.39 mm, a further improvement of approximately 28.5% over the improved identification method. Conclusion The experimental results show that the two optimized methods are highly effective. The proposed AR navigation system has high accuracy and stability and may prove valuable in future spinal surgeries.
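For orientation, here is a minimal sketch of a point-based rigid registration step of the kind such navigation systems use to align a virtual model to measured fiducials, together with the resulting mean error in millimetres; the Kabsch/SVD approach and all names are illustrative assumptions, not the authors' specific identification method.

import numpy as np

def rigid_register(src, dst):
    # least-squares rotation R and translation t with dst ~= R @ src + t (Kabsch)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def mean_registration_error(src, dst, R, t):
    # mean Euclidean distance between mapped and measured points (e.g., in mm)
    return np.linalg.norm((R @ src.T).T + t - dst, axis=1).mean()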
Collapse
Affiliation(s)
- Long Chen
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China
| | - Fengfeng Zhang
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
| | - Wei Zhan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Minfeng Gan
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China
| | - Lining Sun
- School of Mechanical and Electrical Engineering, Soochow University, Suzhou, 215006, China; Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou, 215123, China
| |
Collapse
|
20
|
Gsaxner C, Wallner J, Chen X, Zemann W, Egger J. Facial model collection for medical augmented reality in oncologic cranio-maxillofacial surgery. Sci Data 2019; 6:310. [PMID: 31819060 PMCID: PMC6901520 DOI: 10.1038/s41597-019-0327-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Accepted: 11/21/2019] [Indexed: 01/25/2023] Open
Abstract
Medical augmented reality (AR) is an increasingly important topic in many medical fields. AR enables a form of X-ray vision, allowing users to see through real-world objects; in medicine, this offers pre-, intra- or post-interventional visualization of "hidden" structures. In contrast to a classical monitor view, AR applications provide visualization not only on but also in relation to the patient. However, the research and development of medical AR applications is challenging because of unique patient-specific anatomies and pathologies, and working with patients during development for weeks or even months is not feasible. One alternative, commercial patient phantoms, is very expensive. Hence, this data set provides a unique collection of PET-CT scans of head and neck cancer patients with corresponding 3D models, provided as stereolithography (STL) files. The 3D models are optimized for effective 3D printing at low cost. These data can be used in the development and evaluation of AR applications for head and neck surgery.
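As a usage hint, here is a minimal sketch of loading one of the STL models and checking its physical extent before printing; the numpy-stl package and the file name are assumptions for illustration.

import numpy as np
from stl import mesh  # numpy-stl package

model = mesh.Mesh.from_file("head_model.stl")      # illustrative file name
points = model.vectors.reshape(-1, 3)              # all triangle vertices
extent = points.max(axis=0) - points.min(axis=0)
print("bounding box (model units, typically mm):", extent)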
Collapse
Affiliation(s)
- Christina Gsaxner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010, Graz, Austria
| | - Jürgen Wallner
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria.
- Computer Algorithms for Medicine Laboratory, Graz, Austria.
| | - Xiaojun Chen
- Shanghai Jiao Tong University, School of Mechanical Engineering, 800 Dong Chuan Road, Shanghai, 200240, China
| | - Wolfgang Zemann
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
| | - Jan Egger
- Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 6/1, 8036, Graz, Austria
- Computer Algorithms for Medicine Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, 8010, Graz, Austria
- Shanghai Jiao Tong University, School of Mechanical Engineering, 800 Dong Chuan Road, Shanghai, 200240, China
| |
Collapse
|
21
|
Wallner J, Schwaiger M, Hochegger K, Gsaxner C, Zemann W, Egger J. A review on multiplatform evaluations of semi-automatic open-source based image segmentation for cranio-maxillofacial surgery. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 182:105102. [PMID: 31610359 DOI: 10.1016/j.cmpb.2019.105102] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Revised: 09/09/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVES Computer-assisted technologies, such as image-based segmentation, play an important role in diagnosis and treatment support in cranio-maxillofacial surgery. Although many segmentation software packages exist, their in-house clinical use is often hampered by constrained technical, human or financial resources; in particular, technological solutions and systematic evaluations of open-source segmentation approaches are lacking. The aim of this contribution is to assess and review the segmentation quality and the potential clinical use of several commonly available, license-free segmentation methods on different medical platforms. METHODS The quality and accuracy of open-source segmentation methods were assessed on different platforms using patient-specific clinical CT data and reviewed against the literature. The image-based segmentation algorithms GrowCut, Robust Statistics Segmenter, Region Growing 3D, Otsu & Picking, Canny Segmentation and Geodesic Segmenter were investigated on the mandible on the platforms 3D Slicer, MITK and MeVisLab. Comparisons were made between the segmentation algorithms and ground-truth segmentations of the same anatomy performed by two clinical experts (n = 20). Assessment parameters were the Dice Similarity Coefficient (DSC), the Hausdorff Distance (HD) and Pearson's correlation coefficient (r). RESULTS Segmentation accuracy was highest with the GrowCut (DSC 85.6%, HD 33.5 voxel) and Canny (DSC 82.1%, HD 8.5 voxel) algorithms. Differences between the assessment parameters were not statistically significant at the 0.05 level, and correlation coefficients were close to one (r > 0.94) for every comparison made between the segmentation methods and the ground-truth schemes. The segmentations were functionally stable and time-saving. CONCLUSION High-quality, image-based, semi-automatic segmentation was provided by the GrowCut and Canny segmentation methods. In the cranio-maxillofacial complex, these segmentation methods provide algorithmic alternatives for image-based segmentation in clinical practice, e.g., for surgical planning or the visualization of treatment results, and offer advantages through their open-source availability. This is the first systematic multi-platform comparison that evaluates multiple license-free, open-source segmentation methods on clinical data with a view to improving algorithms and enabling potential clinical use in patient-individualized medicine. The results presented are reproducible by others and can be used for clinical and research purposes.
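As an illustration of such an evaluation protocol, here is a minimal sketch that scores one method's masks against both expert ground truths and correlates the per-case results; the binary NumPy masks and all names are assumptions, and this is a sketch of the general idea, not the study's actual pipeline.

import numpy as np
from scipy.stats import pearsonr

def dice_score(a, b):
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def evaluate(method_masks, expert1_masks, expert2_masks):
    # per-case DSC against each expert reference
    s1 = [dice_score(m, g) for m, g in zip(method_masks, expert1_masks)]
    s2 = [dice_score(m, g) for m, g in zip(method_masks, expert2_masks)]
    r, p = pearsonr(s1, s2)   # agreement of scores across the two references
    return np.mean(s1), np.mean(s2), r, p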
Collapse
Affiliation(s)
- Jürgen Wallner
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria.
| | - Michael Schwaiger
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria
| | - Kerstin Hochegger
- Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria
| | - Christina Gsaxner
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria
| | - Wolfgang Zemann
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria
| | - Jan Egger
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria; Shanghai Jiao Tong University, School of Mechanical Engineering, Dong Chuan Road 800, Shanghai 200240, China
| |
Collapse
|
22
|
Egger J, Pfarrkirchner B, Gsaxner C, Lindner L, Schmalstieg D, Wallner J. Fully Convolutional Mandible Segmentation on a valid Ground-Truth Dataset. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:656-660. [PMID: 30440482 DOI: 10.1109/embc.2018.8512458] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
This contribution presents the automatic segmentation of the lower jawbone (mandible) in human computed tomography (CT) images with the support of trained deep learning networks. CT acquisitions of the mandible frequently include radiological artifacts, e.g., from metal dental restorations or osteosynthesis materials, or contain trauma-related free bone fragments with missing bone contour anatomy. As a result, manually outlining these slices to generate the ground truth for evaluating segmentation algorithms leads to massive uncertainty and significant inter-physician disagreement, while simply excluding these slices is not an option either, considering the treatment outcome. Hence, we defined strict inclusion and exclusion criteria for our datasets to avoid subjectivity and bias in the ground-truth creation; among other criteria, each dataset must display a complete physiological mandible without teeth. Images meeting these selection criteria are difficult to find, since they originate from the clinical routine and therefore require a medical indication (such as trauma or pathologic lesions) to be acquired as CT data. Furthermore, to prove the adequateness of our ground truth, clinical experts segmented all cases twice manually, showing great qualitative and quantitative agreement between the two passes. Our dataset collection and the corresponding ground truth are an absolute novelty and enable the first serious evaluation of segmentation algorithms for the mandible.
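As a sketch of the kind of fully convolutional network such a ground truth can be used to train and evaluate, here is a minimal 2D encoder-decoder for CT slices; the architecture, layer sizes and Keras API are illustrative assumptions, not the network used in the paper.

from tensorflow.keras import layers, models

def build_fcn(height=256, width=256):
    inp = layers.Input(shape=(height, width, 1))               # one CT slice
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)         # mandible mask
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model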
Collapse
|
23
|
Virtual Scene Construction for Seismic Damage of Building Ceilings and Furniture. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9173465] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
A valid seismic damage scene for indoor nonstructural components is critical for virtual earthquake safety drills, which can teach occupants how to survive earthquakes. A virtual scene construction method for the seismic damage of suspended ceilings and movable furniture is proposed based on FEMA P-58 and a physics engine. First, a modeling framework is designed based on building information modeling (BIM) to create consistent structural and scene models for the subsequent structural time-history analysis (THA) and scene construction. Subsequently, FEMA P-58 is employed to determine the damage states of the nonstructural components from the results of the THA. Finally, the physical models of the movements of the damaged components are designed using a physics engine and validated against experiments, such as an existing shaking table test. Taking a six-story building as a case study, a virtual earthquake scene of the indoor nonstructural components is constructed and applied in an earthquake safety drill. The outcome of this study provides well-founded scenes of seismic damage to indoor nonstructural components for performing virtual earthquake safety drills.
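FEMA P-58 assigns damage states through lognormal fragility functions. As a hedged illustration, the probability that a component reaches a damage state given an engineering demand parameter can be sketched as follows; the median theta and dispersion beta are invented example values, not values from the paper.

from math import log
import numpy as np
from scipy.stats import norm

def p_damage(edp, theta, beta):
    # P(damage state reached | demand edp) for a lognormal fragility curve
    return norm.cdf(log(edp / theta) / beta)

rng = np.random.default_rng(0)
drift = 0.012                                # peak story drift from a THA (example)
p = p_damage(drift, theta=0.01, beta=0.5)    # invented fragility parameters
damaged = rng.random() < p                   # Monte-Carlo draw of the damage state
print(f"P(damaged) = {p:.2f}, sampled state: {damaged}")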
Collapse
|
24
|
Computed tomography data collection of the complete human mandible and valid clinical ground truth models. Sci Data 2019; 6:190003. [PMID: 30694227 PMCID: PMC6350631 DOI: 10.1038/sdata.2019.3] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Accepted: 12/14/2018] [Indexed: 11/08/2022] Open
Abstract
Image-based algorithmic software segmentation is an increasingly important topic in many medical fields. Algorithmic segmentation is used for medical three-dimensional visualization, diagnosis and treatment support, especially in complex medical cases. However, accessible medical databases are limited, and valid medical ground-truth databases for the evaluation of algorithms are rare and usually comprise only a few images. Inaccurate or invalid medical ground-truth data and image-based artefacts further limit the creation of such databases, which is especially relevant for CT data sets of the maxillomandibular complex. This contribution provides a unique and accessible data set of the complete mandible, comprising 20 valid ground-truth segmentation models originating from 10 CT scans from clinical practice without artefacts or faulty slices. From each CT scan, two 3D ground-truth models were created by clinical experts through independent manual slice-by-slice segmentation, and the models were statistically compared to prove their validity. These data can be used to conduct serial image studies of the human mandible, to evaluate segmentation algorithms and to develop adequate image tools.
Collapse
|
25
|
Towards Virtual VATS, Face, and Construct Evaluation for Peg Transfer Training of Box, VR, AR, and MR Trainer. JOURNAL OF HEALTHCARE ENGINEERING 2019; 2019:6813719. [PMID: 30723539 PMCID: PMC6339710 DOI: 10.1155/2019/6813719] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 10/31/2018] [Accepted: 11/29/2018] [Indexed: 11/17/2022]
Abstract
The aim of this study is to develop a peg-transfer training module and to assess its face, content and construct validity on box, virtual reality (VR), cognitive virtual reality (CVR), augmented reality (AR) and mixed reality (MR) trainers, and thereby to compare the advantages and disadvantages of these simulators. The training system (VatsSim-XR) comprises customized haptic-enabled thoracoscopic instruments, a virtual reality headset, an endoscope kit with navigation, and the corresponding patient-specific training environment. A cohort of 32 trainees, comprising 24 novices and 8 experts, used the real and virtual simulators in the department of thoracic surgery of Yunnan First People's Hospital. Both subjective and objective evaluations were developed to explore the potential of visual and haptic improvements in peg-transfer education. Experiments and evaluations involving both expert and novice thoracic surgeons show that, overall, the experts' surgical skills exceeded those of the novices; the AR trainer provided the most balanced training environment in terms of visuo-haptic fidelity and accuracy; the box and MR trainers demonstrated the best 3D realism and immersive surgical performance, respectively; and the CVR trainer showed a better clinical effect than the traditional VR trainer. Combining these in a systematic approach, tuned to specific fidelity requirements, medical simulation systems would be able to provide a more immersive and effective training environment.
Collapse
|
26
|
Shattuck DW. Multiuser virtual reality environment for visualising neuroimaging data. Healthc Technol Lett 2018; 5:183-188. [PMID: 30464851 PMCID: PMC6222246 DOI: 10.1049/htl.2018.5077] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Accepted: 08/20/2018] [Indexed: 11/19/2022] Open
Abstract
The recent advent of high-performance consumer virtual reality (VR) systems has opened new possibilities for immersive visualisation of numerous types of data. Medical imaging has long made use of advanced visualisation techniques, and VR offers exciting new opportunities for data exploration. The author presents a new framework for interacting with neuroimaging data, including MRI volumes, neuroanatomical surface models, diffusion tensors, and streamline tractography, as well as text-based annotations. The system was developed for the HTC Vive using C++, OpenGL, and the OpenVR software development kit. The author developed custom GLSL shaders for each type of data to provide high-performance real-time rendering suitable for use in a VR environment. These are integrated with an interface that enables the user to manipulate the scene through the Vive controllers and perform operations such as volume slicing, fibre track selection, and structural queries. The software can read data generated by existing automated brain MRI analysis packages, enabling the rapid development of subject-specific visualisations of multimodal data or annotated atlases. The system can also support multiple simultaneous users, placing them in the same virtual space to interact with each other while visualising the same datasets, opening new possibilities for teaching and for collaborative exploration of neuroimaging data.
Collapse
Affiliation(s)
- David W. Shattuck
- Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, David Geffen School of Medicine at UCLA, Los Angeles, California, USA
| |
Collapse
|
27
|
Wheeler G, Deng S, Toussaint N, Pushparajah K, Schnabel JA, Simpson JM, Gomez A. Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Healthc Technol Lett 2018; 5:148-153. [PMID: 30800321 PMCID: PMC6372083 DOI: 10.1049/htl.2018.5064] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Accepted: 08/20/2018] [Indexed: 11/22/2022] Open
Abstract
The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
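For context, the VTK half of such a pipeline can be sketched through VTK's Python bindings (the paper's plugin drives the equivalent classes from a C++ native plugin inside Unity); the file name and transfer-function values are illustrative assumptions.

import vtk

reader = vtk.vtkMetaImageReader()
reader.SetFileName("ct_thorax.mha")                 # illustrative volume file

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

opacity = vtk.vtkPiecewiseFunction()                # scalar value -> opacity
opacity.AddPoint(-1000.0, 0.0)
opacity.AddPoint(400.0, 0.8)
color = vtk.vtkColorTransferFunction()              # scalar value -> grey level
color.AddRGBPoint(-1000.0, 0.0, 0.0, 0.0)
color.AddRGBPoint(400.0, 1.0, 1.0, 1.0)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()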
Collapse
Affiliation(s)
- Gavin Wheeler
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
| | - Shujie Deng
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
| | - Nicolas Toussaint
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
| | - Kuberan Pushparajah
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
- Department of Congenital Heart Disease, Evelina London Children's Hospital, London, UK
| | - Julia A. Schnabel
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
| | - John M. Simpson
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
- Department of Congenital Heart Disease, Evelina London Children's Hospital, London, UK
| | - Alberto Gomez
- School of Imaging Sciences & Biomedical Engineering, King's College London, London, UK
| |
Collapse
|
28
|
Wallner J, Hochegger K, Chen X, Mischak I, Reinbacher K, Pau M, Zrnc T, Schwenzer-Zimmerer K, Zemann W, Schmalstieg D, Egger J. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action. PLoS One 2018; 13:e0196378. [PMID: 29746490 PMCID: PMC5944980 DOI: 10.1371/journal.pone.0196378] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2016] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
INTRODUCTION Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, limited personnel resources or license-based financial costs, many segmentation processes are often outsourced from clinical centers to third parties and industry. The aim of this trial was therefore to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice. MATERIAL AND METHODS In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source segmentation algorithm GrowCut were assessed through comparison with the manually generated ground truth of the same anatomy, using 10 lower-jaw CT data sets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance. RESULTS Overall, semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values above 85% and Hausdorff distances below 33.5 voxels were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground-truth schemes. Differences between the assessment parameters were not statistically significant at the 0.05 level, and correlation coefficients were close to one (r > 0.94) for every comparison made between the two groups. DISCUSSION Functionally stable and time-saving segmentations with high accuracy and strong positive correlation could be performed with the presented interactive open-source approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g., for surgical treatment planning or the visualization of postoperative results, and it offers several advantages. Because of its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons with other segmentation approaches and on larger data sets are subjects of future work.
Collapse
Affiliation(s)
- Jürgen Wallner
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
| | - Kerstin Hochegger
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
| | - Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Irene Mischak
- Department of Dental Medicine and Oral Health, Medical University of Graz, Billrothgasse 4, Graz, Austria
| | - Knut Reinbacher
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
| | - Mauro Pau
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
| | - Tomislav Zrnc
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
| | - Katja Schwenzer-Zimmerer
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
| | - Wolfgang Zemann
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
| | - Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
| | - Jan Egger
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
- BioTechMed-Graz, Krenngasse 37/1, Graz, Austria
| |
Collapse
|
29
|
IMHOTEP: virtual reality framework for surgical applications. Int J Comput Assist Radiol Surg 2018; 13:741-748. [PMID: 29551011 DOI: 10.1007/s11548-018-1730-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Accepted: 03/06/2018] [Indexed: 01/01/2023]
Abstract
PURPOSE The data available to surgeons before, during and after surgery is steadily increasing in quantity as well as diversity. When planning a patient's treatment, this large amount of information can be difficult to interpret. To aid in processing it, new methods are needed to present multimodal patient data, ideally combining text, images, temporal and 3D data in a holistic, context-aware system. METHODS We present an open-source framework for handling patient data in a virtual reality (VR) environment. By using VR technology, the workspace available to the surgeon is maximized, and 3D patient data is rendered in stereo, which increases depth perception. The framework organizes the data into workspaces and contains tools that allow users to control, manipulate and enhance the data. Due to its modular design, the framework can easily be adapted and extended for various clinical applications. RESULTS The framework was evaluated by clinical personnel (77 participants). The majority of the group stated that a complex surgical situation is easier to comprehend using the framework and that it is very well suited for education. Furthermore, its application to various clinical scenarios, including the simulation of excitation propagation in the human atrium, demonstrated the framework's adaptability. As a feasibility study, the framework was used during the planning phase of the surgical removal of a large central carcinoma from a patient's liver. CONCLUSION The clinical evaluation showed large potential and high acceptance for the VR environment in a medical context. The various applications confirmed that the framework is easily extended and can be used for real-time simulation as well as for the manipulation of complex anatomical structures.
Collapse
|
30
|
Steiniger BS, Ulrich C, Berthold M, Guthe M, Lobachev O. Capillary networks and follicular marginal zones in human spleens. Three-dimensional models based on immunostained serial sections. PLoS One 2018; 13:e0191019. [PMID: 29420557 PMCID: PMC5805169 DOI: 10.1371/journal.pone.0191019] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2017] [Accepted: 12/27/2017] [Indexed: 12/21/2022] Open
Abstract
We have reconstructed small parts of capillary networks in the human splenic white pulp using serial sections immunostained for CD34 alone or for CD34 and CD271. The three-dimensional (3D) models show three types of interconnected networks: a network with very few long capillaries inside the white pulp originating from central arteries, a denser network surrounding follicles plus periarterial T-cell regions and a network in the red pulp. Capillaries of the perifollicular network and the red pulp network have open ends. Perifollicular capillaries form an arrangement similar to a basketball net located in the outer marginal zone. The marginal zone is defined by MAdCAM-1+ marginal reticular stromal cells. Perifollicular capillaries are connected to red pulp capillaries surrounded by CD271+ stromal capillary sheath cells. The scarcity of capillaries inside the splenic white pulp is astonishing, as non-polarised germinal centres with proliferating B-cells occur in adult human spleens. We suggest that specialized stromal marginal reticular cells form a barrier inside the splenic marginal zone, which together with the scarcity of capillaries guarantees the maintenance of gradients necessary for positioning of migratory B- and T-lymphocytes in the human splenic white pulp.
Collapse
Affiliation(s)
- Birte S. Steiniger
- Institute of Anatomy and Cell Biology, University of Marburg, Marburg, Germany
| | - Christine Ulrich
- Institute of Psychology, University of Marburg, Marburg, Germany
| | - Moritz Berthold
- Institute of Computer Sciences, University of Bayreuth, Bayreuth, Germany
| | - Michael Guthe
- Institute of Computer Sciences, University of Bayreuth, Bayreuth, Germany
| | - Oleg Lobachev
- Institute of Computer Sciences, University of Bayreuth, Bayreuth, Germany
| |
Collapse
|
31
|
Hann A, Bettac L, Haenle MM, Graeter T, Berger AW, Dreyhaupt J, Schmalstieg D, Zoller WG, Egger J. Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound. Sci Rep 2017; 7:12779. [PMID: 28986569 PMCID: PMC5630585 DOI: 10.1038/s41598-017-12925-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2017] [Accepted: 09/20/2017] [Indexed: 12/19/2022] Open
Abstract
Manual segmentation of hepatic metastases in ultrasound images acquired from patients suffering from pancreatic cancer is common practice. Semi-automatic measurements promising assistance in this process are often assessed on only a small number of lesions by examiners who already know the algorithm. In this work, we present the application of an algorithm for the segmentation of liver metastases due to pancreatic cancer on a set of 105 different images of metastases. Neither the algorithm nor the two examiners had assessed these images before. The examiners first performed a manual segmentation and, after five weeks, a semi-automatic segmentation using the algorithm. They were satisfied with the semi-automatic segmentation results in up to 90% of the cases. Using the algorithm was significantly faster and resulted in a median Dice similarity score of over 80%. Inter-operator variability, estimated with the intra-class correlation coefficient, was good at 0.8. In conclusion, the algorithm facilitates fast and accurate segmentation of liver metastases, comparable to the current gold standard of manual segmentation.
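A hedged sketch of the inter-operator agreement check, assuming per-lesion measurements from two examiners in long format; the pingouin package and all column names and values are illustrative assumptions, not the study's data.

import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "lesion":   [1, 1, 2, 2, 3, 3],
    "examiner": ["A", "B"] * 3,
    "volume":   [4.2, 4.0, 7.8, 8.1, 2.9, 3.2],   # invented example values
})
icc = pg.intraclass_corr(data=df, targets="lesion",
                         raters="examiner", ratings="volume")
print(icc[["Type", "ICC"]])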
Collapse
Affiliation(s)
- Alexander Hann
- Department of Internal Medicine I, Ulm University, Ulm, Germany; Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
| | - Lucas Bettac
- Department of Internal Medicine I, Ulm University, Ulm, Germany
| | - Mark M Haenle
- Department of Internal Medicine I, Ulm University, Ulm, Germany
| | - Tilmann Graeter
- Department of Diagnostic and Interventional Radiology, Ulm University, Ulm, Germany
| | | | - Jens Dreyhaupt
- Institute of Epidemiology & Medical Biometry, Ulm University, Ulm, Germany
| | - Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
| | - Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
| | - Jan Egger
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria; BioTechMed, Krenngasse 37/1, 8010, Graz, Austria
| |
Collapse
|
32
|
Egger J, Wallner J, Gall M, Chen X, Schwenzer-Zimmerer K, Reinbacher K, Schmalstieg D. Computer-aided position planning of miniplates to treat facial bone defects. PLoS One 2017; 12:e0182839. [PMID: 28817607 PMCID: PMC5560576 DOI: 10.1371/journal.pone.0182839] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Accepted: 07/25/2017] [Indexed: 11/18/2022] Open
Abstract
In this contribution, a software system for the computer-aided position planning of miniplates to treat facial bone defects is proposed. The bone plates used intra-operatively have to be passively adapted to the underlying bone contours for adequate bone fragment stabilization; however, this procedure can lead to frequent intra-operative material readjustments, especially in complex surgical cases. Our approach is able to fit a selection of common implant models at the surgeon's desired position in a 3D computer model, with respect to the surrounding anatomical structures and always with the possibility of adjusting both the direction and the position of the osteosynthesis material. Using the proposed software, surgeons are able to pre-plan the form and morphology of the resulting implant with the aid of a computer-visualized model within a few minutes. Furthermore, the resulting model can be stored in the STL file format, the format commonly used for 3D printing. With this technology, surgeons are able to print the virtually generated implant or create an individually designed bending tool. This method yields osteosynthesis material adapted to the surrounding anatomy and requires a minimum amount of money and time.
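A minimal sketch of the STL export step, assuming the numpy-stl package; the single triangle is a placeholder for a real planned implant surface, and the file name is illustrative.

import numpy as np
from stl import mesh  # numpy-stl package

triangles = np.array([[[0.0, 0.0, 0.0],
                       [10.0, 0.0, 0.0],
                       [0.0, 10.0, 0.0]]], dtype=np.float32)  # placeholder geometry
implant = mesh.Mesh(np.zeros(len(triangles), dtype=mesh.Mesh.dtype))
implant.vectors[:] = triangles
implant.save("planned_miniplate.stl")   # illustrative output file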
Collapse
Affiliation(s)
- Jan Egger
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
- BioTechMed-Graz, Graz, Austria
| | - Jürgen Wallner
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Styria, Austria
| | - Markus Gall
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
| | - Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | | | - Knut Reinbacher
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Graz, Styria, Austria
| | - Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Faculty of Computer Science and Biomedical Engineering, Graz University of Technology, Graz, Austria
| |
Collapse
|