1. Villa M, Sancho J, Rosa G, Chavarrias M, Juarez E, Sanz C. HyperMRI: hyperspectral and magnetic resonance fusion methodology for neurosurgery applications. Int J Comput Assist Radiol Surg 2024; 19:1367-1374. PMID: 38761318; PMCID: PMC11230967; DOI: 10.1007/s11548-024-03102-5
Abstract
PURPOSE Magnetic resonance imaging (MRI) is a common technique in image-guided neurosurgery (IGN). Recent research explores the integration of other modalities, such as ultrasound and tomography, with hyperspectral (HS) imaging gaining attention for its non-invasive, real-time tissue classification capabilities. The main challenge is the registration process, which often requires manual intervention. This work introduces an automatic, markerless method for aligning HS images with MRI. METHODS This work presents a multimodal system that combines RGB-Depth (RGBD) and HS cameras. The RGBD camera captures the patient's facial geometry, which is registered to the preoperative MR using the iterative closest point (ICP) algorithm. Once MR-depth registration is complete, the HS data are integrated using a calibrated homography transformation. External tracking, together with a novel calibration method, allows the cameras to move from the registration position to the craniotomy area. This methodology streamlines the fusion of RGBD, HS and MR images within the craniotomy area. RESULTS Using the described system and an anthropomorphic phantom head, the system was characterised by registering the patient's face from 25 positions, yielding a fiducial registration error of 1.88 ± 0.19 mm, and from 5 positions, yielding a target registration error of 4.07 ± 1.28 mm. CONCLUSIONS This work proposes a new methodology to register MR and HS information automatically with sufficient accuracy. It can support neurosurgeons in guiding diagnosis using multimodal data over an augmented reality representation. Although still a preliminary prototype, the system shows significant promise, driven by its cost-effectiveness and user-friendly design.
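The ICP alignment and the fiducial registration error reported above rest on a standard least-squares rigid fit. As an illustrative sketch only (NumPy, not the authors' implementation), the core step of one ICP iteration, a Kabsch/SVD rigid fit between corresponded point sets, and the FRE it yields can be written as:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto
    dst points, assuming known correspondences (the core step of one
    ICP iteration)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Fiducial registration error: RMS distance between the mapped
    source fiducials and their destination counterparts."""
    resid = (R @ src.T).T + t - dst
    return float(np.sqrt((resid ** 2).sum(1).mean()))
```

In a full ICP loop this fit alternates with a nearest-neighbour correspondence search; here the correspondences are taken as given.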
Affiliation(s)
- Manuel Villa
  - CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Jaime Sancho
  - CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Gonzalo Rosa
  - CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Eduardo Juarez
  - CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
- Cesar Sanz
  - CITSEM, Universidad Politécnica de Madrid, 28031, Madrid, Spain
2. Wang H, Ni D, Wang Y. Recursive Deformable Pyramid Network for Unsupervised Medical Image Registration. IEEE Trans Med Imaging 2024; 43:2229-2240. PMID: 38319758; DOI: 10.1109/tmi.2024.3362968
Abstract
Complicated deformation problems are frequently encountered in medical image registration tasks. Although various advanced registration models have been proposed, accurate and efficient deformable registration remains challenging, especially for large volumetric deformations. To this end, we propose a novel recursive deformable pyramid (RDP) network for unsupervised non-rigid registration. Our network is a pure convolutional pyramid that fully exploits the advantages of the pyramid structure itself without relying on any heavyweight attention mechanisms or transformers. In particular, it leverages a step-by-step recursion strategy, integrating high-level semantics to predict the deformation field from coarse to fine while ensuring the plausibility of the deformation field. Thanks to the recursive pyramid strategy, the network achieves deformable registration without separate affine pre-alignment. We compare the RDP network with several existing registration methods on three public brain magnetic resonance imaging (MRI) datasets: LPBA, Mindboggle and IXI. Experimental results demonstrate that our network consistently outperforms the state of the art on the Dice score, average symmetric surface distance, Hausdorff distance and Jacobian metrics. Even without affine pre-alignment, the network maintains satisfactory performance in compensating for large deformations. The code is publicly available at https://github.com/ZAX130/RDP.
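The coarse-to-fine prediction described above ultimately reduces to composing displacement fields across pyramid levels. The sketch below (NumPy/SciPy, not the RDP network itself; the function names are mine, not the paper's) shows the two primitives such a recursion needs: upsampling a coarse field to a finer grid and composing it with the next level's field.

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def upsample_field(u, factor):
    """Upsample a 2D displacement field of shape (2, H, W) to a finer grid,
    scaling displacement magnitudes into the finer grid's pixel units."""
    return np.stack([zoom(c, factor, order=1) for c in u]) * factor

def compose(u_coarse, u_fine):
    """Compose two same-resolution displacement fields:
    total(x) = u_coarse(x) + u_fine(x + u_coarse(x)),
    i.e. the fine field is sampled at the coarse-warped positions."""
    H, W = u_coarse.shape[1:]
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    warped = np.stack([
        map_coordinates(c, [ys + u_coarse[0], xs + u_coarse[1]],
                        order=1, mode='nearest')
        for c in u_fine
    ])
    return u_coarse + warped
```

A recursive pyramid applies `upsample_field` to the previous level's prediction and `compose` with the residual field predicted at the current level, from the coarsest level down to full resolution.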
3. Lecompte L, Crouzier M, Bogaerts S, Scheys L, Vanwanseele B. Reduced Intratendinous Sliding in Achilles Tendinopathy During Active Plantarflexion Regardless of Horizontal Foot Position. Scand J Med Sci Sports 2024; 34:e14679. PMID: 38898554; DOI: 10.1111/sms.14679
Abstract
PURPOSE The Achilles tendon consists of three subtendons that can slide relative to each other. As optimal intratendinous sliding is thought to reduce the overall stress in the tendon, alterations in sliding behavior could play a role in the development of Achilles tendinopathy. The aims of this study were to investigate the difference in intratendinous sliding within the Achilles tendon during isometric contractions between asymptomatic controls and patients with Achilles tendinopathy, and the effect of changing the horizontal foot position on intratendinous sliding in both groups. METHODS Twenty-nine participants (13 with Achilles tendinopathy and 16 controls) performed isometric plantarflexion contractions at 60% of their maximal voluntary contraction (MVC) in toes-neutral, and at 30% MVC in toes-neutral, toes-in and toes-out positions, during which ultrasound images were recorded. Intratendinous sliding was estimated as the superficial-to-middle and middle-to-deep relative displacement. RESULTS Patients with Achilles tendinopathy presented lower intratendinous sliding than asymptomatic controls. Regarding horizontal foot position, in both groups the toes-out position resulted in increased sliding compared with both the toes-neutral and toes-in positions. CONCLUSION We provided evidence that patients with Achilles tendinopathy show lower intratendinous sliding than asymptomatic controls. Since intratendinous sliding is a physiological feature of the Achilles tendon, the toes-out (externally rotated) foot position holds promise to increase sliding in patients with Achilles tendinopathy and promote healthy tendon behavior. Future research should investigate whether implementing this foot position in rehabilitation programs stimulates sliding within the Achilles tendon and improves clinical outcomes.
Affiliation(s)
- Lecompte Laura
  - Human Movement Biomechanics Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
- Crouzier Marion
  - Human Movement Biomechanics Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
  - Nantes Université, Mouvement - Interactions - Performance (MIP), Nantes, France
- Bogaerts Stijn
  - Physical and Rehabilitation Medicine Department, University Hospitals Leuven, Leuven, Belgium
  - Department of Development and Regeneration, KU Leuven, Leuven, Belgium
- Scheys Lennart
  - Department of Development and Regeneration, Institute for Orthopaedic Research and Training (IORT), KU Leuven, Leuven, Belgium
  - Orthopedics Division, University Hospitals Leuven, Leuven, Belgium
- Vanwanseele Benedicte
  - Human Movement Biomechanics Research Group, Department of Movement Sciences, KU Leuven, Leuven, Belgium
4. Perin P, Cossellu D, Vivado E, Batti L, Gantar I, Voigt FF, Pizzala R. Temporal bone marrow of the rat and its connections to the inner ear. Front Neurol 2024; 15:1386654. PMID: 38817550; PMCID: PMC11137668; DOI: 10.3389/fneur.2024.1386654
Abstract
Calvarial bone marrow has been found to be central to the brain immune response, being connected to the dura through channels that allow leukocyte trafficking. Temporal bone marrow is thought to play important roles in relation to the inner ear, but it is still largely uncharacterized, given this bone's complex anatomy. We characterized the geometry and connectivity of rat temporal bone marrow using lightsheet imaging of cleared samples and microCT. Bone marrow was identified in cleared tissue by its cellular content (in particular, the presence of megakaryocytes); since air-filled cavities are absent in rodents, marrow clusters could be recognized in microCT scans by their geometry. In cleared petrosal bone, autofluorescence allowed delineation of the otic capsule layers. Within the endochondral layer, bone marrow was observed in association with the cochlear base and vestibule, and with the cochlear apex. Cochlear apex endochondral marrow (CAEM) was a cluster separate from the remaining endochondral marrow, which was therefore defined as "vestibular endochondral marrow" (VEM). A much larger marrow island (petrosal non-endochondral marrow, PNEM) extended outside the otic capsule, surrounding the semicircular canal arms. PNEM was mainly connected to the dura through bone channels similar to those of calvarial bone, with only a few channels directed toward the canal periosteum. In contrast, endochondral bone marrow was well connected to the labyrinth through vascular loops (directed to the spiral ligament for CAEM and to the bony labyrinth periosteum for VEM) and to the dural sinuses. In addition, CAEM was also connected to the tensor tympani fossa of the middle ear, and VEM to the endolymphatic sac. Endochondral marrow was made up of small lobules connected to each other and to other structures by channels lined by elongated macrophages, whereas PNEM displayed larger lobules connected by channels with a sparse macrophage population.
Our data suggest that the rat inner ear is surrounded by bone marrow at its junctions with the middle ear and brain, most likely playing a "customs" role that restricts pathogen spread; a second marrow network with different structural features is found within the endochondral bone layer of the otic capsule and may play different functional roles.
Affiliation(s)
- Paola Perin
  - Department of Brain and Behaviour Sciences, University of Pavia, Pavia, Italy
- Daniele Cossellu
  - Department of Molecular Medicine, University of Pavia, Pavia, Italy
- Elisa Vivado
  - Department of Molecular Medicine, University of Pavia, Pavia, Italy
- Laura Batti
  - Wyss Center for Bio and Neuro Engineering, Geneva, Switzerland
- Ivana Gantar
  - Wyss Center for Bio and Neuro Engineering, Geneva, Switzerland
- Fabian F. Voigt
  - Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, United States
- Roberto Pizzala
  - Department of Molecular Medicine, University of Pavia, Pavia, Italy
5. Guenanten H, Retailleau M, Dorel S, Sarcher A, Colloud F, Nordez A. Muscle-Tendon Unit Length Measurement Using 3D Ultrasound in Passive Conditions: OpenSim Validation and Development of Personalized Models. Ann Biomed Eng 2024; 52:997-1008. PMID: 38286938; DOI: 10.1007/s10439-023-03436-2
Abstract
This study investigated the validity of using OpenSim to measure the muscle-tendon unit (MTU) length of the bi-articular lower limb muscles in several postures (shortened, lengthened, a combination of shortened and lengthened involving both joints, neutral and standing) using 3D freehand ultrasound (US), and proposed new personalized models. MTU length was measured on 14 participants and 6 bi-articular muscles (semimembranosus SM, semitendinosus ST, biceps femoris BF, rectus femoris RF, gastrocnemius medialis GM and gastrocnemius lateralis GL) in 5 to 6 postures. MTU length was computed using OpenSim with three different models: OS (the generic OpenSim scaled model), OS + INSER (OS with personalized 3D US MTU insertions) and OS + INSER + PATH (OS with personalized 3D US MTU insertions and path obtained from one posture). Significant differences in MTU length were found between the OS and 3D US models for RF, GM and GL (from -6.3 to 10.9%). Non-significant effects were reported for the hamstrings, notably for the ST (-1.5%) and BF (-1.9%), while the SM just crossed the alpha level (-3.4%, p = 0.049). The OS + INSER model reduced the magnitude of bias by an average of 4% for RF, GM and GL. The OS + INSER + PATH model showed the smallest biases in length estimates, which were negligible and non-significant for all MTUs (i.e. ≤ 2.2%). A 3D US pipeline was developed and validated to estimate MTU length from a limited number of measurements. This opens new perspectives for personalizing musculoskeletal models using low-cost, user-friendly devices.
Affiliation(s)
- Hugo Guenanten
  - Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, 44000, Nantes, France
  - Institut Pprime, CNRS, Université de Poitiers, ISAE-ENSMA, UPR 3346, 86360, Chasseneuil-du-Poitou, France
- Maëva Retailleau
  - Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, 44000, Nantes, France
  - Arts et Métiers Institute of Technology, Institut de Biomécanique Humaine Georges Charpak, 75013, Paris, France
- Sylvain Dorel
  - Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, 44000, Nantes, France
- Aurélie Sarcher
  - Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, 44000, Nantes, France
- Floren Colloud
  - Arts et Métiers Institute of Technology, Institut de Biomécanique Humaine Georges Charpak, 75013, Paris, France
- Antoine Nordez
  - Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, 44000, Nantes, France
  - Institut Universitaire de France (IUF), Paris, France
  - 23, rue du Recteur Schmitt Bât F0 - BP 92235, 44322, Nantes Cedex 3, France
6. Zhao L, Fong TC, Bell MAL. Detection of COVID-19 features in lung ultrasound images using deep neural networks. Commun Med 2024; 4:41. PMID: 38467808; PMCID: PMC10928066; DOI: 10.1038/s43856-024-00463-5
Abstract
BACKGROUND Deep neural networks (DNNs) for detecting COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo datasets are limited by the manual labeling required for thousands of training examples, and simulated images can generalize poorly to in vivo images due to domain differences. We address these limitations and identify the best training strategy. METHODS We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and combinations of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients. RESULTS Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (i.e., 0.482 ± 0.211), compared with using only simulated B-mode training data (i.e., 0.464 ± 0.230) or only external in vivo B-mode training data (i.e., 0.407 ± 0.177). A further improvement is achieved when a separate subset of the internal in vivo B-mode images is included in the training dataset, with the greatest DSC (and the fewest required training epochs) obtained by mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (i.e., 0.735 ± 0.187). CONCLUSIONS DNNs trained with both simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
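The Dice similarity coefficient used as the evaluation metric above has a simple closed form, 2|A∩B| / (|A| + |B|). A minimal reference implementation for binary masks (the empty-vs-empty convention of returning 1.0 is mine, not the paper's):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|). By convention here, two empty masks score 1.0."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

The score ranges from 0 (no overlap) to 1 (identical masks), which is why the paper's best strategy (0.735) clearly separates from the single-source baselines.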
Affiliation(s)
- Lingyi Zhao
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Tiffany Clair Fong
  - Department of Emergency Medicine, Johns Hopkins Medicine, Baltimore, MD, USA
- Muyinatu A Lediju Bell
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
  - Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
7. Frouin A, Le Sant G, Barbier L, Jacquemin E, McNair PJ, Ellis R, Nordez A, Lacourpaille L. Individual distribution of muscle hypertrophy among hamstring muscle heads: Adding muscle volume where you need is not so simple. Scand J Med Sci Sports 2024; 34:e14608. PMID: 38515303; DOI: 10.1111/sms.14608
Abstract
PURPOSE The aim of this study was to determine whether a 9-week resistance training program based on high load (HL) versus low load combined with blood flow restriction (LL-BFR) induced a similar (i) distribution of muscle hypertrophy among the hamstring heads (semimembranosus, SM; semitendinosus, ST; and biceps femoris long head, BF) and (ii) magnitude of tendon hypertrophy of the ST, using a parallel randomized controlled trial. METHODS A total of 45 participants were randomly allocated to one of three groups: HL, LL-BFR, and control (CON). Both HL and LL-BFR performed a 9-week resistance training program composed of seated leg curl and stiff-leg deadlift exercises. Freehand 3D ultrasound was used to assess changes in muscle and tendon volume. RESULTS The increase in ST volume was greater in HL (26.5 ± 25.5%) compared to CON (p = 0.004). No difference was found between CON and LL-BFR for ST muscle volume (p = 0.627). The change in SM muscle volume was greater for LL-BFR (21.6 ± 27.8%) compared to CON (p = 0.025). No difference was found between HL and CON for SM muscle volume (p = 0.178). BF muscle volume did not change significantly in LL-BFR compared to CON (14.0 ± 16.5%; p = 0.436), and no difference was found between HL and CON for BF muscle volume (p = 1.0). Regarding ST tendon volume, we did not observe an effect of the training regimens (p = 0.411). CONCLUSION These results provide evidence that the HL program induced selective hypertrophy of the ST, while LL-BFR induced hypertrophy of the SM. The magnitude of the selective hypertrophy observed within each group varied greatly between individuals. This finding suggests that it is very difficult to predict in advance where hypertrophy will occur within a muscle group.
Affiliation(s)
- A Frouin
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
  - Institut Sport Atlantique, ISA, Nantes, France
- G Le Sant
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
  - School of Physiotherapy, IFM3R, Nantes, France
- L Barbier
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
  - School of Physiotherapy, IFM3R, Nantes, France
- E Jacquemin
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
  - School of Physiotherapy, IFM3R, Nantes, France
- P J McNair
  - Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
- R Ellis
  - Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
  - Active Living and Rehabilitation: Aotearoa, School of Clinical Sciences, Auckland University of Technology, Auckland, New Zealand
- A Nordez
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
  - Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
  - Institut Universitaire de France (IUF), Paris, France
- L Lacourpaille
  - Nantes Université, Movement - Interactions - Performance, MIP, Nantes, France
8. Zeineldin RA, Karar ME, Burgert O, Mathis-Ullrich F. NeuroIGN: Explainable Multimodal Image-Guided System for Precise Brain Tumor Surgery. J Med Syst 2024; 48:25. PMID: 38393660; DOI: 10.1007/s10916-024-02037-3
Abstract
Precise neurosurgical guidance is critical for successful brain surgeries and plays a vital role in all phases of image-guided neurosurgery (IGN). Neuronavigation software enables real-time tracking of surgical tools, ensuring their presentation with high precision in relation to a virtual patient model. Therefore, this work focuses on the development of a novel multimodal IGN system, leveraging deep learning and explainable AI to enhance brain tumor surgery outcomes. The study establishes the clinical and technical requirements of the system for brain tumor surgeries. NeuroIGN adopts a modular architecture, including brain tumor segmentation, patient registration, and explainable output prediction, and integrates open-source packages into an interactive neuronavigational display. The NeuroIGN system components underwent validation and evaluation in both laboratory and simulated operating room (OR) settings. Experimental results demonstrated its accuracy in tumor segmentation and the success of ExplainAI in increasing the trust of medical professionals in deep learning. The proposed system was successfully assembled and set up within 11 min in a pre-clinical OR setting with a tracking accuracy of 0.5 (± 0.1) mm. NeuroIGN was also evaluated as highly useful, with a high frame rate (19 FPS) and real-time ultrasound imaging capabilities. In conclusion, this paper describes not only the development of an open-source multimodal IGN system but also demonstrates the innovative application of deep learning and explainable AI algorithms in enhancing neuronavigation for brain tumor surgeries. By seamlessly integrating pre- and intra-operative patient image data with cutting-edge interventional devices, our experiments underscore the potential for deep learning models to improve the surgical treatment of brain tumors and long-term post-operative outcomes.
Affiliation(s)
- Ramy A Zeineldin
  - Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
  - Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
  - Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Mohamed E Karar
  - Faculty of Electronic Engineering (FEE), Menoufia University, Minuf, 32952, Egypt
- Oliver Burgert
  - Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762, Reutlingen, Germany
- Franziska Mathis-Ullrich
  - Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen-Nürnberg, 91052, Erlangen, Germany
9. van der Zee JM, Fitski M, van de Sande MAJ, Buser MAD, Hiep MAJ, Terwisscha van Scheltinga CEJ, Hulsker CCC, van den Bosch CH, van de Ven CP, van der Heijden L, Bökkerink GMJ, Wijnen MHWA, Siepel FJ, van der Steeg AFW. Tracked ultrasound registration for intraoperative navigation during pediatric bone tumor resections with soft tissue components: a porcine cadaver study. Int J Comput Assist Radiol Surg 2024; 19:297-302. PMID: 37924438; PMCID: PMC10838821; DOI: 10.1007/s11548-023-03021-x
Abstract
PURPOSE Resection of pediatric osteosarcoma in the extremities with soft tissue involvement presents surgical challenges due to difficult visualization and palpation of the tumor. Therefore, an adequate image-guided surgery (IGS) system is required for more accurate tumor resection. The use of a 3D model in combination with intraoperative tracked ultrasound (iUS) may enhance surgical decision making. This study evaluates the clinical feasibility of iUS as a surgical tool using a porcine cadaver model. METHODS First, a 3D model of the porcine lower limb was created from preoperative scans. Second, the bone surface of the tibia was automatically detected with iUS by sweeping the probe over the skin. The bone surface of the preoperative 3D model was then matched with the bone surface detected by the iUS. Ten artificial targets were used to calculate the target registration error (TRE). Intraoperative performance of iUS IGS was evaluated by six pediatric surgeons and two pediatric oncologic orthopedists. Finally, user experience was assessed with a post-procedural questionnaire. RESULTS Eight registration procedures were performed, with a mean TRE of 6.78 ± 1.33 mm. The surgeons expressed willingness to adopt the system in their current clinical practice, noting the additional clinical value of iUS combined with the 3D model for localizing the soft tissue components of the tumor. The concept of the proposed IGS system is considered feasible by the clinical panel, but the large TRE and the degree of automation need to be addressed in further work. CONCLUSION The participating pediatric surgeons and orthopedists were convinced of the clinical value of the interaction between the iUS and the 3D model. Further research is required to improve the surgical accuracy and degree of automation of iUS-based registration systems for the surgical management of pediatric osteosarcoma.
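A TRE such as the 6.78 ± 1.33 mm reported above is conventionally computed as the distance between targets mapped through the estimated registration and their ground-truth positions. A generic sketch (the 4x4 homogeneous transform and the argument names are illustrative, not the study's pipeline):

```python
import numpy as np

def target_registration_error(T, targets_src, targets_true):
    """Per-target registration error: Euclidean distance between each
    target mapped through a 4x4 homogeneous transform T and its
    ground-truth position. Targets are (N, 3) arrays in mm."""
    pts = np.c_[targets_src, np.ones(len(targets_src))]  # homogeneous coords
    mapped = (T @ pts.T).T[:, :3]
    return np.linalg.norm(mapped - targets_true, axis=1)
```

The mean ± standard deviation of the returned per-target distances is the form in which TRE is usually reported, as in the abstract above.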
Affiliation(s)
- J M van der Zee
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
  - Technical Medicine, TechMed Centre, University of Twente, Enschede, The Netherlands
- M Fitski
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- M A J van de Sande
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
  - Department of Orthopaedics, Leiden University Medical Center, Leiden, The Netherlands
- M A D Buser
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- M A J Hiep
  - Department of Surgical Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- C C C Hulsker
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- C H van den Bosch
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- C P van de Ven
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- L van der Heijden
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- G M J Bökkerink
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- M H W A Wijnen
  - Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands
- F J Siepel
  - Robotics and Mechatronics, TechMed Centre, University of Twente, Enschede, The Netherlands
10. Langlieb J, Sachdev NS, Balderrama KS, Nadaf NM, Raj M, Murray E, Webber JT, Vanderburg C, Gazestani V, Tward D, Mezias C, Li X, Flowers K, Cable DM, Norton T, Mitra P, Chen F, Macosko EZ. The molecular cytoarchitecture of the adult mouse brain. Nature 2023; 624:333-342. PMID: 38092915; PMCID: PMC10719111; DOI: 10.1038/s41586-023-06818-7
Abstract
The function of the mammalian brain relies upon the specification and spatial positioning of diversely specialized cell types. Yet the molecular identities of the cell types, and their positions within individual anatomical structures, remain incompletely known. To construct a comprehensive atlas of cell types in each brain structure, we paired high-throughput single-nucleus RNA sequencing with Slide-seq, a recently developed spatial transcriptomics method with near-cellular resolution, across the entire mouse brain. Integration of these datasets revealed the cell type composition of each neuroanatomical structure. Cell type diversity was found to be remarkably high in the midbrain, hindbrain and hypothalamus, with most clusters requiring a combination of at least three discrete gene expression markers to uniquely define them. Using these data, we developed a framework for genetically accessing each cell type, comprehensively characterized neuropeptide and neurotransmitter signalling, elucidated region-specific specializations in activity-regulated gene expression, and ascertained the heritability enrichment of neurological and psychiatric phenotypes. These data, available as an online resource (www.BrainCellData.org), should find diverse applications across neuroscience, including the construction of new genetic tools and the prioritization of specific cell types and circuits in the study of brain diseases.
Collapse
Affiliation(s)
| | | | | | - Naeem M Nadaf
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | - Mukund Raj
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | - Evan Murray
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
| | | | | | | | - Daniel Tward
- Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, USA
| | - Chris Mezias
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Xu Li
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | | | - Dylan M Cable
- Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | | | - Partha Mitra
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
| | - Fei Chen
- Broad Institute of Harvard and MIT, Cambridge, MA, USA.
- Harvard Stem Cell and Regenerative Biology, Cambridge, MA, USA.
| | - Evan Z Macosko
- Broad Institute of Harvard and MIT, Cambridge, MA, USA.
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA.
11
Torgersen KT, Bouton BJ, Hebert AR, Kleyla NJ, Plasencia X, Rolfe GL, Tagliacollo VA, Albert JS. Phylogenetic structure of body shape in a diverse inland ichthyofauna. Sci Rep 2023; 13:20758. [PMID: 38007528 PMCID: PMC10676429 DOI: 10.1038/s41598-023-48086-5]
Abstract
Body shape is a fundamental metric of animal diversity affecting critical behavioral and ecological dynamics and conservation status, yet previously available methods capture only a fraction of total body-shape variance. Here we use structure-from-motion (SFM) 3D photogrammetry to generate digital 3D models of adult fishes from the Lower Mississippi Basin, one of the most diverse temperate-zone freshwater faunas on Earth, and 3D geometric morphometrics to capture morphologically distinct shape variables, interpreting principal components as growth fields. The mean body shape in this fauna resembles plesiomorphic teleost fishes, and the major dimensions of body-shape disparity are similar to those of other fish faunas worldwide. Major patterns of body-shape disparity are structured by phylogeny, with nested clades occupying distinct portions of the morphospace, most of the morphospace occupied by multiple distinct clades, and one clade (Acanthomorpha) accounting for over half of the total body shape variance. In contrast to previous studies, variance in body depth (59.4%) structures overall body-shape disparity more than does length (31.1%), while width accounts for a non-trivial (9.5%) amount of the total body-shape disparity.
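The principal-component decomposition behind figures such as the 59.4% body-depth share can be illustrated with a minimal sketch. The landmark matrix below is synthetic (random data with one inflated direction), not the study's fish models; only the mechanics of computing per-axis variance fractions are shown.

```python
import numpy as np

# Hypothetical illustration: decompose flattened 3D landmark coordinates
# (n specimens x coordinates) into principal components and report the
# fraction of total shape variance each component explains.
rng = np.random.default_rng(0)
n_specimens, n_coords = 40, 30            # synthetic sizes, not from the study
landmarks = rng.normal(size=(n_specimens, n_coords))
landmarks[:, 0] *= 5.0                    # give one direction dominant variance

centered = landmarks - landmarks.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)           # variance fraction per component

print(f"PC1 explains {explained[0]:.1%} of shape variance")
```

In a geometric-morphometric study the rows would be Procrustes-aligned specimens, and each `explained[i]` would correspond to a reported disparity percentage.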
Affiliation(s)
- Alyx R Hebert
- Department of Biology, University of Louisiana, Lafayette, USA
- Noah J Kleyla
- Department of Biology, University of Louisiana, Lafayette, USA
- Garrett L Rolfe
- Department of Biology, University of Louisiana, Lafayette, USA
- James S Albert
- Department of Biology, University of Louisiana, Lafayette, USA
12
Park TY, Koh H, Lee W, Park SH, Chang WS, Kim H. Real-Time Acoustic Simulation Framework for tFUS: A Feasibility Study Using Navigation System. Neuroimage 2023; 282:120411. [PMID: 37844771 DOI: 10.1016/j.neuroimage.2023.120411]
Abstract
Transcranial focused ultrasound (tFUS), in which acoustic energy is focused on a small region in the brain through the skull, is a non-invasive therapeutic method with high spatial resolution and depth penetration. Image-guided navigation has been widely utilized to visualize the location of acoustic focus in the cranial cavity. However, this system is often inaccurate because of the significant aberrations caused by the skull. Therefore, acoustic simulations using a numerical solver have been widely adopted to compensate for this inaccuracy. Although the simulation can predict the intracranial acoustic pressure field, real-time application during tFUS treatment is almost impossible due to the high computational cost. In this study, we propose a neural network-based real-time acoustic simulation framework and test its feasibility by implementing a simulation-guided navigation (SGN) system. Real-time acoustic simulation is performed using a 3D conditional generative adversarial network (3D-cGAN) model featuring residual blocks and multiple loss functions. This network was trained by the conventional numerical acoustic simulation program (i.e., k-Wave). The SGN system is then implemented by integrating real-time acoustic simulation with a conventional image-guided navigation system. The proposed system can provide simulation results with a frame rate of 5 Hz (i.e., about 0.2 s), including all processing times. In numerical validation (3D-cGAN vs. k-Wave), the average peak intracranial pressure error was 6.8 ± 5.5%, and the average acoustic focus position error was 5.3 ± 7.7 mm. In experimental validation using a skull phantom (3D-cGAN vs. actual measurement), the average peak intracranial pressure error was 4.5%, and the average acoustic focus position error was 6.6 mm. These results demonstrate that the SGN system can predict the intracranial acoustic field according to transducer placement in real-time.
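The two validation metrics quoted above, relative peak-pressure error and focus position error, can be sketched numerically. The pressure fields here are tiny synthetic arrays standing in for k-Wave and surrogate-model output; the grid spacing is an assumption.

```python
import numpy as np

# Sketch of the abstract's two comparison metrics on synthetic fields:
# (1) relative peak-pressure error, (2) Euclidean distance between foci.
rng = np.random.default_rng(1)
ref = rng.random((20, 20, 20))            # reference field (e.g. k-Wave)
ref[10, 8, 12] = 2.0                      # reference focus
pred = ref.copy()                         # surrogate-model field
pred[10, 8, 12] = 1.9                     # slightly underestimates the peak
pred[11, 8, 12] = 1.95                    # and shifts the focus by one voxel

voxel_mm = 1.0                            # assumed isotropic grid spacing
peak_err = abs(pred.max() - ref.max()) / ref.max() * 100.0   # percent
p_ref = np.array(np.unravel_index(ref.argmax(), ref.shape))
p_pred = np.array(np.unravel_index(pred.argmax(), pred.shape))
focus_err_mm = np.linalg.norm((p_pred - p_ref) * voxel_mm)
```

On real data these two quantities correspond to the reported 6.8 ± 5.5% pressure error and 5.3 ± 7.7 mm position error.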
Affiliation(s)
- Tae Young Park
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
- Heekyung Koh
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea
- Wonhye Lee
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- So Hee Park
- Department of Neurosurgery, Yeungnam University Medical Center, Daegu 42415, Republic of Korea
- Won Seok Chang
- Department of Neurosurgery, Brain Research Institute, Yonsei University College of Medicine, Seoul 04527, Republic of Korea
- Hyungmin Kim
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea.
13
Manuel TJ, Sigona MK, Phipps MA, Kusunose J, Luo H, Yang PF, Newton AT, Gore JC, Grissom W, Chen LM, Caskey CF. Small volume blood-brain barrier opening in macaques with a 1 MHz ultrasound phased array. J Control Release 2023; 363:707-720. [PMID: 37827222 DOI: 10.1016/j.jconrel.2023.10.015]
Abstract
The use of focused ultrasound to open the blood-brain barrier (BBB) has the potential to deliver drugs to specific regions of the brain. The size of the BBB opening and the ability to localize it determine the spatial extent of delivery, and are a limiting factor in applications where targeting a small brain region is desired. Here we evaluate the performance of a system designed for small opening volumes and highlight the unique challenges associated with pushing the spatial precision of this technique. To achieve small volume openings in cortical regions of the macaque brain, we tested a custom 1 MHz array transducer integrated into a magnetic resonance image-guided focused ultrasound system. Using real-time cavitation monitoring, we demonstrated twelve instances of single-sonication, small-volume BBB opening with average volumes of 59 ± 37 mm³ and 184 ± 2 mm³ in cortical and subcortical targets, respectively. We found high correlation between subject-specific acoustic simulations and observed openings when incorporating grey matter segmentation (R² = 0.8577), and the threshold for BBB opening based on simulations was 0.53 MPa. Analysis of MRI-based safety assessment and cavitation signals indicates a safe pressure range for 1 MHz BBB opening and suggests that our system can be used to deliver drugs and gene therapy to small brain regions.
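Agreement figures like the R² = 0.8577 above come from regressing observed openings against simulated pressure. A minimal sketch, on fabricated example data (the pressures, volumes, and the linear model are all assumptions for illustration):

```python
from statistics import mean

# Least-squares line volume ≈ a*pressure + b, then R² (coefficient
# of determination) between simulated pressure and observed opening.
pressure = [0.4, 0.5, 0.6, 0.7, 0.8]        # MPa, hypothetical
volume = [0.0, 5.0, 45.0, 110.0, 160.0]     # mm³, hypothetical

px, vy = mean(pressure), mean(volume)
a = sum((p - px) * (v - vy) for p, v in zip(pressure, volume)) / \
    sum((p - px) ** 2 for p in pressure)
b = vy - a * px
ss_res = sum((v - (a * p + b)) ** 2 for p, v in zip(pressure, volume))
ss_tot = sum((v - vy) ** 2 for v in volume)
r_squared = 1.0 - ss_res / ss_tot
```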
Affiliation(s)
- Thomas J Manuel
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Michelle K Sigona
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- M Anthony Phipps
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Jiro Kusunose
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Huiwen Luo
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Pai-Feng Yang
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Allen T Newton
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- John C Gore
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- William Grissom
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Li Min Chen
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Charles F Caskey
- Vanderbilt University, Nashville, TN, USA; Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA.
14
Sigona MK, Manuel TJ, Anthony Phipps M, Boroujeni KB, Treuting RL, Womelsdorf T, Caskey CF. Generating Patient-Specific Acoustic Simulations for Transcranial Focused Ultrasound Procedures Based on Optical Tracking Information. IEEE Open J Ultrason Ferroelectr Freq Control 2023; 3:146-156. [PMID: 38222464 PMCID: PMC10785958 DOI: 10.1109/ojuffc.2023.3318560]
Abstract
Optical tracking is a real-time transducer positioning method for transcranial focused ultrasound (tFUS) procedures, but the predicted focus from optical tracking typically does not incorporate subject-specific skull information. Acoustic simulations can estimate the pressure field when propagating through the cranium but rely on accurately replicating the positioning of the transducer and skull in a simulated space. Here, we develop and characterize the accuracy of a workflow that creates simulation grids based on optical tracking information in a neuronavigated phantom with and without transmission through an ex vivo skull cap. The software pipeline could replicate the geometry of the tFUS procedure within the limits of the optical tracking system (transcranial target registration error (TRE): 3.9 ± 0.7 mm). The simulated focus and the free-field focus predicted by optical tracking had low Euclidean distance errors of 0.5 ± 0.1 mm and 1.2 ± 0.4 mm for the phantom and skull cap, respectively, and some skull-specific effects were captured by the simulation. However, the TRE of the simulation informed by optical tracking was 4.6 ± 0.2 mm, which is as large as or greater than the focal spot size used by many tFUS systems. By updating the position of the transducer using the original TRE offset, we reduced the simulated TRE to 1.1 ± 0.4 mm. Our study describes a software pipeline for treatment planning, evaluates its accuracy, and demonstrates an approach using MR-acoustic radiation force imaging as a method to improve dosimetry. Overall, our software pipeline helps estimate acoustic exposure, and our study highlights the need for image feedback to increase the accuracy of tFUS dosimetry.
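The TRE metric and the offset-based correction described above can be sketched as follows; the target and estimate coordinates are invented, and the one-shot subtraction stands in for physically re-positioning the transducer.

```python
import numpy as np

# Minimal sketch (synthetic points): target registration error (TRE) as the
# Euclidean distance between a navigated estimate and its ground-truth target,
# plus a correction that subtracts the previously measured offset.
target = np.array([10.0, 20.0, 30.0])            # ground-truth focus (mm)
estimate = np.array([13.0, 22.0, 31.0])          # optically tracked prediction

tre = np.linalg.norm(estimate - target)          # initial TRE in mm
offset = estimate - target                       # measured systematic offset
corrected = estimate - offset                    # re-positioned estimate
tre_corrected = np.linalg.norm(corrected - target)
```

In the study the correction reduced the simulated TRE from 4.6 ± 0.2 mm to 1.1 ± 0.4 mm; in this idealized sketch the residual error vanishes because the offset is known exactly.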
Affiliation(s)
- Michelle K Sigona
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA
- Vanderbilt University Institute of Imaging Science, Nashville, TN 37232, USA
- Thomas J Manuel
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA
- Vanderbilt University Institute of Imaging Science, Nashville, TN 37232, USA
- M Anthony Phipps
- Vanderbilt University Institute of Imaging Science, Nashville, TN 37232, USA
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN 37212, USA
- Robert Louie Treuting
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA
- Thilo Womelsdorf
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Charles F Caskey
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37212, USA
- Vanderbilt University Institute of Imaging Science, Nashville, TN 37232, USA
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN 37212, USA
15
Hiep MAJ, Heerink WJ, Groen HC, Ruers TJM. Feasibility of tracked ultrasound registration for pelvic-abdominal tumor navigation: a patient study. Int J Comput Assist Radiol Surg 2023; 18:1725-1734. [PMID: 37227572 DOI: 10.1007/s11548-023-02937-8]
Abstract
PURPOSE Surgical navigation techniques can guide surgeons in localizing pelvic-abdominal malignancies. For abdominal navigation, accurate patient registration is crucial and is generally performed using an intra-operative cone-beam CT (CBCT). However, this method causes a 15-min interruption of the surgical preparation workflow and radiation exposure, and, more importantly, it cannot be repeated during surgery to compensate for large patient movement. As an alternative, the accuracy and feasibility of tracked ultrasound (US) registration are assessed in this patient study. METHODS Patients scheduled for surgical navigation during laparotomy of pelvic-abdominal malignancies were prospectively included. In the operating room, two percutaneous tracked US scans of the pelvic bone were acquired: one in supine and one in Trendelenburg patient position. Postoperatively, the bone surface was semiautomatically segmented from the US images and registered to the bone surface on the preoperative CT scan. The US registration accuracy was computed using the CBCT registration as a reference, and acquisition times were compared. Additionally, both US measurements were compared to quantify the registration error caused by patient movement into Trendelenburg. RESULTS In total, 18 patients were included and analyzed. US registration resulted in a mean surface registration error of 1.2 ± 0.2 mm and a mean target registration error of 3.3 ± 1.4 mm. US acquisitions were four times faster than the CBCT scans (two-sample t-test, P < 0.05) and could even be performed during standard patient preparation before skin incision. Patient repositioning in Trendelenburg caused a mean target registration error of 7.7 ± 3.3 mm, mainly in the cranial direction. CONCLUSION US registration based on the pelvic bone is accurate, fast and feasible for surgical navigation. Further optimization of the bone segmentation algorithm will allow for real-time registration in the clinical workflow. In the end, this would allow intra-operative US registration to correct for large patient movement. TRIAL REGISTRATION This study is registered in ClinicalTrials.gov (NCT05637359).
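The acquisition-time comparison reported as "two-sample t-test, P < 0.05" can be sketched with a hand-rolled Welch t statistic. All timings below are hypothetical, not the study's measurements.

```python
import math
from statistics import mean, variance

# Illustrative Welch two-sample t statistic comparing tracked-US vs.
# cone-beam CT acquisition times (synthetic minutes, 5 cases each).
us_times = [3.1, 2.8, 3.5, 3.0, 2.9]
cbct_times = [12.4, 13.1, 11.8, 12.9, 12.6]

m1, m2 = mean(us_times), mean(cbct_times)
v1, v2 = variance(us_times), variance(cbct_times)   # sample variances
n1, n2 = len(us_times), len(cbct_times)
t_stat = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
```

A strongly negative `t_stat` reflects the US scans being much faster; a full analysis would convert this to a p-value via the Welch-Satterthwaite degrees of freedom.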
Affiliation(s)
- M A J Hiep
- Department of Surgical Oncology, Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands.
- W J Heerink
- Department of Surgical Oncology, Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands
- H C Groen
- Department of Surgical Oncology, Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands
- T J M Ruers
- Department of Surgical Oncology, Netherlands Cancer Institute, 1066 CX, Amsterdam, The Netherlands
- Faculty of Science and Technology (TNW), Nanobiophysics Group (NBP), University of Twente, 7500 AE, Enschede, The Netherlands
16
Aimi T, Nakamura Y. A novel method for estimating sternoclavicular posterior rotation with promising accuracy: A validity comparison with minimizing acromioclavicular rotation approach. Med Eng Phys 2023; 118:104010. [PMID: 37536833 DOI: 10.1016/j.medengphy.2023.104010]
Abstract
The human shoulder complex's motion is modeled by nine rotational degrees of freedom (DoF) at the sternoclavicular (SC), acromioclavicular (AC), and glenohumeral joints. Non-invasive measurement of these rotations is desirable for shoulder kinematic assessment or musculoskeletal modeling. The accuracy of the conventional method for estimating SC posterior rotation is unclear, and its estimates might be inflated because it assumes no rotation at the AC joint. We aimed to explore whether our new method, which allows AC rotation, provides a more accurate estimation of SC posterior rotation than the conventional method. We compared estimates from both methods, in 18 postures among 8 healthy men, with those measured by the registration method from magnetic resonance images. Post hoc analyses showed significant differences between the registration and conventional methods in all 18 postures, but in only one posture when our method was compared. While the conventional method tended toward overestimation and showed a 22.7° root-mean-square error across all postures, the new method had greater accuracy (6.8° root-mean-square error). By combining this method with the scapulothoracic rotation measurement method and other traditional methods, it should be possible to indirectly measure 3-DoF AC rotation, implying that non-invasive measurement of all 9-DoF rotations of the shoulder complex would now be possible.
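The root-mean-square error comparison above (22.7° vs. 6.8°) can be sketched as follows; the angle values are made up, and only the RMSE computation itself mirrors the abstract.

```python
import math

# RMSE of two estimators of SC posterior rotation against a reference
# (e.g. MRI-based registration) measurement, in degrees (synthetic data).
reference = [10.0, 15.0, 20.0, 25.0]
conventional = [30.0, 38.0, 45.0, 50.0]       # tends to overestimate
proposed = [11.0, 14.0, 22.0, 24.0]

def rmse(est, ref):
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(est, ref)) / len(ref))

rmse_conventional = rmse(conventional, reference)
rmse_proposed = rmse(proposed, reference)
```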
Affiliation(s)
- Takayuki Aimi
- Graduate School of Health and Sports Science, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe-shi, Kyoto-fu, 610-0394, Japan; Japan Society for the Promotion of Science, Kojimachi Business Center Building, 5-3-1 Kojimachi, Chiyoda-ku, Tokyo, 102-0083, Japan.
- Yasuo Nakamura
- Faculty of Health and Sports Science, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe-shi, Kyoto-fu, 610-0394, Japan
17
Frouin A, Guenanten H, Le Sant G, Lacourpaille L, Liebard M, Sarcher A, McNair PJ, Ellis R, Nordez A. Validity and Reliability of 3-D Ultrasound Imaging to Measure Hamstring Muscle and Tendon Volumes. Ultrasound Med Biol 2023; 49:1457-1464. [PMID: 36948893 DOI: 10.1016/j.ultrasmedbio.2023.02.012]
Abstract
OBJECTIVE The validity and reliability of 3-D ultrasound (US) for estimating muscle and tendon volume have previously been assessed in only a very limited number of muscles that can be easily immersed. The objective of the present study was to assess the validity and reliability of muscle volume measurements for all hamstring muscle heads and the gracilis (GR), as well as tendon volume for the semitendinosus (ST) and GR, using freehand 3-D US. METHODS Three-dimensional US acquisitions were performed for 13 participants in two distinct sessions on separate days, in addition to one session dedicated to magnetic resonance imaging (MRI). Volumes of the ST, semimembranosus (SM), biceps femoris short (BFsh) and long (BFlh) heads, and GR muscles, as well as of the ST tendon (STtd) and GR tendon (GRtd), were collected. RESULTS The bias and the 95% confidence intervals of 3-D US compared with MRI ranged from -1.9 mL (-0.8%) to 1.2 mL (1.0%) for muscle volume and from 0.01 mL (0.2%) to -0.03 mL (-2.6%) for tendon volume. For muscle volume assessed using 3-D US, intraclass correlation coefficients (ICCs) ranged from 0.98 (GR) to 1.00, and coefficients of variation (CV) from 1.1% (SM) to 3.4% (BFsh). For tendon volume, ICCs were 0.99, and CVs between 3.2% (STtd) and 3.4% (GRtd). CONCLUSION Three-dimensional US can provide a valid and reliable inter-day measurement of hamstring and GR muscle and tendon volumes. In the future, this technique could be used as an outcome for strengthening interventions and potentially in clinical environments.
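Two of the agreement metrics above, bias against MRI and the between-session coefficient of variation, can be sketched on synthetic volumes (the numbers below are invented; the ICC computation, which needs an ANOVA decomposition, is omitted for brevity).

```python
from statistics import mean, stdev

# Bland-Altman-style bias of 3-D US vs. MRI, and per-participant CV
# across two repeated 3-D US sessions (hypothetical volumes, mL).
mri = [210.0, 185.0, 240.0, 198.0]
us_day1 = [208.5, 186.0, 238.0, 199.0]
us_day2 = [211.0, 184.5, 241.5, 197.5]

diffs = [u - m for u, m in zip(us_day1, mri)]
bias_ml = mean(diffs)                    # validity: bias vs. MRI

cvs = []                                 # reliability: inter-day CV (%)
for a, b in zip(us_day1, us_day2):
    cvs.append(stdev([a, b]) / mean([a, b]) * 100.0)
cv_percent = mean(cvs)
```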
Affiliation(s)
- Antoine Frouin
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France; Institut Sport Atlantique (ISA), Nantes, France
- Hugo Guenanten
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France
- Guillaume Le Sant
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France; School of Physiotherapy, IFM3R, Nantes, France
- Lilian Lacourpaille
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France
- Martin Liebard
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France; School of Physiotherapy, IFM3R, Nantes, France
- Aurélie Sarcher
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France
- Peter J McNair
- Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
- Richard Ellis
- Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand; Active Living and Rehabilitation: Aotearoa, Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand
- Antoine Nordez
- Nantes Université, Movement - Interactions - Performance, MIP, UR 4334, F-44000 Nantes, France; Health and Rehabilitation Research Institute, Faculty of Health and Environmental Sciences, Auckland University of Technology, Auckland, New Zealand; Institut Universitaire de France (IUF), Paris, France.
18
Laudenschlager S, Cai XC. An inner-outer subcycling algorithm for parallel cardiac electrophysiology simulations. Int J Numer Method Biomed Eng 2023; 39:e3677. [PMID: 36573938 DOI: 10.1002/cnm.3677]
Abstract
This paper explores cardiac electrophysiological simulations of the monodomain equations and introduces a novel subcycling time integration algorithm to exploit the structure of the ionic model. The aim of this work is to improve upon the efficiency of parallel cardiac monodomain simulations by using our subcycling algorithm in the computation of the ionic model to handle the local sharp changes of the solution. This will reduce the turnaround time for the simulation of basic cardiac electrical function on both idealized and patient-specific geometry. Numerical experiments show that the proposed approach is accurate and also has close to linear parallel scalability on a computer with more than 1000 processor cores. Ultimately, the reduction in simulation time can be beneficial in clinical applications, where multiple simulations are often required to tune a model to match clinical measurements.
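The inner-outer subcycling idea summarized above can be sketched in a toy scalar setting. This is not the paper's monodomain solver: the reaction term, step sizes, and subcycle count are illustrative assumptions; only the structure (several small inner steps of the stiff ionic model per large outer step) is the point.

```python
# Toy inner-outer subcycling: the stiff "ionic" ODE is advanced with
# n_sub small explicit Euler steps for every large outer step, so the
# fast local dynamics are resolved without shrinking the outer step.

def ionic_rhs(v):
    return -50.0 * (v - 1.0)        # stiff local reaction (hypothetical)

def step_subcycled(v, dt_outer, n_sub):
    dt_inner = dt_outer / n_sub
    for _ in range(n_sub):          # inner subcycle of the ionic model
        v = v + dt_inner * ionic_rhs(v)
    # a real solver would apply the outer diffusion/coupling update here
    return v

v = 0.0
for _ in range(100):
    v = step_subcycled(v, dt_outer=0.01, n_sub=10)
# v relaxes toward the ionic equilibrium at 1.0
```

With `dt_outer = 0.01` a plain Euler step of this reaction would be unstable (factor 1 - 50·0.01 = 0.5 per step is stable here, but stiffer rates would not be); subcycling keeps the inner step small where the dynamics are fast.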
Affiliation(s)
- Xiao-Chuan Cai
- Department of Mathematics, University of Macau, Macau, China
19
Pandey PU, Guy P, Lefaivre KA, Hodgson AJ. What are the optimal targeting visualizations for performing surgical navigation of iliosacral screws? A user study. Arch Orthop Trauma Surg 2023; 143:677-690. [PMID: 34402930 DOI: 10.1007/s00402-021-04120-7]
Abstract
INTRODUCTION Complex orthopaedic procedures, such as iliosacral screw (ISS) fixations, can take advantage of surgical navigation technology to achieve accurate results. Although the impact of surgical navigation on outcomes has been studied, no studies to date have quantified how the design of the targeting display used for navigation affects ISS targeting performance. However, it is known in other contexts that how task information is displayed can have significant effects on both the accuracy and the time required to perform motor tasks, and that this can differ among users with different experience levels. This study aimed to investigate which visualization techniques helped experienced surgeons and inexperienced users most efficiently and accurately align a surgical tool to a target axis. METHODS We recruited 21 participants and conducted a user study to investigate five proposed 2D visualizations (bullseye, rotated bullseye, target-fixed, tool-fixed in translation, and tool-fixed in translation and rotation) with varying representations of the ISS targets and tool, and one 3D visualization. We measured the targeting accuracy achieved by each participant, as well as the time required to perform the task using each of the visualizations. RESULTS We found that all 2D visualizations had equivalent translational and rotational errors, with mean translational errors below 0.9 mm and rotational errors below 1.1°. The 3D visualization had statistically greater mean translational and rotational errors (4.29 mm and 5.47°, p < 0.001) across all users. We also found that the 2D bullseye view allowed users to complete the simulated task most efficiently (mean 30.2 s; 95% CI 26.4-35.7 s), even when combined with other visualizations. CONCLUSIONS Our results show that 2D bullseye views helped both experienced orthopaedic trauma surgeons and inexperienced users target iliosacral screws accurately and efficiently. These findings could inform the design of visualizations for use in a surgical navigation system for screw insertions, in both training and surgical practice.
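The two error measures in the targeting task above, translational offset and angular deviation of the tool from the planned screw axis, can be sketched on invented geometry:

```python
import math

# Hedged sketch: translational error as the distance between entry points,
# rotational error as the angle between tool axis and target axis (unit
# vectors). All coordinates are hypothetical.
target_point = (0.0, 0.0, 0.0)
target_axis = (0.0, 0.0, 1.0)
tool_point = (0.5, 0.4, 0.0)
tool_axis = (0.0, math.sin(math.radians(1.0)), math.cos(math.radians(1.0)))

trans_err = math.dist(tool_point, target_point)          # mm
dot = sum(a * b for a, b in zip(tool_axis, target_axis))
rot_err_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
```

The clamp on `dot` guards against floating-point values fractionally outside [-1, 1] before `acos`.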
Affiliation(s)
- Prashant U Pandey
- School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC, V6T 1Z3, Canada.
- Pierre Guy
- Department of Orthopaedics, Faculty of Medicine, University of British Columbia, 11th Floor, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
- Kelly A Lefaivre
- Department of Orthopaedics, Faculty of Medicine, University of British Columbia, 11th Floor, 2775 Laurel Street, Vancouver, BC, V5Z 1M9, Canada
- Antony J Hodgson
- Department of Mechanical Engineering, University of British Columbia, 2054-6250 Applied Science Lane, Vancouver, BC, V6T 1Z4, Canada
20
Li W, Fan J, Li S, Zheng Z, Tian Z, Ai D, Song H, Chen X, Yang J. An incremental registration method for endoscopic sinus and skull base surgery navigation: From phantom study to clinical trials. Med Phys 2023; 50:226-239. [PMID: 35997999 DOI: 10.1002/mp.15941]
Abstract
PURPOSE Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is urgently required. METHODS An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove the redundant points. The corresponding point cloud in patient space was incrementally collected by an optically tracked pointer, while a nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under the constraints of coverage ratio (CR) and outliers ratio (OR) was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS The results of the phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experimental results revealed that the proposed registration method significantly outperformed the scanner-based method and achieved accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), revealing that the accuracy of the proposed method met the clinical requirement (TRE ≤ 2 mm, p < 0.05). CONCLUSIONS The proposed method offers both high accuracy and convenience, advantages that the scanner-based and fiducial-based methods, respectively, lack. Our findings will help improve the quality of endoscopic sinus and skull base surgery.
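The nearest point distance (NPD) constraint described above can be sketched as a simple acceptance filter: a newly collected surface point is kept only if it lies at least some minimum distance from every point already kept, which spreads the samples uniformly. The threshold value and point stream below are assumptions for illustration.

```python
import math

# NPD-style filter on an incoming stream of pointer-collected points:
# accept a point only if it is >= d_min from every point already kept.
D_MIN = 2.0  # mm, hypothetical spacing threshold

def collect(points, d_min=D_MIN):
    kept = []
    for p in points:
        if all(math.dist(p, q) >= d_min for q in kept):
            kept.append(p)
    return kept

stream = [(0, 0, 0), (0.5, 0, 0), (3, 0, 0), (3.2, 0.1, 0), (0, 4, 0)]
uniform = collect(stream)   # clustered duplicates are rejected
```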
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Xiaohong Chen
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
21
Bagher Zadeh Ansari N, Léger É, Kersten-Oertel M. VentroAR: an augmented reality platform for ventriculostomy using the Microsoft HoloLens. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2156394]
Affiliation(s)
- Étienne Léger: Department of Computer Science and Software Engineering, Concordia University, Montreal, QC, Canada
- Marta Kersten-Oertel: Department of Computer Science and Software Engineering, Concordia University, Montreal, QC, Canada; PERFORM Centre, Concordia University, Montreal, QC, Canada
22
Allen DR, Clarke C, Peters TM, Chen EC. Development and evaluation of an open-source virtual reality C-Arm simulator. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2152374]
Affiliation(s)
- Daniel R. Allen: School of Biomedical Engineering, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada
- Collin Clarke: Department of Anesthesia, London Health Sciences Centre, London, Ontario, Canada
- Terry M. Peters: School of Biomedical Engineering, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Medical Biophysics, Western University, London, Ontario, Canada
- Elvis C.S. Chen: School of Biomedical Engineering, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada; Department of Medical Biophysics, Western University, London, Ontario, Canada
23
Iribar-Zabala A, Benito R, Sánchez-Merino G, Cortes CA, Garcia-Fidalgo MA, Lopez-Linares K, Bertelsen Á. MIGHTY: a comprehensive platform for the development of medical image-guided holographic therapy. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2152373]
Affiliation(s)
- Amaia Iribar-Zabala: Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Rafael Benito: Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain
- Gaspar Sánchez-Merino: Bioaraba, New Technologies and Information Systems in Health Research Group, Vitoria-Gasteiz, Spain; Osakidetza Basque Health Service, Medical Physics Department, Araba University Hospital, Vitoria-Gasteiz, Spain
- Camilo A. Cortes: Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain; Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
- M. Angeles Garcia-Fidalgo: Bioaraba, New Technologies and Information Systems in Health Research Group, Vitoria-Gasteiz, Spain; Osakidetza Basque Health Service, Araba University Hospital, Vitoria-Gasteiz, Spain
- Karen Lopez-Linares: Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain; Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
- Álvaro Bertelsen: Vicomtech Foundation, Basque Research and Technology Alliance (BRTA), Donostia-San Sebastián, Spain; Bioengineering Area, Biodonostia Health Research Institute, San Sebastián, Spain
24
Lasso A, Herz C, Nam H, Cianciulli A, Pieper S, Drouin S, Pinter C, St-Onge S, Vigil C, Ching S, Sunderland K, Fichtinger G, Kikinis R, Jolley MA. SlicerHeart: An open-source computing platform for cardiac image analysis and modeling. Front Cardiovasc Med 2022; 9:886549. [PMID: 36148054 PMCID: PMC9485637 DOI: 10.3389/fcvm.2022.886549]
Abstract
Cardiovascular disease is a significant cause of morbidity and mortality in the developed world. 3D imaging of the heart's structure is critical to the understanding and treatment of cardiovascular disease. However, open-source tools for image analysis of cardiac images, particularly 3D echocardiographic (3DE) data, are limited. We describe the rationale, development, implementation, and application of SlicerHeart, a cardiac-focused toolkit for image analysis built upon 3D Slicer, an open-source image computing platform. We designed and implemented multiple Python scripted modules within 3D Slicer to import, register, and view 3DE data, including new code to volume render and crop 3DE. In addition, we developed dedicated workflows for the modeling and quantitative analysis of multi-modality image-derived heart models, including heart valves. Finally, we created and integrated new functionality to facilitate the planning of cardiac interventions and surgery. We demonstrate application of SlicerHeart to a diverse range of cardiovascular modeling and simulation including volume rendering of 3DE images, mitral valve modeling, transcatheter device modeling, and planning of complex surgical intervention such as cardiac baffle creation. SlicerHeart is an evolving open-source image processing platform based on 3D Slicer initiated to support the investigation and treatment of congenital heart disease. The technology in SlicerHeart provides a robust foundation for 3D image-based investigation in cardiovascular medicine.
Affiliation(s)
- Andras Lasso: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Christian Herz: Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Hannah Nam: Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Alana Cianciulli: Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Simon Drouin: Software and Information Technology Engineering, École de Technologie Supérieure, Montreal, QC, Canada
- Samuelle St-Onge: Software and Information Technology Engineering, École de Technologie Supérieure, Montreal, QC, Canada
- Chad Vigil: Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Stephen Ching: Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States
- Kyle Sunderland: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Gabor Fichtinger: Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, Canada
- Ron Kikinis: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Matthew A. Jolley (correspondence): Department of Anesthesiology and Critical Care Medicine, Children's Hospital of Philadelphia, Philadelphia, PA, United States; Division of Cardiology, Children's Hospital of Philadelphia, Philadelphia, PA, United States
25
Ehrlich J, Jamzad A, Asselin M, Rodgers JR, Kaufmann M, Haidegger T, Rudan J, Mousavi P, Fichtinger G, Ungi T. Sensor-Based Automated Detection of Electrosurgical Cautery States. Sensors (Basel) 2022; 22:5808. [PMID: 35957364 PMCID: PMC9371045 DOI: 10.3390/s22155808]
Abstract
In computer-assisted surgery, it is typically required to detect when the tool comes into contact with the patient. In activated electrosurgery, this is known as the energy event. By continuously tracking the electrosurgical tools' location using a navigation system, energy events can help determine the locations of sensor-classified tissues. Our objective was to detect the energy event and determine the settings of the electrosurgical cautery robustly and automatically, based on sensor data. This study aims to demonstrate the feasibility of using the cautery state to detect surgical incisions without disrupting the surgical workflow. We detected current changes in the wires of the cautery device and grounding pad using non-invasive current sensors and an oscilloscope. Open-source software was implemented to apply machine learning to the sensor data to detect energy events and cautery settings. Our methods classified each cautery state at an average accuracy of 95.56% across different tissue types and energy level parameters altered by surgeons during an operation. Our results demonstrate the feasibility of automatically identifying energy events during surgical incisions, which could be an important safety feature in robotic and computer-integrated surgery. This study provides a key step towards locating tissue classifications during breast cancer operations and reducing the rate of positive margins.
Affiliation(s)
- Josh Ehrlich: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Amoon Jamzad: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Mark Asselin: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Jessica Robin Rodgers: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Martin Kaufmann: Department of Surgery, Kingston Health Sciences Centre, Kingston, ON K7L 2V7, Canada
- Tamas Haidegger: University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
- John Rudan: Department of Surgery, Kingston Health Sciences Centre, Kingston, ON K7L 2V7, Canada
- Parvin Mousavi: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Gabor Fichtinger: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Tamas Ungi: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
26
External rotation of the foot position during plantarflexion increases non-uniform motions of the Achilles tendon. J Biomech 2022; 141:111232. [PMID: 35905508 DOI: 10.1016/j.jbiomech.2022.111232]
Abstract
The medial (GM) and lateral gastrocnemius (GL) muscles attach to different subparts of the Achilles tendon to form their respective subtendons. The relative gastrocnemii activations during submaximal plantarflexion contraction depend on the position of the foot in the horizontal plane: with toes-in, GL activation increases and GM activation decreases, compared to toes-out. The aim of the current study was to investigate whether the horizontal foot position during submaximal isometric plantarflexion contraction differently affects the subtendons within the Achilles tendon in terms of their (i) length at rest, and (ii) elongations and distal motions. Twenty healthy subjects (12 females/8 males) participated in the study. Three-dimensional ultrasound images were taken to capture subtendon lengths at rest and during isometric contraction. Ultrasound images were recorded at the distal end of the Achilles tendon (sagittal plane) during ramped contractions and analyzed using a speckle tracking algorithm. All tasks were conducted twice, once with toes-in and once with toes-out. At rest, subtendons were shorter with toes-out compared to toes-in. During contraction, the GM subtendon lengthened more in toes-out, compared to the GL, and vice versa (all p < .01). The relative motions within the Achilles tendon (middle minus top layer displacements) were smaller in toes-in compared to toes-out (p = .05) at higher contraction intensity. Our results demonstrated that the horizontal foot position during plantarflexion contraction impacts Achilles tendon motions. Such findings may be relevant in a clinical context, for example in pathologies affecting Achilles tendon motions such as Achilles tendinopathy.
27
Connolly L, Deguet A, Leonard S, Tokuda J, Ungi T, Krieger A, Kazanzides P, Mousavi P, Fichtinger G, Taylor RH. Bridging 3D Slicer and ROS2 for Image-Guided Robotic Interventions. Sensors (Basel) 2022; 22:5336. [PMID: 35891016 PMCID: PMC9324680 DOI: 10.3390/s22145336]
Abstract
Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most adopted tools that can be used for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. In the past, there have been several "ad hoc" attempts made to bridge both tools; however, they are all reliant on middleware and custom interfaces. Additionally, none of these attempts have been successful in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.
Affiliation(s)
- Laura Connolly: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Anton Deguet: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Simon Leonard: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Tamas Ungi: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Axel Krieger: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter Kazanzides: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Parvin Mousavi: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Gabor Fichtinger: School of Computing, Queen's University, Kingston, ON K7L 3N6, Canada
- Russell H. Taylor: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
28
Navigation guidance for ventricular septal defect closure in heart phantoms. Int J Comput Assist Radiol Surg 2022; 17:1947-1956. [PMID: 35798998 DOI: 10.1007/s11548-022-02711-2]
Abstract
PURPOSE Transesophageal echocardiography (TEE) is the preferred imaging modality in a hybrid procedure used to close ventricular septal defects (VSDs). However, the limited field of view of TEE hinders the maneuvering of surgical instruments inside the beating heart. This study evaluates the accuracy of a method that aims to support navigation guidance in the hybrid procedure. METHODS A cardiologist maneuvered a needle to puncture the patient's heart and to access a VSD, guided by information displayed in a virtual environment. The information displayed included a model of the patient's heart and a virtual needle that reproduced the position and orientation of the real needle in real time. The physical and virtual worlds were registered using landmark registration and iterative closest point algorithms, with an electromagnetic measurement system (EMS). For experiments, we developed a setup that included heart phantoms representing the patient's heart. RESULTS Experimental results from the two pediatric cases studied suggested that the information provided for guidance was accurate enough when the landmark registration algorithm was fed with the coordinates of seven points clearly identified on the surfaces of the physical and virtual hearts. Indeed, with a registration error of 2.28 mm RMS, it was possible to successfully access two VSDs (6.2 mm and 6.3 mm in diameter) in all the attempts with a needle (5 attempts) and a guidewire (7 attempts). CONCLUSION We found that information provided in a virtual environment facilitates guidance in the hybrid procedure for VSD closure. A clear identification of anatomical details on the heart surfaces is key to the accuracy of the procedure.
29
Devaraj H, Murphy E, Halter RJ. Bioimpedance Sensing Surgical Drill - In Vivo Porcine Model. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:869-872. [PMID: 36086290 DOI: 10.1109/embc48229.2022.9871554]
Abstract
Surgical drilling to place dental implants in the mandible and maxilla is associated with a high risk of iatrogenic injury to the inferior alveolar nerve and maxillary sinus. Real-time tissue margin sensing at the drill tip using electrical impedance spectroscopy (EIS) could reduce this risk by providing feedback to surgeons. Studies with saline analogues, ex-vivo tissues, in-situ tissues, and computer models have previously been conducted to evaluate these impedance sensors. Understanding the in-vivo electrical properties of tissues in the mandible and maxilla is critical to further developing the sensor and tissue margin sensing algorithms. In this paper, we propose an in-vivo animal model using pigs and discuss methods to test the sensor. Intra-operative imaging and optical tracking systems to assist in surgical navigation are described. The process of registering imaging and tracking information to localize impedance measurement sites within the anatomy is detailed. Results from one in-vivo case of drilling through the mandible are presented and discussed. Clinical Relevance: This model is crucial for characterizing the in-vivo electrical properties of mandibular and maxillary tissues encountered during dental implant surgical drilling and for translating bioimpedance sensing drill technology to the clinical space.
30
Park TY, Kim HJ, Park SH, Chang WS, Kim H, Yoon K. Differential evolution method to find optimal location of a single-element transducer for transcranial focused ultrasound therapy. Comput Methods Programs Biomed 2022; 219:106777. [PMID: 35397411 DOI: 10.1016/j.cmpb.2022.106777]
Abstract
BACKGROUND AND OBJECTIVE Focused ultrasound (FUS) has been receiving growing attention as a noninvasive brain stimulation tool because of its superior spatial specificity and depth penetrability. However, the large mismatch of acoustic properties between the skull and water can disrupt and shift the acoustic focus in the brain. In this paper, we present a numerical method to find the optimal location of a single-element FUS transducer, which creates focus on the target region. METHODS The score function, representing the superposition of acoustic waves according to the relative phase difference and transmissibility, was defined based on time-reversal invariance of acoustic waves and depending on the spatial location of the transducer. The optimal location of the transducer was then determined using a differential evolution algorithm. To assess the proposed method, we conducted a forward simulation and compared the resulting focal location to the desired target point. We also performed experimental validation by measuring the acoustic pressure field through an ex vivo human skull in a water tank. RESULTS The numerical results indicated that the score function had a positive proportional relationship with the acoustic pressure at the target. Moreover, for the optimized transducer location, both the numerical and experimental results showed that the normalized acoustic pressure at the target was higher than 0.9. CONCLUSIONS In this study, we developed an optimization method to place a single-element transducer that effectively transmits acoustic energy to the targeted region in the brain. Our numerical and experimental results demonstrate that the proposed method can provide an optimal transducer location for safe and efficient FUS treatment.
Affiliation(s)
- Tae Young Park: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
- Hyo-Jin Kim: Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea
- So Hui Park: Department of Neurosurgery, Brain Research Institute, Yonsei University College of Medicine, Seoul 04527, Republic of Korea
- Won Seok Chang: Department of Neurosurgery, Brain Research Institute, Yonsei University College of Medicine, Seoul 04527, Republic of Korea
- Hyungmin Kim: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
- Kyungho Yoon: School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul 03722, Republic of Korea
31
Hu Z, Nasute Fauerbach PV, Yeung C, Ungi T, Rudan J, Engel CJ, Mousavi P, Fichtinger G, Jabs D. Real-time automatic tumor segmentation for ultrasound-guided breast-conserving surgery navigation. Int J Comput Assist Radiol Surg 2022; 17:1663-1672. [PMID: 35588339 DOI: 10.1007/s11548-022-02658-4]
Abstract
PURPOSE Ultrasound-based navigation is a promising method in breast-conserving surgery, but tumor contouring often requires a radiologist at the time of surgery. Our goal is to develop a real-time automatic neural network-based tumor contouring process for intraoperative guidance. Segmentation accuracy is evaluated by both pixel-based metrics and expert visual rating. METHODS This retrospective study includes 7318 intraoperative ultrasound images acquired from 33 breast cancer patients, randomly split 80:20 between training and testing. We implement a U-Net architecture to label each pixel on ultrasound images as either tumor or healthy breast tissue. Quantitative metrics are calculated to evaluate the model's accuracy. Contour quality and usability are also assessed by fellowship-trained breast radiologists and surgical oncologists. Additionally, the viability of using our U-Net model in an existing surgical navigation system is evaluated by measuring the segmentation frame rate. RESULTS The mean Dice similarity coefficient of our U-Net model is 0.78, with an area under the receiver-operating characteristic curve of 0.94, sensitivity of 0.95, and specificity of 0.67. Expert visual ratings are positive, with 93% of responses rating tumor contour quality at or above 7/10, and 75% of responses rating contour quality at or above 8/10. Real-time tumor segmentation achieved a frame rate of 16 frames per second, sufficient for clinical use. CONCLUSION Neural networks trained with intraoperative ultrasound images provide consistent tumor segmentations that are well received by clinicians. These findings suggest that neural networks are a promising adjunct for alleviating radiologist workload and improving efficiency in breast-conserving surgery navigation systems.
Affiliation(s)
- Zoe Hu: School of Medicine, Queen's University, 88 Stuart Street, Kingston, ON, K7L 3N6, Canada
- Chris Yeung: School of Computing, Queen's University, Kingston, ON, Canada
- Tamas Ungi: School of Computing, Queen's University, Kingston, ON, Canada
- John Rudan: Department of Surgery, Queen's University, Kingston, ON, Canada
- Cecil Jay Engel: Department of Surgery, Queen's University, Kingston, ON, Canada
- Parvin Mousavi: School of Computing, Queen's University, Kingston, ON, Canada
- Doris Jabs: Department of Radiology, Queen's University, Kingston, ON, Canada
32
de Geer AF, van Alphen MJA, Zuur CL, Loeve AJ, van Veen RLP, Karakullukcu MB. A hybrid registration method using the mandibular bone surface for electromagnetic navigation in mandibular surgery. Int J Comput Assist Radiol Surg 2022; 17:1343-1353. [PMID: 35441961 DOI: 10.1007/s11548-022-02610-6]
Abstract
PURPOSE To enable navigated mandibular (reconstructive) surgery, accurate registration of the preoperative CT scan with the actual patient in the operating room (OR) is required. In this phantom study, the feasibility of a noninvasive hybrid registration method is assessed. This method consists of a point registration with anatomic landmarks for initialization and a surface registration using the bare mandibular bone surface for optimization. METHODS Three mandible phantoms with reference notches on two osteotomy planes were 3D printed. An electromagnetic tracking system in combination with 3D Slicer software was used for navigation. Different configurations, i.e., different surface point areas and numbers and configurations of surface points, were tested with a dentate phantom (A) in a metal-free environment. To simulate the intraoperative environment and different anatomies, the registration procedure was also performed with an OR bed using the dentate phantom and two (partially) edentulous phantoms with atypical anatomy (B and C). The accuracy of the registration was calculated using the notches on the osteotomy planes and was expressed as the target registration error (TRE). TRE values of less than 2.0 mm were considered clinically acceptable. RESULTS In all experiments, the mean TRE was less than 2.0 mm. No differences were found between different surface point areas or different numbers or configurations of surface points. Registration accuracy, mean (SD), in the simulated intraoperative setting was 0.96 (0.22), 0.93 (0.26), and 1.50 (0.28) mm for phantoms A, B, and C, respectively. CONCLUSION Hybrid registration is a noninvasive method that requires only a small area of the bare mandibular bone surface to obtain high accuracy in a phantom setting. Future studies should test this method in a clinical setting during actual surgery.
Affiliation(s)
- A. F. de Geer: Verwelius 3D Lab, Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands; Educational Program Technical Medicine, Leiden University Medical Center, Delft University of Technology, Erasmus University Medical Center, Leiden, Delft, Rotterdam, The Netherlands
- M. J. A. van Alphen: Verwelius 3D Lab, Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- C. L. Zuur: Verwelius 3D Lab, Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands; Department of Otorhinolaryngology, Leiden University Medical Center, Leiden, The Netherlands
- A. J. Loeve: Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands
- R. L. P. van Veen: Verwelius 3D Lab, Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
- M. B. Karakullukcu: Verwelius 3D Lab, Department of Head and Neck Surgery and Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek, Amsterdam, The Netherlands
33
García-Sevilla M, Moreta-Martinez R, García-Mato D, Arenas de Frutos G, Ochandiano S, Navarro-Cuéllar C, Sanjuán de Moreta G, Pascau J. Surgical Navigation, Augmented Reality, and 3D Printing for Hard Palate Adenoid Cystic Carcinoma En-Bloc Resection: Case Report and Literature Review. Front Oncol 2022; 11:741191. [PMID: 35059309 PMCID: PMC8763795 DOI: 10.3389/fonc.2021.741191]
Abstract
Adenoid cystic carcinoma is a rare and aggressive tumor representing less than 1% of head and neck cancers. This malignancy often arises from the minor salivary glands, with the palate being its most common location. Surgical en-bloc resection with clear margins is the primary treatment. However, this location offers a limited line of sight and a high risk of injury, making the surgical procedure challenging. In this context, technologies such as intraoperative navigation can become an effective tool, reducing morbidity and improving the safety and accuracy of the procedure. Although their use is well established in fields such as neurosurgery, their application in maxillofacial surgery has not been widely demonstrated. One reason is the need to rigidly fix a navigation reference to the patient, which often entails an invasive setup. In this work, we studied three alternative and less invasive setups using optical tracking, 3D printing, and augmented reality. We evaluated their precision in a patient-specific phantom, obtaining errors below 1 mm. The optimal setup was then applied in a clinical case, where the navigation software was used to guide the tumor resection. Points were collected along the surgical margins after resection and compared with the real ones identified in the postoperative CT. Distances of less than 2 mm were obtained in 90% of the samples. Moreover, the navigation provided confidence to the surgeons, who could then undertake a less invasive and more conservative approach. The postoperative CT scans showed adequate resection margins and confirmed that the patient is free of disease after two years of follow-up.
Affiliation(s)
- Mónica García-Sevilla
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Rafael Moreta-Martinez
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- David García-Mato
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Gema Arenas de Frutos
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Servicio de Cirugía Oral y Maxilofacial, Hospital General Universitario Gregorio Marañón, Madrid, Spain
- Santiago Ochandiano
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Servicio de Cirugía Oral y Maxilofacial, Hospital General Universitario Gregorio Marañón, Madrid, Spain
- Carlos Navarro-Cuéllar
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Servicio de Cirugía Oral y Maxilofacial, Hospital General Universitario Gregorio Marañón, Madrid, Spain
- Guillermo Sanjuán de Moreta
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Servicio de Otorrinolaringología, Hospital General Universitario Gregorio Marañón, Madrid, Spain
- Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
34
Gomez A, Zimmer VA, Wheeler G, Toussaint N, Deng S, Wright R, Skelton E, Matthew J, Kainz B, Hajnal J, Schnabel J. PRETUS: A plug-in based platform for real-time ultrasound imaging research. SOFTWAREX 2022; 17:100959. [PMID: 36619798 PMCID: PMC7614027 DOI: 10.1016/j.softx.2021.100959] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
We present PRETUS - a Plugin-based Real Time UltraSound software platform for live ultrasound image analysis and operator support. The software is lightweight; functionality is brought in via independent plug-ins that can be arranged in sequence. The software captures the real-time stream of ultrasound images from virtually any ultrasound machine, applies computational methods and visualizes the results on-the-fly. Plug-ins can run concurrently without blocking each other, and can be implemented in C++ or Python. A graphical user interface can be implemented for each plug-in and presented to the user in a compact way. The software is free and open source, and allows for rapid prototyping and testing of real-time ultrasound imaging methods in a manufacturer-agnostic fashion. The software ships with input, output and processing plug-ins, as well as tutorials that illustrate how to develop new plug-ins for PRETUS.
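The sequential plug-in arrangement described in this abstract can be sketched in a few lines. The class and function names below are purely illustrative, not the actual PRETUS C++/Python API:

```python
# Minimal sketch of a sequential plug-in pipeline in the spirit of PRETUS.
# Class names (Plugin, NormalizePlugin, ThresholdPlugin, run_pipeline) are
# hypothetical, chosen for illustration only.

class Plugin:
    """Base class: each plug-in transforms a frame and passes it on."""
    def process(self, frame):
        raise NotImplementedError

class NormalizePlugin(Plugin):
    """Rescale pixel values to [0, 1]."""
    def process(self, frame):
        lo, hi = min(frame), max(frame)
        return [(v - lo) / (hi - lo) for v in frame] if hi > lo else frame

class ThresholdPlugin(Plugin):
    """Binarize the frame at a given level."""
    def __init__(self, level=0.5):
        self.level = level
    def process(self, frame):
        return [1 if v >= self.level else 0 for v in frame]

def run_pipeline(plugins, frame):
    """Apply plug-ins in sequence, as PRETUS arranges them."""
    for p in plugins:
        frame = p.process(frame)
    return frame

result = run_pipeline([NormalizePlugin(), ThresholdPlugin(0.5)], [10, 20, 30])
print(result)  # [0, 1, 1]
```

In the real platform each plug-in runs concurrently on the live stream; this sketch only shows the sequential composition idea.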
Affiliation(s)
- Alberto Gomez
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Veronika A. Zimmer
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Department of Informatics, Technical University Munich, Germany
- Gavin Wheeler
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Nicolas Toussaint
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Shujie Deng
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Robert Wright
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Emily Skelton
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Jackie Matthew
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Bernhard Kainz
- Department of Computing, Imperial College London, UK
- Friedrich-Alexander-University Erlangen-Nürnberg, Germany
- Jo Hajnal
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Julia Schnabel
- School of Biomedical Engineering & Imaging Sciences, King’s College London, UK
- Department of Informatics, Technical University Munich, Germany
- Helmholtz Zentrum München – German Research Center for Environmental Health, Germany
35
Poole M, Ungi T, Fichtinger G, Zevin B. Training in soft tissue resection using real-time visual computer navigation feedback from the Surgery Tutor: A randomized controlled trial. Surgery 2021; 172:89-95. [PMID: 34969526 DOI: 10.1016/j.surg.2021.11.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 10/13/2021] [Accepted: 11/29/2021] [Indexed: 10/19/2022]
Abstract
BACKGROUND In competency-based medical education, surgery trainees are often required to learn procedural skills in a simulated setting before proceeding to the clinical environment. The Surgery Tutor computer navigation platform allows for real-time proctor-less assessment of open soft tissue resection skills; however, the use of this platform as an aid in acquisition of procedural skills is yet to be explored. METHODS In this prospective randomized controlled trial, 20 final year medical students were randomized to receive either training with real-time computer navigation feedback (Intervention, n = 10) or simulation training without navigation feedback (Control, n = 10) during resection of simulated non-palpable soft tissue tumors. Real-time computer navigation feedback allowed participants to visualize the position of their scalpel relative to the tumor. Computer navigation feedback was removed for postintervention assessment. Primary outcome was positive margin rate. Secondary outcomes were procedure time, mass of tissue excised, number of scalpel motions, and distance traveled by the scalpel. RESULTS Training with real-time computer navigation resulted in a significantly lower positive margin rate as compared to training without navigation feedback (0% vs 40%, P = .025). All other performance metrics were not significantly different between the 2 groups. Participants in the intervention group displayed significant improvement in positive margin rate from baseline to final assessment (80% vs 0%, P < .01), whereas participants in the Control group did not. CONCLUSION Real-time visual computer navigation feedback from the Surgery Tutor resulted in superior acquisition of procedural skills as compared to training without navigation feedback.
Affiliation(s)
- Meredith Poole
- Kingston Health Sciences Center, Queen's University, Kingston, Ontario, Canada
- Tamas Ungi
- Kingston Health Sciences Center, Queen's University, Kingston, Ontario, Canada
- Gabor Fichtinger
- Kingston Health Sciences Center, Queen's University, Kingston, Ontario, Canada
- Boris Zevin
- Kingston Health Sciences Center, Queen's University, Kingston, Ontario, Canada
36
Xie X, Song Y, Ye F, Yan H, Wang S, Zhao X, Dai J. Prior information guided auto-contouring of breast gland for deformable image registration in postoperative breast cancer radiotherapy. Quant Imaging Med Surg 2021; 11:4721-4730. [PMID: 34888184 DOI: 10.21037/qims-20-1141] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Accepted: 03/04/2021] [Indexed: 12/24/2022]
Abstract
Background Contouring of the breast gland in planning CT is important for postoperative radiotherapy of patients after breast-conserving surgery (BCS). However, the contouring task is difficult because of the poor contrast of the breast gland in planning CT. To improve efficiency and accuracy, prior information was introduced into a 3D U-Net model to predict the contour of the breast gland automatically. Methods The preoperative CT was first aligned to the planning CT via affine registration. The resulting transform was then applied to the contour of the breast gland in the preoperative CT, yielding the corresponding contour in the planning CT. This transformed contour was a preliminary estimate of the breast gland in the planning CT and was used as prior information in a 3D U-Net model to obtain a more accurate contour. For evaluation, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were used to assess the deep learning (DL) model's prediction accuracy. Results The average DSC and HD of the prediction model were 0.775±0.065 and 44.979±20.565 for the breast gland without the input of prior information, and 0.830±0.038 and 17.896±5.737 with it (0.775 vs. 0.830, P=0.0014; 44.979 vs. 17.896, P=0.002). Conclusions The prediction accuracy increased significantly with the introduction of prior information, which provided valuable information about the geometrical distribution of the target for model training. This method provides an effective way to identify low-contrast targets from surrounding tissues in CT and will be useful in other image modalities.
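The two evaluation metrics used in this study, DSC and HD, follow standard definitions that can be illustrated with a minimal NumPy sketch (this is not the authors' code, only the textbook formulas on toy data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(p, q):
    """Symmetric Hausdorff distance between two point sets of shape (N, d)
    and (M, d), computed by brute force."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((4, 4)); a[1:3, 1:3] = 1  # 4-pixel square
b = np.zeros((4, 4)); b[1:3, 1:4] = 1  # overlapping 6-pixel rectangle
print(round(dice(a, b), 3))  # 0.8
```

A higher DSC and lower HD, as reported with the prior information, mean better overlap and a smaller worst-case contour deviation.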
Affiliation(s)
- Xin Xie
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuchun Song
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Feng Ye
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hui Yan
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shulian Wang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinming Zhao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
37
Zopf LM, Heimel P, Geyer SH, Kavirayani A, Reier S, Fröhlich V, Stiglbauer-Tscholakoff A, Chen Z, Nics L, Zinnanti J, Drexler W, Mitterhauser M, Helbich T, Weninger WJ, Slezak P, Obenauf A, Bühler K, Walter A. Cross-Modality Imaging of Murine Tumor Vasculature-a Feasibility Study. Mol Imaging Biol 2021. [PMID: 34101107 DOI: 10.1007/s11307-021-01615-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/14/2023]
Abstract
Tumor vasculature and angiogenesis play a crucial role in tumor progression. Their visualization is therefore of utmost importance to the community. In this proof-of-principle study, we have established a novel cross-modality imaging (CMI) pipeline to characterize exactly the same murine tumors across scales and penetration depths, using orthotopic melanoma models. This allowed the acquisition of a comprehensive set of vascular parameters for a single tumor. The workflow visualizes capillaries at different length scales, puts them into the context of the overall tumor vessel network and allows quantification and comparison of vessel densities and morphologies by different modalities. The workflow adds information about hypoxia and blood flow rates. The CMI approach includes well-established technologies such as magnetic resonance imaging (MRI), positron emission tomography (PET), computed tomography (CT), and ultrasound (US), and modalities that are recent entrants into preclinical discovery such as optical coherence tomography (OCT) and high-resolution episcopic microscopy (HREM). This novel CMI platform establishes the feasibility of combining these technologies using an extensive image processing pipeline. Despite the challenges pertaining to the integration of microscopic and macroscopic data across spatial resolutions, we also established an open-source pipeline for the semi-automated co-registration of the diverse multiscale datasets, which enables truly correlative vascular imaging. Although focused on tumor vasculature, our CMI platform can be used to tackle a multitude of research questions in cancer biology.
Affiliation(s)
- Lydia M Zopf
- Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Patrick Heimel
- Ludwig Boltzmann Institute for Experimental and Clinical Traumatology in the AUVA Trauma Research Center, Austrian BioImaging/CMI, Vienna, Austria
- Core Facility Hard Tissue and Biomaterial Research, Karl Donath Laboratory, University Clinic of Dentistry, Medical University Vienna, Vienna, Austria
- Stefan H Geyer
- Division of Anatomy, MIC, Medical University of Vienna, Austrian BioImaging/CMI, Vienna, Austria
- Anoop Kavirayani
- Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Susanne Reier
- Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Vanessa Fröhlich
- Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Alexander Stiglbauer-Tscholakoff
- Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Zhe Chen
- Medical University of Vienna, Vienna, Austria
- Lukas Nics
- Medical University of Vienna, Vienna, Austria
- Jelena Zinnanti
- Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Markus Mitterhauser
- Medical University of Vienna, Vienna, Austria
- Ludwig Boltzmann Institute Applied Diagnostics, Vienna, Austria
- Thomas Helbich
- Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Wolfgang J Weninger
- Division of Anatomy, MIC, Medical University of Vienna, Austrian BioImaging/CMI, Vienna, Austria
- Paul Slezak
- Ludwig Boltzmann Institute for Experimental and Clinical Traumatology in the AUVA Trauma Research Center, Austrian BioImaging/CMI, Vienna, Austria
- Anna Obenauf
- Research Institute of Molecular Pathology (IMP), Vienna Biocenter (VBC), Vienna, Austria
- Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Austrian BioImaging/CMI, Vienna, Austria
- Andreas Walter
- Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
38
García-Sevilla M, Moreta-Martinez R, García-Mato D, Pose-Diez-de-la-Lastra A, Pérez-Mañanes R, Calvo-Haro JA, Pascau J. Augmented Reality as a Tool to Guide PSI Placement in Pelvic Tumor Resections. SENSORS 2021; 21:s21237824. [PMID: 34883825 PMCID: PMC8659846 DOI: 10.3390/s21237824] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 11/17/2021] [Accepted: 11/22/2021] [Indexed: 02/02/2023]
Abstract
Patient-specific instruments (PSIs) have become a valuable tool for osteotomy guidance in complex surgical scenarios such as pelvic tumor resection. They provide similar accuracy to surgical navigation systems but are generally more convenient and faster. However, their correct placement can become challenging in some anatomical regions, and it cannot be verified objectively during the intervention. Incorrect installations can result in high deviations from the planned osteotomy, increasing the risk of positive resection margins. In this work, we propose to use augmented reality (AR) to guide and verify PSI placement. We designed an experiment to assess the accuracy provided by the system using a smartphone and the HoloLens 2 and compared the results with the conventional freehand method. The results showed significant differences: AR guidance prevented high osteotomy deviations, reducing the maximal deviation from 54.03 mm with freehand placement to less than 5 mm with AR guidance. The experiment was performed in two versions of a plastic three-dimensional (3D) printed phantom, one including a silicone layer to simulate tissue for added realism. We also studied how differences in the shape and location of PSIs affect their accuracy, concluding that those with smaller sizes and a homogeneous target surface are more prone to errors. Our study presents promising results that demonstrate AR's potential to overcome the present limitations of PSIs conveniently and effectively.
Affiliation(s)
- Mónica García-Sevilla
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain; (M.G.-S.); (R.M.-M.); (D.G.-M.); (A.P.-D.-d.-l.-L.)
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Rafael Moreta-Martinez
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain; (M.G.-S.); (R.M.-M.); (D.G.-M.); (A.P.-D.-d.-l.-L.)
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- David García-Mato
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain; (M.G.-S.); (R.M.-M.); (D.G.-M.); (A.P.-D.-d.-l.-L.)
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Alicia Pose-Diez-de-la-Lastra
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain; (M.G.-S.); (R.M.-M.); (D.G.-M.); (A.P.-D.-d.-l.-L.)
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Rubén Pérez-Mañanes
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Servicio de Cirugía Ortopédica y Traumatología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
- José Antonio Calvo-Haro
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Servicio de Cirugía Ortopédica y Traumatología, Hospital General Universitario Gregorio Marañón, 28007 Madrid, Spain
- Javier Pascau
- Departamento de Bioingeniería e Ingeniería Aeroespacial, Universidad Carlos III de Madrid, 28911 Leganés, Spain; (M.G.-S.); (R.M.-M.); (D.G.-M.); (A.P.-D.-d.-l.-L.)
- Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain; (R.P.-M.); (J.A.C.-H.)
- Correspondence: ; Tel.: +34-91-624-8196
39
Barr C, Hisey R, Ungi T, Fichtinger G. Ultrasound Probe Pose Classification for Task Recognition in Central Venous Catheterization. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:5023-5026. [PMID: 34892335 DOI: 10.1109/embc46164.2021.9630033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Central Line Tutor is a system that facilitates real-time feedback during training for central venous catheterization. One limitation of Central Line Tutor is its reliance on expensive, cumbersome electromagnetic tracking to facilitate various training aids, including ultrasound task identification and segmentation of neck vasculature. The purpose of this study is to validate deep learning methods for vessel segmentation and ultrasound pose classification in order to mitigate the system's reliance on electromagnetic tracking. A large dataset of segmented and classified ultrasound images was generated from participant data captured using Central Line Tutor. A U-Net architecture was used to perform vessel segmentation, while a shallow Convolutional Neural Network (CNN) architecture was designed to classify the pose of the ultrasound probe. A second classifier architecture was also tested that used the U-Net output as the CNN input. The mean testing set Intersection over Union (IoU) score for U-Net cross-validation was 0.746 ± 0.052. The mean test set classification accuracy for the CNN was 92.0 ± 3.0%, while the U-Net + CNN achieved 92.7 ± 2.1%. This study highlights the potential for deep learning on ultrasound images to replace the current electromagnetic tracking-based methods for vessel segmentation and ultrasound pose classification, and represents an important step towards removing the electromagnetic tracker altogether. Removing the need for an external tracking system would significantly reduce the cost of Central Line Tutor and make it far more accessible to the medical trainees that would benefit from it most.
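The Intersection over Union score reported for the U-Net cross-validation follows the standard definition, which can be illustrated with a small NumPy sketch (an illustration of the metric only, not the study's implementation):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union between a predicted and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

pred = np.array([[1, 1, 0], [0, 1, 0]])  # 3 predicted vessel pixels
gt   = np.array([[1, 1, 0], [0, 0, 0]])  # 2 ground-truth vessel pixels
print(round(iou(pred, gt), 3))  # 0.667
```

An IoU of 0.746, as reported, means roughly three quarters of the combined predicted and true vessel area overlaps.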
40
Jackson P, Simon R, Linte C. Integrating Real-time Video View with Pre-operative Models for Image-guided Renal Navigation: An in vitro Evaluation Study. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:1366-1371. [PMID: 34891539 PMCID: PMC9137973 DOI: 10.1109/embc46164.2021.9629683] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
To provide a complete picture of a scene sufficient to conduct a minimally invasive, image-guided renal intervention, real-time laparoscopic video needs to be integrated with underlying anatomy information typically available from pre- or intra-operative images. Here we present a simple and efficient hand-eye calibration method for an optically tracked camera, which only requires the acquisition of several poses of a Polaris stylus featuring 4 markers automatically localized by both the camera and the optical tracker. We evaluate the calibration using both the Polaris stylus and a patient-specific 3D printed kidney phantom, as a function of the number of poses acquired and the depth of the imaged scene within the camera's field of view, by projecting several landmarks at known 3D locations on the imaged object onto the camera image. The RMS projection error decreases with increasing distance from the camera to the imaged object, from 7 pixels at 15-18 mm to under 2 pixels at 28-30 mm, which corresponds to a 2 mm and 1 mm error, respectively, in 3D space.
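The RMS projection error used in this evaluation can be sketched with a standard pinhole camera model in NumPy. The intrinsics, pose, and landmark values below are made up for illustration; this is not the authors' calibration code:

```python
import numpy as np

def project(K, R, t, pts3d):
    """Project 3D points (N, 3) into pixels with a pinhole camera K[R|t]."""
    cam = pts3d @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                 # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def rms_projection_error(observed, projected):
    """RMS pixel distance between observed and reprojected landmarks."""
    return float(np.sqrt(np.mean(np.sum((observed - projected) ** 2, axis=1))))

# Hypothetical intrinsics (focal 800 px, principal point 320,240) and identity pose.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 20.0], [5.0, 0.0, 25.0]])  # landmarks in mm

proj = project(K, R, t, pts)
obs = proj + np.array([[1.0, 0.0], [0.0, 1.0]])  # simulated 1-pixel detection errors
print(round(rms_projection_error(obs, proj), 3))  # 1.0
```

The reported trend (error in pixels shrinking with distance while the metric error stays near 1-2 mm) falls out of the perspective divide: a fixed metric offset subtends fewer pixels farther from the camera.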
41
Connolly L, Jamzad A, Kaufmann M, Farquharson CE, Ren K, Rudan JF, Fichtinger G, Mousavi P. Combined Mass Spectrometry and Histopathology Imaging for Perioperative Tissue Assessment in Cancer Surgery. J Imaging 2021; 7:203. [PMID: 34677289 PMCID: PMC8539093 DOI: 10.3390/jimaging7100203] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/28/2021] [Accepted: 09/30/2021] [Indexed: 12/16/2022] Open
Abstract
Mass spectrometry is an effective imaging tool for evaluating biological tissue to detect cancer. With the assistance of deep learning, this technology can be used as a perioperative tissue assessment tool that will facilitate informed surgical decisions. To achieve such a system requires the development of a database of mass spectrometry signals and their corresponding pathology labels. Assigning correct labels, in turn, necessitates precise spatial registration of histopathology and mass spectrometry data. This is a challenging task due to the domain differences and noisy nature of images. In this study, we create a registration framework for mass spectrometry and pathology images as a contribution to the development of perioperative tissue assessment. In doing so, we explore two opportunities in deep learning for medical image registration, namely, unsupervised, multi-modal deformable image registration and evaluation of the registration. We test this system on prostate needle biopsy cores that were imaged with desorption electrospray ionization mass spectrometry (DESI) and show that we can successfully register DESI and histology images to achieve accurate alignment and, consequently, labelling for future training. This automation is expected to improve the efficiency and development of a deep learning architecture that will benefit the use of mass spectrometry imaging for cancer diagnosis.
Affiliation(s)
- Laura Connolly
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (A.J.); (C.E.F.); (G.F.); (P.M.)
- Amoon Jamzad
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (A.J.); (C.E.F.); (G.F.); (P.M.)
- Martin Kaufmann
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada; (M.K.); (J.F.R.)
- Catriona E. Farquharson
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (A.J.); (C.E.F.); (G.F.); (P.M.)
- Kevin Ren
- Department of Pathology and Molecular Medicine, Queen’s University, Kingston, ON K7L 3N6, Canada
- John F. Rudan
- Department of Surgery, Queen’s University, Kingston, ON K7L 3N6, Canada; (M.K.); (J.F.R.)
- Gabor Fichtinger
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (A.J.); (C.E.F.); (G.F.); (P.M.)
- Parvin Mousavi
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (A.J.); (C.E.F.); (G.F.); (P.M.)
42

43
Holden MS, Portillo A, Salame G. Skills Classification in Cardiac Ultrasound with Temporal Convolution and Domain Knowledge Using a Low-Cost Probe Tracker. ULTRASOUND IN MEDICINE & BIOLOGY 2021; 47:3002-3013. [PMID: 34344562 DOI: 10.1016/j.ultrasmedbio.2021.06.011] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 04/29/2021] [Accepted: 06/17/2021] [Indexed: 06/13/2023]
Abstract
As point-of-care ultrasound (POCUS) becomes more integrated into clinical practice, it is essential to address all aspects of ultrasound operator proficiency. Ultrasound proficiency requires the ability to acquire, interpret and integrate bedside ultrasound images. The difference in image acquisition psychomotor skills between novice (trainee) and expert (instructor) ultrasonographers has not been described. We created an inexpensive system, called Probe Watch, to record probe motion and assess image acquisition in cardiac POCUS using an inertial measurement device and software for data recording based on open-source components. We designed a temporal convolutional network for skills classification from probe motion that integrates clinical domain knowledge. We further designed data augmentation methods to improve its generalization. Subsequently, we validated the setup and assessment method on a set of novice and expert sonographers performing cardiac ultrasound in a simulation-based training environment. The proposed methods classified participants as novice or expert with areas under the receiver operating characteristic curve of 0.931 and 0.761 for snippets and trials, respectively. Integrating domain knowledge into the neural network had added value. Furthermore, we identified the most discriminative features for assessment. Probe Watch quantifies motion during cardiac ultrasound and provides insight into probe motion behavior. It may be deployed during cardiac ultrasound training to monitor learning curves objectively and automatically.
Affiliation(s)
- Matthew S Holden
- School of Computer Science, Carleton University, Ottawa, Ontario, Canada.
44
Tang S, Yang X, Shajudeen P, Sears C, Taraballi F, Weiner B, Tasciotti E, Dollahon D, Park H, Righetti R. A CNN-based method to reconstruct 3-D spine surfaces from US images in vivo. Med Image Anal 2021; 74:102221. [PMID: 34520960 DOI: 10.1016/j.media.2021.102221] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 08/26/2021] [Accepted: 08/27/2021] [Indexed: 01/12/2023]
Abstract
Three-dimensional (3-D) reconstruction of the spine surface is of strong clinical relevance for the diagnosis and prognosis of spine disorders and intra-operative image guidance. In this paper, we report a new technique to reconstruct lumbar spine surfaces in 3-D from non-invasive ultrasound (US) images acquired in free-hand mode. US images randomly sampled from in vivo scans of 9 rabbits were used to train a U-net convolutional neural network (CNN). More specifically, a late fusion (LF)-based U-net trained jointly on B-mode and shadow-enhanced B-mode images was generated by fusing two individual U-nets and expanding the set of trainable parameters to around twice the capacity of a basic U-net. This U-net was then applied to predict spine surface labels in in vivo images obtained from another rabbit, which were then used for 3-D spine surface reconstruction. The underlying pose of the transducer during the scan was estimated by registering stacks of US images to a geometrical model derived from corresponding CT data and used to align detected surface points. Final performance of the reconstruction method was assessed by computing the mean absolute error (MAE) between pairs of spine surface points detected from US and CT and by counting the total number of surface points detected from US. Comparison was made between the LF-based U-net and a previously developed phase symmetry (PS)-based method. Using the LF-based U-net, the average number of US surface points across the lumbar region increased by 21.61% and the MAE decreased by 26.28% relative to the PS-based method. The overall MAE (in mm) was 0.24±0.29. Based on these results, we conclude that: 1) the proposed U-net can detect the spine posterior arch with low MAE and a large number of US surface points and 2) the newly proposed reconstruction framework may complement existing methods and, under certain circumstances, be used without the aid of an external tracking system in intra-operative spine applications.
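The MAE between US-detected and CT-derived surface points can be illustrated as a mean nearest-neighbour distance. This is a simplified NumPy sketch of one common formulation; the paper computes the MAE between matched point pairs after registration:

```python
import numpy as np

def surface_mae(us_pts, ct_pts):
    """Mean absolute distance from each US-detected surface point (N, 3) to
    its nearest CT surface point (M, 3). A simplified stand-in for the
    paired-point MAE used after US-CT registration."""
    d = np.linalg.norm(us_pts[:, None, :] - ct_pts[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

us = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # toy US surface points (mm)
ct = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])  # toy CT surface points (mm)
print(surface_mae(us, ct))  # 0.5
```

On toy data the metric is trivial; the reported 0.24±0.29 mm corresponds to sub-millimetre agreement between the US-detected and CT spine surfaces.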
Affiliation(s)
- Songyuan Tang
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Xu Yang
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Peer Shajudeen
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Candice Sears
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
- Francesca Taraballi
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
- Bradley Weiner
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
- Ennio Tasciotti
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
- Devon Dollahon
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Hangue Park
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
- Raffaella Righetti
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
45
Fichtinger G, Mousavi P, Ungi T, Fenster A, Abolmaesumi P, Kronreif G, Ruiz-Alzola J, Ndoye A, Diao B, Kikinis R. Design of an Ultrasound-Navigated Prostate Cancer Biopsy System for Nationwide Implementation in Senegal. J Imaging 2021; 7:154. [PMID: 34460790 PMCID: PMC8404908 DOI: 10.3390/jimaging7080154]
Abstract
This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have been independently validated previously in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports on the results of the design process of NaviPBx. Our approach concentrates on "frugal technology", intended to be affordable for low- and middle-income countries (LMICs). Our project promises the wide-scale application of prostate biopsy and will foster time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.
Affiliation(s)
- Gabor Fichtinger: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Parvin Mousavi: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Tamas Ungi: School of Computing, Queen’s University, Kingston, ON K7L 2N8, Canada
- Aaron Fenster: Department of Medical Biophysics, Schulich School of Medicine & Dentistry, Western University, London, ON N6A 5B7, Canada
- Purang Abolmaesumi: Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Gernot Kronreif: Austrian Center for Medical Innovation and Technology, 2700 Wiener Neustadt, Austria
- Juan Ruiz-Alzola: Departamento de Señales y Comunicaciones, University of Las Palmas de Gran Canaria, 35001 Las Palmas, Spain
- Alain Ndoye: Department of Urology, Hôpital Aristide Le Dantec, Cheikh Anta Diop University, Dakar 10700, Senegal
- Babacar Diao: Department of Urology, Hôpital Aristide Le Dantec, Cheikh Anta Diop University, Dakar 10700, Senegal; Department of Urology, Ouakam Military Hospital, Dakar BP 5321, Senegal
- Ron Kikinis: Harvard Medical School, Brigham and Women’s Hospital, Boston, MA 02115, USA
46
Yavas G, Caliskan KE, Cagli MS. Three-dimensional-printed marker-based augmented reality neuronavigation: a new neuronavigation technique. Neurosurg Focus 2021; 51:E20. [PMID: 34333464 DOI: 10.3171/2021.5.focus21206]
Abstract
OBJECTIVE The aim of this study was to assess the precision and feasibility of 3D-printed marker-based augmented reality (AR) neurosurgical navigation and to compare its intraoperative use with optical tracking neuronavigation systems (OTNSs). METHODS Three-dimensional-printed markers, designed for CT, MRI, and intraoperative use, were tracked with mobile devices using a light detection and ranging (LiDAR) camera for AR. The 3D segmentations of intracranial tumors were created from CT and MR images, and preoperative registration of the marker and pathology was performed. A patient-specific, surgeon-facilitated mobile application was developed, and the mobile device camera was used for neuronavigation with high accuracy, ease, and cost-effectiveness. After accuracy values were preliminarily assessed, the technique was used intraoperatively in 8 patients. RESULTS The mobile device LiDAR camera successfully overlaid images of virtual tumor segmentations according to the position of a 3D-printed marker. The measured targeting error ranged from 0.5 to 3.5 mm (mean 1.70 ± 1.02 mm, median 1.58 mm). The mean preoperative preparation time was 35.7 ± 5.56 minutes, which is longer than that for routine OTNSs, but the time required for preoperative registration and placement of the intraoperative marker was very brief compared with other neurosurgical navigation systems (mean 1.02 ± 0.3 minutes). CONCLUSIONS The 3D-printed marker-based AR neuronavigation system was a clinically feasible, highly precise, low-cost, and easy-to-use navigation technique. Three-dimensional segmentations of intracranial tumors were targeted on the brain and clearly visualized from the skin incision to the end of surgery.
47
Vendries V, Ungi T, Harry J, Kunz M, Podlipská J, MacKenzie L, Venne G. Three-dimensional ultrasound for knee osteophyte depiction: a comparative study to computed tomography. Int J Comput Assist Radiol Surg 2021; 16:1749-1759. [PMID: 34313914 PMCID: PMC8580923 DOI: 10.1007/s11548-021-02456-4]
Abstract
Purpose Osteophytes are common radiographic markers of osteoarthritis. However, they are not accurately depicted using conventional imaging, thus hampering surgical interventions that rely on pre-operative images. Studies have shown that ultrasound (US) is promising at detecting osteophytes and monitoring the progression of osteoarthritis. Furthermore, three-dimensional (3D) ultrasound reconstructions may offer a means to quantify osteophytes. The purpose of this study was to compare the accuracy of osteophyte depiction in the knee joint between 3D US and conventional computed tomography (CT). Methods Eleven human cadaveric knees were pre-screened for the presence of osteophytes. Three osteoarthritic knees were selected, and then, 3D US and CT images were obtained, segmented, and digitally reconstructed in 3D. After dissection, high-resolution structured light scanner (SLS) images of the joint surfaces were obtained. Surface matching and root mean square (RMS) error analyses of surface distances were performed to assess the accuracy of each modality in capturing osteophytes. The RMS errors were compared between 3D US, CT and SLS models. Results Average RMS error comparisons for 3D US versus SLS and CT versus SLS models were 0.87 mm ± 0.33 mm (average ± standard deviation) and 0.95 mm ± 0.32 mm, respectively. No statistical difference was found between 3D US and CT. Comparative observations of imaging modalities suggested that 3D US better depicted osteophytes with cartilage and fibrocartilage tissue characteristics compared to CT. Conclusion Using 3D US can improve the depiction of osteophytes with a cartilaginous portion compared to CT. It can also provide useful information about the presence and extent of osteophytes. 
Whilst algorithm improvements for automatic segmentation and registration of US are needed to provide a more robust investigation of osteophyte depiction accuracy, this investigation puts forward the potential application for 3D US in routine diagnostic evaluations and pre-operative planning of osteoarthritis.
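The surface-comparison metric described above reduces to an RMS of point-to-nearest-point distances between a reconstructed model (3D US or CT) and the SLS reference. A minimal sketch under stated assumptions: the surfaces are already co-registered point clouds, and a brute-force nearest-neighbour search stands in for the surface-matching software actually used.

```python
import numpy as np

def rms_surface_error(model_pts, reference_pts):
    """RMS of distances from each point of a reconstructed model
    (3D US or CT) to its nearest point on the reference (SLS) surface.

    Inputs are N x 3 arrays in millimetres, assumed co-registered.
    For real meshes a KD-tree (e.g. scipy.spatial.cKDTree) would
    replace the brute-force nearest-neighbour search.
    """
    m = np.asarray(model_pts, dtype=float)
    r = np.asarray(reference_pts, dtype=float)
    # Distance from each model point to its closest reference point.
    d = np.linalg.norm(m[:, None, :] - r[None, :, :], axis=2).min(axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

Values such as the reported 0.87 mm (3D US vs SLS) and 0.95 mm (CT vs SLS) would be averages of this quantity over the scanned knees.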
Affiliation(s)
- Valeria Vendries: Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Tamas Ungi: School of Computing, Queen's University, Kingston, ON, K7L 3N6, Canada
- Jordan Harry: Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Manuela Kunz: School of Computing, Queen's University, Kingston, ON, K7L 3N6, Canada
- Jana Podlipská: Research Unit of Medical Imaging, Physics and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Les MacKenzie: Anatomical Sciences Program and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, K7L 3N6, Canada
- Gabriel Venne: Department of Anatomy and Cell Biology, McGill University, Montreal, QC, H3A 0G4, Canada
48
Neves CA, Tran ED, Blevins NH, Hwang PH. Deep learning automated segmentation of middle skull-base structures for enhanced navigation. Int Forum Allergy Rhinol 2021; 11:1694-1697. [PMID: 34185969 DOI: 10.1002/alr.22856]
Affiliation(s)
- Caio A Neves: Faculty of Medicine, University of Brasilia, Brasília, Brazil
- Emma D Tran: Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Nikolas H Blevins: Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
- Peter H Hwang: Department of Otolaryngology-Head & Neck Surgery, Stanford University School of Medicine, Stanford, California, USA
49
Lazarus J, Asselin M, Kaestner L. Optically tracked needle for ultrasound guided percutaneous nephrolithotomy (PCNL) puncture: A preliminary report. J Endourol 2021; 35:1733-1737. [PMID: 34114486 DOI: 10.1089/end.2021.0136]
Abstract
Introduction Precise needle puncture of the renal collecting system is an essential step for successful percutaneous nephrolithotomy (PCNL). The use of ultrasound for puncture is receiving increased attention, but ultrasound has recognised limitations related to poor visualisation of the needle tip. We aimed to assess whether an affordable, open-source computerised needle navigation training system using optically tracked ultrasonography could improve urological trainees' performance of simulated PCNL puncture, compared with conventional freehand sonographic puncture. Methods This study describes a PCNL navigation system that can be recreated with any standard ultrasound machine using relatively inexpensive components. The system allows the needle tip to be precisely located in the ultrasound image and its trajectory planned, with audio feedback aiding the appreciation of needle-tip-to-target-calyx proximity. Eight urology trainees participated in assessment of the PCNL training model. Alternating freehand (control) and tracked-needle (experimental) punctures were performed on a phantom kidney. Total procedure time and the number of needle reinsertions required were recorded. Results The mean time for freehand puncture was 89 seconds (range 13-173), while that for the optically tracked needle was 36 seconds (range 12-72); puncture time was thus significantly reduced, by an average of 53 seconds (p = 0.045), in the experimental arm. The mean number of needle reinsertions was 3.3 with freehand puncture compared with 1.3 with optically tracked puncture (p = 0.005). The root mean square error (RMSE) of the system was 1.8 mm. Conclusion This study demonstrates that affordable hardware and open-source software can be used to construct an optically tracked ultrasound navigation system for PCNL training. Statistically significant reductions in puncture time and in the number of passes required for successful puncture were demonstrated.
We feel that computerised needle tracking during PCNL puncture deserves further evaluation in a training, and potentially, a clinical setting.
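The freehand-versus-tracked comparisons above are paired within-trainee measurements, for which a paired t-statistic is the natural summary. The abstract does not state which statistical test produced its p-values, so the following is an illustrative sketch only; the sample data in the usage note are invented, not the study's measurements.

```python
import math

def paired_t_statistic(freehand, tracked):
    """Paired t-statistic for per-trainee measurements (e.g. puncture
    times in seconds) under the freehand and tracked conditions.

    A positive value indicates the tracked condition gave smaller
    measurements (e.g. faster punctures).
    """
    diffs = [f - t for f, t in zip(freehand, tracked)]
    n = len(diffs)
    d_bar = sum(diffs) / n
    # Sample standard deviation of the paired differences.
    sd = math.sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (sd / math.sqrt(n))
```

With hypothetical times such as `paired_t_statistic([90, 80, 100, 85], [40, 35, 30, 38])`, the statistic would be compared against a t-distribution with n-1 degrees of freedom to obtain a p-value.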
Affiliation(s)
- John Lazarus: Division of Urology, Groote Schuur Hospital, University of Cape Town, Cape Town, South Africa
- Mark Asselin: Laboratory for Percutaneous Surgery, Queen's University Faculty of Health Sciences, Kingston, Ontario, Canada
- Lisa Kaestner: Urology, University of Cape Town, Cape Town, South Africa
50
Zopf LM, Heimel P, Geyer SH, Kavirayani A, Reier S, Fröhlich V, Stiglbauer-Tscholakoff A, Chen Z, Nics L, Zinnanti J, Drexler W, Mitterhauser M, Helbich T, Weninger WJ, Slezak P, Obenauf A, Bühler K, Walter A. Cross-Modality Imaging of Murine Tumor Vasculature-a Feasibility Study. Mol Imaging Biol 2021; 23:874-893. [PMID: 34101107 PMCID: PMC8578087 DOI: 10.1007/s11307-021-01615-y]
Abstract
Tumor vasculature and angiogenesis play a crucial role in tumor progression. Their visualization is therefore of utmost importance to the community. In this proof-of-principle study, we have established a novel cross-modality imaging (CMI) pipeline to characterize exactly the same murine tumors across scales and penetration depths, using orthotopic melanoma models. This allowed the acquisition of a comprehensive set of vascular parameters for a single tumor. The workflow visualizes capillaries at different length scales, puts them into the context of the overall tumor vessel network and allows quantification and comparison of vessel densities and morphologies by different modalities. The workflow adds information about hypoxia and blood flow rates. The CMI approach includes well-established technologies such as magnetic resonance imaging (MRI), positron emission tomography (PET), computed tomography (CT), and ultrasound (US), and modalities that are recent entrants into preclinical discovery such as optical coherence tomography (OCT) and high-resolution episcopic microscopy (HREM). This novel CMI platform establishes the feasibility of combining these technologies using an extensive image processing pipeline. Despite the challenges pertaining to the integration of microscopic and macroscopic data across spatial resolutions, we also established an open-source pipeline for the semi-automated co-registration of the diverse multiscale datasets, which enables truly correlative vascular imaging. Although focused on tumor vasculature, our CMI platform can be used to tackle a multitude of research questions in cancer biology.
Affiliation(s)
- Lydia M Zopf: Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Patrick Heimel: Ludwig Boltzmann Institute for Experimental and Clinical Traumatology in the AUVA Trauma Research Center, Austrian BioImaging/CMI, Vienna, Austria; Core Facility Hard Tissue and Biomaterial Research, Karl Donath Laboratory, University Clinic of Dentistry, Medical University Vienna, Vienna, Austria
- Stefan H Geyer: Division of Anatomy, MIC, Medical University of Vienna, Austrian BioImaging/CMI, Vienna, Austria
- Anoop Kavirayani: Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Susanne Reier: Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Vanessa Fröhlich: Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Alexander Stiglbauer-Tscholakoff: Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Zhe Chen: Medical University of Vienna, Vienna, Austria
- Lukas Nics: Medical University of Vienna, Vienna, Austria
- Jelena Zinnanti: Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria
- Markus Mitterhauser: Medical University of Vienna, Vienna, Austria; Ludwig Boltzmann Institute Applied Diagnostics, Vienna, Austria
- Thomas Helbich: Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Structural Preclinical Imaging, Medical University of Vienna, Vienna, Austria
- Wolfgang J Weninger: Division of Anatomy, MIC, Medical University of Vienna, Austrian BioImaging/CMI, Vienna, Austria
- Paul Slezak: Ludwig Boltzmann Institute for Experimental and Clinical Traumatology in the AUVA Trauma Research Center, Austrian BioImaging/CMI, Vienna, Austria
- Anna Obenauf: Research Institute of Molecular Pathology (IMP), Vienna Biocenter (VBC), Vienna, Austria
- Katja Bühler: VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Austrian BioImaging/CMI, Vienna, Austria
- Andreas Walter: Austrian BioImaging/CMI, Vienna BioCenter Core Facilities GmbH (VBCF), Vienna, Austria