1. Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging 2024; 43:846-859. [PMID: 37831582] [DOI: 10.1109/tmi.2023.3323215]
Abstract
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
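Because the MR signal is sampled in frequency space, even a simple rigid translation partway through the acquisition corrupts the entire reconstructed image rather than a local region. The following NumPy sketch (illustrative only, not taken from the reviewed paper) simulates this by applying the Fourier shift theorem to the k-space lines acquired after the motion occurs:

```python
import numpy as np

def simulate_translation_artefact(image, shift_x, onset_row):
    """Corrupt a 2D image with a rigid in-plane translation that occurs
    partway through a line-by-line Cartesian k-space acquisition."""
    k = np.fft.fftshift(np.fft.fft2(image))  # k-space of the still object
    ny, nx = image.shape
    # Fourier shift theorem: translation by shift_x pixels multiplies
    # k-space by a linear phase ramp exp(-2*pi*i*kx*shift_x).
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    k_moved = k * np.exp(-2j * np.pi * kx * shift_x)[None, :]
    # Phase-encode lines acquired after the motion see the shifted object.
    k_corrupt = k.copy()
    k_corrupt[onset_row:, :] = k_moved[onset_row:, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupt)))

# Example: ghosting spreads across the whole image, not just where motion occurred.
phantom = np.zeros((128, 128)); phantom[40:90, 40:90] = 1.0
corrupted = simulate_translation_artefact(phantom, shift_x=5.0, onset_row=64)
```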
2. Roadmap on the use of artificial intelligence for imaging of vulnerable atherosclerotic plaque in coronary arteries. Nat Rev Cardiol 2024; 21:51-64. [PMID: 37464183] [DOI: 10.1038/s41569-023-00900-3]
Abstract
Artificial intelligence (AI) is likely to revolutionize the way medical images are analysed and has the potential to improve the identification and analysis of vulnerable or high-risk atherosclerotic plaques in coronary arteries, leading to advances in the treatment of coronary artery disease. However, coronary plaque analysis is challenging owing to cardiac and respiratory motion, as well as the small size of cardiovascular structures. Moreover, the analysis of coronary imaging data is time-consuming, can be performed only by clinicians with dedicated cardiovascular imaging training, and is subject to considerable interreader and intrareader variability. AI has the potential to improve the assessment of images of vulnerable plaque in coronary arteries, but requires robust development, testing and validation. Combining human expertise with AI might facilitate the reliable and valid interpretation of images obtained using CT, MRI, PET, intravascular ultrasonography and optical coherence tomography. In this Roadmap, we review existing evidence on the application of AI to the imaging of vulnerable plaque in coronary arteries and provide consensus recommendations developed by an interdisciplinary group of experts on AI and non-invasive and invasive coronary imaging. We also outline future requirements of AI technology to address bias, uncertainty, explainability and generalizability, which are all essential for the acceptance of AI and its clinical utility in handling the anticipated growing volume of coronary imaging procedures.
3. Fast fetal head compounding from multi-view 3D ultrasound. Med Image Anal 2023; 89:102793. [PMID: 37482034] [DOI: 10.1016/j.media.2023.102793]
Abstract
The diagnostic value of ultrasound images may be limited by the presence of artefacts, notably acoustic shadows, lack of contrast and localised signal dropout. Some of these artefacts are dependent on probe orientation and scan technique, with each image giving a distinct, partial view of the imaged anatomy. In this work, we propose a novel method to fuse the partially imaged fetal head anatomy, acquired from numerous views, into a single coherent 3D volume of the full anatomy. Firstly, a stream of freehand 3D US images is acquired using a single probe, capturing as many different views of the head as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using a recurrent spatial transformer network, making our approach robust to fast fetal and probe motion. Secondly, images are fused by averaging only the most consistent and salient features from all images, producing a more detailed compounding, while minimising artefacts. We evaluated our method quantitatively and qualitatively, using image quality metrics and expert ratings, yielding state-of-the-art performance in terms of image quality and robustness to misalignments. Being online, fast and fully automated, our method shows promise for clinical use and deployment as a real-time tool in the fetal screening clinic, where it may enable unparalleled insight into the shape and structure of the face, skull and brain.
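For intuition, saliency-weighted fusion of already co-registered volumes can be sketched as follows. This toy version uses smoothed gradient magnitude as a stand-in saliency measure, which is an assumption for illustration rather than the authors' learned criterion:

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def fuse_views(volumes, sigma=1.5, eps=1e-6):
    """Fuse co-registered volumes, weighting each voxel by a simple
    saliency proxy (smoothed gradient magnitude) per view."""
    vols = np.stack(volumes)                           # (n_views, z, y, x)
    saliency = np.stack([gaussian_gradient_magnitude(v, sigma) for v in volumes])
    weights = saliency / (saliency.sum(axis=0) + eps)  # normalise across views
    return (weights * vols).sum(axis=0)
```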
4. Transformer-based biomarker prediction from colorectal cancer histology: A large-scale multicentric study. Cancer Cell 2023; 41:1650-1661.e4. [PMID: 37652006] [DOI: 10.1016/j.ccell.2023.08.002]
Abstract
Deep learning (DL) can accelerate the prediction of prognostic biomarkers from routine pathology slides in colorectal cancer (CRC). However, current approaches rely on convolutional neural networks (CNNs) and have mostly been validated on small patient cohorts. Here, we develop a new transformer-based pipeline for end-to-end biomarker prediction from pathology slides by combining a pre-trained transformer encoder with a transformer network for patch aggregation. Our transformer-based approach substantially improves the performance, generalizability, data efficiency, and interpretability as compared with current state-of-the-art algorithms. After training and evaluating on a large multicenter cohort of over 13,000 patients from 16 colorectal cancer cohorts, we achieve a sensitivity of 0.99 with a negative predictive value of over 0.99 for prediction of microsatellite instability (MSI) on surgical resection specimens. We demonstrate that resection specimen-only training reaches clinical-grade performance on endoscopic biopsy tissue, solving a long-standing diagnostic problem.
5. Colorimetric Sensor Reading and Illumination Correction via Multi-Task Deep-Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083521] [DOI: 10.1109/embc40787.2023.10340185]
Abstract
Colorimetric sensors represent an accessible and sensitive nanotechnology for rapid measurement of a substance's properties (e.g., analyte concentration) via color changes. Although colorimetric sensors are widely used in healthcare and laboratories, interpretation of their output is performed either by visual inspection or using cameras in highly controlled illumination set-ups, limiting their usage in end-user applications with lower resolutions and altered light conditions. For that purpose, we implement a set of image processing and deep-learning (DL) methods that correct for non-uniform illumination alterations and accurately read the target variable from the color response of the sensor. Methods that perform both tasks independently are evaluated against a joint multi-task model. Video recordings of colorimetric sensors measuring temperature conditions were collected to build an experimental reference dataset, and sensor images were augmented with non-uniform color alterations. The best-performing DL architecture disentangles the luminance, chrominance and noise via separate decoders and integrates a regression task in the latent space to predict the sensor readings, achieving a mean squared error (MSE) of 0.811 ± 0.074 °C and r² = 0.930 ± 0.007 under strong color perturbations, an improvement of 1.26 °C over the MSE of the best-performing method with independent denoising and regression tasks. Clinical Relevance: The proposed methodology aims to improve the accuracy of colorimetric sensor reading, and the sensors' large-scale accessibility as point-of-care diagnostic and continuous health monitoring devices, under altered illumination conditions.
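A minimal PyTorch sketch of the disentangling multi-task design described above: a shared encoder, separate luminance/chrominance/noise decoders, and a regression head on the latent code. Layer sizes and head shapes are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class MultiTaskColorimetricNet(nn.Module):
    """Shared encoder with separate decoders for luminance, chrominance
    and noise, plus a regression head on the latent code."""
    def __init__(self, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 16, latent))
        def decoder(out_ch):
            return nn.Sequential(
                nn.Linear(latent, 64 * 16), nn.Unflatten(1, (64, 4, 4)),
                nn.Upsample(scale_factor=8), nn.Conv2d(64, out_ch, 3, padding=1))
        self.lum_dec, self.chroma_dec, self.noise_dec = decoder(1), decoder(2), decoder(3)
        self.regressor = nn.Linear(latent, 1)  # e.g. temperature reading

    def forward(self, x):
        z = self.encoder(x)
        return self.lum_dec(z), self.chroma_dec(z), self.noise_dec(z), self.regressor(z)
```

Training would then minimise a weighted sum of the three reconstruction losses plus the regression MSE, so the latent code is shaped jointly by denoising and reading tasks.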
6. Extended reality for procedural planning and guidance in structural heart disease - a review of the state-of-the-art. Int J Cardiovasc Imaging 2023. [PMID: 37103667] [DOI: 10.1007/s10554-023-02823-z]
Abstract
Extended reality (XR), which encompasses virtual, augmented and mixed reality, is an emerging medical imaging display platform which enables intuitive and immersive interaction in a three-dimensional space. This technology holds the potential to enhance understanding of complex spatial relationships when planning and guiding cardiac procedures in congenital and structural heart disease, moving beyond conventional 2D and 3D image displays. A systematic review of the literature demonstrates a rapid increase in publications describing adoption of this technology. At least 33 XR systems have been described, many demonstrating proof of concept, but none, including some prospective studies, making specific mention of regulatory approval. Validation remains limited, and true clinical benefit is difficult to measure. This review describes and critically appraises the range of XR technologies and their applications for procedural planning and guidance in structural heart disease, while discussing the challenges that need to be overcome in future studies to achieve safe and effective clinical adoption.
7. A Topological Loss Function for Deep-Learning Based Image Segmentation Using Persistent Homology. IEEE Trans Pattern Anal Mach Intell 2022; 44:8766-8778. [PMID: 32886606] [DOI: 10.1109/tpami.2020.3013679]
Abstract
We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept used in topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly, this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in four experiments: one on MNIST image denoising and digit recognition, one on left ventricular myocardium segmentation from magnetic resonance imaging data from the UK Biobank, one on the ACDC public challenge dataset and one on placenta segmentation from 3-D ultrasound. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels.
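The differentiable persistent-homology loss itself is involved, but the topological quantity it controls is easy to illustrate: the zeroth Betti number of a thresholded segmentation is simply its number of connected components. A non-differentiable sketch of that quantity, not the paper's loss:

```python
import numpy as np
from scipy.ndimage import label

def betti_0(prob_map, threshold=0.5):
    """Zeroth Betti number (number of connected components) of a
    thresholded probability map."""
    _, n_components = label(prob_map > threshold)
    return n_components

# A myocardium segmentation should form a single connected ring, so a
# topological prior might specify betti_0 == 1 (and first Betti number 1).
pred = np.random.rand(64, 64)
print(betti_0(pred, threshold=0.9))
```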
8. Evaluation of a Linear Measurement Tool in Virtual Reality for Assessment of Multimodality Imaging Data: A Phantom Study. J Imaging 2022; 8:304. [PMID: 36354877] [DOI: 10.3390/jimaging8110304]
Abstract
This study aimed to evaluate the accuracy and reliability of a virtual reality (VR) system line measurement tool using phantom data across three cardiac imaging modalities: three-dimensional echocardiography (3DE), computed tomography (CT) and magnetic resonance imaging (MRI). The same phantoms were also measured using industry-standard image visualisation software packages. Two participants performed blinded measurements on volume-rendered images of standard phantoms both in VR and on an industry-standard image visualisation platform. The intra- and interrater reliability of the VR measurement method was evaluated by intraclass correlation coefficient (ICC) and coefficient of variance (CV). Measurement accuracy was analysed using Bland-Altman and mean absolute percentage error (MAPE). VR measurements showed good intra- and interobserver reliability (ICC ≥ 0.99, p < 0.05; CV < 10%) across all imaging modalities. MAPE for VR measurements compared to ground truth were 1.6%, 1.6% and 7.7% in MRI, CT and 3DE datasets, respectively. Bland-Altman analysis demonstrated no systematic measurement bias in CT or MRI data in VR compared to ground truth. A small bias toward smaller measurements in 3DE data was seen in both VR (mean −0.52 mm [−0.16 to −0.88]) and the standard platform (mean −0.22 mm [−0.03 to −0.40]) when compared to ground truth. Limits of agreement for measurements across all modalities were similar in VR and standard software. This study has shown good measurement accuracy and reliability of VR in CT and MRI data with a higher MAPE for 3DE data. This may relate to the overall smaller measurement dimensions within the 3DE phantom. Further evaluation is required of all modalities for assessment of measurements <10 mm.
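The two statistics used here are straightforward to reproduce. A minimal sketch of Bland-Altman bias with 95% limits of agreement, and MAPE, using hypothetical example values:

```python
import numpy as np

def bland_altman(measured, reference):
    """Bias and 95% limits of agreement between two measurement sets."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    diff = measured - reference
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

def mape(measured, reference):
    """Mean absolute percentage error against ground truth."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return 100.0 * np.mean(np.abs(measured - reference) / np.abs(reference))

vr = [9.8, 20.1, 30.4]; truth = [10.0, 20.0, 30.0]  # hypothetical values in mm
print(bland_altman(vr, truth), mape(vr, truth))
```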
9. Placenta segmentation in ultrasound imaging: Addressing sources of uncertainty and limited field-of-view. Med Image Anal 2022; 83:102639. [PMID: 36257132] [DOI: 10.1016/j.media.2022.102639]
Abstract
Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to the (i) high diversity of placenta appearance, (ii) the restricted quality in US resulting in highly variable reference annotations, and (iii) the limited field-of-view of US prohibiting whole placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. This results in high quality segmentation of larger structures such as the placenta in US with reduced image artifacts which are beyond the field-of-view of single probes.
10. Improved 3D tumour definition and quantification of uptake in simulated lung tumours using deep learning. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac65d6]
Abstract
Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake. Approach. In this work, we present a deep learning approach with the aim of improving quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned with ‘ground truth’ radiotracer distributions, are used to generate realistic PET raw data which are then reconstructed into PET images. Here, the ground truth images are generated by placing simulated tumours characterised by different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to simulate realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network. Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall the network is able to recover better defined tumour shapes and improved estimates of tumour maximum and median activities. Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
11. Medical image analysis on left atrial LGE MRI for atrial fibrillation studies: A review. Med Image Anal 2022; 77:102360. [PMID: 35124370] [DOI: 10.1016/j.media.2022.102360]
Abstract
Late gadolinium enhancement magnetic resonance imaging (LGE MRI) is commonly used to visualize and quantify left atrial (LA) scars. The position and extent of LA scars provide important information on the pathophysiology and progression of atrial fibrillation (AF). Hence, LA LGE MRI computing and analysis are essential for computer-assisted diagnosis and treatment stratification of AF patients. Since manual delineations can be time-consuming and subject to intra- and inter-expert variability, automating this computing is highly desired, which nevertheless is still challenging and under-researched. This paper aims to provide a systematic review on computing methods for LA cavity, wall, scar, and ablation gap segmentation and quantification from LGE MRI, and the related literature for AF studies. Specifically, we first summarize AF-related imaging techniques, particularly LGE MRI. Then, we review the methodologies of the four computing tasks in detail and summarize the validation strategies applied in each task as well as state-of-the-art results on public datasets. Finally, the possible future developments are outlined, with a brief survey on the potential clinical applications of the aforementioned methods. The review indicates that the research into this topic is still in the early stages. Although several methods have been proposed, especially for the LA cavity segmentation, there is still a large scope for further algorithmic developments due to performance issues related to the high variability of enhancement appearance and differences in image acquisition.
12. AtrialJSQnet: A new framework for joint segmentation and quantification of left atrium and scars incorporating spatial and shape information. Med Image Anal 2022; 76:102303. [PMID: 34875581] [DOI: 10.1016/j.media.2021.102303]
Abstract
Left atrial (LA) and atrial scar segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is an important task in clinical practice. The automatic segmentation is however still challenging due to the poor image quality, the various LA shapes, the thin wall, and the surrounding enhanced regions. Previous methods normally solved the two tasks independently and ignored the intrinsic spatial relationship between LA and scars. In this work, we develop a new framework, namely AtrialJSQnet, where LA segmentation, scar projection onto the LA surface, and scar quantification are performed simultaneously in an end-to-end style. We propose a mechanism of shape attention (SA) via an implicit surface projection to utilize the inherent correlation between LA cavity and scars. Specifically, the SA scheme is embedded into a multi-task architecture to perform joint LA segmentation and scar quantification. Besides, a spatial encoding (SE) loss is introduced to incorporate continuous spatial information of the target in order to reduce noisy patches in the predicted segmentation. We evaluated the proposed framework on 60 post-ablation LGE MRIs from the MICCAI2018 Atrial Segmentation Challenge. Moreover, we explored the domain generalization ability of the proposed AtrialJSQnet on 40 pre-ablation LGE MRIs from this challenge and 30 post-ablation multi-center LGE MRIs from another challenge (ISBI2012 Left Atrium Fibrosis and Scar Segmentation Challenge). Extensive experiments on public datasets demonstrated the effectiveness of the proposed AtrialJSQnet, which achieved competitive performance compared with the state-of-the-art. The relatedness between LA segmentation and scar quantification was explicitly explored and shown to yield significant performance improvements for both tasks. The code has been released via https://zmiclab.github.io/projects.html.
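The abstract describes the spatial encoding (SE) loss only at a high level. One common way to encode continuous spatial information is to weight prediction errors by a signed distance transform of the target mask; the sketch below follows that assumption and is not necessarily the authors' exact formulation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed Euclidean distance map: negative inside the mask, positive outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return outside - inside

def spatial_encoding_loss(prob, target_mask):
    """Penalise foreground probability far outside the target and
    background probability deep inside it, suppressing isolated noisy patches."""
    sdm = signed_distance(target_mask.astype(np.uint8))
    return float(np.mean(prob * np.maximum(sdm, 0) +
                         (1 - prob) * np.maximum(-sdm, 0)))
```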
13. Evolving polarisation of infiltrating and alveolar macrophages in the lung during metastatic progression of melanoma suggests CCR1 as a therapeutic target. Oncogene 2022; 41:5032-5045. [PMID: 36241867] [DOI: 10.1038/s41388-022-02488-3]
Abstract
Metastatic tumour progression is facilitated by tumour associated macrophages (TAMs) that enforce pro-tumour mechanisms and suppress immunity. In pulmonary metastases, it is unclear whether TAMs comprise tissue resident or infiltrating, recruited macrophages; and the different expression patterns of these TAMs are not well established. Using the mouse melanoma B16F10 model of experimental pulmonary metastasis, we show that infiltrating macrophages (IM) change their gene expression from an early pro-inflammatory to a later tumour promoting profile as the lesions grow. In contrast, resident alveolar macrophages (AM) maintain expression of crucial pro-inflammatory/anti-tumour genes with time. During metastatic growth, the pool of macrophages, which initially contains mainly alveolar macrophages, increasingly consists of infiltrating macrophages, potentially facilitating metastasis progression. Blocking chemokine receptor mediated macrophage infiltration in the lung revealed a prominent role for CCR2 in Ly6C+ pro-inflammatory monocyte/macrophage recruitment during metastasis progression, while inhibition of CCR2 signalling led to increased metastatic colony burden. CCR1 blockade, in contrast, suppressed late phase pro-tumour MR+Ly6C- monocyte/macrophage infiltration, accompanied by expansion of the alveolar macrophage compartment and accumulation of NK cells, leading to reduced metastatic burden. These data indicate that IM have greater plasticity and higher phenotypic responsiveness to tumour challenge than AM. A considerable difference is also confirmed between CCR1 and CCR2 with regard to the recruited IM subsets, with CCR1 presenting a potential therapeutic target in pulmonary metastasis from melanoma.
14. Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction with Iteration-Dependent Targets. IEEE Trans Radiat Plasma Med Sci 2022; 6:552-563. [PMID: 35664091] [DOI: 10.1109/trpms.2021.3101947]
Abstract
We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step by a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used to have a customised regularisation at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need for artificial use of the regularisation step as a leapfrogging acceleration. The results obtained on 2D and 3D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency in the optimisation of the network parameters over different training runs. We also found that using iteration-dependent targets increases the generalisation capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularisation allowed a slight reduction in reconstruction error compared to using a fixed regularisation network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3D fully unrolled FBSEM-Net, not previously feasible, by reducing the memory usage by up to 98% compared to a conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network’s performance compared to conventional training with a full backpropagation through the entire network.
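The core of the memory saving is that each unrolled block is trained against its own iteration-dependent target, with gradients truncated between blocks so only one block's graph is in memory at a time. A schematic PyTorch sketch; the data-fidelity operator and per-iteration targets are assumed inputs, not the paper's exact implementation:

```python
import torch

def sequential_train_step(modules, optimisers, x0, targets, data_fidelity):
    """One sequential training pass over an unrolled reconstruction.
    Each iteration block is trained against its own target, and the
    gradient is truncated between blocks via detach()."""
    x = x0
    for net, opt, target in zip(modules, optimisers, targets):
        x = x.detach()                       # truncate backpropagation here
        x_new = data_fidelity(x) + net(x)    # EM-style update + learned regulariser
        loss = torch.nn.functional.mse_loss(x_new, target)
        opt.zero_grad()
        loss.backward()                      # graph covers only this block
        opt.step()
        x = x_new
    return x
```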
15. Exploring a new paradigm for the fetal anomaly ultrasound scan: Artificial intelligence in real time. Prenat Diagn 2021; 42:49-59. [PMID: 34648206] [DOI: 10.1002/pd.6059]
Abstract
OBJECTIVE Advances in artificial intelligence (AI) have demonstrated potential to improve medical diagnosis. We piloted the end-to-end automation of the mid-trimester screening ultrasound scan using AI-enabled tools. METHODS A prospective method comparison study was conducted. Participants had both standard and AI-assisted US scans performed. The AI tools automated image acquisition, biometric measurement, and report production. A feedback survey captured the sonographers' perceptions of scanning. RESULTS Twenty-three subjects were studied. The average time saving per scan was 7.62 min (34.7%) with the AI-assisted method (p < 0.0001). There was no difference in reporting time. There were no clinically significant differences in biometric measurements between the two methods. The AI tools saved a satisfactory view in 93% of the cases (four core views only), and 73% for the full 13 views, compared to 98% for both using the manual scan. Survey responses suggest that the AI tools helped sonographers to concentrate on image interpretation by removing disruptive tasks. CONCLUSION Separating freehand scanning from image capture and measurement resulted in a faster scan and altered workflow. Removing repetitive tasks may allow more attention to be directed to identifying fetal malformation. Further work is required to improve the image plane detection algorithm for use in real time.
16. A Virtual Reality System for Improved Image-Based Planning of Complex Cardiac Procedures. J Imaging 2021; 7:151. [PMID: 34460787] [DOI: 10.3390/jimaging7080151]
Abstract
The intricate nature of congenital heart disease requires understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart from imaging data, such as three-dimensional echocardiography, for successful outcomes from surgical and interventional procedures. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so such visualisation is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping for visualisation such as volume rendering, multiplanar reformatting, flow visualisation and advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation and multiuser interactions. The available features were evaluated by imaging and nonimaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.
17. Virtual reality three-dimensional echocardiographic imaging for planning surgical atrioventricular valve repair. JTCVS Tech 2021; 7:269-277. [PMID: 34100000] [DOI: 10.1016/j.xjtc.2021.02.044]
Abstract
OBJECTIVES To investigate how virtual reality (VR) imaging impacts decision-making in atrioventricular valve surgery. METHODS This was a single-center retrospective study involving 15 children and adolescents, median age 6 years (range, 0.33-16), requiring surgical repair of the atrioventricular valves between the years 2016 and 2019. The patients' preoperative 3-dimensional (3D) echocardiographic data were used to create 3D visualization in a VR application. Five pediatric cardiothoracic surgeons completed a questionnaire formulated to compare their surgical decisions regarding the cases after reviewing conventionally presented 2-dimensional and 3D echocardiographic images and again after visualization of 3D echocardiograms using the VR platform. Finally, intraoperative findings were shared with surgeons to confirm assessment of the pathology. RESULTS In 67% of cases presented with VR, surgeons reported having "more" or "much more" confidence in their understanding of each patient's pathology and their surgical approach. In all but one case, surgeons were at least as confident after reviewing the VR compared with standard imaging. The case where surgeons reported to be least confident on VR had the worst technical quality of data used. After viewing patient cases on VR, surgeons reported that they would have made minor modifications to surgical approach in 53% and major modifications in 7% of cases. CONCLUSIONS The main impact of viewing imaging on VR is the improved clarity of the anatomical structures. Surgeons reported that this would have impacted the surgical approach in the majority of cases. Poor-quality 3D echocardiographic data were associated with a negative impact of VR visualization; thus, quality assessment of imaging is necessary before projecting in a VR format.
18. MR-guided motion-corrected PET image reconstruction for cardiac PET-MR. J Nucl Med 2021; 62. [PMID: 34049978] [DOI: 10.2967/jnumed.120.254235]
Abstract
Simultaneous PET-MR imaging has shown potential for the comprehensive assessment of myocardial health from a single examination. Furthermore, MR-derived respiratory motion information has been shown to improve PET image quality by incorporating this information into the PET image reconstruction. Separately, MR-based anatomically guided PET image reconstruction has been shown to perform effective denoising, but this has been so far demonstrated mainly in brain imaging. To date, the combined benefits of motion compensation and anatomical guidance have not been demonstrated for myocardial PET-MR imaging. This work addresses this by proposing a single cardiac PET-MR image reconstruction framework which fully utilises MR-derived information to allow both motion compensation and anatomical guidance within the reconstruction. Methods: Fifteen patients underwent an 18F-FDG cardiac PET-MR scan with a previously introduced acquisition framework. The MR data processing and image reconstruction pipeline produces respiratory motion fields and a high-resolution respiratory motion-corrected MR image with good tissue contrast. This MR-derived information was then included in a respiratory motion-corrected, cardiac-gated, anatomically guided image reconstruction of the simultaneously acquired PET data. Reconstructions were evaluated by measuring myocardial contrast and noise and compared to images from several comparative intermediate methods using the components of the proposed framework separately. Results: Including respiratory motion correction, cardiac gating, and anatomical guidance significantly increased contrast. In particular, myocardium-to-blood pool contrast increased by 143% on average (p<0.0001) compared to conventional uncorrected, non-guided PET images. Furthermore, anatomical guidance significantly reduced image noise compared to non-guided image reconstruction by 16.1% (p<0.0001). Conclusion: The proposed framework for MR-derived motion compensation and anatomical guidance of cardiac PET data was shown to significantly improve image quality compared to alternative reconstruction methods. Each component of the reconstruction pipeline was shown to have a positive impact on the final image quality. These improvements have the potential to improve clinical interpretability and diagnosis based on cardiac PET-MR images.
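For orientation, the iterative backbone that frameworks like this extend is classical MLEM reconstruction; the paper adds motion and anatomical terms on top of it. A generic NumPy sketch of plain MLEM, not the authors' pipeline:

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classical MLEM update: x <- x / (A^T 1) * A^T( y / (A x) ).
    A is the (n_bins, n_voxels) system matrix, y the measured sinogram."""
    x = np.ones(A.shape[1])
    sensitivity = A.T @ np.ones(A.shape[0])  # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)            # measured / modelled projections
        x *= (A.T @ ratio) / (sensitivity + eps)
    return x
```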
19. A landmark-free morphometrics pipeline for high-resolution phenotyping: application to a mouse model of Down syndrome. Development 2021; 148:dev188631. [PMID: 33712441] [DOI: 10.1242/dev.188631]
Abstract
Characterising phenotypes often requires quantification of anatomical shape. Quantitative shape comparison (morphometrics) traditionally uses manually located landmarks and is limited by landmark number and operator accuracy. Here, we apply a landmark-free method to characterise the craniofacial skeletal phenotype of the Dp1Tyb mouse model of Down syndrome and a population of the Diversity Outbred (DO) mouse model, comparing it with a landmark-based approach. We identified cranial dysmorphologies in Dp1Tyb mice, especially smaller size and brachycephaly (front-back shortening), homologous to the human phenotype. Shape variation in the DO mice was partly attributable to allometry (size-dependent shape variation) and sexual dimorphism. The landmark-free method performed as well as, or better than, the landmark-based method but was less labour-intensive, required less user training and, uniquely, enabled fine mapping of local differences as planar expansion or shrinkage. Its higher resolution pinpointed reductions in interior mid-snout structures and occipital bones in both models that were not otherwise apparent. We propose that this landmark-free pipeline could make morphometrics widely accessible beyond its traditional niches in zoology and palaeontology, especially in characterising developmental mutant phenotypes.
20. Immersive visualisation of intracardiac blood flow in virtual reality on a patient with HLHS. Eur Heart J Cardiovasc Imaging 2021. [DOI: 10.1093/ehjci/jeaa356.408]
Abstract
Funding Acknowledgements
Type of funding sources: Other. Main funding source(s): NIHR i4i funded 3D Heart project; Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z]
On behalf of the 3D Heart Project
Background/Introduction: Virtual Reality (VR) for surgical and interventional planning in the treatment of Congenital Heart Disease (CHD) is an emerging field that has the potential to improve planning. Particularly in very complex cases, VR permits enhanced visualisation of, and more intuitive interaction with, volumetric images compared to traditional flat-screen visualisation tools. Blood flow is severely affected by CHD, and thus visualisation of blood flow allows direct observation of the cardiac maladaptations for surgical planning. However, blood flow is fundamentally 3D information, and viewing and interacting with it using conventional 2D displays is suboptimal.
Purpose
To demonstrate the feasibility of blood flow visualisation in VR using pressure and velocity obtained from a computational fluid dynamics (CFD) simulation of the right ventricle in a patient with hypoplastic left heart syndrome (HLHS), as a proof of concept.
Methods
We extend an existing VR volume rendering application to include CFD rendering functionality using the Visualization Toolkit (VTK), an established visualisation library widely used in clinical software for visualising medical imaging data. Our prototype displays the mesh outline of the segmented heart, a slicing plane showing blood pressure on the plane within the heart, and streamlines of blood flow from a spherical source region. Existing user tools were extended to enable interactive positioning, rotation and scaling of the pressure plane and streamline origin, ensuring continuity between volume rendering and CFD interaction and, thus, ease of use. We evaluated whether rendering and interaction times were low enough to ensure a comfortable, interactive VR experience. Our performance benchmark is a previous study showing VR is acceptable to clinical users when rendering speed is at least 90 fps.
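A minimal sketch of the VTK streamline setup described here, seeding streamlines from a spherical source within a velocity field; the array name, grid dimensions and random data are placeholder assumptions:

```python
import numpy as np
import vtk
from vtk.util import numpy_support

# Assume `vel` is an (nz, ny, nx, 3) array of blood velocity from the CFD solver.
nz, ny, nx = 32, 32, 32
vel = np.random.rand(nz, ny, nx, 3)

image = vtk.vtkImageData()
image.SetDimensions(nx, ny, nz)
varr = numpy_support.numpy_to_vtk(vel.reshape(-1, 3), deep=True)
varr.SetName("velocity")
image.GetPointData().SetVectors(varr)

seeds = vtk.vtkPointSource()            # spherical seed region for streamlines
seeds.SetCenter(nx / 2, ny / 2, nz / 2)
seeds.SetRadius(5.0)
seeds.SetNumberOfPoints(100)

tracer = vtk.vtkStreamTracer()
tracer.SetInputData(image)
tracer.SetSourceConnection(seeds.GetOutputPort())
tracer.SetIntegratorTypeToRungeKutta45()
tracer.SetMaximumPropagation(200)
tracer.Update()                         # output polylines are then mapped for VR display
```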
Results
CFD simulations were successfully rendered, viewed and manipulated in VR, as shown in the Figure. Evaluating performance, we found that visualisation of the mesh and streamlines was at an acceptably high and stable frame rate, over 150 fps. User interactions of moving, rotating or scaling the mesh or streamlines origin did not significantly reduce this frame rate. However, rendering the pressure slicing plane reduced frame rate by an unacceptable degree, to less than 10 fps.
Conclusion
Visualisation of and interaction with CFD simulation data was successfully integrated into an existing VR application. This aids surgical and interventional planning for defects whose treatment relies heavily on blood flow simulation, and lays the foundation for a platform on which clinicians can test interventions in VR. Pressure plane rendering performance will require significant optimisation, potentially addressed by updating the pressure plane data separately from the main VR rendering.
Abstract Figure. An example render of CFD simulation
22. Deep Learning-Based Detection and Correction of Cardiac MR Motion Artefacts During Reconstruction for High-Quality Segmentation. IEEE Trans Med Imaging 2020; 39:4001-4010. [PMID: 32746141] [DOI: 10.1109/tmi.2020.3008930]
Abstract
Segmenting anatomical structures in medical images has been successfully addressed with deep learning methods for a range of applications. However, this success is heavily dependent on the quality of the image that is being segmented. A commonly neglected point in the medical image analysis community is the vast amount of clinical images that have severe image artefacts due to organ motion, movement of the patient and/or image acquisition related issues. In this paper, we discuss the implications of image motion artefacts on cardiac MR segmentation and compare a variety of approaches for jointly correcting for artefacts and segmenting the cardiac cavity. The method is based on our recently developed joint artefact detection and reconstruction method, which reconstructs high quality MR images from k-space using a joint loss function and essentially converts the artefact correction task to an under-sampled image reconstruction task by enforcing a data consistency term. In this paper, we propose to use a segmentation network coupled with this in an end-to-end framework. Our training optimises three different tasks: 1) image artefact detection, 2) artefact correction and 3) image segmentation. We train the reconstruction network to automatically correct for motion-related artefacts using synthetically corrupted cardiac MR k-space data and uncorrected reconstructed images. Using a test set of 500 2D+time cine MR acquisitions from the UK Biobank data set, we achieve demonstrably good image quality and high segmentation accuracy in the presence of synthetic motion artefacts. We showcase better performance compared to various image correction architectures.
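The data consistency term at the heart of this conversion has a simple core: wherever k-space was actually sampled, the network's prediction is replaced (or blended) with the acquired data. A minimal sketch:

```python
import numpy as np

def data_consistency(k_pred, k_acquired, mask, noise_level=0.0):
    """Enforce consistency with acquired k-space samples.
    mask is True where k-space was sampled; with noise_level > 0 the
    prediction and acquisition are blended instead of hard-replaced."""
    lam = 1.0 / (1.0 + noise_level)
    out = k_pred.copy()
    out[mask] = (1 - lam) * k_pred[mask] + lam * k_acquired[mask]
    return out
```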
23. Tumour subregion analysis of colorectal liver metastases using semi-automated clustering based on DCE-MRI: Comparison with histological subregions and impact on pharmacokinetic parameter analysis. Eur J Radiol 2020; 126:108934. [PMID: 32217426] [DOI: 10.1016/j.ejrad.2020.108934]
Abstract
PURPOSE To use a novel segmentation methodology based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to define tumour subregions of liver metastases from colorectal cancer (CRC), to compare these with histology, and to use these to compare extracted pharmacokinetic (PK) parameters between tumour subregions. MATERIALS AND METHODS This ethically-approved prospective study recruited patients with CRC and ≥1 hepatic metastases scheduled for hepatic resection. Patients underwent DCE-MRI pre-metastasectomy. Histological sections of resection specimens were spatially matched to DCE-MRI acquisitions and used to define histological subregions of viable and non-viable tumour. A semi-automated voxel-wise image segmentation algorithm based on the DCE-MRI contrast-uptake curves was used to define imaging subregions of viable and non-viable tumour. Overlap of histologically-defined and imaging subregions was compared using the Dice similarity coefficient (DSC). DCE-MRI PK parameters were compared for the whole tumour and histology-defined and imaging-derived subregions. RESULTS Fourteen patients were included in the analysis. Direct histological comparison with imaging was possible in nine patients. Mean DSC for viable tumour subregions defined by imaging and histology was 0.738 (range 0.540-0.930). There were significant differences between Ktrans and kep for viable and non-viable subregions (p < 0.001) and between whole lesions and viable subregions (p < 0.001). CONCLUSION We demonstrate good concordance of viable tumour segmentation based on pre-operative DCE-MRI with a post-operative histological gold-standard. This can be used to extract viable tumour-specific values from quantitative image analysis, and could improve treatment response assessment in clinical practice.
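The voxel-wise clustering of contrast-uptake curves can be sketched with k-means; the curve normalisation and cluster count below are assumptions for illustration, not the paper's exact semi-automated algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_uptake_curves(dce, tumour_mask, n_clusters=2):
    """Cluster voxel-wise contrast-uptake curves within a tumour ROI.
    dce: (t, z, y, x) dynamic series; tumour_mask: boolean (z, y, x)."""
    curves = dce[:, tumour_mask].T                     # (n_voxels, t)
    curves = curves / (curves.max(axis=1, keepdims=True) + 1e-8)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(curves)
    label_map = np.full(tumour_mask.shape, -1)
    label_map[tumour_mask] = labels                    # e.g. viable vs non-viable
    return label_map
```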
24. Evaluation of MRI to Ultrasound Registration Methods for Brain Shift Correction: The CuRIOUS2018 Challenge. IEEE Trans Med Imaging 2020; 39:777-786. [PMID: 31425023] [DOI: 10.1109/tmi.2019.2935060]
Abstract
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS 2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained with the public RESECT database, and then ranked based on a test dataset of 10 additional cases with identical data curation and annotation protocols as the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
25. An objective comparison of detection and segmentation algorithms for artefacts in clinical endoscopy. Sci Rep 2020; 10:2748. [PMID: 32066744] [DOI: 10.1038/s41598-020-59413-5]
Abstract
We present a comprehensive analysis of the submissions to the first edition of the Endoscopy Artefact Detection challenge (EAD). Using crowd-sourcing, this initiative is a step towards understanding the limitations of existing state-of-the-art computer vision methods applied to endoscopy and promoting the development of new approaches suitable for clinical translation. Endoscopy is a routine imaging technique for the detection, diagnosis and treatment of diseases in hollow organs: the esophagus, stomach, colon, uterus and bladder. However, the nature of these organs prevents imaged tissues from being free of imaging artefacts such as bubbles, pixel saturation, organ specularity and debris, all of which pose substantial challenges for any quantitative analysis. Consequently, the potential for improved clinical outcomes through quantitative assessment of abnormal mucosal surfaces observed in endoscopy videos is presently not fully realized. The EAD challenge promotes awareness of and addresses this key bottleneck problem by investigating methods that can accurately classify, localize and segment artefacts in endoscopy frames as critical prerequisite tasks. Using a diverse, curated, multi-institutional, multi-modality, multi-organ dataset of video frames, the accuracy and performance of 23 algorithms were objectively ranked for artefact detection and segmentation. The ability of methods to generalize to unseen datasets was also evaluated. The best performing methods (top 15%) propose deep learning strategies to reconcile variabilities in artefact appearance with respect to size, modality, occurrence and organ type. However, no single method outperformed across all tasks. Detailed analyses reveal the shortcomings of current training strategies and highlight the need for developing new optimal metrics to accurately quantify the clinical applicability of methods.
26. Virtual linear measurement system for accurate quantification of medical images. Healthc Technol Lett 2020; 6:220-225. [PMID: 32038861] [DOI: 10.1049/htl.2019.0074]
Abstract
Virtual reality (VR) has the potential to aid in the understanding of complex volumetric medical images, by providing an immersive and intuitive experience accessible to both experts and non-imaging specialists. A key feature of any clinical image analysis tool is measurement of clinically relevant anatomical structures. However, this feature has been largely neglected in VR applications. The authors propose a Unity-based system to carry out linear measurements on three-dimensional (3D) images, purposefully designed for the measurement of 3D echocardiographic images. The proposed system is compared to commercially available, widely used image analysis packages that feature both 2D (multi-planar reconstruction) and 3D (volume rendering) measurement tools. The results indicate that the proposed system provides statistically equivalent measurements compared to the reference 2D system, while being more accurate than the commercial 3D system.
27. Mechanically Powered Motion Imaging Phantoms: Proof of Concept. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:2723-2726. [PMID: 31946457] [DOI: 10.1109/embc.2019.8856577]
Abstract
Motion imaging phantoms are expensive, bulky and difficult to transport and set up. The purpose of this paper is to demonstrate a simple approach to the design of multi-modality motion imaging phantoms that use mechanically stored energy to produce motion. We propose two phantom designs that use mainsprings and elastic bands to store energy. A rectangular piece was attached to an axle at the end of the transmission chain of each phantom, and underwent a rotary motion upon release of the mechanical motor. The phantoms were imaged with MRI and US, the image sequences were embedded in a 1D nonlinear manifold (Laplacian Eigenmap), and the spectrogram of the embedding was used to derive the angular velocity over time. The derived velocities were consistent and reproducible within a small error. The proposed motion phantom concept showed great potential for the construction of simple and affordable motion phantoms.
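The velocity-estimation pipeline (a 1D Laplacian eigenmap of the frame sequence, then a spectrogram of the embedding) can be sketched as below. Note the dominant spectrogram frequency is in cycles per second and would need scaling by the rotating piece's symmetry to give true angular velocity; this is a generic sketch, not the authors' exact implementation:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding
from scipy.signal import spectrogram

def rotation_rate_from_frames(frames, fps):
    """Embed an image sequence in a 1D manifold (Laplacian eigenmap) and
    read the rotation rate off the embedding's spectrogram.
    frames: (n_frames, h, w) array from MRI or US."""
    X = frames.reshape(len(frames), -1)          # one feature vector per frame
    embedding = SpectralEmbedding(n_components=1).fit_transform(X)[:, 0]
    f, t, S = spectrogram(embedding, fs=fps)
    return t, f[np.argmax(S, axis=0)]            # dominant frequency (Hz) over time
```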
28. P1417 Acceptability of a virtual reality system for examination of congenital heart disease patients. Eur Heart J Cardiovasc Imaging 2020. [DOI: 10.1093/ehjci/jez319.849]
Abstract
Funding Acknowledgements
Work supported by the NIHR i4i funded 3D Heart project [II-LA-0716-20001]
Background/Introduction
Virtual Reality (VR) has recently gained great interest for examining 3D images from congenital heart disease (CHD) patients. Currently, 3D printed models of the heart may be used for particularly complex cases. These have been found to be intuitive and to positively impact clinical decision-making. Although positively received, such printed models must be segmented from the image data (generally only CT/MR can be used), the prints are static, and the models do not allow for cropping/slicing or easy manipulation. Our VR system is designed to address these issues, as well as providing a simpler interface than standard software. Building such a VR system, one with intuitive interaction which is clinically useful, requires studying user acceptance and requirements.
Purpose: We evaluate the usability of our VR system: can a prototype VR system be easily learned and used by clinicians unfamiliar with VR?
Method
We tested a VR system which can display 3D echo images and enables the user to interact with them, for instance by translating, rotating and cropping. Our system was tested on a transoesophageal echocardiogram from a patient with aortic valve disease. 13 clinicians evaluated the system, including 5 imaging cardiologists, 5 physiologists, 2 surgeons and an interventionist, with their clinical experience ranging from trainee to more than 5 years' experience. None had used VR regularly in the past. After a brief training session, they were asked to place three anatomical landmarks and identify a particular cardiac view. They then completed a questionnaire on system ease of learning and image manipulation.
Results: As shown in the figure below, learning to use the system was perceived as easy by all but one participant, who rated it as ‘Somewhat difficult’. However, once trained, all users found the system easy to use. Participants found the interaction, where objects in the scene are picked up using the controller and then track the controller's motion in a 1:1 way, to be particularly easy to learn and use.
Conclusion
Our VR system was accepted by the vast majority of clinicians, both for ease of learning and use. Intuitiveness and the ability to interact with images in a natural way were highlighted as most useful - suggesting that such a system could become accepted for routine clinical use in the future.
Abstract P1417 Figure. VR system evaluation participant feedback
Collapse
|
29
|
Abstract
Recent developments in laser scanning microscopy have greatly extended its applicability in cancer imaging beyond the visualization of complex biology, and have opened up the possibility of quantitative analysis of inherently dynamic biological processes. However, the physics of image acquisition intrinsically means that image quality is subject to a tradeoff between a number of imaging parameters, including resolution, signal-to-noise ratio, and acquisition speed. We address the problem of geometric distortion, in particular the jaggedness artefacts caused by the variable motion of the microscope laser, by using a combination of image processing techniques. Image restoration methods have already shown great potential for post-acquisition image analysis. The performance of our proposed image restoration technique was first quantitatively evaluated using phantom data with different textures, and then qualitatively assessed using in vivo biological imaging data. In both cases, the presented method, comprising a combination of image registration and filtering, is demonstrated to improve substantially on state-of-the-art microscopy acquisition methods.
Collapse
|
30
|
Abstract
Funding Acknowledgements
Work supported by the NIHR i4i funded 3D Heart project [II-LA-0716-20001]
Background/Introduction
Cardiac measurements are clinically important and are invariably required in any clinical imaging software. The advent of Virtual Reality (VR) imaging systems is introducing intuitive and natural ways of visualising and interrogating echo images in a 3D environment. The 3D nature of the VR experience requires purpose-designed measurement tools, which may benefit from better depth perception and easier localisation of 3D landmarks.
Purpose
Comparison of the accuracy of our VR 3D linear measurement system to commercial clinical imaging software, using both multi-plane reformatting (MPR) and volume rendered views.
Method
Each virtual reality measurement was made by selecting two points in 3D, directly in the volume rendering. The participants could edit the measurements until satisfied with their accuracy. 5 expert clinicians carried out 26 measurements each: 6 measurements on a calibration phantom, and 5 anatomically meaningful measurements (for example: aortic valve, left atrium, left ventricle) on 4 datasets. The same measurements were made by all participants using our VR system (volume rendering), Philips' QLAB (MPR) and Tomtec (volume rendering). The frame number and view (for example: long axis) were consistent for each measurement across the 3 packages used.
Results
Preliminary results are shown in the figure below. MPR measurements made on Philips' QLAB are used as a reference, as this is the most commonly used software for this purpose at our institution. We compare measurements made in Tomtec and VR, both using volume rendering, using Bland-Altman plots. Each measurement data point is the mean of all participants' measurements for each dataset/measurement combination. The mean of the measurement differences for the VR system is closer to zero, compared to Tomtec. However, the variation of these differences is larger for the VR system than for Tomtec.
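For illustration, a Bland-Altman comparison of the kind described above can be computed as in the following sketch; the inputs are hypothetical per-dataset mean measurements, not the study's data:

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, label_a="VR", label_b="QLAB"):
    """Plot differences against means; return bias and 95% limits."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean, diff = (a + b) / 2, a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
    plt.scatter(mean, diff)
    for y in (bias, bias - loa, bias + loa):
        plt.axhline(y, linestyle="--")
    plt.xlabel(f"mean of {label_a} and {label_b} (mm)")
    plt.ylabel(f"{label_a} - {label_b} (mm)")
    return bias, (bias - loa, bias + loa)
```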
Conclusion
Our preliminary results suggest that the accuracy of line measurements made using volume rendering within a VR system is comparable to measurements made using approved software packages for volume rendering displayed on a 2D screen. This shows promise for more complex interrogation methods.
Abstract P801 Figure. Comparison of Tomtec and VR with QLAB
Collapse
|
31
|
P1566 Evaluation of haptic feedback for interaction with volumetric image data in virtual reality. Eur Heart J Cardiovasc Imaging 2020. [DOI: 10.1093/ehjci/jez319.986] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Funding Acknowledgements
Work supported by the NIHR i4i funded 3D Heart project [II-LA-0716-20001]
Background
3D printing is used for surgical planning of complex congenital heart disease (CHD) because it provides an intuitive 3D representation of the image data. However, a 3D print is static and can be costly and time-consuming to create. Virtual Reality (VR) is a cheaper alternative that is able to visualise volumetric images in 3D directly from the scanner, both statically (CT and MR) and dynamically (cardiac ultrasound). However, VR visualisation is not as tangible as a 3D print, because it lacks the haptic feedback that would make the interactions feel more natural.
Purpose
To evaluate whether adding haptic feedback (vibration) to the visualisation of volumetric image data in VR improves measurement accuracy and user experience.
Method
We evaluated the effect of vibration haptic feedback in our VR system using a synthetic cylinder volume dataset. The cylinder was displayed in two conditions: (1) with no haptic feedback, and (2) with haptic feedback. Ten non-clinical participants volunteered for the evaluation and were blinded to the two test conditions. The participants were asked to measure the cylinder's diameter horizontally and vertically, and its length, in each test condition. The measurement results were compared to the ground truth to assess measurement accuracy. Each participant also completed a questionnaire comparing their experience of the two test conditions during the experiment.
Results
The results show a marginal improvement in measurement accuracy with haptic feedback compared to no haptics (see Figure a). However, this improvement was not statistically significant. The haptic feedback did improve the participants' confidence in their performance and increased the ease of use in VR; hence, they preferred the haptics condition to the no-haptics condition (see Figure b). Moreover, although 70% of the participants reported relying on the visual cue more than on the haptic cue, 90% found that the haptic cue was helpful for deciding where to place the measurement point. Also, 88.9% of the participants felt more immersed in the VR scene with haptic feedback.
Conclusion
Our evaluation suggests that although haptic feedback may only marginally improve measurement accuracy, participants nevertheless preferred it because it improved confidence in their performance, increased ease of use, and facilitated a more immersive user experience.
Abstract P1566 Figure.
Collapse
|
32
|
Weakly Supervised Estimation of Shadow Confidence Maps in Fetal Ultrasound Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2755-2767. [PMID: 31021795 PMCID: PMC6892638 DOI: 10.1109/tmi.2019.2913311] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time-consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions that generates a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps; this network predicts shadow confidence maps directly from input images during inference. We use evaluation metrics such as Dice and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. Furthermore, we demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion, and automated biometric measurements.
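The abstract does not spell out the transfer function; as a hedged illustration, one plausible way to soften a binary shadow mask into a dense confidence map is a distance-based transition:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def shadow_confidence(binary_shadow, scale=10.0):
    """binary_shadow: bool array, True inside the detected shadow.

    Returns a map near 1 deep inside the shadow and near 0 far outside,
    decaying smoothly (scale in pixels) across the boundary. This
    exponential form is an assumption, not the authors' exact choice.
    """
    d_in = distance_transform_edt(binary_shadow)    # depth into shadow
    d_out = distance_transform_edt(~binary_shadow)  # distance to shadow
    return np.where(binary_shadow,
                    1.0 - 0.5 * np.exp(-d_in / scale),
                    0.5 * np.exp(-d_out / scale))
```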
Collapse
|
33
|
|
34
|
Fully Automated, Quality-Controlled Cardiac Analysis From CMR: Validation and Large-Scale Application to Characterize Cardiac Function. JACC Cardiovasc Imaging 2019; 13:684-695. [PMID: 31326477 PMCID: PMC7060799 DOI: 10.1016/j.jcmg.2019.05.030] [Citation(s) in RCA: 82] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/30/2019] [Revised: 04/26/2019] [Accepted: 05/16/2019] [Indexed: 12/13/2022]
Abstract
Objectives: This study sought to develop a fully automated framework for cardiac function analysis from cardiac magnetic resonance (CMR), including comprehensive quality control (QC) algorithms to detect erroneous output. Background: Analysis of cine CMR imaging using deep learning (DL) algorithms could automate ventricular function assessment. However, variable image quality, variability in phenotypes of disease, and unavoidable weaknesses in training of DL algorithms currently prevent their use in clinical practice. Methods: The framework consists of a pre-analysis DL image QC, followed by a DL algorithm for biventricular segmentation in long-axis and short-axis views, myocardial feature-tracking (FT), and a post-analysis QC to detect erroneous results. The study validated the framework in healthy subjects and cardiac patients by comparison against manual analysis (n = 100) and evaluation of the QC steps' ability to detect erroneous results (n = 700). Next, this method was used to obtain reference values for cardiac function metrics from the UK Biobank. Results: Automated analysis correlated highly with manual analysis for left and right ventricular volumes (all r > 0.95), strain (circumferential r = 0.89, longitudinal r > 0.89), and filling and ejection rates (all r ≥ 0.93). There was no significant bias for cardiac volumes and filling and ejection rates, except for right ventricular end-systolic volume (bias +1.80 ml; p = 0.01). The bias for FT strain was <1.3%. The sensitivity of detection of erroneous output was 95% for volume-derived parameters and 93% for FT strain. Finally, reference values were automatically derived from 2,029 CMR exams in healthy subjects. Conclusions: The study demonstrates a DL-based framework for automated, quality-controlled characterization of cardiac function from cine CMR, without the need for direct clinician oversight.
Collapse
|
35
|
Automatic CNN-based detection of cardiac MR motion artefacts using k-space data augmentation and curriculum learning. Med Image Anal 2019; 55:136-147. [PMID: 31055126 PMCID: PMC6688894 DOI: 10.1016/j.media.2019.04.009] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2018] [Revised: 02/13/2019] [Accepted: 04/17/2019] [Indexed: 11/17/2022]
Abstract
Good quality of medical images is a prerequisite for the success of subsequent image analysis pipelines. Quality assessment of medical images is therefore an essential activity, and for large population studies such as the UK Biobank (UKBB), manual identification of artefacts such as those caused by unanticipated motion is tedious and time-consuming. There is therefore an urgent need for automatic image quality assessment techniques. In this paper, we propose a method to automatically detect the presence of motion-related artefacts in cardiac magnetic resonance (CMR) cine images. We compare two deep learning architectures to classify poor-quality CMR images: 1) a 3D spatio-temporal Convolutional Neural Network (3D-CNN), and 2) a Long-term Recurrent Convolutional Network (LRCN). Although motion artefacts are common in routine clinical practice, the high-quality imaging of the UKBB, which comprises cross-sectional population data from volunteers who do not necessarily have health problems, creates a highly imbalanced classification problem. Due to the high number of good-quality images compared with the relatively low number of images with motion artefacts, we propose a novel data augmentation scheme based on synthetic artefact creation in k-space. We also investigate a learning approach using a predetermined curriculum based on synthetic artefact severity. We evaluate our pipeline on a subset of the UK Biobank data set consisting of 3510 CMR images. The LRCN architecture outperformed the 3D-CNN architecture and was able to detect 2D+time short-axis images with motion artefacts in less than 1 ms with high recall. We compare our approach to a range of state-of-the-art quality assessment methods. The novel data augmentation and curriculum learning approaches both improved classification performance, achieving an overall area under the ROC curve of 0.89.
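A sketch of the general k-space line-swapping idea behind such augmentation, under the assumption that mistimed phase-encoding lines from other cine frames mimic motion during acquisition; the paper's exact corruption model and severity schedule may differ:

```python
import numpy as np

def add_motion_artefact(cine, frame_idx, severity=0.3, seed=None):
    """cine: (n_frames, H, W) magnitude images; returns a corrupted frame.

    A fraction `severity` of phase-encoding lines is replaced with the
    corresponding lines from randomly chosen donor frames; severity is
    the natural knob for ordering a training curriculum.
    """
    rng = np.random.default_rng(seed)
    n_frames, H, _ = cine.shape
    k = np.fft.fftshift(np.fft.fft2(cine[frame_idx]))
    for line in rng.choice(H, size=int(severity * H), replace=False):
        donor = rng.integers(n_frames)   # donor frame (may equal frame_idx)
        k_donor = np.fft.fftshift(np.fft.fft2(cine[donor]))
        k[line, :] = k_donor[line, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```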
Collapse
|
36
|
Patch-based lung ventilation estimation using multi-layer supervoxels. Comput Med Imaging Graph 2019; 74:49-60. [PMID: 31009928 DOI: 10.1016/j.compmedimag.2019.04.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 03/31/2019] [Accepted: 04/02/2019] [Indexed: 01/03/2023]
Abstract
Patch-based approaches have received substantial attention in medical imaging over recent years. One of their potential applications is to provide more anatomically consistent ventilation maps estimated from dynamic lung CT. An assessment of regional lung function may act as a guide for radiotherapy, ensuring a more accurate treatment plan, which in turn could spare well-functioning parts of the lungs. We present a novel method for lung ventilation estimation from dynamic lung CT imaging, combining a supervoxel-based image representation with deformations estimated during deformable image registration performed between peak breathing phases. For this we propose a method that tracks intensity changes of previously extracted supervoxels. For evaluation, we calculate the correlation of the estimated ventilation maps with static ventilation images acquired from hyperpolarized xenon-129 MRI (XeMRI). We also investigate the influence of the image registration method used to estimate deformations between the peak breathing phases in the dynamic CT imaging. We show that our method performs favorably compared with other ventilation estimation methods commonly used in the field, independently of the image registration method applied to the dynamic CT. Due to its patch-based nature, our method may be physiologically more consistent with lung anatomy than previous methods relying on voxel-wise relationships: ventilation is estimated for supervoxels, which tend to group spatially close voxels with similar intensity values. The proposed method was evaluated on a dataset of three lung cancer patients undergoing radiotherapy treatment, resulting in an average correlation of 0.485 with XeMRI ventilation images, compared with 0.393 for the intensity-based approach, 0.231 for the Jacobian-based method and 0.386 for the Hounsfield-unit averaging method. Within the limitation of the small number of cases analyzed, these results suggest that the presented technique may be advantageous for CT-based ventilation estimation, and the higher correlations demonstrate the potential of our method to more accurately reflect lung physiology.
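A minimal sketch of supervoxel-wise, intensity-based ventilation estimation under stated assumptions: supervoxels are computed on the exhale volume, the inhale volume has already been warped into the exhale frame by some registration, and the classic HU-based specific-ventilation formula is used (the paper's exact estimator may differ):

```python
import numpy as np
from skimage.segmentation import slic

def supervoxel_ventilation(exhale_hu, inhale_warped_hu, n_segments=2000):
    """exhale_hu, inhale_warped_hu: 3-D HU volumes in the exhale frame."""
    norm = (exhale_hu - exhale_hu.min()) / (np.ptp(exhale_hu) + 1e-8)
    labels = slic(norm, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)              # supervoxels
    vent = np.zeros(exhale_hu.shape, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        hu_ex = exhale_hu[m].mean()
        hu_in = inhale_warped_hu[m].mean()
        # HU-based specific ventilation, one value per supervoxel
        vent[m] = 1000.0 * (hu_in - hu_ex) / (hu_ex * (1000.0 + hu_in))
    return vent
```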
Collapse
|
37
|
Explicit Topological Priors for Deep-Learning Based Image Segmentation Using Persistent Homology. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-20351-1_2] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
38
|
Segmentation of Vasculature From Fluorescently Labeled Endothelial Cells in Multi-Photon Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1-10. [PMID: 28796613 DOI: 10.1109/tmi.2017.2725639] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Vasculature is known to be of key biological significance, especially in the study of tumors. As such, considerable effort has been focused on the automated segmentation of vasculature in medical and pre-clinical images. The majority of vascular segmentation methods focus on blood-pool labeling; however, in the study of tumors in particular, it is of interest to visualize both the perfused and the non-perfused vasculature. Imaging vasculature by highlighting the endothelium provides a way to separate the morphology of vasculature from the potentially confounding factor of perfusion. Here, we present a method for the segmentation of tumor vasculature in 3D fluorescence microscopic images using signals from the endothelial and surrounding cells. We show that our method can provide complete and semantically meaningful segmentations of complex vasculature using a supervoxel-Markov random field approach. In terms of extracting meaningful segmentations of the vasculature, our method outperforms both a state-of-the-art method specific to these data and more classical vasculature segmentation methods.
Collapse
|
39
|
Functional Parameters Derived from Magnetic Resonance Imaging Reflect Vascular Morphology in Preclinical Tumors and in Human Liver Metastases. Clin Cancer Res 2018; 24:4694-4704. [PMID: 29959141 PMCID: PMC6171743 DOI: 10.1158/1078-0432.ccr-18-0033] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2018] [Revised: 05/11/2018] [Accepted: 06/25/2018] [Indexed: 12/13/2022]
Abstract
Purpose: Tumor vessels influence the growth and response of tumors to therapy. Imaging vascular changes in vivo using dynamic contrast-enhanced MRI (DCE-MRI) has shown potential to guide clinical decision making for treatment. However, quantitative MR imaging biomarkers of vascular function have not been widely adopted, partly because their relationship to structural changes in vessels remains unclear. We aimed to elucidate the relationships between vessel function and morphology in vivo. Experimental Design: Untreated preclinical tumors with different levels of vascularization were imaged sequentially using DCE-MRI and CT. Relationships between functional parameters from MR (iAUC, Ktrans, and BATfrac) and structural parameters from CT (vessel volume, radius, and tortuosity) were assessed using linear models. Tumors treated with anti-VEGFR2 antibody were then imaged to determine whether antiangiogenic therapy altered these relationships. Finally, functional-structural relationships were measured in 10 patients with liver metastases from colorectal cancer. Results: Functional parameters iAUC and Ktrans primarily reflected vessel volume in untreated preclinical tumors. The relationships varied spatially and with tumor vascularity, and were altered by antiangiogenic treatment. In human liver metastases, all three structural parameters were linearly correlated with iAUC and Ktrans. For iAUC, structural parameters also modified each other's effect. Conclusions: Our findings suggest that MR imaging biomarkers of vascular function are linked to structural changes in tumor vessels and that antiangiogenic therapy can affect this link. Our work also demonstrates the feasibility of three-dimensional functional-structural validation of MR biomarkers in vivo to improve their biological interpretation and clinical utility.
Collapse
|
40
|
Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity. Healthc Technol Lett 2018; 5:148-153. [PMID: 30800321 PMCID: PMC6372083 DOI: 10.1049/htl.2018.5064] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2018] [Accepted: 08/20/2018] [Indexed: 11/22/2022] Open
Abstract
The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
Collapse
|
41
|
Regional Multi-View Learning for Cardiac Motion Analysis: Application to Identification of Dilated Cardiomyopathy Patients. IEEE Trans Biomed Eng 2018; 66:956-966. [PMID: 30113891 DOI: 10.1109/tbme.2018.2865669] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE The aim of this paper is to describe an automated diagnostic pipeline that uses as input only ultrasound (US) data, but is at the same time informed by a training database of multimodal magnetic resonance (MR) and US image data. METHODS We create a multimodal cardiac motion atlas from three-dimensional (3-D) MR and 3-D US data, followed by multi-view machine learning algorithms to combine and extract the most meaningful cardiac descriptors for classification of dilated cardiomyopathy (DCM) patients using US data only. More specifically, we propose two algorithms based on multi-view linear discriminant analysis and multi-view Laplacian support vector machines (MvLapSVMs). Furthermore, a novel regional multi-view approach is proposed to exploit the regional relationships between the two modalities. RESULTS We evaluate our pipeline on the classification task of discriminating between normal subjects and DCM patients. Results show that the use of multi-view classifiers together with a cardiac motion atlas results in a statistically significant improvement in accuracy compared to classification without the multimodal atlas. MvLapSVM achieved the highest accuracy for both the global approach (92.71%) and the regional approach (94.32%). CONCLUSION Our work represents an important contribution to the understanding of cardiac motion, which is an important aid in the quantification of the contractility and function of the left ventricular myocardium. SIGNIFICANCE The intended workflow of the developed pipeline is to make use of the prior knowledge from the multimodal atlas to enable robust extraction of indicators from 3-D US images for detecting DCM patients.
Collapse
|
42
|
Whole tumor kinetics analysis of 18F-fluoromisonidazole dynamic PET scans of non-small cell lung cancer patients, and correlations with perfusion CT blood flow. EJNMMI Res 2018; 8:73. [PMID: 30069753 PMCID: PMC6070455 DOI: 10.1186/s13550-018-0430-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Accepted: 07/23/2018] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND To determine the relative abilities of compartment models to describe time-courses of 18F-fluoromisonidazole (FMISO) tumor uptake in patients with advanced-stage non-small cell lung cancer (NSCLC) imaged using dynamic positron emission tomography (dPET), and to study correlations between values of the blood flow-related parameter K1 obtained from fits of the models and an independent blood flow measure obtained from perfusion CT (pCT). NSCLC patients had a 45-min dynamic FMISO PET/CT scan followed by two static PET/CT acquisitions at 2 and 4 h post-injection. Perfusion CT scanning was then performed, consisting of a 45-s cine CT. Reversible and irreversible two-, three- and four-tissue compartment models were fitted to 30 time-activity curves (TACs) obtained for 15 whole-tumor structures in 9 patients, each imaged twice. Descriptions of the TACs provided by the models were compared using the Akaike and Bayesian information criteria (AIC and BIC) and leave-one-out cross-validation. The precision with which fitted model parameters estimated ground-truth uptake kinetics was determined using statistical simulation techniques. Blood flow from pCT was correlated with K1 from the PET kinetic models in addition to FMISO uptake levels. RESULTS An irreversible three-tissue compartment model provided the best description of whole-tumor FMISO uptake time-courses according to AIC, BIC, and cross-validation scores totaled across the TACs. The simulation study indicated that this model also provided more precise estimates of FMISO uptake kinetics than the other two- and three-tissue models. The K1 values obtained from fits of the irreversible three-tissue model correlated strongly with independent blood flow measurements obtained from pCT (Pearson r = 0.81). This correlation was stronger than that from K1 values obtained from fits of a two-tissue compartment model (r = 0.68), or from FMISO uptake levels in static images taken at time points from tracer injection through to 4 h later (maximum at 2 min, r = 0.70). CONCLUSIONS Time-courses of whole-tumor FMISO uptake by advanced-stage NSCLC are described best by an irreversible three-tissue compartment model. The K1 values obtained from fits of this model correlated strongly with independent blood flow measurements obtained from perfusion CT (r = 0.81).
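For illustration, the following sketch fits a (simpler) irreversible two-tissue model to a tumor time-activity curve and scores it with AIC, the kind of model ranking performed above; uniform time sampling and a separately measured plasma input function Cp are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def irreversible_2tc(t, K1, k2, k3, Cp):
    """Tissue curve of an irreversible two-tissue model via convolution.

    dC_free/dt = K1*Cp - (k2 + k3)*C_free;  dC_trap/dt = k3*C_free.
    Assumes uniformly sampled t and Cp given on the same grid.
    """
    dt = t[1] - t[0]
    irf = K1 * np.exp(-(k2 + k3) * t)            # free-compartment IRF
    C_free = np.convolve(Cp, irf)[: len(t)] * dt
    C_trap = k3 * np.cumsum(C_free) * dt         # trapped compartment
    return C_free + C_trap

def fit_and_aic(t, tac, Cp):
    model = lambda t, K1, k2, k3: irreversible_2tc(t, K1, k2, k3, Cp)
    popt, _ = curve_fit(model, t, tac, p0=(0.1, 0.1, 0.01),
                        bounds=(0.0, np.inf))
    rss = np.sum((tac - model(t, *popt)) ** 2)
    n, k = len(tac), len(popt)
    aic = n * np.log(rss / n) + 2 * k            # lower is better
    return popt, aic
```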
Collapse
|
43
|
GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications. J Med Imaging (Bellingham) 2018; 5:024001. [PMID: 29662918 PMCID: PMC5886381 DOI: 10.1117/1.jmi.5.2.024001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2017] [Accepted: 03/13/2018] [Indexed: 11/14/2022] Open
Abstract
Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and to provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization of organ motion, which, though computationally efficient, rules out application to intrinsically more complex organ motions such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated, discontinuity-preserving prior for motions such as sliding. More precisely, we replace Gaussian smoothing with fast, structure-preserving guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction compared with Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
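The core substitution described above can be sketched as follows: each displacement component is smoothed with a guided filter (He et al.) steered by the anatomical image, so smoothing weakens across intensity edges such as sliding interfaces. The box-filter implementation, window radius, and epsilon are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=7, eps=1e-2):
    """Edge-preserving smoothing of `src`, steered by `guide` (n-D)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)               # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def regularise_displacement(fixed, disp):
    """disp: (..., ndim) field; filter each component instead of Gaussian."""
    g = (fixed - fixed.mean()) / (fixed.std() + 1e-8)
    return np.stack([guided_filter(g, disp[..., c])
                     for c in range(disp.shape[-1])], axis=-1)
```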
Collapse
|
44
|
A DCE-MRI Driven 3-D Reaction-Diffusion Model of Solid Tumor Growth. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:724-732. [PMID: 29533893 DOI: 10.1109/tmi.2017.2779811] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2024]
Abstract
Predicting tumor growth and its response to therapy remains a major challenge in cancer research and strongly relies on tumor growth models. In this paper, we introduce, calibrate, and verify a novel image-driven reaction-diffusion model of avascular tumor growth. The model allows for proliferation, death and spread of tumor cells, and accounts for nutrient distribution and hypoxia. It is constrained by longitudinal time series of dynamic contrast-enhanced MRI (DCE-MRI) images. Tumor-specific parameters are estimated from two early time points and used to predict the spatio-temporal evolution of the tumor volume and cell densities at later time points. We first test our parameter estimation approach on synthetic data from 15 generated tumors. Our in silico study resulted in small volume errors (<5%) and high Dice overlaps (>97%), showing that model parameters can be successfully recovered and used to accurately predict tumor growth. Encouraged by these results, we apply our model to seven pre-clinical cases of breast carcinoma. We show promising preliminary results, especially for estimates at early time points. Processes such as angiogenesis and apoptosis should be included to further improve predictions at later time points.
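A toy forward step of a reaction-diffusion (Fisher-Kolmogorov-type) growth model of this family, with diffusion coefficient D and proliferation rate rho playing the role of the tumor-specific parameters estimated from the first two time points; the paper's nutrient and hypoxia coupling is omitted here for brevity:

```python
import numpy as np

def grow(c, D=0.1, rho=0.05, dt=0.1, steps=100, dx=1.0):
    """Evolve a normalised tumour cell density c in [0, 1] on a 2-D grid."""
    for _ in range(steps):
        lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
               np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4.0 * c) / dx**2
        # diffusion (spread) + logistic proliferation (saturates at 1)
        c = np.clip(c + dt * (D * lap + rho * c * (1.0 - c)), 0.0, 1.0)
    return c
```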
Collapse
|
45
|
Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering. JOURNAL OF ELECTRONIC IMAGING 2017; 26:061607. [PMID: 29225433 PMCID: PMC5722202 DOI: 10.1117/1.jei.26.6.061607] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this work we propose to combine a supervoxel-based image representation with graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the voxel-wise graph construction, the use of graph cuts in this context has previously been limited mainly to 2D applications. Our work overcomes some of these limitations by posing the problem on a graph created from adjacent supervoxels, reducing the number of nodes in the graph from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3D data. We further show that applying a relaxed graph representation of the image, followed by guided image filtering of the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate registration and anatomically plausible estimates of the deformations. Evaluation on a publicly available computed tomography lung image dataset (www.dir-lab.com) shows that our approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving an average target registration error of 1.16 mm per landmark.
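A sketch of the graph-construction step that gives the speedup described above: nodes are supervoxels and edges connect face-adjacent supervoxels, so a discrete optimiser works on thousands of nodes rather than millions of voxels. Supervoxel parameters are assumptions, and the graph-cuts optimisation itself is left to a solver and not shown:

```python
import numpy as np
from skimage.segmentation import slic

def supervoxel_adjacency(volume, n_segments=5000):
    """Return supervoxel labels and the set of adjacent label pairs."""
    norm = (volume - volume.min()) / (np.ptp(volume) + 1e-8)
    labels = slic(norm, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    edges = set()
    for axis in range(volume.ndim):
        a = np.moveaxis(labels, axis, 0)[:-1].ravel()
        b = np.moveaxis(labels, axis, 0)[1:].ravel()
        touching = a != b                 # face-adjacent, different labels
        pairs = np.sort(np.stack([a[touching], b[touching]], axis=1), axis=1)
        edges.update(map(tuple, pairs))
    return labels, edges
```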
Collapse
|
46
|
Automated mediastinal lymph node detection from CT volumes based on intensity targeted radial structure tensor analysis. J Med Imaging (Bellingham) 2017; 4:044502. [PMID: 29152534 PMCID: PMC5683200 DOI: 10.1117/1.jmi.4.4.044502] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2017] [Accepted: 10/16/2017] [Indexed: 01/10/2023] Open
Abstract
This paper presents a local intensity structure analysis based on an intensity-targeted radial structure tensor (ITRST), and a blob-like structure enhancement filter built on it (the ITRST filter), for mediastinal lymph node detection in chest computed tomography (CT) volumes. Although a filter based on conventional radial structure tensor (RST) analysis can be utilized to detect lymph nodes, some lymph nodes adjacent to regions with extremely high or low intensities cannot be detected. We therefore propose the ITRST filter, which integrates prior knowledge of the detection target's intensity range into the RST filter. Our lymph node detection algorithm consists of two steps: (1) obtaining candidate regions using the ITRST filter, and (2) removing false positives (FPs) using a support vector machine classifier. We evaluated the lymph node detection performance of the ITRST filter on 47 contrast-enhanced chest CT volumes and compared it with the RST and Hessian filters. The detection rate of the ITRST filter was 84.2% with 9.1 FPs/volume for lymph nodes whose short axis was at least 10 mm, outperforming the RST and Hessian filters.
Collapse
|
47
|
A level-set approach to joint image segmentation and registration with application to CT lung imaging. Comput Med Imaging Graph 2017; 65:58-68. [PMID: 28705410 PMCID: PMC5885990 DOI: 10.1016/j.compmedimag.2017.06.003] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2017] [Revised: 06/06/2017] [Accepted: 06/12/2017] [Indexed: 11/19/2022]
Abstract
Highlights
• A simple novel joint image registration and segmentation method is presented.
• The new algorithm is based on a level-set formulation.
• The algorithm merges Chan–Vese segmentation with active dense displacement estimation.
• The numerical implementation is evaluated on a publicly available lung CT data set.
• Improved registration and segmentation compared with existing methods is shown.
Automated analysis of structural imaging such as lung computed tomography (CT) plays an increasingly important role in medical imaging applications. Despite significant progress in the development of image registration and segmentation methods, lung registration and segmentation remain challenging tasks. In this paper, we present a novel image registration and segmentation approach, for which we develop a new mathematical formulation to jointly segment and register three-dimensional lung CT volumes. The new algorithm is based on a level-set formulation, which merges a classic Chan–Vese segmentation with active dense displacement field estimation. Combining registration with segmentation has two key advantages: it eliminates the problem of initializing surface-based segmentation methods, and it incorporates prior knowledge into the registration in a mathematically justified manner, while remaining computationally attractive. We evaluate our framework on a publicly available lung CT data set to demonstrate the properties of the new formulation. The presented results show improved accuracy for our joint segmentation and registration algorithm compared with registration and segmentation performed separately.
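The segmentation half of such a joint energy can be sketched with off-the-shelf tools; the snippet below runs a classic Chan–Vese fit on a single 2-D slice via scikit-image (the file name and mu are placeholders). The paper minimises this term jointly with a dense displacement field, which is not reproduced here:

```python
from skimage.io import imread
from skimage.segmentation import chan_vese

slice_2d = imread("lung_slice.png", as_gray=True)   # hypothetical input
mask = chan_vese(slice_2d, mu=0.1)                  # piecewise-constant fit
# A joint scheme would alternate between (1) updating this segmentation
# on the fixed image and (2) updating a displacement field that maps the
# moving image's segmentation onto it, under a single level-set energy.
```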
Collapse
|
48
|
Advances and challenges in deformable image registration: From image fusion to complex motion modelling. Med Image Anal 2016; 33:145-148. [DOI: 10.1016/j.media.2016.06.031] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2016] [Revised: 06/17/2016] [Accepted: 06/17/2016] [Indexed: 10/21/2022]
|
49
|
Comparison of linear and nonlinear implementation of the compartmental tissue uptake model for dynamic contrast-enhanced MRI. Magn Reson Med 2016; 77:2414-2423. [PMID: 27605429 PMCID: PMC5484345 DOI: 10.1002/mrm.26324] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2016] [Revised: 05/10/2016] [Accepted: 06/08/2016] [Indexed: 12/14/2022]
Abstract
Purpose: Fitting tracer kinetic models using linear methods is much faster than using their nonlinear counterparts, although this often comes at the expense of reduced accuracy and precision. The aim of this study was to derive the linear compartmental tissue uptake (CTU) model and compare its performance with that of its nonlinear version with respect to percentage error and precision. Theory and Methods: The linear and nonlinear CTU models were initially compared using simulations with varying noise and temporal sampling. Subsequently, the clinical applicability of the linear model was demonstrated on 14 patients with locally advanced cervical cancer examined with dynamic contrast-enhanced magnetic resonance imaging. Results: Simulations revealed equal percentage error and precision when noise was within clinically achievable ranges (contrast-to-noise ratio >10). The linear method was significantly faster than the nonlinear method, with a minimum speedup of around 230 across all tested sampling rates. Clinical analysis revealed that parameters estimated using the linear and nonlinear CTU models were highly correlated (ρ ≥ 0.95). Conclusion: The linear CTU model is computationally more efficient and more stable against temporal downsampling, whereas the nonlinear method is more robust to variations in noise. The two methods may be used interchangeably within clinically achievable ranges of temporal sampling and noise.
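To illustrate why the linear variant is so much faster, the sketch below applies the standard linearisation trick to the simpler Tofts model (not the exact CTU equations): integrating the kinetic ODE turns parameter estimation into a single linear least-squares solve instead of an iterative nonlinear fit. Trapezoidal integration and uniform concentration units are assumptions:

```python
import numpy as np

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral of y(t), starting at 0."""
    return np.concatenate([[0.0],
                           np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(t))])

def tofts_linear_fit(t, ct, cp):
    """Murase-style linear fit of Ct(t) = Ktrans*int(Cp) - kep*int(Ct)."""
    A = np.stack([cumtrapz0(cp, t), -cumtrapz0(ct, t)], axis=1)
    (ktrans, kep), *_ = np.linalg.lstsq(A, ct, rcond=None)
    return ktrans, kep
```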
Collapse
|
50
|
A DCE-MRI imaging-based model for simulation of vascular tumour growth. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2016:5949-5952. [PMID: 28269607 DOI: 10.1109/embc.2016.7592083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Imaging-based modelling of tumour growth can serve as a powerful tool to understand and predict tumour evolution and its response to therapy. The purpose of this study was to introduce, calibrate and evaluate a multi-scale model of vascular tumour growth. The model allows for proliferation, death and spatial spread of tumour cells, as well as for new vessel creation. Both the calibration and the evaluation of the tumour growth model were performed using pre-clinical longitudinal time series of dynamic contrast-enhanced magnetic resonance imaging of colon carcinoma. Tumour-specific model parameters, extracted from the images at two subsequent time points, were included in the model to predict the spatio-temporal evolution of the tumour at a third point in time. Simulation results for three pre-clinical cases demonstrated the model's ability to simulate the cellular as well as the 2D spatial evolution of the tumour.
Collapse
|