1. Tzitzimpasis P, Ries M, Raaymakers BW, Zachiu C. Generalized div-curl based regularization for physically constrained deformable image registration. Sci Rep 2024;14:15002. PMID: 38951683; PMCID: PMC11217375; DOI: 10.1038/s41598-024-65896-3.
Abstract
Variational image registration methods commonly employ a similarity metric and a regularization term that renders the minimization problem well-posed. However, many frequently used regularizations, such as smoothness or curvature, do not necessarily reflect the physics underlying anatomical deformations. This, in turn, can make the accurate estimation of complex deformations particularly challenging. Here, we present a new, highly flexible regularization inspired by the physics of fluid dynamics, which allows independent penalties to be applied to the divergence and curl of the deformations and/or their nth-order derivatives. The complexity of the proposed generalized div-curl regularization makes the problem particularly challenging for conventional optimization techniques. To this end, we develop a transformation model and an optimization scheme that use the divergence and curl components of the deformation as control parameters for the registration. We demonstrate that the original unconstrained minimization problem reduces to a constrained problem, for which we propose the use of the augmented Lagrangian method. In doing so, the equations of motion simplify greatly and become manageable. Our experiments indicate that the proposed framework can be applied to a variety of registration problems and produces highly accurate deformations with the desired physical properties.
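The divergence and curl quantities this abstract penalizes can be computed directly with finite differences. The sketch below is my own illustration, not the authors' code; the penalty weights `alpha` and `beta` are hypothetical placeholders for whatever weighting a registration scheme would use:

```python
import numpy as np

def div_curl_2d(u, v, spacing=1.0):
    """Finite-difference divergence and (scalar) curl of a 2D deformation
    field with components u (x-displacement) and v (y-displacement),
    both shaped (H, W) with axis 0 = y and axis 1 = x."""
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    divergence = du_dx + dv_dy
    curl = dv_dx - du_dy
    return divergence, curl

def div_curl_penalty(u, v, alpha=1.0, beta=1.0):
    """Zeroth-order div-curl penalty: alpha*||div||^2 + beta*||curl||^2,
    summed over the field (hypothetical weighting scheme)."""
    d, c = div_curl_2d(u, v)
    return alpha * np.sum(d ** 2) + beta * np.sum(c ** 2)

# A rigid rotation flow (u = -y, v = x) is divergence-free with constant curl.
yy, xx = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
rot_u, rot_v = -yy.astype(float), xx.astype(float)
div_r, curl_r = div_curl_2d(rot_u, rot_v)
```

With `beta=0`, this penalty vanishes on the rotation field above, illustrating how separate weights let a registration scheme tolerate rotational motion while penalizing volume change (or vice versa).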
Affiliation(s)
- Paris Tzitzimpasis
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
- Mario Ries
- Imaging Division, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
- Bas W Raaymakers
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
- Cornel Zachiu
- Department of Radiotherapy, UMC Utrecht, 3584 CX, Utrecht, The Netherlands
2. Hua Y, Xu K, Yang X. Variational image registration with learned prior using multi-stage VAEs. Comput Biol Med 2024;178:108785. PMID: 38925089; DOI: 10.1016/j.compbiomed.2024.108785.
Abstract
Variational Autoencoders (VAEs) combine an efficient variational inference technique with a generative network. Owing to the uncertainty estimates provided by variational inference, VAEs have been applied to medical image registration. However, a critical problem in VAEs is that a simple prior cannot provide suitable regularization, which leads to a mismatch between the variational posterior and the prior. An optimal prior can close the gap between the true and variational posteriors. In this paper, we propose a multi-stage VAE to learn the optimal prior, which is the aggregated posterior. A lightweight VAE is used to generate the aggregated posterior as a whole; this is an effective way to estimate the distribution of the high-dimensional aggregated posterior that commonly arises in VAE-based medical image registration. A factorized telescoping classifier is trained to estimate the density ratio between a simple given prior and the aggregated posterior, with the aim of calculating the KL divergence between the variational and aggregated posteriors more accurately. We analyze the KL divergence and find that the finer the factorization, the smaller the KL divergence. However, too fine a partition is not conducive to registration accuracy. Moreover, the diagonal assumption on the variational posterior's covariance ignores the relationships between latent variables in image registration. To address this issue, we learn a covariance matrix with low-rank structure that captures correlations across the dimensions of the variational posterior. The covariance matrix is further used as a measure to reduce the uncertainty of deformation fields. Experimental results on four public medical image datasets demonstrate that our proposed method outperforms other methods in negative log-likelihood (NLL) and achieves better registration accuracy.
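For context, the KL term that such VAE-based registration models build on has a closed form when both distributions are factorized (diagonal) Gaussians. The sketch below shows that standard closed form, not the paper's multi-stage density-ratio estimator:

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """Closed-form KL(q || p) between two diagonal Gaussians,
    summed over dimensions. All inputs are 1-D arrays of equal length."""
    mu_q, var_q = np.asarray(mu_q, float), np.asarray(var_q, float)
    mu_p, var_p = np.asarray(mu_p, float), np.asarray(var_p, float)
    return 0.5 * np.sum(
        np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )
```

When the prior poorly matches the posterior (e.g. means differ), this term grows, which is exactly the mismatch the learned aggregated-posterior prior is meant to reduce.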
Affiliation(s)
- Yong Hua
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
- Kangrong Xu
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
- Xuan Yang
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, Guangdong, China
3. Hernandez M, Ramon Julvez U. Insights into traditional Large Deformation Diffeomorphic Metric Mapping and unsupervised deep-learning for diffeomorphic registration and their evaluation. Comput Biol Med 2024;178:108761. PMID: 38908357; DOI: 10.1016/j.compbiomed.2024.108761.
Abstract
This paper explores the connections between traditional Large Deformation Diffeomorphic Metric Mapping methods and unsupervised deep-learning approaches for non-rigid registration, particularly emphasizing diffeomorphic registration. The study provides useful insights and establishes connections between the methods, thereby facilitating a profound understanding of the methodological landscape. The methods considered in our study are extensively evaluated on T1w MRI using traditional NIREP and Learn2Reg OASIS evaluation protocols with a focus on fairness, to establish equitable benchmarks and facilitate informed comparisons. Through a comprehensive analysis of the results, we address key questions, including the intricate relationship between accuracy and transformation quality in performance, the disentanglement of the influence of registration ingredients on performance, and the determination of benchmark methods and baselines. We offer valuable insights into the strengths and limitations of both traditional and deep-learning methods, shedding light on their comparative performance and guiding future advancements in the field.
Affiliation(s)
- Monica Hernandez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain
- Ubaldo Ramon Julvez
- Computer Science Department, University of Zaragoza, Spain; Aragon Institute on Engineering Research, Spain
4. Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, Williams-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, MacDonald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. eLife 2024;12:RP91398. PMID: 38896568; PMCID: PMC11186625; DOI: 10.7554/elife.91398.
Abstract
We present open-source tools for three-dimensional (3D) analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (1) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (2) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated to those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite 'FreeSurfer' (https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools).
Affiliation(s)
- Harshvardhan Gazula
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Henry FJ Tregidgo
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Benjamin Billot
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Yael Balbastre
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Rogeny Herisse
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Lucas J Deden-Binder
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Adria Casamitjana
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Biomedical Imaging Group, Universitat Politècnica de Catalunya, Barcelona, Spain
- Erica J Melief
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Caitlin S Latimer
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mitchell D Kilgore
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Mark Montine
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Eleanor Robinson
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Emily Blackburn
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Michael S Marshall
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Theresa R Connors
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Derek H Oakley
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Matthew P Frosch
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Sean I Young
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Koen Van Leemput
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Adrian V Dalca
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
- Bruce Fischl
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- C Dirk Keene
- BioRepository and Integrated Neuropathology (BRaIN) Laboratory and Precision Neuropathology Core, UW School of Medicine, Seattle, United States
- Bradley T Hyman
- Massachusetts Alzheimer Disease Research Center, MGH and Harvard Medical School, Charlestown, United States
- Juan E Iglesias
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, Charlestown, United States
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, United States
5. Shahsavarani S, Lopez F, Ibarra-Castanedo C, Maldague XPV. Advanced Image Stitching Method for Dual-Sensor Inspection. Sensors (Basel) 2024;24:3778. PMID: 38931562; PMCID: PMC11207425; DOI: 10.3390/s24123778.
Abstract
Efficient image stitching plays a vital role in the Non-Destructive Evaluation (NDE) of infrastructures, where an essential challenge is precisely visualizing defects within large structures. The existing literature predominantly relies on high-resolution, close-distance images to detect surface or subsurface defects. While the automatic detection of all defect types represents a significant advancement, understanding the location and continuity of defects is imperative, and some defects may be too small to capture from a considerable distance. Consequently, multiple image sequences are captured and processed using image stitching techniques. Additionally, fusion strategies for visible and infrared data prove essential for acquiring comprehensive information to detect defects across vast structures. Hence, there is a need for an effective image stitching method suited to infrared and visible images of structures and industrial assets, facilitating enhanced visualization and automated inspection for structural maintenance. This paper proposes an advanced image stitching method for dual-sensor inspections. The proposed technique employs self-supervised feature detection to improve the quality and quantity of detected features, and a graph neural network is then employed for robust feature matching. Ultimately, the proposed method produces stitched images that effectively eliminate perspective distortion in both infrared and visible images, a prerequisite for subsequent multi-modal fusion strategies. Our results substantially enhance the visualization capabilities for infrastructure inspection, and comparative analysis with popular state-of-the-art methods confirms the effectiveness of the proposed approach.
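Stitching pipelines like the one described ultimately map matched feature points (and pixels) between views through a projective transform. The sketch below is a generic illustration of applying a 3x3 homography to 2-D points, not the paper's self-supervised detector or graph-based matcher:

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography H (the projective
    transform used when aligning overlapping views for stitching).
    pts: (N, 2) array-like; returns an (N, 2) array."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = homog @ H.T                               # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]              # back to Cartesian

# A pure translation by (tx, ty) expressed as a homography.
tx, ty = 10.0, -3.0
H_t = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)
out = apply_homography(H_t, [[0, 0], [2, 5]])
```

In a real pipeline, `H` would be estimated (e.g. via RANSAC) from the matched feature pairs rather than constructed by hand.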
Affiliation(s)
- Sara Shahsavarani
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Fernando Lopez
- TORNGATS, 200 Boul. du Parc-Technologique, Quebec City, QC G1P 4S3, Canada
- Clemente Ibarra-Castanedo
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
- Xavier P. V. Maldague
- Computer Vision and Systems Laboratory (CVSL), Department of Electrical and Computer Engineering, Faculty of Science and Engineering, Laval University, Quebec City, QC G1V 0A6, Canada
6. Lin YH, Chen LW, Wang HJ, Hsieh MS, Lu CW, Chuang JH, Chang YC, Chen JS, Chen CM, Lin MW. Quantification of Resection Margin following Sublobar Resection in Lung Cancer Patients through Pre- and Post-Operative CT Image Comparison: Utilizing a CT-Based 3D Reconstruction Algorithm. Cancers (Basel) 2024;16:2181. PMID: 38927887; PMCID: PMC11201844; DOI: 10.3390/cancers16122181.
Abstract
Sublobar resection has emerged as a standard treatment option for early-stage peripheral non-small cell lung cancer. Achieving an adequate resection margin is crucial to prevent local tumor recurrence. However, gross measurement of the resection margin may lack accuracy due to the elasticity of lung tissue and interobserver variability. Therefore, this study aimed to develop an objective measurement method, the CT-based 3D reconstruction algorithm, to quantify the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. An automated subvascular matching technique was first developed to ensure accuracy and reproducibility in the matching process. Following the extraction of matched feature points, another key technique involves calculating the displacement field within the image. This is particularly important for mapping discontinuous deformation fields around the surgical resection area. A transformation based on thin-plate spline is used for medical image registration. Upon completing the final step of image registration, the distance at the resection margin was measured. After developing the CT-based 3D reconstruction algorithm, we included 12 cases for resection margin distance measurement, comprising 4 right middle lobectomies, 6 segmentectomies, and 2 wedge resections. The outcomes obtained with our method revealed that the target registration error for all cases was less than 2.5 mm. Our method demonstrated the feasibility of measuring the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. Further validation with a multicenter, large cohort, and analysis of clinical outcome correlation is necessary in future studies.
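The target registration error reported here is simply the Euclidean distance between corresponding landmarks after registration. A minimal sketch of that computation (my illustration with made-up points, not the authors' pipeline):

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Per-landmark Euclidean distance between corresponding points
    after registration; the mean (or max) is a common summary.
    Both inputs: (N, 3) array-like of landmark coordinates in mm."""
    fixed_pts = np.asarray(fixed_pts, float)
    warped_pts = np.asarray(warped_pts, float)
    return np.linalg.norm(fixed_pts - warped_pts, axis=1)

# Hypothetical landmarks: one registered perfectly, one off by 1 mm in z.
fixed = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]
warped = [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
tre = target_registration_error(fixed, warped)
```

A statement like "TRE for all cases was less than 2.5 mm" then corresponds to the summary statistic over such per-landmark distances staying below that bound.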
Affiliation(s)
- Yu-Hsuan Lin
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Li-Wei Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Hao-Jen Wang
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Min-Shu Hsieh
- Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chao-Wen Lu
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jen-Hao Chuang
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jin-Shing Chen
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Mong-Wei Lin
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
7. Wang H, Ni D, Wang Y. Recursive Deformable Pyramid Network for Unsupervised Medical Image Registration. IEEE Trans Med Imaging 2024;43:2229-2240. PMID: 38319758; DOI: 10.1109/tmi.2024.3362968.
Abstract
Complicated deformation problems are frequently encountered in medical image registration tasks. Although various advanced registration models have been proposed, accurate and efficient deformable registration remains challenging, especially for handling large volumetric deformations. To this end, we propose a novel recursive deformable pyramid (RDP) network for unsupervised non-rigid registration. Our network is a pure convolutional pyramid that fully exploits the advantages of the pyramid structure itself, without relying on any heavyweight attention mechanisms or transformers. In particular, our network leverages a step-by-step recursion strategy, integrating high-level semantics to predict the deformation field from coarse to fine while ensuring the plausibility of the deformation field. Meanwhile, owing to the recursive pyramid strategy, our network can effectively perform deformable registration without separate affine pre-alignment. We compare the RDP network with several existing registration methods on three public brain magnetic resonance imaging (MRI) datasets: LPBA, Mindboggle and IXI. Experimental results demonstrate that our network consistently outperforms the state of the art with respect to Dice score, average symmetric surface distance, Hausdorff distance, and Jacobian-based metrics. Even for data without affine pre-alignment, our network maintains satisfactory performance in compensating for large deformations. The code is publicly available at https://github.com/ZAX130/RDP.
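The Jacobian metric mentioned can be made concrete: for a displacement field d, the determinant of the Jacobian of the mapping x -> x + d(x) should stay positive everywhere for a folding-free (plausible) deformation. A 2-D finite-difference sketch (illustrative only, not the RDP implementation):

```python
import numpy as np

def jacobian_determinant_2d(u, v, spacing=1.0):
    """Determinant of the Jacobian of x -> x + d(x) for a 2-D
    displacement field with components u (x) and v (y), shape (H, W).
    Values <= 0 indicate local folding of the deformation."""
    du_dy, du_dx = np.gradient(u, spacing)
    dv_dy, dv_dx = np.gradient(v, spacing)
    return (1.0 + du_dx) * (1.0 + dv_dy) - du_dy * dv_dx

# Zero displacement is the identity mapping: determinant 1 everywhere.
z = np.zeros((4, 4))
det_id = jacobian_determinant_2d(z, z)

# Uniform stretching along x by factor 1.5: determinant 1.5 everywhere.
yy, xx = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
det_s = jacobian_determinant_2d(0.5 * xx.astype(float), np.zeros((4, 4)))
```

Counting voxels with non-positive determinant is a common way to report deformation quality alongside overlap metrics such as Dice.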
8. Wodzinski M, Marini N, Atzori M, Müller H. RegWSI: Whole slide image registration using combined deep feature- and intensity-based methods: Winner of the ACROBAT 2023 challenge. Comput Methods Programs Biomed 2024;250:108187. PMID: 38657383; DOI: 10.1016/j.cmpb.2024.108187.
Abstract
BACKGROUND AND OBJECTIVE: The automatic registration of differently stained whole slide images (WSIs) is crucial for improving diagnosis and prognosis by fusing complementary information emerging from different visible structures. It is also useful to quickly transfer annotations between consecutive or restained slides, thus significantly reducing the annotation time and associated costs. Nevertheless, the slide preparation is different for each stain and the tissue undergoes complex and large deformations. Therefore, a robust, efficient, and accurate registration method is highly desired by the scientific community and hospitals specializing in digital pathology.
METHODS: We propose a two-step hybrid method consisting of (i) a deep learning- and feature-based initial alignment algorithm, and (ii) intensity-based nonrigid registration using instance optimization. The proposed method does not require any fine-tuning to a particular dataset and can be used directly for any desired tissue type and stain. The registration time is low, allowing efficient registration even for large datasets. The method was proposed for the ACROBAT 2023 challenge organized during the MICCAI 2023 conference and scored 1st place. The method is released as open-source software.
RESULTS: The proposed method is evaluated using three open datasets: (i) the Automatic Nonrigid Histological Image Registration Dataset (ANHIR), (ii) the Automatic Registration of Breast Cancer Tissue Dataset (ACROBAT), and (iii) the Hybrid Restained and Consecutive Histological Serial Sections Dataset (HyReCo). The target registration error (TRE) is used as the evaluation metric. We compare the proposed algorithm to other state-of-the-art solutions, showing considerable improvement. Additionally, we perform several ablation studies concerning the resolution used for registration and the robustness and stability of the initial alignment. The method achieves the most accurate results for the ACROBAT dataset and cell-level registration accuracy for the restained slides from the HyReCo dataset, and is among the best methods evaluated on the ANHIR dataset.
CONCLUSIONS: The article presents an automatic and robust registration method that outperforms other state-of-the-art solutions. The method does not require any fine-tuning to a particular dataset and can be used out-of-the-box for numerous types of microscopic images. The method is incorporated into the DeeperHistReg framework, allowing others to directly use it to register, transform, and save WSIs at any desired pyramid level (resolution up to 220k x 220k). We provide free access to the software, and the results are fully and easily reproducible. The proposed method is a significant contribution to improving WSI registration quality, thus advancing the field of digital pathology.
Affiliation(s)
- Marek Wodzinski
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland
- Niccolò Marini
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland
- Manfredo Atzori
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Department of Neuroscience, University of Padova, Padova, Italy
- Henning Müller
- Institute of Informatics, University of Applied Sciences Western Switzerland, Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
9. Lu X, Liang X, Liu W, Miao X, Guan X. ReeGAN: MRI image edge-preserving synthesis based on GANs trained with misaligned data. Med Biol Eng Comput 2024;62:1851-1868. PMID: 38396277; DOI: 10.1007/s11517-024-03035-w.
Abstract
Different modalities of magnetic resonance imaging (MRI), a crucial medical examination technique, complement each other, offering multi-angle and multi-dimensional insights into the body's internal structures. Research on MRI cross-modality conversion is therefore of great significance, and many innovative techniques have been explored. However, most methods are trained on well-aligned data, and the impact of misaligned data has not received sufficient attention. Additionally, many methods focus on transforming the entire image and ignore crucial edge information. To address these challenges, we propose a generative adversarial network based on multi-feature fusion, which effectively preserves edge information while training on noisy data. Notably, we treat images subjected to limited-range random transformations as noisy labels and use a small auxiliary registration network to help the generator adapt to the noise distribution. Moreover, we inject auxiliary edge information to improve the quality of the synthesized target-modality images. Our goal is to find the best solution for cross-modality conversion. Comprehensive experiments and ablation studies demonstrate the effectiveness of the proposed method.
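The auxiliary edge information could, in the simplest case, be a gradient-magnitude map. The sketch below shows one generic way to extract such an edge map; the paper's actual edge extractor is not specified in the abstract:

```python
import numpy as np

def edge_magnitude(img):
    """Gradient-magnitude edge map via central finite differences:
    a simple, generic stand-in for an edge-extraction step."""
    gy, gx = np.gradient(np.asarray(img, float))  # derivatives along y and x
    return np.hypot(gx, gy)                       # per-pixel gradient magnitude

# A vertical step edge responds along the boundary columns only.
step = np.zeros((5, 6))
step[:, 3:] = 1.0
edges = edge_magnitude(step)
```

Feeding such a map to the generator alongside the intensity image is one plausible way "auxiliary edge information" can be injected; practical systems often use learned or Sobel-style edge operators instead.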
Affiliation(s)
- Xiangjiang Lu
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiaoshuang Liang
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Wenjing Liu
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xiuxia Miao
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
- Xianglong Guan
- Guangxi Key Lab of Multi-Source Information Mining & Security, School of Computer Science and Engineering & School of Software, Guangxi Normal University, Guilin, 541004, China
10. Rao F, Lyu T, Feng Z, Wu Y, Ni Y, Zhu W. A landmark-supervised registration framework for multi-phase CT images with cross-distillation. Phys Med Biol 2024;69:115059. PMID: 38768601; DOI: 10.1088/1361-6560/ad4e01.
Abstract
Objective: Multi-phase computed tomography (CT) has become a leading modality for identifying hepatic tumors. Nevertheless, misalignment between the images of different phases poses a challenge to accurately identifying and analyzing the patient's anatomy. Conventional registration methods typically concentrate on either intensity-based or landmark-based features in isolation, thereby limiting registration accuracy.
Method: We establish a nonrigid cycle-registration network that leverages semi-supervised learning, introducing into the loss function a point-distance term based on the Euclidean distance between registered landmark points. Additionally, a cross-distillation strategy is proposed for network training to further improve registration performance; it incorporates response-based knowledge concerning the distances between feature points.
Results: We conducted experiments on multi-centered liver CT datasets to evaluate the performance of the proposed method. The results demonstrate that our method outperforms baseline methods in terms of target registration error. Additionally, Dice scores of the warped tumor masks were calculated; our method consistently achieved the highest scores among all compared methods, reaching 82.9% and 82.5% on the hepatocellular carcinoma and intrahepatic cholangiocarcinoma datasets, respectively.
Significance: The superior registration performance indicates the method's potential to serve as an important tool in hepatic tumor identification and analysis.
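The Dice score used to evaluate the warped tumor masks has a standard definition; a minimal sketch with toy masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), in [0, 1]."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy masks overlapping in one of two foreground voxels each.
a = np.array([[1, 1, 0], [0, 0, 0]])
b = np.array([[1, 0, 0], [1, 0, 0]])
```

Reported values such as 82.9% correspond to `dice_score` of roughly 0.829 between the warped and reference tumor masks.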
Affiliation(s)
- Fan Rao
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Tianling Lyu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Zhan Feng
- Department of Radiology, College of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou 311100, People's Republic of China
- Yuanfeng Wu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Yangfan Ni
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Wentao Zhu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
11. Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024;116:102401. PMID: 38795690; DOI: 10.1016/j.compmedimag.2024.102401.
Abstract
Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics: the lesions vary in size and quantity, spanning from tiny nodules to substantial masses, and patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, the manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precise tracking of disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to build training-test splits in a data-robust manner, alongside a new set of quality metrics for objectively assessing algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
Affiliation(s)
- Damian Kucharski
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Bartosz Eksner
- Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland.
- Bartosz Kokoszka
- Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Tomasz Pekala
- Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland.
- Mateusz Radom
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Marek Strzelczak
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Lukasz Zarudzki
- Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Benjamín Gutiérrez-Becker
- Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland.
- Agata Krason
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jean Tessier
- Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
- Jakub Nalepa
- Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.

12
Zhou D, Yu C, Liu W, Liu F. Registration of multimodal bone images based on edge similarity metaheuristic. Comput Biol Med 2024; 174:108379. [PMID: 38631115 DOI: 10.1016/j.compbiomed.2024.108379] [Received: 12/02/2023] [Revised: 03/09/2024] [Accepted: 03/24/2024] [Indexed: 04/19/2024]
Abstract
OBJECTIVE Blurry medical images reduce the accuracy and efficiency of multimodal image registration, and existing methods require further improvement. METHODS We propose an edge-based similarity registration method for multimodal medical images, especially bone images, optimised by a balance optimiser. First, we use GPU (graphics processing unit) rendering simulation to convert computed tomography data into digitally reconstructed radiographs. Second, we introduce the improved cascaded edge network (ICENet), a convolutional neural network that extracts edge information from blurry medical images. Then, the bilateral Gaussian-weighted similarity of pairs of X-ray images and digitally reconstructed radiographs is measured, and the balance optimiser is applied iteratively to estimate the best pose and perform image registration. RESULTS Experimental results show that, on average, the proposed method with ICENet outperforms other edge detection networks by 20%, 12%, 18.83%, and 11.93% in overall Dice similarity, overall intersection over union, peak signal-to-noise ratio, and structural similarity index, respectively, with a registration success rate of up to 90% and an average reduction of 220% in registration time. CONCLUSION The proposed method with ICENet achieves a high registration success rate even for blurry medical images, and its efficiency and robustness are higher than those of existing methods. SIGNIFICANCE Our proposal may be suitable for supporting medical diagnosis, radiation therapy, image-guided surgery, and other clinical applications.
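Peak signal-to-noise ratio, one of the metrics reported above, has a simple closed form; a small NumPy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.full((4, 4), 100.0)
noisy = ref + 10.0                  # constant error of 10 -> MSE = 100
print(round(psnr(ref, noisy), 2))   # 10*log10(255^2 / 100) ~ 28.13
```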
Affiliation(s)
- Dibin Zhou
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China.
- Chen Yu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China.
- Wenhao Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China.
- Fuchang Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China.

13
Maria Antony AN, Narisetti N, Gladilin E. Linel2D-Net: A deep learning approach to solving 2D linear elastic boundary value problems on image domains. iScience 2024; 27:109519. [PMID: 38595795 PMCID: PMC11002675 DOI: 10.1016/j.isci.2024.109519] [Received: 01/04/2024] [Revised: 02/02/2024] [Accepted: 03/14/2024] [Indexed: 04/11/2024]
Abstract
Efficient solution of physical boundary value problems (BVPs) remains a challenging task in many applications. Conventional numerical methods require time-consuming domain discretization and solving techniques that have limited throughput capabilities. Here, we present an efficient data-driven deep neural network (DNN) approach for non-iteratively solving arbitrary 2D linear elastic BVPs. Our results show that a U-Net-based surrogate model trained on a representative set of reference finite difference method (FDM) solutions can accurately emulate linear elastic material behavior, with manifold applications in deformable modeling and simulation.
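The surrogate above is trained on reference FDM solutions. As a much-reduced analogue of such a reference solver (a 1D Poisson-type problem with Dirichlet boundaries, not the paper's 2D elastic formulation), a finite-difference sketch:

```python
import numpy as np

# Minimal 1D analogue of a reference FDM solver: -u''(x) = f(x) on (0, 1)
# with u(0) = u(1) = 0 (an elastic bar under uniform load, in reduced form).
n = 99                       # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.ones(n)               # uniform load

# Standard second-order central-difference stencil: (-1, 2, -1) / h^2.
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

exact = 0.5 * x * (1.0 - x)  # closed-form solution for this load
print(float(np.max(np.abs(u - exact))) < 1e-8)  # True (stencil is exact for quadratics)
```

A DNN surrogate in the paper's spirit would be trained to map the load/boundary images directly to such solution fields, amortizing the linear solve.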
Affiliation(s)
- Anto Nivin Maria Antony
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany
- Narendra Narisetti
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany
- Evgeny Gladilin
- Leibniz Institute of Plant Genetics and Crop Plant Research, OT Gatersleben, Corrensstr. 3, 06466 Seeland, Germany

14
Ahmad N, Dahlberg H, Jönsson H, Tarai S, Guggilla RK, Strand R, Lundström E, Bergström G, Ahlström H, Kullberg J. Voxel-wise body composition analysis using image registration of a three-slice CT imaging protocol: methodology and proof-of-concept studies. Biomed Eng Online 2024; 23:42. [PMID: 38614974 PMCID: PMC11015680 DOI: 10.1186/s12938-024-01235-x] [Received: 09/28/2023] [Accepted: 04/02/2024] [Indexed: 04/15/2024]
Abstract
BACKGROUND Computed tomography (CT) is an imaging modality commonly used for studies of internal body structures and is very useful for detailed studies of body composition. The aim of this study was to develop and evaluate a fully automatic image registration framework for inter-subject CT slice registration, and to use the results, in a set of proof-of-concept studies, for voxel-wise statistical body composition analysis (Imiomics) of correlations between imaging and non-imaging data. METHODS The study utilized three single-slice CT images of the liver, abdomen, and thigh from two large cohort studies, SCAPIS and IGT. The image registration method developed and evaluated used the CT images together with image-derived tissue and organ segmentation masks. To evaluate the performance of the registration method, a set of baseline three-single-slice CT images (8285 slices from 2780 subjects) from the SCAPIS and IGT cohorts was registered, using vector magnitude and intensity magnitude error, which indicate inverse consistency, for evaluation. The registration results were further used for voxel-wise analysis of associations between the CT images (represented by tissue volume from Hounsfield units and the Jacobian determinant) and explicit measurements of various tissues, fat depots, and organs collected in both cohort studies. RESULTS Our findings demonstrated that the key organs and anatomical structures were registered appropriately. The inverse consistency evaluation parameters, vector magnitude and intensity magnitude error, were on average less than 3 mm and 50 Hounsfield units, respectively. Registration followed by Imiomics analysis enabled examination of associations between various explicit measurements (liver, spleen, abdominal muscle, visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), thigh SAT, intermuscular adipose tissue (IMAT), and thigh muscle) and the voxel-wise image information. CONCLUSION The developed and evaluated framework allows accurate image registration of the collected three single-slice CT images and enables detailed voxel-wise studies of associations between body composition and associated diseases and risk factors.
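Inverse consistency, measured above via vector magnitude error, checks that composing the forward and backward deformations approximately returns the identity. A 1D NumPy sketch of that check (illustrative, not the study's implementation):

```python
import numpy as np

def inverse_consistency_error(fwd: np.ndarray, bwd: np.ndarray) -> np.ndarray:
    """1D inverse-consistency check: magnitude of fwd(x) + bwd(x + fwd(x)).

    fwd and bwd are displacement fields sampled on an integer grid; bwd is
    resampled at the forward-mapped positions with linear interpolation.
    """
    x = np.arange(fwd.size, dtype=float)
    bwd_at_mapped = np.interp(x + fwd, x, bwd)
    return np.abs(fwd + bwd_at_mapped)

fwd = np.full(10, 2.0)    # shift everything +2 voxels
bwd = np.full(10, -2.0)   # exact inverse: shift -2 voxels
print(float(inverse_consistency_error(fwd, bwd).max()))  # 0.0
```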
Affiliation(s)
- Nouman Ahmad
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden.
- Hugo Dahlberg
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Hanna Jönsson
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Sambit Tarai
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Robin Strand
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Elin Lundström
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Göran Bergström
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Physiology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Håkan Ahlström
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Antaros Medical, Mölndal, Sweden
- Joel Kullberg
- Radiology, Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Antaros Medical, Mölndal, Sweden

15
Murr M, Bernchou U, Bubula-Rehm E, Ruschin M, Sadeghi P, Voet P, Winter JD, Yang J, Younus E, Zachiu C, Zhao Y, Zhong H, Thorwarth D. A multi-institutional comparison of retrospective deformable dose accumulation for online adaptive magnetic resonance-guided radiotherapy. Phys Imaging Radiat Oncol 2024; 30:100588. [PMID: 38883145 PMCID: PMC11176923 DOI: 10.1016/j.phro.2024.100588] [Received: 01/16/2024] [Revised: 05/07/2024] [Accepted: 05/08/2024] [Indexed: 06/18/2024]
Abstract
Background and Purpose: Application of different deformable dose accumulation (DDA) solutions makes institutional comparisons after online-adaptive magnetic resonance-guided radiotherapy (OA-MRgRT) challenging. The aim of this multi-institutional study was to analyze the accuracy and agreement of DDA implementations in OA-MRgRT. Material and Methods: One gold standard (GS) case, deformed with a biomechanical model, and five clinical cases consisting of prostate (2x), cervix, liver, and lymph node cancer, treated with OA-MRgRT, were analyzed. Six centers conducted DDA using institutional implementations. Deformable image registration (DIR) and DDA results were compared using the contour metrics Dice similarity coefficient (DSC), surface-DSC, and Hausdorff distance (HD95%), and accumulated dose-volume histograms (DVHs) were analyzed via the intraclass correlation coefficient (ICC) and clinical dosimetric criteria (CDC). Results: For the GS, median DDA errors ranged from 0.0 to 2.8 Gy across contours and implementations. DIR of clinical cases resulted in DSC > 0.8 for up to 81.3% of contours, with surface-DSC values varying by implementation. A maximum HD95% of 73.3 mm was found for the duodenum in the liver case. Although DVH ICC > 0.90 was found after DDA for all but two contours, relevant absolute CDC differences were observed in clinical cases: prostate I/II showed maximum differences in bladder V28Gy (10.2%/7.6%), while for the cervix, liver, and lymph node cases the highest differences were found for rectum D2cm3 (2.8 Gy), duodenum Dmax (7.1 Gy), and rectum D0.5cm3 (4.6 Gy), respectively. Conclusion: Overall, high agreement was found between the different DIR and DDA implementations. Case- and algorithm-dependent differences were observed, leading to potentially clinically relevant results. Larger studies are needed to define future DDA guidelines.
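The clinical dosimetric criteria quoted above (e.g. V28Gy, D2cm3) are standard DVH statistics. A minimal NumPy sketch of how such statistics can be read off a dose array and a structure mask (a generic illustration, not any center's implementation):

```python
import numpy as np

def v_dose(dose: np.ndarray, mask: np.ndarray, threshold_gy: float) -> float:
    """VxGy: percentage of the structure volume receiving at least `threshold_gy`."""
    d = dose[mask]
    return 100.0 * np.mean(d >= threshold_gy)

def d_volume(dose: np.ndarray, mask: np.ndarray, volume_cm3: float,
             voxel_cm3: float) -> float:
    """DxCm3: minimum dose within the hottest `volume_cm3` of the structure."""
    d = np.sort(dose[mask])[::-1]            # doses, hottest first
    n = max(1, int(round(volume_cm3 / voxel_cm3)))
    return float(d[min(n, d.size) - 1])

# Toy structure of six 1 cm^3 voxels.
dose = np.array([10., 20., 28., 30., 40., 50.])
mask = np.ones(6, bool)
print(round(v_dose(dose, mask, 28.0), 2))    # 66.67 (4 of 6 voxels >= 28 Gy)
print(d_volume(dose, mask, 2.0, 1.0))        # 40.0 (min dose of hottest 2 cm^3)
```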
Affiliation(s)
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Uffe Bernchou
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Laboratory of Radiation Physics, Odense University Hospital, Denmark
- Mark Ruschin
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Parisa Sadeghi
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Jeff D Winter
- Radiation Medicine Program, Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Jinzhong Yang
- Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Eyesha Younus
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
- Department of Radiation Oncology, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Cornel Zachiu
- University Medical Centre Utrecht, Department of Radiotherapy, 3584 CX Utrecht, the Netherlands
- Yao Zhao
- Department of Radiation Physics, the University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Hualiang Zhong
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, USA
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany

16
Yan Z, Ji J, Ma J, Cao W. HGCMorph: joint discontinuity-preserving and pose-learning via GNN and capsule networks for deformable medical images registration. Phys Med Biol 2024; 69:075032. [PMID: 38373349 DOI: 10.1088/1361-6560/ad2a96] [Received: 07/26/2023] [Accepted: 02/19/2024] [Indexed: 02/21/2024]
Abstract
Objective. This study aims to enhance medical image registration by addressing the limitations of existing approaches that rely on spatial transformations through U-Net, ConvNets, or Transformers. The objective is to develop a novel architecture that combines ConvNets, graph neural networks (GNNs), and capsule networks to improve the accuracy and efficiency of medical image registration, and that can also handle rotational registration. Approach. We propose a deep learning-based approach, named HGCMorph, which can be utilized in both unsupervised and semi-supervised manners. It leverages a hybrid framework that integrates ConvNets and GNNs to capture lower-level features, specifically short-range attention, while also utilizing capsule networks (CapsNets) to model abstract higher-level features, including entity properties such as position, size, orientation, deformation, and texture. This hybrid framework aims to provide a comprehensive representation of anatomical structures and their spatial relationships in medical images. Main results. The results demonstrate the superiority of HGCMorph over existing state-of-the-art (SOTA) deep learning-based methods in both qualitative and quantitative evaluations. With unsupervised training, our model outperforms the recent SOTA method TransMorph by achieving 7%/38% increases in Dice score coefficient (DSC) and 2%/7% improvements in the negative Jacobian determinant on the OASIS and LPBA40 datasets, respectively. Furthermore, HGCMorph achieves improved registration accuracy with semi-supervised training. In addition, when dealing with complex 3D rotations and secondary random deformations, our method still achieves the best performance. We also tested our method on lung datasets, such as the Japanese Society of Radiology, Montgomery, and Shenzhen datasets. Significance. The significance lies in the innovative design for medical image registration. HGCMorph offers a novel framework that overcomes the limitations of existing methods by efficiently capturing both local and abstract features, leading to enhanced registration accuracy and discontinuity-preserving and pose-learning abilities. The incorporation of capsule networks introduces valuable improvements, making the proposed method a valuable contribution to the field of medical image analysis. HGCMorph not only advances the SOTA methods but also has the potential to improve various medical applications that rely on accurate image registration.
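The negative-Jacobian-determinant statistic referenced above counts locations where the estimated deformation folds onto itself. A 2D NumPy sketch of that diagnostic (generic, not the HGCMorph code):

```python
import numpy as np

def folding_fraction(disp: np.ndarray) -> float:
    """Fraction of pixels where the 2D mapping phi(x) = x + u(x) folds,
    i.e. where det(J_phi) <= 0.  `disp` has shape (2, H, W): (u_y, u_x)."""
    duy_dy, duy_dx = np.gradient(disp[0])
    dux_dy, dux_dx = np.gradient(disp[1])
    det = (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy
    return float(np.mean(det <= 0.0))

smooth = np.zeros((2, 16, 16))           # identity map: no folding
print(folding_fraction(smooth))          # 0.0

bad = np.zeros((2, 16, 16))
bad[1] = -2.0 * np.arange(16)            # u_x = -2x  =>  1 + du_x/dx = -1 < 0
print(folding_fraction(bad))             # 1.0
```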
Affiliation(s)
- Zhiyue Yan
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China
- Jianhua Ji
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China
- Jia Ma
- The Second People's Hospital of Futian District, Shenzhen 518049, Guangdong Province, People's Republic of China
- Wenming Cao
- State Key Laboratory of Radio Frequency Heterogeneous Integration, Shenzhen University, Shenzhen 518060, Guangdong Province, People's Republic of China

17
Wu Y, Wang Z, Chu Y, Peng R, Peng H, Yang H, Guo K, Zhang J. Current Research Status of Respiratory Motion for Thorax and Abdominal Treatment: A Systematic Review. Biomimetics (Basel) 2024; 9:170. [PMID: 38534855 DOI: 10.3390/biomimetics9030170] [Received: 01/22/2024] [Revised: 02/29/2024] [Accepted: 03/09/2024] [Indexed: 03/28/2024]
Abstract
Malignant tumors have become one of the most serious public health problems, among which tumors of the chest and abdomen account for the largest proportion. Early diagnosis and treatment can effectively improve the survival rate of patients. However, respiratory motion in the chest and abdomen leads to uncertainty in the shape, volume, and location of a tumor, making treatment of these regions difficult; compensation for respiratory motion is therefore very important in clinical treatment. The purpose of this review is to discuss research and development in respiratory movement monitoring and prediction for thoracic and abdominal surgery and to introduce the current state of the field. The integration of modern respiratory motion compensation technology with advanced sensor detection technology, medical-image-guided therapy, and artificial intelligence is discussed and analyzed. Future research on intraoperative thoracic and abdominal respiratory motion compensation should be non-invasive, non-contact, low-dose, and intelligent. The complexity of the surgical environment, the limited accuracy of existing image guidance devices, and the latency of data transmission remain technical challenges.
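One common building block of respiratory motion compensation, predicting the breathing signal a short horizon ahead to hide system latency, can be sketched with a simple least-squares autoregressive model (a generic illustration, not a method from any surveyed paper):

```python
import numpy as np

def fit_ar(signal: np.ndarray, order: int = 4) -> np.ndarray:
    """Least-squares AR(p) coefficients for one-step-ahead prediction."""
    X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
    y = signal[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic breathing trace: a sinusoid with a ~4 s respiratory cycle, 10 Hz sampling.
t = np.arange(200) * 0.1
breath = np.sin(2 * np.pi * 0.25 * t)
coef = fit_ar(breath, order=4)
pred = breath[-4:] @ coef                          # predict the next sample
true_next = np.sin(2 * np.pi * 0.25 * 200 * 0.1)
print(abs(pred - true_next) < 1e-6)                # True: a sinusoid is exactly AR(2)
```

Real traces are noisy and drift in amplitude and period, so clinical predictors typically refit online and use longer horizons.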
Affiliation(s)
- Yuwen Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Zhisen Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yuyi Chu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Renyuan Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Haoran Peng
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Hongbo Yang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Kai Guo
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Juzhong Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China

18
Bao L, Chen K, Kong D, Ying S, Zeng T. Time multiscale regularization for nonlinear image registration. Comput Med Imaging Graph 2024; 112:102331. [PMID: 38199126 DOI: 10.1016/j.compmedimag.2024.102331] [Received: 08/18/2023] [Revised: 10/25/2023] [Accepted: 12/13/2023] [Indexed: 01/12/2024]
Abstract
Regularization-based methods are commonly used for image registration; however, fixed regularizers have limitations in capturing details and describing the dynamic registration process. To address this issue, we propose a time multiscale framework for nonlinear image registration. Our approach replaces the fixed regularizer with a monotonically decreasing sequence and iteratively uses the residual of the previous step as the input for registration. First, we introduce a dynamically varying regularization strategy that updates the regularizer at each iteration and incorporates it into a multiscale framework. This guarantees an overall smooth deformation field in the initial stage of registration and fine-tunes local details as the images become more similar. We then derive a convergence analysis under certain conditions on the regularizers and parameters. Further, we introduce a TV-like regularizer to demonstrate the efficiency of our method. Finally, we compare the proposed multiscale algorithm with existing methods on both synthetic images and pulmonary computed tomography (CT) images. The experimental results validate that our algorithm outperforms the compared methods, especially in preserving details during registration of images with sharp structures.
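The core idea, a monotonically decreasing regularizer sequence applied to the residual of the previous step, can be illustrated on a deliberately tiny 1D shift-estimation problem (a hypothetical toy, not the paper's TV-regularized algorithm):

```python
import numpy as np

# Toy: recover an integer shift s between two 1D signals by minimizing
# SSD + lambda * s**2, relaxing lambda each iteration and feeding the
# partially warped signal back in as the next input.
x = np.arange(64)
f = np.exp(-0.05 * (x - 30.0) ** 2)   # moving signal: a smooth bump
g = np.roll(f, 5)                     # fixed signal: f shifted by 5 samples

total, moving = 0, f.copy()
for lam in [4.0, 1.0, 0.25, 0.0]:     # monotonically decreasing regularizers
    costs = {d: np.sum((np.roll(moving, d) - g) ** 2) + lam * d * d
             for d in range(-8, 9)}
    d_best = min(costs, key=costs.get)  # heavily penalized early, free at the end
    total += d_best
    moving = np.roll(moving, d_best)    # the residual problem becomes the next input

print(total)   # 5: the full shift is recovered as the penalty decays
```

Early iterations with a large penalty accept only conservative updates (a smooth, global correction); later, nearly unpenalized iterations refine the remaining residual, mirroring the coarse-to-fine behavior described above.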
Affiliation(s)
- Lili Bao
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China
- Ke Chen
- Department of Mathematics and Statistics, University of Strathclyde, Glasgow, UK.
- Dexing Kong
- School of Mathematical Science, Zhejiang University, Hangzhou 310027, PR China
- Shihui Ying
- Department of Mathematics, Shanghai University, Shanghai 200444, PR China.
- Tieyong Zeng
- Department of Mathematics, The Chinese University of Hong Kong, Shatin, Hong Kong

19
Zhang J, Qing C, Li Y, Wang Y. BCSwinReg: A cross-modal attention network for CBCT-to-CT multimodal image registration. Comput Biol Med 2024; 171:107990. [PMID: 38377717 DOI: 10.1016/j.compbiomed.2024.107990] [Received: 08/29/2023] [Revised: 12/26/2023] [Accepted: 01/13/2024] [Indexed: 02/22/2024]
Abstract
Computed tomography (CT) and cone beam computed tomography (CBCT) registration plays an important role in radiotherapy. However, the poor quality of CBCT makes CBCT-CT multimodal registration challenging, and effective feature fusion and mapping often lead to better results for multimodal registration. We therefore propose a new backbone network, BCSwinReg, and a cross-modal attention module, CrossSwin. Specifically, CrossSwin is designed to promote multi-modal feature fusion and map the multi-modal domains to a common domain, thus helping the network better learn the correspondence between images. Furthermore, the BCSwinReg network discovers correspondence by exchanging information through cross-attention, obtains multi-level semantic information through a multi-resolution strategy, and finally integrates the deformations across resolutions with a divide-and-conquer cascade method. We performed experiments on the publicly available 4D-Lung dataset to demonstrate the effectiveness of CrossSwin and BCSwinReg. Compared with VoxelMorph, BCSwinReg obtained performance improvements of 3.3% in Dice similarity coefficient (DSC) and 0.19 in the average 95% Hausdorff distance (HD95).
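The HD95 metric reported above is the 95th-percentile (outlier-robust) variant of the symmetric Hausdorff distance between contours; a small NumPy sketch over point sets (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def hd95(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets
    (N x D and M x D arrays), a robust variant of the maximum surface distance."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = np.percentile(d.min(axis=1), 95)   # each point in A -> nearest in B
    b_to_a = np.percentile(d.min(axis=0), 95)
    return float(max(a_to_b, b_to_a))

a = np.array([[float(i), 0.0] for i in range(100)])  # contour A: a line of points
b = a + np.array([0.0, 1.0])                         # contour B: A shifted by 1 unit
print(hd95(a, b))                                    # 1.0
```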
Affiliation(s)
- Jieming Zhang
- The East China University of Science and Technology, Shanghai, 200237, China
- Chang Qing
- The East China University of Science and Technology, Shanghai, 200237, China.
- Yu Li
- The East China University of Science and Technology, Shanghai, 200237, China
- Yaqi Wang
- The East China University of Science and Technology, Shanghai, 200237, China

20
Huang S, Zhong L, Shi Y. Automated Mapping of Residual Distortion Severity in Diffusion MRI. Computational Diffusion MRI: MICCAI Workshop 2024; 14328:58-69. [PMID: 38500569 PMCID: PMC10948104 DOI: 10.1007/978-3-031-47292-3_6] [Indexed: 03/20/2024]
Abstract
Susceptibility-induced distortion is a common artifact in diffusion MRI (dMRI), which deforms the dMRI locally and poses significant challenges in connectivity analysis. While various methods have been proposed to correct the distortion, residual distortions often persist at varying degrees across brain regions and subjects. Generating a voxel-level residual distortion severity map can thus be a valuable tool to better inform downstream connectivity analysis. To fill this gap in dMRI analysis, we propose a supervised deep-learning network to predict a severity map of residual distortion. The training process is supervised using the structural similarity index measure (SSIM) of the fiber orientation distribution (FOD) in two opposite phase encoding (PE) directions. Only b0 images and related outputs from the distortion correction methods are needed as inputs at test time. The proposed method is applicable to large-scale datasets such as the UK Biobank, the Adolescent Brain Cognitive Development (ABCD) study, and other emerging studies that have complete dMRI data in only one PE direction but acquire b0 images in both PEs. In our experiments, we trained the proposed model on the Lifespan Human Connectome Project Aging (HCP-Aging) dataset (n = 662) and applied the trained model to data (n = 1330) from the UK Biobank. Our results show low training, validation, and test errors, and the severity map correlates excellently with an FOD integrity measure in both HCP-Aging and UK Biobank data. The proposed method is also highly efficient and can generate the severity map in around 1 second per subject.
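SSIM, used above to supervise training, combines luminance, contrast, and structure terms. A single-window NumPy sketch of the standard formula (per-voxel sliding-window SSIM, as typically used in practice, would add a local window on top of this; illustrative only, not the paper's code):

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window SSIM over whole arrays, with the standard constants
    C1 = (0.01*L)^2 and C2 = (0.03*L)^2 for dynamic range L."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx**2 + my**2 + c1) * (vx + vy + c2)))

img = np.random.default_rng(0).random((32, 32))
print(round(global_ssim(img, img), 6))          # 1.0 (identical images)
print(global_ssim(img, 1.0 - img) < 0.1)        # True: strongly anti-correlated
```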
Affiliation(s)
- Shuo Huang
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Lujia Zhong
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Yonggang Shi
- Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA 90033, USA
- Alfred E. Mann Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California (USC), Los Angeles, CA 90089, USA

21
Meyer S, Alam S, Kuo L, Hu YC, Liu Y, Lu W, Yorke E, Li A, Cervino L, Zhang P. Creating patient-specific digital phantoms with a longitudinal atlas for evaluating deformable CT-CBCT registration in adaptive lung radiotherapy. Med Phys 2024; 51:1405-1414. [PMID: 37449537 PMCID: PMC10787815 DOI: 10.1002/mp.16606] [Received: 10/03/2022] [Revised: 05/26/2023] [Accepted: 06/22/2023] [Indexed: 07/18/2023]
Abstract
BACKGROUND Quality assurance of deformable image registration (DIR) is challenging because the ground truth is often unavailable. In addition, current approaches that rely on artificial transformations do not adequately resemble the clinical scenarios encountered in adaptive radiotherapy. PURPOSE We developed an atlas-based method to create a variety of patient-specific serial digital phantoms with CBCT-like image quality to assess DIR performance for longitudinal CBCT imaging data in adaptive lung radiotherapy. METHODS A library of deformations was created by extracting the longitudinal changes observed between a planning CT and weekly CBCTs from an atlas of lung radiotherapy patients. The planning CT of an inquiry patient was first deformed by mapping the deformation pattern from a matched atlas patient, and subsequently appended with CBCT artifacts to imitate a weekly CBCT. Finally, a group of digital phantoms around an inquiry patient was produced to simulate a series of possible evolutions of the tumor and adjacent normal structures. We validated the generated deformation vector fields (DVFs) to ensure numerically and physiologically realistic transformations. The proposed framework was applied to evaluate the performance of the DIR algorithm implemented in the commercial Eclipse treatment planning system in a retrospective study of eight inquiry patients. RESULTS The generated DVFs were inverse consistent to within 3 mm and did not exhibit unrealistic folding. The deformation patterns adequately mimicked the observed longitudinal anatomical changes of the matched atlas patients. Worse Eclipse DVF accuracy was observed in regions of low image contrast or image artifacts. The fraction of structure volume exhibiting a DVF error magnitude of 2 mm or more ranged from 24.5% (spinal cord) to 69.2% (heart), and the maximum DVF error exceeded 5 mm for all structures except the spinal cord. Contour-based evaluations showed a high degree of alignment, with Dice similarity coefficients above 0.8 in all cases, which can underestimate the overall DVF errors within the structures. CONCLUSIONS It is feasible to create and augment digital phantoms based on a particular patient of interest using multiple series of deformation patterns from matched patients in an atlas. This can provide a semi-automated procedure to complement the quality assurance of CT-CBCT DIR and facilitate the clinical implementation of image-guided and adaptive radiotherapy involving longitudinal CBCT imaging studies.
Affiliation(s)
- Sebastian Meyer
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sadegh Alam
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- LiCheng Kuo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Yu-Chi Hu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Yilin Liu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ellen Yorke
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Anyi Li
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Laura Cervino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Pengpeng Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
22
Gazula H, Tregidgo HFJ, Billot B, Balbastre Y, William-Ramirez J, Herisse R, Deden-Binder LJ, Casamitjana A, Melief EJ, Latimer CS, Kilgore MD, Montine M, Robinson E, Blackburn E, Marshall MS, Connors TR, Oakley DH, Frosch MP, Young SI, Van Leemput K, Dalca AV, Fischl B, Mac Donald CL, Keene CD, Hyman BT, Iglesias JE. Machine learning of dissection photographs and surface scanning for quantitative 3D neuropathology. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.06.08.544050. [PMID: 37333251 PMCID: PMC10274889 DOI: 10.1101/2023.06.08.544050] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/20/2023]
Abstract
We present open-source tools for 3D analysis of photographs of dissected slices of human brains, which are routinely acquired in brain banks but seldom used for quantitative analysis. Our tools can: (i) 3D reconstruct a volume from the photographs and, optionally, a surface scan; and (ii) produce a high-resolution 3D segmentation into 11 brain regions per hemisphere (22 in total), independently of the slice thickness. Our tools can be used as a substitute for ex vivo magnetic resonance imaging (MRI), which requires access to an MRI scanner, ex vivo scanning expertise, and considerable financial resources. We tested our tools on synthetic and real data from two NIH Alzheimer's Disease Research Centers. The results show that our methodology yields accurate 3D reconstructions, segmentations, and volumetric measurements that are highly correlated with those from MRI. Our method also detects expected differences between post mortem confirmed Alzheimer's disease cases and controls. The tools are available in our widely used neuroimaging suite "FreeSurfer" ( https://surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools ).
23
Zappalá S, Keenan BE, Marshall D, Wu J, Evans SL, Al-Dirini RMA. In vivo strain measurements in the human buttock during sitting using MR-based digital volume correlation. J Biomech 2024; 163:111913. [PMID: 38181575 DOI: 10.1016/j.jbiomech.2023.111913] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 10/11/2023] [Accepted: 12/20/2023] [Indexed: 01/07/2024]
Abstract
Advancements in systems for the prevention and management of pressure ulcers require a more detailed understanding of the complex response of soft tissues to compressive loads. This study aimed at quantifying the progressive deformation of the buttock based on 3D measurements of soft tissue displacements from MR scans of 10 healthy subjects in a semi-recumbent position. Measurements were obtained using digital volume correlation (DVC) and released as a public dataset. A first parametric optimisation of the global registration step, aimed at aligning skeletal elements, showed acceptable values of the Dice coefficient (around 80%). A second parametric optimisation of the deformable registration method showed errors of 0.99 mm and 1.78 mm against two simulated fields with magnitudes of 7.30±3.15 mm and 19.37±9.58 mm, respectively, generated with a finite element model of the buttock under sitting loads. The measurements allowed quantification of the slide of the gluteus maximus away from the ischial tuberosity (IT, average 13.74 mm), which had previously only been qualitatively identified in the literature, highlighting the importance of the ischial bursa in allowing sliding. The spatial evolution of the maximum shear strain on a path from the IT to the seating interface showed a peak of compression in the fat, close to the interface with the muscle. The obtained peak values were above damage thresholds proposed in the literature. The results show the complexity of the deformation of the soft tissues in the buttock and the need for further investigations aimed at isolating factors such as tissue geometry, duration and extent of load, sitting posture, and tissue properties.
Affiliation(s)
- Stefano Zappalá
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK; Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, UK.
- David Marshall
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Jing Wu
- School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Sam L Evans
- School of Engineering, Cardiff University, Cardiff, UK
- Rami M A Al-Dirini
- College of Science and Engineering, Flinders University of South Australia, Adelaide, Australia
24
Alvarez P, El Mouss M, Calka M, Belme A, Berillon G, Brige P, Payan Y, Perrier P, Vialet A. Predicting primate tongue morphology based on geometrical skull matching. A first step towards an application on fossil hominins. PLoS Comput Biol 2024; 20:e1011808. [PMID: 38252664 PMCID: PMC10833839 DOI: 10.1371/journal.pcbi.1011808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 02/01/2024] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
As part of a long-term research project aiming at generating a biomechanical model of a fossil human tongue from a carefully designed 3D Finite Element mesh of a living human tongue, we present a computer-based method that optimally registers 3D CT images of the head and neck of the living human into similar images of another primate. We quantitatively evaluate the method on a baboon. The method generates a geometric deformation field which is used to build up a 3D Finite Element mesh of the baboon tongue. In order to assess the method's ability to generate a realistic tongue from bony structure information alone, as would be the case for fossil humans, its performance is evaluated and compared under two conditions in which different anatomical information is available: (1) combined information from soft-tissue and bony structures; (2) information from bony structures alone. An Uncertainty Quantification method is used to evaluate the sensitivity of the transformation to two crucial parameters, namely the resolution of the transformation grid and the weight of a smoothness constraint applied to the transformation, and to determine the best possible meshes. In both conditions the baboon tongue morphology is realistically predicted, evidencing that bony structures alone provide enough relevant information to generate soft tissue.
Affiliation(s)
- Pablo Alvarez
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
- Marouane El Mouss
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Maxime Calka
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Anca Belme
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Sorbonne Université, Institut Jean Le Rond d’Alembert, UMR 7190, Paris, France
- Gilles Berillon
- Muséum national d’Histoire naturelle, UMR 7194 - Histoire naturelle de l’Homme préhistorique, Paris, France
- Pauline Brige
- Laboratoire d’Imagerie Interventionnelle Expérimentale, CERIMED, Marseille, France
- Yohan Payan
- Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
- Pascal Perrier
- Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Amélie Vialet
- Sorbonne Université, Institut des Sciences du Calcul et des Données, Paris, France
- Muséum national d’Histoire naturelle, UMR 7194 - Histoire naturelle de l’Homme préhistorique, Paris, France
25
Zheng JQ, Wang Z, Huang B, Lim NH, Papież BW. Residual Aligner-based Network (RAN): Motion-separable structure for coarse-to-fine discontinuous deformable registration. Med Image Anal 2024; 91:103038. [PMID: 38000258 DOI: 10.1016/j.media.2023.103038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 11/09/2023] [Accepted: 11/15/2023] [Indexed: 11/26/2023]
Abstract
Deformable image registration, the estimation of the spatial transformation between different images, is an important task in medical imaging. Deep learning techniques have been shown to perform 3D image registration efficiently. However, current registration strategies often focus only on deformation smoothness, which leads to complicated motion patterns (e.g., separate or sliding motions) being ignored, especially at the intersection of organs. Thus, performance when dealing with the discontinuous motions of multiple nearby objects is limited, causing undesired predictive outcomes in clinical usage, such as misidentification and mislocalization of lesions or other abnormalities. Consequently, we propose a novel registration method to address this issue: a new Motion Separable backbone is exploited to capture separate motions, with a theoretical analysis of the upper bound on the motions' discontinuity provided. In addition, a novel Residual Aligner module is used to disentangle and refine the predicted motions across multiple neighboring objects/organs. We evaluate our method, the Residual Aligner-based Network (RAN), on abdominal Computed Tomography (CT) scans, where it achieves among the most accurate unsupervised inter-subject registrations for the 9 organs, with the highest-ranked registration of the veins (Dice Similarity Coefficient (%)/average surface distance (mm): 62%/4.9 mm for the vena cava and 34%/7.9 mm for the portal and splenic vein), with a smaller model structure and less computation compared to state-of-the-art methods. Furthermore, when applied to lung CT, the RAN achieves results comparable to the best-ranked networks (94%/3.0 mm), also with fewer parameters and less computation.
Affiliation(s)
- Jian-Qing Zheng
- The Kennedy Institute of Rheumatology, University of Oxford, UK.
- Ziyang Wang
- Department of Computer Science, University of Oxford, Oxford, UK
- Baoru Huang
- The Hamlyn Centre for Robotic Surgery, Imperial College, London, UK
- Ngee Han Lim
- The Kennedy Institute of Rheumatology, University of Oxford, UK
26
Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540 PMCID: PMC10725576 DOI: 10.1088/1361-6560/ad0d8a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 10/30/2023] [Accepted: 11/15/2023] [Indexed: 11/19/2023]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay - National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden-Rossendorf, Institute of Radiooncology - OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
27
Smolders A, Lomax A, Weber DC, Albertini F. Deep learning based uncertainty prediction of deformable image registration for contour propagation and dose accumulation in online adaptive radiotherapy. Phys Med Biol 2023; 68:245027. [PMID: 37820691 DOI: 10.1088/1361-6560/ad0282] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 10/11/2023] [Indexed: 10/13/2023]
Abstract
Objective. Online adaptive radiotherapy aims to fully leverage the advantages of highly conformal therapy by reducing anatomical and set-up uncertainty, thereby alleviating the need for robust treatments. This requires extensive automation, among which is the use of deformable image registration (DIR) for contour propagation and dose accumulation. However, inconsistencies in DIR solutions between different algorithms have caused distrust, hampering its direct clinical use. This work aims to enable the clinical use of DIR by developing deep learning methods to predict DIR uncertainty and propagate it into clinically usable metrics. Approach. Supervised and unsupervised neural networks were trained to predict the Gaussian uncertainty of a given deformable vector field (DVF). Since both methods rely on different assumptions, their predictions differ and were further merged into a combined model. The resulting normally distributed DVFs can be directly sampled to propagate the uncertainty into contour and accumulated dose uncertainty. Main results. The unsupervised and combined models can accurately predict the uncertainty in the manually annotated landmarks on the DIRLAB dataset. Furthermore, for 5 patients with lung cancer, the propagation of the predicted DVF uncertainty into contour uncertainty yielded for both methods an expected calibration error of less than 3%. Additionally, the probabilistically accumulated dose volume histograms (DVHs) encompass well the accumulated proton therapy doses obtained using 5 different DIR algorithms. It was additionally shown that the unsupervised model can be used for different DIR algorithms without retraining. Significance. Our work presents first-of-a-kind deep learning methods to predict the uncertainty of the DIR process. The methods are fast, yield high-quality uncertainty estimates, and are usable for different algorithms and applications. This allows clinics to use DIR uncertainty in their workflows without the need to change their DIR implementation.
Affiliation(s)
- A Smolders
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- A Lomax
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Physics, ETH Zurich, Switzerland
- D C Weber
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
- Department of Radiation Oncology, University Hospital Zurich, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- F Albertini
- Paul Scherrer Institute, Center for Proton Therapy, Switzerland
28
Chrisochoides N, Liu Y, Drakopoulos F, Kot A, Foteinos P, Tsolakis C, Billias E, Clatz O, Ayache N, Fedorov A, Golby A, Black P, Kikinis R. Comparison of physics-based deformable registration methods for image-guided neurosurgery. Front Digit Health 2023; 5:1283726. [PMID: 38144260 PMCID: PMC10740151 DOI: 10.3389/fdgth.2023.1283726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2023] [Accepted: 11/02/2023] [Indexed: 12/26/2023] Open
Abstract
This paper compares three finite element-based methods used in a physics-based non-rigid registration approach and reports on the progress made over the last 15 years. Large brain shifts caused by brain tumor removal affect registration accuracy by creating point and element outliers. A combination of approximation- and geometry-based point and element outlier rejection improves the rigid registration error by 2.5 mm and meets the real-time constraints (4 min). In addition, the paper raises several questions and presents two open problems for the robust estimation and improvement of registration error in the presence of outliers due to sparse, noisy, and incomplete data. It concludes with preliminary results on leveraging Quantum Computing, a promising new technology for computationally intensive problems such as Feature Detection and Block Matching, in addition to the finite element solver; together, these three account for 75% of the computing time in deformable registration.
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, Valbonne, France
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA, United States
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA, United States
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA, United States
29
Wang AQ, Yu EM, Dalca AV, Sabuncu MR. A robust and interpretable deep learning framework for multi-modal registration via keypoints. Med Image Anal 2023; 90:102962. [PMID: 37769550 PMCID: PMC10591968 DOI: 10.1016/j.media.2023.102962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 08/24/2023] [Accepted: 09/07/2023] [Indexed: 10/03/2023]
Abstract
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration often are not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test-time. Our core insight which addresses these shortcomings is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, and without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image are driving the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields can be computed efficiently and in closed-form at test time corresponding to different transformation variants. We demonstrate the proposed framework in solving 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
Affiliation(s)
- Alan Q Wang
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA.
- Evan M Yu
- Iterative Scopes, Cambridge, MA 02139, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, Cambridge, MA 02139, USA; A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, Charlestown, MA 02129, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
30
Abbasi S, Mehdizadeh A, Boveiri HR, Mosleh Shirazi MA, Javidan R, Khayami R, Tavakoli M. Unsupervised deep learning registration model for multimodal brain images. J Appl Clin Med Phys 2023; 24:e14177. [PMID: 37823748 PMCID: PMC10647957 DOI: 10.1002/acm2.14177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 05/29/2023] [Accepted: 09/14/2023] [Indexed: 10/13/2023] Open
Abstract
Multimodal image registration is key for many clinical image-guided interventions. However, it is a challenging task because of the complicated and unknown relationships between different modalities. Currently, deep supervised learning is the state-of-the-art method, in which registration is conducted in an end-to-end, one-shot manner. Therefore, a huge amount of ground-truth data is required to improve the results of deep neural networks for registration. Moreover, supervised methods may yield models that are biased towards annotated structures. An alternative approach that deals with the above challenges is to use unsupervised learning models. In this study, we designed a novel deep unsupervised Convolutional Neural Network (CNN)-based model for affine co-registration of computed tomography/magnetic resonance (CT/MR) brain images. For this purpose, we created a dataset consisting of 1100 pairs of CT/MR slices from the brains of 110 neuropsychiatric patients with/without tumor. Next, 12 landmarks were selected and annotated on each slice by a well-experienced radiologist, enabling the computation of a series of evaluation metrics: target registration error (TRE), Dice similarity, Hausdorff distance, and Jaccard coefficient. The proposed method registered the multimodal images with a TRE of 9.89, Dice similarity of 0.79, Hausdorff distance of 7.15, and Jaccard coefficient of 0.75, values that are appreciable for clinical applications. Moreover, the approach registered the images in an acceptable time of 203 ms, making it suitable for clinical usage due to the short registration time and high accuracy. The results illustrate that our proposed method achieved competitive performance against related approaches in terms of both reasonable computation time and the evaluation metrics.
Affiliation(s)
- Samaneh Abbasi
- Department of Medical Physics and Engineering, School of Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Alireza Mehdizadeh
- Research Center for Neuromodulation and Pain, Shiraz University of Medical Sciences, Shiraz, Iran
- Hamid Reza Boveiri
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Mohammad Amin Mosleh Shirazi
- Ionizing and Non-Ionizing Radiation Protection Research Center, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Reza Javidan
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Raouf Khayami
- Department of Computer Engineering and IT, Shiraz University of Technology, Shiraz, Iran
- Meysam Tavakoli
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
31
Windolf C, Yu H, Paulk AC, Meszéna D, Muñoz W, Boussard J, Hardstone R, Caprara I, Jamali M, Kfir Y, Xu D, Chung JE, Sellers KK, Ye Z, Shaker J, Lebedeva A, Raghavan M, Trautmann E, Melin M, Couto J, Garcia S, Coughlin B, Horváth C, Fiáth R, Ulbert I, Movshon JA, Shadlen MN, Churchland MM, Churchland AK, Steinmetz NA, Chang EF, Schweitzer JS, Williams ZM, Cash SS, Paninski L, Varol E. DREDge: robust motion correction for high-density extracellular recordings across species. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.10.24.563768. [PMID: 37961359 PMCID: PMC10634799 DOI: 10.1101/2023.10.24.563768] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
High-density microelectrode arrays (MEAs) have opened new possibilities for systems neuroscience in human and non-human animals, but brain tissue motion relative to the array poses a challenge for downstream analyses, particularly in human recordings. We introduce DREDge (Decentralized Registration of Electrophysiology Data), a robust algorithm well suited to the registration of noisy, nonstationary extracellular electrophysiology recordings. In addition to estimating motion from spikes in the action potential (AP) frequency band, DREDge enables automated tracking of motion at high temporal resolution in the local field potential (LFP) frequency band. In human intraoperative recordings, which often feature fast (period <1 s) motion, DREDge correction in the LFP band enabled reliable recovery of evoked potentials and significantly reduced single-unit spike shape variability and spike sorting error. Applying DREDge to recordings made during deep probe insertions in nonhuman primates demonstrated the possibility of tracking probe motion of centimeters across several brain regions while simultaneously mapping single-unit electrophysiological features. DREDge reliably delivered improved motion correction in acute mouse recordings, especially in those made with a recent ultra-high-density probe. We also implemented a procedure for applying DREDge to recordings made across tens of days in chronic implantations in mice, reliably yielding stable motion tracking despite changes in neural activity across experimental sessions. Together, these advances enable automated, scalable registration of electrophysiological data across multiple species, probe types, and drift cases, providing a stable foundation for downstream scientific analyses of these rich datasets.
Collapse
Affiliation(s)
- Charlie Windolf
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Han Yu
- Zuckerman Institute, Columbia University
- Department of Electrical Engineering, Columbia University
- Angelique C Paulk
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Domokos Meszéna
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- William Muñoz
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Julien Boussard
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Richard Hardstone
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Irene Caprara
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Mohsen Jamali
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Yoav Kfir
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Duo Xu
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Jason E Chung
- Department of Neurological Surgery, University of California San Francisco
- Kristin K Sellers
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Zhiwen Ye
- Department of Biological Structure, University of Washington
- Jordan Shaker
- Department of Biological Structure, University of Washington
- Eric Trautmann
- Department of Neuroscience, Columbia University Medical Center
- Zuckerman Institute, Columbia University
- Grossman Center for the Statistics of Mind, Columbia University
- Max Melin
- David Geffen School of Medicine, University of California Los Angeles
- João Couto
- David Geffen School of Medicine, University of California Los Angeles
- Samuel Garcia
- Centre National de la Recherche Scientifique, Centre de Recherche en Neurosciences de Lyon
- Brian Coughlin
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Csaba Horváth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Richárd Fiáth
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- István Ulbert
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Michael N Shadlen
- Zuckerman Institute, Columbia University
- Howard Hughes Medical Institute
- Anne K Churchland
- David Geffen School of Medicine, University of California Los Angeles
- Edward F Chang
- Weill Institute for Neurosciences, University of California San Francisco
- Department of Neurological Surgery, University of California San Francisco
- Jeffrey S Schweitzer
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Ziv M Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School
- Sydney S Cash
- Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School
- Liam Paninski
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Department of Neuroscience, Columbia University Medical Center
- Grossman Center for the Statistics of Mind, Columbia University
- Erdem Varol
- Department of Statistics, Columbia University
- Zuckerman Institute, Columbia University
- Department of Computer Science & Engineering, New York University
Collapse
32
Joshi A, Hong Y. R2Net: Efficient and flexible diffeomorphic image registration using Lipschitz continuous residual networks. Med Image Anal 2023; 89:102917. [PMID: 37598607 DOI: 10.1016/j.media.2023.102917] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 06/26/2023] [Accepted: 07/25/2023] [Indexed: 08/22/2023]
Abstract
Classical diffeomorphic image registration methods, while accurate, face the challenge of high computational cost. Deep learning based approaches provide a fast alternative to address this issue; however, most existing deep solutions either lose the desirable property of diffeomorphism or have limited flexibility to capture large deformations, under the assumption that deformations are driven by stationary velocity fields (SVFs). Also, the adopted scaling-and-squaring technique for integrating SVFs is time- and memory-consuming, hindering deep methods from handling large image volumes. In this paper, we present an unsupervised diffeomorphic image registration framework that uses deep residual networks (ResNets) as numerical approximations of the underlying continuous diffeomorphic setting governed by ordinary differential equations, parameterized by either SVFs or time-varying (non-stationary) velocity fields. This flexible parameterization in our Residual Registration Network (R2Net) not only gives the model the ability to capture large deformations but also reduces the time and memory cost of integrating velocity fields for deformation generation. In addition, we introduce a Lipschitz continuity constraint into the ResNet block to help achieve diffeomorphic deformations. To enhance the ability of our model to handle images with large volume sizes, we employ a hierarchical extension with a multi-phase learning strategy to solve the image registration task in a coarse-to-fine fashion. We demonstrate our models on four 3D image registration tasks with a wide range of anatomies, including brain MRIs, cine cardiac MRIs, and lung CT scans. Compared to the classical methods SyN and diffeomorphic VoxelMorph, our models achieve comparable or better registration accuracy with much smoother deformations. Our source code is available online at https://github.com/ankitajoshi15/R2Net.
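The scaling-and-squaring technique criticized above integrates an SVF by scaling it down by a power of two and repeatedly composing the resulting small deformation with itself. A minimal 1-D sketch (purely illustrative, using linear interpolation for composition; the function name and grid setup are assumptions) is:

```python
import numpy as np

def scaling_and_squaring(v, steps=6):
    """Integrate a stationary 1-D velocity field v into a deformation phi(x).

    v is sampled on a unit-spaced grid. The velocity is scaled down by
    2**steps, then the map is composed with itself `steps` times
    (phi <- phi o phi), doubling the integration time at each squaring.
    """
    x = np.arange(v.size, dtype=float)
    phi = x + v / (2.0 ** steps)          # small initial deformation
    for _ in range(steps):
        phi = np.interp(phi, x, phi)      # self-composition via interpolation
    return phi
```

Each squaring doubles the memory traffic over the full field, which illustrates why the paper seeks a cheaper residual-network parameterization for large volumes.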
Collapse
Affiliation(s)
- Ankita Joshi
- School of Computing, University of Georgia, Athens, 30602, USA
- Yi Hong
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
Collapse
33
Gan Z, Sun W, Liao K, Yang X. Probabilistic Modeling for Image Registration Using Radial Basis Functions: Application to Cardiac Motion Estimation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:7324-7338. [PMID: 35073271 DOI: 10.1109/tnnls.2022.3141119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Cardiovascular diseases (CVDs) are the leading cause of death, affecting the cardiac dynamics over the cardiac cycle. Estimation of cardiac motion plays an essential role in many medical clinical tasks. This article proposes a probabilistic framework for image registration using compact support radial basis functions (CSRBFs) to estimate cardiac motion. A variational inference-based generative model with convolutional neural networks (CNNs) is proposed to learn the probabilistic coefficients of CSRBFs used in image deformation. We designed two networks to estimate the deformation coefficients of CSRBFs: the first one solves the spatial transformation using given control points, and the second one models the transformation using drifting control points. The given-point-based network estimates the probabilistic coefficients of control points. In contrast, the drifting-point-based model predicts the probabilistic coefficients and spatial distribution of control points simultaneously. To regularize these coefficients, we derive the bending energy (BE) in the variational bound by defining the covariance of coefficients. The proposed framework has been evaluated on the cardiac motion estimation and the calculation of the myocardial strain. In the experiments, 1409 slice pairs of end-diastolic (ED) and end-systolic (ES) phase in 4-D cardiac magnetic resonance (MR) images selected from three public datasets are employed to evaluate our networks. The experimental results show that our framework outperforms the state-of-the-art registration methods concerning the deformation smoothness and registration accuracy.
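Compact support radial basis functions make the deformation at any point depend only on nearby control points. A minimal sketch of a CSRBF-driven displacement (using the Wendland C2 kernel, a common choice; the specific kernel, function names, and 2-D setup are assumptions, not details from the paper) is:

```python
import numpy as np

def wendland_c2(r):
    """Wendland C2 compactly supported RBF: (1 - r)^4 (4r + 1) for r < 1, else 0."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def csrbf_deform(points, centers, coeffs, support):
    """Displace 2-D points by an RBF expansion around control points.

    points:  (N, 2) query coordinates
    centers: (M, 2) control-point locations
    coeffs:  (M, 2) per-control-point displacement coefficients
    support: kernel support radius (zero influence beyond it)
    """
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1) / support
    return points + wendland_c2(r) @ coeffs
```

In the paper's probabilistic setting, the `coeffs` (and, in the drifting-point variant, the `centers`) would be outputs of the CNN rather than fixed values.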
Collapse
34
Chrisochoides N, Fedorov A, Liu Y, Kot A, Foteinos P, Drakopoulos F, Tsolakis C, Billias E, Clatz O, Ayache N, Golby A, Black P, Kikinis R. Real-Time Dynamic Data Driven Deformable Registration for Image-Guided Neurosurgery: Computational Aspects. ARXIV 2023:arXiv:2309.03336v1. [PMID: 37731651 PMCID: PMC10508827] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 09/22/2023]
Abstract
Current neurosurgical procedures utilize medical images of various modalities to enable the precise location of tumors and critical brain structures to plan accurate brain tumor resection. The difficulty of using preoperative images during the surgery is caused by the intra-operative deformation of the brain tissue (brain shift), which introduces discrepancies concerning the preoperative configuration. Intra-operative imaging allows tracking such deformations but cannot fully substitute for the quality of the pre-operative data. Dynamic Data Driven Deformable Non-Rigid Registration (D4NRR) is a complex and time-consuming image processing operation that allows the dynamic adjustment of the pre-operative image data to account for intra-operative brain shift during the surgery. This paper summarizes the computational aspects of a specific adaptive numerical approximation method and its variations for registering brain MRIs. It outlines its evolution over the last 15 years and identifies new directions for the computational aspects of the technique.
Collapse
Affiliation(s)
- Nikos Chrisochoides
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Andrey Fedorov
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
- Yixun Liu
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Andriy Kot
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Panos Foteinos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Fotis Drakopoulos
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Christos Tsolakis
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Emmanuel Billias
- Center for Real-Time Computing, Computer Science Department, Old Dominion University, Norfolk, VA
- Olivier Clatz
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
- Nicholas Ayache
- Inria, French Research Institute for Digital Science, Sophia Antipolis, France
- Alex Golby
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
- Peter Black
- Image-guided Neurosurgery, Department of Neurosurgery, Harvard Medical School, Boston, MA
- Ron Kikinis
- Neuroimaging Analysis Center, Department of Radiology, Harvard Medical School, Boston, MA
Collapse
35
Kimberly WT, Sorby-Adams AJ, Webb AG, Wu EX, Beekman R, Bowry R, Schiff SJ, de Havenon A, Shen FX, Sze G, Schaefer P, Iglesias JE, Rosen MS, Sheth KN. Brain imaging with portable low-field MRI. NATURE REVIEWS BIOENGINEERING 2023; 1:617-630. [PMID: 37705717 PMCID: PMC10497072 DOI: 10.1038/s44222-023-00086-w] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/06/2023] [Indexed: 09/15/2023]
Abstract
The advent of portable, low-field MRI (LF-MRI) heralds new opportunities in neuroimaging. Low power requirements and transportability have enabled scanning outside the controlled environment of a conventional MRI suite, enhancing access to neuroimaging for indications that are not well suited to existing technologies. Maximizing the information extracted from the reduced signal-to-noise ratio of LF-MRI is crucial to developing clinically useful diagnostic images. Progress in electromagnetic noise cancellation and machine learning reconstruction algorithms from sparse k-space data as well as new approaches to image enhancement have now enabled these advancements. Coupling technological innovation with bedside imaging creates new prospects in visualizing the healthy brain and detecting acute and chronic pathological changes. Ongoing development of hardware, improvements in pulse sequences and image reconstruction, and validation of clinical utility will continue to accelerate this field. As further innovation occurs, portable LF-MRI will facilitate the democratization of MRI and create new applications not previously feasible with conventional systems.
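The electromagnetic noise cancellation mentioned above is often done with external reference coils: noise picked up by the references is regressed out of the primary MR channel. A least-squares sketch of that idea (an illustrative scheme, not specific to any system covered by the review; names and shapes are assumptions) is:

```python
import numpy as np

def emi_cancel(primary, refs):
    """Subtract the least-squares projection of reference-coil signals
    from the primary MR channel.

    primary: (n_samples,) signal from the imaging coil
    refs:    (n_refs, n_samples) signals from EMI reference coils
    """
    coef, *_ = np.linalg.lstsq(refs.T, primary, rcond=None)
    return primary - refs.T @ coef
```

This works because the MR signal of interest is (ideally) uncorrelated with the externally sensed interference, so only the interference component is removed.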
Collapse
Affiliation(s)
- W Taylor Kimberly
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Annabel J Sorby-Adams
- Department of Neurology and the Center for Genomic Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Andrew G Webb
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Rachel Beekman
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
- Ritvij Bowry
- Departments of Neurosurgery and Neurology, McGovern Medical School, University of Texas Health Neurosciences, Houston, TX, USA
- Steven J Schiff
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
- Adam de Havenon
- Division of Vascular Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
- Francis X Shen
- Harvard Medical School Center for Bioethics, Harvard Law School, Boston, MA, USA
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA, USA
- Gordon Sze
- Department of Radiology, Yale New Haven Hospital and Yale School of Medicine, New Haven, CT, USA
- Pamela Schaefer
- Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, University College London, London, UK
- Computer Science and AI Laboratory, Massachusetts Institute of Technology, Boston, MA, USA
- Matthew S Rosen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Kevin N Sheth
- Division of Neurocritical Care and Emergency Neurology, Department of Neurology, Yale New Haven Hospital and Yale School of Medicine, Yale Center for Brain & Mind Health, New Haven, CT, USA
Collapse
36
Fan X, Li Z, Li Z, Wang X, Liu R, Luo Z, Huang H. Automated Learning for Deformable Medical Image Registration by Jointly Optimizing Network Architectures and Objective Functions. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2023; 32:4880-4892. [PMID: 37624710 DOI: 10.1109/tip.2023.3307215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2023]
Abstract
Deformable image registration plays a critical role in various tasks of medical image analysis. A successful registration algorithm, whether derived from conventional energy optimization or deep networks, requires tremendous effort from computer experts to carefully design the registration energy or to tune network architectures with respect to the medical data available for a given registration task/scenario. This paper proposes an automated learning registration algorithm (AutoReg) that cooperatively optimizes both architectures and their corresponding training objectives, enabling non-computer experts to conveniently find off-the-shelf registration algorithms for various registration scenarios. Specifically, we establish a triple-level framework to jointly search for network architectures and objectives with a cooperating optimization. Extensive experiments on multiple volumetric datasets and various registration scenarios demonstrate that AutoReg can automatically learn an optimal deep registration network for given volumes and achieve state-of-the-art performance. The automatically learned network also improves computational efficiency over the mainstream UNet architecture, from 0.558 to 0.270 seconds for a volume pair on the same configuration.
Collapse
37
Alscher T, Erleben K, Darkner S. Collision-constrained deformable image registration framework for discontinuity management. PLoS One 2023; 18:e0290243. [PMID: 37594943 PMCID: PMC10437794 DOI: 10.1371/journal.pone.0290243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Accepted: 08/03/2023] [Indexed: 08/20/2023] Open
Abstract
Topological changes like sliding motion, sources and sinks are a significant challenge in image registration. This work proposes the use of the alternating direction method of multipliers as a general framework for constraining the registration of separate objects, each with an individual deformation field, so that they do not overlap during image registration. This constraint is enforced by introducing a collision detection algorithm from the field of computer graphics, which results in a robust divide-and-conquer optimization strategy using Free-Form Deformations. A series of experiments demonstrates that the proposed framework performs superiorly with regard to the combination of intersection prevention and image registration, including on synthetic examples containing complex displacement patterns. The results show compliance with the non-intersection constraints while simultaneously preventing a decrease in registration accuracy. Furthermore, the application of the proposed algorithm to the DIR-Lab data set demonstrates that the framework generalizes to real data by validating it on a lung registration problem.
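The alternating direction method of multipliers handles such hard constraints by splitting the problem into a smooth data-term update, a projection onto the constraint set, and a dual (multiplier) update. A toy instance (non-negativity in place of the paper's collision constraint, purely illustrative; function names and parameters are assumptions) makes the three-step loop concrete:

```python
import numpy as np

def admm_nonneg_ls(a, rho=1.0, iters=200):
    """ADMM for: minimize ||x - a||^2 subject to x >= 0, split as x = z."""
    x = np.zeros_like(a)
    z = np.zeros_like(a)
    u = np.zeros_like(a)  # scaled dual variable
    for _ in range(iters):
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)  # smooth data-term solve
        z = np.maximum(x + u, 0.0)                   # projection onto constraint set
        u += x - z                                   # dual update
    return z
```

In the paper's setting, the data term would be the registration energy over Free-Form Deformation parameters and the projection would be driven by the collision detection algorithm; the splitting structure is the same.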
Collapse
Affiliation(s)
- Thomas Alscher
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
- Kenny Erleben
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
- Sune Darkner
- Department of Computer Science, University of Copenhagen, Copenhagen, Region Hovedstaden, Denmark
Collapse
38
Zhu H, Li T, Zhao B. Statistical Learning Methods for Neuroimaging Data Analysis with Applications. Annu Rev Biomed Data Sci 2023; 6:73-104. [PMID: 37127052 DOI: 10.1146/annurev-biodatasci-020722-100353] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
The aim of this review is to provide a comprehensive survey of statistical challenges in neuroimaging data analysis, from neuroimaging techniques to large-scale neuroimaging studies and statistical learning methods. We briefly review eight popular neuroimaging techniques and their potential applications in neuroscience research and clinical translation. We delineate four themes of neuroimaging data and review major image processing analysis methods for processing neuroimaging data at the individual level. We briefly review four large-scale neuroimaging-related studies and a consortium on imaging genomics and discuss four themes of neuroimaging data analysis at the population level. We review nine major population-based statistical analysis methods and their associated statistical challenges and present recent progress in statistical methodology to address these challenges.
Collapse
Affiliation(s)
- Hongtu Zhu
- Department of Biostatistics, Department of Statistics, Department of Genetics, and Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA;
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Tengfei Li
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
- Bingxin Zhao
- Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
Collapse
39
Shang J, Huang P, Zhang K, Dai J, Yan H. On-board MRI image compression using video encoder for MR-guided radiotherapy. Quant Imaging Med Surg 2023; 13:5207-5217. [PMID: 37581063 PMCID: PMC10423359 DOI: 10.21037/qims-22-1378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 06/01/2023] [Indexed: 08/16/2023]
Abstract
Background Magnetic resonance imaging (MRI) is currently used for online target monitoring and plan adaptation in modern image-guided radiotherapy. However, storing the large amount of data accumulated during patient treatment becomes an issue. In this study, the feasibility of compressing MRI images accumulated in MR-guided radiotherapy using video encoders was investigated. Methods Two sorting algorithms were employed to reorder the slices in multiple MRI sets for the input sequence of the video encoder. Three cropping algorithms were used to auto-segment regions of interest for separate data storage. Four video encoders, motion-JPEG (M-JPEG), MPEG-4 (MP4), Advanced Video Coding (AVC or H.264), and High Efficiency Video Coding (HEVC or H.265), were investigated. The compression performance of the video encoders was evaluated by compression ratio and time, while their restoration accuracy was evaluated by mean square error (MSE), peak signal-to-noise ratio (PSNR), and the video quality metric (VQM). The performances of all combinations of video encoders, sorting methods, and cropping algorithms were investigated and their effects were statistically analyzed. Results The compression ratios of MP4, H.264 and H.265 with both sorting methods were improved by 26% and 5%, 42% and 27%, and 72% and 43%, respectively, compared to those of M-JPEG. The slice-prioritized sorting method showed a higher compression ratio than the location-prioritized sorting method for MP4 (P=0.00000), H.264 (P=0.00012) and H.265 (P=0.00000). The compression ratios of H.265 improved significantly with the application of the morphology algorithm (P=0.01890 and P=0.00530), flood-fill algorithm (P=0.00510 and P=0.00020) and level-set algorithm (P=0.02800 and P=0.00830) for both sorting methods. Among the four video encoders, H.265 showed the best compression ratio and restoration accuracy.
Conclusions The compression ratio and restoration accuracy of video encoders using inter-frame coding (MP4, H.264 and H.265) were higher than those of the video encoder using intra-frame coding (M-JPEG). It is feasible to implement video encoders using inter-frame coding for high-performance MRI data storage in MR-guided radiotherapy.
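The restoration-accuracy metrics used in this study have standard definitions; a minimal sketch of MSE and PSNR for 8-bit images (illustrative helper functions, not the authors' code) is:

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images of the same shape."""
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Higher PSNR indicates that the decoded MRI slice is closer to the original, which is how lossy inter-frame codecs are compared against one another here.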
Collapse
Affiliation(s)
- Jiawen Shang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Huang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ke Zhang
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Hui Yan
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
Collapse
40
Deng L, Zhang Y, Wang J, Huang S, Yang X. Improving performance of medical image alignment through super-resolution. Biomed Eng Lett 2023; 13:397-406. [PMID: 37519883 PMCID: PMC10382383 DOI: 10.1007/s13534-023-00268-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Revised: 01/29/2023] [Accepted: 02/01/2023] [Indexed: 02/21/2023] Open
Abstract
Medical image alignment is an important tool for tracking patient conditions, but the quality of alignment is influenced by the effectiveness of low-dose cone-beam CT (CBCT) imaging and by patient characteristics. To address these two issues, we propose an unsupervised alignment method that incorporates a preprocessing super-resolution step. We constructed the model on a private clinical dataset and validated the enhancement that super-resolution brings to alignment using clinical and public data. Across all three experiments, we demonstrate that higher-resolution data yields better results in the alignment process. To fully constrain similarity and structure, a new loss function is proposed: the Pearson correlation coefficient combined with regional mutual information. In all test samples, the newly proposed loss function obtains higher scores than the common loss function and improves alignment accuracy. Subsequent experiments verified that, combined with the newly proposed loss function, super-resolution-processed data boosts alignment accuracy by up to 9.58%. Moreover, this boost is not limited to a single model but is effective across different alignment models. These experiments demonstrate that the unsupervised alignment method with super-resolution preprocessing proposed in this study effectively improves alignment and plays an important role in tracking different patient conditions over time.
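The Pearson-correlation half of the proposed loss is straightforward to state; a minimal sketch (an illustrative stand-alone term, not the authors' combined loss, which also includes regional mutual information) is:

```python
import numpy as np

def pearson_loss(fixed, moving, eps=1e-8):
    """1 - Pearson correlation between two images; 0 when perfectly correlated."""
    f = fixed.ravel() - fixed.mean()
    m = moving.ravel() - moving.mean()
    r = (f @ m) / (np.linalg.norm(f) * np.linalg.norm(m) + eps)
    return 1.0 - r
```

Because the Pearson coefficient is invariant to affine intensity changes, this term tolerates global brightness/contrast differences between the CBCT pair while still penalizing structural mismatch.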
Collapse
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang China
- Yuanzhi Zhang
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang China
- Jing Wang
- Faculty of Rehabilitation Medicine, Biofeedback Laboratory, Guangzhou Xinhua University, Guangzhou, 510520 Guangdong China
- Sijuan Huang
- Department of Radiation Oncology State Key Laboratory of Oncology in South China Collaborative Innovation Center for Cancer Medicine Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060 Guangdong China
- Xin Yang
- Department of Radiation Oncology State Key Laboratory of Oncology in South China Collaborative Innovation Center for Cancer Medicine Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, 510060 Guangdong China
Collapse
41
Rivas-Villar D, Motschi AR, Pircher M, Hitzenberger CK, Schranz M, Roberts PK, Schmidt-Erfurth U, Bogunović H. Automated inter-device 3D OCT image registration using deep learning and retinal layer segmentation. BIOMEDICAL OPTICS EXPRESS 2023; 14:3726-3747. [PMID: 37497506 PMCID: PMC10368062 DOI: 10.1364/boe.493047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Revised: 05/18/2023] [Accepted: 05/26/2023] [Indexed: 07/28/2023]
Abstract
Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variations of OCT imaging capable of producing complementary information, so registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. This pipeline is based on two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial) registration based on the retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrated high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications, such as the validation of layer segmentations.
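The Z-axis step uses segmented retinal layers as landmarks: each A-scan is shifted axially so that a chosen layer lines up across volumes. A toy sketch of that idea (illustrative only; the paper's method is more involved, and `np.roll` wraps at the volume edges rather than padding) is:

```python
import numpy as np

def axial_align(volume, layer_rows, target_row):
    """Shift each A-scan so a segmented layer lands on target_row.

    volume:     (n_ascans, depth) B-scan intensities
    layer_rows: per-A-scan row index of the segmented layer
    target_row: desired common row for that layer after alignment
    """
    out = np.zeros_like(volume)
    for i, row in enumerate(layer_rows):
        out[i] = np.roll(volume[i], int(target_row - row))
    return out
```

Running the same layer segmentation on both devices' volumes and aligning each to a shared target row is what brings the two modalities into axial correspondence.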
Collapse
Affiliation(s)
- David Rivas-Villar
- Centro de investigacion CITIC, Universidade da Coruña, 15071 A Coruña, Spain
- Grupo VARPA, Instituto de Investigacion Biomédica de A Coruña (INIBIC), Universidade da Coruña, 15006 A Coruña, Spain
- Alice R Motschi
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Michael Pircher
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Christoph K Hitzenberger
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Markus Schranz
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Philipp K Roberts
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Ursula Schmidt-Erfurth
- Medical University of Vienna, Department of Ophthalmology and Optometry, Vienna, Austria
- Hrvoje Bogunović
- Medical University of Vienna, Department of Ophthalmology and Optometry, Christian Doppler Lab for Artificial Intelligence in Retina, Vienna, Austria
Collapse
42
Yang G, Xu M, Chen W, Qiao X, Shi H, Hu Y. A brain CT-based approach for predicting and analyzing stroke-associated pneumonia from intracerebral hemorrhage. Front Neurol 2023; 14:1139048. [PMID: 37332986 PMCID: PMC10272424 DOI: 10.3389/fneur.2023.1139048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 05/08/2023] [Indexed: 06/20/2023] Open
Abstract
Introduction Stroke-associated pneumonia (SAP) is a common complication of stroke that can increase patient mortality and the burden on patients' families. In contrast to prior clinical scoring models that rely on baseline data, we propose constructing models based on brain CT scans due to their accessibility and clinical universality. Methods To explore how the distribution and lesion areas of intracerebral hemorrhage (ICH) relate to pneumonia, we used an MRI atlas that presents brain structures, together with a registration method, to extract features that may represent this relationship. We developed three machine learning models to predict the occurrence of SAP using these features, and applied ten-fold cross-validation to evaluate their performance. Additionally, we constructed a probability map through statistical analysis that displays which brain regions are more frequently impacted by hematoma in patients with SAP, based on four types of pneumonia. Results Our study included a cohort of 244 patients, and we extracted 35 features that capture the invasion of ICH into different brain regions for model development. We evaluated three machine learning models, namely logistic regression, support vector machine, and random forest, in predicting SAP; their AUCs ranged from 0.77 to 0.82. The probability map revealed that the distribution of ICH differed between the left and right brain hemispheres in patients with moderate and severe SAP, and feature selection identified several brain structures, including the left-choroid-plexus, right-choroid-plexus, right-hippocampus, and left-hippocampus, as more closely related to SAP. Additionally, we observed that some statistical indicators of ICH volume, such as the mean and maximum values, were proportional to the severity of SAP.
Discussion Our findings suggest that our method is effective in classifying the development of pneumonia based on brain CT scans. Furthermore, we identified distinct characteristics of ICH, such as volume and distribution, across the four types of SAP.
Affiliation(s)
- Guangtong Yang
- School of Control Science and Engineering, Shandong University, Jinan, China
- Min Xu
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
- Wei Chen
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xu Qiao
- School of Control Science and Engineering, Shandong University, Jinan, China
- Hongfeng Shi
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
- Yongmei Hu
- School of Control Science and Engineering, Shandong University, Jinan, China
|
43
|
Song L, Ma M, Liu G. TS-Net: Two-stage deformable medical image registration network based on new smooth constraints. Magn Reson Imaging 2023; 99:26-33. [PMID: 36709011 DOI: 10.1016/j.mri.2023.01.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 05/27/2022] [Accepted: 01/14/2023] [Indexed: 01/27/2023]
Abstract
Medical image registration can establish the spatial correspondence of anatomical structures between different medical images, which is important in medical image analysis. In recent years, with the rapid development of deep learning, registration methods based on deep learning have greatly improved the speed, accuracy, and robustness of registration. Regrettably, these methods typically do not handle large or complex deformations in the image well, and they neglect to preserve the topological properties of the image during deformation. To address these problems, we propose TS-Net, a network that learns the deformation from coarse to fine and transmits information at different scales between its two stages. Learning the deformation from coarse to fine allows the network to gradually capture the large and complex deformations in images. In the second stage, the feature maps downsampled in the first stage are reused via skip connections, which expands the local receptive field and provides more local information. Previously used smoothness constraint functions impose the same restriction globally and are therefore not targeted. In this paper, we propose a new smoothness constraint applied to each voxel's deformation, which better ensures the smoothness of the transformation and maintains the topological properties of the image. Experiments on brain datasets with complex deformations and heart datasets with large deformations show that our proposed method achieves better results than existing deep learning-based registration methods while maintaining the topological properties of the deformations.
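The abstract contrasts its per-voxel constraint with the conventional global smoothness term. That conventional baseline (not the paper's new constraint, whose exact form the abstract does not give) is typically the mean squared spatial gradient of the displacement field:

```python
import numpy as np

def smoothness_penalty(disp):
    """Conventional diffusion regularizer for a 2-D displacement field
    of shape (2, H, W): mean squared forward-difference gradient."""
    dy = np.diff(disp, axis=1)  # vertical finite differences
    dx = np.diff(disp, axis=2)  # horizontal finite differences
    return (dy ** 2).mean() + (dx ** 2).mean()
```

Because the penalty is a single average, every voxel is restricted identically, which is exactly the untargeted behaviour the paper's per-voxel constraint aims to avoid.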
Affiliation(s)
- Lei Song
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
- Mingrui Ma
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
- Guixia Liu
- College of Computer Science and Technology, Jilin University, Changchun 130012, Jilin, PR China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, Jilin, PR China.
|
44
|
Salido J, Vallez N, González-López L, Deniz O, Bueno G. Comparison of deep learning models for digital H&E staining from unpaired label-free multispectral microscopy images. Comput Methods Programs Biomed 2023; 235:107528. [PMID: 37040684 DOI: 10.1016/j.cmpb.2023.107528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Revised: 03/27/2023] [Accepted: 04/03/2023] [Indexed: 05/08/2023]
Abstract
BACKGROUND AND OBJECTIVE This paper presents a quantitative comparison of three generative models for digital staining, also known as virtual staining, in the H&E (Hematoxylin and Eosin) modality, applied to 5 types of breast tissue. Moreover, a qualitative evaluation of the results achieved with the best model was carried out. The process is based on images of unstained samples captured by a multispectral microscope, with prior dimensional reduction to three channels in the RGB range. METHODS The models compared are based on conditional GAN (pix2pix), which requires aligned image pairs with and without staining, and two models that do not require image alignment: Cycle GAN (cycleGAN) and a contrastive learning-based model (CUT). The models are compared by the structural similarity and chromatic discrepancy between chemically stained samples and their digitally stained counterparts. The correspondence between images is obtained by digitally unstaining the chemically stained images with a model built to guarantee the cyclic consistency of the generative models. RESULTS The comparison of the three models corroborates the visual evaluation of the results, showing the superiority of cycleGAN both in its larger structural similarity with respect to chemical staining (mean SSIM ∼ 0.95) and in its lower chromatic discrepancy (10%). For the latter, quantization and calculation of the EMD (Earth Mover's Distance) between color clusters are used. In addition, subjective psychophysical tests with three experts were carried out to evaluate the quality of the results with the best model (cycleGAN). CONCLUSIONS The results can be satisfactorily evaluated by metrics that use a chemically stained sample as the reference image together with digitally stained images of the same sample after digital unstaining.
These metrics demonstrate that generative staining models that guarantee cyclic consistency provide the results closest to chemical H&E staining, which is also consistent with the experts' qualitative evaluation.
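The chromatic-discrepancy metric above relies on the EMD between color clusters. In one dimension the EMD reduces to the first Wasserstein distance, available in SciPy; this toy comparison of two sets of hue-like values (hypothetical numbers) is an illustration only, not the authors' multi-channel clustering pipeline:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Toy 1-D color values sampled from a chemically stained image and
# its digitally stained counterpart (hypothetical numbers).
chemical = np.array([0.30, 0.32, 0.35, 0.60])
digital = np.array([0.31, 0.33, 0.36, 0.61])

# Earth Mover's Distance between the two empirical distributions.
emd = wasserstein_distance(chemical, digital)
```

For equally weighted samples this equals the mean absolute difference of the sorted values, here 0.01.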
Affiliation(s)
- Jesus Salido
- IEEAC Dept. (ESI-UCLM), P de la Universidad 4, Ciudad Real, 13071, Spain.
- Noelia Vallez
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Lucía González-López
- Hospital Gral. Universitario de C.Real (HGUCR), C. Obispo Rafael Torija s/n, Ciudad Real, 13005, Spain
- Oscar Deniz
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
- Gloria Bueno
- IEEAC Dept. (ETSII-UCLM), Avda. Camilo José Cela s/n, Ciudad Real, 13071, Spain
|
45
|
Giacopelli G, Migliore M, Tegolo D. NeuronAlg: An Innovative Neuronal Computational Model for Immunofluorescence Image Segmentation. Sensors (Basel) 2023; 23:4598. [PMID: 37430509 DOI: 10.3390/s23104598] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 04/24/2023] [Accepted: 05/03/2023] [Indexed: 07/12/2023]
Abstract
Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps and therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic, optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing indirect immunofluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It is very different from conventional neural network approaches but has equivalent quantitative and qualitative performance, and it is also robust against adversarial noise. The method is based on formally correct functions and does not need to be tuned on specific datasets. Results: This work demonstrates the robustness of the method against variability of parameters such as image size, mode, and signal-to-noise ratio. We validated the method on three datasets (Neuroblastoma, NucleusSegData, and the ISBI 2009 dataset) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional and structural point of view, guarantees optimized and functionally correct results. The excellent performance of our deterministic method (NeuronAlg) in segmenting cells and nuclei from fluorescence images was measured with quantitative indicators and compared with that of three published ML approaches.
Affiliation(s)
- Michele Migliore
- National Research Council, Institute of Biophysics, 90153 Palermo, Italy
- Domenico Tegolo
- National Research Council, Institute of Biophysics, 90153 Palermo, Italy
- Dipartimento Matematica e Informatica, Universitá degli Studi di Palermo, 90123 Palermo, Italy
|
46
|
Zhang R, Wang J, Chen C. Automatic implant shape design for minimally invasive repair of pectus excavatum using deep learning and shape registration. Comput Biol Med 2023; 158:106806. [PMID: 37019009 DOI: 10.1016/j.compbiomed.2023.106806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 03/05/2023] [Accepted: 03/20/2023] [Indexed: 04/05/2023]
Abstract
Minimally invasive repair of pectus excavatum (MIRPE) is an effective method for correcting pectus excavatum (PE), a congenital chest wall deformity characterized by concave depression of the sternum. In MIRPE, a long, thin, curved stainless-steel plate (implant) is placed across the thoracic cage to correct the deformity. However, the implant curvature is difficult to determine accurately during the procedure: the implant shape depends on the surgeon's expert knowledge and experience, lacks objective criteria, and requires tedious manual input from surgeons to estimate. In this study, a novel three-step end-to-end automatic framework is proposed to determine the implant shape during preoperative planning: (1) The deepest depression point (DDP) in the sagittal plane of the patient's CT volume is automatically detected using Sparse R-CNN-R101, and the axial slice containing the point is extracted. (2) Cascade Mask R-CNN-X101 segments the anterior intercostal cartilage, sternum and ribs in the axial slice, and the contour is extracted to generate the PE point set. (3) Robust shape registration matches the PE shape to a healthy thoracic cage, which is then used to generate the implant shape. The framework was evaluated on a CT dataset of 90 PE patients and 30 healthy children. The experimental results show that the average error of the DDP extraction was 5.83 mm. The end-to-end output of our framework was compared with the surgical outcomes of professional surgeons to clinically validate its effectiveness. The results indicate that the root mean square error (RMSE) between the midline of the real implant and our framework's output was less than 2 mm.
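Step (3) matches the PE point set to a healthy thoracic cage. The abstract does not specify the robust registration used, but the least-squares core of such a shape registration is the classical Kabsch/Procrustes alignment, sketched here for 2-D point sets with known correspondences:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ~ dst.
    src, dst: (N, 2) arrays of corresponding points."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A robust variant would iterate this closed-form step inside a correspondence-and-outlier loop (e.g. trimmed ICP).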
|
47
|
Delaby N, Barateau A, Chiavassa S, Biston MC, Chartier P, Graulières E, Guinement L, Huger S, Lacornerie T, Millardet-Martin C, Sottiaux A, Caron J, Gensanne D, Pointreau Y, Coutte A, Biau J, Serre AA, Castelli J, Tomsej M, Garcia R, Khamphan C, Badey A. Practical and technical key challenges in head and neck adaptive radiotherapy: The GORTEC point of view. Phys Med 2023; 109:102568. [PMID: 37015168 DOI: 10.1016/j.ejmp.2023.102568] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 02/15/2023] [Accepted: 03/18/2023] [Indexed: 04/05/2023] Open
Abstract
Anatomical variations occur during head and neck (H&N) radiotherapy (RT) treatment. These variations may result in underdosage of the target volume or overdosage of the organs at risk. Replanning during the treatment course can be triggered to overcome this issue. Due to technological, methodological and clinical evolutions, tools for adaptive RT (ART) are becoming increasingly sophisticated. The aim of this paper is to give an overview of the key steps and tools of an H&N ART workflow from the point of view of a group of French-speaking medical physicists and physicians (from GORTEC). We focus on image registration, segmentation, estimation of the dose delivered on the day of treatment, workflow and quality assurance for implementing offline and online H&N ART. Practical recommendations are given to assist physicians and medical physicists in a clinical workflow.
|
48
|
Iglesias JE. A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI. Sci Rep 2023; 13:6657. [PMID: 37095168 PMCID: PMC10126156 DOI: 10.1038/s41598-023-33781-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 04/19/2023] [Indexed: 04/26/2023] Open
Abstract
Volumetric registration of brain MRI is routinely used in human neuroimaging, e.g., to align different MRI modalities, to measure change in longitudinal analysis, to map an individual to a template, or in registration-based segmentation. Classical registration techniques based on numerical optimization have been very successful in this domain, and are implemented in widespread software suites like ANTs, Elastix, NiftyReg, or DARTEL. Over the last 7-8 years, learning-based techniques have emerged with a number of advantages, such as high computational efficiency, potential for higher accuracy, easy integration of supervision, and the ability to be part of meta-architectures. However, their adoption in neuroimaging pipelines has so far been almost nonexistent. Reasons include: lack of robustness to changes in MRI modality and resolution; lack of robust affine registration modules; lack of (guaranteed) symmetry; and, at a more practical level, the requirement of deep learning expertise that may be lacking at neuroimaging research sites. Here, we present EasyReg, an open-source, learning-based registration tool that can be easily used from the command line without any deep learning expertise or specific hardware. EasyReg combines the features of classical registration tools, the capabilities of modern deep learning methods, and the robustness to changes in MRI modality and resolution provided by our recent work on domain randomization. As a result, EasyReg is fast, symmetric, diffeomorphic (and thus invertible), agnostic to MRI modality and resolution, and compatible with affine and nonlinear registration, and it does not require any preprocessing or parameter tuning. We present results on challenging registration tasks, showing that EasyReg is as accurate as classical methods when registering 1 mm isotropic scans within MRI modality, but much more accurate across modalities and resolutions.
EasyReg is publicly available as part of FreeSurfer; see https://surfer.nmr.mgh.harvard.edu/fswiki/EasyReg .
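Diffeomorphic (and thus invertible) deformations of the kind claimed above are commonly obtained by integrating a stationary velocity field with scaling and squaring. A generic sketch of that integration (not EasyReg's actual implementation; the field shape and step count are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def integrate_velocity(v, steps=6):
    """Turn a stationary velocity field v of shape (2, H, W) into a
    displacement field via scaling and squaring."""
    phi = v / (2.0 ** steps)  # scale: a small, near-invertible displacement
    H, W = v.shape[1], v.shape[2]
    grid = np.stack(np.meshgrid(np.arange(H), np.arange(W),
                                indexing="ij")).astype(float)
    for _ in range(steps):    # square: compose the map with itself
        coords = grid + phi
        warped = np.stack([map_coordinates(phi[c], coords, order=1,
                                           mode="nearest")
                           for c in range(2)])
        phi = phi + warped
    return phi
```

Halving the field `steps` times keeps each update small enough to stay invertible; composing it with itself `steps` times recovers the full displacement.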
Affiliation(s)
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02129, USA.
- Department of Medical Physics and Biomedical Engineering, University College London, London, WC1V 6LJ, UK.
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, 02139, USA.
|
49
|
Ramakrishnan V, Schönmehl R, Artinger A, Winter L, Böck H, Schreml S, Gürtler F, Daza J, Schmitt VH, Mamilos A, Arbelaez P, Teufel A, Niedermair T, Topolcan O, Karlíková M, Sossalla S, Wiedenroth CB, Rupp M, Brochhausen C. 3D Visualization, Skeletonization and Branching Analysis of Blood Vessels in Angiogenesis. Int J Mol Sci 2023; 24:7714. [PMID: 37175421 PMCID: PMC10178731 DOI: 10.3390/ijms24097714] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Revised: 04/20/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023] Open
Abstract
Angiogenesis is the process of new blood vessels growing from existing vasculature. Visualizing them as a three-dimensional (3D) model is a challenging yet relevant task, as it would be of great help to researchers, pathologists, and medical doctors. A branching analysis of the 3D model would further facilitate research and diagnostic purposes. In this paper, a pipeline of vision algorithms is elaborated to visualize and analyze blood vessels in 3D from formalin-fixed paraffin-embedded (FFPE) granulation tissue sections with two different staining methods. First, a U-net neural network is used to segment blood vessels from the tissues. Second, image registration is used to align the consecutive images: coarse registration using an image-intensity optimization technique, followed by fine-tuning using a neural network based on Spatial Transformers, results in an excellent alignment of images. Lastly, the corresponding segmented masks depicting the blood vessels are aligned and interpolated using the results of the image registration, resulting in a visualized 3D model. Additionally, a skeletonization algorithm is used to analyze the branching characteristics of the 3D vascular model. In summary, computer vision and deep learning are used to reconstruct, visualize and analyze a 3D vascular model from a set of parallel tissue samples. Our technique opens innovative perspectives for the understanding of vascular morphogenesis under different pathophysiological conditions and for its potential diagnostic role.
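A simple branching analysis on a 3-D skeleton counts each voxel's skeleton neighbours: voxels with three or more are branch points. A minimal sketch using face (6-) connectivity, a simplification since the abstract does not detail the authors' skeleton analysis:

```python
import numpy as np
from scipy.ndimage import convolve

def branch_points(skeleton):
    """Voxels of a binary 3-D skeleton with three or more face-connected
    skeleton neighbours, i.e. candidate branching points."""
    kernel = np.zeros((3, 3, 3), dtype=int)
    kernel[0, 1, 1] = kernel[2, 1, 1] = 1  # neighbours along axis 0
    kernel[1, 0, 1] = kernel[1, 2, 1] = 1  # neighbours along axis 1
    kernel[1, 1, 0] = kernel[1, 1, 2] = 1  # neighbours along axis 2
    neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
    return skeleton & (neighbours >= 3)
```

Counting the connected segments between such branch points then yields branch lengths and counts for the vascular model.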
Affiliation(s)
- Vignesh Ramakrishnan
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Rebecca Schönmehl
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Annalena Artinger
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Lina Winter
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Hendrik Böck
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Stephan Schreml
- Department of Dermatology, University Medical Centre Regensburg, 93053 Regensburg, Germany
- Florian Gürtler
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Jimmy Daza
- Department of Internal Medicine II, Division of Hepatology, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Volker H Schmitt
- Department of Cardiology, University Medical Centre, Johannes Gutenberg University of Mainz, 55131 Mainz, Germany
- Andreas Mamilos
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Pablo Arbelaez
- Center for Research and Formation in Artificial Intelligence (CinfonIA), Universidad de Los Andes, 111711 Bogota, Colombia
- Andreas Teufel
- Department of Internal Medicine II, Division of Hepatology, Medical Faculty Mannheim, Heidelberg University, 68167 Mannheim, Germany
- Tanja Niedermair
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Central Biobank Regensburg, University and University Hospital Regensburg, 93053 Regensburg, Germany
- Ondrej Topolcan
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, 32300 Pilsen, Czech Republic
- Marie Karlíková
- Biomedical Center, Faculty of Medicine in Pilsen, Charles University, 32300 Pilsen, Czech Republic
- Samuel Sossalla
- Department of Internal Medicine II, University Hospital Regensburg, 93053 Regensburg, Germany
- Markus Rupp
- Department of Trauma Surgery, University Medical Centre Regensburg, 93053 Regensburg, Germany
- Christoph Brochhausen
- Institute of Pathology, University of Regensburg, 93053 Regensburg, Germany
- Institute of Pathology, University Medical Centre Mannheim, Heidelberg University, 68167 Mannheim, Germany
|
50
|
Aganj I, Fischl B. Intermediate Deformable Image Registration via Windowed Cross-Correlation. Proc IEEE Int Symp Biomed Imaging 2023; 2023:10.1109/isbi53787.2023.10230715. [PMID: 37691967 PMCID: PMC10485808 DOI: 10.1109/isbi53787.2023.10230715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/12/2023]
Abstract
In population and longitudinal imaging studies that employ deformable image registration, more accurate results can be achieved by initializing deformable registration with the results of affine registration where global misalignments have been considerably reduced. Such affine registration, however, is limited to linear transformations and it cannot account for large nonlinear anatomical variations, such as those between pre- and post-operative images or across different subject anatomies. In this work, we introduce a new intermediate deformable image registration (IDIR) technique that recovers large deformations via windowed cross-correlation, and provide an efficient implementation based on the fast Fourier transform. We evaluate our method on 2D X-ray and 3D magnetic resonance images, demonstrating its ability to align substantial nonlinear anatomical variations within a few iterations.
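The windowed cross-correlation at the core of the method builds on the standard FFT identity that correlation in space is a pointwise product in frequency. A plain, unwindowed translation recovery via that identity, as a minimal illustration rather than the authors' method:

```python
import numpy as np

def translation_by_xcorr(fixed, moving):
    """Integer shift that best aligns `moving` to `fixed`, found as the
    peak of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))
```

The windowed variant presumably applies such correlation within local windows so that different regions can recover different displacements, which is what lets it capture large nonlinear deformations.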
Affiliation(s)
- Iman Aganj
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
|