1. Golestani N, Wang A, Moallem G, Bean GR, Rusu M. PViT-AIR: Puzzling vision transformer-based affine image registration for multi histopathology and faxitron images of breast tissue. Med Image Anal 2024; 99:103356. [PMID: 39378568] [DOI: 10.1016/j.media.2024.103356]
Abstract
Breast cancer is a significant global public health concern, with various treatment options available based on tumor characteristics. Pathological examination of excision specimens after surgery provides essential information for treatment decisions. However, the manual selection of representative sections for histological examination is laborious and subjective, leading to potential sampling errors and variability, especially in carcinomas that have been previously treated with chemotherapy. Furthermore, the accurate identification of residual tumors presents significant challenges, emphasizing the need for systematic or assisted methods to address this issue. To enable the development of deep-learning algorithms for automated cancer detection on radiology images, radiology-pathology registration is essential: aligning radiology and histopathology images generates the accurately labeled ground-truth data needed to train such algorithms. However, aligning these images is challenging due to their content and resolution differences, tissue deformation, artifacts, and imprecise correspondence. We present a novel deep learning-based pipeline for the affine registration of faxitron images, the x-ray representations of macrosections of ex-vivo breast tissue, and their corresponding histopathology images of tissue segments. The proposed model combines convolutional neural networks and vision transformers, allowing it to effectively capture both local and global information from the entire tissue macrosection as well as its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzling-based mechanism.
To address the limitations of multi-modal ground truth data, we tackle the problem by training the model using synthetic mono-modal data in a weakly supervised manner. The trained model demonstrated successful performance in multi-modal registration, yielding registration results with an average landmark error of 1.51 mm (±2.40), and stitching distance of 1.15 mm (±0.94). The results indicate that the model performs significantly better than existing baselines, including both deep learning-based and iterative models, and it is also approximately 200 times faster than the iterative approach. This work bridges the gap in the current research and clinical workflow and has the potential to improve efficiency and accuracy in breast cancer evaluation and streamline pathology workflow.
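As a toy illustration of the affine landmark setting described in this abstract (not the authors' PViT-AIR model), the snippet below applies a known 2D affine transform to a set of landmarks and scores the alignment with a mean landmark error like the one reported; all coordinates and the transform are invented for the example.

```python
import math

def apply_affine(points, A, t):
    """Apply a 2D affine transform p' = A @ p + t to a list of (x, y) points."""
    return [(A[0][0] * x + A[0][1] * y + t[0],
             A[1][0] * x + A[1][1] * y + t[1]) for x, y in points]

def mean_landmark_error(moved, fixed):
    """Mean Euclidean distance between corresponding landmark pairs."""
    return sum(math.dist(p, q) for p, q in zip(moved, fixed)) / len(moved)

# Hypothetical landmarks: `fixed` is `moving` rotated by 90 degrees, so the
# matching rotation should drive the landmark error to zero.
moving = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
fixed = [(0.0, 1.0), (-1.0, 0.0), (-2.0, 2.0)]
rot90 = [[0.0, -1.0], [1.0, 0.0]]
err = mean_landmark_error(apply_affine(moving, rot90, (0.0, 0.0)), fixed)
```

In practice the transform would be estimated by the network rather than known in advance; the error metric is the same either way.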
Affiliation(s)
- Aihui Wang
- Department of Pathology, Stanford University, USA
- Mirabela Rusu
- Department of Radiology, Stanford University, USA; Department of Urology, Stanford University, USA; Department of Biomedical Data Science, Stanford University, USA
2. Zakaria R, Abdelmajid H, Dya Z, Hakim A. PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration. J Imaging Inform Med 2024. [PMID: 39249582] [DOI: 10.1007/s10278-024-01249-w]
Abstract
PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.
Affiliation(s)
- Rguibi Zakaria
- LAVETE Laboratory, Hassan First University, Settat, Morocco
- Zitouni Dya
- LAVETE Laboratory, Hassan First University, Settat, Morocco
- Allali Hakim
- LAVETE Laboratory, Hassan First University, Settat, Morocco
3. Zhang JH, Neumann T, Schaeffter T, Kolbitsch C, Kerkering KM. Respiratory motion-corrected T1 mapping of the abdomen. MAGMA 2024; 37:637-649. [PMID: 39133420] [PMCID: PMC11417068] [DOI: 10.1007/s10334-024-01196-1]
Abstract
OBJECTIVE The purpose of this study was to investigate an approach for motion-corrected T1 mapping of the abdomen that allows for free breathing data acquisition with 100% scan efficiency. MATERIALS AND METHODS Data were acquired using a continuous golden radial trajectory and multiple inversion pulses. For the correction of respiratory motion, motion estimation based on a surrogate was performed from the same data used for T1 mapping. Image-based self-navigation allowed for binning and reconstruction of respiratory-resolved images, which were used for the estimation of respiratory motion fields. Finally, motion-corrected T1 maps were calculated from the data applying the estimated motion fields. The method was evaluated in five healthy volunteers. For the assessment of the image-based navigator, we compared it to a simultaneously acquired ultrawide band radar signal. Motion-corrected T1 maps were evaluated qualitatively and quantitatively for different scan times. RESULTS For all volunteers, the motion-corrected T1 maps showed fewer motion artifacts in the liver as well as sharper kidney structures and blood vessels compared to uncorrected T1 maps. Moreover, the relative error to the reference breathhold T1 maps could be reduced from up to 25% for the uncorrected T1 maps to below 10% for the motion-corrected maps for the average value of a region of interest, while the scan time could be reduced to 6-8 s. DISCUSSION The proposed approach allows for respiratory motion-corrected T1 mapping in the abdomen and ensures accurate T1 maps without the need for any breathholds.
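The surrogate-driven respiratory binning step described above can be illustrated with a minimal amplitude-binning sketch (a generic scheme, not the authors' implementation; the trace and bin count are invented for the example):

```python
import math

def bin_by_surrogate(surrogate, n_bins):
    """Assign each readout index to a respiratory bin by splitting the
    observed surrogate amplitude range into equal-width intervals."""
    lo, hi = min(surrogate), max(surrogate)
    width = (hi - lo) / n_bins
    bins = [[] for _ in range(n_bins)]
    for idx, amp in enumerate(surrogate):
        b = min(int((amp - lo) / width), n_bins - 1)  # clamp the maximum sample
        bins[b].append(idx)
    return bins

# Simulated breathing trace (arbitrary units), sorted into 4 amplitude bins;
# each bin's readouts would then be reconstructed into one respiratory phase.
trace = [math.sin(2 * math.pi * k / 20) for k in range(60)]
bins = bin_by_surrogate(trace, 4)
```

Phase-based rather than amplitude-based binning is an equally common choice; the abstract does not specify which variant is used.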
Affiliation(s)
- Jana Huiyue Zhang
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Department of Biomedical Engineering, Technical University of Berlin, Berlin, Germany
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
- Tom Neumann
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Tobias Schaeffter
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- Department of Biomedical Engineering, Technical University of Berlin, Berlin, Germany
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Christoph Kolbitsch
- Physikalisch-Technische Bundesanstalt (PTB), Braunschweig and Berlin, Germany
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
4. Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768] [DOI: 10.1515/revneuro-2023-0115]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
5. Osman AFI, Al-Mugren KS, Tamam NM, Shahine B. Deformable registration of magnetic resonance images using unsupervised deep learning in neuro-/radiation oncology. Radiat Oncol 2024; 19:61. [PMID: 38773620] [PMCID: PMC11110381] [DOI: 10.1186/s13014-024-02452-3]
Abstract
PURPOSE Accurate deformable registration of magnetic resonance imaging (MRI) scans containing pathologies is challenging due to changes in tissue appearance. In this paper, we developed a novel automated three-dimensional (3D) convolutional U-Net based deformable image registration (ConvUNet-DIR) method using unsupervised learning to establish correspondence between baseline pre-operative and follow-up MRI scans of patients with brain glioma. METHODS This study involved multi-parametric brain MRI scans (T1, T1-contrast enhanced, T2, FLAIR) acquired at pre-operative and follow-up time for 160 patients diagnosed with glioma, representing the BraTS-Reg 2022 challenge dataset. ConvUNet-DIR, a deep learning-based deformable registration workflow using 3D U-Net style architecture as a core, was developed to establish correspondence between the MRI scans. The workflow consists of three components: (1) the U-Net learns features from pairs of MRI scans and estimates a mapping between them, (2) the grid generator computes the sampling grid based on the derived transformation parameters, and (3) the spatial transformation layer generates a warped image by applying the sampling operation using interpolation. A similarity measure was used as a loss function for the network with a regularization parameter limiting the deformation. The model was trained via unsupervised learning using pairs of MRI scans on a training data set (n = 102) and validated on a validation data set (n = 26) to assess its generalizability. Its performance was evaluated on a test set (n = 32) by computing the Dice score and structural similarity index (SSIM) quantitative metrics. The model's performance also was compared with the baseline state-of-the-art VoxelMorph (VM1 and VM2) learning-based algorithms. RESULTS The ConvUNet-DIR model showed promising competency in performing accurate 3D deformable registration. 
It achieved a mean Dice score of 0.975 ± 0.003 and SSIM of 0.908 ± 0.011 on the test set (n = 32). Experimental results also demonstrated that ConvUNet-DIR outperformed the VoxelMorph algorithms concerning Dice (VM1: 0.969 ± 0.006 and VM2: 0.957 ± 0.008) and SSIM (VM1: 0.893 ± 0.012 and VM2: 0.857 ± 0.017) metrics. The time required to perform a registration for a pair of MRI scans is about 1 s on the CPU. CONCLUSIONS The developed deep learning-based model can perform an end-to-end deformable registration of a pair of 3D MRI scans for glioma patients without human intervention. The model could provide accurate, efficient, and robust deformable registration without needing pre-alignment and labeling. It outperformed the state-of-the-art VoxelMorph learning-based deformable registration algorithms and other supervised/unsupervised deep learning-based methods reported in the literature.
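The spatial-transformation step of such a workflow, resampling an image through a deformation field with interpolation, can be sketched in one dimension (a generic illustration, not the ConvUNet-DIR code; the signal and displacements are invented):

```python
def warp_1d(signal, displacement):
    """Warp a 1D signal with a per-sample displacement field using linear
    interpolation; out-of-range sampling locations clamp to the border."""
    n = len(signal)
    out = []
    for i, d in enumerate(displacement):
        x = min(max(i + d, 0.0), n - 1.0)   # sampling location for output i
        i0 = int(x)
        i1 = min(i0 + 1, n - 1)
        w = x - i0                          # fractional weight between samples
        out.append((1 - w) * signal[i0] + w * signal[i1])
    return out

signal = [0.0, 1.0, 2.0, 3.0]
shifted = warp_1d(signal, [1.0] * 4)  # sample one index to the right
```

In the 3D network the same operation is differentiable, which is what lets the similarity loss train the U-Net end to end.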
Affiliation(s)
- Alexander F I Osman
- Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan
- Kholoud S Al-Mugren
- Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Nissren M Tamam
- Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Bilal Shahine
- Department of Radiation Oncology, American University of Beirut Medical Center, Beirut, Lebanon
6. Takahashi K, Ozawa E, Shimakura A, Mori T, Miyaaki H, Nakao K. Recent Advances in Endoscopic Ultrasound for Gallbladder Disease Diagnosis. Diagnostics (Basel) 2024; 14:374. [PMID: 38396413] [PMCID: PMC10887964] [DOI: 10.3390/diagnostics14040374]
Abstract
Gallbladder (GB) disease is classified into two broad categories: GB wall-thickening and protuberant lesions, which include various lesions, such as adenomyomatosis, cholecystitis, GB polyps, and GB carcinoma. This review summarizes recent advances in the differential diagnosis of GB lesions, focusing primarily on endoscopic ultrasound (EUS) and related technologies. Fundamental B-mode EUS and contrast-enhanced harmonic EUS (CH-EUS) have been reported to be useful for the diagnosis of GB diseases because they can evaluate the thickening of the GB wall and protuberant lesions in detail. We also outline the current status of EUS-guided fine-needle aspiration (EUS-FNA) for GB lesions, as there have been scattered reports on EUS-FNA in recent years. Furthermore, artificial intelligence (AI) technologies, ranging from machine learning to deep learning, have become popular in healthcare for disease diagnosis, drug discovery, drug development, and patient risk identification. In this review, we outline the current status of AI in the diagnosis of GB disease.
Affiliation(s)
- Kosuke Takahashi
- Department of Gastroenterology and Hepatology, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki 852-8501, Japan
7. Garzia S, Capellini K, Gasparotti E, Pizzuto D, Spinelli G, Berti S, Positano V, Celi S. Three-Dimensional Multi-Modality Registration for Orthopaedics and Cardiovascular Settings: State-of-the-Art and Clinical Applications. Sensors (Basel) 2024; 24:1072. [PMID: 38400229] [PMCID: PMC10891817] [DOI: 10.3390/s24041072]
Abstract
The multimodal and multidomain registration of medical images has gained increasing recognition in clinical practice as a powerful tool for fusing and leveraging useful information from different imaging techniques and across medical fields such as cardiology and orthopedics. Image registration can be a challenging process, and it strongly depends on the correct tuning of registration parameters. In this paper, the robustness and accuracy of a landmark-based approach are presented for five cardiac multimodal image datasets. The study is based on the 3D Slicer software and is focused on the registration of a computed tomography (CT) and 3D ultrasound time-series of post-operative mitral valve repair. The accuracy of the method, as a function of the number of landmarks used, was assessed by analysing root mean square error (RMSE) and fiducial registration error (FRE) metrics. The validation of the number of landmarks resulted in an optimal number of 10 landmarks. The mean RMSE and FRE values were 5.26 ± 3.17 and 2.98 ± 1.68 mm, respectively, showing performance comparable with the literature. The developed registration process was also tested on a CT orthopaedic dataset to assess the possibility of reconstructing the damaged jaw portion for a pre-operative planning setting. Overall, the proposed work shows how 3D Slicer and registration by landmarks can provide a useful environment for multimodal/unimodal registration.
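The FRE metric reported above has a compact definition: the root-mean-square distance over fiducial pairs after registration. Below is a minimal sketch assuming a translation-only alignment by centroid matching (the study itself uses 3D Slicer's landmark registration, which also solves for rotation); the point sets are invented for illustration.

```python
import math

def centroid(pts):
    """Centroid of a list of equal-dimension points."""
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def translate_register(moving, fixed):
    """Translation-only landmark registration: align the two centroids."""
    cm, cf = centroid(moving), centroid(fixed)
    t = tuple(f - m for f, m in zip(cf, cm))
    moved = [tuple(p + d for p, d in zip(pt, t)) for pt in moving]
    return moved, t

def fre(moved, fixed):
    """Fiducial registration error: RMS distance over landmark pairs."""
    sq = [sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in zip(moved, fixed)]
    return math.sqrt(sum(sq) / len(sq))

moving = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
fixed = [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)]   # same shape, shifted by (5, 5)
moved, t = translate_register(moving, fixed)
```

With real anatomical landmarks the residual FRE is nonzero, which is exactly the quantity tabulated against the number of landmarks used.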
Affiliation(s)
- Simone Garzia
- BioCardioLab, Bioengineering Unit, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
- Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Katia Capellini
- BioCardioLab, Bioengineering Unit, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
- Emanuele Gasparotti
- BioCardioLab, Bioengineering Unit, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
- Domenico Pizzuto
- Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Giuseppe Spinelli
- Maxillofacial Surgery Department, Azienda Ospedaliero-Universitaria Careggi, 50134 Firenze, Italy
- Sergio Berti
- Diagnostic and Interventional Cardiology Department, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
- Vincenzo Positano
- BioCardioLab, Bioengineering Unit, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
- Simona Celi
- BioCardioLab, Bioengineering Unit, Fondazione Toscana G. Monasterio, 54100 Massa, Italy
8. Yao Y, Zhong J, Zhang L, Khan S, Chen W. CartiMorph: A framework for automated knee articular cartilage morphometrics. Med Image Anal 2024; 91:103035. [PMID: 37992496] [DOI: 10.1016/j.media.2023.103035]
Abstract
We introduce CartiMorph, a framework for automated knee articular cartilage morphometrics. It takes an image as input and generates quantitative metrics for cartilage subregions, including the percentage of full-thickness cartilage loss (FCL), mean thickness, surface area, and volume. CartiMorph leverages the power of deep learning models for hierarchical image feature representation. Deep learning models were trained and validated for tissue segmentation, template construction, and template-to-image registration. We established methods for surface-normal-based cartilage thickness mapping, FCL estimation, and rule-based cartilage parcellation. Our cartilage thickness map showed less error in thin and peripheral regions. We evaluated the effectiveness of the adopted segmentation model by comparing the quantitative metrics obtained from model segmentation and those from manual segmentation. The root-mean-squared deviation of the FCL measurements was less than 8%, and strong correlations were observed for the mean thickness (Pearson's correlation coefficient ρ∈[0.82,0.97]), surface area (ρ∈[0.82,0.98]) and volume (ρ∈[0.89,0.98]) measurements. We compared our FCL measurements with those from a previous study and found that our measurements deviated less from the ground truths. We observed superior performance of the proposed rule-based cartilage parcellation method compared with the atlas-based approach. CartiMorph has the potential to promote imaging biomarkers discovery for knee osteoarthritis.
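The agreement statistics quoted above (Pearson's correlation between model-derived and manual measurements) reduce to a short computation; the thickness values below are invented for illustration:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean-thickness measurements (mm) for five cartilage subregions,
# once from manual segmentation and once from the model's segmentation.
manual = [2.1, 2.4, 1.9, 2.6, 2.2]
model = [2.0, 2.5, 1.8, 2.7, 2.1]
r = pearson_r(manual, model)
```

A high r indicates the model-derived metrics track the manual ones, which is how the abstract's ρ ranges should be read.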
Affiliation(s)
- Yongcheng Yao
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Junru Zhong
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Liping Zhang
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Sheheryar Khan
- School of Professional Education and Executive Development, The Hong Kong Polytechnic University, Hong Kong, China
- Weitian Chen
- CU Lab of AI in Radiology (CLAIR), Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
9. Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540] [PMCID: PMC10725576] [DOI: 10.1088/1361-6560/ad0d8a]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
10. Wei HL, Wei C, Feng Y, Yan W, Yu YS, Chen YC, Yin X, Li J, Zhang H. Predicting the efficacy of non-steroidal anti-inflammatory drugs in migraine using deep learning and three-dimensional T1-weighted images. iScience 2023; 26:108107. [PMID: 37867961] [PMCID: PMC10585394] [DOI: 10.1016/j.isci.2023.108107]
Abstract
Deep learning (DL) models based on individual images could contribute to tailored therapies and personalized treatment strategies. We aimed to construct a DL model using individual 3D structural images for predicting the efficacy of non-steroidal anti-inflammatory drugs (NSAIDs) in migraine. A 3D convolutional neural network model was constructed, with ResNet18 as the classification backbone, to link structural images to predict the efficacy of NSAIDs. In total, 111 patients were included and allocated to the training and testing sets in a 4:1 ratio. The prediction accuracies of the ResNet34, ResNet50, ResNeXt50, DenseNet121, and 3D ResNet18 models were 0.65, 0.74, 0.65, 0.70, and 0.78, respectively. This model, based on individual 3D structural images, demonstrated better predictive performance in comparison to conventional models. Our study highlights the feasibility of the DL algorithm based on brain structural images and suggests that it can be applied to predict the efficacy of NSAIDs in migraine treatment.
Affiliation(s)
- Heng-Le Wei
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Cunsheng Wei
- Department of Neurology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Yibo Feng
- Infervision Medical Technology Co., Ltd, Beijing, China
- Wanying Yan
- Infervision Medical Technology Co., Ltd, Beijing, China
- Yu-Sheng Yu
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Jiangsu Province, Nanjing 210006, China
- Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Jiangsu Province, Nanjing 210006, China
- Junrong Li
- Department of Neurology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
- Hong Zhang
- Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, Nanjing, Jiangsu 211100, China
11. Saito M. MRI-based quantification of carbon and oxygen concentrations in human soft tissues for range verification in proton therapy. Med Phys 2023; 50:5671-5681. [PMID: 36916123] [DOI: 10.1002/mp.16353]
Abstract
BACKGROUND In-situ range verification of particle therapy based on the detection of secondary emitted radiation requires highly accurate assignment of elemental concentrations (particularly carbon and oxygen) in the human body. PURPOSE A method for quantitatively predicting carbon and oxygen concentrations in human soft tissues is proposed. This method relies on an empirical one-to-one correspondence between the mass fraction and water content (WC), which is a measurable tissue quantity based on magnetic resonance (MR) imaging (referred to as "MRWC-based method"). METHODS A numerical analysis of the MRWC-based method was performed for 47 standard human soft tissues tabulated in the literature as objects of interest with unknown mass fractions of the four main elements-C, O, H, and N. Thereafter, the method was evaluated in terms of the mass-fraction quantification accuracy by comparing it with the gold-standard CT-based method developed by Schneider et al. The MRWC-based method was also applied to the MR imaging data of a virtual head phantom obtained from a three-dimensional MRI-simulated brain database. RESULTS The predicted mass fractions in a range of human soft tissues were in better agreement with the reference values than those predicted by the CT-based method. The mean absolute errors of the predicted mass% values for the overall standard soft tissues could be reduced from 4.8 percentage points (pp) (CT-based) to 0.5 pp (MRWC-based) for carbon and from 5.2 pp (CT-based) to 0.4 pp (MRWC-based) for oxygen. The application to the simulated MRI data confirmed that the method can resolve the boundaries between the white matter and gray matter in the brain, which could not be achieved by the CT-based method. Thus, the MRWC-based method exhibits superior performance in the prediction of carbon and oxygen concentrations in soft tissues.
CONCLUSIONS This study is limited to a proof-of-concept scope but demonstrates the feasibility of the MRWC-based method for the generation of elemental images of human soft tissues from MRI-derived water-content images.
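The core of such a method is a calibrated mapping from measured water content to an elemental mass fraction. The sketch below shows the mechanics with piecewise-linear interpolation; the calibration pairs are entirely hypothetical placeholders, since the real curve must be fitted to the tabulated standard-tissue data.

```python
def interp(x, table):
    """Piecewise-linear interpolation in a sorted (x, y) table; values outside
    the table clamp to the first or last entry."""
    if x <= table[0][0]:
        return table[0][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

# HYPOTHETICAL calibration: water content (%) -> carbon mass fraction (%).
# These numbers are illustrative only, not the paper's fitted correspondence.
wc_to_carbon = [(60.0, 35.0), (70.0, 25.0), (80.0, 15.0), (90.0, 5.0)]
carbon = interp(75.0, wc_to_carbon)
```

Applied voxelwise to an MRI-derived water-content image, this yields the elemental images the conclusion refers to.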
Affiliation(s)
- Masatoshi Saito
- Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Niigata University, Niigata, Japan
12. Zou J, Liu J, Choi KS, Qin J. Intra-Patient Lung CT Registration through Large Deformation Decomposition and Attention-Guided Refinement. Bioengineering (Basel) 2023; 10:562. [PMID: 37237632] [PMCID: PMC10215368] [DOI: 10.3390/bioengineering10050562]
Abstract
Deformable lung CT image registration is an essential task for computer-assisted interventions and other clinical applications, especially when organ motion is involved. While deep-learning-based image registration methods have recently achieved promising results by inferring deformation fields in an end-to-end manner, large and irregular deformations caused by organ motion still pose a significant challenge. In this paper, we present a method for registering lung CT images that is tailored to the specific patient being imaged. To address the challenge of large deformations between the source and target images, we break the deformation down into multiple continuous intermediate fields. These fields are then combined to create a spatio-temporal motion field. We further refine this field using a self-attention layer that aggregates information along motion trajectories. By leveraging temporal information from a respiratory cycle, our proposed methods can generate intermediate images that facilitate image-guided tumor tracking. We evaluated our approach extensively on a public dataset, and our numerical and visual results demonstrate the effectiveness of the proposed method.
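The decomposition idea above rests on composing intermediate deformation fields into one large one. A minimal 1D sketch of that composition (nearest-sample lookup rather than the interpolated, learned fields of the paper; all values invented):

```python
def compose(u1, u2):
    """Compose two 1D displacement fields: a point first moved by u1 is then
    moved by u2 evaluated at the intermediate position (nearest sample)."""
    n = len(u1)
    out = []
    for i in range(n):
        j = min(max(int(round(i + u1[i])), 0), n - 1)  # intermediate index
        out.append(u1[i] + u2[j])
    return out

# Two small intermediate steps compose into one larger deformation, the way
# a large respiratory motion can be built from continuous intermediate fields.
u_a = [1.0] * 6
u_b = [1.0] * 6
u_total = compose(u_a, u_b)
```

Chaining several such compositions across a respiratory cycle is what produces the spatio-temporal motion field, and the intermediate positions are what enable tumor tracking between the two endpoints.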
Affiliation(s)
- Jing Zou
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jia Liu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Kup-Sze Choi
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China