1. Peng W, Bosschieter T, Ouyang J, Paul R, Sullivan EV, Pfefferbaum A, Adeli E, Zhao Q, Pohl KM. Metadata-conditioned generative models to synthesize anatomically-plausible 3D brain MRIs. Med Image Anal 2024; 98:103325. PMID: 39208560. DOI: 10.1016/j.media.2024.103325.
Abstract
Recent advances in generative models have paved the way for enhanced generation of natural and medical images, including synthetic brain MRIs. However, most current AI research focuses on optimizing synthetic MRIs with respect to visual quality (such as signal-to-noise ratio) while lacking insights into their relevance to neuroscience. To generate high-quality T1-weighted MRIs relevant for neuroscience discovery, we present a two-stage Diffusion Probabilistic Model (called BrainSynth) to synthesize high-resolution MRIs conditioned on metadata (such as age and sex). We then propose a novel procedure to assess the quality of BrainSynth according to how well its synthetic MRIs capture the macrostructural properties of brain regions and how accurately they encode the effects of age and sex. Results indicate that more than half of the brain regions in our synthetic MRIs are anatomically plausible, i.e., the effect size between real and synthetic MRIs is small relative to biological factors such as age and sex. Moreover, anatomical plausibility varies across cortical regions according to their geometric complexity. As is, the MRIs generated by BrainSynth significantly improve the training of a predictive model to identify accelerated aging effects in an independent study. These results indicate that our model accurately captures the brain's anatomical information and thus could enrich the data of underrepresented samples in a study. The code of BrainSynth will be released as part of the MONAI project at https://github.com/Project-MONAI/GenerativeModels.
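The anatomical-plausibility criterion above hinges on effect sizes between real and synthetic regional measurements. As a generic illustration (not the authors' released code), Cohen's d for a regional measure could be computed as:

```python
import numpy as np

def cohens_d(a, b):
    """Effect size between two samples, using the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical regional volumes (mL) from real vs. synthetic cohorts.
rng = np.random.default_rng(0)
real = rng.normal(4.0, 0.5, 100)
synthetic = rng.normal(4.05, 0.5, 100)
d = cohens_d(real, synthetic)
```

A |d| well below the effect sizes of age and sex on the same measure would support the plausibility criterion described in the abstract.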
Affiliation(s)
- Wei Peng: Department of Psychiatry & Behavioral Sciences, Stanford University, Stanford, CA 94305, United States of America
- Tomas Bosschieter: Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA 94305, United States of America
- Jiahong Ouyang: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, United States of America
- Robert Paul: Missouri Institute of Mental Health, University of Missouri, St. Louis, MO 63121, United States of America
- Edith V Sullivan: Department of Psychiatry & Behavioral Sciences, Stanford University, Stanford, CA 94305, United States of America
- Adolf Pfefferbaum: Center for Health Sciences, SRI International, Menlo Park, CA 94025, United States of America
- Ehsan Adeli: Department of Psychiatry & Behavioral Sciences, Stanford University, Stanford, CA 94305, United States of America; Department of Computer Science, Stanford University, Stanford, CA 94305, United States of America
- Qingyu Zhao: Department of Radiology, Weill Cornell Medicine, New York, NY 10065, United States of America
- Kilian M Pohl: Department of Psychiatry & Behavioral Sciences, Stanford University, Stanford, CA 94305, United States of America; Department of Electrical Engineering, Stanford University, Stanford, CA 94305, United States of America
2. Bosma LS, Hussein M, Jameson MG, Asghar S, Brock KK, McClelland JR, Poeta S, Yuen J, Zachiu C, Yeo AU. Tools and recommendations for commissioning and quality assurance of deformable image registration in radiotherapy. Phys Imaging Radiat Oncol 2024; 32:100647. PMID: 39328928. PMCID: PMC11424976. DOI: 10.1016/j.phro.2024.100647.
Abstract
Multiple tools are available for commissioning and quality assurance of deformable image registration (DIR), each with its own advantages and disadvantages in the context of radiotherapy. The selection of appropriate tools should depend on the DIR application, with its corresponding available input, desired output, and time requirement. Discussions were hosted by the ESTRO Physics Workshop 2021 on Commissioning and Quality Assurance for DIR in Radiotherapy. A consensus was reached on the requirements for commissioning and quality assurance for different applications, and on which combination of tools meets them. For commissioning, we recommend the target registration error of manually annotated anatomical landmarks or the distance-to-agreement of manually delineated contours to evaluate alignment. These should be supplemented by the distance discordance metric and/or biomechanical criteria to evaluate consistency and plausibility. Digital phantoms can be useful to evaluate DIR for dose accumulation but are currently only available for a limited range of anatomies, image modalities, and types of deformations. For quality assurance of DIR for contour propagation, we recommend at least a visual inspection of the registered image and contour. For quality assurance of DIR for warping quantitative information such as dose, Hounsfield units, or positron emission tomography data, we recommend visual inspection of the registered image together with image similarity to evaluate alignment, supplemented by an inspection of the Jacobian determinant or bending energy to evaluate plausibility, and by the dose (gradient) to evaluate relevance. We acknowledge that some of these metrics are still missing in currently available commercial solutions.
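Two of the metrics recommended above, the target registration error over landmarks and the Jacobian determinant of the deformation field (values ≤ 0 indicate folding, i.e., an implausible transform), can be sketched in a few lines. This is a simplified illustration; the array layout and spacing conventions are assumptions, not from the paper:

```python
import numpy as np

def target_registration_error(fixed_pts, warped_pts):
    """Mean Euclidean distance (e.g., in mm) between corresponding landmarks."""
    return np.linalg.norm(np.asarray(fixed_pts) - np.asarray(warped_pts), axis=1).mean()

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a displacement vector field
    dvf with shape (3, z, y, x). Determinants <= 0 flag folding."""
    # Row i of the Jacobian: derivatives of displacement component i along z, y, x.
    grads = [np.gradient(dvf[i], spacing[0], spacing[1], spacing[2]) for i in range(3)]
    J = np.stack([np.stack(g, axis=-1) for g in grads], axis=-2)  # (..., 3, 3)
    J = J + np.eye(3)  # identity part of the transform x -> x + u(x)
    return np.linalg.det(J)

# Identity transform: TRE of matching landmarks is 0, determinant is 1 everywhere.
dvf = np.zeros((3, 8, 8, 8))
jd = jacobian_determinant(dvf)
```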
Affiliation(s)
- Lando S Bosma: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Mohammad Hussein: Metrology for Medical Physics Centre, National Physical Laboratory, Teddington, UK
- Michael G Jameson: GenesisCare, Sydney, Australia; School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia
- Kristy K Brock: Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jamie R McClelland: Centre for Medical Image Computing and the Wellcome/EPSRC Centre for Interventional and Surgical Sciences, Dept. Medical Physics and Biomedical Engineering, University College London, London, UK
- Sara Poeta: Medical Physics Department, Institut Jules Bordet - Université Libre de Bruxelles, Belgium
- Johnson Yuen: School of Clinical Medicine, Medicine and Health, University of New South Wales, Sydney, Australia; St. George Hospital Cancer Care Centre, Sydney NSW 2217, Australia; Ingham Institute for Applied Medical Research, Sydney, Australia
- Cornel Zachiu: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, the Netherlands
- Adam U Yeo: Peter MacCallum Cancer Centre, Melbourne, VIC, Australia; The Sir Peter MacCallum Department of Oncology, the University of Melbourne, Melbourne, VIC, Australia
3. Shen C, Li W, Chen H, Wang X, Zhu F, Li Y, Wang X, Jin B. Complementary information mutual learning for multimodality medical image segmentation. Neural Netw 2024; 180:106670. PMID: 39299035. DOI: 10.1016/j.neunet.2024.106670.
Abstract
Radiologists must utilize medical images of multiple modalities for tumor segmentation and diagnosis due to the limitations of medical imaging technology and the diversity of tumor signals. This has led to the development of multimodal learning in medical image segmentation. However, the redundancy among modalities creates challenges for existing subtraction-based joint learning methods, such as misjudging the importance of modalities, ignoring modality-specific information, and increasing cognitive load. These issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the complementary information mutual learning (CIML) framework, which mathematically models and addresses the negative impact of inter-modal redundant information. CIML adopts the idea of addition and removes inter-modal redundant information through inductive bias-driven task decomposition and message passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. Furthermore, CIML introduces a scheme in which each modality extracts information from the other modalities additively through message passing. To achieve non-redundancy of the extracted information, redundancy filtering is reformulated as complementary information learning, inspired by the variational information bottleneck. The complementary information learning procedure can be efficiently solved by variational inference and cross-modal spatial attention. Numerical results on the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities, outperforming state-of-the-art methods in validation accuracy and segmentation quality. Notably, message passing-based redundancy filtering allows neural network visualization techniques to expose the knowledge relationships among modalities, which aids interpretability.
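The variational information bottleneck that inspires CIML's complementary information learning penalizes the information a latent code retains, typically via a KL divergence between the encoder's Gaussian posterior and a standard normal prior. A minimal numeric sketch of that penalty (a textbook illustration, not the CIML implementation):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL(N(mu, diag(exp(log_var))) || N(0, I)) per sample -- the penalty a
    variational information bottleneck places on an encoded representation."""
    mu, log_var = np.asarray(mu, float), np.asarray(log_var, float)
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0, axis=-1)

# A code that matches the prior carries zero penalty (no extra information kept).
kl = kl_to_standard_normal(np.zeros(4), np.zeros(4))
```

In training, this term is weighted against the segmentation loss, so the network keeps only information that the other modalities cannot already supply.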
Affiliation(s)
- Chuyun Shen: School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
- Wenhao Li: School of Data Science, The Chinese University of Hong Kong, Shenzhen; Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen 518172, China
- Haoqing Chen: School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
- Xiaoling Wang: School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
- Fengping Zhu: Huashan Hospital, Fudan University, Shanghai 200040, China
- Yuxin Li: Huashan Hospital, Fudan University, Shanghai 200040, China
- Xiangfeng Wang: School of Computer Science and Technology, East China Normal University, Shanghai 200062, China
- Bo Jin: School of Software Engineering, Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 200092, China
4. Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024; 180:108958. PMID: 39094325. DOI: 10.1016/j.compbiomed.2024.108958.
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow requires intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning and hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E staining images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing comprehensive and detailed tissue composition information comparable to real H&E staining. In comparison with various generator structures, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120 dB, as well as the shortest training and inference times. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, was developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm²/s. This innovative approach paves the way for a novel, expedited route to histological staining.
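PSNR, one of the two image-quality metrics reported above, is straightforward to compute from the mean squared error between the real-stain reference and the generated image. A minimal sketch (the 8-bit `data_range` default is an assumption, not taken from the paper):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    generated image; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(data_range**2 / mse)
```

SSIM, the companion metric, additionally compares local luminance, contrast, and structure; library implementations (e.g., `skimage.metrics.structural_similarity`) are typically used rather than a hand-rolled version.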
Affiliation(s)
- Ruohua Zhu: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China
- Haiyang He: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China
- Yuzhe Chen: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China
- Ming Yi: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China
- Shengdong Ran: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China
- Chengde Wang: Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yi Wang: National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou 325027, China; Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou 325001, China
5. Biun J, Dudhia R, Arora H. The influence of metal artifact reduction on the trueness of registration of a cone-beam computed tomography scan with an intraoral scan in the presence of severe restoration artifact. J Prosthodont 2024; 33:700-705. PMID: 37691179. DOI: 10.1111/jopr.13767.
Abstract
PURPOSE When planning guided implant surgery, highly radiopaque materials such as metals or zirconia produce streaking artifacts ('metal artifact') on cone-beam computed tomography scans, which can impair registration of the intraoral scan. This study aimed to determine the effect of metal artifact reduction on the trueness of registration in the presence of multiple full-coverage zirconia crowns. MATERIALS AND METHODS A 3D-printed maxillary study model was restored with 12 full-coverage zirconia crowns and scanned with an intraoral scanner. Cone-beam computed tomography scans of the study model were acquired, with and without activation of the metal artifact reduction algorithm. Registration of the optical scans was performed using initial point-based registration with surface-based refinement, and the deviation was measured at four pre-defined dental landmarks. Welch's t-test was used to compare the registration error of the metal artifact reduction group with that of the control group. RESULTS The average registration error was 0.519 mm (95% CI 0.507 to 0.531) with metal artifact reduction activated, compared to 0.478 mm (95% CI 0.460 to 0.496) without metal artifact reduction. Therefore, activation of the metal artifact reduction algorithm was associated with a 0.041 mm (95% CI 0.020 to 0.061, p < 0.001) increase in average registration error. CONCLUSIONS The use of the metal artifact reduction algorithm slightly reduced trueness in this in vitro study. Clinicians are advised not to rely on a metal artifact reduction (MAR) algorithm for registration of a cone-beam computed tomography scan with an intraoral scan when planning guided implant surgery in the presence of restoration artifacts.
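Welch's t-test, used above to compare the two groups, does not assume equal variances. Its statistic and Welch-Satterthwaite degrees of freedom can be sketched as follows (a generic illustration; in practice `scipy.stats.ttest_ind(a, b, equal_var=False)` returns the p-value directly):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples
    with possibly unequal variances."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    # Welch-Satterthwaite approximation of the degrees of freedom.
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df
```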
Affiliation(s)
- John Biun: School of Dentistry, University of Queensland, Herston, Australia
- Raahib Dudhia: School of Dentistry, University of Queensland, Herston, Australia
- Himanshu Arora: School of Dentistry, University of Queensland, Herston, Australia
6. Yasuda N, Iwasawa T, Baba T, Misumi T, Cheng S, Kato S, Utsunomiya D, Ogura T. Evaluation of Progressive Architectural Distortion in Idiopathic Pulmonary Fibrosis Using Deformable Registration of Sequential CT Images. Diagnostics (Basel) 2024; 14:1650. PMID: 39125526. PMCID: PMC11311668. DOI: 10.3390/diagnostics14151650.
Abstract
BACKGROUND Monitoring the progression of idiopathic pulmonary fibrosis (IPF) using CT primarily focuses on assessing the extent of fibrotic lesions, without considering the distortion of lung architecture. OBJECTIVES To evaluate three-dimensional average displacement (3D-AD) quantification of lung structures using deformable registration of serial CT images as a parameter of local lung architectural distortion and a predictor of IPF prognosis. MATERIALS AND METHODS Patients with IPF evaluated between January 2016 and March 2017 who had undergone CT at least twice were retrospectively included (n = 114). The 3D-AD was obtained by deformable registration of baseline and follow-up CT images. Computer-aided quantification software measured the fibrotic lesion volume. Cox regression analysis evaluated these variables to predict mortality. RESULTS The 3D-AD and the fibrotic lesion volume change were significantly larger in the subpleural lung region (5.2 mm (interquartile range (IQR): 3.6-7.1 mm) and 0.70% (IQR: 0.22-1.60%), respectively) than in the inner region (4.7 mm (IQR: 3.0-6.4 mm) and 0.21% (IQR: 0.004-1.12%), respectively). Multivariable Cox regression analysis revealed that subpleural region 3D-AD and fibrotic lesion volume change were independent predictors of mortality (hazard ratio: 1.12 and 1.23; 95% confidence interval: 1.02-1.22 and 1.10-1.38; p = 0.01 and p < 0.001, respectively). CONCLUSIONS The 3D-AD quantification derived from deformable registration of serial CT images serves as a marker of lung architectural distortion and a prognostic predictor in patients with IPF.
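The 3D-AD metric amounts to the mean displacement magnitude of a deformable-registration vector field over a lung region. A minimal sketch (the array layout and mask convention are assumptions, not from the paper):

```python
import numpy as np

def average_displacement(dvf, mask=None):
    """Mean displacement magnitude (e.g., in mm) of a displacement vector
    field dvf with shape (3, z, y, x), optionally restricted to a region mask."""
    mag = np.sqrt(np.sum(np.asarray(dvf, float) ** 2, axis=0))
    return mag[mask].mean() if mask is not None else mag.mean()

# A uniform (3, 4, 0) mm shift has magnitude 5 mm everywhere.
dvf = np.zeros((3, 4, 4, 4))
dvf[0], dvf[1] = 3.0, 4.0
ad = average_displacement(dvf)
```

Passing a subpleural vs. an inner-lung mask would reproduce the regional comparison described in the abstract.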
Affiliation(s)
- Naofumi Yasuda: Department of Radiology, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan; Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Tae Iwasawa: Department of Radiology, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan; Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Tomohisa Baba: Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan
- Toshihiro Misumi: Department of Biostatistics, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Shihyao Cheng: Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Shingo Kato: Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Daisuke Utsunomiya: Department of Diagnostic Radiology, Yokohama City University Graduate School of Medicine, 3-9 Fukuura, Kanazawa-ku, Yokohama 236-0004, Kanagawa, Japan
- Takashi Ogura: Department of Respiratory Medicine, Kanagawa Cardiovascular and Respiratory Center, 6-16-1 Tomioka-Higashi, Kanazawa-ku, Yokohama 236-0051, Kanagawa, Japan
7. Shih SF, Wu HH. Free-breathing MRI techniques for fat and R2* quantification in the liver. MAGMA 2024. PMID: 39039272. DOI: 10.1007/s10334-024-01187-2.
Abstract
OBJECTIVE To review the recent advancements in free-breathing MRI techniques for proton-density fat fraction (PDFF) and R2* quantification in the liver, and discuss the current challenges and future opportunities. MATERIALS AND METHODS This work focused on recent developments of different MRI pulse sequences, motion management strategies, and reconstruction approaches that enable free-breathing liver PDFF and R2* quantification. RESULTS Different free-breathing liver PDFF and R2* quantification techniques have been evaluated in various cohorts, including healthy volunteers and patients with liver diseases, both in adults and children. Initial results demonstrate promising performance with respect to reference measurements. These techniques have a high potential impact on providing a solution to the clinical need of accurate liver fat and iron quantification in populations with limited breath-holding capacity. DISCUSSION As these free-breathing techniques progress toward clinical translation, studies of the linearity, bias, and repeatability of free-breathing PDFF and R2* quantification in a larger cohort are important. Scan acceleration and improved motion management also hold potential for further enhancement.
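The two quantities under review have simple underlying signal models: PDFF is the fat fraction of the total proton-density signal, and R2* is the decay rate of a mono-exponential fit over echo times. A minimal sketch of both (an illustration only; real pipelines fit fat and water jointly with multi-peak fat spectral models and field-map corrections):

```python
import numpy as np

def fit_r2star(echo_times_ms, magnitudes):
    """Mono-exponential fit S(TE) = S0 * exp(-R2* * TE) via log-linear
    least squares; returns R2* in 1/s for echo times given in ms."""
    te_s = np.asarray(echo_times_ms, float) / 1000.0  # ms -> s
    slope, _ = np.polyfit(te_s, np.log(np.asarray(magnitudes, float)), 1)
    return -slope

def pdff(fat_signal, water_signal):
    """Proton-density fat fraction in percent."""
    return 100.0 * fat_signal / (fat_signal + water_signal)

# Noise-free simulation with R2* = 50 1/s recovers the input rate.
te = np.array([1.0, 2.0, 3.0, 4.0])          # ms
sig = 100.0 * np.exp(-50.0 * te / 1000.0)
r2s = fit_r2star(te, sig)
```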
Affiliation(s)
- Shu-Fu Shih: Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, USA; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Holden H Wu: Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, USA; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
8. Lin YH, Chen LW, Wang HJ, Hsieh MS, Lu CW, Chuang JH, Chang YC, Chen JS, Chen CM, Lin MW. Quantification of Resection Margin following Sublobar Resection in Lung Cancer Patients through Pre- and Post-Operative CT Image Comparison: Utilizing a CT-Based 3D Reconstruction Algorithm. Cancers (Basel) 2024; 16:2181. PMID: 38927887. PMCID: PMC11201844. DOI: 10.3390/cancers16122181.
Abstract
Sublobar resection has emerged as a standard treatment option for early-stage peripheral non-small cell lung cancer. Achieving an adequate resection margin is crucial to prevent local tumor recurrence. However, gross measurement of the resection margin may lack accuracy due to the elasticity of lung tissue and interobserver variability. Therefore, this study aimed to develop an objective measurement method, a CT-based 3D reconstruction algorithm, to quantify the resection margin following sublobar resection in lung cancer patients through pre- and post-operative CT image comparison. An automated subvascular matching technique was first developed to ensure accuracy and reproducibility of the matching process. After matched feature points are extracted, the displacement field within the image is calculated; this is particularly important for mapping discontinuous deformation fields around the surgical resection area. A thin-plate spline transformation is used for medical image registration. Upon completing image registration, the distance at the resection margin was measured. After developing the CT-based 3D reconstruction algorithm, we included 12 cases for resection margin distance measurement, comprising 4 right middle lobectomies, 6 segmentectomies, and 2 wedge resections. The target registration error for all cases was less than 2.5 mm. Our method demonstrated the feasibility of measuring the resection margin following sublobar resection through pre- and post-operative CT image comparison. Further validation with a multicenter, large cohort and analysis of correlation with clinical outcomes are necessary in future studies.
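A thin-plate spline interpolates scattered point correspondences exactly while minimizing bending energy, which is why it suits sparse feature-point registration like the matching described above. A minimal 2D sketch (a generic textbook formulation, not the authors' implementation) fits one scalar component of the displacement over matched points:

```python
import numpy as np

def tps_fit(src, values):
    """Fit a 2D thin-plate spline f with f(src[i]) = values[i], using the
    radial kernel U(r) = r^2 log r plus an affine polynomial part."""
    src = np.asarray(src, float)
    vals = np.asarray(values, float)
    n = len(src)
    r2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(r2 > 0.0, 0.5 * r2 * np.log(np.where(r2 > 0.0, r2, 1.0)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.concatenate([vals, np.zeros(3)])  # side conditions on the weights
    return src, np.linalg.solve(A, rhs)

def tps_eval(model, pts):
    """Evaluate the fitted spline at query points."""
    src, coef = model
    pts = np.asarray(pts, float)
    n = len(src)
    r2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = np.where(r2 > 0.0, 0.5 * r2 * np.log(np.where(r2 > 0.0, r2, 1.0)), 0.0)
    return K @ coef[:n] + coef[n] + pts @ coef[n + 1:]
```

Fitting one spline per displacement component (x, y, and, in 3D, z) yields the full warp; SciPy's `RBFInterpolator` with the `thin_plate_spline` kernel offers a ready-made equivalent.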
Affiliation(s)
- Yu-Hsuan Lin: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Li-Wei Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Hao-Jen Wang: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Min-Shu Hsieh: Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chao-Wen Lu: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jen-Hao Chuang: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Yeun-Chung Chang: Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jin-Shing Chen: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chung-Ming Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 106, Taiwan
- Mong-Wei Lin: Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
9. Xu DD, Vong AF, Utama MIB, Lebedev D, Ananth R, Hersam MC, Weiss EA, Mirkin CA. Sub-Diffraction Correlation of Quantum Emitters and Local Strain Fields in Strain-Engineered WSe2 Monolayers. Adv Mater 2024; 36:e2314242. PMID: 38346232. DOI: 10.1002/adma.202314242.
Abstract
Strain-engineering in atomically thin metal dichalcogenides is a useful method for realizing single-photon emitters (SPEs) for quantum technologies. Correlating SPE position with local strain topography is challenging due to localization inaccuracies from the diffraction limit. Currently, SPEs are assumed to be positioned at the highest strained location and are typically identified by randomly screening narrow-linewidth emitters, of which only a few are spectrally pure. In this work, hyperspectral quantum emitter localization microscopy is used to locate 33 SPEs in nanoparticle-strained WSe2 monolayers with sub-diffraction-limit resolution (≈30 nm) and correlate their positions with the underlying strain field via image registration. In this system, spectrally pure emitters are not concentrated at the highest strain location due to spectral contamination; instead, isolable SPEs are distributed away from points of peak strain with an average displacement of 240 nm. These observations point toward a need for a change in the design rules for strain-engineered SPEs and constitute a key step toward realizing next-generation quantum optical architectures.
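Sub-diffraction localization of an emitter amounts to estimating the sub-pixel center of its diffraction-limited spot. A common simple estimator is the intensity-weighted centroid, sketched below as a generic illustration (the paper's hyperspectral localization microscopy is more elaborate; Gaussian fitting per spectral channel is typical in practice):

```python
import numpy as np

def localize_emitter(image):
    """Sub-pixel emitter position (row, col) as the intensity-weighted
    centroid of a background-subtracted spot image."""
    img = np.asarray(image, float)
    img = np.clip(img - np.median(img), 0.0, None)  # crude background removal
    yy, xx = np.indices(img.shape)
    total = img.sum()
    return np.array([(yy * img).sum() / total, (xx * img).sum() / total])

# Synthetic diffraction-limited spot centered at (12.3, 8.7), sigma = 2 px.
yy, xx = np.indices((24, 24))
spot = np.exp(-((yy - 12.3) ** 2 + (xx - 8.7) ** 2) / (2 * 2.0**2))
pos = localize_emitter(spot)
```

With adequate photon counts, such estimators recover positions far below the diffraction limit, which is the principle behind the ~30 nm localization precision quoted above.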
Affiliation(s)
- David D Xu: Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Albert F Vong: International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA
- M Iqbal Bakti Utama: Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA
- Dmitry Lebedev: Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA
- Riddhi Ananth: Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Mark C Hersam: Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA; Department of Electrical and Computer Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA
- Emily A Weiss: Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA
- Chad A Mirkin: Department of Chemistry, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; International Institute for Nanotechnology, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208, USA; Department of Materials Science and Engineering, Northwestern University, 2220 Campus Drive, Evanston, IL 60208, USA
10. Schachar RA, Schachar IH, Kumar S, Feldman EI, Pierscionek BK, Cosman PC. Model of zonular forces on the lens capsule during accommodation. Sci Rep 2024; 14:5896. PMID: 38467700. PMCID: PMC10928188. DOI: 10.1038/s41598-024-56563-8.
Abstract
How the human eye focuses for near, i.e., accommodates, is still being evaluated after more than 165 years. The mechanism of accommodation is essential for understanding the etiology and potential treatments of myopia, glaucoma, and presbyopia. Presbyopia affects 100% of the population in the fifth decade of life. The lens is encased in a semi-elastic capsule with attached ligaments, called zonules, that transmit ciliary muscle forces to alter lens shape. The zonules are attached at the lens capsule equator. The fundamental issue is whether, during accommodation, all the zonules relax, causing the central and peripheral lens surfaces to steepen, or the equatorial zonules come under increased tension while the anterior and posterior zonules relax, causing the lens surface to flatten peripherally and steepen centrally while maintaining lens stability. Here we show with a balloon capsule zonular force model that increased equatorial zonular tension with relaxation of the anterior and posterior zonules replicates the topographical changes observed during in vivo rhesus and human accommodation of the lens capsule without lens stroma. The zonular forces required to simulate lens capsule configuration during in vivo accommodation are inconsistent with the general belief that all the zonules relax during accommodation.
Affiliation(s)
- Ronald A Schachar
- Department of Physics, University of Texas at Arlington, Arlington, TX, USA
- Ira H Schachar
- North Bay Vitreoretinal Consultants, Santa Rosa, CA, USA
- Shubham Kumar
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Barbara K Pierscionek
- Faculty of Health, Medicine and Social Care, Medical Technology Research Centre, Anglia Ruskin University, Chelmsford, UK
- Pamela C Cosman
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
11.
Ji W, Yang F. Affine medical image registration with fusion feature mapping in local and global. Phys Med Biol 2024; 69:055029. [PMID: 38324893] [DOI: 10.1088/1361-6560/ad2717]
Abstract
Objective. Medical image affine registration is a crucial prerequisite for deformable registration. On the one hand, traditional affine registration methods based on step-by-step optimization are very time-consuming and therefore incompatible with most real-time medical applications. On the other hand, convolutional neural networks are limited in modeling long-range spatial relationships among features because of inductive biases such as weight sharing and locality, which is a disadvantage for affine registration tasks. Real-time, high-accuracy affine medical image registration algorithms are therefore needed. Approach. In this paper, we propose a deep learning-based coarse-to-fine global and local feature fusion architecture for fast affine registration, trained end-to-end with an unsupervised approach. We use multiscale convolutional kernels as our elemental convolutional blocks to enhance feature extraction. Then, to learn the long-range spatial relationships of the features, we propose a new affine registration framework with weighted global positional attention that fuses global and local feature mappings. A fusion regressor is designed to generate the affine parameters. Main results. The additive fusion method adapts to the global and local mappings, improving affine registration accuracy without center-of-mass initialization. In addition, the max pooling layer and the multiscale convolutional kernel coding module strengthen the model's affine registration ability. Significance. We validate the effectiveness of our method on the OASIS dataset of 414 3D brain MRI scans. Comprehensive results demonstrate that our method achieves state-of-the-art affine registration accuracy with very efficient runtimes.
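As context for the "affine parameters" the fusion regressor outputs: a 3D affine is usually parameterized as a 3×3 linear part plus a translation (12 values) applied to the moving image's sampling grid. A minimal sketch of that resampling-grid step, with function names and the identity example purely illustrative (not from the paper):

```python
import numpy as np

def apply_affine_to_grid(theta, shape):
    """Map voxel coordinates of a volume with the given shape through a
    12-parameter affine x' = A @ x + t, where theta = [A | t] is (3, 4)."""
    A = theta[:, :3]            # 3x3 linear part
    t = theta[:, 3]             # translation vector
    # Build an (N, 3) array of voxel coordinates in ij order.
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    ).reshape(-1, 3).astype(float)
    return grid @ A.T + t       # transformed sampling locations

# The identity affine leaves the sampling grid unchanged.
theta_id = np.hstack([np.eye(3), np.zeros((3, 1))])   # shape (3, 4)
warped = apply_affine_to_grid(theta_id, (2, 2, 2))
```

A registration network would regress `theta` and then interpolate the moving image at the `warped` locations.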
Affiliation(s)
- Wei Ji
- School of Computer and Electronic Information, Guangxi University, Nanning, Guangxi, 530004, People's Republic of China
- Feng Yang
- School of Computer and Electronic Information, Guangxi University, Nanning, Guangxi, 530004, People's Republic of China
- Guangxi Key Laboratory of Multimedia Communications Network Technology, Guangxi University, Nanning, Guangxi, 530004, People's Republic of China
- Key Laboratory of Parallel, Distributed and Intelligent Computing of Guangxi Universities and Colleges, Nanning, Guangxi, 530004, People's Republic of China
12.
Wang J, Bermudez D, Chen W, Durgavarjhula D, Randell C, Uyanik M, McMillan A. Motion-correction strategies for enhancing whole-body PET imaging. Front Nucl Med (Lausanne) 2024; 4:1257880. [PMID: 39118964] [PMCID: PMC11308502] [DOI: 10.3389/fnume.2024.1257880]
Abstract
Positron emission tomography (PET) is a powerful medical imaging technique widely used for the detection and monitoring of disease. However, PET imaging can be adversely affected by patient motion, leading to degraded image quality and diagnostic capability. Hence, motion gating schemes have been developed to monitor various motion sources, including head, respiratory, and cardiac motion. These techniques commonly take the form of hardware-driven or data-driven gating, distinguished by whether motion measurements come from external hardware or are derived from the acquired data itself. Their implementation helps correct motion artifacts and improves tracer uptake measurements. Given the great impact these methods have on the diagnostic and quantitative quality of PET images, much research has been performed in this area, and this paper outlines the various approaches that have been developed for whole-body PET imaging.
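As a concrete illustration of the data-driven side of gating: once a respiratory surrogate trace has been derived from the acquisition, frames or events are commonly binned by the trace's amplitude. A hedged sketch with a synthetic trace (the quantile-based binning and all names here are illustrative, not taken from this review):

```python
import numpy as np

def amplitude_gates(trace, n_gates=4):
    """Assign each time frame to an amplitude-based respiratory gate,
    using quantile edges so gates are roughly equally populated."""
    edges = np.quantile(trace, np.linspace(0, 1, n_gates + 1))
    # digitize against the interior edges -> gate index 0..n_gates-1
    return np.clip(np.digitize(trace, edges[1:-1]), 0, n_gates - 1)

# Synthetic respiratory surrogate: a 0.25 Hz sinusoid sampled at 10 Hz.
t = np.arange(0, 60, 0.1)                 # 600 frames over 60 s
trace = np.sin(2 * np.pi * 0.25 * t)
gates = amplitude_gates(trace, n_gates=4)
```

Events from each gate would then be reconstructed separately (or registered to a reference gate) to reduce respiratory blur.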
Affiliation(s)
- James Wang
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Dalton Bermudez
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Weijie Chen
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Electrical and Computer Engineering, University of Wisconsin Madison, Madison, WI, United States
- Divya Durgavarjhula
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Computer Science, University of Wisconsin Madison, Madison, WI, United States
- Caitlin Randell
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Biomedical Engineering, University of Wisconsin Madison, Madison, WI, United States
- Meltem Uyanik
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Alan McMillan
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Department of Electrical and Computer Engineering, University of Wisconsin Madison, Madison, WI, United States
- Department of Biomedical Engineering, University of Wisconsin Madison, Madison, WI, United States
- Data Science Institute, University of Wisconsin Madison, Madison, WI, United States
13.
Wang TW, Chao HS, Chiu HY, Lu CF, Liao CY, Lee Y, Chen JR, Shiao TH, Chen YM, Wu YT. Radiomics of metastatic brain tumor as a predictive image biomarker of progression-free survival in patients with non-small-cell lung cancer with brain metastasis receiving tyrosine kinase inhibitors. Transl Oncol 2024; 39:101826. [PMID: 37984256] [PMCID: PMC10689936] [DOI: 10.1016/j.tranon.2023.101826]
Abstract
BACKGROUND AND OBJECTIVE Epidermal growth factor receptor (EGFR)-targeted tyrosine kinase inhibitors (TKIs) are the first-line therapy for EGFR-mutant non-small-cell lung cancer (NSCLC). Early prediction of treatment failure in patients with brain metastases treated with EGFR-TKIs may help in making decisions for systemic drug therapy or local brain tumor control. This study examined the predictive power of the radiomics of both brain metastasis tumors and primary lung tumors. We propose a deep learning-based CoxCC model built on quantitative brain magnetic resonance imaging (MRI), a prognostic index, and clinical data; the model can be used to predict progression-free survival (PFS) after EGFR-TKI therapy in advanced EGFR-mutant NSCLC. METHODS This retrospective single-center study included 271 patients receiving first-line EGFR-TKI targeted therapy in 2018-2019. Among them, 72 patients had brain metastases before receiving first-line EGFR-TKI treatment. Three radiomic features were extracted from pretreatment brain MRI images. A CoxCC model for the progression risk stratification of EGFR-TKI treatment was proposed on the basis of MRI radiomics, clinical features, and a prognostic index. We performed time-dependent PFS predictions to evaluate the performance of the CoxCC model. RESULTS The CoxCC model based on a prognostic index, clinical features, and radiomic features of brain metastasis outperformed clinical features combined with previously proposed prognostic indexes for brain metastasis, including recursive partitioning analysis, diagnostic-specific graded prognostic assessment, graded prognostic assessment for lung cancer using molecular markers (lung-molGPA), and modified lung-molGPA, with c-index values of 0.75, 0.67, 0.66, 0.65, and 0.65, respectively. The model achieved areas under the curve of 0.88, 0.73, 0.92, and 0.90 for predicting PFS at 3, 6, 9, and 12 months, respectively. PFS significantly differed between the high- and low-risk groups (p < 0.001). CONCLUSIONS For patients with advanced-stage NSCLC with brain metastasis, MRI radiomics of brain metastases may predict PFS. The CoxCC model integrating brain metastasis radiomics, clinical features, and a prognostic index provided reliable multi-time-point PFS predictions for patients with advanced NSCLC and brain metastases receiving EGFR-TKI treatment.
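The c-index values quoted in this abstract can in principle be reproduced with Harrell's concordance index: among comparable patient pairs, count how often the higher predicted risk corresponds to the earlier progression. A minimal sketch for right-censored data (illustrative; not the authors' implementation):

```python
import numpy as np

def c_index(time, event, risk):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i has an observed event strictly before time j; it is
    concordant when the higher risk score belongs to the earlier event."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant = comparable = 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5   # ties in risk count half
    return concordant / comparable

# Perfectly anti-ordered risks (higher risk -> earlier progression) give 1.0.
c = c_index(time=[2, 4, 6, 8], event=[1, 1, 1, 0], risk=[4, 3, 2, 1])
```

A c-index of 0.5 corresponds to random risk ordering; the 0.75 reported for the CoxCC model sits well above that baseline.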
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Heng-Sheng Chao
- Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Hwa-Yen Chiu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chien-Yi Liao
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yen Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jyun-Ru Chen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tsu-Hui Shiao
- Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Yuh-Min Chen
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Chest Medicine, Taipei Veteran General Hospital, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan
14.
Nenoff L, Amstutz F, Murr M, Archibald-Heeren B, Fusella M, Hussein M, Lechner W, Zhang Y, Sharp G, Vasquez Osorio E. Review and recommendations on deformable image registration uncertainties for radiotherapy applications. Phys Med Biol 2023; 68:24TR01. [PMID: 37972540] [PMCID: PMC10725576] [DOI: 10.1088/1361-6560/ad0d8a]
Abstract
Deformable image registration (DIR) is a versatile tool used in many applications in radiotherapy (RT). DIR algorithms have been implemented in many commercial treatment planning systems providing accessible and easy-to-use solutions. However, the geometric uncertainty of DIR can be large and difficult to quantify, resulting in barriers to clinical practice. Currently, there is no agreement in the RT community on how to quantify these uncertainties and determine thresholds that distinguish a good DIR result from a poor one. This review summarises the current literature on sources of DIR uncertainties and their impact on RT applications. Recommendations are provided on how to handle these uncertainties for patient-specific use, commissioning, and research. Recommendations are also provided for developers and vendors to help users to understand DIR uncertainties and make the application of DIR in RT safer and more reliable.
Affiliation(s)
- Lena Nenoff
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- OncoRay—National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden—Rossendorf, Dresden, Germany
- Helmholtz-Zentrum Dresden—Rossendorf, Institute of Radiooncology—OncoRay, Dresden, Germany
- Florian Amstutz
- Department of Physics, ETH Zurich, Switzerland
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Martina Murr
- Section for Biomedical Physics, Department of Radiation Oncology, University of Tübingen, Germany
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Mohammad Hussein
- Metrology for Medical Physics, National Physical Laboratory, Teddington, United Kingdom
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Austria
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Greg Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, United States of America
- Harvard Medical School, Boston, MA, United States of America
- Eliana Vasquez Osorio
- Division of Cancer Sciences, The University of Manchester, Manchester, United Kingdom
15.
Wang AQ, Yu EM, Dalca AV, Sabuncu MR. A robust and interpretable deep learning framework for multi-modal registration via keypoints. Med Image Anal 2023; 90:102962. [PMID: 37769550] [PMCID: PMC10591968] [DOI: 10.1016/j.media.2023.102962]
Abstract
We present KeyMorph, a deep learning-based image registration framework that relies on automatically detecting corresponding keypoints. State-of-the-art deep learning methods for registration are often not robust to large misalignments, are not interpretable, and do not incorporate the symmetries of the problem. In addition, most models produce only a single prediction at test time. Our core insight, which addresses these shortcomings, is that corresponding keypoints between images can be used to obtain the optimal transformation via a differentiable closed-form expression. We use this observation to drive the end-to-end learning of keypoints tailored for the registration task, without knowledge of ground-truth keypoints. This framework not only leads to substantially more robust registration but also yields better interpretability, since the keypoints reveal which parts of the image drive the final alignment. Moreover, KeyMorph can be designed to be equivariant under image translations and/or symmetric with respect to the input image ordering. Finally, we show how multiple deformation fields, corresponding to different transformation variants, can be computed efficiently and in closed form at test time. We demonstrate the proposed framework on 3D affine and spline-based registration of multi-modal brain MRI scans. In particular, we show registration accuracy that surpasses current state-of-the-art methods, especially in the context of large displacements. Our code is available at https://github.com/alanqrwang/keymorph.
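For the affine case, the closed-form expression this abstract refers to is ordinary least squares on corresponding keypoints, which is differentiable in the keypoint coordinates. A sketch of the unweighted affine case (homogeneous-coordinate formulation is an assumption here; the paper also covers weighted and spline variants):

```python
import numpy as np

def affine_from_keypoints(src, dst):
    """Least-squares affine T (3x4) minimizing ||[src, 1] @ T.T - dst||^2
    for corresponding (N, 3) keypoint sets src and dst."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous (N, 4)
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # solve (4, 3) system
    return X.T                                          # (3, 4) = [A | t]

# Recover a known transform: pure translation by (1, 2, 3).
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
dst = src + np.array([1.0, 2.0, 3.0])
T = affine_from_keypoints(src, dst)
```

Because `lstsq` is differentiable with respect to its inputs, gradients can flow from a registration loss back into the network that predicts the keypoints.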
Affiliation(s)
- Alan Q Wang
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
- Evan M Yu
- Iterative Scopes, Cambridge, MA 02139, USA
- Adrian V Dalca
- Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology, Cambridge, MA 02139, USA; A.A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital, Charlestown, MA 02129, USA
- Mert R Sabuncu
- School of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York, NY 10065, USA
16.
Jones CK, Li B, Wu JH, Nakaguchi T, Xuan P, Liu TYA. Comparative analysis of alignment algorithms for macular optical coherence tomography imaging. Int J Retina Vitreous 2023; 9:60. [PMID: 37784169] [PMCID: PMC10544468] [DOI: 10.1186/s40942-023-00497-2]
Abstract
BACKGROUND Optical coherence tomography (OCT) is the most important and commonly utilized imaging modality in ophthalmology and is especially crucial for the diagnosis and management of macular diseases. Each OCT volume is typically only available as a series of cross-sectional images (B-scans) accessible through the proprietary software programs that accompany the OCT machines. To maximize the potential of OCT imaging for machine learning purposes, each OCT volume should be analyzed en bloc as a 3D volume, which requires aligning all the cross-sectional images within a particular volume. METHODS A dataset of OCT B-scans obtained from 48 age-related macular degeneration (AMD) patients and 50 normal controls was used to evaluate five registration algorithms. After alignment of B-scans from each patient, an en face surface map was created to measure registration quality, based on an automatically generated Laplace difference of the surface map: the smoother the surface map, the smaller the average Laplace difference. To demonstrate the usefulness of B-scan alignment, we trained a 3D convolutional neural network (CNN) to detect AMD on OCT images and compared the performance of the model with and without B-scan alignment. RESULTS The mean Laplace difference of the surface map before registration was 27 ± 4.2 pixels for the AMD group and 26.6 ± 4 pixels for the control group. After alignment, the smoothness of the surface map was improved, with a mean Laplace difference of 5.5 ± 2.7 pixels for the Advanced Normalization Tools Symmetric Normalization (ANTs-SyN) registration algorithm in the AMD group and 4.3 ± 1.4 pixels for ANTs in the control group. Our 3D CNN achieved superior performance in detecting AMD when aligned OCT B-scans were used (AUC 0.95 aligned vs. 0.89 unaligned).
CONCLUSIONS We introduced a novel metric to quantify OCT B-scan alignment and compared the effectiveness of five alignment algorithms. We confirmed that alignment could be improved in a statistically significant manner with publicly available alignment algorithms, with the ANTs algorithm providing the most robust performance overall. We further demonstrated that alignment of OCT B-scans will likely be useful for training 3D CNN models.
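The Laplace-difference metric can be sketched as the mean absolute discrete Laplacian of the en face surface-height map: it is zero for a perfectly planar surface and grows as neighboring A-scan heights disagree. The 5-point stencil and interior-only boundary handling below are assumptions, not taken from the paper:

```python
import numpy as np

def mean_laplace_difference(surface):
    """Mean |Laplacian| of a 2D surface-height map, using the standard
    5-point stencil on interior pixels only."""
    s = np.asarray(surface, dtype=float)
    lap = (s[:-2, 1:-1] + s[2:, 1:-1] + s[1:-1, :-2] + s[1:-1, 2:]
           - 4.0 * s[1:-1, 1:-1])
    return np.abs(lap).mean()

# A planar (perfectly aligned) surface has zero Laplace difference,
# since the Laplacian of any linear function vanishes.
y, x = np.mgrid[0:16, 0:16]
flat = 2.0 * x + 3.0 * y
smoothness = mean_laplace_difference(flat)
```

Lower values of this metric after registration correspond to the smoother surface maps reported above.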
Affiliation(s)
- Craig K Jones
- Wilmer Eye Institute, School of Medicine, Johns Hopkins University, 600 N. Wolfe Street, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Malone Hall, Suite 340, 3400 North Charles Street, Baltimore, MD, 21218, USA
- Bochong Li
- Graduate School of Science and Technology, Chiba University, 1-33, Yayoicho, Inage Ward, Chiba-shi, Chiba, 263-8522, Japan
- Jo-Hsuan Wu
- Shiley Eye Institute and Viterbi Family Department of Ophthalmology, University of California San Diego, 9415 Campus Point Drive, La Jolla, CA, 92093, USA
- Toshiya Nakaguchi
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoicho, Inage Ward, Chiba-shi, Chiba, 263-8522, Japan
- Ping Xuan
- School of Computer Science and Technology, Heilongjiang University, Harbin, 150080, China
- T Y Alvin Liu
- Wilmer Eye Institute, School of Medicine, Johns Hopkins University, 600 N. Wolfe Street, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Malone Hall, Suite 340, 3400 North Charles Street, Baltimore, MD, 21218, USA
17.
Zhang X, Gosnell J, Nainamalai V, Page S, Huang S, Haw M, Peng B, Vettukattil J, Jiang J. Advances in TEE-Centric Intraprocedural Multimodal Image Guidance for Congenital and Structural Heart Disease. Diagnostics (Basel) 2023; 13:2981. [PMID: 37761348] [PMCID: PMC10530233] [DOI: 10.3390/diagnostics13182981]
Abstract
Percutaneous interventions are gaining rapid acceptance in cardiology and revolutionizing the treatment of structural heart disease (SHD). As new percutaneous procedures for SHD are developed, their complexity and anatomical variability demand a high-resolution spatial understanding for intraprocedural image guidance. During the last decade, three-dimensional (3D) transesophageal echocardiography (TEE) has become one of the most widely used imaging methods for structural interventions. Although 3D-TEE can assess cardiac structures and function in real time, its limitations (e.g., limited field of view, image quality at large depth) must be addressed for its universal adoption and to improve the quality of its imaging and interventions. This review presents the role of TEE in the intraprocedural guidance of percutaneous structural interventions. We also focus on the current and future developments required in multimodal image integration when using TEE to enhance the management of congenital and SHD treatments.
Affiliation(s)
- Xinyue Zhang
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China; (X.Z.); (B.P.)
- Jordan Gosnell
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Varatharajan Nainamalai
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Savannah Page
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
- Sihong Huang
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Marcus Haw
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Bo Peng
- School of Computer Science, Southwest Petroleum University, Chengdu 610500, China; (X.Z.); (B.P.)
- Joseph Vettukattil
- Betz Congenital Health Center, Helen DeVos Children’s Hospital, Grand Rapids, MI 49503, USA; (J.G.); (S.H.); (M.H.)
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Jingfeng Jiang
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI 49931, USA; (V.N.); (S.P.)
- Joint Center for Biocomputing and Digital Health, Health Research Institute and Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI 49931, USA
18.
Chatterjee S, Bajaj H, Siddiquee IH, Subbarayappa NB, Simon S, Shashidhar SB, Speck O, Nürnberger A. MICDIR: Multi-scale inverse-consistent deformable image registration using UNetMSS with self-constructing graph latent. Comput Med Imaging Graph 2023; 108:102267. [PMID: 37506427] [DOI: 10.1016/j.compmedimag.2023.102267]
Abstract
Image registration is the process of bringing different images into a common coordinate system, a technique widely used in various applications of computer vision such as remote sensing, image retrieval, and, most commonly, medical imaging. Deep learning-based techniques have been applied successfully to tackle various complex medical image processing problems, including medical image registration, and several deep learning-based registration techniques have been proposed over the years. Deformable image registration techniques such as VoxelMorph have been successful in capturing finer changes and providing smoother deformations. However, VoxelMorph, as well as ICNet and FIRE, does not explicitly encode global dependencies (i.e., the overall anatomical view of the supplied image) and therefore cannot track large deformations. To tackle these problems, this paper extends the VoxelMorph approach in three ways. To improve performance for both small and large deformations, supervision of the model at different resolutions has been integrated using a multi-scale UNet. To help the network learn and encode the minute structural correlations of the given image pairs, a self-constructing graph network (SCGNet) has been used as the latent space of the multi-scale UNet, which can improve the learning process and help the model generalise better. Finally, to make the deformations inverse-consistent, a cycle consistency loss has been employed. On the task of brain MRI registration, the proposed method achieved significant improvements over ANTs and VoxelMorph, obtaining a Dice score of 0.8013 ± 0.0243 for intramodal and 0.6211 ± 0.0309 for intermodal registration, while VoxelMorph achieved 0.7747 ± 0.0260 and 0.6071 ± 0.0510, respectively.
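The Dice scores quoted above measure volumetric overlap between the warped and fixed label maps. A minimal version for binary segmentations (illustrative; the `eps` guard against empty masks is an assumption):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice coefficient 2|A intersect B| / (|A| + |B|) for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Two 8-voxel masks sharing 4 voxels: Dice = 2*4 / (8+8) = 0.5.
a = np.zeros((4, 4), bool); a[:2] = True
b = np.zeros((4, 4), bool); b[1:3] = True
score = dice(a, b)
```

In a registration evaluation, `a` would be the fixed segmentation and `b` the moving segmentation resampled through the predicted deformation field.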
Affiliation(s)
- Soumick Chatterjee
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany; Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; Genomics Research Centre, Human Technopole, Milan, Italy
- Himanshi Bajaj
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany
- Istiyak H Siddiquee
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany
- Steve Simon
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany
- Oliver Speck
- Biomedical Magnetic Resonance, Otto von Guericke University Magdeburg, Germany; German Centre for Neurodegenerative Disease, Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
- Andreas Nürnberger
- Faculty of Computer Science, Otto von Guericke University Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University Magdeburg, Germany; Centre for Behavioural Brain Sciences, Magdeburg, Germany
19.
Yu VY, Otazo R, Wu C, Subashi E, Baumann M, Koken P, Doneva M, Mazurkewitz P, Shasha D, Zelefsky M, Cervino L, Cohen O. Quantitative longitudinal mapping of radiation-treated prostate cancer using MR fingerprinting with radial acquisition and subspace reconstruction. Magn Reson Imaging 2023; 101:25-34. [PMID: 37015305] [PMCID: PMC10623548] [DOI: 10.1016/j.mri.2023.03.019]
Abstract
MR fingerprinting (MRF) enables fast multiparametric quantitative imaging with a single acquisition and has been shown to improve diagnosis of prostate cancer. However, most prostate MRF studies were performed with spiral acquisitions that are sensitive to B0 inhomogeneities and consequent blurring. In this work, a radial MRF acquisition with a novel subspace reconstruction technique was developed to enable fast T1/T2 mapping in the prostate in under 4 min. The subspace reconstruction exploits the extensive temporal correlations in the MRF dictionary to pre-compute a low dimensional space for the solution and thus reduce the number of radial spokes to accelerate the acquisition. Iterative reconstruction with the subspace model and additional regularization of the signal representation in the subspace is performed to minimize the number of spokes and maintain matching quality and SNR. Reconstruction accuracy was assessed using the ISMRM NIST phantom. In-vivo validation was performed on two healthy subjects and two prostate cancer patients undergoing radiation therapy. The longitudinal repeatability was quantified using the concordance correlation coefficient (CCC) in one of the healthy subjects by repeated scans over 1 year. One prostate cancer patient was scanned at three time points, before initiating therapy and following brachytherapy and external beam radiation. Changes in the T1/T2 maps obtained with the proposed method were quantified. The prostate, peripheral and transitional zones, and visible dominant lesion were delineated for each study, and the statistics and distribution of the quantitative mapping values were analyzed. Significant image quality improvements compared with standard reconstruction methods were obtained with the proposed subspace reconstruction method. A notable decrease in the spread of the T1/T2 values without biasing the estimated mean values was observed with the subspace reconstruction and agreed with reported literature values. 
The subspace reconstruction enabled visualization of small differences in T1/T2 values in the tumor region within the peripheral zone. Longitudinal imaging of a volunteer subject yielded a CCC of 0.89 for MRF T1 and 0.81 for MRF T2 in the prostate gland. Longitudinal imaging of the prostate patient confirmed the feasibility of capturing radiation treatment-related changes. This work is a proof-of-concept for high-resolution, fast quantitative mapping using golden-angle radial MRF combined with a subspace reconstruction technique for longitudinal treatment response assessment in subjects undergoing radiation treatment.
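The core idea of a subspace reconstruction, pre-computing a low-dimensional temporal basis from the MRF dictionary and fitting measured signals in that basis, can be sketched as follows. This is an illustrative NumPy toy with a synthetic exponential dictionary; the dictionary, rank, and signal model are assumptions, not the authors' reconstruction pipeline.

```python
import numpy as np

def build_subspace(dictionary, rank):
    """Right singular vectors of the MRF dictionary span a low-dimensional
    temporal subspace that captures its extensive temporal correlations."""
    _, _, vt = np.linalg.svd(dictionary, full_matrices=False)
    return vt[:rank]                      # (rank, n_timepoints)

def project_signal(signal, basis):
    """Least-squares coefficients of a measured signal in the subspace."""
    coeffs, *_ = np.linalg.lstsq(basis.T, signal, rcond=None)
    return coeffs

# Toy dictionary of decaying exponentials (T1-like relaxation curves)
t = np.linspace(0.0, 1.0, 200)
rates = 1.0 / np.linspace(0.1, 2.0, 50)
dictionary = np.exp(-np.outer(rates, t))  # (n_atoms, n_timepoints)

basis = build_subspace(dictionary, rank=5)
atom = dictionary[25]
recon = basis.T @ project_signal(atom, basis)
rel_err = np.linalg.norm(recon - atom) / np.linalg.norm(atom)  # small: rank 5 suffices
```

Because exponential-type dictionaries are highly correlated in time, a rank of a few components represents each atom almost exactly, which is what lets the acquisition use far fewer radial spokes.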
Affiliation(s)
- Victoria Y Yu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ricardo Otazo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Can Wu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ergys Subashi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Peter Koken
- Philips Research, MR Research, Hamburg, Germany
- Daniel Shasha
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Laura Cervino
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ouri Cohen
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
20
Kim D, Gan Y, Nedergaard M, Kelley DH, Tithof J. Image Analysis Techniques for In Vivo Quantification of Cerebrospinal Fluid Flow. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.20.549937. [PMID: 37546970 PMCID: PMC10401935 DOI: 10.1101/2023.07.20.549937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
Over the last decade, there has been a tremendous increase in interest in understanding the neurophysiology of cerebrospinal fluid (CSF) flow, which plays a crucial role in clearing metabolic waste from the brain. This growing interest was largely initiated by two significant discoveries: the glymphatic system (a pathway for solute exchange between interstitial fluid deep within the brain and the CSF surrounding the brain) and meningeal lymphatic vessels (lymphatic vessels in the layer of tissue surrounding the brain that drain CSF). These two CSF systems work in unison, and their disruption has been implicated in several neurological disorders including Alzheimer's disease, stroke, and traumatic brain injury. Here, we present experimental techniques for in vivo quantification of CSF flow via direct imaging of fluorescent microspheres injected into the CSF. We discuss detailed image processing methods, including registration and masking of stagnant particles, to improve the quality of measurements. We provide guidance for quantifying CSF flow through particle tracking and offer tips for optimizing the process. Additionally, we describe techniques for measuring changes in arterial diameter, which is a hypothesized CSF pumping mechanism. Finally, we outline how these same techniques can be applied to cervical lymphatic vessels, which collect fluid downstream from meningeal lymphatic vessels. We anticipate that these fluid mechanical techniques will prove valuable for future quantitative studies aimed at understanding mechanisms of CSF transport and disruption, as well as for other complex biophysical systems.
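Two of the ingredients discussed above, frame-to-frame particle linking and masking of stagnant particles, can be sketched minimally as follows. This assumes particle centroids have already been detected; the greedy nearest-neighbour linker and the thresholds are illustrative simplifications, not the authors' published pipeline.

```python
import numpy as np

def link_particles(frame_a, frame_b, max_disp):
    """Greedy nearest-neighbour linking of particle centroids between two
    frames. frame_a, frame_b: (n, 2) arrays of (x, y) positions.
    Returns (i, j) index pairs whose displacement is <= max_disp."""
    links, used = [], set()
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:   # each target used at most once
            links.append((i, j))
            used.add(j)
    return links

def mask_stagnant(tracks, min_total_disp):
    """Drop tracks whose start-to-end displacement is below a threshold,
    i.e. particles stuck to the wall or out of the flow."""
    return [tr for tr in tracks
            if np.linalg.norm(np.asarray(tr[-1]) - np.asarray(tr[0])) >= min_total_disp]

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.4, 0.1], [9.0, 9.0]])
links = link_particles(a, b, max_disp=1.0)        # only the first particle matches
tracks = [[(0, 0), (0.1, 0.0)], [(0, 0), (3, 4)]]
moving = mask_stagnant(tracks, min_total_disp=1.0)
```

Production trackers (e.g. predictive or globally optimal linkers) handle occlusion and crossing trajectories far better than this greedy sketch, but the masking step is the same in spirit.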
Affiliation(s)
- Daehyun Kim
- Department of Mechanical Engineering, University of Minnesota, 111 Church St SE, Minneapolis, MN, 55455, United States
- Yiming Gan
- Department of Mechanical Engineering, University of Rochester, Hopeman Engineering Bldg, Rochester, NY, 14627, United States
- Maiken Nedergaard
- Center for Translational Neuromedicine, University of Rochester Medical Center, 601 Elmwood Ave, Rochester, NY, 14642, United States
- Douglas H. Kelley
- Department of Mechanical Engineering, University of Rochester, Hopeman Engineering Bldg, Rochester, NY, 14627, United States
- Jeffrey Tithof
- Department of Mechanical Engineering, University of Minnesota, 111 Church St SE, Minneapolis, MN, 55455, United States
21
Yang G, Xu M, Chen W, Qiao X, Shi H, Hu Y. A brain CT-based approach for predicting and analyzing stroke-associated pneumonia from intracerebral hemorrhage. Front Neurol 2023; 14:1139048. [PMID: 37332986 PMCID: PMC10272424 DOI: 10.3389/fneur.2023.1139048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 05/08/2023] [Indexed: 06/20/2023] Open
Abstract
Introduction Stroke-associated pneumonia (SAP) is a common complication of stroke that can increase the mortality rate of patients and the burden on their families. In contrast to prior clinical scoring models that rely on baseline data, we propose constructing models based on brain CT scans due to their accessibility and clinical universality. Methods To explore the mechanism behind the distribution and lesion areas of intracerebral hemorrhage (ICH) in relation to pneumonia, we utilized an MRI atlas that presents brain structures, together with a registration method, to extract features that may represent this relationship. We developed three machine learning models to predict the occurrence of SAP using these features. Ten-fold cross-validation was applied to evaluate the performance of the models. Additionally, we constructed a probability map through statistical analysis that displays which brain regions are more frequently impacted by hematoma in patients with SAP, based on four types of pneumonia. Results Our study included a cohort of 244 patients, and we extracted 35 features that captured the invasion of ICH into different brain regions for model development. We evaluated the performance of three machine learning models, namely, logistic regression, support vector machine, and random forest, in predicting SAP; the AUCs for these models ranged from 0.77 to 0.82. The probability map revealed that the distribution of ICH varied between the left and right brain hemispheres in patients with moderate and severe SAP, and we identified several brain structures, including the left-choroid-plexus, right-choroid-plexus, right-hippocampus, and left-hippocampus, that were more closely related to SAP based on feature selection. Additionally, we observed that some statistical indicators of ICH volume, such as mean and maximum values, were proportional to the severity of SAP.
Discussion Our findings suggest that our method is effective in classifying the development of pneumonia based on brain CT scans. Furthermore, we identified distinct characteristics, such as volume and distribution, of ICH in four different types of SAP.
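The evaluation protocol described above, three classifiers compared by ten-fold cross-validated AUC, can be sketched with scikit-learn. The feature matrix here is a synthetic stand-in for the 35 region-invasion features of the 244 patients; library availability, hyperparameters, and the data generator are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 35 region-invasion features of 244 patients
X, y = make_classification(n_samples=244, n_features=35, n_informative=10,
                           random_state=0)

models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "svm": make_pipeline(StandardScaler(), SVC()),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Ten-fold cross-validated AUC for each model, as in the study's protocol
aucs = {name: cross_val_score(m, X, y, cv=10, scoring="roc_auc").mean()
        for name, m in models.items()}
```

Scaling inside the pipeline (rather than before the split) keeps each cross-validation fold free of information leakage from the held-out data.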
Affiliation(s)
- Guangtong Yang
- School of Control Science and Engineering, Shandong University, Jinan, China
- Min Xu
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
- Wei Chen
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xu Qiao
- School of Control Science and Engineering, Shandong University, Jinan, China
- Hongfeng Shi
- Neurointensive Care Unit, Shengli Oilfield Central Hospital, Dongying, China
- Yongmei Hu
- School of Control Science and Engineering, Shandong University, Jinan, China
22
Gezginer I, Chen Z, Yoshihara HA, Deán-Ben XL, Razansky D. Volumetric registration framework for multimodal functional magnetic resonance and optoacoustic tomography of the rodent brain. PHOTOACOUSTICS 2023; 31:100522. [PMID: 37362869 PMCID: PMC10285284 DOI: 10.1016/j.pacs.2023.100522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2023] [Revised: 06/06/2023] [Accepted: 06/08/2023] [Indexed: 06/28/2023]
Abstract
Optoacoustic tomography (OAT) provides a non-invasive means to characterize cerebral hemodynamics across an entire murine brain while attaining multi-parametric readouts not available with other modalities. This unique capability can massively impact our understanding of brain function. However, OAT largely lacks the soft tissue contrast required for unambiguous identification of brain regions. Hence, its accurate registration to a reference brain atlas is paramount for attaining meaningful functional readings. Herein, we capitalized on the simultaneously acquired bi-modal data from the recently-developed hybrid magnetic resonance optoacoustic tomography (MROT) scanner in order to devise an image coregistration paradigm that facilitates brain parcellation and anatomical referencing. We evaluated the performance of the proposed methodology by coregistering OAT data acquired with a standalone system using different registration methods. The enhanced performance is further demonstrated for functional OAT data analysis and characterization of stimulus-evoked brain responses. The suggested approach enables better consolidation of the research findings, thus facilitating wider acceptance of OAT as a powerful neuroimaging tool to study brain functions and diseases.
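A similarity metric commonly used when coregistering modalities with very different contrast, such as OAT and MRI, is normalized mutual information, since it does not assume linearly related intensities. The sketch below is a generic NumPy illustration of the metric, not the paper's MROT-specific coregistration method.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B); larger values indicate better
    statistical dependence, hence better alignment, between images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# Monotone intensity remapping: a crude stand-in for a second modality
remapped = np.sqrt(img)
aligned = normalized_mutual_information(img, remapped)
shifted = normalized_mutual_information(img, np.roll(remapped, 7, axis=0))
```

Misaligning the "second modality" destroys the intensity dependence, so the NMI drops toward 1; registration algorithms search over transforms to maximize this score.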
Affiliation(s)
- Irmak Gezginer
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Zhenyue Chen
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Hikari A.I. Yoshihara
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Xosé Luís Deán-Ben
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Daniel Razansky
- Institute for Biomedical Engineering and Institute of Pharmacology and Toxicology, Faculty of Medicine, University of Zurich, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
23
Hirai R, Mori S, Suyari H, Tsuji H, Ishikawa H. Optimizing 3DCT image registration for interfractional changes in carbon-ion prostate radiotherapy. Sci Rep 2023; 13:7448. [PMID: 37156901 PMCID: PMC10167266 DOI: 10.1038/s41598-023-34339-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Accepted: 04/27/2023] [Indexed: 05/10/2023] Open
Abstract
To perform setup procedures including both positional and dosimetric information, we developed a CT-CT rigid image registration algorithm utilizing water equivalent pathlength (WEPL)-based image registration and compared the resulting dose distribution with those of two other algorithms, intensity-based image registration and target-based image registration, in prostate cancer radiotherapy using the carbon-ion pencil beam scanning technique. We used the data of the carbon ion therapy planning CT and the four weekly treatment CTs of 19 prostate cancer cases. Three CT-CT registration algorithms were used to register the treatment CTs to the planning CT. Intensity-based image registration uses CT voxel intensity information. Target-based image registration uses the target position on the treatment CTs to register it to that on the planning CT. WEPL-based image registration registers the treatment CTs to the planning CT using WEPL values. Initial dose distributions were calculated using the planning CT with the lateral beam angles. The treatment plan parameters were optimized to administer the prescribed dose to the PTV on the planning CT. Weekly dose distributions using the three different algorithms were calculated by applying the treatment plan parameters to the weekly CT data. Dosimetric parameters, including the dose received by 95% of the clinical target volume (CTV-D95) and rectal volumes receiving > 20 Gy (RBE) (V20), > 30 Gy (RBE) (V30), and > 40 Gy (RBE) (V40), were calculated. Statistical significance was assessed using the Wilcoxon signed-rank test. Interfractional CTV displacement over all patients was 6.0 ± 2.7 mm (19.3 mm maximum). WEPL differences between the planning CT and the treatment CT were 1.2 ± 0.6 mm-H2O (< 3.9 mm-H2O), 1.7 ± 0.9 mm-H2O (< 5.7 mm-H2O) and 1.5 ± 0.7 mm-H2O (< 3.6 mm-H2O maxima) with the intensity-based image registration, target-based image registration, and WEPL-based image registration, respectively.
For CTV coverage, the D95 values on the planning CT were > 95% of the prescribed dose in all cases. The mean CTV-D95 values were 95.8 ± 11.5% and 98.8 ± 1.7% with the intensity-based image registration and target-based image registration, respectively. WEPL-based image registration yielded a CTV-D95 of 99.0 ± 0.4% and a rectal Dmax of 51.9 ± 1.9 Gy (RBE), compared to 49.4 ± 9.1 Gy (RBE) with intensity-based image registration and 52.2 ± 1.8 Gy (RBE) with target-based image registration. The WEPL-based image registration algorithm improved target coverage over the other algorithms and reduced rectal dose relative to target-based image registration, even when the magnitude of the interfractional variation increased.
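The quantity driving this registration, water-equivalent pathlength, is obtained by integrating relative stopping power along the beam ray through the CT volume. A minimal sketch follows; the HU-to-RSP calibration points below are hypothetical placeholders (real calibrations are scanner- and site-specific), and this is not the authors' implementation.

```python
import numpy as np

# Hypothetical piecewise-linear HU -> relative stopping power (RSP) table;
# clinical calibration curves are scanner- and site-specific.
HU_POINTS = np.array([-1000.0, 0.0, 1000.0, 3000.0])
RSP_POINTS = np.array([0.001, 1.0, 1.5, 2.5])

def hu_to_rsp(hu):
    """Convert CT numbers to relative stopping power by interpolation."""
    return np.interp(hu, HU_POINTS, RSP_POINTS)

def wepl_along_ray(ct_line, voxel_mm):
    """Water-equivalent pathlength (mm-H2O) along a sampled beam ray:
    the sum of per-voxel RSP times the geometric step length."""
    return float(np.sum(hu_to_rsp(np.asarray(ct_line))) * voxel_mm)

water_wepl = wepl_along_ray(np.zeros(100), voxel_mm=1.0)       # 100 mm of water
bone_wepl = wepl_along_ray(np.full(10, 1000.0), voxel_mm=1.0)  # denser than water
```

Registering on WEPL rather than raw intensity aligns the quantity that actually determines where the carbon-ion beam stops, which is why it preserves target coverage under anatomical change.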
Affiliation(s)
- Ryusuke Hirai
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Department of Information and Image Sciences, Faculty of Engineering, Chiba University, Inage-ku, Chiba, 263-8522, Japan
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Hiroki Suyari
- Department of Information and Image Sciences, Faculty of Engineering, Chiba University, Inage-ku, Chiba, 263-8522, Japan
- Hiroshi Tsuji
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
24
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. FRONTIERS IN RADIOLOGY 2023; 3:1153784. [PMID: 37492386 PMCID: PMC10365282 DOI: 10.3389/fradi.2023.1153784] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 03/31/2023] [Indexed: 07/27/2023]
Abstract
Introduction Medical image analysis is of tremendous importance in serving clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software packages and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analyses of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion The uRP offers three advantages, as it 1) spans a wealth of algorithms for image processing including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline, 2) integrates a variety of functional modules, which can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee joint analyses, and 3) enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage for multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to largely simplify the clinical scientific research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China
25
Naser MA, Wahid KA, Ahmed S, Salama V, Dede C, Edwards BW, Lin R, McDonald B, Salzillo TC, He R, Ding Y, Abdelaal MA, Thill D, O'Connell N, Willcut V, Christodouleas JP, Lai SY, Fuller CD, Mohamed ASR. Quality assurance assessment of intra-acquisition diffusion-weighted and T2-weighted magnetic resonance imaging registration and contour propagation for head and neck cancer radiotherapy. Med Phys 2023; 50:2089-2099. [PMID: 36519973 PMCID: PMC10121748 DOI: 10.1002/mp.16128] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 11/10/2022] [Accepted: 11/13/2022] [Indexed: 12/23/2022] Open
Abstract
BACKGROUND/PURPOSE Adequate image registration of anatomical and functional magnetic resonance imaging (MRI) scans is necessary for MR-guided head and neck cancer (HNC) adaptive radiotherapy planning. Despite the quantitative capabilities of diffusion-weighted imaging (DWI) MRI for treatment plan adaptation, geometric distortion remains a considerable limitation. Therefore, we systematically investigated various deformable image registration (DIR) methods to co-register DWI and T2-weighted (T2W) images. MATERIALS/METHODS We compared three commercial (ADMIRE, Velocity, Raystation) and three open-source (Elastix with default settings [Elastix Default], Elastix with parameter set 23 [Elastix 23], Demons) post-acquisition DIR methods applied to T2W and DWI MRI images acquired during the same imaging session in twenty immobilized HNC patients. In addition, we used the non-registered images (None) as a control comparator. Ground-truth segmentations of radiotherapy structures (tumour and organs at risk) were generated by a physician expert on both image sequences. For each registration approach, structures were propagated from T2W to DWI images. These propagated structures were then compared with ground-truth DWI structures using the Dice similarity coefficient and mean surface distance. RESULTS 19 left submandibular glands, 18 right submandibular glands, 20 left parotid glands, 20 right parotid glands, 20 spinal cords, and 12 tumours were delineated. Most DIR methods took <30 s to execute per case, with the exception of Elastix 23 which took ∼458 s to execute per case. ADMIRE and Elastix 23 demonstrated improved performance over None for all metrics and structures (Bonferroni-corrected p < 0.05), while the other methods did not. Moreover, ADMIRE and Elastix 23 significantly improved performance in individual and pooled analysis compared to all other methods. 
CONCLUSIONS The ADMIRE DIR method offers improved geometric performance with reasonable execution time and should therefore be favoured for registering T2W and DWI images acquired during the same scan session in HNC patients. These results are important to ensure the appropriate selection of registration strategies for MR-guided radiotherapy.
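The two evaluation metrics used above, the Dice similarity coefficient and mean surface distance, can be sketched for 2D binary masks as follows. This is a generic illustration assuming SciPy is available; clinical tools compute these on 3D structures with physical voxel spacing.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b):
    """Symmetric mean distance (in voxels) between mask boundaries."""
    def surface(m):
        m = m.astype(bool)
        return np.argwhere(m & ~binary_erosion(m))   # boundary voxels
    sa, sb = surface(a), surface(b)
    d_ab = [np.min(np.linalg.norm(sb - p, axis=1)) for p in sa]
    d_ba = [np.min(np.linalg.norm(sa - p, axis=1)) for p in sb]
    return (np.mean(d_ab) + np.mean(d_ba)) / 2.0

# Two 10x10 squares, the "propagated" one shifted down by one voxel
gt = np.zeros((20, 20), dtype=bool)
gt[5:15, 5:15] = True
prop = np.zeros((20, 20), dtype=bool)
prop[6:16, 5:15] = True
dsc = dice(gt, prop)                    # 2*90 / (100+100) = 0.9
msd = mean_surface_distance(gt, prop)
```

Dice rewards volumetric overlap while the surface distance penalizes boundary misplacement, which is why studies like this one report both.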
Affiliation(s)
- Mohamed A Naser
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Kareem A Wahid
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Sara Ahmed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Vivian Salama
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Cem Dede
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Benjamin W Edwards
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ruitao Lin
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Brigid McDonald
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Travis C Salzillo
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Renjie He
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Yao Ding
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Moamen Abobakr Abdelaal
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Stephen Y Lai
- Department of Head and Neck Surgery, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Clifton D Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Abdallah S R Mohamed
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
26
2D MRI registration using glowworm swarm optimization with partial opposition-based learning for brain tumor progression. Pattern Anal Appl 2023. [DOI: 10.1007/s10044-023-01153-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/24/2023]
27
Rapid Morphological Measurement Method of Aortic Dissection Stent Based on Spatial Observation Point Set. Bioengineering (Basel) 2023; 10:bioengineering10020139. [PMID: 36829632 PMCID: PMC9951888 DOI: 10.3390/bioengineering10020139] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 01/12/2023] [Accepted: 01/12/2023] [Indexed: 01/22/2023] Open
Abstract
OBJECTIVES Post-operative stent morphology of aortic dissection patients is important for performing clinical diagnosis and prognostic assessment. However, stent morphologies still need to be manually measured, a process prone to errors, high time consumption, and difficulty in exploiting inter-data associations. Herein, we propose a method based on the stepwise combination of basic, non-divisible data sets to quickly obtain morphological parameters with high accuracy. METHODS We performed the 3D reconstruction of 109 post-operative follow-up CT image data sets from 26 patients using Mimics software. By extracting the spatial locations of the basic morphological observation points on the stent, we defined a basic and non-reducible set of observation points. Further, we implemented a fully automatic stent segmentation and observation point extraction algorithm. We analyzed the stability and accuracy of the algorithms on a test set containing 8 cases and 408 points. Based on this dataset, we calculated three morphological parameters of different complexity for the different spatial structural features exhibited by the stent. Finally, we compared the two measurement schemes in four aspects: data variability, data stability, statistical process complexity, and algorithmic error. RESULTS The statistical results of the two methods on two low-complexity morphological parameters (spatial position of the stent end and vascular stent end-slip volume) show good agreement (n = 26, P1, P2 < 0.001, r1 = 0.992, r2 = 0.988). The statistics of the proposed method for the morphological parameters of medium complexity (proximal and distal support ring feature diameters) avoid the errors caused by manual extraction, and the magnitude of this correction to the traditional method does not exceed 4 mm, with an average correction of 1.38 mm.
Meanwhile, our proposed automatic observation point extraction method has only 2.2% error rate on the test set, and the average spatial distance from the manually marked observation points is 0.73 mm. Thus, the proposed method is able to rapidly and accurately measure the stent circumferential deflection angle, which is highly complex and cannot be measured using traditional methods. CONCLUSIONS The proposed method can significantly reduce the statistical observation time and information processing cost compared to the traditional morphological observation methods. Moreover, when new morphological parameters are required, one can quickly and accurately obtain the target parameters by new "combinatorial functions." Iterative modification of the data set itself is avoided.
28
Xu J, Yang K, Chen Y, Dai L, Zhang D, Shuai P, Shi R, Yang Z. Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network. Front Neurosci 2023; 16:1117134. [PMID: 36726854 PMCID: PMC9884961 DOI: 10.3389/fnins.2022.1117134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Accepted: 12/28/2022] [Indexed: 01/18/2023] Open
Abstract
The task of fundus image registration aims to find matching keypoints between an image pair. Traditional methods detect keypoints using hand-designed features, which fail to cope with complex application scenarios. Owing to the strong feature learning ability of deep neural networks, current image registration methods based on deep learning directly learn to align the geometric transformation between the reference image and test image in an end-to-end manner. Another mainstream approach to this task learns the displacement vector field between the image pair. In these ways, image registration has achieved significant advances. However, due to the complicated vascular morphology of the retinal image, such as texture and shape, widely used deep-learning-based image registration methods fail to achieve reliable and stable keypoint detection and registration results. In this paper, we aim to bridge this gap. Concretely, since vessel crossing and branching points can reliably and stably characterize the key components of a fundus image, we propose to learn to detect and match all the crossing and branching points of the input images with a single deep neural network. Moreover, in order to accurately locate the keypoints and learn discriminative feature embeddings, a brain-inspired spatially-varying adaptive pyramid context aggregation network is proposed to incorporate contextual cues under the supervision of a structured triplet ranking loss. Experimental results show that the proposed method achieves more accurate registration results with a significant speed advantage.
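The supervision signal mentioned above, a triplet ranking loss on keypoint embeddings, can be illustrated in its plain hinge form. The paper's structured variant is not specified in the abstract, so this NumPy sketch shows only the generic idea; the margin, embedding size, and normalisation are assumptions.

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss on embedding rows: pull matching keypoints
    together, push non-matching ones at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + margin)))

rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # L2-normalised embeddings

# Well-trained case: positives identical, negatives diametrically opposed
well_separated = triplet_ranking_loss(emb, emb, -emb)   # d_pos=0, d_neg=2 -> loss 0
# Pathological case: roles swapped, maximal violation of the margin
collapsed = triplet_ranking_loss(emb, -emb, emb)        # d_pos=2, d_neg=0 -> loss 3
```

Minimising this loss over detected vessel crossing/branching points drives matching keypoints across the image pair toward nearby embeddings, which is what makes nearest-neighbour matching reliable at registration time.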
Affiliation(s)
- Jie Xu
- Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China
- Kang Yang
- Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Youxin Chen
- Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China
- Liming Dai
- Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Dongdong Zhang
- Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Ping Shuai
- Department of Health Management and Physical Examination, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China
- School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
- Rongjie Shi
- Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
- Zhanbo Yang
- Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China
29
Raju VB, Imtiaz MH, Sazonov E. Food Image Segmentation Using Multi-Modal Imaging Sensors with Color and Thermal Data. SENSORS (BASEL, SWITZERLAND) 2023; 23:560. [PMID: 36679357 PMCID: PMC9860575 DOI: 10.3390/s23020560] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 12/23/2022] [Accepted: 12/29/2022] [Indexed: 06/17/2023]
Abstract
Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem that is still being tackled in this field is the segmentation of regions of food when multiple food items are present, particularly when similar-looking foods (similar in color and/or texture) are present. Food image segmentation is a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue) and thermal. Furthermore, we propose multi-modal four-dimensional (RGB-T) image segmentation using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (at room temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of the within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results compared with RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for RGB and thermal data, respectively.
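The central idea, clustering per-pixel four-dimensional RGB-T vectors so that the thermal channel separates foods the color channels cannot, can be sketched as follows. The synthetic pixels, the deterministic initialisation, and the plain (non-bootstrapped) WSS computation are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def kmeans(features, k, iters=50):
    """Plain k-means on per-pixel feature vectors (e.g. [R, G, B, T])."""
    idx = np.linspace(0, len(features) - 1, k).astype(int)
    centers = features[idx].copy()        # deterministic, evenly spaced init
    labels = np.zeros(len(features), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

def wss(features, labels, centers):
    """Within-cluster sum of squares, used to choose k (elbow criterion)."""
    return float(sum(np.sum((features[labels == j] - c) ** 2)
                     for j, c in enumerate(centers)))

# Two similar-looking foods: identical RGB, separable only by temperature
rng = np.random.default_rng(1)
rgb = np.tile([0.8, 0.6, 0.4], (200, 1)) + rng.normal(0.0, 0.01, (200, 3))
temp = np.concatenate([np.full(100, 0.2), np.full(100, 0.9)])[:, None]
rgbt = np.hstack([rgb, temp])

labels2, centers2 = kmeans(rgbt, k=2)
labels1, centers1 = kmeans(rgbt, k=1)
wss2 = wss(rgbt, labels2, centers2)
wss1 = wss(rgbt, labels1, centers1)
```

With RGB alone these two "foods" are indistinguishable noise around one color; appending the thermal channel makes k = 2 the clear elbow in the WSS curve.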
30.
The Systematic Review of Artificial Intelligence Applications in Breast Cancer Diagnosis. Diagnostics (Basel) 2022; 13:45. [PMID: 36611337] [PMCID: PMC9818874] [DOI: 10.3390/diagnostics13010045]
Abstract
Several studies have demonstrated the value of artificial intelligence (AI) applications in breast cancer diagnosis. Existing reviews of AI applications in breast cancer diagnosis compare individual studies, but they lack systematization, and each study appears to have been conducted uniquely. The purpose and contributions of this study are to offer elaborated knowledge on the applications of AI in the diagnosis of breast cancer through citation analysis, in order to categorize the main areas of specialization that attract the attention of the academic community, and through thematic analysis to identify the specific topics researched in each category. In this study, a total of 17,900 studies addressing breast cancer and AI published between 2012 and 2022 were obtained from these databases: IEEE, Embase: Excerpta Medica Database Guide-Ovid, PubMed, Springer, Web of Science, and Google Scholar. After applying inclusion and exclusion criteria to the search, 36 studies were identified. The vast majority of AI applications used classification models for the prediction of breast cancer. Accuracy (99%) was the most frequently reported performance metric, followed by specificity (98%) and area under the curve (0.95). Additionally, the Convolutional Neural Network (CNN) was the model of choice in several studies. This study shows that the quantity and caliber of studies that use AI applications in breast cancer diagnosis will continue to rise annually. As a result, AI-based applications are viewed as a supplement to doctors' clinical reasoning, with the ultimate goal of providing quality healthcare that is both affordable and accessible to everyone worldwide.
31.
Dufresne E, Fortun D, Kremer S, Noblet V. A unified framework for focal intensity change detection and deformable image registration. Application to the monitoring of multiple sclerosis lesions in longitudinal 3D brain MRI. Front Neuroimaging 2022; 1:1008128. [PMID: 37555167] [PMCID: PMC10406299] [DOI: 10.3389/fnimg.2022.1008128]
Abstract
Registration is a crucial step in the design of automatic change detection methods dedicated to longitudinal brain MRI. Even small registration inaccuracies can significantly deteriorate detection performance by introducing numerous spurious detections. Rigid or affine registration is usually used to align baseline and follow-up scans as a pre-processing step before applying a change detection method. In the context of multiple sclerosis, deformable registration may be required to capture the complex deformations due to brain atrophy. However, non-rigid registration can alter the shape of appearing and evolving lesions while minimizing the dissimilarity between the two images. To overcome this issue, we consider registration and change detection as intertwined problems that should be solved jointly. To this end, we formulate these two separate tasks as a single optimization problem involving a unique energy that models their coupling. We focus on intensity-based change detection and registration, but the approach is versatile and could be extended to other modeling choices. We show experimentally on synthetic and real data that the proposed joint approach overcomes the limitations of the sequential scheme.
32.
Computational Analysis of Cardiac Contractile Function. Curr Cardiol Rep 2022; 24:1983-1994. [PMID: 36301405] [PMCID: PMC10091868] [DOI: 10.1007/s11886-022-01814-1]
Abstract
PURPOSE OF REVIEW Heart failure is associated with high incidence and mortality worldwide. The mechanical properties of the myocardium are critical determinants of cardiac function, with regional variations in myocardial contractility demonstrated within infarcted ventricles. Quantitative assessment of cardiac contractile function is therefore critical for identifying myocardial infarction and enabling early diagnosis and therapeutic intervention. RECENT FINDINGS Advances in cardiac functional assessment have kept pace with the development of imaging techniques. Methods tailored to advanced imaging have been widely used in cardiac magnetic resonance, echocardiography, and optical microscopy. In this review, we introduce fundamental concepts and applications of representative methods for each imaging modality used in both fundamental research and clinical investigations. All these methods have been designed or developed to quantify time-dependent 2-dimensional (2D) or 3D cardiac mechanics, holding great potential to unravel global or regional myocardial deformation and contractile function from end-systole to end-diastole. Computational methods to assess cardiac contractile function provide quantitative insight into the analysis of myocardial mechanics during cardiac development, injury, and remodeling.
33.
Al-Mallah MH, Bateman TM, Branch KR, Crean A, Gingold EL, Thompson RC, McKenney SE, Miller EJ, Murthy VL, Nieman K, Villines TC, Yester MV, Einstein AJ, Mahmarian JJ. 2022 ASNC/AAPM/SCCT/SNMMI guideline for the use of CT in hybrid nuclear/CT cardiac imaging. J Nucl Cardiol 2022; 29:3491-3535. [PMID: 36056224] [DOI: 10.1007/s12350-022-03089-z]
34.
Lu J, Öfverstedt J, Lindblad J, Sladoje N. Is image-to-image translation the panacea for multimodal image registration? A comparative study. PLoS One 2022; 17:e0276196. [PMID: 36441754] [PMCID: PMC9704666] [DOI: 10.1371/journal.pone.0276196]
Abstract
Despite current advancement in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a potentially easier monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities that express distinctly different properties of the sample is not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open-source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproducing and benchmarking.
35.
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last six years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results, and data extraction was based on the related research questions (RQs). This SLR identifies the various loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
36.
A survey of catheter tracking concepts and methodologies. Med Image Anal 2022; 82:102584. [DOI: 10.1016/j.media.2022.102584]
37.
Toh K, Saunders D, Verd B, Steventon B. Zebrafish neuromesodermal progenitors undergo a critical state transition in vivo. iScience 2022; 25:105216. [PMID: 36274939] [PMCID: PMC9579027] [DOI: 10.1016/j.isci.2022.105216]
Abstract
The transition state model of cell differentiation proposes that a transient window of gene expression stochasticity precedes entry into a differentiated state. Here, we assess this theoretical model in zebrafish neuromesodermal progenitors (NMps) in vivo during late somitogenesis stages. We observed an increase in gene expression variability at the 24 somite stage (24ss) before their differentiation into spinal cord and paraxial mesoderm. Analysis of a published 18ss scRNA-seq dataset showed that the NMp population is noisier than its derivatives. By building in silico composite gene expression maps from image data, we assigned an 'NM index' to in silico NMps based on the expression of neural and mesodermal markers and demonstrated that cell population heterogeneity peaked at 24ss. Further examination revealed cells with gene expression profiles incongruent with their prospective fate. Taken together, our work supports the transition state model within an endogenous cell fate decision making event.
38.
A New Approach to Automatically Calibrate and Detect Building Cracks. Buildings 2022. [DOI: 10.3390/buildings12081081]
Abstract
Timely crack detection plays an important role in building damage assessment. In this study, an automatic crack detection method based on image registration and pixel-level segmentation (an improved DeepLab_v3+) is proposed. Firstly, the moving images are calibrated by image registration, and a similarity measure is adopted to evaluate the calibrated results. Secondly, the improved DeepLab_v3+ is used to segment the fixed images and the calibrated images. Finally, the difference in crack pixels between the fixed and calibrated images is estimated, and the key parameters are investigated to find the optimal optimizer and learning rate. The results illustrate that: (1) the image registration technology shows excellent calibration performance, with an average error of only 4%; (2) with ResNet-50 selected as the backbone network of the improved DeepLab_v3+, the automatic detection method proposed in this study is more efficient than other common pixel-level segmentation algorithms; (3) the best network optimizer and learning rate for the crack segmentation task are sgdm and 0.001, respectively. The crack detection method proposed in this study can significantly improve the technical level of crack detection in practical projects.
39.
Semi-Automatic Multiparametric MR Imaging Classification Using Novel Image Input Sequences and 3D Convolutional Neural Networks. Algorithms 2022. [DOI: 10.3390/a15070248]
Abstract
The role of multi-parametric magnetic resonance imaging (mp-MRI) is becoming increasingly important in the diagnosis of the clinical severity of prostate cancer (PCa). However, mp-MRI studies usually contain several unaligned 3D sequences, such as DWI and T2-weighted image sequences, and many images in the full 3D sequences do not contain cancerous tissue, which affects the accuracy of large-scale prostate cancer detection. Therefore, there is a great need for a method that performs accurate computer-aided detection on mp-MRI images while minimizing the influence of useless features. Our proposed PCa detection method is divided into three stages: (i) multimodal image alignment, (ii) automatic cropping of the sequence images to the entire prostate region, and, finally, (iii) combining multiple modal images of each patient into novel 3D sequences and using 3D convolutional neural networks to learn the newly composed 3D sequences with different modal alignments. We arrange the different modal alignments so that the model fully learns the features of cancerous tissue; then, we predict the clinical severity of PCa and generate a 3D cancer response map for the 3D sequence images from the last convolution layer of the network. The prediction results and 3D response map help to understand the features that the model focuses on during the process of 3D-CNN feature learning. We applied our method to prostate cancer patient data from Toho hospital; the AUC (0.85) was significantly higher than that of other methods.
40.
MRI Radiogenomics in Precision Oncology: New Diagnosis and Treatment Method. Comput Intell Neurosci 2022; 2022:2703350. [PMID: 35845886] [PMCID: PMC9282990] [DOI: 10.1155/2022/2703350]
Abstract
Precision medicine for cancer offers a new way to deliver the most accurate and effective treatment for each individual cancer. Given the highly time-evolving intertumor and intratumor heterogeneity of tumors, several obstacles still hinder its diagnosis and treatment in clinical practice despite extensive exploration over the past years. This paper investigates radiogenomics methods in the literature for precision medicine for cancer, focusing on the heterogeneity analysis of tumors. Based on integrative analysis of multimodal (parametric) imaging and molecular data in bulk tumors, a comprehensive analysis and discussion involving the characterization of tumor heterogeneity in imaging and molecular expression are conducted. These investigations are intended to (i) fully excavate the multidimensional spatial, temporal, and semantic information in high-dimensional breast magnetic resonance imaging data, integrating the highly specific structured data of genomics with the diagnostic and cognitive processes of doctors, and (ii) establish a radiogenomics data representation model based on multidimensional consistency analysis with multilevel spatial-temporal correlations.
41.
Xiao X, Xu Z, Hou D, Yang Z, Lin F. Rigid registration algorithm based on the minimization of the total variation of the difference map. J Synchrotron Radiat 2022; 29:1085-1094. [PMID: 35787576] [PMCID: PMC9255568] [DOI: 10.1107/s1600577522005598]
Abstract
Image registration is broadly used in various scenarios in which similar scenes in different images are to be aligned. However, image registration becomes challenging when the contrasts and backgrounds in the images are vastly different. This work proposes using the total variation of the difference map between two images (TVDM) as a dissimilarity metric in rigid registration. A method based on TVDM minimization is implemented for image rigid registration. The method is tested with both synthesized and real experimental data that have various noise and background conditions. The performance of the proposed method is compared with the results of other rigid registration methods. It is demonstrated that the proposed method is highly accurate and robust and outperforms other methods in all of the tests. The new algorithm provides a robust option for image registrations that are critical to many nano-scale X-ray imaging and microscopy applications.
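The TVDM idea summarized above — using the total variation of the difference map between two images as the dissimilarity to minimize — can be illustrated with a brute-force translation-only search. The image content and search range below are invented for illustration; the paper's method handles full rigid transforms with a proper optimizer.

```python
import numpy as np

def tvdm(a, b):
    """Total variation (anisotropic) of the difference map d = a - b."""
    d = a - b
    return np.abs(np.diff(d, axis=0)).sum() + np.abs(np.diff(d, axis=1)).sum()

# Fixed image: a bright square; moving image: the same square shifted by
# (2, 3) pixels and sitting on a different background level.
fixed = np.zeros((32, 32))
fixed[8:16, 10:20] = 1.0
moving = np.full((32, 32), 0.3)
moving[10:18, 13:23] = 1.3

# Brute-force search over integer translations minimizing TVDM.
score, (dy, dx) = min(
    (tvdm(fixed, np.roll(moving, (-sy, -sx), axis=(0, 1))), (sy, sx))
    for sy in range(-5, 6) for sx in range(-5, 6)
)
# At the correct shift the difference map is a constant offset, whose total
# variation is (near) zero -- TVDM is insensitive to the background mismatch.
```

The constant-offset case above hints at why this metric tolerates vastly different backgrounds: any spatially uniform intensity difference contributes nothing to the total variation.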
42.
Wang J, Xiang K, Chen K, Liu R, Ni R, Zhu H, Xiong Y. Medical Image Registration Algorithm Based on Bounded Generalized Gaussian Mixture Model. Front Neurosci 2022; 16:911957. [PMID: 35720703] [PMCID: PMC9201218] [DOI: 10.3389/fnins.2022.911957]
Abstract
In this paper, a method for medical image registration based on the bounded generalized Gaussian mixture model is proposed. The bounded generalized Gaussian mixture model is used to approximate the joint intensity distribution of the source medical images. The mixture model is formulated within a maximum likelihood framework and is solved by an expectation-maximization algorithm. The registration performance of the proposed approach on different medical images is verified through extensive computer simulations. Empirical findings confirm that the proposed approach is significantly better than other conventional ones.
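The expectation-maximization machinery referred to above can be sketched in a simplified form. The code below fits a plain one-dimensional two-component Gaussian mixture by maximum likelihood — not the paper's bounded generalized variant, and on synthetic intensities rather than a joint intensity distribution; the initialization scheme and data are assumptions made here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """EM for a 1-D Gaussian mixture under maximum likelihood.
    Deterministic init: means spread evenly between min(x) and max(x)."""
    mu = np.linspace(x.min(), x.max(), k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x[i]).
        p = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var

# Synthetic intensities drawn from two well-separated modes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.5, 2000), rng.normal(5.0, 0.8, 2000)])
pi, mu, var = em_gmm_1d(x)
order = np.argsort(mu)  # sort components by mean for inspection
```

The bounded generalized form in the paper replaces each Gaussian density with a truncated generalized Gaussian, which changes the M-step updates but keeps the same alternating E/M structure.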
43.
Khawaled S, Freiman M. NPBDREG: Uncertainty Assessment in Diffeomorphic Brain MRI Registration using a Non-parametric Bayesian Deep-Learning Based Approach. Comput Med Imaging Graph 2022; 99:102087. [DOI: 10.1016/j.compmedimag.2022.102087]
44.
Lowther N, Louwe R, Yuen J, Hardcastle N, Yeo A, Jameson M. MIRSIG position paper: the use of image registration and fusion algorithms in radiotherapy. Phys Eng Sci Med 2022; 45:421-428. [PMID: 35522369] [PMCID: PMC9239966] [DOI: 10.1007/s13246-022-01125-3]
Abstract
The report of the American Association of Physicists in Medicine (AAPM) Task Group No. 132 published in 2017 reviewed rigid image registration and deformable image registration (DIR) approaches and solutions to provide recommendations for quality assurance and quality control of clinical image registration and fusion techniques in radiotherapy. However, that report did not include the use of DIR for advanced applications such as dose warping or warping of other matrices of interest. Considering that DIR warping tools are now readily available, discussions were hosted by the Medical Image Registration Special Interest Group (MIRSIG) of the Australasian College of Physical Scientists & Engineers in Medicine in 2018 to form a consensus on best practice guidelines. This position statement authored by MIRSIG endorses the recommendations of the report of AAPM task group 132 and expands on the best practice advice from the 'Deforming to Best Practice' MIRSIG publication to provide guidelines on the use of DIR for advanced applications.
45.
Ming Y, Dong X, Zhao J, Chen Z, Wang H, Wu N. Deep learning-based multimodal image analysis for cervical cancer detection. Methods 2022; 205:46-52. [DOI: 10.1016/j.ymeth.2022.05.004]
46.
Cho HH, Kim CK, Park H. Overview of radiomics in prostate imaging and future directions. Br J Radiol 2022; 95:20210539. [PMID: 34797688] [PMCID: PMC8978251] [DOI: 10.1259/bjr.20210539]
Abstract
Recent advancements in imaging technology and analysis methods have led to an analytic framework known as radiomics. This framework extracts comprehensive high-dimensional features from imaging data and performs data mining to build analytical models for improved decision-support. Its features include many categories spanning texture and shape; thus, it can provide abundant information for precision medicine. Many studies of prostate radiomics have shown promising results in the assessment of pathological features, prediction of treatment response, and stratification of risk groups. Herein, we aimed to provide a general overview of radiomics procedures, discuss technical issues, explain various clinical applications, and suggest future research directions, especially for prostate imaging.
47.
Yu Z, Nguyen J, Nguyen TD, Kelly J, Mclean C, Bonnington P, Zhang L, Mar V, Ge Z. Early Melanoma Diagnosis With Sequential Dermoscopic Images. IEEE Trans Med Imaging 2022; 41:633-646. [PMID: 34648437] [DOI: 10.1109/tmi.2021.3120091]
Abstract
Dermatologists often diagnose or rule out early melanoma by evaluating the follow-up dermoscopic images of skin lesions. However, existing algorithms for early melanoma diagnosis are developed using single time-point images of lesions. Ignoring the temporal, morphological changes of lesions can lead to misdiagnosis in borderline cases. In this study, we propose a framework for automated early melanoma diagnosis using sequential dermoscopic images. To this end, we construct our method in three steps. First, we align sequential dermoscopic images of skin lesions using estimated Euclidean transformations, extract the lesion growth region by computing image differences among the consecutive images, and then propose a spatio-temporal network to capture the dermoscopic changes from aligned lesion images and the corresponding difference images. Finally, we develop an early diagnosis module to compute probability scores of malignancy for lesion images over time. We collected 179 serial dermoscopic imaging data from 122 patients to verify our method. Extensive experiments show that the proposed model outperforms other commonly used sequence models. We also compared the diagnostic results of our model with those of seven experienced dermatologists and five registrars. Our model achieved higher diagnostic accuracy than clinicians (63.69% vs. 54.33%, respectively) and provided an earlier diagnosis of melanoma (60.7% vs. 32.7% of melanoma correctly diagnosed on the first follow-up images). These results demonstrate that our model can be used to identify melanocytic lesions that are at high-risk of malignant transformation earlier in the disease process and thereby redefine what is possible in the early detection of melanoma.
48
Moshaei-Nezhad Y, Müller J, Oelschlägel M, Kirsch M, Tetzlaff R. Registration of IRT and visible light images in neurosurgery: analysis and comparison of automatic intensity-based registration approaches. Int J Comput Assist Radiol Surg 2022; 17:683-697. [PMID: 35175502 DOI: 10.1007/s11548-022-02562-x] [Received: 06/03/2021] [Accepted: 01/06/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE The purpose of this study is to analyze and compare six automatic intensity-based registration methods for intraoperative infrared thermography (IRT) and visible light (VIS/RGB) imaging. The practical requirement is a small Euclidean distance between manually set landmarks in the reference and target images, together with a high structural similarity index metric (SSIM) and peak signal-to-noise ratio (PSNR) with respect to the reference image. METHODS In this study, preprocessing is first applied to bring both image types to a similar intensity range. A similarity transformation, driven by two optimizers and two similarity measures, is then employed to roughly align the IRT and visible light images. Thereafter, because respiration and heartbeat displace the brain surface by locally different amounts, two non-rigid transformations are applied, and finally bicubic interpolation is carried out to compensate for the resulting estimated transformation. Performance was assessed on eleven image datasets. The registration accuracy of the different computational approaches was evaluated using SSIM and PSNR. Additionally, five concise landmarks per dataset were selected manually in the reference and target images, and the Euclidean distances between corresponding landmarks were compared. RESULTS The results show that combining normalized intensity and the mutual information measure with the one-plus-one evolutionary optimizer, followed by Demon registration, yields better accuracy and performance than all other methods tested here. Furthermore, this combination achieved [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] registrations for datasets 1, 2, 5, 7, and 8 relative to the second-best result, as measured by the mean Euclidean distance of five landmarks.
CONCLUSIONS We conclude that the mutual information measure with the one-plus-one evolutionary optimizer, combined with Demon registration, achieves better accuracy and performance than the other methods evaluated here for automatic registration of IRT and visible light images in neurosurgery.
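The evaluation criteria named in this abstract, PSNR against the reference image and the mean Euclidean distance between manually set landmarks, can be sketched in a few lines; a minimal plain-Python illustration with hypothetical function names, not the authors' implementation:

```python
import math

def psnr(reference, registered, data_range=255.0):
    """Peak signal-to-noise ratio (dB) of a registered image against
    the reference; images are given as flat lists of pixel intensities."""
    mse = sum((r - g) ** 2 for r, g in zip(reference, registered)) / len(reference)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

def mean_landmark_distance(ref_pts, reg_pts):
    """Mean Euclidean distance between corresponding manually set
    landmarks in the reference and registered images."""
    return sum(math.dist(p, q) for p, q in zip(ref_pts, reg_pts)) / len(ref_pts)
```

A lower mean landmark distance and a higher PSNR/SSIM both indicate a better registration, which is how the six methods are ranked in the study.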
Affiliation(s)
- Yahya Moshaei-Nezhad
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, Technische Universität Dresden, 01062, Dresden, Germany.
- Juliane Müller
- Carl Gustav Carus Faculty of Medicine, Anesthesiology and Intensive Care Medicine, Clinical Sensoring and Monitoring, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Martin Oelschlägel
- Carl Gustav Carus Faculty of Medicine, Anesthesiology and Intensive Care Medicine, Clinical Sensoring and Monitoring, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany
- Matthias Kirsch
- Carl Gustav Carus Faculty of Medicine, Department of Neurosurgery, Technische Universität Dresden, Fetscherstraße 74, 01307, Dresden, Germany; Department of Neurosurgery, Asklepios Kliniken Schildautal, Karl-Herold-Str. 1, 38723, Seesen, Germany
- Ronald Tetzlaff
- Institute of Circuits and Systems, Faculty of Electrical and Computer Engineering, Technische Universität Dresden, 01062, Dresden, Germany
49
Ose T, Autio JA, Ohno M, Frey S, Uematsu A, Kawasaki A, Takeda C, Hori Y, Nishigori K, Nakako T, Yokoyama C, Nagata H, Yamamori T, Van Essen DC, Glasser MF, Watabe H, Hayashi T. Anatomical variability, multi-modal coordinate systems, and precision targeting in the marmoset brain. Neuroimage 2022; 250:118965. [PMID: 35122965 PMCID: PMC8948178 DOI: 10.1016/j.neuroimage.2022.118965] [Received: 09/13/2021] [Revised: 01/31/2022] [Accepted: 02/01/2022] [Indexed: 01/02/2023] Open
Abstract
Accurately localising brain regions requires careful evaluation in each experimental species owing to individual variability. However, in animal neuroscience the function and connectivity of brain areas are commonly studied using a single-subject, cranial landmark-based stereotactic atlas. Here, we address this issue in a small primate, the common marmoset, which is increasingly widely used in systems neuroscience. We developed a non-invasive, multi-modal neuroimaging-based targeting pipeline that accounts for intersubject anatomical variability in cranial and cortical landmarks in marmosets. This methodology allowed the creation of multi-modal templates (MarmosetRIKEN20) comprising head CT and brain MR images, embedded in the anterior and posterior commissure (AC-PC) and CIFTI grayordinate coordinate systems. We found that the horizontal plane of the stereotactic coordinate system was significantly rotated in pitch relative to the AC-PC coordinate system (10 degrees, frontal downwards) and had a significant bias and uncertainty due to positioning procedures. We also found that many common cranial and brain landmarks (e.g., bregma, intraparietal sulcus) vary in location across subjects, and that this variation is substantial relative to average marmoset cortical area dimensions. Combining the neuroimaging-based targeting pipeline with robot-guided surgery enabled proof-of-concept targeting of deep brain structures with an accuracy of 0.2 mm. Altogether, our findings demonstrate substantial intersubject variability in marmoset brain and cranial landmarks, implying that subject-specific neuroimaging-based localization is needed for precision targeting in marmosets. The population-based templates and atlases in grayordinates, created for the first time in marmoset monkeys, should help bridge macroscale and microscale analyses.
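The reported pitch offset between the stereotactic and AC-PC coordinate systems is simply the angle between the two horizontal axes in the sagittal plane; a minimal sketch follows (the function name and 2D axis-vector representation are illustrative assumptions, not from the paper):

```python
import math

def pitch_deg(acpc_axis, stereo_axis):
    """Angle in degrees between the anterior AC-PC axis and the
    stereotactic horizontal axis, both given as 2D (anterior, superior)
    vectors in the sagittal plane; this is the pitch offset between
    the two coordinate systems."""
    ax, ay = acpc_axis
    bx, by = stereo_axis
    dot = ax * bx + ay * by
    na = math.hypot(ax, ay)
    nb = math.hypot(bx, by)
    cosang = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for safety
    return math.degrees(math.acos(cosang))
```

With the AC-PC axis horizontal and the stereotactic axis tilted 10 degrees frontal-downwards, the function returns the 10-degree pitch reported in the study.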
Affiliation(s)
- Takayuki Ose
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan.
- Joonas A Autio
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan.
- Masahiro Ohno
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan.
- Akiko Uematsu
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan.
- Akihiro Kawasaki
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan.
- Chiho Takeda
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan.
- Yuki Hori
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Department of Functional Brain Imaging, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan.
- Kantaro Nishigori
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Sumitomo Dainippon Pharma Co., Ltd., Osaka, Japan.
- Tomokazu Nakako
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Sumitomo Dainippon Pharma Co., Ltd., Osaka, Japan.
- Chihiro Yokoyama
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Faculty of Human Life and Environmental Science, Nara Women's University, Nara, Japan.
- Tetsuo Yamamori
- Laboratory for Molecular Analysis of Higher Brain Function, RIKEN Center for Brain Science, Wako, Japan.
- David C Van Essen
- Department of Neuroscience, Washington University Medical School, St Louis, MO, USA.
- Matthew F Glasser
- Department of Neuroscience, Washington University Medical School, St Louis, MO, USA; Department of Radiology, Washington University Medical School, St Louis, MO, USA.
- Hiroshi Watabe
- Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan.
- Takuya Hayashi
- Laboratory for Brain Connectomics Imaging, RIKEN Center for Biosystems Dynamics Research, Kobe, Japan; Department of Brain Connectomics, Kyoto University Graduate School of Medicine, Kyoto, Japan.
50
Hoque MZ, Keskinarkaus A, Nyberg P, Mattila T, Seppänen T. Whole slide image registration via multi-stained feature matching. Comput Biol Med 2022; 144:105301. [DOI: 10.1016/j.compbiomed.2022.105301] [Received: 11/30/2021] [Revised: 01/18/2022] [Accepted: 01/24/2022] [Indexed: 11/15/2022]
|