1
Woo B, Engstrom C, Baresic W, Fripp J, Crozier S, Chandra SS. Automated anomaly-aware 3D segmentation of bones and cartilages in knee MR images from the Osteoarthritis Initiative. Med Image Anal 2024; 93:103089. [PMID: 38246088] [DOI: 10.1016/j.media.2024.103089]
Abstract
In medical image analysis, automated segmentation of multi-component anatomical entities, with the possible presence of variable anomalies or pathologies, is a challenging task. In this work, we develop a multi-step approach using U-Net-based models to initially detect anomalies (bone marrow lesions, bone cysts) in the distal femur, proximal tibia and patella from 3D magnetic resonance (MR) images in individuals with varying grades of knee osteoarthritis. Subsequently, the extracted data are used for downstream tasks involving semantic segmentation of individual bone and cartilage volumes as well as bone anomalies. For anomaly detection, U-Net-based models were developed to reconstruct bone volume profiles of the femur and tibia in images via inpainting so anomalous bone regions could be replaced with close to normal appearances. The reconstruction error was used to detect bone anomalies. An anomaly-aware segmentation network, which was compared to anomaly-naïve segmentation networks, was used to provide a final automated segmentation of the individual femoral, tibial and patellar bone and cartilage volumes from the knee MR images which contain a spectrum of bone anomalies. The anomaly-aware segmentation approach provided up to 58% reduction in Hausdorff distances for bone segmentations compared to the results from anomaly-naïve segmentation networks. In addition, the anomaly-aware networks were able to detect bone anomalies in the MR images with greater sensitivity and specificity (area under the receiver operating characteristic curve [AUC] up to 0.896) compared to anomaly-naïve segmentation networks (AUC up to 0.874).
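The anomaly-detection step described in this abstract, comparing each image with its inpainted "normal" reconstruction and flagging voxels with large error, can be sketched in plain Python. This is a minimal illustration; the threshold and toy intensities below are assumptions, not values from the paper:

```python
def anomaly_mask(original, reconstruction, threshold):
    """Flag voxels whose absolute reconstruction error exceeds a threshold.

    `original` and `reconstruction` are flat lists of intensities; the
    inpainting model is assumed to restore a close-to-normal bone
    appearance, so a large error suggests an anomaly (e.g. a lesion).
    """
    return [abs(o - r) > threshold for o, r in zip(original, reconstruction)]

# Toy 1D "bone profile": the reconstruction smooths out a bright lesion at index 2.
observed      = [0.50, 0.52, 0.95, 0.51, 0.49]
reconstructed = [0.50, 0.51, 0.52, 0.50, 0.49]
mask = anomaly_mask(observed, reconstructed, threshold=0.2)
```

In the paper's full pipeline this error map is produced in 3D and feeds the anomaly-aware segmentation network rather than a hard threshold alone.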
Affiliation(s)
- Boyeong Woo: School of Electrical Engineering and Computer Science, The University of Queensland, Australia
- Craig Engstrom: School of Human Movement and Nutrition Sciences, The University of Queensland, Australia
- William Baresic: School of Human Movement and Nutrition Sciences, The University of Queensland, Australia
- Jurgen Fripp: School of Electrical Engineering and Computer Science, The University of Queensland, Australia; Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organization, Australia
- Stuart Crozier: School of Electrical Engineering and Computer Science, The University of Queensland, Australia
- Shekhar S Chandra: School of Electrical Engineering and Computer Science, The University of Queensland, Australia
2
Jiang M, Wang S, Song Z, Song L, Wang Y, Zhu C, Zheng Q. Cross2SynNet: cross-device-cross-modal synthesis of routine brain MRI sequences from CT with brain lesion. MAGMA 2024; 37:241-256. [PMID: 38315352] [DOI: 10.1007/s10334-023-01145-4]
Abstract
OBJECTIVES CT and MRI are often needed together to determine the location and extent of brain lesions and improve diagnosis, but patients with acute brain diseases often cannot complete an MRI examination within a short time. The aim of this study was to devise a cross-device, cross-modal medical image synthesis (MIS) method, Cross2SynNet, for synthesizing the routine brain MRI sequences T1WI, T2WI, FLAIR, and DWI from CT in the presence of stroke and brain tumors. MATERIALS AND METHODS In this retrospective study, the participants covered four diseases: cerebral ischemic stroke (CIS cohort), cerebral hemorrhage (CH cohort), meningioma (M cohort), and glioma (G cohort). The MIS model Cross2SynNet was built on the basic architecture of a conditional generative adversarial network (CGAN): a fully convolutional Transformer (FCT) module was adopted in the generator to capture short- and long-range dependencies between healthy and pathological tissues, and an edge loss function minimized the difference in gradient magnitude between the synthetic image and the ground truth. Three metrics were used for evaluation: mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). RESULTS A total of 230 participants (mean age, 59.77 years ± 13.63 [standard deviation]; 163 men [71%] and 67 women [29%]) were included: the CIS cohort (95 participants, Dec 2019-Feb 2022), CH cohort (69 participants, Jan 2020-Dec 2021), M cohort (40 participants, Sep 2018-Dec 2021), and G cohort (26 participants, Sep 2019-Dec 2021). Cross2SynNet achieved average values of MSE = 0.008, PSNR = 21.728, and SSIM = 0.758 when synthesizing MRI from CT, outperforming CycleGAN, pix2pix, RegGAN, Pix2PixHD, and ResViT. Cross2SynNet could synthesize the brain lesion on pseudo-DWI even when the CT image did not exhibit a clear signal in acute ischemic stroke patients.
CONCLUSIONS Cross2SynNet achieved routine brain MRI synthesis of T1WI, T2WI, FLAIR, and DWI from CT with promising performance in the presence of stroke and brain tumor lesions.
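The edge loss mentioned in this abstract penalizes differences in gradient magnitude between the synthetic image and the ground truth. A plain-Python sketch using forward finite differences illustrates the idea; the paper's actual gradient operator and loss weighting may differ:

```python
import math

def grad_magnitude(img):
    """Gradient-magnitude map of a 2D image (list of lists), forward differences."""
    h, w = len(img), len(img[0])
    g = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            dy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            g[y][x] = math.hypot(dx, dy)
    return g

def edge_loss(synthetic, target):
    """Mean absolute difference between the two gradient-magnitude maps."""
    gs, gt = grad_magnitude(synthetic), grad_magnitude(target)
    n = len(gs) * len(gs[0])
    return sum(abs(a - b) for rs, rt in zip(gs, gt) for a, b in zip(rs, rt)) / n

# Identical images incur zero edge loss; a blurred edge is penalized.
sharp  = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
blurry = [[0, 0.5, 1], [0, 0.5, 1], [0, 0.5, 1]]
```

Penalizing gradients rather than raw intensities pushes the generator to reproduce lesion boundaries rather than only average brightness.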
Affiliation(s)
- Minbo Jiang: School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Shuai Wang: Department of Radiology, Binzhou Medical University Hospital, Binzhou, 256603, China
- Zhiwei Song: School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Limei Song: School of Medical Imaging, Weifang Medical University, Weifang, 261000, China
- Yi Wang: School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Chuanzhen Zhu: School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
- Qiang Zheng: School of Computer and Control Engineering, Yantai University, No 30, Qingquan Road, Laishan District, Yantai, 264005, Shandong, China
3
Huang X, Bajaj R, Li Y, Ye X, Lin J, Pugliese F, Ramasamy A, Gu Y, Wang Y, Torii R, Dijkstra J, Zhou H, Bourantas CV, Zhang Q. POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation. Med Image Anal 2023; 89:102922. [PMID: 37598605] [DOI: 10.1016/j.media.2023.102922]
Abstract
Intravascular ultrasound (IVUS) is recommended for guiding coronary intervention. Segmentation of the coronary lumen and external elastic membrane (EEM) borders in IVUS images is a key step, but the manual process is time-consuming, error-prone, and subject to inter-observer variability. In this paper, we propose a novel perceptual organisation-aware selective transformer framework that achieves accurate and robust segmentation of the vessel walls in IVUS images. In this framework, temporal context-based feature encoders extract efficient motion features of vessels. A perceptual organisation-aware selective transformer module is then proposed to extract accurate boundary information, supervised by a dedicated boundary loss. The resulting EEM and lumen segmentations are fused in a temporal constraining and fusion module to determine the most likely correct boundaries, with robustness to varying morphology. The proposed methods are extensively evaluated on unselected IVUS sequences, including normal, bifurcated, and calcified vessels with shadow artifacts. The results show that the proposed methods outperform the state of the art, with a Jaccard measure of 0.92 for the lumen and 0.94 for the EEM on the IVUS 2011 open challenge dataset. This work has been integrated into the software QCU-CMS2 to automatically segment IVUS images in a user-friendly environment.
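The Jaccard measure quoted in this abstract (0.92 for the lumen, 0.94 for the EEM) is the intersection-over-union of the predicted and reference masks. A minimal sketch with toy flat masks (illustrative values, not the challenge data):

```python
def jaccard(pred, ref):
    """Jaccard index (IoU) of two binary masks given as flat 0/1 lists."""
    inter = sum(1 for p, r in zip(pred, ref) if p and r)
    union = sum(1 for p, r in zip(pred, ref) if p or r)
    return inter / union if union else 1.0  # define empty-vs-empty as perfect

pred = [1, 1, 0, 1, 0]
ref  = [1, 1, 1, 0, 0]
score = jaccard(pred, ref)  # intersection 2, union 4 -> 0.5
```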
Affiliation(s)
- Xingru Huang: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E3 4BL, UK; School of Communication Engineering, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, Zhejiang, China
- Retesh Bajaj: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Yilong Li: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E3 4BL, UK
- Xin Ye: Zhejiang Provincial People's Hospital, 270 West Xueyuan Road, Wenzhou, Zhejiang, China
- Ji Lin: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E3 4BL, UK
- Francesca Pugliese: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Anantharaman Ramasamy: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Yue Gu: Zhejiang Institute of Mechanical and Electrical Engineering, Hangzhou, China
- Yaqi Wang: College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Ryo Torii: Department of Mechanical Engineering, University College London, London, UK
- Huiyu Zhou: School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Christos V Bourantas: Department of Cardiology, Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE, UK; Centre for Cardiovascular Medicine and Devices, William Harvey Research Institute, Queen Mary University of London, London, UK
- Qianni Zhang: School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E3 4BL, UK
4
Wang Y, Xia W, Yan Z, Zhao L, Bian X, Liu C, Qi Z, Zhang S, Tang Z. Root canal treatment planning by automatic tooth and root canal segmentation in dental CBCT with deep multi-task feature learning. Med Image Anal 2023; 85:102750. [PMID: 36682153] [DOI: 10.1016/j.media.2023.102750]
Abstract
Accurate and automatic segmentation of individual teeth and root canals from cone-beam computed tomography (CBCT) images is an essential but challenging step in dental surgical planning. In this paper, we propose a novel framework consisting of two neural networks, DentalNet and PulpNet, for efficient, precise, and fully automatic tooth instance segmentation and root canal segmentation from CBCT images. We first use the proposed DentalNet to achieve tooth instance segmentation and identification. Then, the region of interest (ROI) of the affected tooth is extracted and fed into PulpNet to obtain precise segmentation of the pulp chamber and the root canal space. These two networks are trained by multi-task feature learning, evaluated on two clinical datasets respectively, and achieve performance superior to several competing methods. In addition, we incorporate our method into an efficient clinical workflow to improve the surgical planning process. In two clinical case studies, our workflow took only 2 min instead of 6 h to obtain the 3D model of the tooth and root canal for surgical planning, resulting in satisfactory outcomes in difficult root canal treatments.
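The hand-off between the two networks in this abstract, cropping the ROI of the affected tooth from the instance-segmentation output before feeding it to the canal-segmentation model, amounts to a bounding-box crop around one instance label. A simplified 2D sketch (the real pipeline is 3D, and the label values and margin are hypothetical):

```python
def crop_roi(label_map, tooth_id, margin=1):
    """Crop the bounding box of one instance label from a 2D label map,
    expanded by a margin and clipped to the image bounds."""
    rows = [y for y, row in enumerate(label_map) for v in row if v == tooth_id]
    cols = [x for row in label_map for x, v in enumerate(row) if v == tooth_id]
    if not rows:
        raise ValueError(f"label {tooth_id} not found")
    y0 = max(min(rows) - margin, 0)
    y1 = min(max(rows) + margin + 1, len(label_map))
    x0 = max(min(cols) - margin, 0)
    x1 = min(max(cols) + margin + 1, len(label_map[0]))
    return [row[x0:x1] for row in label_map[y0:y1]]

# 5x5 toy label map: tooth instance 7 occupies a 2x2 patch.
labels = [
    [0, 0, 0, 0, 0],
    [0, 0, 7, 7, 0],
    [0, 0, 7, 7, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
roi = crop_roi(labels, tooth_id=7)  # 4x4 crop around the instance
```

Cropping to the ROI lets the second-stage network operate on a small, high-resolution patch instead of the whole volume.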
Affiliation(s)
- Yiwei Wang: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Wenjun Xia: Shanghai Xuhui District Dental Center, Shanghai 200031, China
- Zhennan Yan: SenseBrain Technology, Princeton, NJ 08540, USA
- Liang Zhao: SenseTime Research, Shanghai 200233, China
- Xiaohe Bian: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Chang Liu: SenseTime Research, Shanghai 200233, China
- Zhengnan Qi: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
- Shaoting Zhang: Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China; Centre for Perceptual and Interactive Intelligence (CPII), Hong Kong Special Administrative Region of China
- Zisheng Tang: Department of Endodontics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai 200011, China
5
Wang F, Xu X, Yang D, Chen RC, Royce TJ, Wang A, Lian J, Lian C. Dynamic Cross-Task Representation Adaptation for Clinical Targets Co-Segmentation in CT Image-Guided Post-Prostatectomy Radiotherapy. IEEE Trans Med Imaging 2023; 42:1046-1055. [PMID: 36399586] [PMCID: PMC10209913] [DOI: 10.1109/tmi.2022.3223405]
Abstract
Adjuvant and salvage radiotherapy after radical prostatectomy requires precise delineations of the prostate bed (PB), i.e., the clinical target volume, and surrounding organs at risk (OARs) to optimize radiotherapy planning. Segmenting the PB is particularly challenging even for clinicians, e.g., from planning computed tomography (CT) images, as it is an invisible/virtual target after the operative removal of the cancerous prostate gland. Very recently, a few deep learning-based methods have been proposed to automatically contour the non-contrast PB by leveraging its spatial reliance on adjacent OARs (i.e., the bladder and rectum) with much clearer boundaries, mimicking the clinical workflow of experienced clinicians. Although achieving state-of-the-art results from both the clinical and technical aspects, these existing methods improperly ignore the gap between the hierarchical feature representations needed for segmenting these fundamentally different clinical targets (i.e., PB and OARs), which in turn limits their delineation accuracy. This paper proposes an asymmetric multi-task network integrating dynamic cross-task representation adaptation (DyAdapt) for accurate and efficient co-segmentation of the PB and OARs in one pass from CT images. In the learning-to-learn framework, the DyAdapt modules adaptively transfer the hierarchical feature representations from the source task of OARs segmentation to match the target (and more challenging) task of PB segmentation, conditioned on the dynamic inter-task associations learned from the learning states of the feed-forward path. On a real-patient dataset, our method led to state-of-the-art results for PB and OARs co-segmentation. Code is available at https://github.com/ladderlab-xjtu/DyAdapt.
6
Yuan B, Sun Z, Pei L, Li W, Ding M, Hao X. Super-Resolution Reconstruction Method of Pavement Crack Images Based on an Improved Generative Adversarial Network. Sensors (Basel) 2022; 22:9092. [PMID: 36501791] [PMCID: PMC9737262] [DOI: 10.3390/s22239092]
Abstract
A super-resolution reconstruction approach based on an improved generative adversarial network is presented to overcome the large disparities in image quality caused by variable equipment and illumination conditions in the image-collection stage of intelligent pavement detection. The nonlinear network of the generator is first improved, with a Residual Dense Block (RDB) introduced in place of Batch Normalization (BN). An Attention Module is then formed by combining the RDB, a Gated Recurrent Unit (GRU), and a Conv Layer. Finally, a loss function based on the L1 norm replaces the original loss function. The experimental findings demonstrate that the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) of the reconstructed images on the self-built pavement crack dataset reach 29.21 dB and 0.854, respectively, with improvements also observed on the Set5, Set14, and BSD100 datasets. Additionally, the effects of image reconstruction on detection and segmentation are confirmed using Faster-RCNN and a Fully Convolutional Network (FCN): compared with state-of-the-art methods, the F1 score of the segmentation results improves by 0.012 to 0.737, and the confidence of the detection results increases by 0.031 to 0.9102. The method has significant engineering application value and can effectively increase pavement crack-detection accuracy.
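The PSNR figure reported in this abstract (29.21 dB) follows directly from the mean squared error between the reconstructed and reference images via PSNR = 10·log10(MAX²/MSE). A minimal sketch with toy pixel values (the 8-bit peak value 255 is the usual convention, not a value from the paper):

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images
    given as flat lists of pixel intensities."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref  = [100, 120, 130, 140]
test = [101, 119, 131, 141]  # every pixel off by 1 -> MSE = 1
value = psnr(ref, test)
```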
7
Dalmaz O, Yurt M, Cukur T. ResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis. IEEE Trans Med Imaging 2022; 41:2598-2614. [PMID: 35436184] [DOI: 10.1109/tmi.2022.3167808]
Abstract
Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate superiority of ResViT against competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.
8
Boutillon A, Borotikar B, Burdin V, Conze PH. Multi-structure bone segmentation in pediatric MR images with combined regularization from shape priors and adversarial network. Artif Intell Med 2022; 132:102364. [DOI: 10.1016/j.artmed.2022.102364]
9
Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. Int J Multimed Inf Retr 2022; 11:333-368. [PMID: 35821891] [PMCID: PMC9264294] [DOI: 10.1007/s13735-022-00240-x]
Abstract
Recent advancements in deep generative models have shown significant potential for image synthesis, detection, segmentation, and classification tasks. Segmenting medical images is considered a primary challenge in the biomedical imaging field, and various GAN-based models have been proposed in the literature to address medical segmentation challenges. Our search identified 151 papers; after a twofold screening, 138 papers were selected for the final survey. A comprehensive survey is conducted on the application of GAN networks to medical image segmentation, primarily focused on the various GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source code. Secondly, this paper provides a detailed overview of GAN network applications in segmenting different human diseases. We conclude with a critical discussion, the limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN network implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab: Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
10
Tran MQ, Do T, Tran H, Tjiputra E, Tran QD, Nguyen A. Light-Weight Deformable Registration Using Adversarial Learning With Distilling Knowledge. IEEE Trans Med Imaging 2022; 41:1443-1453. [PMID: 34990354] [DOI: 10.1109/tmi.2022.3141013]
Abstract
Deformable registration is a crucial step in many medical procedures such as image-guided surgery and radiation therapy. Most recent learning-based methods focus on improving accuracy by optimizing the non-linear spatial correspondence between the input images; these methods are therefore computationally expensive and require modern graphics cards for real-time deployment. In this paper, we introduce a new Light-weight Deformable Registration network that significantly reduces the computational cost while achieving competitive accuracy. In particular, we propose a new adversarial learning with distilling knowledge algorithm that successfully transfers meaningful information from an effective but expensive teacher network to a student network. We design the student network to be light-weight and well suited for deployment on a typical CPU. Extensive experimental results on different public datasets show that our proposed method achieves state-of-the-art accuracy while being significantly faster than recent methods. We further show that the use of our adversarial learning algorithm is essential for a time-efficient deformable registration method. Finally, our source code and trained models are available at https://github.com/aioz-ai/LDR_ALDK.
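Knowledge distillation of the kind this abstract describes conventionally trains the light student against a weighted sum of two terms: match the ground truth, and mimic the teacher's output. A generic sketch (the `alpha` weight and plain-MSE terms are assumptions; the paper's adversarial formulation is more involved):

```python
def distill_loss(student_out, teacher_out, target, alpha=0.5):
    """Weighted sum of a task loss (match the ground truth) and a
    distillation loss (mimic the teacher's predictions), MSE for both."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return alpha * mse(student_out, target) + (1 - alpha) * mse(student_out, teacher_out)

# A student that reproduces a perfect teacher incurs zero combined loss.
loss = distill_loss([0.9, 2.1], [1.0, 2.0], [1.0, 2.0])
```

The teacher term supplies a denser training signal than the ground truth alone, which is what lets the much smaller student stay competitive.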
11
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601] [DOI: 10.1016/j.cmpb.2022.106932]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images carrying complementary feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI, and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for data training between the different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, and a single generation network can establish the non-linear mapping relationships among multiple image domains. The discriminator used a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images similar to the real images at both shallow and deep levels. The accuracy of the pseudo-medical images was verified anatomically and dosimetrically. RESULTS In the three synthesis directions, namely CBCT → CT, CBCT → MRI, and MRI → CT, there were significant differences (p < 0.05) in the three-fold cross-validation results on the PSNR and SSIM metrics between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, the MAE metric results of TGAN in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT), presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI metric results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the dose-uncertainty measurements in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05); the differences were statistically significant. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained by the new model was 94.94% (0.73%), better than that of the three comparison models. CONCLUSIONS The pseudo-medical images acquired with TGAN were close to the real images anatomically and dosimetrically, and have good application prospects in clinical adaptive radiotherapy.
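The gamma pass rate (2%/2 mm) used for dose verification in this entry combines a dose-difference criterion with a distance-to-agreement criterion: a point passes if some nearby evaluated dose is close enough in both dose and space. A deliberately simplified 1D sketch (real gamma analysis works on 3D dose grids with interpolation; the profile values here are illustrative):

```python
import math

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dd=0.02, dta_mm=2.0):
    """Fraction of reference points with gamma <= 1 (global dose difference).

    For each reference point, search all evaluated points for the minimum
    combined (dose-difference / dd, distance / dta) metric. 1D grid, no
    interpolation between points.
    """
    d_max = max(dose_ref)  # global normalization dose
    passed = 0
    for i, dr in enumerate(dose_ref):
        gamma = min(
            math.hypot((de - dr) / (dd * d_max), (j - i) * spacing_mm / dta_mm)
            for j, de in enumerate(dose_eval)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / len(dose_ref)

ref     = [1.0, 2.0, 3.0, 2.0, 1.0]
shifted = [1.0, 1.0, 2.0, 3.0, 2.0]  # same profile shifted by one 1-mm step
rate = gamma_pass_rate(shifted, ref, spacing_mm=1.0)
```

The shifted profile still passes at most points because the 2 mm distance-to-agreement tolerance absorbs a 1 mm spatial shift.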
Affiliation(s)
- Hongfei Sun: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Qianyi Xi: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jiawei Sun: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Rongbo Fan: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Kai Xie: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Xinye Ni: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jianhua Yang: School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
12
Low-contrast lesion segmentation in advanced MRI experiments by time-domain Ricker-type wavelets and fuzzy 2-means. Appl Intell 2022. [DOI: 10.1007/s10489-022-03184-1]
13
Jaber MM, Abd SK, Ali SM. Adam Optimized Deep Learning Model for Segmenting ROI Region in Medical Imaging. Proceedings of International Conference on Emerging Technologies and Intelligent Systems 2022:669-691. [DOI: 10.1007/978-3-030-85990-9_54]
14
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356] [DOI: 10.1088/1361-6560/ac4123]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH The MRI and CT images obtained at the simulation stage with cervical cancer were selected to train the model. The generator adopted the DenseNet as the main architecture. The local and global discriminators based on convolutional neural network jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by four-fold cross-validation method. In the prediction stage, the data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and they were compared with the pseudo-CT synthesized by GAN with generator based on the architectures of ResNet, sU-Net, and FCN. MAIN RESULTS There are significant differences(P<0.05) in the four-fold-cross validation results on peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained based on MD-CycleGAN and the ground truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had closer anatomical information to the CTgt with root mean square error of 47.83±2.92 HU and normalized mutual information value of 0.9014±0.0212 and mean absolute error value of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dosemax, Dosemin and Dosemean based on the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CT. The u-values of the Wilcoxon test were 55.407, 41.82 and 56.208, and the differences were statistically significant. The 2%/2 mm-based gamma pass rate (%) of the proposed method was 95.45±1.91, and the comparison methods (ResNet_GAN, sUnet_GAN and FCN_GAN) were 93.33±1.20, 89.64±1.63 and 87.31±1.94, respectively. 
SIGNIFICANCE The pseudo-CT images obtained with MD-CycleGAN have higher image quality and are closer to the CTgt in terms of anatomy and dosimetry than those synthesized by other GAN models.
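The anatomical-accuracy figures above (mean absolute error and root mean square error in HU) are voxel-wise comparisons between the pseudo-CT and the CTgt. As an illustrative sketch (not the authors' code), these two metrics can be computed as follows:

```python
import math

def hu_errors(pseudo_ct, ct_gt):
    """Voxel-wise error metrics (in HU) between a pseudo-CT and the
    ground-truth CT, both given as flat sequences of HU values."""
    diffs = [p - g for p, g in zip(pseudo_ct, ct_gt)]
    mae = sum(abs(d) for d in diffs) / len(diffs)           # mean absolute error
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # root mean square error
    return mae, rmse

# Toy 4-voxel volumes for illustration:
mae, rmse = hu_errors([10, -50, 200, 0], [0, -40, 180, 10])
```

In practice the same computation is applied over full 3D volumes (typically as array operations), but the definitions are identical.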
Affiliation(s)
- Hongfei Sun
- Northwestern Polytechnical University, School of Automation, Xi'an, Shaanxi, 710129, China
- Qianyi Xi
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Rongbo Fan
- Northwestern Polytechnical University, School of Automation, Xi'an, Shaanxi, 710129, China
- Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Jianhua Yang
- Northwestern Polytechnical University, School of Automation, Xi'an, Shaanxi, 710129, China
|
15
|
Gong H, Liu J, Li S, Chen B. Axial-SpineGAN: simultaneous segmentation and diagnosis of multiple spinal structures on axial magnetic resonance imaging images. Phys Med Biol 2021; 66. [PMID: 33887718 DOI: 10.1088/1361-6560/abfad9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 04/22/2021] [Indexed: 11/12/2022]
Abstract
Providing a simultaneous segmentation and diagnosis of the spinal structures on axial magnetic resonance imaging (MRI) images has significant value for subsequent pathological analyses and clinical treatments. However, this task remains challenging, owing to the significant structural diversity, subtle differences between normal and abnormal structures, implicit borders, and insufficient training data. In this study, we propose an innovative network framework called 'Axial-SpineGAN' comprising a generator, discriminator, and diagnostor, aiming to address the above challenges and to achieve simultaneous segmentation and disease diagnosis for discs, neural foramens, thecal sacs, and posterior arches on axial MRI images. The generator employs an enhancing feature fusion module to generate discriminative features, i.e. to address the challenges regarding the significant structural diversity and subtle differences between normal and abnormal structures. An enhancing border alignment module is employed to obtain an accurate pixel classification of the implicit borders. The discriminator employs an adversarial learning module to effectively strengthen the higher-order spatial consistency and to avoid overfitting owing to insufficient training data. The diagnostor employs an automated diagnosis module to provide automated recognition of spinal diseases. Extensive experiments demonstrate that these modules have positive effects on improving the segmentation and diagnosis accuracies. Additionally, the results indicate that Axial-SpineGAN achieves the highest segmentation accuracy (Dice similarity coefficient of 94.9% ± 1.8%) and the highest diagnosis accuracy (93.9% ± 2.6%), thereby outperforming existing state-of-the-art methods. Therefore, our proposed Axial-SpineGAN is effective and has potential as a clinical tool for automated segmentation and disease diagnosis of multiple spinal structures on MRI images.
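The segmentation accuracy reported here (Dice similarity coefficient of 94.9% ± 1.8%) is the standard overlap measure between a predicted mask and the ground truth. As a minimal illustration (not the paper's implementation), the Dice coefficient for binary masks can be computed as:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 labels."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

# 2 voxels overlap, 3 predicted + 3 true positives -> 2*2/6
dice = dice_coefficient([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```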
Affiliation(s)
- Hao Gong
- Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
- Jianhua Liu
- Beijing Institute of Technology, School of Mechanical Engineering, 5 South Zhongguancun Street, Haidian District, Beijing, 100081, People's Republic of China
- Shuo Li
- Western University, Department of Medical Imaging and Medical Biophysics, London, ON, N6A 5W9, Canada
- Bo Chen
- Western University, School of Health Science, London, ON, N6A 4V2, Canada
|
16
|
Yıldız E, Arslan AT, Yıldız Taş A, Acer AF, Demir S, Şahin A, Erol Barkana D. Generative Adversarial Network Based Automatic Segmentation of Corneal Subbasal Nerves on In Vivo Confocal Microscopy Images. Transl Vis Sci Technol 2021; 10:33. [PMID: 34038501 PMCID: PMC8161698 DOI: 10.1167/tvst.10.6.33] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/05/2021] [Indexed: 11/24/2022] Open
Abstract
Purpose In vivo confocal microscopy (IVCM) is a noninvasive, reproducible, and inexpensive diagnostic tool for corneal diseases. However, widespread and effortless image acquisition in IVCM creates serious image analysis workloads for ophthalmologists, a problem that neural networks could solve quickly. We have produced a novel deep learning algorithm based on generative adversarial networks (GANs), and we compare its accuracy for automatic segmentation of subbasal nerves in IVCM images with a fully convolutional neural network (U-Net) based method. Methods We collected IVCM images from 85 subjects. U-Net and GAN-based image segmentation methods were trained and tested under the supervision of three clinicians for the segmentation of corneal subbasal nerves. Nerve segmentation results for the GAN and U-Net-based methods were compared with the clinicians' annotations using Pearson's R correlation, Bland-Altman analysis, and receiver operating characteristic (ROC) statistics. Additionally, different noise types were applied to IVCM images to evaluate the algorithms' performance under noise common in biomedical imaging. Results The GAN-based algorithm demonstrated correlation and Bland-Altman analysis results similar to U-Net. The GAN-based method showed significantly higher accuracy than U-Net in ROC curves. Additionally, the performance of U-Net deteriorated significantly under the applied noise, especially speckle noise, compared to the GAN. Conclusions This study is the first application of GAN-based algorithms to IVCM images. The GAN-based algorithms demonstrated higher accuracy than U-Net for automatic corneal nerve segmentation in IVCM images, in both patient-acquired and noise-applied images. This GAN-based segmentation method can be used as a facilitating diagnostic tool in ophthalmology clinics.
Translational Relevance Generative adversarial networks are emerging deep learning models for medical image processing, which could be important clinical tools for rapid segmentation and analysis of corneal subbasal nerves in IVCM images.
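Agreement between the automated segmentations and the clinicians' measurements was assessed with Pearson's R (alongside Bland-Altman and ROC analyses). As a self-contained sketch of that correlation measure (illustrative, not the study's code):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# A perfectly linear relationship between algorithm and clinician
# measurements yields r = 1.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```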
Affiliation(s)
- Erdost Yıldız
- Koç University Research Center for Translational Medicine, Koç University, Istanbul, Turkey
- Ayşe Yıldız Taş
- Department of Ophthalmology, Koç University School of Medicine, Istanbul, Turkey
- Sertaç Demir
- Techy Bilişim Ltd., Eskişehir, Turkey
- Department of Computer Engineering, Eskişehir Osmangazi University, Eskişehir, Turkey
- Afsun Şahin
- Koç University Research Center for Translational Medicine, Koç University, Istanbul, Turkey
- Department of Ophthalmology, Koç University School of Medicine, Istanbul, Turkey
- Duygun Erol Barkana
- Department of Electrical and Electronics Engineering, Yeditepe University, Istanbul, Turkey
|
17
|
Valous NA, Moraleda RR, Jäger D, Zörnig I, Halama N. Interrogating the microenvironmental landscape of tumors with computational image analysis approaches. Semin Immunol 2020; 48:101411. [PMID: 33168423 DOI: 10.1016/j.smim.2020.101411] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 08/13/2020] [Accepted: 09/04/2020] [Indexed: 02/07/2023]
Abstract
The tumor microenvironment is an interacting heterogeneous collection of cancer cells, resident as well as infiltrating host cells, secreted factors, and extracellular matrix proteins. With the growing importance of immunotherapies, it has become crucial to be able to characterize the composition and the functional orientation of the microenvironment. The development of novel computational image analysis methodologies may enable the robust quantification and localization of immune and related biomarker-expressing cells within the microenvironment. The aim of this review is to concisely highlight a selection of current and significant contributions pertinent to methodological advances coupled with biomedical or translational applications. A further aim is to concisely present computational advances that, to our knowledge, currently have very limited use for the assessment of the microenvironment but have the potential to enhance image analysis pipelines; on this basis, an example is shown for the detection and segmentation of cells of the microenvironment using a published pipeline and a public dataset. Finally, a general proposal is presented on the conceptual design of automation-optimized computational image analysis workflows in the biomedical and clinical domain.
Affiliation(s)
- Nektarios A Valous
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Rodrigo Rojas Moraleda
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Dirk Jäger
- Applied Tumor Immunity Clinical Cooperation Unit, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Inka Zörnig
- Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany
- Niels Halama
- Department of Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg University Hospital (UKHD), Im Neuenheimer Feld 460, 69120 Heidelberg, Germany; Division of Translational Immunotherapy, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
|