1. Liu X, Xiang C, Lan L, Li C, Xiao H, Liu Z. Lesion region inpainting: an approach for pseudo-healthy image synthesis in intracranial infection imaging. Front Microbiol 2024; 15:1453870. PMID: 39224212; PMCID: PMC11368058; DOI: 10.3389/fmicb.2024.1453870. Open access.
Abstract
The synthesis of pseudo-healthy images, i.e., the generation of healthy counterparts for pathological images, is crucial for data augmentation, clinical disease diagnosis, and understanding pathology-induced changes. Generative adversarial networks (GANs) have recently shown substantial promise in this domain. However, the heterogeneous manifestations of intracranial infections make it difficult for a model to accurately differentiate pathological from healthy regions, leading to the loss of critical information in healthy areas and impairing the precise preservation of the subject's identity. Moreover, for images with extensive lesion areas, the pseudo-healthy images generated by existing methods often lack distinct organ and tissue structures. To address these challenges, we propose a three-stage method (localization, inpainting, synthesis) that achieves nearly perfect preservation of the subject's identity through precise pseudo-healthy synthesis of the lesion region and its surroundings. The process begins with a Segmentor, which identifies the lesion areas and differentiates them from healthy regions. A Vague-Filler then fills the lesion areas to construct a healthy outline, preventing structural loss in cases of extensive lesions. Finally, leveraging this healthy outline, a generative adversarial network integrated with a contextual residual attention module generates a more realistic and clearer image. Our method was validated through extensive experiments across different modalities within the BraTS2021 dataset, achieving a healthiness score of 0.957. The visual quality of the generated images markedly exceeded that of images produced by competing methods, with enhanced capability in repairing large lesion areas. Further testing on the COVID-19-20 dataset showed that our model can also partially reconstruct images of other organs.
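The staged design reads as a straightforward composition: localize the lesion, fill it coarsely, then refine. Below is a minimal PyTorch sketch of that flow; the toy layer stacks, the blur-based filler, and all shapes are illustrative assumptions, not the authors' released networks.

```python
# Minimal sketch of the three-stage pipeline (localize -> inpaint -> synthesize).
# All modules are illustrative stand-ins, not the paper's actual architectures.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Segmentor(nn.Module):
    """Stage 1: predict a soft lesion mask (1 = lesion, 0 = healthy)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

def vague_fill(image, mask, kernel=15):
    """Stage 2: replace lesion pixels with a heavily blurred context estimate,
    giving the generator a plausible 'healthy outline' to refine."""
    blur = F.avg_pool2d(image, kernel, stride=1, padding=kernel // 2)
    return image * (1 - mask) + blur * mask

class Generator(nn.Module):
    """Stage 3: refine the coarse outline into a realistic pseudo-healthy image.
    (The paper adds a contextual residual attention module at this stage.)"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, outline, mask):
        synthesized = self.net(torch.cat([outline, mask], dim=1))
        # Only the lesion region is re-synthesized; healthy pixels pass through.
        return outline * (1 - mask) + synthesized * mask

segmentor, generator = Segmentor(), Generator()
x = torch.randn(1, 1, 128, 128)          # one single-channel MRI slice
mask = (segmentor(x) > 0.5).float()
pseudo_healthy = generator(vague_fill(x, mask), mask)
```

Note how identity preservation falls out of the masking: healthy pixels are copied through unchanged, and only the lesion region is synthesized.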
Affiliation(s)
- Xiaojuan Liu: College of Artificial Intelligence, Chongqing University of Technology, Chongqing, China; College of Big Data and Intelligent Engineering, Chongqing College of International Business and Economics, Chongqing, China
- Cong Xiang: College of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Libin Lan: College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, China
- Chuan Li: College of Big Data and Intelligent Engineering, Chongqing College of International Business and Economics, Chongqing, China
- Hanguang Xiao: College of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
- Zhi Liu: College of Artificial Intelligence, Chongqing University of Technology, Chongqing, China
2. Algohary A, Zacharaki EI, Breto AL, Alhusseini M, Wallaengen V, Xu IR, Gaston SM, Punnen S, Castillo P, Pattany PM, Kryvenko ON, Spieler B, Abramowitz MC, Dal Pra A, Ford JC, Pollack A, Stoyanova R. Uncovering prostate cancer aggressiveness signal in T2-weighted MRI through a three-reference tissues normalization technique. NMR Biomed 2024; 37:e5069. PMID: 37990759; DOI: 10.1002/nbm.5069.
Abstract
Quantitative interpretation of T2-weighted MRI (T2W) is impeded by the variability of acquisition-related features, such as field strength, coil type, signal amplification, and pulse sequence parameters. The main purpose of this work is to develop an automated method for prostate T2W intensity normalization. The procedure includes: (i) a deep learning network based on Mask R-CNN for automatic segmentation of three reference tissues: gluteus maximus muscle, femur, and bladder; (ii) fitting a spline function between the average intensities in these structures and reference values; and (iii) using this function to transform all T2W intensities. The T2W distributions in prostate cancer regions of interest (ROIs) and normal-appearing prostate tissue (NAT) were compared before and after normalization using Student's t-test. The ROIs' T2W associations with Gleason Score (GS), Decipher genomic score, and a three-tier prostate cancer risk were evaluated with Spearman's correlation coefficient (rS). T2W differences between indolent and aggressive prostate cancer lesions were also assessed. The Mask R-CNN was trained with manual contours from 32 patients. The normalization procedure was applied to an independent MRI dataset from 83 patients. T2W differences between ROIs and NAT increased significantly after normalization. T2W intensities in 231 biopsy ROIs were significantly negatively correlated with GS (rS = -0.21, p = 0.001), Decipher (rS = -0.193, p = 0.003), and three-tier risk (rS = -0.235, p < 0.001). After normalization, the average T2W intensities in aggressive ROIs were significantly lower than in indolent ROIs. In conclusion, the automated triple-reference tissue normalization method significantly improved the discrimination between prostate cancer and normal prostate tissue. In addition, the normalized T2W intensities of cancer exhibited a significant association with tumor aggressiveness. By improving the quantitative utilization of T2W in the assessment of prostate cancer on MRI, the new normalization method represents an important advance over clinical protocols that do not include sequences for the measurement of T2 relaxation times.
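Steps (ii) and (iii) reduce to a small interpolation problem once the three reference-tissue means are measured. The sketch below assumes step (i) is done; the monotone (PCHIP) spline and the invented target values are assumptions, since the text only specifies "a spline function".

```python
# Sketch of steps (ii)-(iii): map measured reference-tissue means onto fixed
# target values with a spline, then transform every voxel. Tissue means and
# target values here are invented for illustration.
import numpy as np
from scipy.interpolate import PchipInterpolator

def normalize_t2w(volume, tissue_means, target_values):
    """volume: raw T2W intensities; tissue_means: measured mean intensity in
    muscle, femur, bladder; target_values: the corresponding reference scale."""
    order = np.argsort(tissue_means)        # interpolator needs increasing x
    x = np.asarray(tissue_means, dtype=float)[order]
    y = np.asarray(target_values, dtype=float)[order]
    # Monotone cubic spline (a modeling choice here, not the paper's stated
    # form); values beyond the anchors extrapolate from the end intervals.
    spline = PchipInterpolator(x, y, extrapolate=True)
    return spline(volume)

t2w = np.random.rand(64, 64, 24) * 1500.0   # fake scanner-scale volume
normalized = normalize_t2w(t2w, tissue_means=[300.0, 700.0, 1400.0],
                           target_values=[1.0, 2.0, 3.0])
```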
Affiliation(s)
- Ahmad Algohary: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Evangelia I Zacharaki: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Adrian L Breto: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Mohammad Alhusseini: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Veronica Wallaengen: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Isaac R Xu: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Sandra M Gaston: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Sanoj Punnen: Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, Florida, USA
- Patricia Castillo: Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Pradip M Pattany: Department of Radiology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Oleksandr N Kryvenko: Department of Pathology and Laboratory Medicine, University of Miami Miller School of Medicine, Miami, Florida, USA
- Benjamin Spieler: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Matthew C Abramowitz: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Alan Dal Pra: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- John C Ford: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Alan Pollack: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, Florida, USA
3. Kobayashi K, Gu L, Hataya R, Mizuno T, Miyake M, Watanabe H, Takahashi M, Takamizawa Y, Yoshida Y, Nakamura S, Kouno N, Bolatkan A, Kurose Y, Harada T, Hamamoto R. Sketch-based semantic retrieval of medical images. Med Image Anal 2024; 92:103060. PMID: 38104401; DOI: 10.1016/j.media.2023.103060.
Abstract
The volume of medical images stored in hospitals is rapidly increasing; however, the utilization of these accumulated images remains limited. Existing content-based medical image retrieval (CBMIR) systems typically require example images, leading to practical limitations such as the lack of customizable, fine-grained image retrieval, the inability to search without example images, and difficulty in retrieving rare cases. In this paper, we introduce a sketch-based medical image retrieval (SBMIR) system that enables users to find images of interest without example images. The key concept is feature decomposition of medical images, which allows the overall feature representation of a medical image to be decomposed into, and reconstructed from, normal and abnormal features. Building on this concept, our SBMIR system provides an easy-to-use two-step graphical user interface: users first select a template image to specify a normal feature and then draw a semantic sketch of the disease on the template image to represent an abnormal feature. The system integrates both types of input to construct a query vector and retrieves reference images. In the evaluation, ten healthcare professionals participated in a user test on two datasets. Our SBMIR system enabled users to overcome previous challenges: image retrieval based on fine-grained image characteristics, image retrieval without example images, and image retrieval for rare cases. By providing on-demand, customizable medical image retrieval, the system expands the utility of medical image databases.
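The retrieval step comes down to building a query vector from the two decomposed features and ranking the database by similarity. In the minimal sketch below, combining by concatenation and comparing by cosine similarity are both assumptions; the text only says the two inputs are "integrated".

```python
# Sketch of the retrieval step: fuse a normal feature (from the chosen
# template) with an abnormal feature (encoded from the drawn sketch) into a
# query, then rank the database by cosine similarity.
import numpy as np

def build_query(normal_feat, abnormal_feat):
    """Combine the two decomposed features into one L2-normalized query."""
    q = np.concatenate([normal_feat, abnormal_feat])
    return q / np.linalg.norm(q)

def retrieve(query, database, top_k=5):
    """database: (n_images, d) matrix of decomposed reference features,
    assumed L2-normalized row-wise, so a dot product is cosine similarity."""
    scores = database @ query
    return np.argsort(scores)[::-1][:top_k]    # indices of best matches

rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))
db /= np.linalg.norm(db, axis=1, keepdims=True)
query = build_query(rng.normal(size=64), rng.normal(size=64))
print(retrieve(query, db))
```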
Affiliation(s)
- Kazuma Kobayashi: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Lin Gu: Machine Intelligence for Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan
- Ryuichiro Hataya: Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Takaaki Mizuno: Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Mototaka Miyake: Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Hirokazu Watanabe: Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Masamichi Takahashi: Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Yasuyuki Takamizawa: Department of Colorectal Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Yukihiro Yoshida: Department of Thoracic Surgery, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Satoshi Nakamura: Radiation Safety and Quality Assurance Division, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Division of Research and Development for Boron Neutron Capture Therapy, National Cancer Center, Exploratory Oncology Research & Clinical Trial Center, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Medical Physics Laboratory, Division of Health Science, Graduate School of Medicine, Osaka University, Yamadaoka 1-7, Suita-shi, Osaka 565-0871, Japan
- Nobuji Kouno: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Amina Bolatkan: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Yusuke Kurose: Machine Intelligence for Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan
- Tatsuya Harada: Machine Intelligence for Medical Engineering Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Research Center for Advanced Science and Technology, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan
- Ryuji Hamamoto: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
4. Ke J, Liu K, Sun Y, Xue Y, Huang J, Lu Y, Dai J, Chen Y, Han X, Shen Y, Shen D. Artifact detection and restoration in histology images with stain-style and structural preservation. IEEE Trans Med Imaging 2023; 42:3487-3500. PMID: 37352087; DOI: 10.1109/tmi.2023.3288940.
Abstract
Artifacts in histology images may encumber the accurate interpretation of medical information and cause misdiagnosis, yet prepending a manual quality-control step for artifacts considerably reduces the degree of automation. To close this gap, we propose a methodical pre-processing framework that detects and restores artifacts, minimizing their impact on downstream AI diagnostic tasks. First, the artifact recognition network AR-Classifier differentiates common artifacts, e.g., tissue folds, marking dye, tattoo pigment, spots, and out-of-focus regions, from normal tissue, and also catalogs artifact patches by their restorability. The succeeding artifact restoration network AR-CycleGAN then performs de-artifact processing in which stain styles and tissue structures are maximally retained. We construct a benchmark for performance evaluation, curated from both clinically collected WSIs and public datasets of colorectal and breast cancer. Both components are compared with state-of-the-art methods and comprehensively evaluated by multiple metrics across multiple tasks, including artifact classification, artifact restoration, and the downstream diagnostic tasks of tumor classification and nuclei segmentation. The proposed system enables fully automated, deep learning-based histology image analysis without human intervention, and its structure-independent characteristic allows it to handle various artifact subtypes. The source code and data are available at https://github.com/yunboer/AR-classifier-and-AR-CycleGAN.
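The two-network design implies a simple triage: classify each patch, restore the restorable artifact patches, and set the rest aside. A minimal sketch of that control flow follows, with placeholder callables standing in for AR-Classifier and AR-CycleGAN; the routing rule is inferred from the abstract, not taken from the paper's code.

```python
# Sketch of the pre-processing triage: the classifier routes each patch, and
# only restorable artifact patches go through the restoration network.
from typing import Callable, List, Tuple

def preprocess_slide(patches: List[str],
                     classify: Callable[[str], Tuple[str, bool]],
                     restore: Callable[[str], str]):
    clean, excluded = [], []
    for patch in patches:
        label, restorable = classify(patch)   # e.g. ("fold", True)
        if label == "normal":
            clean.append(patch)               # pass through untouched
        elif restorable:
            clean.append(restore(patch))      # de-artifact, keep stain style
        else:
            excluded.append(patch)            # unusable for diagnosis
    return clean, excluded

# Dummy stand-ins so the sketch runs end to end.
clean, dropped = preprocess_slide(
    patches=["p0", "p1", "p2"],
    classify=lambda p: ("normal", True) if p == "p0" else ("fold", p == "p1"),
    restore=lambda p: p + "_restored",
)
print(clean, dropped)   # ['p0', 'p1_restored'] ['p2']
```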
5. Hao D, Li H, Zhang Y, Zhang Q. MUE-CoT: multi-scale uncertainty entropy-aware co-training framework for left atrial segmentation. Phys Med Biol 2023; 68:215008. PMID: 37567214; DOI: 10.1088/1361-6560/acef8e.
Abstract
Objective. Accurate left atrial segmentation is the basis of the recognition and clinical analysis of atrial fibrillation. Supervised learning has achieved competitive segmentation results, but the high annotation cost often limits its performance. Semi-supervised learning learns from limited labeled data together with a large amount of unlabeled data, and shows good potential for solving practical medical problems. Approach. In this study, we propose a multi-scale uncertainty-entropy-aware co-training framework (MUE-CoT) that achieves efficient left atrial segmentation from a small amount of labeled data. Based on a pyramid feature network, learning from unlabeled data is implemented by minimizing the pyramid prediction difference. In addition, novel loss constraints are proposed for co-training: the diversity loss is defined as a soft constraint to accelerate convergence, and a novel multi-scale uncertainty entropy calculation method together with a consistency regularization term is proposed to measure the consistency between prediction results. Because the quality of pseudo-labels cannot be guaranteed early in training, a confidence-dependent empirical Gaussian function is proposed to weight the pseudo-supervised loss. Main results. Experimental results on a publicly available dataset and an in-house clinical dataset show that our method outperforms existing semi-supervised methods. For the two datasets with a labeled ratio of 5%, the Dice similarity coefficient scores were 84.94% ± 4.31 and 81.24% ± 2.40, the HD95 values were 4.63 mm ± 2.13 and 3.94 mm ± 2.72, and the Jaccard similarity coefficient scores were 74.00% ± 6.20 and 68.49% ± 3.39, respectively. Significance. The proposed model effectively addresses the challenges of limited data samples and the high cost of manual annotation in the medical field, leading to enhanced segmentation accuracy.
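Two ingredients named above, the multi-scale uncertainty entropy and the confidence-dependent Gaussian weighting of the pseudo-supervised loss, can be sketched generically. The formulas below (per-pixel entropy averaged over rescaled pyramid outputs; weight w = exp(-(1-c)^2 / 2σ^2)) are illustrative assumptions, not the paper's exact definitions.

```python
# Generic sketch of multi-scale prediction entropy and Gaussian confidence
# weighting of a pseudo-label loss; formulas are assumed, not the paper's.
import torch
import torch.nn.functional as F

def multiscale_entropy(logits_per_scale):
    """logits_per_scale: list of (B, C, H, W) pyramid outputs, resized to a
    common resolution before averaging their pixelwise entropies."""
    h, w = logits_per_scale[0].shape[-2:]
    ents = []
    for logits in logits_per_scale:
        p = F.softmax(F.interpolate(logits, size=(h, w), mode="bilinear",
                                    align_corners=False), dim=1)
        ents.append(-(p * torch.log(p.clamp_min(1e-8))).sum(dim=1))
    return torch.stack(ents).mean(dim=0)        # (B, H, W)

def weighted_pseudo_loss(logits, pseudo_labels, confidence, sigma=0.5):
    """Down-weight pixels with low confidence via a Gaussian of (1 - c),
    so early low-quality pseudo-labels contribute little to the loss."""
    w = torch.exp(-((1.0 - confidence) ** 2) / (2 * sigma ** 2))
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (w * ce).mean()

logits = [torch.randn(2, 2, 64, 64), torch.randn(2, 2, 32, 32)]
ent = multiscale_entropy(logits)
conf = 1.0 - ent / torch.log(torch.tensor(2.0))  # normalize by log(C), C = 2
loss = weighted_pseudo_loss(logits[0], logits[0].argmax(dim=1), conf)
```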
Affiliation(s)
- Dechen Hao: School of Software, North University of China, Taiyuan, Shanxi, People's Republic of China
- Hualing Li: School of Software, North University of China, Taiyuan, Shanxi, People's Republic of China
- Yonglai Zhang: School of Software, North University of China, Taiyuan, Shanxi, People's Republic of China
- Qi Zhang: Department of Cardiology, The Second Hospital of Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China