1
Gong S, Zhong Y, Ma W, Li J, Wang Z, Zhang J, Heng PA, Dou Q. 3DSAM-adapter: Holistic adaptation of SAM from 2D to 3D for promptable tumor segmentation. Med Image Anal 2024; 98:103324. [PMID: 39213939] [DOI: 10.1016/j.media.2024.103324]
Abstract
Although the Segment Anything Model (SAM) has achieved impressive results on general-purpose semantic segmentation with strong generalization on everyday images, its performance on medical image segmentation is less precise and less stable, especially in tumor segmentation tasks involving objects of small size, irregular shape, and low contrast. Notably, the original SAM architecture is designed for 2D natural images and therefore cannot effectively extract 3D spatial information from volumetric medical data. In this paper, we propose a novel adaptation method for transferring SAM from 2D to 3D for promptable medical image segmentation. Through a holistically designed scheme of architecture modification, we transfer SAM to support volumetric inputs while retaining the majority of its pre-trained parameters for reuse. Fine-tuning is conducted in a parameter-efficient manner: most pre-trained parameters remain frozen, and only a few lightweight spatial adapters are introduced and tuned. Despite the domain gap between natural and medical data and the disparity in spatial arrangement between 2D and 3D, the transformer trained on natural images can effectively capture the spatial patterns present in volumetric medical images with only lightweight adaptations. We conduct experiments on four open-source tumor segmentation datasets; with a single click prompt, our model outperforms state-of-the-art domain-specific medical image segmentation models and interactive segmentation models. We also compared our adaptation method with existing popular adapters and observed significant performance improvements on most datasets. Our code and models are available at: https://github.com/med-air/3DSAM-adapter.
Affiliation(s)
- Shizhan Gong, Yuan Zhong, Wenao Ma, Jinpeng Li, Zhao Wang, Jingyang Zhang, Pheng-Ann Heng, Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
2
Liu T, Bai Q, Torigian DA, Tong Y, Udupa JK. VSmTrans: A hybrid paradigm integrating self-attention and convolution for 3D medical image segmentation. Med Image Anal 2024; 98:103295. [PMID: 39217673] [PMCID: PMC11381179] [DOI: 10.1016/j.media.2024.103295]
Abstract
PURPOSE Vision Transformers have recently achieved performance competitive with CNNs owing to their excellent capability for learning global representations. However, two major challenges arise when applying them to 3D image segmentation: (i) because 3D medical images are large, comprehensive global information is hard to capture given the enormous computational cost; (ii) insufficient local inductive bias in Transformers limits the ability to segment detailed features such as ambiguous and subtly defined boundaries. These challenges must be overcome before the Vision Transformer mechanism can be applied effectively to medical image segmentation. METHODS We propose a hybrid paradigm, called Variable-Shape Mixed Transformer (VSmTrans), that integrates self-attention and convolution and thus enjoys both the free learning of complex relationships from self-attention and the local prior knowledge from convolution. Specifically, we designed a Variable-Shape self-attention mechanism that can rapidly expand the receptive field without extra computational cost and achieves a good trade-off between global awareness and local detail. In addition, a parallel convolution paradigm introduces strong local inductive bias to facilitate the excavation of details, while a pair of learnable parameters automatically adjusts the relative importance of the two paradigms. Extensive experiments were conducted on two public medical image datasets with different modalities: the AMOS CT dataset and the BraTS2021 MRI dataset. RESULTS Our method achieves the best average Dice scores of 88.3% and 89.7% on these datasets, superior to the previous state-of-the-art Swin Transformer-based and CNN-based architectures. A series of ablation experiments verified the efficiency of the proposed hybrid mechanism and its components and explored the effectiveness of the key parameters in VSmTrans.
CONCLUSIONS The proposed hybrid Transformer-based backbone network for 3D medical image segmentation tightly integrates self-attention and convolution to exploit the advantages of both paradigms. The experimental results demonstrate our method's superiority over other state-of-the-art methods, and the ablation experiments show that the hybrid mechanism effectively balances large receptive fields with local inductive biases, resulting in highly accurate segmentation, especially in capturing details. Our code is available at https://github.com/qingze-bai/VSmTrans.
Affiliation(s)
- Tiange Liu: School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing 100083, PR China; School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, PR China
- Qingze Bai: School of Information Science and Engineering, Yanshan University, Qinhuangdao, Hebei 066004, PR China
- Drew A Torigian, Yubing Tong, Jayaram K Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
3
Hatamikia S, George G, Schwarzhans F, Mahbod A, Woitek R. Breast MRI radiomics and machine learning-based predictions of response to neoadjuvant chemotherapy - How are they affected by variations in tumor delineation? Comput Struct Biotechnol J 2024; 23:52-63. [PMID: 38125296] [PMCID: PMC10730996] [DOI: 10.1016/j.csbj.2023.11.016]
Abstract
Manual delineation of volumes of interest (VOIs) by experts is considered the gold-standard method in radiomics analysis; however, it suffers from inter- and intra-operator variability. A quantitative assessment of the impact of variations in these delineations on the performance of radiomics predictors is required to develop robust radiomics-based prediction models. In this study, we developed radiomics models for the prediction of pathological complete response to neoadjuvant chemotherapy in patients with two different breast cancer subtypes, based on contrast-enhanced magnetic resonance imaging acquired prior to treatment (baseline MRI scans). Different mathematical operations, such as erosion, smoothing, dilation, randomization, and ellipse fitting, were applied to the original expert-delineated VOIs to simulate variations in segmentation masks. The effects of such VOI modifications on various steps of the radiomics workflow, including feature extraction, feature selection, and prediction performance, were evaluated. Using manual tumor VOIs and radiomics features extracted from baseline MRI scans, AUCs of up to 0.96 and 0.89 were achieved for human epidermal growth factor receptor 2-positive and triple-negative breast cancer, respectively. Smoothed and eroded VOIs yielded the highest number of robust features and the best prediction performance, while ellipse fitting and dilation led to the lowest robustness and prediction performance for both breast cancer subtypes. At most 28% of the selected features matched those of the manual VOIs when different VOI delineation data were used. Differences in VOI delineation affect different steps of radiomics analysis, and their quantification is therefore important for the development of standardized radiomics research.
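The mask perturbations this study applies (erosion, dilation, boundary smoothing) can be sketched with simple morphological operations. The snippet below is purely illustrative, not the authors' implementation: it uses a 4-neighbourhood on a 2D mask and assumes the VOI does not touch the image border.

```python
import numpy as np

def perturb_mask(mask, mode):
    """Simulate delineation variability on a binary 2D VOI mask.

    Illustrative sketch only (not the study's code). Uses a 4-neighbourhood
    and assumes the VOI does not touch the image border (np.roll wraps).
    mode: 'erode' shrinks, 'dilate' grows, 'smooth' regularizes the boundary.
    """
    m = mask.astype(bool)
    neigh = [m,
             np.roll(m, 1, axis=0), np.roll(m, -1, axis=0),
             np.roll(m, 1, axis=1), np.roll(m, -1, axis=1)]
    if mode == "erode":
        # keep a voxel only if its whole neighbourhood is inside the VOI
        return np.logical_and.reduce(neigh)
    if mode == "dilate":
        # keep a voxel if any neighbour is inside the VOI
        return np.logical_or.reduce(neigh)
    if mode == "smooth":
        # majority vote over the neighbourhood rounds off sharp corners
        return np.mean(neigh, axis=0) > 0.5
    raise ValueError(mode)
```

Radiomics features are then re-extracted from each perturbed mask and compared against the manual-VOI baseline to flag non-robust features.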
Affiliation(s)
- Sepideh Hatamikia: Danube Private University, Krems, Rathausplatz 1, Krems-Stein, AT-3500, Austria; Austrian Center for Medical Innovation and Technology (ACMIT), Viktor Kaplan-Straße 2/1, Wiener Neustadt 2700, Austria
- Geevarghese George, Florian Schwarzhans, Amirreza Mahbod, Ramona Woitek: Danube Private University, Krems, Rathausplatz 1, Krems-Stein, AT-3500, Austria
4
Xie Y, Gu L, Harada T, Zhang J, Xia Y, Wu Q. Rethinking masked image modelling for medical image representation. Med Image Anal 2024; 98:103304. [PMID: 39173412] [DOI: 10.1016/j.media.2024.103304]
Abstract
Masked Image Modelling (MIM), a form of self-supervised learning, has garnered significant success in computer vision by improving image representations using unannotated data. Traditional MIM methods typically mask patches sampled at random across the image. However, random masking may not be well suited to medical imaging, which has characteristics distinct from natural images. In medical imaging, particularly in pathology, disease-related features are often exceedingly sparse and localized, while the remaining regions appear normal and undifferentiated. Additionally, medical images are frequently accompanied by reports that directly pinpoint the location of pathological changes. Inspired by this, we propose Masked medical Image Modelling (MedIM), a novel approach and, to our knowledge, the first to employ radiological reports to guide the masking and restoration of informative image areas, encouraging the network to learn stronger semantic representations from medical images. We introduce two mutually complementary masking strategies: knowledge-driven masking (KDM) and sentence-driven masking (SDM). KDM uses Medical Subject Headings (MeSH) words unique to radiology reports to identify symptom clues mapped to MeSH words (e.g., cardiac, edema, vascular, pulmonary) and to guide mask generation. Recognizing that radiological reports often comprise several sentences detailing varied findings, SDM integrates sentence-level information to identify key regions for masking. MedIM reconstructs images informed by the masks from the KDM and SDM modules, promoting a comprehensive and enriched medical image representation. Extensive experiments on seven downstream tasks covering multi-label/class image classification, pneumothorax segmentation, and medical image-report analysis demonstrate that MedIM with report-guided masking achieves competitive performance, substantially outperforming ImageNet pre-training, MIM-based pre-training, and medical image-report pre-training counterparts. Codes are available at https://github.com/YtongXie/MedIM.
Affiliation(s)
- Lin Gu: RIKEN AIP, Japan; RCAST, The University of Tokyo, Japan
- Jianpeng Zhang: College of Computer Science and Technology, Zhejiang University, China
- Yong Xia: School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Ningbo Institute of Northwestern Polytechnical University, Ningbo 315048, China
- Qi Wu: University of Adelaide, Australia
5
Chen C, Miao J, Wu D, Zhong A, Yan Z, Kim S, Hu J, Liu Z, Sun L, Li X, Liu T, Heng PA, Li Q. MA-SAM: Modality-agnostic SAM adaptation for 3D medical image segmentation. Med Image Anal 2024; 98:103310. [PMID: 39182302] [PMCID: PMC11381141] [DOI: 10.1016/j.media.2024.103310]
Abstract
The Segment Anything Model (SAM), a foundation model for general image segmentation, has demonstrated impressive zero-shot performance across numerous natural image segmentation tasks. However, SAM's performance declines significantly when applied to medical images, primarily due to the substantial disparity between the natural and medical image domains. To adapt SAM to medical images effectively, it is important to incorporate critical third-dimension information, i.e., volumetric or temporal knowledge, during fine-tuning, while harnessing SAM's pre-trained weights within its original 2D backbone to the fullest extent. In this paper, we introduce a modality-agnostic SAM adaptation framework, named MA-SAM, that is applicable to various volumetric and video medical data. Our method is rooted in a parameter-efficient fine-tuning strategy that updates only a small portion of weight increments while preserving the majority of SAM's pre-trained weights. By injecting a series of 3D adapters into the transformer blocks of the image encoder, our method enables the pre-trained 2D backbone to extract third-dimensional information from input data. We comprehensively evaluate our method on five medical image segmentation tasks using 11 public datasets across CT, MRI, and surgical video data. Remarkably, without using any prompt, our method consistently outperforms various state-of-the-art 3D approaches, surpassing nnU-Net by 0.9%, 2.6%, and 9.9% in Dice for CT multi-organ segmentation, MRI prostate segmentation, and surgical scene segmentation, respectively. Our model also demonstrates strong generalization and excels in challenging tumor segmentation when prompts are used. Our code is available at: https://github.com/cchen-cc/MA-SAM.
Affiliation(s)
- Cheng Chen: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Juzheng Miao: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Dufan Wu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Aoxiao Zhong: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Zhiling Yan: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Sekeun Kim: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Jiang Hu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Zhengliang Liu: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA; School of Computing, The University of Georgia, Athens, GA 30602, USA
- Lichao Sun: Department of Computer Science and Engineering, Lehigh University, Bethlehem, PA 18015, USA
- Xiang Li: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Tianming Liu: School of Computing, The University of Georgia, Athens, GA 30602, USA
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Quanzheng Li: Center of Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
6
Dong X, Chen G, Zhu Y, Ma B, Ban X, Wu N, Ming Y. Artificial intelligence in skeletal metastasis imaging. Comput Struct Biotechnol J 2024; 23:157-164. [PMID: 38144945] [PMCID: PMC10749216] [DOI: 10.1016/j.csbj.2023.11.007]
Abstract
In the field of metastatic skeletal oncology imaging, the role of artificial intelligence (AI) is becoming more prominent. Bone metastasis typically indicates the terminal stage of various malignant neoplasms. Once identified, it necessitates a comprehensive revision of the initial treatment regimen, and palliative care is often the only resort. Given the gravity of the condition, the diagnosis of bone metastasis should be approached with utmost caution. AI techniques are being evaluated for their efficacy in a range of tasks within medical imaging, including object detection, disease classification, region segmentation, and prognosis prediction. These methods offer a standardized solution to the frequently subjective challenge of image interpretation, a subjectivity that is particularly problematic in bone metastasis imaging. This review describes the basic imaging modalities used in bone metastasis imaging, along with recent developments and current applications of AI in the respective imaging studies. These concrete examples emphasize the importance of using computer-aided systems in the clinical setting. The review culminates with an examination of the current limitations and prospects of AI in bone metastasis imaging. To establish the credibility of AI in this domain, further research efforts are required to enhance reproducibility and attain a robust level of empirical support.
Affiliation(s)
- Xiying Dong: Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China; Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guilin Chen: Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China; Graduate School of Peking Union Medical College, Beijing 100730, China
- Yuanpeng Zhu: Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China; Graduate School of Peking Union Medical College, Beijing 100730, China
- Boyuan Ma: School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Xiaojuan Ban: School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China
- Nan Wu: Department of Orthopedic Surgery, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Big Data for Spinal Deformities, Chinese Academy of Medical Sciences, Beijing 100730, China; Beijing Key Laboratory for Genetic Research of Skeletal Deformity, Beijing 100730, China
- Yue Ming: Department of Nuclear Medicine (PET-CT Center), National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
7
Summerfield N, Morris E, Banerjee S, He Q, Ghanem AI, Zhu S, Zhao J, Dong M, Glide-Hurst C. Enhancing Precision in Cardiac Segmentation for Magnetic Resonance-Guided Radiation Therapy Through Deep Learning. Int J Radiat Oncol Biol Phys 2024; 120:904-914. [PMID: 38797498] [PMCID: PMC11427143] [DOI: 10.1016/j.ijrobp.2024.05.013]
Abstract
PURPOSE Cardiac substructure dose metrics are more strongly linked to late cardiac morbidities than to whole-heart metrics. Magnetic resonance (MR)-guided radiation therapy (MRgRT) enables substructure visualization during daily localization, allowing potential for enhanced cardiac sparing. We extend a publicly available state-of-the-art deep learning framework, "No New" U-Net, to incorporate self-distillation (nnU-Net.wSD) for substructure segmentation for MRgRT. METHODS AND MATERIALS Eighteen (institute A) patients who underwent thoracic or abdominal radiation therapy on a 0.35 T MR-guided linear accelerator were retrospectively evaluated. On each image, 1 of 2 radiation oncologists delineated reference contours of 12 cardiac substructures (chambers, great vessels, and coronary arteries) used to train (n = 10), validate (n = 3), and test (n = 5) nnU-Net.wSD by leveraging a teacher-student network and comparing it to standard 3-dimensional U-Net. The impact of using simulation data or including 3 to 4 daily images for augmentation during training was evaluated for nnU-Net.wSD. Geometric metrics (Dice similarity coefficient, mean distance to agreement, and 95% Hausdorff distance), visual inspection, and clinical dose-volume histograms were evaluated. To determine generalizability, institute A's model was tested on an unlabeled data set from institute B (n = 22) and evaluated via consensus scoring and volume comparisons. RESULTS nnU-Net.wSD yielded a Dice similarity coefficient (reported mean ± SD) of 0.65 ± 0.25 across the 12 substructures (chambers, 0.85 ± 0.05; great vessels, 0.67 ± 0.19; and coronary arteries, 0.33 ± 0.16; mean distance to agreement, <3 mm; mean 95% Hausdorff distance, <9 mm) while outperforming the 3-dimensional U-Net (0.583 ± 0.28; P <.01). Leveraging fractionated data for augmentation improved over a single MR simulation time point (0.579 ± 0.29; P <.01). 
Predicted contours yielded dose-volume histograms that closely matched those of the clinical treatment plans where mean and maximum (ie, dose to 0.03 cc) doses deviated by 0.32 ± 0.5 Gy and 1.42 ± 2.6 Gy, respectively. There were no statistically significant differences between institute A and B volumes (P >.05) for 11 of 12 substructures, with larger volumes requiring minor changes and coronary arteries exhibiting more variability. CONCLUSIONS This work is a critical step toward rapid and reliable cardiac substructure segmentation to improve cardiac sparing in low-field MRgRT.
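The headline geometric metric above is the Dice similarity coefficient, 2|A∩B| / (|A| + |B|), computed between a predicted and a reference mask. A minimal NumPy sketch (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks (any shape)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * inter / denom if denom else 1.0
```

Dice ranges from 0 (no overlap) to 1 (identical masks), which is why chambers (large, compact) score far higher here than coronary arteries (thin, tortuous).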
Affiliation(s)
- Nicholas Summerfield: Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin; Department of Human Oncology, University of Wisconsin-Madison, Madison, Wisconsin
- Eric Morris: Department of Radiation Oncology, Washington University School of Medicine in St. Louis, St. Louis, Missouri
- Soumyanil Banerjee: Department of Computer Science, Wayne State University, Detroit, Michigan
- Qisheng He: Department of Computer Science, Wayne State University, Detroit, Michigan
- Ahmed I Ghanem: Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan; Alexandria Department of Clinical Oncology, Faculty of Medicine, Alexandria University, Alexandria, Egypt
- Simeng Zhu: Department of Radiation Oncology, The Ohio State University, Columbus, Ohio
- Jiwei Zhao: Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, Madison, Wisconsin
- Ming Dong: Department of Computer Science, Wayne State University, Detroit, Michigan
- Carri Glide-Hurst: Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin; Department of Human Oncology, University of Wisconsin-Madison, Madison, Wisconsin
8
Bentsen KK, Brink C, Nielsen TB, Lynggaard RB, Vinholt PJ, Schytte T, Hansen O, Jeppesen SS. Cumulative rib fracture risk after stereotactic body radiotherapy in patients with localized non-small cell lung cancer. Radiother Oncol 2024; 200:110481. [PMID: 39159679] [DOI: 10.1016/j.radonc.2024.110481]
Abstract
INTRODUCTION Rib fracture is a known complication after stereotactic body radiotherapy (SBRT). Patient-related parameters are essential for patient-tailored risk estimation; however, their impact on rib fracture is less well documented than that of dosimetric parameters. This study aimed to predict the risk of rib fracture in patients with localized non-small cell lung cancer (NSCLC) after SBRT based on both patient-related and dosimetric parameters, with death as a competing risk. MATERIALS AND METHODS In total, 602 patients with localized NSCLC treated with SBRT between 2010 and 2020 at Odense University Hospital, Denmark, were included. All patients received SBRT with 45-66 Gray (Gy) in 3 fractions. Rib fractures were identified in CT scans using a word embedding model. The cumulative incidence function was based on cause-specific Cox hazard models, with variable selection based on cross-validated model likelihood performed using 50 bootstraps. RESULTS In total, 19% of patients experienced a rib fracture. The cumulative risk of rib fracture increased rapidly from 6 to 54 months post-SBRT. Female gender, bone density, near-maximum dose to the rib, V30 and V40 to the rib, gross tumor volume, and mean lung dose were significantly associated with rib fracture risk in univariable analysis. The final multivariable model consisted of V20 and V30 to the rib and mean lung dose. CONCLUSION Female gender and low bone density in male patients are significant predictors of rib fracture risk. However, the final model predicting the cumulative rib fracture risk of 19% in patients with localized NSCLC treated with SBRT contained no patient-related parameters, suggesting that dosimetric parameters are the primary drivers.
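Treating death as a competing risk, as this study does, matters because the naive Kaplan-Meier complement overestimates fracture risk when patients die before they can fracture. A minimal nonparametric (Aalen-Johansen style) cumulative incidence sketch, for illustration only; the study itself used cause-specific Cox hazard models, and the event codes below are assumptions:

```python
import numpy as np

def cumulative_incidence(times, events, cause=1):
    """Aalen-Johansen estimate of the cumulative incidence function for
    `cause`, with other nonzero event codes treated as competing risks
    and 0 as censoring. Returns (event_times, cif). Illustrative sketch."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    uniq = np.unique(times[events > 0])
    surv = 1.0          # overall event-free survival just before t
    cif = []
    for t in uniq:
        at_risk = np.sum(times >= t)
        d_cause = np.sum((times == t) & (events == cause))
        d_any = np.sum((times == t) & (events > 0))
        prev = cif[-1] if cif else 0.0
        # increment: probability of surviving to t, then failing from `cause`
        cif.append(prev + surv * d_cause / at_risk)
        surv *= 1.0 - d_any / at_risk
    return uniq, np.array(cif)
```

With no competing events, this reduces to one minus the Kaplan-Meier estimator; with them, the cause-specific curve is properly capped below it.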
Affiliation(s)
- Kristian Kirkelund Bentsen: Department of Oncology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Academy of Geriatric Cancer Research (AgeCare), Odense University Hospital, Odense, Denmark
- Carsten Brink: Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Laboratory of Radiation Physics, Department of Oncology, Odense University Hospital, Odense, Denmark
- Tine Bjørn Nielsen: Laboratory of Radiation Physics, Department of Oncology, Odense University Hospital, Odense, Denmark
- Rasmus Bank Lynggaard: Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, Odense, Denmark
- Pernille Just Vinholt: Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, Odense, Denmark
- Tine Schytte: Department of Oncology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Olfred Hansen: Department of Oncology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Academy of Geriatric Cancer Research (AgeCare), Odense University Hospital, Odense, Denmark
- Stefan Starup Jeppesen: Department of Oncology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark; Academy of Geriatric Cancer Research (AgeCare), Odense University Hospital, Odense, Denmark
9
Gao M, Cheng J, Qiu A, Zhao D, Wang J, Liu J. Magnetic resonance imaging (MRI)-based intratumoral and peritumoral radiomics for prognosis prediction in glioma patients. Clin Radiol 2024; 79:e1383-e1393. [PMID: 39218720] [DOI: 10.1016/j.crad.2024.08.005]
Abstract
AIM The purpose of this study was to identify robust radiological features from intratumoral and peritumoral regions, to evaluate MRI protocols and machine learning methods for overall survival stratification of glioma patients, and to explore the relationship between radiological features and the tumour microenvironment. MATERIALS AND METHODS A retrospective analysis was conducted on 163 glioma patients, divided into a training set (n=113) and a testing set (n=50). For each patient, 2135 features were extracted from clinical MRI. Feature selection was performed using the Minimum Redundancy Maximum Relevance method and the Random Forest (RF) algorithm. Prognostic factors were assessed using the Cox proportional hazards model. Four machine learning models (RF, Logistic Regression, Support Vector Machine, and XGBoost) were trained on clinical and radiological features from the tumour and peritumoral regions. Models were evaluated on the testing set using receiver operating characteristic curves. RESULTS Among the 163 patients, 96 had an overall survival (OS) of less than three years postsurgery, while 67 had an OS of more than three years. Univariate Cox regression in the validation set indicated that age (p=0.003) and tumour grade (p<0.001) were positively associated with the risk of death within three years postsurgery. The final predictive model incorporated 13 radiological and 7 clinical features. The RF model combining intratumoral and peritumoral radiomics achieved the best predictive performance (AUC = 0.91; accuracy = 0.86), outperforming single-region models. CONCLUSION Combined intratumoral and peritumoral radiomics can improve survival prediction and has potential as a practical imaging biomarker to guide clinical decision-making.
Affiliation(s)
- M Gao: Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- J Cheng: Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China; Institute of Guizhou Aerospace Measuring and Testing Technology, Guiyang, China
- A Qiu: Department of Biomedical Engineering, The Johns Hopkins University, MD, USA; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- D Zhao: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- J Wang: Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, China
- J Liu: Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China; Department of Radiology Quality Control Center, Changsha, China

10.
Wernz MM, Voskrebenzev A, Müller RA, Zubke M, Klimeš F, Glandorf J, Czerner C, Wacker F, Olsson KM, Hoeper MM, Hohlfeld JM, Vogel-Claussen J. Feasibility, Repeatability, and Correlation to Lung Function of Phase-Resolved Functional Lung (PREFUL) MRI-derived Pulmonary Artery Pulse Wave Velocity Measurements. J Magn Reson Imaging 2024; 60:2216-2228. PMID: 38460124. DOI: 10.1002/jmri.29337.
Abstract
BACKGROUND: Pulse wave velocity (PWV) in the pulmonary arteries (PA) is a marker of vascular stiffening. Currently, only phase-contrast (PC) MRI-based options exist to measure PA-PWV.

PURPOSE: To test the feasibility, repeatability, and correlation to clinical data of Phase-Resolved Functional Lung (PREFUL) MRI-based calculation of PA-PWV.

STUDY TYPE: Retrospective.

SUBJECTS: 79 (26 female) healthy subjects (age range 19-78), 58 (24 female) patients with chronic obstructive pulmonary disease (COPD, age range 40-77), and 60 (33 female) patients with suspected pulmonary hypertension (PH, age range 28-85).

SEQUENCE: 2D spoiled gradient echo, 1.5 T.

ASSESSMENT: PA-PWV was measured from PREFUL-derived cardiac cycles based on the determination of the temporal and spatial distance between lung vasculature voxels, using a simplified (sPWV) method and a more comprehensive (cPWV) method including more elaborate distance calculation. For 135 individuals, PC MRI-based PWV (PWV-QA) was also measured.

STATISTICAL TESTS: The intraclass correlation coefficient (ICC) and coefficient of variation (CoV) were used to test repeatability. Nonparametric tests were used to compare cohorts. Correlations between sPWV/cPWV, PWV-QA, forced expiratory volume in 1 sec (FEV1) %predicted, residual volume (RV) %predicted, age, and right heart catheterization (RHC) data were tested. A significance level of α=0.05 was used.

RESULTS: sPWV and cPWV showed no significant differences between repeated measurements (P-range 0.10-0.92). CoV was generally lower than 15%. COPD and PH patients had significantly higher sPWV and cPWV than healthy subjects. Significant correlation was found between sPWV or cPWV and FEV1 %pred. (R = -0.36 and R = -0.44), but not with RHC data (P-range -0.11 - 0.91) or age (P-range 0.23-0.89). Correlation with RV %pred. was significant for cPWV (R = 0.42) but not for sPWV (R = 0.34, P = 0.055). For all cohorts, sPWV and cPWV were significantly correlated with PWV-QA (R = -0.41 and R = 0.48).

DATA CONCLUSION: PREFUL-derived PWV measurement is feasible and repeatable. PWV is increased in COPD and PH patients and correlates with airway obstruction and hyperinflation.

LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
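At its core, pulse wave velocity is the spatial distance between two vascular locations divided by the transit time of the pressure wave between them. A minimal sketch of that core calculation on two PREFUL-like cardiac-cycle waveforms (the cross-correlation delay estimate and all names here are illustrative assumptions, not the published sPWV/cPWV algorithms):

```python
import numpy as np

def transit_time(sig_a, sig_b, dt):
    """Delay (s) of sig_b relative to sig_a, via the peak of the cross-correlation."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(b, a, mode="full")
    lag = np.argmax(xcorr) - (len(a) - 1)   # samples by which b lags a
    return lag * dt

def pulse_wave_velocity(sig_a, sig_b, dt, distance_m):
    """PWV (m/s) = spatial separation / wave transit time."""
    return distance_m / transit_time(sig_a, sig_b, dt)

# Synthetic cardiac-cycle waveforms: the distal signal lags the proximal one by 20 ms.
dt = 0.005                                    # 5 ms temporal resolution
t = np.arange(0, 1.0, dt)
proximal = np.exp(-((t - 0.30) / 0.05) ** 2)  # Gaussian pulse at 300 ms
distal = np.exp(-((t - 0.32) / 0.05) ** 2)    # same pulse, 20 ms later
pwv = pulse_wave_velocity(proximal, distal, dt, distance_m=0.05)  # locations 5 cm apart
```

With a 5 cm separation and a 20 ms delay the sketch yields 2.5 m/s, illustrating why the temporal resolution of the dynamic acquisition bounds how precisely PWV can be resolved.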
Affiliation(s)
- Marius M Wernz: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Andreas Voskrebenzev: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Robin A Müller: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Maximilian Zubke: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Filip Klimeš: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Julian Glandorf: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Christoph Czerner: Department of Nuclear Medicine, Hannover Medical School, Hannover, Germany
- Frank Wacker: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany
- Karen M Olsson: Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany; Department of Respiratory Medicine and Infectious Diseases, Hannover Medical School, Hannover, Germany
- Marius M Hoeper: Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany; Department of Respiratory Medicine and Infectious Diseases, Hannover Medical School, Hannover, Germany
- Jens M Hohlfeld: Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany; Department of Respiratory Medicine and Infectious Diseases, Hannover Medical School, Hannover, Germany; Fraunhofer Institute for Toxicology and Experimental Medicine, Hannover, Germany
- Jens Vogel-Claussen: Institute of Diagnostic and Interventional Radiology, Hannover Medical School, Hannover, Germany; Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), German Center for Lung Research (DZL), Hannover, Germany

11.
Karkalousos D, Išgum I, Marquering HA, Caan MWA. ATOMMIC: An Advanced Toolbox for Multitask Medical Imaging Consistency to facilitate Artificial Intelligence applications from acquisition to analysis in Magnetic Resonance Imaging. Comput Methods Programs Biomed 2024; 256:108377. PMID: 39180913. DOI: 10.1016/j.cmpb.2024.108377.
Abstract
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) is revolutionizing Magnetic Resonance Imaging (MRI) along the acquisition and processing chain. Advanced AI frameworks have been applied to successive tasks such as image reconstruction, quantitative parameter map estimation, and image segmentation. However, existing frameworks are often designed to perform tasks independently of each other, or are focused on specific models or single datasets, limiting generalization. This work introduces the Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC), a novel open-source toolbox that streamlines AI applications for accelerated MRI reconstruction and analysis. ATOMMIC implements several tasks using deep learning (DL) models and enables MultiTask Learning (MTL) to perform related tasks in an integrated manner, targeting generalization in the MRI domain.

METHODS: We conducted a comprehensive literature review and analyzed 12,479 GitHub repositories to assess the current landscape of AI frameworks for MRI. We then demonstrate how ATOMMIC standardizes workflows and improves data interoperability, enabling effective benchmarking of DL models across MRI tasks and datasets. To showcase ATOMMIC's capabilities, we evaluated twenty-five DL models on eight publicly available datasets, covering accelerated MRI reconstruction, segmentation, quantitative parameter map estimation, and joint accelerated MRI reconstruction and segmentation using MTL.

RESULTS: ATOMMIC's high-performance training and testing capabilities, using multiple GPUs and mixed-precision support, enable efficient benchmarking of multiple models across tasks. The framework's modular architecture implements each task through a collection of data loaders, models, loss functions, evaluation metrics, and pre-processing transformations, facilitating seamless integration of new tasks, datasets, and models. Our findings demonstrate that ATOMMIC supports MTL for multiple MRI tasks with harmonized complex-valued and real-valued data support, while maintaining active development and documentation. Task-specific evaluations show that physics-based models outperform other approaches in reconstructing highly accelerated acquisitions; these high-quality reconstruction models also show superior accuracy in estimating quantitative parameter maps. Furthermore, when combining high-performing reconstruction models with robust segmentation networks through MTL, performance improves in both tasks.

CONCLUSIONS: ATOMMIC advances MRI reconstruction and analysis by leveraging MTL and ensuring consistency across tasks, models, and datasets. This comprehensive framework serves as a versatile platform for researchers to use existing AI methods and develop new approaches in medical imaging.
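Joint reconstruction-and-segmentation setups like the MTL experiments above are typically trained by combining the per-task losses into a single weighted objective. A minimal sketch of such a multitask loss (the weighting scheme, function names, and use of MSE plus soft-Dice terms are illustrative assumptions, not ATOMMIC's actual API):

```python
import numpy as np

def mse_loss(recon, target):
    """Reconstruction term: mean squared error against the fully sampled target."""
    return float(np.mean((recon - target) ** 2))

def soft_dice_loss(pred_prob, mask, eps=1e-6):
    """Segmentation term: 1 - soft Dice overlap between probabilities and mask."""
    inter = np.sum(pred_prob * mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_prob) + np.sum(mask) + eps)

def multitask_loss(recon, target_img, seg_prob, seg_mask, w_recon=1.0, w_seg=1.0):
    """Single objective for joint reconstruction + segmentation training."""
    return (w_recon * mse_loss(recon, target_img)
            + w_seg * soft_dice_loss(seg_prob, seg_mask))

# Perfect predictions drive both terms (and the total) to ~0.
img = np.ones((4, 4))
mask = np.eye(4)
loss_perfect = multitask_loss(img, img, mask, mask)
loss_bad_recon = multitask_loss(img, np.zeros((4, 4)), mask, mask)
```

The weights w_recon and w_seg balance the two tasks; tuning them is one of the knobs that determines whether MTL helps or hurts either task.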
Affiliation(s)
- Dimitrios Karkalousos: Department of Biomedical Engineering & Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Neuroscience, Brain Imaging, Amsterdam, The Netherlands
- Ivana Išgum: Department of Biomedical Engineering & Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Henk A Marquering: Department of Biomedical Engineering & Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Department of Radiology & Nuclear Medicine, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Neuroscience, Brain Imaging, Amsterdam, The Netherlands
- Matthan W A Caan: Department of Biomedical Engineering & Physics, Amsterdam University Medical Center, Location University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Neuroscience, Brain Imaging, Amsterdam, The Netherlands

12.
Kahraman AT, Fröding T, Toumpanakis D, Gustafsson CJ, Sjöblom T. Enhanced classification performance using deep learning based segmentation for pulmonary embolism detection in CT angiography. Heliyon 2024; 10:e38118. PMID: 39398015. PMCID: PMC11471166. DOI: 10.1016/j.heliyon.2024.e38118.
Abstract
PURPOSE: To develop a deep learning-based algorithm that automatically and accurately classifies CT pulmonary angiography (CTPA) examinations as either positive or negative for pulmonary embolism (PE).

MATERIALS AND METHODS: For model development, 700 CTPA examinations from 652 patients performed at a single institution were used, of which 149 examinations contained 1497 PE traced by radiologists. The nnU-Net deep learning-based segmentation framework was trained using 5-fold cross-validation. To derive a patient-level classification, logical rules based on PE volume and probability thresholds were applied to the segmentation output. External model evaluation was performed on 34 and 770 CTPAs from two independent datasets.

RESULTS: A total of 1483 CTPA examinations were evaluated. In the internal cross-validation and test set, the trained model correctly classified 123 of 128 examinations as positive for PE (sensitivity 96.1%; 95% CI 91-98%; P < .05) and 521 of 551 as negative (specificity 94.6%; 95% CI 92-96%; P < .05), achieving an area under the receiver operating characteristic curve (AUROC) of 96.4% (95% CI 79-99%; P < .05). In the first external test dataset, the trained model correctly classified 31 of 32 examinations as positive (sensitivity 96.9%; 95% CI 84-99%; P < .05) and 2 of 2 as negative (specificity 100%; 95% CI 34-100%; P < .05), achieving an AUROC of 98.6% (95% CI 83-100%; P < .05). In the second external test dataset, the trained model correctly classified 379 of 385 examinations as positive (sensitivity 98.4%; 95% CI 97-99%; P < .05) and 346 of 385 as negative (specificity 89.9%; 95% CI 86-93%; P < .05), achieving an AUROC of 98.5% (95% CI 83-100%; P < .05).

CONCLUSION: The automatic pipeline achieved beyond state-of-the-art diagnostic performance for PE in CTPA, using nnU-Net for segmentation and volume- and probability-based post-processing for classification.
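The post-processing step described above turns a voxel-wise segmentation into a patient-level decision by thresholding both the probability map and the total candidate volume, which suppresses tiny spurious detections. A minimal sketch (the threshold values and names are illustrative assumptions, not the published rules):

```python
import numpy as np

def classify_pe(prob_map, voxel_volume_ml, prob_thresh=0.5, min_volume_ml=0.1):
    """Examination is positive if the suprathreshold candidate volume
    exceeds a minimum plausible embolus volume."""
    candidate_voxels = np.count_nonzero(prob_map >= prob_thresh)
    candidate_volume = candidate_voxels * voxel_volume_ml
    return candidate_volume >= min_volume_ml

# Toy probability maps: a single-voxel speck (likely noise) vs. a larger candidate.
speck = np.zeros((10, 10, 10)); speck[5, 5, 5] = 0.9          # 1 voxel
clot = np.zeros((10, 10, 10)); clot[4:6, 4:6, 4:6] = 0.9      # 8 voxels
is_pos_speck = classify_pe(speck, voxel_volume_ml=0.02)        # 0.02 mL candidate
is_pos_clot = classify_pe(clot, voxel_volume_ml=0.02)          # 0.16 mL candidate
```

Because specificity hinges on rejecting small false positives, the volume cut-off is the main lever trading sensitivity for specificity in a rule of this kind.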
Affiliation(s)
- Ali Teymur Kahraman: Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Tomas Fröding: Department of Radiology, Nyköping Hospital, Nyköping, Sweden
- Dimitris Toumpanakis: Karolinska University Hospital, Stockholm, Sweden; Department of Surgical Sciences, Uppsala University, Sweden
- Christian Jamtheim Gustafsson: Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden; Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
- Tobias Sjöblom: Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden

13.
Delmoral JC, R S Tavares JM. Semantic Segmentation of CT Liver Structures: A Systematic Review of Recent Trends and Bibliometric Analysis: Neural Network-based Methods for Liver Semantic Segmentation. J Med Syst 2024; 48:97. PMID: 39400739. DOI: 10.1007/s10916-024-02115-6.
Abstract
The use of artificial intelligence (AI) in the segmentation of liver structures in medical images has become a popular research focus in the past half-decade. The performance of AI tools for this task can vary widely and has been tested in the literature on various datasets. However, no scientometric report has provided a systematic overview of this scientific area. This article presents a systematic and bibliometric review of recent advances in neural network modeling approaches, mainly deep learning, to outline the multiple research directions of the field in terms of algorithmic features. A detailed systematic review is therefore provided of the most relevant publications addressing fully automatic semantic segmentation of liver structures in Computed Tomography (CT) images, in terms of algorithm modeling objective, performance benchmark, and model complexity. The review suggests that fully automatic hybrid 2D and 3D networks are the top performers in semantic segmentation of the liver. For liver tumor and vasculature segmentation, fully automatic generative approaches perform best. However, the reported performance benchmarks indicate that there is still much to be improved in segmenting such small structures in high-resolution abdominal CT scans.
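The performance benchmarks compared throughout such reviews are usually reported as the Dice similarity coefficient, the overlap measure 2|A∩B|/(|A|+|B|) between predicted and reference masks. A minimal sketch:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Two 4x4 squares offset by one voxel: 16 voxels each, 9 overlapping.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True
score = dice(pred, truth)   # 2*9 / (16 + 16) = 0.5625
```

Because Dice normalizes by the summed mask sizes, small structures such as tumors and vessels are penalized heavily for boundary errors, which is one reason the review finds them harder to segment than the whole organ.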
Affiliation(s)
- Jessica C Delmoral: Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal
- João Manuel R S Tavares: Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal

14.
Ouyang X, Gu D, Li X, Zhou W, Chen Q, Zhan Y, Zhou XS, Shi F, Xue Z, Shen D. Towards a general computed tomography image segmentation model for anatomical structures and lesions. Commun Eng 2024; 3:143. PMID: 39397081. DOI: 10.1038/s44172-024-00287-0.
Abstract
Numerous deep-learning models have been developed using task-specific data, but they ignore the inherent connections among different tasks. By jointly learning a wide range of segmentation tasks, we show that a general medical image segmentation model can improve segmentation performance for computed tomography (CT) volumes. The proposed general CT image segmentation (gCIS) model uses a common transformer-based encoder for all tasks and incorporates automatic pathway modules for task-prompt-based decoding. It is trained on one of the largest datasets assembled for this purpose, comprising 36,419 CT scans and 83 tasks. gCIS can automatically perform various segmentation tasks through the automatic pathway modules of its decoding networks, driven by text prompt inputs, achieving an average Dice coefficient of 82.84%. Furthermore, the proposed automatic pathway routing mechanism allows the network's parameters to be pruned at deployment, and gCIS can be quickly adapted to unseen tasks with minimal training samples while maintaining strong performance.
Affiliation(s)
- Xi Ouyang: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Dongdong Gu: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Xuejian Li: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Wenqi Zhou: Department of Research and Development, United Imaging Intelligence, Shanghai, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Qianqian Chen: Department of Research and Development, United Imaging Intelligence, Shanghai, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yiqiang Zhan: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Xiang Sean Zhou: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Feng Shi: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Zhong Xue: Department of Research and Development, United Imaging Intelligence, Shanghai, China
- Dinggang Shen: Department of Research and Development, United Imaging Intelligence, Shanghai, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China

15.
Wan M, Zhu J, Che Y, Cao X, Han X, Si X, Wang W, Shu C, Luo M, Zhang X. BIF-Net: Boundary information fusion network for abdominal aortic aneurysm segmentation. Comput Biol Med 2024; 183:109191. PMID: 39393127. DOI: 10.1016/j.compbiomed.2024.109191.
Abstract
Accurate abdominal aortic aneurysm (AAA) segmentation is important for assisting clinicians in diagnosis and treatment planning. However, existing segmentation methods make poor use of the semantic information at vessel boundaries, which is disadvantageous for segmenting AAA, where vessel diameter varies widely (from 4 mm to 85 mm). To tackle this problem, we introduce a boundary information fusion network (BIF-Net) specially designed for AAA segmentation. BIF-Net first constructs convolutional kernels based on Gabor and Sobel operators, enriching global semantic features and localization information through the Gabor and Sobel dilated convolution (GSDC) module. Additionally, BIF-Net recovers boundary feature information lost during sampling through the guided filtering feature supplementation (GFFS) module and the channel-spatial attention module (CSAM), enhancing the ability to capture targets with diverse shapes and boundary features. Finally, we introduce a boundary feature loss function to alleviate the impact of the imbalance between positive and negative samples. The results demonstrate that BIF-Net outperforms current state-of-the-art methods across multiple evaluation metrics, achieving the highest Dice similarity coefficients (DSC) of 93.29% and 91.01% on the preoperative and postoperative datasets, respectively, improvements of 6.86% and 3.85% over the state of the art. Owing to its powerful boundary feature extraction, BIF-Net is a competitive AAA segmentation method with significant potential for application in the diagnosis and treatment of AAA.
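The GSDC module above builds convolution kernels from the classical Gabor and Sobel operators rather than learning them from scratch. A minimal sketch of constructing a 2D Gabor kernel from its standard formula, a Gaussian envelope modulating an oriented cosine carrier (purely illustrative; BIF-Net's actual kernel parameterization is not specified here):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real 2D Gabor kernel: Gaussian envelope times a cosine carrier.
    theta = orientation (rad), lam = carrier wavelength, gamma = aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_t / lam + psi)

# A small bank of oriented kernels, as a fixed-weight convolution layer might use.
bank = [gabor_kernel(7, sigma=2.0, theta=t, lam=4.0)
        for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
```

Sweeping theta yields a filter bank tuned to edges at different orientations, which is what makes such fixed kernels a cheap source of boundary information compared with learning equivalent filters from data.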
Affiliation(s)
- Mingyu Wan: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Jing Zhu: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Yue Che: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Xiran Cao: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Xiao Han: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Xinhui Si: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China
- Wei Wang: Department of Radiology, Beijing Rehabilitation Hospital of Capital Medical University, Beijing, 100144, China
- Chang Shu: Department of Vascular Surgery, Fuwai Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing, 100037, China; Department of Vascular Surgery, Second Xiangya Hospital, Central South University, Number 139, Renmin Road, Changsha, 410011, China
- Mingyao Luo: Department of Vascular Surgery, Fuwai Hospital, Chinese Academy of Medical Science & Peking Union Medical College, Beijing, 100037, China; Department of Vascular Surgery, Fuwai Yunnan Cardiovascular Hospital, Affiliated Cardiovascular Hospital of Kunming Medical University, Kunming, 650102, China
- Xuelan Zhang: School of Mathematics and Physics, University of Science and Technology Beijing, Beijing, 100083, China

16.
Draguet C, Populaire P, Vera MC, Fredriksson A, Haustermans K, Lee JA, Barragán-Montero AM, Sterpin E. A comparative study on automatic treatment planning for online adaptive proton therapy of esophageal cancer: which combination of deformable registration and deep learning planning tools performs the best? Phys Med Biol 2024; 69:205013. PMID: 39332445. DOI: 10.1088/1361-6560/ad80f6.
Abstract
OBJECTIVE: To demonstrate the feasibility of integrating fully-automated online adaptive proton therapy (OAPT) strategies within a commercially available treatment planning system, and to underscore what limits their clinical implementation. These strategies leverage existing deformable image registration (DIR) algorithms and state-of-the-art deep learning (DL) networks for organ segmentation and proton dose prediction.

APPROACH: Four OAPT strategies featuring automatic segmentation and robust optimization were evaluated on a cohort of 17 patients, each undergoing a repeat CT scan. (1) DEF-INIT combines deformably registered contours with template-based optimization. (2) DL-INIT, (3) DL-DEF, and (4) DL-DL employ a nnU-Net DL network for organ segmentation and a controlling-ROI-guided DIR algorithm for internal clinical target volume (iCTV) segmentation. DL-INIT uses this segmentation alongside template-based optimization, DL-DEF integrates it with a dose-mimicking (DM) step using a reference deformed dose, and DL-DL merges it with DM on a reference DL-predicted dose. All strategies were evaluated on manual contours and on the contours used for optimization, and compared with manually adapted plans. Key dose-volume metrics, such as the iCTV D98%, are reported.

MAIN RESULTS: The iCTV D98% was comparable between manually adapted plans and all strategies in nominal cases, but dropped to 20 Gy in worst-case scenarios for a few patients per strategy, highlighting the need to correct segmentation errors in the target volume. Evaluations on optimization contours showed minimal relative error, with some outliers, particularly for the template-based strategies (DEF-INIT and DL-INIT). DL-DEF achieves a good trade-off between speed and dosimetric quality, showing a passing rate (iCTV D98% > 94%) of 90% when evaluated against 2, 4, and 5 mm setup errors, and of 88% when evaluated against a 7 mm setup error. While template-based methods are more rigid, DL-DEF and DL-DL can be further enhanced with proper tuning of the DM algorithm.

SIGNIFICANCE: Among the investigated strategies, DL-DEF and DL-DL demonstrated promising results, with adaptation achievable within 10 min, and significant potential for further improvement.
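The iCTV D98% used above is a dose-volume metric: the minimum dose received by the best-covered 98% of a structure's volume, equivalently the (100 - 98)th percentile of the in-structure dose distribution. A minimal sketch of computing Dx% from a dose array and a structure mask (function and variable names are illustrative):

```python
import numpy as np

def dose_at_volume(dose, mask, volume_pct):
    """Dx%: dose received by at least `volume_pct` percent of the masked volume,
    i.e. the (100 - x)th percentile of the in-structure dose values."""
    struct_dose = dose[mask.astype(bool)]
    return float(np.percentile(struct_dose, 100.0 - volume_pct))

# Toy example: a structure whose voxel doses are 0..99 Gy.
dose = np.arange(100, dtype=float)
mask = np.ones(100, dtype=bool)
d98 = dose_at_volume(dose, mask, 98.0)   # 2nd percentile of 0..99 -> 1.98 Gy
d50 = dose_at_volume(dose, mask, 50.0)   # median -> 49.5 Gy
```

Because D98% probes the cold tail of the dose distribution inside the target, it is very sensitive to target-contour errors, which is exactly why segmentation mistakes drive the worst-case drops reported above.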
Affiliation(s)
- C Draguet: UCLouvain, Institut de Recherche Expérimentale et Clinique, Molecular Imaging Radiotherapy and Oncology (MIRO), Brussels, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Leuven, Belgium
- P Populaire: Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Leuven, Belgium; Department of Radiation Oncology, Laboratory of Experimental Radiotherapy, University Hospitals Leuven, Leuven, Belgium
- M Chocan Vera: UCLouvain, Institut de Recherche Expérimentale et Clinique, Molecular Imaging Radiotherapy and Oncology (MIRO), Brussels, Belgium
- A Fredriksson
- K Haustermans: Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Leuven, Belgium; Department of Radiation Oncology, Laboratory of Experimental Radiotherapy, University Hospitals Leuven, Leuven, Belgium
- J A Lee: UCLouvain, Institut de Recherche Expérimentale et Clinique, Molecular Imaging Radiotherapy and Oncology (MIRO), Brussels, Belgium
- A M Barragán-Montero: UCLouvain, Institut de Recherche Expérimentale et Clinique, Molecular Imaging Radiotherapy and Oncology (MIRO), Brussels, Belgium
- E Sterpin: UCLouvain, Institut de Recherche Expérimentale et Clinique, Molecular Imaging Radiotherapy and Oncology (MIRO), Brussels, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Leuven, Belgium

17.
Lin D, Kaye S, Chen M, Lyanna A, Ye L, Hammond LA, Gao J. Transcriptome and proteome profiling reveals TREM2-dependent and -independent glial response and metabolic perturbation in an Alzheimer's mouse model. J Biol Chem 2024:107874. PMID: 39395805. DOI: 10.1016/j.jbc.2024.107874.
Abstract
Elucidating the intricate molecular mechanisms of Alzheimer's disease (AD) requires a multidimensional analysis incorporating various omics data. In this study, we employed transcriptome and proteome profiling of AppNL-G-F, a human APP knock-in model of amyloidosis, at the early and mid-stages of amyloid-beta (Aβ) pathology to delineate the impacts of Aβ deposition on brain cells. By contrasting AppNL-G-F mice with TREM2 (Triggering receptor expressed on myeloid cells 2) knockout models, our study further investigates the role of TREM2, a well-known AD risk gene, in influencing microglial responses to Aβ pathology. Our results highlight altered microglial states as a central feature of Aβ pathology, characterized by the significant upregulation of microglia-specific genes related to immune responses such as complement system and antigen presentation, and catabolic pathways such as phagosome formation and lysosome biogenesis. The absence of TREM2 markedly diminishes the induction of these genes, impairs Aβ clearance, and exacerbates dystrophic neurite formation. Importantly, TREM2 is required for the microglial engagement with Aβ plaques and the formation of compact Aβ plaque cores. Furthermore, this study reveals substantial disruptions in energy metabolism and protein synthesis, signaling a shift from anabolism to catabolism in response to Aβ deposition. This metabolic alteration, coupled with a decrease in synaptic protein abundance, occurs independently of TREM2, suggesting the direct effects of Aβ deposition on synaptic integrity and plasticity. In summary, our findings demonstrate altered microglial states and metabolic disruption following Aβ deposition, offering mechanistic insights into Aβ pathology and highlighting the potential of targeting these pathways in AD therapy.
Affiliation(s)
- Da Lin: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Sarah Kaye: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Min Chen: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Amogh Lyanna: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Lihua Ye: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Luke A Hammond: Department of Neurology, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA
- Jie Gao: Department of Neuroscience, The Ohio State University Wexner Medical Center, Columbus, OH, 43210, USA

18.
Boyd C, Brown GC, Kleinig TJ, Mayer W, Dawson J, Jenkinson M, Bezak E. Hyperparameter selection for dataset-constrained semantic segmentation: Practical machine learning optimization. J Appl Clin Med Phys 2024:e14542. PMID: 39387832. DOI: 10.1002/acm2.14542.
Abstract
PURPOSE/AIM This paper provides a pedagogical example of systematic machine learning optimization for small-dataset image segmentation, with an emphasis on hyperparameter selection. A simple process is presented for medical physicists to examine hyperparameter optimization, and it is applied to a case study demonstrating the benefit of the method. MATERIALS AND METHODS An unrestricted public computed tomography (CT) dataset with binary organ segmentations was used to develop a multiclass segmentation model. To start the optimization process, a preliminary manual search of hyperparameters was conducted, and from there a grid search identified the most influential result metrics. A total of 658 different models were trained over 2100 h, using 13 160 effective patients. The results were analyzed with random forest regression to identify the relative impact of each hyperparameter. RESULTS Metric-implied segmentation quality (accuracy 96.8%, precision 95.1%) and visual inspection were found to be mismatched. In this work batch normalization was the most important hyperparameter, although performance varied with the hyperparameters and metrics selected. Targeted grid-search optimization followed by random forest analysis of relative hyperparameter importance proved an easily implementable sensitivity-analysis approach. CONCLUSION The proposed optimization method gives a systematic and quantitative approach to something intuitively understood: that hyperparameters change model performance. Even the grid-search optimization with random forest analysis presented here can be informative within hardware and data quality/availability limitations, adding confidence to model validity and minimizing decision-making risks. By providing a guided methodology, this work helps medical physicists improve their model optimization, irrespective of the specific challenges posed by their datasets and model designs.
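The grid-search-plus-importance workflow the abstract describes can be sketched as follows. The hyperparameter grid, the toy `train_and_score` surrogate, and the variance-based ranking are all illustrative assumptions: the study trained real segmentation models and used random forest regression for importance, whereas this sketch ranks hyperparameters by how much the mean score varies across their settings.

```python
from itertools import product
from statistics import mean, pvariance

# Illustrative hyperparameter grid (a stand-in for the paper's search space).
grid = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_norm": [False, True],
    "batch_size": [2, 4],
}

def train_and_score(cfg):
    # Toy surrogate for "train a model, return a quality metric":
    # batch norm helps most, learning rate matters a little.
    score = 0.70
    score += 0.15 if cfg["batch_norm"] else 0.0
    score += {1e-4: 0.00, 1e-3: 0.04, 1e-2: 0.01}[cfg["learning_rate"]]
    score += 0.005 * cfg["batch_size"]
    return score

# Exhaustive grid search over all combinations.
results = []
for values in product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    results.append((cfg, train_and_score(cfg)))

# Rank hyperparameters by the variance of mean scores across their settings,
# a simple sensitivity proxy for random-forest feature importance.
def importance(name):
    means = [
        mean(s for cfg, s in results if cfg[name] == v)
        for v in grid[name]
    ]
    return pvariance(means)

ranking = sorted(grid, key=importance, reverse=True)
print(ranking[0])  # batch_norm dominates in this toy setup
```

In a real study each `train_and_score` call is a full training run, which is why the paper needed 2100 h for 658 models; the analysis step itself is cheap.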
Affiliation(s)
- Chris Boyd: Allied Health and Human Performance, University of South Australia, Adelaide, Australia; Medical Physics and Radiation Safety, South Australia Medical Imaging, Adelaide, Australia
- Gregory C Brown: Allied Health and Human Performance, University of South Australia, Adelaide, Australia
- Timothy J Kleinig: Department of Neurology, Royal Adelaide Hospital, Adelaide, Australia; Adelaide Medical School, The University of Adelaide, Adelaide, Australia
- Wolfgang Mayer: Discipline of Surgery, University of Adelaide, Adelaide, Australia
- Joseph Dawson: Department of Vascular and Endovascular Surgery, Royal Adelaide Hospital, Adelaide, Australia; Industrial AI Research Centre, UniSA STEM, University of South Australia, Adelaide, Australia
- Mark Jenkinson: Australian Institute for Machine Learning (AIML), School of Computer and Mathematical Sciences, University of Adelaide, Adelaide, Australia; South Australian Health and Medical Research Institute (SAHMRI), Adelaide, Australia; Wellcome Trust Centre for Integrative Neuroimaging (WIN), Nuffield Department of Clinical Neurosciences (FMRIB), University of Oxford, Oxford, UK
- Eva Bezak: Allied Health and Human Performance, University of South Australia, Adelaide, Australia; Department of Physics, University of Adelaide, Adelaide, Australia

19
Mahmutoglu MA, Rastogi A, Schell M, Foltyn-Dumitru M, Baumgartner M, Maier-Hein KH, Deike-Hofmann K, Radbruch A, Bendszus M, Brugnara G, Vollmuth P. Deep learning-based defacing tool for CT angiography: CTA-DEFACE. Eur Radiol Exp 2024; 8:111. [PMID: 39382818 PMCID: PMC11465008 DOI: 10.1186/s41747-024-00510-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Accepted: 09/05/2024] [Indexed: 10/10/2024] Open
Abstract
The growing use of artificial neural network (ANN) tools for computed tomography angiography (CTA) data analysis underscores the necessity for elevated data protection measures. We aimed to establish an automated defacing pipeline for CTA data. In this retrospective study, CTA data from multi-institutional cohorts were used to annotate face masks (n = 100) and train an ANN model, which was subsequently tested on an external institution's dataset (n = 50) and compared to a publicly available defacing algorithm. Face detection (MTCNN) and verification (FaceNet) networks were applied to measure the similarity between the original and defaced CTA images. The Dice similarity coefficient (DSC), face detection probability, and face similarity measures were calculated to evaluate model performance. The CTA-DEFACE model effectively segmented soft face tissue in CTA data, achieving a DSC of 0.94 ± 0.02 (mean ± standard deviation) on the test set. Our model was benchmarked against a publicly available defacing algorithm; after applying the face detection and verification networks, it showed a substantially reduced face detection probability (p < 0.001) and reduced similarity to the original CTA image (p < 0.001). The CTA-DEFACE model enables robust and precise defacing of CTA data. The trained network is publicly accessible at www.github.com/neuroAI-HD/CTA-DEFACE. RELEVANCE STATEMENT: The ANN model CTA-DEFACE, developed for automatic defacing of CT angiography images, achieves significantly lower face detection probabilities and greater dissimilarity from the original images compared to a publicly available model. The algorithm has been externally validated and is publicly accessible. KEY POINTS: The developed ANN model (CTA-DEFACE) automatically generates face masks for CT angiography images. CTA-DEFACE offers superior deidentification compared to a publicly available model. Through graphics processing unit optimization, our model ensures rapid processing of medical images. Our model underwent external validation, underscoring its reliability for real-world application.
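Face verification of the kind used in this evaluation compares embedding vectors of the original and defaced images; successful defacing should push the embedding far from the original. A minimal sketch with made-up four-dimensional embeddings (real FaceNet embeddings are high-dimensional and come from the network, not hand-written lists):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

original = [0.2, 0.9, -0.4, 0.1]    # hypothetical embedding of the original face
defaced  = [-0.5, 0.1, 0.8, 0.3]    # hypothetical embedding after defacing
same     = [0.21, 0.88, -0.39, 0.12]  # hypothetical re-render of the same face

# Defacing should push the embedding far from the original ...
print(cosine_similarity(original, defaced) < 0.5)  # True
# ... while the same face stays close to itself.
print(cosine_similarity(original, same) > 0.9)     # True
```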
Affiliation(s)
- Mustafa Ahmed Mahmutoglu: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Aditya Rastogi: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Marianne Schell: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Martha Foltyn-Dumitru: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Michael Baumgartner: Division for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany; Helmholtz Imaging, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Katerina Deike-Hofmann: Department of Neuroradiology, Bonn University Hospital, Bonn, Germany; Clinical Neuroimaging Group, German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
- Alexander Radbruch: Department of Neuroradiology, Bonn University Hospital, Bonn, Germany; Clinical Neuroimaging Group, German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
- Martin Bendszus: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Gianluca Brugnara: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Philipp Vollmuth: Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany; Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany

20
Combes BF, Kalva SK, Benveniste PL, Tournant A, Law MH, Newton J, Krüger M, Weber RZ, Dias I, Noain D, Dean-Ben XL, Konietzko U, Baumann CR, Gillberg PG, Hock C, Nitsch RM, Cohen-Adad J, Razansky D, Ni R. Spiral volumetric optoacoustic tomography of reduced oxygen saturation in the spinal cord of M83 mouse model of Parkinson's disease. Eur J Nucl Med Mol Imaging 2024:10.1007/s00259-024-06938-w. [PMID: 39382580 DOI: 10.1007/s00259-024-06938-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2024] [Accepted: 09/29/2024] [Indexed: 10/10/2024]
Abstract
PURPOSE Metabolism and bioenergetics in the central nervous system play important roles in the pathophysiology of Parkinson's disease (PD). Here, we employed a multimodal imaging approach to assess oxygenation changes in the spinal cord of the transgenic M83 murine model of PD, which overexpresses the A53T mutant form of alpha-synuclein, in comparison with non-transgenic littermates. METHODS In vivo spiral volumetric optoacoustic tomography (SVOT) was performed to assess oxygen saturation (sO2) in the spinal cords of M83 mice and non-transgenic littermates. Ex vivo high-field T1-weighted (T1w) magnetic resonance imaging (MRI) at 9.4 T was used to assess volumetric alterations in the spinal cord. 3D analysis of the SVOT data and deep learning-based automatic segmentation of the T1w MRI data of the mouse spinal cord were developed for quantification. Immunostaining for phosphorylated alpha-synuclein (pS129 α-syn) and for vascular organization (CD31 and GLUT1) was performed after the MRI scans. RESULTS In vivo SVOT imaging revealed a lower sO2(SVOT) in the spinal cord of M83 mice compared with non-transgenic littermates at sub-100 μm spatial resolution. Ex vivo MRI, assisted by the in-house developed deep learning-based automatic segmentation (validated by manual analysis), revealed no volumetric atrophy in the spinal cord of M83 mice compared with non-transgenic littermates at 50 μm spatial resolution. The vascular network was not impaired in the spinal cord of M83 mice in the presence of pS129 α-syn accumulation. CONCLUSION We developed tools for deep learning-based segmentation of mouse spinal cord structural MRI data and for volumetric analysis of sO2(SVOT) data, and demonstrated non-invasive high-resolution imaging of reduced sO2(SVOT) in the absence of volumetric structural changes in the spinal cord of the M83 mouse model of PD.
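Optoacoustically derived oxygen saturation is the ratio of oxygenated to total hemoglobin; a minimal sketch of that calculation, with made-up per-voxel concentrations rather than study data:

```python
def oxygen_saturation(hbo2, hb):
    """sO2 = [HbO2] / ([HbO2] + [Hb]); concentrations in arbitrary units."""
    total = hbo2 + hb
    if total == 0:
        raise ValueError("no hemoglobin signal in this voxel")
    return hbo2 / total

# Hypothetical spectrally unmixed concentrations (arbitrary units).
control_voxel = oxygen_saturation(68, 32)  # 0.68
m83_voxel = oxygen_saturation(55, 45)      # 0.55, i.e. lower sO2
print(control_voxel, m83_voxel)
```

In practice the HbO2 and Hb maps come from spectral unmixing of multi-wavelength optoacoustic data; the ratio itself is this simple.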
Affiliation(s)
- Benjamin F Combes: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Sandeep Kumar Kalva: Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland; Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Pierre-Louis Benveniste: NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
- Agathe Tournant: Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland
- Man Hoi Law: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Joshua Newton: NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
- Maik Krüger: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Rebecca Z Weber: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Inês Dias: Department of Neurology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Daniela Noain: Department of Neurology, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), University of Zurich, Zurich, Switzerland; Center of Competence Sleep and Health Zurich, University of Zurich, Zurich, Switzerland
- Xose Luis Dean-Ben: Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland; Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Uwe Konietzko: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland
- Christian R Baumann: Department of Neurology, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), University of Zurich, Zurich, Switzerland; Center of Competence Sleep and Health Zurich, University of Zurich, Zurich, Switzerland
- Per-Göran Gillberg: Department of Neurobiology, Care Sciences and Society, Karolinska Institute, Stockholm, Sweden
- Christoph Hock: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland; Neurimmune, Schlieren, Switzerland
- Roger M Nitsch: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland; Neurimmune, Schlieren, Switzerland
- Julien Cohen-Adad: NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada; Mila - Quebec AI Institute, Montreal, QC, Canada
- Daniel Razansky: Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland; Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
- Ruiqing Ni: Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland; Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Zurich, Switzerland; Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland

21
Kamel P, Khalid M, Steger R, Kanhere A, Kulkarni P, Parekh V, Yi PH, Gandhi D, Bodanapally U. Dual Energy CT for Deep Learning-Based Segmentation and Volumetric Estimation of Early Ischemic Infarcts. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01294-5. [PMID: 39384719 DOI: 10.1007/s10278-024-01294-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/15/2024] [Revised: 09/13/2024] [Accepted: 10/03/2024] [Indexed: 10/11/2024]
Abstract
Ischemic changes are not visible on non-contrast head CT until several hours after infarction, though deep convolutional neural networks have shown promise in the detection of subtle imaging findings. This study aims to assess whether dual-energy CT (DECT) acquisition can improve early infarct visibility for machine learning. The retrospective dataset consisted of 330 DECTs acquired up to 48 h prior to confirmation of a DWI-positive infarct on MRI between 2016 and 2022. Infarct segmentation maps were generated from the MRI and co-registered to the CT to serve as ground truth for segmentation. A self-configuring 3D nnU-Net was trained for segmentation on (1) standard 120 kV mixed images, (2) 190 keV virtual monochromatic images, and (3) 120 kV + 190 keV images as dual-channel inputs. Algorithm performance was assessed on a test set using Dice scores with paired t-tests. Global aggregate Dice scores were 0.616, 0.645, and 0.665 for the standard 120 kV images, 190 keV images, and combined-channel inputs, respectively. Differences in overall Dice scores were statistically significant, with the highest performance for combined-channel inputs (p < 0.01). Small but statistically significant differences were observed for infarcts between 6 and 12 h from last known well, with higher performance for larger infarcts. Volumetric accuracy trended higher with combined inputs, but differences were not statistically significant (p = 0.07). Supplementation of standard head CT images with dual-energy data provides earlier and more accurate segmentation of infarcts for machine learning, particularly between 6 and 12 h after last known well.
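The paired comparison of Dice scores between input configurations can be illustrated with a hand-rolled paired t statistic; the per-case scores below are hypothetical, not the study's results:

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for matched samples, e.g. per-case Dice scores
    of two models evaluated on the same test cases."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the paired differences (n - 1 in the denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / math.sqrt(var_d / n)

# Hypothetical per-case Dice scores: dual-channel vs single-energy input.
combined = [0.70, 0.66, 0.72, 0.61, 0.68]
standard = [0.64, 0.63, 0.66, 0.58, 0.65]
t = paired_t(combined, standard)
print(round(t, 2))
```

A p-value would then be read from the t distribution with n - 1 degrees of freedom (e.g. via `scipy.stats.ttest_rel`); the key point is that pairing on the same cases removes between-case variance from the comparison.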
Affiliation(s)
- Peter Kamel: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA; University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Mazhar Khalid: Department of Neurology, University of Maryland School of Medicine, Baltimore, MD, USA
- Rachel Steger: University of Maryland School of Medicine, Baltimore, MD, USA
- Adway Kanhere: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA; University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Pranav Kulkarni: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA; University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Vishwa Parekh: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA; University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Paul H Yi: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA; University of Maryland Medical Intelligent Imaging (UM2ii) Center, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Dheeraj Gandhi: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA
- Uttam Bodanapally: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, 22 S. Greene Street, Baltimore, MD, 21201, USA

22
Schroeder C, Gatidis S, Kelemen O, Schütz L, Bonzheim I, Muyas F, Martus P, Admard J, Armeanu-Ebinger S, Gückel B, Küstner T, Garbe C, Flatz L, Pfannenberg C, Ossowski S, Forschner A. Tumour-informed liquid biopsies to monitor advanced melanoma patients under immune checkpoint inhibition. Nat Commun 2024; 15:8750. [PMID: 39384805 PMCID: PMC11464631 DOI: 10.1038/s41467-024-52923-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Accepted: 09/20/2024] [Indexed: 10/11/2024] Open
Abstract
Immune checkpoint inhibitors (ICI) have significantly improved overall survival in melanoma patients. However, 60% experience severe adverse events, and early response markers are lacking. Circulating tumour DNA (ctDNA) is a promising biomarker for treatment response and recurrence detection. The prospective PET/LIT study included 104 patients receiving palliative combined or adjuvant ICI. Tumour-informed sequencing panels monitoring 30 patient-specific variants were designed, and 321 liquid biopsies from 87 patients were sequenced. Mean sequencing depth after deduplication using UMIs was 6000×, and the error rate of UMI-corrected reads was 2.47 × 10⁻⁴. Variant allele fractions correlated with PET/CT MTV (rho = 0.69), S100 (rho = 0.72), and LDH (rho = 0.54). A decrease in allele fractions between T1 and T2 was associated with improved PFS and OS in the palliative cohort (p = 0.008 and p < 0.001, respectively). ctDNA was detected in 76.9% of adjuvant patients with relapse (n = 10/13), while all patients without progression (n = 9) remained ctDNA negative. Tumour-informed liquid biopsies are a reliable tool for monitoring treatment response and early relapse in melanoma patients under ICI.
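The reported correlations between variant allele fractions and tumour burden markers are Spearman rank correlations, which can be computed as the Pearson correlation of the ranks; the paired values below are invented for illustration, not study data:

```python
def rank(values):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired measurements: ctDNA variant allele fraction vs PET/CT MTV.
vaf = [0.001, 0.004, 0.010, 0.030, 0.080]
mtv = [2.0, 1.5, 8.0, 20.0, 55.0]
print(round(spearman_rho(vaf, mtv), 2))
```

Because only the ranks matter, Spearman's rho is robust to the strongly skewed scales typical of allele fractions and tumour volumes.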
Affiliation(s)
- Christopher Schroeder: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany; German Cancer Consortium (DKTK), partner site Tübingen, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sergios Gatidis: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Tübingen, Germany
- Olga Kelemen: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany
- Leon Schütz: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany
- Irina Bonzheim: Institute of Pathology and Neuropathology, University Hospital Tübingen, Tübingen, Germany
- Francesc Muyas: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany
- Peter Martus: Institute for Clinical Epidemiology and Applied Biostatistics (IKEaB), Tübingen, Germany
- Jakob Admard: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany; NGS Competence Center Tübingen (NCCT), University of Tübingen, Tübingen, Germany
- Sorin Armeanu-Ebinger: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany
- Brigitte Gückel: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Tübingen, Germany
- Thomas Küstner: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Tübingen, Germany
- Claus Garbe: Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Lukas Flatz: Department of Dermatology, University Hospital Tübingen, Tübingen, Germany
- Christina Pfannenberg: Department of Radiology, Diagnostic and Interventional Radiology, University Hospital Tübingen, Tübingen, Germany
- Stephan Ossowski: Institute of Medical Genetics and Applied Genomics, University of Tübingen, Tübingen, Germany; German Cancer Consortium (DKTK), partner site Tübingen, German Cancer Research Center (DKFZ), Heidelberg, Germany; NGS Competence Center Tübingen (NCCT), University of Tübingen, Tübingen, Germany; Institute for Bioinformatics and Medical Informatics (IBMI), University of Tübingen, Tübingen, Germany
- Andrea Forschner: Department of Dermatology, University Hospital Tübingen, Tübingen, Germany

23
Zheng H, Zou W, Hu N, Wang J. Joint segmentation of tumors in 3D PET-CT images with a network fusing multi-view and multi-modal information. Phys Med Biol 2024; 69:205009. [PMID: 39317235 DOI: 10.1088/1361-6560/ad7f1b] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2024] [Accepted: 09/24/2024] [Indexed: 09/26/2024]
Abstract
Objective. Joint segmentation of tumors in positron emission tomography-computed tomography (PET-CT) images is crucial for precise treatment planning. However, current segmentation methods often use addition or concatenation to fuse PET and CT images, which potentially overlooks the nuanced interplay between these modalities. Additionally, these methods often neglect multi-view information that is helpful for more accurately locating and segmenting the target structure. This study aims to address these disadvantages and develop a deep learning-based algorithm for joint segmentation of tumors in PET-CT images. Approach. To address these limitations, we propose the Multi-view Information Enhancement and Multi-modal Feature Fusion Network (MIEMFF-Net) for joint tumor segmentation in three-dimensional PET-CT images. Our model incorporates a dynamic multi-modal fusion strategy to effectively exploit the metabolic and anatomical information from PET and CT images, and a multi-view information enhancement strategy to effectively recover information lost during upsampling. A Multi-scale Spatial Perception Block is proposed to effectively extract information from different views and reduce redundant interference in the multi-view feature extraction process. Main results. The proposed MIEMFF-Net achieved a Dice score of 83.93%, a precision of 81.49%, a sensitivity of 87.89%, and an IoU of 69.27% on the Soft Tissue Sarcomas dataset, and a Dice score of 76.83%, a precision of 86.21%, a sensitivity of 80.73%, and an IoU of 65.15% on the AutoPET dataset. Significance. Experimental results demonstrate that MIEMFF-Net outperforms existing state-of-the-art models, which implies potential applications of the proposed method in clinical practice.
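The four reported metrics can all be derived from the overlap counts of a predicted mask and a ground-truth mask; a generic sketch with toy masks, not the paper's evaluation code:

```python
def segmentation_metrics(pred, truth):
    """Dice, precision, sensitivity and IoU from flat binary masks (0/1)."""
    tp = sum(p & t for p, t in zip(pred, truth))        # predicted and true
    fp = sum(p & (1 - t) for p, t in zip(pred, truth))  # predicted, not true
    fn = sum((1 - p) & t for p, t in zip(pred, truth))  # true, not predicted
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "iou": tp / (tp + fp + fn),
    }

pred  = [1, 1, 1, 1, 0, 0, 0, 0]
truth = [1, 1, 1, 0, 1, 0, 0, 0]
m = segmentation_metrics(pred, truth)
print(m)
```

Note that Dice and IoU are monotonic transforms of each other (Dice = 2·IoU / (1 + IoU)), which is why papers often report both for comparability rather than for independent information.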
Affiliation(s)
- HaoYang Zheng: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Wei Zou: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Nan Hu: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Jiajun Wang: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China

24
Shen Q, Zheng B, Li W, Shi X, Luo K, Yao Y, Li X, Lv S, Tao J, Wei Q. MixUNETR: A U-shaped network based on W-MSA and depth-wise convolution with channel and spatial interactions for zonal prostate segmentation in MRI. Neural Netw 2024; 181:106782. [PMID: 39388995 DOI: 10.1016/j.neunet.2024.106782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2024] [Revised: 09/26/2024] [Accepted: 10/02/2024] [Indexed: 10/12/2024]
Abstract
Magnetic resonance imaging (MRI) plays a pivotal role in diagnosing and staging prostate cancer. Precise delineation of the peripheral zone (PZ) and transition zone (TZ) within prostate MRI is essential for accurate diagnosis and subsequent artificial intelligence-driven analysis. However, existing segmentation methods are limited by the ambiguous boundaries, shape variations, and texture complexities between PZ and TZ. Moreover, they suffer from inadequate modeling capability and limited receptive fields. To address these challenges, we propose an Enhanced MixFormer, which integrates window-based multi-head self-attention (W-MSA) and depth-wise convolution through a parallel design with cross-branch bidirectional interaction. We further introduce MixUNETR, which uses multiple Enhanced MixFormers as its encoder to extract features from both PZ and TZ in prostate MRI. This design effectively enlarges the receptive field and enhances the modeling capability of W-MSA, ultimately improving the extraction of both global and local feature information from PZ and TZ and thereby addressing mis-segmentation and the challenge of delineating the boundary between them. Extensive experiments compared MixUNETR with several state-of-the-art methods on the public Prostate158 and ProstateX datasets and a private dataset. The results consistently demonstrate the accuracy and robustness of MixUNETR in prostate MRI segmentation. Our code is available at https://github.com/skyous779/MixUNETR.git.
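The parameter savings that motivate pairing depth-wise convolution with attention can be checked with a quick count. The formulas below assume a generic depth-wise-separable block (a depth-wise k×k convolution followed by a point-wise 1×1 convolution) and are not tied to MixUNETR's actual layer sizes:

```python
def conv2d_params(c_in, c_out, k, bias=True):
    """Parameter count of a standard 2D convolution."""
    return c_out * (c_in * k * k + (1 if bias else 0))

def depthwise_separable_params(c_in, c_out, k, bias=True):
    """Depth-wise conv (one k x k filter per input channel) followed by
    a 1x1 point-wise conv that mixes channels."""
    depthwise = c_in * (k * k + (1 if bias else 0))
    pointwise = c_out * (c_in + (1 if bias else 0))
    return depthwise + pointwise

standard = conv2d_params(64, 64, 3, bias=False)                # 36864
separable = depthwise_separable_params(64, 64, 3, bias=False)  # 576 + 4096 = 4672
print(standard, separable, round(standard / separable, 1))
```

The roughly 8x reduction at these sizes is why depth-wise branches are a cheap way to add local inductive bias alongside windowed attention.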
Affiliation(s)
- Quanyou Shen: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Bowen Zheng: Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Wenhao Li: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Xiaoran Shi: Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Kun Luo: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China
- Yuqian Yao: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Xinyan Li: School of Biomedical and Pharmaceutical Sciences, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Laboratory of Chemistry and Fine Chemical Engineering Jieyang Center, Jieyang, 515200, China
- Shidong Lv: Department of Urology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jie Tao: School of Automation, Guangdong University of Technology, Guangzhou, 510006, China; Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control, Guangzhou, 510006, China; Guangdong-Hong Kong Joint Laboratory for Intelligent Decision and Cooperative Control, Guangzhou, 510006, China
- Qiang Wei: Department of Urology, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, 510515, China

25
Buhl ES, Lorenzen EL, Refsgaard L, Nielsen AWM, Brixen ATL, Maae E, Holm HS, Schøler J, Thai LMH, Matthiessen LW, Maraldo MV, Nielsen MM, Johansen MB, Milo ML, Mogensen MB, Nielsen MH, Møller M, Sand M, Schultz P, Al-Rawi SAJ, Esser-Naumann S, Yammeni S, Petersen SE, Offersen BV, Korreman SS. Development and comprehensive evaluation of a national DBCG consensus-based auto-segmentation model for lymph node levels in breast cancer radiotherapy. Radiother Oncol 2024; 201:110567. [PMID: 39374675 DOI: 10.1016/j.radonc.2024.110567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 09/17/2024] [Accepted: 09/29/2024] [Indexed: 10/09/2024]
Abstract
BACKGROUND AND PURPOSE This study aimed to train and validate a multi-institutional deep learning (DL) auto-segmentation model for the nodal clinical target volume (CTVn) in high-risk breast cancer (BC) patients, with both the training and validation datasets created with multi-institutional participation, and with the overall aim of national clinical implementation in Denmark. MATERIALS AND METHODS A gold standard (GS) dataset and a high-quality training dataset were created by 21 BC delineation experts from all radiotherapy centres in Denmark. The delineations were created according to the ESTRO consensus delineation guidelines. Four models were trained: one per laterality and extension of the CTVn internal mammary nodes. The DL models were tested quantitatively on their own test set and against the interobserver variation (IOV) in the GS dataset using geometric metrics such as the Dice similarity coefficient (DSC). A blinded qualitative evaluation was conducted by a national board, which was presented with both DL and manual delineations. RESULTS A median DSC > 0.7 was found for all structures except the CTVn interpectoral node in one of the models. In the qualitative evaluation, 'no corrections needed' was recorded for 297 (36%) of the DL structures and 286 (34%) of the manual delineations. A higher rate of 'major corrections' and 'easier to start from scratch' was found for the manual delineations. The models performed within the IOV of the expert group, with two exceptions. CONCLUSION DL models were developed on a national consensus cohort, performed on par with the IOV between BC experts, and had comparable or higher clinical acceptance than expert manual delineations.
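Interobserver variation of the kind used as the benchmark here is commonly summarised as the mean pairwise Dice across delineators; a minimal sketch with toy masks, not the DBCG data:

```python
from itertools import combinations

def dice(a, b):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

def mean_pairwise_dice(masks):
    """Mean Dice over all unordered pairs of observer delineations."""
    pairs = list(combinations(masks, 2))
    return sum(dice(a, b) for a, b in pairs) / len(pairs)

# Three hypothetical observer delineations of the same target volume.
observers = [
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
]
print(round(mean_pairwise_dice(observers), 3))
```

A model whose Dice against each expert falls within this pairwise range is performing "within the IOV", which is the practical ceiling for delineation tasks without a single objective ground truth.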
Affiliation(s)
- Emma Skarsø Buhl: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Ebbe Laugaard Lorenzen: Laboratory of Radiation Physics, Department of Oncology, Odense University Hospital, Odense, Denmark; Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Lasse Refsgaard: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Anders Winther Mølby Nielsen: Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark; Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark
- Else Maae: Department of Oncology, Vejle Hospital, University Hospital of Southern Denmark, Vejle, Denmark
- Hanne Spangsberg Holm: Department of Oncology, Vejle Hospital, University Hospital of Southern Denmark, Vejle, Denmark
- Joachim Schøler: Department of Oncology, Vejle Hospital, University Hospital of Southern Denmark, Vejle, Denmark
- Linh My Hoang Thai: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Maja Vestmø Maraldo: Department of Oncology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Marie Louise Milo: Department of Oncology, Aalborg University Hospital, Aalborg, Denmark
- Marie Benzon Mogensen: Department of Oncology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Mette Møller: Department of Oncology, Aalborg University Hospital, Aalborg, Denmark
- Maja Sand: Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Peter Schultz: Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Sami Aziz-Jowad Al-Rawi: Department of Clinical Oncology and Palliative Care, Zealand University Hospital, Næstved, Denmark
- Saskia Esser-Naumann: Department of Clinical Oncology and Palliative Care, Zealand University Hospital, Næstved, Denmark
- Sophie Yammeni: Department of Oncology, Aalborg University Hospital, Aalborg, Denmark
- Birgitte Vrou Offersen: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark; Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark; Department of Oncology, Aarhus University Hospital, Aarhus, Denmark
- Stine Sofia Korreman: Danish Centre for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark; Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark; Department of Experimental Clinical Oncology, Aarhus University Hospital, Aarhus, Denmark
26
Zhang Q, Huang Z, Jin Y, Li W, Zheng H, Liang D, Hu Z. Total-Body PET/CT: A Role of Artificial Intelligence? Semin Nucl Med 2024:S0001-2998(24)00078-3. [PMID: 39368911 DOI: 10.1053/j.semnuclmed.2024.09.002]
Abstract
The purpose of this paper is to provide an overview of the cutting-edge applications of artificial intelligence (AI) technology in total-body positron emission tomography/computed tomography (PET/CT) scanning technology and its profound impact on the field of medical imaging. The introduction of total-body PET/CT scanners marked a major breakthrough in medical imaging, as their superior sensitivity and ultralong axial fields of view allowed for high-quality PET images of the entire body to be obtained in a single scan, greatly enhancing the efficiency and accuracy of diagnoses. However, this advancement is accompanied by the challenges of increasing data volumes and data complexity levels, which pose severe challenges for traditional image processing and analysis methods. Given the excellent ability of AI technology to process massive and high-dimensional data, the combination of AI technology and ultrasensitive PET/CT can be considered a complementary match, opening a new path for rapidly improving the efficiency of the PET-based medical diagnosis process. Recently, AI technology has demonstrated extraordinary potential in several key areas related to total-body PET/CT, including radiation dose reductions, dynamic parametric imaging refinements, quantitative analysis accuracy improvements, and significant image quality enhancements. The accelerated adoption of AI in clinical practice is of particular interest and is directly driven by the rapid progress made by AI technologies in terms of interpretability; i.e., the decision-making processes of algorithms and models have become more transparent and understandable. In the future, we believe that AI technology will fundamentally reshape the use of PET/CT, not only playing a more critical role in clinical diagnoses but also facilitating the customization and implementation of personalized healthcare solutions, providing patients with safer, more accurate, and more efficient healthcare experiences.
Affiliation(s)
- Qiyang Zhang: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhenxing Huang: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuxi Jin: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wenbo Li: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu: The Research Center for Medical AI, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
27
Zhu H, Shu S, Zhang J. A cascaded FAS-UNet+ framework with iterative optimization strategy for segmentation of organs at risk. Med Biol Eng Comput 2024:10.1007/s11517-024-03208-7. [PMID: 39365519 DOI: 10.1007/s11517-024-03208-7]
Abstract
Segmentation of organs at risk (OARs) in the thorax plays a critical role in radiation therapy for lung and esophageal cancer. Although automatic segmentation of OARs has been extensively studied, it remains challenging due to the varying sizes and shapes of organs, as well as the low contrast between the target and background. This paper proposes a cascaded FAS-UNet+ framework, which integrates convolutional neural networks and nonlinear multigrid theory to solve a modified Mumford-Shah model for segmenting OARs. This framework is equipped with an enhanced iteration block, a coarse-to-fine multiscale architecture, an iterative optimization strategy, and a model ensemble technique. The enhanced iteration block aims to extract multiscale features, while the cascade module is used to refine coarse segmentation predictions. The iterative optimization strategy improves the network parameters to avoid unfavorable local minima. An efficient data augmentation method is also developed to train the network, which significantly improves its performance. During the prediction stage, a weighted ensemble technique combines predictions from multiple models to refine the final segmentation. The proposed cascaded FAS-UNet+ framework was evaluated on the SegTHOR dataset, and the results demonstrate significant improvements in Dice score and Hausdorff Distance (HD). The Dice scores were 95.22% and 95.68%, and the HD values were 0.1024 and 0.1194, for segmentation of the aorta and heart in the official unlabeled dataset, respectively. Our code and trained models are available at https://github.com/zhuhui100/C-FASUNet-plus.
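The Hausdorff Distance reported alongside the Dice score measures the worst-case boundary disagreement between two contours; a minimal sketch over 2-D point sets (the toy coordinates are invented, and real evaluations run over 3-D surface voxels):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two sets of 2-D contour
    points: the largest distance from any point in one set to its
    nearest neighbour in the other set."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(a, b), directed(b, a))

# Toy contours: a single outlier point dominates the distance
pred = [(0.0, 0.0), (1.0, 0.0)]
truth = [(0.0, 0.0), (1.0, 0.0), (4.0, 4.0)]
print(hausdorff(pred, truth))  # 5.0, driven by the (4, 4) outlier
```

Unlike Dice, which rewards bulk overlap, HD is sensitive to a single stray boundary point, which is why the two metrics are usually reported together.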
Affiliation(s)
- Hui Zhu: School of Mathematics and Computational Science, Xiangtan University, Xiangtan, 411105, China; School of Computational Science and Electronics, Hunan Institute of Engineering, Xiangtan, 411104, China; Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan, Hunan, 411105, China
- Shi Shu: School of Mathematics and Computational Science, Xiangtan University, Xiangtan, 411105, China; Hunan Key Laboratory for Computation and Simulation in Science and Engineering, Xiangtan, Hunan, 411105, China
- Jianping Zhang: School of Mathematics and Computational Science, Xiangtan University, Xiangtan, 411105, China; National Center for Applied Mathematics in Hunan, Xiangtan, Hunan, 411105, China
28
Daenen LHBA, van de Worp WRPH, Rezaeifar B, de Bruijn J, Qiu P, Webster JM, Peeters S, De Ruysscher D, Langen RCJ, Wolfs CJA, Verhaegen F. Towards a fully automatic workflow for investigating the dynamics of lung cancer cachexia during radiotherapy using cone beam computed tomography. Phys Med Biol 2024; 69:205005. [PMID: 39299273 DOI: 10.1088/1361-6560/ad7d5b]
Abstract
Objective. Cachexia is a devastating condition, characterized by involuntary loss of muscle mass with or without loss of adipose tissue mass. It affects more than half of patients with lung cancer, diminishing treatment effects and increasing mortality. Cone-beam computed tomography (CBCT) images, routinely acquired during radiotherapy treatment, might contain valuable anatomical information for monitoring body composition changes associated with cachexia. For this purpose, we propose an automatic artificial intelligence (AI)-based workflow, consisting of CBCT to CT conversion, followed by segmentation of the pectoralis muscles. Approach. Data from 140 stage III non-small cell lung cancer patients were used. Two deep learning models, cycle-consistent generative adversarial network (CycleGAN) and contrastive unpaired translation (CUT), were used for unpaired training of CBCT to CT conversion, to generate synthetic CT (sCT) images. The no-new U-Net (nnU-Net) model was used for automatic pectoralis muscle segmentation. To evaluate tissue segmentation performance in the absence of ground truth labels, an uncertainty metric (UM) based on Monte Carlo dropout was developed and validated. Main results. Both CycleGAN and CUT restored the Hounsfield unit fidelity of the CBCT images compared to the planning CT (pCT) images and visually reduced streaking artefacts. The nnU-Net model achieved a Dice similarity coefficient (DSC) of 0.93, 0.94, and 0.92 for the CT, sCT, and CBCT images, respectively, on an independent test set. The UM showed a high correlation with the DSC, with a correlation coefficient of -0.84 for the pCT dataset and -0.89 for the sCT dataset. Significance. This paper shows a proof-of-concept for automatic AI-based monitoring of the pectoralis muscle area of lung cancer patients during radiotherapy treatment based on CBCT images, which provides an unprecedented time resolution of muscle mass loss during cachexia progression. Ultimately, the proposed workflow could provide valuable information for early intervention in cachexia, ideally resulting in improved cancer treatment outcomes.
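The Monte Carlo dropout idea behind the uncertainty metric is to run the segmentation network several times with dropout left active and summarize the per-voxel disagreement. A hypothetical sketch, aggregating by mean predictive entropy (the exact aggregation used in the paper is not specified here, so this formula is an assumption):

```python
import math
import random

def mc_dropout_uncertainty(forward, n_passes=20):
    """Mean per-voxel predictive entropy over stochastic forward passes.
    `forward()` must return a list of foreground probabilities (one per
    voxel) and should vary between calls when dropout is active."""
    sums = None
    for _ in range(n_passes):
        probs = forward()
        if sums is None:
            sums = [0.0] * len(probs)
        for i, p in enumerate(probs):
            sums[i] += p
    entropy = 0.0
    for s in sums:
        p = s / n_passes  # mean foreground probability for this voxel
        if 0.0 < p < 1.0:
            entropy -= p * math.log(p) + (1.0 - p) * math.log(1.0 - p)
    return entropy / len(sums)

# A "model" that always agrees with itself has zero uncertainty
confident = lambda: [1.0, 0.0, 1.0]
# A "model" that flips voxels at random is highly uncertain
rng = random.Random(0)
noisy = lambda: [float(rng.random() < 0.5) for _ in range(3)]
print(mc_dropout_uncertainty(confident))    # 0.0
print(mc_dropout_uncertainty(noisy) > 0.0)  # True
```

The negative correlation with DSC reported above is the expected behaviour: voxels where the stochastic passes disagree tend to be the ones where the segmentation is wrong.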
Affiliation(s)
- Lars H B A Daenen: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands; Medical Image Analysis group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Wouter R P H van de Worp: Department of Respiratory Medicine, NUTRIM Institute of Nutrition and Translational Research in Metabolism, Maastricht University, Maastricht, The Netherlands
- Behzad Rezaeifar: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Joël de Bruijn: SmART Scientific Solutions BV, Maastricht, The Netherlands
- Peiyu Qiu: Department of Respiratory Medicine, NUTRIM Institute of Nutrition and Translational Research in Metabolism, Maastricht University, Maastricht, The Netherlands
- Justine M Webster: Department of Respiratory Medicine, NUTRIM Institute of Nutrition and Translational Research in Metabolism, Maastricht University, Maastricht, The Netherlands
- Stéphanie Peeters: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Dirk De Ruysscher: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Ramon C J Langen: Department of Respiratory Medicine, NUTRIM Institute of Nutrition and Translational Research in Metabolism, Maastricht University, Maastricht, The Netherlands
- Cecile J A Wolfs: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Frank Verhaegen: Department of Radiation Oncology (Maastro), GROW Research Institute for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, The Netherlands; SmART Scientific Solutions BV, Maastricht, The Netherlands
29
Tariq R, Dilmaghani S. Machine Learning and Radiomics: Changing the Horizon of Crohn's Disease Assessment. Inflamm Bowel Dis 2024; 30:1919-1921. [PMID: 38011655 DOI: 10.1093/ibd/izad284]
Affiliation(s)
- Raseen Tariq: Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, MN, USA
- Saam Dilmaghani: Division of Gastroenterology and Hepatology, Mayo Clinic, Rochester, MN, USA
30
Chen J, Mei J, Li X, Lu Y, Yu Q, Wei Q, Luo X, Xie Y, Adeli E, Wang Y, Lungren MP, Zhang S, Xing L, Lu L, Yuille A, Zhou Y. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med Image Anal 2024; 97:103280. [PMID: 39096845 DOI: 10.1016/j.media.2024.103280]
Abstract
Medical image segmentation is crucial for healthcare, yet convolution-based methods like U-Net face limitations in modeling long-range dependencies. To address this, Transformers designed for sequence-to-sequence predictions have been integrated into medical image segmentation. However, a comprehensive understanding of Transformers' self-attention in U-Net components is lacking. TransUNet, first introduced in 2021, is widely recognized as one of the first models to integrate Transformers into medical image analysis. In this study, we present the versatile framework of TransUNet that encapsulates Transformers' self-attention into two key modules: (1) a Transformer encoder tokenizing image patches from a convolutional neural network (CNN) feature map, facilitating global context extraction, and (2) a Transformer decoder refining candidate regions through cross-attention between proposals and U-Net features. These modules can be flexibly inserted into the U-Net backbone, resulting in three configurations: Encoder-only, Decoder-only, and Encoder+Decoder. TransUNet provides a library encompassing both 2D and 3D implementations, enabling users to easily tailor the chosen architecture. Our findings highlight the encoder's efficacy in modeling interactions among multiple abdominal organs and the decoder's strength in handling small targets like tumors. It excels in diverse medical applications, such as multi-organ segmentation, pancreatic tumor segmentation, and hepatic vessel segmentation. Notably, our TransUNet achieves a significant average Dice improvement of 1.06% and 4.30% for multi-organ segmentation and pancreatic tumor segmentation, respectively, when compared to the highly competitive nnU-Net, and surpasses the top-1 solution in the BraTS2021 challenge. 2D/3D code and models are available at https://github.com/Beckschen/TransUNet and https://github.com/Beckschen/TransUNet-3D, respectively.
Affiliation(s)
- Jieneng Chen: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Jieru Mei: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Xianhang Li: Department of Computer Science and Engineering, University of California, Santa Cruz, CA 95064, USA
- Yongyi Lu: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Qihang Yu: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Qingyue Wei: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Xiangde Luo: Shanghai AI Lab, Xuhui District, Shanghai, 200000, China
- Yutong Xie: The Australian Institute for Machine Learning, University of Adelaide, Australia
- Ehsan Adeli: The School of Medicine, Stanford University, Stanford, CA 94305, USA
- Yan Wang: The East China Normal University, Shanghai 200062, China
- Matthew P Lungren: The School of Medicine, Stanford University, Stanford, CA 94305, USA
- Shaoting Zhang: Shanghai AI Lab, Xuhui District, Shanghai, 200000, China
- Lei Xing: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Le Lu: DAMO Academy, Alibaba Group, New York, NY 10014, USA
- Alan Yuille: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Yuyin Zhou: Department of Computer Science and Engineering, University of California, Santa Cruz, CA 95064, USA
31
Manthe M, Duffner S, Lartizien C. Federated brain tumor segmentation: An extensive benchmark. Med Image Anal 2024; 97:103270. [PMID: 39059241 DOI: 10.1016/j.media.2024.103270]
Abstract
Recently, federated learning has raised increasing interest in the medical image analysis field due to its ability to aggregate multi-center data with privacy-preserving properties. A large number of federated training schemes have been published, which we categorize into global (one final model), personalized (one model per institution), or hybrid (one model per cluster of institutions) methods. However, their applicability to the recently published Federated Brain Tumor Segmentation 2022 dataset has not been explored yet. We propose an extensive benchmark of federated learning algorithms from all three classes on this task. While standard FedAvg already performs very well, we show that some methods from each category can bring a slight performance improvement and potentially limit the final model(s) bias toward the predominant data distribution of the federation. Moreover, we provide a deeper understanding of the behavior of federated learning on this task through alternative ways of distributing the pooled dataset among institutions, namely an independent and identically distributed (IID) setup and a limited-data setup. Our code is available at https://github.com/MatthisManthe/Benchmark_FeTS2022.
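Standard FedAvg, the baseline named above, aggregates client models by a dataset-size-weighted average of their parameters; a minimal sketch on flat parameter lists (the toy weights and client sizes are invented for illustration):

```python
def fedavg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each parameter across
    clients, weighted by the size of each client's local dataset."""
    total = sum(client_sizes)
    agg = [0.0] * len(client_weights[0])
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += w * (n / total)
    return agg

# Two toy "institutions": the second holds 3x more data, so it dominates
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3]))  # [2.5, 3.5]
```

This size weighting is exactly the source of the bias toward the predominant data distribution that the personalized and hybrid methods in the benchmark try to limit.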
Affiliation(s)
- Matthis Manthe: INSA Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France; INSA Lyon, CNRS, Universite Claude Bernard Lyon 1, Centrale Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, F-69621 Villeurbanne, France
- Stefan Duffner: INSA Lyon, CNRS, Universite Claude Bernard Lyon 1, Centrale Lyon, Université Lumière Lyon 2, LIRIS, UMR5205, F-69621 Villeurbanne, France
- Carole Lartizien: INSA Lyon, Universite Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
32
Huijben EMC, Terpstra ML, Galapon AJ, Pai S, Thummerer A, Koopmans P, Afonso M, van Eijnatten M, Gurney-Champion O, Chen Z, Zhang Y, Zheng K, Li C, Pang H, Ye C, Wang R, Song T, Fan F, Qiu J, Huang Y, Ha J, Sung Park J, Alain-Beaudoin A, Bériault S, Yu P, Guo H, Huang Z, Li G, Zhang X, Fan Y, Liu H, Xin B, Nicolson A, Zhong L, Deng Z, Müller-Franzes G, Khader F, Li X, Zhang Y, Hémon C, Boussot V, Zhang Z, Wang L, Bai L, Wang S, Mus D, Kooiman B, Sargeant CAH, Henderson EGA, Kondo S, Kasai S, Karimzadeh R, Ibragimov B, Helfer T, Dafflon J, Chen Z, Wang E, Perko Z, Maspero M. Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report. Med Image Anal 2024; 97:103276. [PMID: 39068830 DOI: 10.1016/j.media.2024.103276]
Abstract
Radiation therapy plays a crucial role in cancer treatment, necessitating precise delivery of radiation to tumors while sparing healthy tissues over multiple days. Computed tomography (CT) is integral for treatment planning, offering electron density data crucial for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) provides superior soft-tissue contrast. Still, it lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for CT planning but presents challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy. It showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
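The gamma pass rates quoted above score each reference dose point by its best combined dose-difference and distance-to-agreement match in the evaluated dose distribution. A simplified 1-D global-gamma sketch (real evaluations are 3-D, and the 3%/3 mm criteria and toy dose profiles here are assumptions for illustration):

```python
import math

def gamma_pass_rate(ref, eva, spacing=1.0, dose_tol=0.03, dist_tol=3.0):
    """Fraction of reference points with gamma <= 1, using a global
    dose-difference criterion (fraction of the max reference dose) and
    a distance-to-agreement criterion (same units as `spacing`)."""
    d_norm = dose_tol * max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(eva):
            dd = (de - dr) / d_norm            # normalized dose difference
            dta = (j - i) * spacing / dist_tol  # normalized distance
            best = min(best, dd * dd + dta * dta)
        if best <= 1.0:  # gamma^2 <= 1 is equivalent to gamma <= 1
            passed += 1
    return passed / len(ref)

# Identical dose profiles pass everywhere
print(gamma_pass_rate([1.0, 2.0, 3.0, 2.0], [1.0, 2.0, 3.0, 2.0]))  # 1.0
```

The challenge's finding that image similarity did not correlate with dose accuracy is why a dose-space metric like this, rather than SSIM alone, is needed to judge clinical usability of sCT.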
Affiliation(s)
- Evi M C Huijben: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Maarten L Terpstra: Radiotherapy Department, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Arthur Jr Galapon: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Suraj Pai: Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Adrian Thummerer: Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Peter Koopmans: Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Manya Afonso: Wageningen University & Research, Wageningen Plant Research, Wageningen, The Netherlands
- Maureen van Eijnatten: Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Oliver Gurney-Champion: Department of Radiology and Nuclear Medicine, Amsterdam UMC, location University of Amsterdam, Amsterdam, The Netherlands; Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Zeli Chen: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Yiwen Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Kaiyi Zheng: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Chuanpu Li: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Haowen Pang: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Chuyang Ye: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Runqi Wang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Tao Song: Fudan University, Shanghai, China
- Fuxin Fan: Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Jingna Qiu: Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Yixing Huang: Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Pengxin Yu: Infervision Medical Technology Co., Ltd., Beijing, China
- Hongbin Guo: Department of Biomedical Engineering, Shantou University, China
- Zhanyao Huang: Department of Biomedical Engineering, Shantou University, China
- Yubo Fan: Department of Computer Science, Vanderbilt University, Nashville, USA
- Han Liu: Department of Computer Science, Vanderbilt University, Nashville, USA
- Bowen Xin: Australian e-Health Research Centre, CSIRO, Herston, Queensland, Australia
- Aaron Nicolson: Australian e-Health Research Centre, CSIRO, Herston, Queensland, Australia
- Lujia Zhong: Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
- Zhiwei Deng: Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
- Xia Li: Center for Proton Therapy, Paul Scherrer Institut, Villigen, Switzerland; Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Ye Zhang: Center for Proton Therapy, Paul Scherrer Institut, Villigen, Switzerland; Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Cédric Hémon: University Rennes 1, CLCC Eugène Marquis, INSERM, LTSI, Rennes, France
- Valentin Boussot: University Rennes 1, CLCC Eugène Marquis, INSERM, LTSI, Rennes, France
- Lu Bai: MedMind Technology Co. Ltd., Beijing, China
- Derk Mus: MRI Guidance BV, Utrecht, The Netherlands
- Satoshi Kasai: Niigata University of Health and Welfare, Niigata, Japan
- Reza Karimzadeh: Image Analysis, Computational Modelling and Geometry, University of Copenhagen, Denmark
- Bulat Ibragimov: Image Analysis, Computational Modelling and Geometry, University of Copenhagen, Denmark
- Jessica Dafflon: Data Science and Sharing Team, Functional Magnetic Resonance Imaging Facility, National Institute of Mental Health, Bethesda, USA; Machine Learning Team, Functional Magnetic Resonance Imaging Facility, National Institute of Mental Health, Bethesda, USA
- Zijie Chen: Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
- Enpei Wang: Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
- Zoltan Perko: Delft University of Technology, Faculty of Applied Sciences, Department of Radiation Science and Technology, Delft, The Netherlands
- Matteo Maspero: Radiotherapy Department, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, University Medical Center Utrecht, Utrecht, The Netherlands
33
Cai L, Chen L, Huang J, Wang Y, Zhang Y. Know your orientation: A viewpoint-aware framework for polyp segmentation. Med Image Anal 2024; 97:103288. [PMID: 39096844 DOI: 10.1016/j.media.2024.103288]
Abstract
Automatic polyp segmentation in endoscopic images is critical for the early diagnosis of colorectal cancer. Despite the availability of powerful segmentation models, two challenges still impede the accuracy of polyp segmentation algorithms. Firstly, during a colonoscopy, physicians frequently adjust the orientation of the colonoscope tip to capture underlying lesions, resulting in viewpoint changes in the colonoscopy images. These variations increase the diversity of polyp visual appearance, posing a challenge for learning robust polyp features. Secondly, polyps often exhibit properties similar to the surrounding tissues, leading to indistinct polyp boundaries. To address these problems, we propose a viewpoint-aware framework named VANet for precise polyp segmentation. In VANet, polyps are emphasized as a discriminative feature and thus can be localized by class activation maps in a viewpoint classification process. With these polyp locations, we design a viewpoint-aware Transformer (VAFormer) to alleviate the erosion of attention by the surrounding tissues, thereby inducing better polyp representations. Additionally, to enhance the polyp boundary perception of the network, we develop a boundary-aware Transformer (BAFormer) to encourage self-attention towards uncertain regions. As a consequence, the combination of the two modules is capable of calibrating predictions and significantly improving polyp segmentation performance. Extensive experiments on seven public datasets across six metrics demonstrate the state-of-the-art results of our method, and VANet can handle colonoscopy images in real-world scenarios effectively. The source code is available at https://github.com/1024803482/Viewpoint-Aware-Network.
Affiliation(s)
- Linghan Cai
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China; Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China.
- Lijiang Chen
- Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China
- Jianhao Huang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yifeng Wang
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
34
Nan Y, Xing X, Wang S, Tang Z, Felder FN, Zhang S, Ledda RE, Ding X, Yu R, Liu W, Shi F, Sun T, Cao Z, Zhang M, Gu Y, Zhang H, Gao J, Wang P, Tang W, Yu P, Kang H, Chen J, Lu X, Zhang B, Mamalakis M, Prinzi F, Carlini G, Cuneo L, Banerjee A, Xing Z, Zhu L, Mesbah Z, Jain D, Mayet T, Yuan H, Lyu Q, Qayyum A, Mazher M, Wells A, Walsh SL, Yang G. Hunting imaging biomarkers in pulmonary fibrosis: Benchmarks of the AIIB23 challenge. Med Image Anal 2024; 97:103253. [PMID: 38968907 DOI: 10.1016/j.media.2024.103253] [Received: 12/20/2023] [Revised: 04/16/2024] [Accepted: 06/22/2024] [Indexed: 07/07/2024]
Abstract
Airway-related quantitative imaging biomarkers (QIBs) are crucial for examination, diagnosis, and prognosis in pulmonary diseases. However, the manual delineation of airway structures remains prohibitively time-consuming. While significant efforts have been made towards enhancing automatic airway modelling, currently available public datasets predominantly concentrate on lung diseases with moderate morphological variations. The intricate honeycombing patterns present in the lung tissues of patients with fibrotic lung disease exacerbate the challenges, often leading to various prediction errors. To address this issue, the 'Airway-Informed Quantitative CT Imaging Biomarker for Fibrotic Lung Disease 2023' (AIIB23) competition was organized in conjunction with the 2023 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). The airway structures were meticulously annotated by three experienced radiologists. Competitors were encouraged to develop automatic airway segmentation models with high robustness and generalization ability, and then to explore the QIB most strongly correlated with mortality. A training set of 120 high-resolution computed tomography (HRCT) scans was publicly released with expert annotations and mortality status. The online validation set incorporated 52 HRCT scans from patients with fibrotic lung disease, and the offline test set included 140 cases from fibrosis and COVID-19 patients. The results showed that the capacity to extract airway trees from patients with fibrotic lung disease could be enhanced by introducing a voxel-wise weighted general union loss and a continuity loss. In addition to the competitive image biomarkers for mortality prediction, a strong airway-derived biomarker (hazard ratio > 1.5, p < 0.0001) was revealed for survival prognostication compared with existing clinical measurements, clinician assessment and AI-based biomarkers.
Affiliation(s)
- Yang Nan
- Bioengineering Department and Imperial-X, Imperial College London, London, UK; Royal Brompton Hospital, London, UK.
- Xiaodan Xing
- Bioengineering Department and Imperial-X, Imperial College London, London, UK.
- Shiyi Wang
- National Heart and Lung Institute, Imperial College London, London, UK
- Zeyu Tang
- Bioengineering Department and Imperial-X, Imperial College London, London, UK
- Federico N Felder
- Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Sheng Zhang
- National Heart and Lung Institute, Imperial College London, London, UK
- Xiaoliu Ding
- Shanghai MicroPort MedBot (Group) Co., Ltd., China
- Ruiqi Yu
- Shanghai MicroPort MedBot (Group) Co., Ltd., China
- Weiping Liu
- Shanghai MicroPort MedBot (Group) Co., Ltd., China
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., China
- Tianyang Sun
- Shanghai United Imaging Intelligence Co., Ltd., China
- Zehong Cao
- Shanghai United Imaging Intelligence Co., Ltd., China
- Minghui Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Yun Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Hanxiao Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, China
- Jian Gao
- Department Computational Biology, School of Life Sciences, Fudan University, Shanghai, China
- Pingyu Wang
- Cambridge International Exam Centre in Shanghai Experimental School, China
- Wen Tang
- InferVision Medical Technology Co., Ltd., China
- Pengxin Yu
- InferVision Medical Technology Co., Ltd., China
- Han Kang
- InferVision Medical Technology Co., Ltd., China
- Junqiang Chen
- Shanghai MediWorks Precision Instruments Co., Ltd, China
- Xing Lu
- Sanmed Biotech Ltd., Zhuhai, China
- Francesco Prinzi
- Department of Biomedicine, University of Palermo, Palermo, Italy
- Gianluca Carlini
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna, Italy
- Lisa Cuneo
- Istituto Italiano di Tecnologia, Nanoscopy, Genova, Italy
- Abhirup Banerjee
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, UK
- Zhaohu Xing
- Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
- Lei Zhu
- Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
- Zacharia Mesbah
- INSA Rouen Normandie, Univ Rouen Normandie, Université Le Havre Normandie, France; Nuclear Medicine Department, Henri Becquerel Cancer Center, Rouen, France
- Dhruv Jain
- INSA Rouen Normandie, Univ Rouen Normandie, Université Le Havre Normandie, France
- Tsiry Mayet
- INSA Rouen Normandie, Univ Rouen Normandie, Université Le Havre Normandie, France
- Hongyu Yuan
- Department of Radiology, Wake Forest University School of Medicine, USA
- Qing Lyu
- Department of Radiology, Wake Forest University School of Medicine, USA
- Abdul Qayyum
- National Heart and Lung Institute, Imperial College London, London, UK
- Moona Mazher
- Department of Computer Science, University College London, United Kingdom
- Athol Wells
- Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Simon Lf Walsh
- Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, London, UK; Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, UK.
35
Verdonk SJE, Willemse J, Zoutenbier VS, Treurniet S, Maillette de Buy Wenniger LJ, Ghyczy EAE, Curro KR, González PJ, Micha D, Eekhoff EMW, de Boer JF. Polarization-sensitive optical coherence tomography and scleral collagen fiber orientation in osteogenesis imperfecta. Exp Eye Res 2024; 247:110048. [PMID: 39151773 DOI: 10.1016/j.exer.2024.110048] [Received: 04/22/2024] [Revised: 07/23/2024] [Accepted: 08/13/2024] [Indexed: 08/19/2024]
Abstract
Osteogenesis imperfecta (OI), a rare genetic connective tissue disorder, primarily arises from pathogenic variants affecting the production or structure of collagen type I. In addition to skeletal fragility, individuals with OI may face an increased risk of developing ophthalmic diseases. This association is believed to stem from the widespread presence of collagen type I throughout various parts of the eye. However, the precise consequences of abnormal collagen type I on different ocular tissues remain unknown. Of particular significance is the sclera, where collagen type I is abundant and crucial for maintaining the structural integrity of the eye. Recent research on healthy individuals has uncovered a unique organizational pattern of collagen fibers within the sclera, characterized by fiber arrangement in both circular and radial layers around the optic nerve head. While the precise function of this organizational pattern remains unclear, it is hypothesized to play a role in providing mechanical support to the optic nerve. The objective of this study is to investigate the impact of abnormal collagen type I on the sclera by assessing the fiber organization near the optic nerve head in individuals with OI and comparing them to healthy individuals. Collagen fiber orientation of the sclera was measured using polarization-sensitive optical coherence tomography (PS-OCT), an extension of the conventional OCT that is sensitive to materials that exhibit birefringence (axial changes in light refraction). Birefringence was quantified and used as imaging contrast to extract collagen fiber orientation as well as the thickness of the radially oriented scleral layer. Three individuals with OI, exhibiting different degrees of disease severity, were assessed and analyzed, along with seventeen healthy individuals. Mean values obtained from individuals with OI were descriptively compared to those of the healthy participant group. 
PS-OCT revealed a similar orientation pattern of scleral collagen fibers around the optic nerve head in OI individuals and healthy individuals. However, two OI participants exhibited reduced mean birefringence of the radially oriented scleral layer compared with the healthy participant group (OI participant 1, oculus dexter et sinister (ODS): 0.34°/μm; OI participant 2, ODS: 0.26°/μm; OI participant 3, OD: 0.29°/μm, OS: 0.28°/μm; healthy participants, ODS: 0.38 ± 0.05°/μm). The radially oriented scleral layer was thinner in all OI participants, although within ±2 standard deviations of the mean observed in healthy individuals (OI participant 1, OD: 101 μm, OS: 104 μm; OI participant 2, OD: 97 μm, OS: 98 μm; OI participant 3, OD: 94 μm, OS: 120 μm; healthy participants, OD: 122.8 ± 13.6 μm, OS: 120.8 ± 15.1 μm). These findings imply abnormalities in collagen organization or composition, underscoring the need for additional research to understand the ocular phenotype in OI.
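To make the quoted units concrete: phase retardation per depth (deg/μm) can be converted to a dimensionless birefringence via Δn = δ·λ/(2π), with δ in rad/μm. The sketch below does this back-of-envelope conversion; the 1040 nm centre wavelength is an assumption (typical for PS-OCT systems) since this abstract does not state the actual wavelength used.

```python
import math

# Convert phase retardation per depth (deg/um) into a dimensionless
# refractive index difference: delta_n = delta * lambda / (2*pi),
# with delta expressed in rad/um. Wavelength is an assumed value.

def delta_n(deg_per_um, wavelength_um=1.04):
    rad_per_um = math.radians(deg_per_um)
    return rad_per_um * wavelength_um / (2 * math.pi)

# Values quoted in the abstract above
for label, val in [("healthy mean", 0.38), ("OI participant 2", 0.26)]:
    print(f"{label}: {delta_n(val):.2e}")
```

With these assumptions, the healthy-group mean of 0.38°/μm corresponds to a birefringence on the order of 10⁻³, a plausible magnitude for collagenous tissue.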
Affiliation(s)
- Sara J E Verdonk
- Department of Endocrinology and Metabolism, Amsterdam University Medical Centers Location Vrije Universiteit, De Boelelaan 1117, Amsterdam, the Netherlands; Rare Bone Disease Center Amsterdam, the Netherlands; Amsterdam Movement Sciences, Amsterdam, the Netherlands
- Joy Willemse
- Department of Physics and Astronomy, LaserLab Amsterdam, Vrije Universiteit, Amsterdam, the Netherlands; Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Vincent S Zoutenbier
- Department of Physics and Astronomy, LaserLab Amsterdam, Vrije Universiteit, Amsterdam, the Netherlands; Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Sanne Treurniet
- Department of Endocrinology and Metabolism, Amsterdam University Medical Centers Location Vrije Universiteit, De Boelelaan 1117, Amsterdam, the Netherlands
- Ebba A E Ghyczy
- Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Katie R Curro
- Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Patrick J González
- Department of Physics and Astronomy, LaserLab Amsterdam, Vrije Universiteit, Amsterdam, the Netherlands; Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
- Dimitra Micha
- Department of Human Genetics, Amsterdam University Medical Centers Location Vrije Universiteit, Amsterdam, the Netherlands
- E Marelise W Eekhoff
- Department of Endocrinology and Metabolism, Amsterdam University Medical Centers Location Vrije Universiteit, De Boelelaan 1117, Amsterdam, the Netherlands; Rare Bone Disease Center Amsterdam, the Netherlands; Amsterdam Movement Sciences, Amsterdam, the Netherlands.
- Johannes F de Boer
- Department of Physics and Astronomy, LaserLab Amsterdam, Vrije Universiteit, Amsterdam, the Netherlands; Department of Ophthalmology, Amsterdam University Medical Centers, Amsterdam, the Netherlands
36
Sanchez T, Esteban O, Gomez Y, Pron A, Koob M, Dunet V, Girard N, Jakab A, Eixarch E, Auzias G, Bach Cuadra M. FetMRQC: A robust quality control system for multi-centric fetal brain MRI. Med Image Anal 2024; 97:103282. [PMID: 39053168 DOI: 10.1016/j.media.2024.103282] [Received: 12/15/2023] [Revised: 06/28/2024] [Accepted: 07/15/2024] [Indexed: 07/27/2024]
Abstract
Fetal brain MRI is becoming an increasingly relevant complement to neurosonography for perinatal diagnosis, allowing fundamental insights into fetal brain development throughout gestation. However, uncontrolled fetal motion and heterogeneity in acquisition protocols lead to data of variable quality, potentially biasing the outcome of subsequent studies. We present FetMRQC, an open-source machine-learning framework for automated image quality assessment and quality control that is robust to domain shifts induced by the heterogeneity of clinical data. FetMRQC extracts an ensemble of quality metrics from unprocessed anatomical MRI and combines them to predict experts' ratings using random forests. We validate our framework on an unprecedentedly large and diverse dataset of more than 1600 manually rated fetal brain T2-weighted images from four clinical centers and 13 different scanners. Our study shows that FetMRQC's predictions generalize well to unseen data while remaining interpretable. FetMRQC is a step towards more robust fetal brain neuroimaging, with the potential to yield new insights into the developing human brain.
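The core idea (scalar image quality metrics combined by a tree ensemble to predict an accept/reject rating) can be sketched at toy scale. The snippet below uses deterministic decision stumps as a stand-in for the paper's random forest, and invented metric values; the real IQMs and trained model live in the authors' framework and are not reproduced here.

```python
# Toy stand-in for the FetMRQC pipeline: a few image quality metrics
# (IQMs) per scan are combined by an ensemble of decision stumps that
# votes "reject" (1) or "accept" (0). All numbers are illustrative.

def stumps(X, y):
    """One stump per (feature, midpoint) split, oriented to fit y."""
    out = []
    for f in range(len(X[0])):
        vals = sorted({x[f] for x in X})
        for a, b in zip(vals, vals[1:]):
            t = (a + b) / 2
            pred = [int(x[f] > t) for x in X]
            acc = sum(p == yy for p, yy in zip(pred, y)) / len(y)
            out.append((f, t, acc < 0.5))  # flip stump if anti-correlated
    return out

def predict(forest, x):
    votes = sum((1 - int(x[f] > t)) if flip else int(x[f] > t)
                for f, t, flip in forest)
    return int(2 * votes >= len(forest))  # majority vote

# rows: [motion index, 1/SNR, blur]; label 1 = "reject scan"
X = [[0.9, 0.20, 0.8], [0.8, 0.15, 0.9], [0.1, 0.05, 0.2], [0.2, 0.06, 0.1]]
y = [1, 1, 0, 0]
forest = stumps(X, y)
print([predict(forest, x) for x in X])  # recovers the training labels here
```

A real random forest additionally bootstraps samples and randomizes feature choice per split; the stumps keep the example self-contained.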
Affiliation(s)
- Thomas Sanchez
- CIBM - Center for Biomedical Imaging, Switzerland; Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland.
- Oscar Esteban
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Yvan Gomez
- BCNatal Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), Universitat de Barcelona, Spain; Department Woman-Mother-Child, CHUV, Lausanne, Switzerland
- Alexandre Pron
- Aix-Marseille Université, CNRS, Institut de Neurosciences de La Timone, Marseilles, France
- Mériam Koob
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Vincent Dunet
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Nadine Girard
- Aix-Marseille Université, CNRS, Institut de Neurosciences de La Timone, Marseilles, France; Service de Neuroradiologie Diagnostique et Interventionnelle, Hôpital Timone, AP-HM, Marseilles, France
- Andras Jakab
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, Zurich, Switzerland; Research Priority Project Adaptive Brain Circuits in Development and Learning (AdaBD), University of Zürich, Zurich, Switzerland
- Elisenda Eixarch
- BCNatal Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), Universitat de Barcelona, Spain; IDIBAPS and CIBERER, Barcelona, Spain
- Guillaume Auzias
- Aix-Marseille Université, CNRS, Institut de Neurosciences de La Timone, Marseilles, France
- Meritxell Bach Cuadra
- CIBM - Center for Biomedical Imaging, Switzerland; Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
37
Cox J, Liu P, Stolte SE, Yang Y, Liu K, See KB, Ju H, Fang R. BrainSegFounder: Towards 3D foundation models for neuroimage segmentation. Med Image Anal 2024; 97:103301. [PMID: 39146701 PMCID: PMC11382327 DOI: 10.1016/j.media.2024.103301] [Received: 01/15/2024] [Revised: 08/05/2024] [Accepted: 08/06/2024] [Indexed: 08/17/2024]
Abstract
The burgeoning field of brain health research increasingly leverages artificial intelligence (AI) to analyze and interpret neuroimaging data. Medical foundation models have shown promise of superior performance and better sample efficiency. This work introduces a novel approach to creating 3-dimensional (3D) medical foundation models for multimodal neuroimage segmentation through self-supervised training. Our approach involves a two-stage pretraining procedure using vision transformers. The first stage encodes anatomical structures in generally healthy brains from a large-scale unlabeled dataset of multimodal brain magnetic resonance imaging (MRI) scans from 41,400 participants; this stage of pretraining focuses on identifying key features such as the shapes and sizes of different brain structures. The second pretraining stage identifies disease-specific attributes, such as the geometric shapes of tumors and lesions and their spatial placement within the brain. This dual-phase methodology significantly reduces the extensive data requirements usually necessary for training AI models for neuroimage segmentation, with the flexibility to adapt to various imaging modalities. We rigorously evaluate our model, BrainSegFounder, using the Brain Tumor Segmentation (BraTS) challenge and Anatomical Tracings of Lesions After Stroke v2.0 (ATLAS v2.0) datasets. BrainSegFounder demonstrates a significant performance gain, surpassing the previous winning solutions that used fully supervised learning. Our findings underscore the impact of scaling up both model complexity and the volume of unlabeled training data derived from generally healthy brains; both factors enhance the accuracy and predictive capability of the model in neuroimage segmentation tasks. Our pretrained models and code are at https://github.com/lab-smile/BrainSegFounder.
Affiliation(s)
- Joseph Cox
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA
- Peng Liu
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA
- Skylar E Stolte
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA
- Yunchao Yang
- University of Florida Research Computing, University of Florida, Gainesville, USA
- Kang Liu
- Department of Physics, University of Florida, Gainesville, FL, 32611, USA
- Kyle B See
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA
- Huiwen Ju
- NVIDIA Corporation, Santa Clara, CA, USA
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA; Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, USA; Department of Electrical and Computer Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA; Department Of Computer Information Science Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA.
38
Bai J, He M, Gao E, Yang G, Zhang C, Yang H, Dong J, Ma X, Gao Y, Zhang H, Yan X, Zhang Y, Cheng J, Zhao G. High-performance presurgical differentiation of glioblastoma and metastasis by means of multiparametric neurite orientation dispersion and density imaging (NODDI) radiomics. Eur Radiol 2024; 34:6616-6628. [PMID: 38485749 PMCID: PMC11399163 DOI: 10.1007/s00330-024-10686-8] [Received: 03/19/2023] [Revised: 02/06/2024] [Accepted: 02/10/2024] [Indexed: 04/19/2024]
Abstract
OBJECTIVES To evaluate the performance of multiparametric neurite orientation dispersion and density imaging (NODDI) radiomics in distinguishing between glioblastoma (Gb) and solitary brain metastasis (SBM). MATERIALS AND METHODS In this retrospective study, NODDI images were curated from 109 patients with Gb (n = 57) or SBM (n = 52). Automatically segmented multiple volumes of interest (VOIs) encompassed the main tumor regions, including necrosis, solid tumor, and peritumoral edema. Radiomics features were extracted for each main tumor region using three NODDI parameter maps. Radiomics models were developed based on these three NODDI parameter maps and their combination to differentiate between Gb and SBM. Additionally, radiomics models were constructed based on morphological magnetic resonance imaging (MRI) and diffusion imaging (diffusion-weighted imaging [DWI]; diffusion tensor imaging [DTI]) for performance comparison. RESULTS The validation dataset results revealed that the performance of any single NODDI parameter map model was inferior to that of the combined NODDI model. In the necrotic regions, the combined NODDI radiomics model exhibited less than ideal discriminative capability (area under the receiver operating characteristic curve [AUC] = 0.701). For peritumoral edema regions, the combined NODDI radiomics model achieved a moderate level of discrimination (AUC = 0.820). Within the solid tumor regions, the combined NODDI radiomics model demonstrated superior performance (AUC = 0.904), surpassing the models of the other VOIs. The comparison showed that the NODDI model outperformed the DWI and DTI models, while the morphological MRI and NODDI models performed similarly. CONCLUSION The NODDI radiomics model showed promising performance for preoperative discrimination between Gb and SBM.
CLINICAL RELEVANCE STATEMENT The NODDI radiomics model showed promising performance for preoperative discrimination between Gb and SBM, and radiomics features can be incorporated into the multidimensional phenotypic features that describe tumor heterogeneity.
KEY POINTS
- The neurite orientation dispersion and density imaging (NODDI) radiomics model showed promising performance for preoperative discrimination between glioblastoma and solitary brain metastasis.
- Compared with other tumor volumes of interest, the NODDI radiomics model based on solid tumor regions performed best in distinguishing the two types of tumors.
- The performance of the single-parameter NODDI model was inferior to that of the combined-parameter NODDI model.
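The AUCs reported above have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive case (e.g., Gb) receives a higher model score than a randomly chosen negative case (SBM), with ties counted as one half. A minimal sketch with synthetic scores (not data from the study):

```python
# Rank-based AUC: fraction of (positive, negative) score pairs where the
# positive case is ranked higher, ties counted as 0.5.

def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

gb_scores = [0.9, 0.8, 0.7, 0.4]   # hypothetical model outputs, Gb cases
sbm_scores = [0.6, 0.5, 0.3]       # hypothetical model outputs, SBM cases
print(round(auc(gb_scores, sbm_scores), 3))  # 10 of 12 pairs ranked correctly
```

This pairwise formulation is equivalent to integrating the ROC curve and makes clear why an AUC of 0.904 indicates strong, though imperfect, separation of the two tumor types.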
Affiliation(s)
- Jie Bai
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China
- Mengyang He
- School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Eryuan Gao
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China
- Guang Yang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Chengxiu Zhang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Hongxi Yang
- Shanghai Key Laboratory of Magnetic Resonance, East China Normal University, Shanghai, 200062, China
- Jie Dong
- School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, 450046, China
- Xiaoyue Ma
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China
- Yufei Gao
- School of Cyber Science and Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Huiting Zhang
- MR Research Collaboration, Siemens Healthineers, Wuhan, 201318, China
- Xu Yan
- MR Research Collaboration, Siemens Healthineers, Wuhan, 201318, China
- Yong Zhang
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China
- Jingliang Cheng
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China
- Guohua Zhao
- Department of Magnetic Resonance Imaging, the First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, China.
- Henan Engineering Research Center of Medical Imaging Intelligent Diagnosis and Treatment, Zhengzhou, 450052, China.
39
Tian S, Liu Y, Mao X, Xu X, He S, Jia L, Zhang W, Peng P, Wang J. A multicenter study on deep learning for glioblastoma auto-segmentation with prior knowledge in multimodal imaging. Cancer Sci 2024; 115:3415-3425. [PMID: 39119927 PMCID: PMC11447882 DOI: 10.1111/cas.16304] [Received: 05/23/2024] [Revised: 07/19/2024] [Accepted: 07/22/2024] [Indexed: 08/10/2024]
Abstract
Accurate segmentation of glioblastomas (GBMs) is crucial for a precise radiotherapy plan. However, the traditional manual segmentation process is labor-intensive and heavily reliant on the experience of radiation oncologists. In this retrospective study, a novel auto-segmentation method is proposed to address these problems. To assess the method's applicability across diverse scenarios, we developed and evaluated it using a cohort of 148 eligible patients drawn from four multicenter datasets, with retrospectively collected noncontrast CT, multisequence MRI scans, and corresponding medical records. All patients had histologically confirmed high-grade glioma (HGG). We propose a deep learning-based method (PKMI-Net) for automatically segmenting the gross tumor volume (GTV) and clinical target volumes (CTV1 and CTV2) of GBMs by leveraging prior knowledge from multimodal imaging. On an 11-patient test set, PKMI-Net segmented the GTV, CTV1, and CTV2 with high accuracy, achieving Dice similarity coefficients (DSC) of 0.94, 0.95, and 0.92; 95% Hausdorff distances (HD95) of 2.07, 1.18, and 3.95 mm; average surface distances (ASD) of 0.69, 0.39, and 1.17 mm; and relative volume differences (RVD) of 5.50%, 9.68%, and 3.97%, respectively. Moreover, the vast majority of the GTV, CTV1, and CTV2 contours produced by PKMI-Net are clinically acceptable and require no revision. In our multicenter evaluation, PKMI-Net exhibited consistent and robust generalizability across the various datasets, demonstrating its effectiveness in automatically segmenting GBMs. The proposed method, which uses prior knowledge from multimodal imaging, can improve the contouring accuracy of GBMs and holds the potential to improve the quality and efficiency of GBM radiotherapy.
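The DSC values quoted above follow the standard overlap definition DSC = 2|A ∩ B| / (|A| + |B|) over foreground voxels. A minimal sketch with toy masks (not data from the study):

```python
# Dice similarity coefficient between two voxel masks, represented here
# as sets of (x, y, z) coordinates. DSC = 2|A ∩ B| / (|A| + |B|).

def dice(pred, truth):
    """Return DSC; defined as 1.0 when both masks are empty."""
    if not pred and not truth:
        return 1.0
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

truth = {(x, y, 0) for x in range(10) for y in range(10)}     # 100 voxels
pred = {(x, y, 0) for x in range(1, 10) for y in range(10)}   # 90 voxels, all inside truth
print(round(dice(pred, truth), 3))  # 2*90 / (90+100) ≈ 0.947
```

DSC rewards overlap but is insensitive to where boundary errors occur, which is why the study also reports boundary metrics (HD95, ASD) and a volume metric (RVD).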
Affiliation(s)
- Suqing Tian
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Yinglong Liu
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Xinhui Mao
- Radiotherapy Center, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
- Xin Xu
- Department of Radiation Oncology, The Second Affiliated Hospital of Shandong First Medical University, Tai'an, China
- Shumeng He
- Intelligent Radiation Treatment Laboratory, United Imaging Research Institute of Intelligent Imaging, Beijing, China
- Lecheng Jia
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Peng Peng
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Junjie Wang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
40
Salimi Y, Hajianfar G, Mansouri Z, Sanaat A, Amini M, Shiri I, Zaidi H. Organomics: A Concept Reflecting the Importance of PET/CT Healthy Organ Radiomics in Non-Small Cell Lung Cancer Prognosis Prediction Using Machine Learning. Clin Nucl Med 2024; 49:899-908. [PMID: 39192505 DOI: 10.1097/rlu.0000000000005400] [Indexed: 08/29/2024]
Abstract
PURPOSE Non-small cell lung cancer is the most common subtype of lung cancer. Patient survival prediction using machine learning (ML) and radiomics analysis has been shown to provide promising outcomes. However, most studies reported in the literature focus on information extracted from malignant lesions. This study explores the relevance and added value of information extracted from healthy organs, in addition to tumoral tissue, using ML algorithms. PATIENTS AND METHODS This study included PET/CT images of 154 patients collected from available online databases. The gross tumor volume and 33 volumes of interest defined on healthy organs were segmented using nnU-Net deep learning-based segmentation. Subsequently, 107 radiomic features were extracted from the PET and CT images (Organomics). Clinical information was combined with PET and CT radiomics from organs and gross tumor volumes, considering 19 different input combinations. Finally, different feature selection (FS; 5 methods) and ML (6 algorithms) approaches were tested in a 3-fold data split cross-validation scheme. Model performance was quantified with the concordance index (C-index). RESULTS For an input combination of all radiomics information, most of the selected features belonged to PET Organomics and CT Organomics. The highest C-index (0.68) was achieved using the univariate C-index FS method with a random survival forest ML model on CT Organomics + PET Organomics input, as well as the minimum depth FS method with a CoxPH ML model on PET Organomics input. Of the 17 combinations with a C-index higher than 0.65, 16 used Organomics from PET or CT images as input. CONCLUSIONS The selected features and C-indices demonstrate that the additional information extracted from healthy organs in both PET and CT imaging modalities improves ML performance. Organomics could be a step toward exploiting all the information available from multimodality medical images, contributing to the emerging field of digital twins in health care.
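The C-index used to score these survival models measures how well predicted risks rank patients by survival time. As an illustrative sketch only (not the authors' pipeline, which pairs feature selection methods with models such as random survival forests), Harrell's concordance index can be computed as:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering agrees with the observed survival ordering.
    times: observed survival/censoring times
    events: 1 = event observed, 0 = censored
    risks: model output, higher = worse predicted prognosis
    (Ties in time are ignored in this toy version.)"""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if the earlier time is an observed event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0      # correctly ranked
                elif risks[i] == risks[j]:
                    concordant += 0.5      # tied risks count half
    return concordant / comparable

# Risks perfectly anti-correlated with survival time give C = 1.0;
# random risks hover near 0.5 (the paper's best models reach about 0.68).
c = concordance_index([2, 4, 6, 8], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.2])
```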
Affiliation(s)
- Yazdan Salimi, Ghasem Hajianfar, Zahra Mansouri, Amirhosein Sanaat, Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
41
de Boisredon d'Assier MA, Portafaix A, Vorontsov E, Le WT, Kadoury S. Image-level supervision and self-training for transformer-based cross-modality tumor segmentation. Med Image Anal 2024; 97:103287. [PMID: 39111265] [DOI: 10.1016/j.media.2024.103287]
Abstract
Deep neural networks are commonly used for automated medical image segmentation, but models frequently struggle to generalize across imaging modalities. This issue is particularly problematic given the limited availability of annotated data in both the target and the source modality, making these models difficult to deploy at scale. To overcome these challenges, we propose a new semi-supervised training strategy called MoDATTS. Our approach is designed for accurate cross-modality 3D tumor segmentation on unpaired bi-modal datasets. An image-to-image translation strategy between modalities is used to produce synthetic but annotated images and labels in the desired modality and to improve generalization to the unannotated target modality. We also use powerful vision transformer architectures for both the image translation (TransUNet) and segmentation (Medformer) tasks, and introduce an iterative self-training procedure in the latter to further close the domain gap between modalities, thereby also training on unlabeled images in the target modality. MoDATTS additionally makes it possible to exploit image-level labels with a semi-supervised objective that encourages the model to disentangle tumors from the background. This semi-supervised methodology helps in particular to maintain downstream segmentation performance when pixel-level labels are also scarce in the source modality dataset, or when the source dataset contains healthy controls. The proposed model outperforms methods from participating teams in the CrossMoDA 2022 vestibular schwannoma (VS) segmentation challenge, with a top Dice score of 0.87±0.04 for VS segmentation. MoDATTS also yields consistent Dice-score improvements over baselines on a cross-modality adult brain glioma segmentation task composed of four different contrasts from the BraTS 2020 challenge dataset, reaching 95% of a target-supervised model's performance when no target-modality annotations are available. This rises to 99% and 100% of that maximum performance when 20% and 50% of the target data, respectively, are additionally annotated, further demonstrating that MoDATTS can be leveraged to reduce the annotation burden.
Affiliation(s)
- Aloys Portafaix: Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- William Trung Le: Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
- Samuel Kadoury: Polytechnique Montreal, Montreal, QC, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montreal, QC, Canada
42
Spaanderman D, Hakkesteegt S, Hanff D, Schut A, Schiphouwer L, Vos M, Messiou C, Doran S, Jones R, Hayes A, Nardo L, Abdelhafez Y, Moawad A, Elsayes K, Lee S, Link T, Niessen W, van Leenders G, Visser J, Klein S, Grünhagen D, Verhoef C, Starmans M. Multi-center external validation of an automated method segmenting and differentiating atypical lipomatous tumors from lipomas using radiomics and deep-learning on MRI. EClinicalMedicine 2024; 76:102802. [PMID: 39351025] [PMCID: PMC11440245] [DOI: 10.1016/j.eclinm.2024.102802]
Abstract
Background As differentiating between lipomas and atypical lipomatous tumors (ALTs) based on imaging is challenging and requires biopsies, radiomics has been proposed to aid the diagnosis. This study aimed to externally and prospectively validate a radiomics model differentiating between lipomas and ALTs on MRI in three large, multi-center cohorts, and to extend it with automatic and minimally interactive segmentation methods to increase clinical feasibility. Methods Three study cohorts were formed: two for external validation, containing data from medical centers in the United States (US, collected 2008-2018) and the United Kingdom (UK, collected 2011-2017), and one for prospective validation, consisting of data collected in the Netherlands (NL) from 2020 until 2021. Patient characteristics, MDM2 amplification status, and MRI scans were collected. An automatic segmentation method was developed to segment all tumors on T1-weighted MRI scans of the validation cohorts. Segmentations were subsequently quality scored; in case of insufficient quality, an interactive segmentation method was used. Radiomics performance was evaluated for all cohorts and compared to two radiologists. Findings The validation cohorts included 150 (54% ALT), 208 (37% ALT), and 86 (28% ALT) patients from the US, UK, and NL, respectively. Of the 444 cases, 78% were automatically segmented; for 22%, interactive segmentation was necessary due to insufficient quality, with only 3% of all patients requiring manual adjustment. External validation resulted in an AUC of 0.74 (95% CI: 0.66, 0.82) on US data and 0.86 (0.80, 0.92) on UK data. Prospective validation resulted in an AUC of 0.89 (0.83, 0.96). The radiomics model performed similarly to the two radiologists (US: 0.79 and 0.76; UK: 0.86 and 0.86; NL: 0.82 and 0.85). Interpretation The radiomics model, extended with automatic and minimally interactive segmentation methods, accurately differentiated between lipomas and ALTs in two large, multi-center external cohorts and in prospective validation, performing similarly to expert radiologists and possibly limiting the need for invasive diagnostics. Funding Hanarth fonds.
Affiliation(s)
- D.J. Spaanderman, D.F. Hanff, L.M. Schiphouwer, J.J. Visser, S. Klein: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- S.N. Hakkesteegt, A.R.W. Schut, M. Vos, D.J. Grünhagen, C. Verhoef: Department of Surgical Oncology and Gastrointestinal Surgery, Erasmus MC Cancer Institute, Erasmus University Medical Center, Rotterdam, the Netherlands
- C. Messiou, S.J. Doran, R.L. Jones, A.J. Hayes: The Royal Marsden Hospital and The Institute of Cancer Research, London, United Kingdom
- L. Nardo, Y.G. Abdelhafez: Department of Radiology, UC Davis Health, Sacramento, CA, USA
- A.W. Moawad: Department of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, USA; Department of Diagnostic Radiology, Mercy Catholic Medical Center, Darby, PA, USA
- K.M. Elsayes: Department of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- S. Lee: Department of Radiological Sciences, University of California, Irvine, CA, USA
- T.M. Link: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- W.J. Niessen: Faculty of Medical Sciences, University of Groningen, Groningen, the Netherlands
- M.P.A. Starmans: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands; Department of Pathology, Erasmus MC, Rotterdam, the Netherlands
43
Qu L, Zhao S, Huang Y, Ye X, Wang K, Liu Y, Liu X, Mao H, Hu G, Chen W, Guo C, He J, Tan J, Li H, Chen L, Zhao W. Self-inspired learning for denoising live-cell super-resolution microscopy. Nat Methods 2024; 21:1895-1908. [PMID: 39261639] [DOI: 10.1038/s41592-024-02400-9]
Abstract
Every collected photon is precious in live-cell super-resolution (SR) microscopy. Here, we describe a data-efficient, deep learning-based denoising solution to improve diverse SR imaging modalities. The method, SN2N, is a Self-inspired Noise2Noise module with self-supervised data generation and a self-constrained learning process. SN2N is fully competitive with supervised learning methods and circumvents the need for large training sets and clean ground truth, requiring only a single noisy frame for training. We show that SN2N improves photon efficiency by one to two orders of magnitude and is compatible with multiple imaging modalities for volumetric, multicolor, time-lapse SR microscopy. We further integrated SN2N into different SR reconstruction algorithms to effectively mitigate image artifacts. We anticipate SN2N will enable improved live-SR imaging and inspire further advances.
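SN2N's central trick is generating a training pair from a single noisy frame. A toy stand-in for that idea (not the paper's actual sampling scheme) is to split one frame into two sub-sampled views whose noise realizations are approximately independent, then train one view to predict the other, Noise2Noise-style:

```python
def split_noisy_frame(img):
    """Split one noisy 2D frame (list of rows) into two half-resolution
    views drawn from disjoint pixel sets. Because the views sample
    different pixels, their noise is (approximately) independent, so one
    view can serve as the training target for the other."""
    a = [[img[i][j] for j in range(0, len(img[0]), 2)]
         for i in range(0, len(img), 2)]
    b = [[img[i][j] for j in range(1, len(img[0]), 2)]
         for i in range(1, len(img), 2)]
    return a, b

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
view_a, view_b = split_noisy_frame(frame)  # a training pair, no clean ground truth needed
```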
Affiliation(s)
- Liying Qu, Yuanyuan Huang, Yuzhen Liu: Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shiqun Zhao, Xianxin Ye, Kunhao Wang, Changliang Guo: State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China
- Xianming Liu: School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Heng Mao: School of Mathematical Sciences, Peking University, Beijing, China
- Guangwei Hu: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
- Wei Chen: School of Mechanical Science and Engineering, Advanced Biomedical Imaging Facility, Huazhong University of Science and Technology, Wuhan, China
- Jiaye He: National Innovation Center for Advanced Medical Devices, Shenzhen, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiubin Tan: Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology, Harbin Institute of Technology, Harbin, China
- Haoyu Li, Weisong Zhao: Innovation Photonics and Imaging Center, School of Instrumentation Science and Engineering; Key Laboratory of Ultra-precision Intelligent Instrumentation of Ministry of Industry and Information Technology; Frontiers Science Center for Matter Behave in Space Environment; Key Laboratory of Micro-Systems and Micro-Structures Manufacturing of Ministry of Education, Harbin Institute of Technology, Harbin, China
- Liangyi Chen: State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing, China; PKU-IDG/McGovern Institute for Brain Research, Beijing, China; Beijing Academy of Artificial Intelligence, Beijing, China
44
Huang J, Luo Y, Guo Y, Li W, Wang Z, Liu G, Yang G. Accurate segmentation of intracellular organelle networks using low-level features and topological self-similarity. Bioinformatics 2024; 40:btae559. [PMID: 39302662] [PMCID: PMC11467052] [DOI: 10.1093/bioinformatics/btae559]
Abstract
MOTIVATION Intracellular organelle networks (IONs) such as the endoplasmic reticulum (ER) network and the mitochondrial (MITO) network serve crucial physiological functions. The morphology of these networks plays a critical role in mediating their functions. Accurate image segmentation is required for analyzing the morphology and topology of these networks for applications such as molecular mechanism analysis and drug target screening. So far, however, progress has been hindered by their structural complexity and density. RESULTS In this study, we first establish a rigorous performance baseline for accurate segmentation of these organelle networks from fluorescence microscopy images by optimizing a baseline U-Net model. We then develop the multi-resolution encoder (MRE) and the hierarchical fusion loss (Lhf) based on two inductive components, namely low-level features and topological self-similarity, to assist the model in better adapting to the task of segmenting IONs. Empowered by MRE and Lhf, both U-Net and Pyramid Vision Transformer (PVT) outperform competing state-of-the-art models such as U-Net++, HR-Net, nnU-Net, and TransUNet on custom datasets of the ER network and the MITO network, as well as on public datasets of another biological network, the retinal blood vessel network. In addition, integrating MRE and Lhf with models such as HR-Net and TransUNet also enhances their segmentation performance. These experimental results confirm the generalization capability and potential of our approach. Furthermore, accurate segmentation of the ER network enables analysis that provides novel insights into its dynamic morphological and topological properties. AVAILABILITY AND IMPLEMENTATION Code and data are openly accessible at https://github.com/cbmi-group/MRE.
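Segmentation quality in comparisons like the one above is typically scored with the Dice coefficient. A minimal reference implementation over flattened binary masks (illustrative only, not the authors' evaluation code):

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks given as flat 0/1 lists.
    eps guards against division by zero when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

# One of the two predicted foreground pixels overlaps the single target pixel:
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1), about 0.667
```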
Affiliation(s)
- Jiaxing Huang, Yaoru Luo, Yuanhao Guo, Wenjing Li, Zichen Wang, Guole Liu, Ge Yang: State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
45
Becker J, Woźnicki P, Decker JA, Risch F, Wudy R, Kaufmann D, Canalini L, Wollny C, Scheurig-Muenkler C, Kroencke T, Bette S, Schwarz F. Radiomics signature for automatic hydronephrosis detection in unenhanced Low-Dose CT. Eur J Radiol 2024; 179:111677. [PMID: 39178684] [DOI: 10.1016/j.ejrad.2024.111677]
Abstract
PURPOSE To investigate the diagnostic performance of an automatic pipeline for detecting hydronephrosis from the kidney parenchyma on unenhanced low-dose CT of the abdomen. METHODS This retrospective study included 95 patients with confirmed unilateral hydronephrosis on unenhanced low-dose abdominal CT. Data were split into training (n = 67) and test (n = 28) cohorts. Both kidneys of each case were included in further analyses, with the kidney without hydronephrosis serving as control. Using the training cohort, we developed a pipeline consisting of a deep-learning model for automatic segmentation of the kidney parenchyma (a convolutional neural network based on the nnU-Net architecture) and a radiomics classifier to detect hydronephrosis. The models were assessed using standard classification metrics, such as area under the ROC curve (AUC), sensitivity, and specificity, as well as semantic segmentation metrics, including the Dice coefficient and Jaccard index. RESULTS Using manual segmentation of the kidney parenchyma, hydronephrosis was detected with an AUC of 0.84, a sensitivity of 75%, a specificity of 82%, a PPV of 81%, and an NPV of 77%. Automatic kidney segmentation achieved mean Dice scores of 0.87 and 0.91 for the right and left kidney, respectively, and yielded an AUC of 0.83, a sensitivity of 86%, a specificity of 64%, a PPV of 71%, and an NPV of 82%. CONCLUSION Our proposed radiomics signature with automatic kidney parenchyma segmentation allows accurate hydronephrosis detection on unenhanced low-dose abdominal CT scans, independently of a widened renal pelvis. This method could be used in clinical routine to highlight hydronephrosis to radiologists and clinicians, especially in patients with concurrent parapelvic cysts, and might reduce the time and costs associated with diagnosing hydronephrosis.
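The sensitivity, specificity, PPV, and NPV figures quoted above all derive from the confusion matrix. A minimal sketch of how they are computed from binary labels (illustrative only, not the study's code):

```python
def confusion_metrics(y_true, y_pred):
    """Return (sensitivity, specificity, PPV, NPV) from binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)  # recall on the diseased class
    specificity = tn / (tn + fp)  # recall on the healthy class
    ppv = tp / (tp + fp)          # precision of positive calls
    npv = tn / (tn + fn)          # precision of negative calls
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = confusion_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1])
```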
Affiliation(s)
- Judith Becker, Josua A Decker, Franka Risch, Ramona Wudy, David Kaufmann, Luca Canalini, Claudia Wollny, Christian Scheurig-Muenkler, Stefanie Bette: Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany
- Piotr Woźnicki: Diagnostic and Interventional Radiology, University Hospital Würzburg, Josef-Schneider-Straße 2, 97080 Würzburg, Germany
- Thomas Kroencke: Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Stenglinstr. 2, 86156 Augsburg, Germany; Centre for Advanced Analytics and Predictive Sciences (CAAPS), University of Augsburg, Universitätsstr. 2, 86159 Augsburg, Germany
- Florian Schwarz: Centre for Diagnostic Imaging and Interventional Therapy, Donau-Isar-Klinikum, Perlasberger Straße 41, 94469 Deggendorf, Germany; Medical Faculty, Ludwig Maximilian University Munich, Bavariaring 19, 80336 Munich, Germany
46
Lin J, Tao H, Yuan X, Yang J. ASO Author Reflections: Radical Resection After Neoadjuvant Therapy for Intrahepatic Cholangiocarcinoma-Emerging Technologies in Comprehensive Treatment Strategies. Ann Surg Oncol 2024; 31:6573-6575. [PMID: 39048906] [DOI: 10.1245/s10434-024-15896-4]
Affiliation(s)
- Jinyu Lin: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Haisu Tao: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
- Xiangdong Yuan: Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Jian Yang: Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Guangdong Provincial Clinical and Engineering Center of Digital Medicine, Guangzhou, China
47
Fernandez V, Pinaya WHL, Borges P, Graham MS, Tudosiu PD, Vercauteren T, Cardoso MJ. Generating multi-pathological and multi-modal images and labels for brain MRI. Med Image Anal 2024; 97:103278. [PMID: 39059240] [DOI: 10.1016/j.media.2024.103278]
Abstract
The last few years have seen a boom in using generative models to augment real datasets, as synthetic data can effectively model real data distributions and provide privacy-preserving, shareable datasets that can be used to train deep learning models. However, most of these methods are 2D and provide synthetic datasets that come, at most, with categorical annotations. The generation of paired images and segmentation samples that can be used in downstream, supervised segmentation tasks remains fairly uncharted territory. This work proposes a two-stage generative model capable of producing 2D and 3D semantic label maps and corresponding multi-modal images. We use a latent diffusion model for label synthesis and a VAE-GAN for semantic image synthesis. Synthetic datasets provided by this model are shown to work in a wide variety of segmentation tasks, supporting small, real datasets or fully replacing them while maintaining good performance. We also demonstrate its ability to improve downstream performance on out-of-distribution data.
Affiliation(s)
- Virginia Fernandez, Walter Hugo Lopez Pinaya, Pedro Borges, Mark S Graham, Petru-Daniel Tudosiu, Tom Vercauteren, M Jorge Cardoso: Department of Biomedical Engineering and Imaging Sciences, King's College London, Strand, London, WC2R 2LS, United Kingdom
48
Wang B, Ju M, Zhang X, Yang Y, Tian X. Dual-consistency guidance semi-supervised medical image segmentation with low-level detail feature augmentation. Comput Biol Med 2024; 181:109046. [PMID: 39205345] [DOI: 10.1016/j.compbiomed.2024.109046]
Abstract
In deep-learning-based medical image segmentation tasks, semi-supervised learning can greatly reduce a model's dependence on labeled data. However, existing semi-supervised medical image segmentation methods face the challenges of ambiguous object boundaries and small amounts of available data, which limit the application of segmentation models in clinical practice. To solve these problems, we propose a novel semi-supervised medical image segmentation network based on dual-consistency guidance, which extracts reliable semantic information from unlabeled data over a large spatial and dimensional range in a simple and effective manner, improving the contribution of unlabeled data to model accuracy. Specifically, we construct a split weak/strong consistency constraint strategy to capture data-level and feature-level consistencies from unlabeled data and improve the learning efficiency of the model. Furthermore, we design a simple multi-scale low-level detail feature enhancement module to improve the extraction of low-level detail contextual information, which is crucial for accurately locating object contours and avoiding the omission of small objects in semi-supervised dense prediction tasks. Quantitative and qualitative evaluations on six challenging datasets demonstrate that our model outperforms other semi-supervised segmentation models in segmentation accuracy and offers advantages in generalizability. Code is available at https://github.com/0Jmyy0/SSMIS-DC.
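The data-level half of a weak/strong consistency constraint can be sketched as a simple agreement penalty between the model's predictions on two augmented views of the same unlabeled image. This is a generic stand-in (mean squared disagreement), not the paper's actual loss formulation:

```python
def consistency_loss(pred_weak, pred_strong):
    """Mean squared disagreement between per-pixel probabilities predicted
    on a weakly and a strongly augmented view of the same unlabeled image.
    Minimizing this pushes the model toward augmentation-invariant outputs,
    letting unlabeled data contribute a training signal."""
    assert len(pred_weak) == len(pred_strong)
    return sum((w - s) ** 2 for w, s in zip(pred_weak, pred_strong)) / len(pred_weak)

loss = consistency_loss([0.9, 0.1, 0.8], [0.7, 0.2, 0.8])  # small when the views agree
```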
Affiliation(s)
- Bing Wang: College of Mathematics and Information Science, Hebei University, Wusi Road 180, Baoding, 071000, Hebei, China; Hebei Key Laboratory of Machine Learning and Computational Intelligence, Hebei University, Wusi Road 180, Baoding, 071000, Hebei, China
- Mengyi Ju: College of Mathematics and Information Science, Hebei University, Wusi Road 180, Baoding, 071000, Hebei, China
- Xin Zhang: College of Electronic Information Engineering, Hebei University, Qiyi Road 2666, Baoding, 071000, Hebei, China
- Ying Yang: Hebei University Affiliated Hospital, Hebei University, Wusi Road 180, Baoding, 071000, Hebei, China
- Xuedong Tian: College of Cyber Security and Computer, Hebei University, Wusi Road 180, Baoding, 071000, Hebei, China
49
Wang R, Mu Z, Wang J, Wang K, Liu H, Zhou Z, Jiao L. ASF-LKUNet: Adjacent-scale fusion U-Net with large kernel for multi-organ segmentation. Comput Biol Med 2024; 181:109050. [PMID: 39205343] [DOI: 10.1016/j.compbiomed.2024.109050]
Abstract
In the multi-organ segmentation of medical images, challenging issues include complex backgrounds, blurred boundaries between organs, and large differences in organ volume. Due to the local receptive fields of conventional convolution operations, it is difficult to obtain desirable results by applying them directly to multi-organ segmentation. Transformer-based models provide global information, but their high computational demands impose a significant dependency on hardware. Meanwhile, depthwise convolutions with large kernels can capture global information with lower computational requirements. Therefore, to leverage a large receptive field while reducing model complexity, we propose a novel CNN-based approach, the adjacent-scale fusion U-Net with large kernel (ASF-LKUNet), for multi-organ segmentation. We use a u-shaped encoder-decoder as the base architecture of ASF-LKUNet. In the encoder path, we design a large kernel residual block that combines large and small kernels and can simultaneously capture global and local features. Furthermore, for the first time, we propose an adjacent-scale fusion mechanism and a large kernel GRN channel attention that incorporate low-level details with high-level semantics via adjacent-scale features and then adaptively focus on the more global and meaningful channel information. Extensive experiments and interpretability analyses are conducted on the Synapse multi-organ dataset (Synapse) and the ACDC cardiac multi-structure dataset (ACDC). ASF-LKUNet achieves 88.41% and 89.45% DSC scores on the Synapse and ACDC datasets, respectively, with 17.96M parameters and 29.14 GFLOPs. These results show that our method achieves superior performance with favorably lower complexity against ten competing approaches. Code and the trained models have been released on GitHub.
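The parameter savings behind large-kernel depthwise convolution are easy to verify by counting weights: a depthwise k x k filter per channel plus a 1 x 1 pointwise mix costs far less than a dense k x k convolution while keeping the same receptive field. This is generic back-of-the-envelope arithmetic, not the paper's exact block design:

```python
def conv2d_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """One depthwise k x k filter per input channel, plus a 1 x 1
    pointwise convolution to mix channels."""
    return c_in * k * k + c_in * c_out

# A 31 x 31 kernel over 64 channels: the dense convolution needs ~3.9M
# weights, the depthwise-separable version only ~66K, yet both cover a
# 31 x 31 spatial receptive field.
dense = conv2d_params(64, 64, 31)
separable = depthwise_separable_params(64, 64, 31)
```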
Affiliation(s)
- Rongfang Wang
- School of Artificial Intelligence, Xidian University, China.
- Zhaoshan Mu
- School of Artificial Intelligence, Xidian University, China
- Jing Wang
- Department of Radiation Oncology, UTSW, United States of America
- Kai Wang
- Department of Radiation Oncology, UMMC, United States of America
- Hui Liu
- Department of Biostatistics Data Science, KUMC, United States of America
- Zhiguo Zhou
- Department of Biostatistics Data Science, KUMC, United States of America
- Licheng Jiao
- School of Artificial Intelligence, Xidian University, China
|
50
|
Gu Y, Wu Q, Tang H, Mai X, Shu H, Li B, Chen Y. LeSAM: Adapt Segment Anything Model for Medical Lesion Segmentation. IEEE J Biomed Health Inform 2024; 28:6031-6041. [PMID: 38809720 DOI: 10.1109/jbhi.2024.3406871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/31/2024]
Abstract
The Segment Anything Model (SAM) is a foundational model that has demonstrated impressive results in the field of natural image segmentation. However, its performance remains suboptimal for medical image segmentation, particularly when delineating lesions with irregular shapes and low contrast. This can be attributed to the significant domain gap between medical images and the natural images on which SAM was originally trained. In this paper, we propose an adaptation of SAM specifically tailored for lesion segmentation, termed LeSAM. LeSAM first learns medical-specific domain knowledge through an efficient adaptation module and integrates it with the general knowledge obtained from the pre-trained SAM. Subsequently, we leverage this merged knowledge to generate lesion masks using a modified mask decoder implemented as a lightweight U-shaped network, a design that enables better delineation of lesion boundaries while facilitating ease of training. We conduct comprehensive experiments on various lesion segmentation tasks involving different image modalities such as CT scans, MRI scans, ultrasound images, dermoscopic images, and endoscopic images. Our proposed method achieves superior performance compared to previous state-of-the-art methods in 8 of the 12 lesion segmentation tasks while achieving competitive performance on the remaining 4. Additionally, ablation studies are conducted to validate the effectiveness of our proposed adaptation modules and modified decoder.
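The abstract does not specify LeSAM's "efficient adaptation module"; a common pattern for such parameter-efficient adapters on a frozen backbone is a zero-initialized residual bottleneck projection, sketched below with hypothetical dimensions (feature width d=256, rank r=16) rather than LeSAM's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 256, 16  # hypothetical feature width and bottleneck rank

# Only these two small projections would be trained; the backbone is frozen.
W_down = rng.normal(0.0, 0.02, (r, d))
W_up = np.zeros((d, r))  # zero-init: the adapter starts as the identity

def adapter(x):
    """Residual bottleneck adapter: x + W_up @ relu(W_down @ x)."""
    h = np.maximum(W_down @ x, 0.0)
    return x + W_up @ h

x = rng.normal(size=d)
y = adapter(x)  # identical to x until W_up is trained away from zero
```

Zero-initializing the up-projection means the adapted model reproduces the pre-trained SAM exactly at the start of fine-tuning, so training can inject medical-domain knowledge without destabilizing the general knowledge already present.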
|