1. Zhu J, Zou L, Xie X, Xu R, Tian Y, Zhang B. 2.5D deep learning based on multi-parameter MRI to differentiate primary lung cancer pathological subtypes in patients with brain metastases. Eur J Radiol 2024; 180:111712. PMID: 39222565; DOI: 10.1016/j.ejrad.2024.111712.
Abstract
BACKGROUND Brain metastases (BMs) represent a severe neurological complication of cancers originating from various sources. Accurately distinguishing the pathological subtypes of brain metastatic tumors from lung cancer (LC) is a highly challenging clinical task. The utility of 2.5-dimensional (2.5D) deep learning (DL) in distinguishing pathological subtypes of LC with BMs is yet to be determined. METHODS A total of 250 patients were included in this retrospective study, divided in a 7:3 ratio into a training set (N=175) and a testing set (N=75). We devised a method to assemble a series of two-dimensional (2D) images by extracting adjacent slices from a central slice in both superior-inferior and anterior-posterior directions to form a 2.5D dataset. Multiple instance learning (MIL) is a weakly supervised learning method that organizes training instances into "bags" and provides labels for entire bags, with the purpose of learning a classifier from the labeled positive and negative bags that predicts the class of an unseen bag. We therefore employed MIL to construct a comprehensive 2.5D feature set. The single slice was used as input for constructing the 2D model. DL features were extracted from these slices using a pre-trained ResNet101. All feature sets were input into a support vector machine (SVM) for evaluation. The diagnostic performance of the classification models was evaluated using five-fold cross-validation, with accuracy and area under the curve (AUC) metrics calculated for analysis. RESULTS The best performance was obtained with the 2.5D DL model, which achieved a micro-AUC of 0.868 (95% confidence interval [CI], 0.817-0.919) and an accuracy of 0.836 in the test cohort. The 2D model achieved a micro-AUC of 0.836 (95% CI, 0.778-0.894) and an accuracy of 0.827 in the test cohort. CONCLUSIONS The proposed 2.5D DL model is feasible and effective in identifying pathological subtypes of BMs from lung cancer.
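To make the pipeline concrete — a 2.5D stack assembled from adjacent slices, deep features from a pre-trained ResNet101, and an SVM on top — here is a minimal Python sketch; the slice window, the absence of intensity preprocessing, and the toy labels are assumptions, not the authors' code:

```python
import numpy as np
import torch
from torchvision.models import resnet101, ResNet101_Weights
from sklearn.svm import SVC

def stack_25d(volume: np.ndarray, center: int, offset: int = 1) -> np.ndarray:
    """Assemble a 2.5D stack: the central slice plus its neighbours."""
    lo = max(center - offset, 0)
    hi = min(center + offset, volume.shape[0] - 1)
    return volume[[lo, center, hi]]          # (3, H, W), reused as RGB channels

# Frozen pre-trained ResNet101 with the final FC layer removed.
backbone = torch.nn.Sequential(
    *list(resnet101(weights=ResNet101_Weights.DEFAULT).children())[:-1]).eval()

def deep_features(stack: np.ndarray) -> np.ndarray:
    x = torch.from_numpy(stack).float().unsqueeze(0)   # (1, 3, H, W)
    with torch.no_grad():
        return backbone(x).flatten().numpy()           # 2048-dim feature vector

# Toy data: one feature vector per lesion, binary subtype label.
volume = np.random.rand(32, 224, 224).astype(np.float32)
X = np.stack([deep_features(stack_25d(volume, c)) for c in (10, 15, 20, 25)])
y = np.array([0, 1, 0, 1])
clf = SVC().fit(X, y)                                  # five-fold CV omitted
```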
Affiliation(s)
- Jinling Zhu
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Li Zou
- Department of Radiotherapy & Oncology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Xin Xie
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Ruizhe Xu
- Department of Radiotherapy & Oncology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Ye Tian
- Department of Radiotherapy & Oncology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
- Bo Zhang
- Department of Radiology, The Second Affiliated Hospital of Soochow University, Suzhou, 215004, China
2. Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance. Comput Biol Med 2024; 183:109242. PMID: 39388839; DOI: 10.1016/j.compbiomed.2024.109242.
Abstract
BACKGROUND Self-supervised pre-training of deep learning models with contrastive learning is a widely used technique in image analysis. Current findings indicate a strong potential for contrastive pre-training on medical images. However, further research is necessary to incorporate the particular characteristics of these images. METHOD We hypothesize that the similarity of medical images hinders the success of contrastive learning in the medical imaging domain. To this end, we investigate different strategies based on deep embedding, information theory, and hashing in order to identify and reduce redundancy in medical pre-training datasets. The effect of these different reduction strategies on contrastive learning is evaluated on two pre-training datasets and several downstream classification tasks. RESULTS In all of our experiments, dataset reduction leads to a considerable performance gain in downstream tasks, e.g., an AUC score improvement from 0.78 to 0.83 for the COVID CT Classification Grand Challenge, 0.97 to 0.98 for the OrganSMNIST Classification Challenge, and 0.73 to 0.83 for a brain hemorrhage classification task. Furthermore, pre-training is up to nine times faster due to the dataset reduction. CONCLUSIONS The proposed approach highlights the importance of dataset quality and provides a transferable method for improving contrastive pre-training on downstream classification tasks with medical images.
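Of the reduction strategies named above, the embedding-based one is the easiest to sketch: drop an image when its deep embedding is too similar to one already kept. The cosine threshold and the random embeddings below are assumptions:

```python
import numpy as np

def reduce_by_similarity(embeddings: np.ndarray, threshold: float = 0.95) -> list:
    """Greedily keep an image only if no already-kept image is too similar."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i, e in enumerate(normed):
        if all(float(e @ normed[j]) < threshold for j in kept):
            kept.append(i)
    return kept

# Toy example: 1000 random 512-d embeddings plus one deliberate near-duplicate.
emb = np.random.randn(1000, 512)
emb[1] = emb[0] + 0.01 * np.random.randn(512)   # near-duplicate of image 0
kept = reduce_by_similarity(emb)
assert 1 not in kept                            # the duplicate gets discarded
```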
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany; Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert Einstein Allee, Ulm, 89081, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
3. Wang P, Zhang J, Liu Y, Wu J, Yu H, Yu C, Jiang R. Combining 2.5D deep learning and conventional features in a joint model for the early detection of sICH expansion. Sci Rep 2024; 14:22467. PMID: 39341957; PMCID: PMC11439036; DOI: 10.1038/s41598-024-73415-7.
Abstract
The study aims to investigate the potential of training efficient deep learning models using 2.5-dimensional (2.5D) masks of spontaneous intracerebral hemorrhage (sICH). Furthermore, it evaluates and compares the predictive performance of a joint model incorporating four types of features against standalone 2.5D deep learning, radiomics, radiology, and clinical models for early expansion in sICH. A total of 254 sICH patients were enrolled retrospectively and divided into two groups according to whether the hematoma was enlarged or not. The 2.5D mask of sICH is constructed from the maximum axial, coronal, and sagittal planes of the hematoma, and is used to train the deep learning model and extract deep learning features. Predictive models were built on clinical, radiology, radiomics, and deep learning features separately, and on all four feature types jointly. The diagnostic performance of each model was measured using the area under the receiver operating characteristic curve (AUC), accuracy, recall, F1 score, and decision curve analysis (DCA). The AUCs of the clinical model, radiology model, radiomics model, deep learning model, joint model, and nomogram model on the training set (training and cross-validation) were 0.639, 0.682, 0.859, 0.807, 0.939, and 0.942, respectively, while the AUCs on the test set (external validation) were 0.680, 0.758, 0.802, 0.857, 0.929, and 0.926. Decision curve analysis showed that the joint model was superior to the other models and demonstrated good consistency between the predicted probability of early hematoma expansion and the actual occurrence probability. Our study demonstrates that the joint model is a more efficient and robust prediction model, as verified by multicenter data. This finding highlights the potential clinical utility of a multifactorial prediction model that integrates various data sources for prognostication in patients with intracerebral hemorrhage. Critical relevance statement: Combining 2.5D deep learning features with clinical features, radiology markers, and radiomics signatures to establish a joint model enables physicians to conduct better individualized assessments of the risk of early expansion of sICH.
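A short sketch of the 2.5D mask construction as described — taking the axial, coronal, and sagittal planes where the hematoma is largest; the axis ordering of the volume is an assumption:

```python
import numpy as np

def max_planes_25d(mask: np.ndarray, image: np.ndarray):
    """Pick the axial, coronal, and sagittal slices where the mask is largest."""
    z = int(mask.sum(axis=(1, 2)).argmax())   # axial plane with max hematoma area
    y = int(mask.sum(axis=(0, 2)).argmax())   # coronal
    x = int(mask.sum(axis=(0, 1)).argmax())   # sagittal
    return image[z, :, :], image[:, y, :], image[:, :, x]

vol = np.random.rand(64, 128, 128)            # toy CT volume (z, y, x)
seg = np.zeros_like(vol, dtype=bool)
seg[30:40, 50:80, 60:90] = True               # toy hematoma mask
axial, coronal, sagittal = max_planes_25d(seg, vol)
```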
Affiliation(s)
- Peng Wang
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
- Junfeng Zhang
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
- Yi Liu
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
- Jialing Wu
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
- Hongmei Yu
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
- Chengzhou Yu
- Department of Radiology, Chinese People's Liberation Army Marine Corps Hospital, Chaozhou, 521000, China
- Rui Jiang
- Department of Radiology, The General Hospital of Western Theater Command of Chinese People's Liberation Army, Chengdu, 610083, China
4. Xue X, Sun L, Liang D, Zhu J, Liu L, Sun Q, Liu H, Gao J, Fu X, Ding J, Dai X, Tao L, Cheng J, Li T, Zhou F. Deep learning-based segmentation for high-dose-rate brachytherapy in cervical cancer using 3D Prompt-ResUNet. Phys Med Biol 2024; 69:195008. PMID: 39270708; DOI: 10.1088/1361-6560/ad7ad1.
Abstract
Objective. To develop and evaluate a 3D Prompt-ResUNet module that combines a prompt-based model with 3D nnUNet for rapid and consistent autosegmentation of the high-risk clinical target volume (HRCTV) and organs at risk (OARs) in high-dose-rate brachytherapy for cervical cancer patients. Approach. We used 73 computed tomography scans and 62 magnetic resonance imaging scans from 135 (103 for training, 16 for validation, and 16 for testing) cervical cancer patients across two hospitals for HRCTV and OAR segmentation. The deep learning networks 3D Prompt-ResUNet, nnUNet, and Segment Anything Model (SAM)-Med3D were compared for the segmentation. Evaluation was conducted in two parts: geometric and clinical assessments. Quantitative metrics included the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95%), Jaccard index (JI), and Matthews correlation coefficient (MCC). Clinical evaluation involved interobserver comparison, 4-grade expert scoring, and a double-blinded Turing test. Main results. The Prompt-ResUNet model performed most similarly to experienced radiation oncologists, outperforming less experienced ones. During testing, the DSC, HD95% (mm), JI, and MCC values (mean ± SD) for HRCTV were 0.92 ± 0.03, 2.91 ± 0.69, 0.85 ± 0.04, and 0.92 ± 0.02, respectively. For the bladder, these values were 0.93 ± 0.05, 3.07 ± 1.05, 0.87 ± 0.08, and 0.93 ± 0.05; for the rectum, 0.87 ± 0.03, 3.54 ± 1.46, 0.78 ± 0.05, and 0.87 ± 0.03; and for the sigmoid, 0.76 ± 0.11, 7.54 ± 5.54, 0.63 ± 0.14, and 0.78 ± 0.09. The Prompt-ResUNet achieved a clinical viability score of at least 2 in all evaluation cases (100%) for both HRCTV and bladder and exceeded the 30% positive rate benchmark for all evaluated structures in the Turing test. Significance. The Prompt-ResUNet architecture demonstrated high consistency with ground truth in autosegmentation of the HRCTV and OARs, reducing interobserver variability and shortening treatment times.
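The geometric metrics used here are standard; for reference, a minimal NumPy sketch of the DSC and JI computations on binary masks (toy cubes as stand-ins for HRCTV masks):

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, gt: np.ndarray):
    """Dice similarity coefficient and Jaccard index for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    ji = inter / np.logical_or(pred, gt).sum()
    return float(dsc), float(ji)

pred = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True
gt = np.zeros_like(pred)
gt[22:42, 20:40, 20:40] = True                # two-voxel shift of the same cube
print(dice_jaccard(pred, gt))                 # high overlap -> DSC 0.90, JI 0.82
```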
Affiliation(s)
- Xian Xue
- Key Laboratory of Radiological Protection and Nuclear Emergency, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing 100088, People's Republic of China
- Lining Sun
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai 200032, People's Republic of China
- Dazhu Liang
- Digital Health China Technologies Co., LTD, Beijing 100089, People's Republic of China
- Jingyang Zhu
- Department of Radiation Oncology, Zhongcheng Cancer Center, Beijing 100160, People's Republic of China
- Lele Liu
- Department of Radiation Oncology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, People's Republic of China
- Quanfu Sun
- Key Laboratory of Radiological Protection and Nuclear Emergency, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing 100088, People's Republic of China
- Hefeng Liu
- Digital Health China Technologies Co., LTD, Beijing 100089, People's Republic of China
- Jianwei Gao
- Digital Health China Technologies Co., LTD, Beijing 100089, People's Republic of China
- Xiaosha Fu
- Biomedical Research Centre, Sheffield Hallam University, Sheffield S11WB, United Kingdom
- Jingjing Ding
- Department of Radiation Oncology, Chinese People's Liberation Army (PLA) General Hospital, Beijing 100853, People's Republic of China
- Xiangkun Dai
- Department of Radiation Oncology, Chinese People's Liberation Army (PLA) General Hospital, Beijing 100853, People's Republic of China
- Laiyuan Tao
- Digital Health China Technologies Co., LTD, Beijing 100089, People's Republic of China
- Jinsheng Cheng
- Key Laboratory of Radiological Protection and Nuclear Emergency, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing 100088, People's Republic of China
- Tengxiang Li
- Department of Nuclear Science and Engineering, Nanhua University, Hunan 421001, People's Republic of China
- Fugen Zhou
- Department of Aerospace Information Engineering, Beihang University, Beijing 100191, People's Republic of China
5. Zhao M, Song L, Zhu J, Zhou T, Zhang Y, Chen SC, Li H, Cao D, Jiang YQ, Ho W, Cai J, Ren G. Non-contrasted computed tomography (NCCT) based chronic thromboembolic pulmonary hypertension (CTEPH) automatic diagnosis using cascaded network with multiple instance learning. Phys Med Biol 2024; 69:185011. PMID: 39191289; DOI: 10.1088/1361-6560/ad7455.
Abstract
Objective. The diagnosis of chronic thromboembolic pulmonary hypertension (CTEPH) is challenging due to nonspecific early symptoms, complex diagnostic processes, and small lesion sizes. This study aims to develop an automatic diagnosis method for CTEPH using non-contrasted computed tomography (NCCT) scans, enabling automated diagnosis without precise lesion annotation. Approach. A novel cascade network (CN) with multiple instance learning (CNMIL) framework was developed to improve the diagnosis of CTEPH. This method uses a CN architecture combining two ResNet-18 networks to progressively distinguish between normal and CTEPH cases. Multiple instance learning (MIL) is employed to treat each 3D CT case as a "bag" of image slices, using attention scoring to identify the most important slices. An attention module helps the model focus on diagnostically relevant regions within each slice. The dataset comprised NCCT scans from 300 subjects, including 117 males and 183 females, with an average age of 52.5 ± 20.9 years, consisting of 132 normal cases and 168 cases of lung diseases, including 88 cases of CTEPH. The CNMIL framework was evaluated using sensitivity, specificity, and area under the curve (AUC) metrics, and compared with common 3D supervised classification networks and existing CTEPH automatic diagnosis networks. Main results. The CNMIL framework demonstrated high diagnostic performance, achieving an AUC of 0.807, accuracy of 0.833, sensitivity of 0.795, and specificity of 0.849 in distinguishing CTEPH cases. Ablation studies revealed that integrating MIL and the CN significantly enhanced performance, with the model achieving an AUC of 0.978 and perfect sensitivity (1.000) in normal classification. Comparisons with other 3D network architectures confirmed that the integrated model outperformed others, achieving the highest AUC of 0.8419. Significance. The CNMIL network requires no additional scans or annotations, relying solely on NCCT. This approach can improve timely and accurate CTEPH detection, resulting in better patient outcomes.
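A sketch of attention-scored MIL pooling in the spirit described above, where per-slice features are weighted by learned attention before bag-level classification; the feature dimension, hidden size, and two-class head are assumptions:

```python
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    """Weight per-slice features by learned attention, then classify the bag."""
    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, 2)       # normal vs CTEPH

    def forward(self, slice_feats: torch.Tensor):
        a = torch.softmax(self.score(slice_feats), dim=0)   # (n_slices, 1)
        bag = (a * slice_feats).sum(dim=0)                  # weighted bag feature
        return self.classifier(bag), a                      # logits + slice weights

feats = torch.randn(80, 512)     # e.g. ResNet-18 features from 80 slices of one CT
logits, weights = AttentionMILPool()(feats)
```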
Affiliation(s)
- Mayang Zhao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Ta Zhou
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Yuanpeng Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Shu-Cheng Chen
- School of Nursing, Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Haojiang Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Centre, Guangzhou, People's Republic of China
- Di Cao
- Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Centre, Guangzhou, People's Republic of China
- Yi-Quan Jiang
- Department of Minimally Invasive Interventional Therapy, State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, People's Republic of China
- Waiyin Ho
- Department of Nuclear Medicine, Queen Mary Hospital, Hong Kong Special Administrative Region of China, People's Republic of China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong Special Administrative Region of China, People's Republic of China
6. DuPlissis A, Medewar A, Hegarty E, Laing A, Shen A, Gomez S, Mondal S, Ben-Yakar A. vivoBodySeg: Machine learning-based analysis of C. elegans immobilized in vivoChip for automated developmental toxicity testing. Res Sq [Preprint] 2024:rs.3.rs-4796642. PMID: 39281859; PMCID: PMC11398583; DOI: 10.21203/rs.3.rs-4796642/v1.
Abstract
Developmental toxicity (DevTox) tests evaluate the adverse effects of chemical exposures on an organism's development. While large animal tests are currently heavily relied on, the development of new approach methodologies (NAMs) is encouraging industries and regulatory agencies to evaluate these novel assays. Several practical advantages have made C. elegans a useful model for rapid toxicity testing and studying developmental biology. Although its potential for DevTox studies is promising, current low-resolution and labor-intensive methodologies prohibit the use of C. elegans for sub-lethal DevTox studies at high throughputs. With the recent availability of a large-scale microfluidic device, vivoChip, we can now rapidly collect 3D high-resolution images of ~1,000 C. elegans from 24 different populations. In this paper, we demonstrate DevTox studies using a 2.5D U-Net architecture (vivoBodySeg) that can precisely segment C. elegans in images obtained from vivoChip devices, achieving an average Dice score of 97.80. The fully automated platform can analyze 36 GB of data from each device to phenotype multiple body parameters within 35 min on a desktop PC, ~140x faster than manual analysis. Highly reproducible DevTox parameters (4-8% CV) and additional autofluorescence-based phenotypes allow us to assess the toxicity of chemicals with high statistical power.
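The reproducibility figure quoted above is a coefficient of variation (CV); for reference, a tiny sketch of that computation on hypothetical replicate measurements:

```python
import numpy as np

def coefficient_of_variation(x: np.ndarray) -> float:
    """CV in percent: sample std divided by mean across replicates."""
    return float(100.0 * x.std(ddof=1) / x.mean())

lengths_um = np.array([812.0, 845.0, 798.0, 830.0])  # hypothetical worm lengths
print(f"CV = {coefficient_of_variation(lengths_um):.1f}%")
```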
7. de Araújo AS, Pinho MS, Marques da Silva AM, Fiorentini LF, Becker J. A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images. J Imaging 2024; 10:161. PMID: 39057732; PMCID: PMC11278143; DOI: 10.3390/jimaging10070161.
Abstract
Precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. First, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground-truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of 0.68 ± 0.08 on the unseen dataset, demonstrating commendable qualitative results.
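A sketch of one common reading of the pseudo-RGB construction (assumed here, as the abstract does not spell out the channel assignment): a slice and its two neighbors become the three color channels:

```python
import numpy as np

def pseudo_rgb(volume: np.ndarray, k: int) -> np.ndarray:
    """Stack slices k-1, k, k+1 as the R, G, B channels (clamped at edges)."""
    lo = max(k - 1, 0)
    hi = min(k + 1, volume.shape[0] - 1)
    rgb = np.stack([volume[lo], volume[k], volume[hi]], axis=-1)
    rgb = rgb - rgb.min()
    return rgb / max(float(rgb.max()), 1e-8)   # normalize to [0, 1] for the network

t1 = np.random.rand(160, 256, 256).astype(np.float32)   # toy T1-weighted volume
img = pseudo_rgb(t1, 80)                                # (256, 256, 3) Res-Unet input
```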
Affiliation(s)
- Adriel Silva de Araújo
- School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
- Márcio Sarroglia Pinho
- School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
- Luis Felipe Fiorentini
- Centro de Diagnóstico por Imagem, Santa Casa de Misericórdia de Porto Alegre, Porto Alegre 90020-090, Brazil
- Grupo Hospitalar Conceição, Porto Alegre 91350-200, Brazil
- Jefferson Becker
- Hospital São Lucas, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90610-000, Brazil
- Brain Institute, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
8. Xue X, Liang D, Wang K, Gao J, Ding J, Zhou F, Xu J, Liu H, Sun Q, Jiang P, Tao L, Shi W, Cheng J. A deep learning-based 3D Prompt-nnUnet model for automatic segmentation in brachytherapy of postoperative endometrial carcinoma. J Appl Clin Med Phys 2024; 25:e14371. PMID: 38682540; PMCID: PMC11244685; DOI: 10.1002/acm2.14371.
Abstract
PURPOSE To create and evaluate a three-dimensional (3D) Prompt-nnUnet module that combines a prompt-based model with 3D nnUnet for rapid and consistent autosegmentation of the high-risk clinical target volume (HR CTV) and organs at risk (OARs) in high-dose-rate brachytherapy (HDR BT) for patients with postoperative endometrial carcinoma (EC). METHODS AND MATERIALS Across two experimental batches, a total of 321 computed tomography (CT) scans were obtained for HR CTV segmentation from 321 patients with EC, and 125 CT scans for OAR segmentation from 125 patients. The training/validation/test splits were 257/32/32 and 87/13/25 for HR CTV and OARs, respectively. The deep learning networks 3D Prompt-nnUnet and 3D nnUnet were compared for HR CTV and OAR segmentation. Three-fold cross-validation and several quantitative metrics were employed, including the Dice similarity coefficient (DSC), Hausdorff distance (HD), 95th percentile of Hausdorff distance (HD95%), and intersection over union (IoU). RESULTS The Prompt-nnUnet included two forms of prompts, the Predict-Prompt (PP) and the Label-Prompt (LP), with the LP performing most similarly to the experienced radiation oncologist and outperforming the less experienced ones. During the testing phase, the mean DSC values for the LP were 0.96 ± 0.02, 0.91 ± 0.02, and 0.83 ± 0.07 for HR CTV, rectum, and urethra, respectively. The mean HD values (mm) were 2.73 ± 0.95, 8.18 ± 4.84, and 2.11 ± 0.50, respectively; the mean HD95% values (mm) were 1.66 ± 1.11, 3.07 ± 0.94, and 1.35 ± 0.55, respectively; and the mean IoUs were 0.92 ± 0.04, 0.84 ± 0.03, and 0.71 ± 0.09, respectively. The new model delineated each structure in < 2.35 s, saving clinician time. CONCLUSION The Prompt-nnUnet architecture, particularly the LP, was highly consistent with ground truth (GT) in HR CTV and OAR autosegmentation, reducing interobserver variability and shortening treatment times.
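For reference, a minimal sketch of the HD95% metric on binary masks using boundary voxels and a Euclidean distance transform; distances are in voxel units, and anisotropic spacing is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    surf_a = a & ~binary_erosion(a)               # boundary voxels of a
    surf_b = b & ~binary_erosion(b)
    dist_to_b = distance_transform_edt(~surf_b)   # each voxel's distance to b's surface
    dist_to_a = distance_transform_edt(~surf_a)
    d = np.concatenate([dist_to_b[surf_a], dist_to_a[surf_b]])
    return float(np.percentile(d, 95))

a = np.zeros((64, 64, 64), dtype=bool)
a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a)
b[21:41, 20:40, 20:40] = True                     # one-voxel shift
print(hd95(a, b))                                 # -> about one voxel
```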
Affiliation(s)
- Xian Xue
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Dazhu Liang
- Digital Health China Technologies Co., LTD, Beijing, China
- Kaiyue Wang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Jianwei Gao
- Digital Health China Technologies Co., LTD, Beijing, China
- Jingjing Ding
- Department of Radiotherapy, Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Fugen Zhou
- Department of Aerospace Information Engineering, Beihang University, Beijing, China
- Juan Xu
- Digital Health China Technologies Co., LTD, Beijing, China
- Hefeng Liu
- Digital Health China Technologies Co., LTD, Beijing, China
- Quanfu Sun
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
- Ping Jiang
- Department of Radiotherapy, Peking University Third Hospital, Beijing, China
- Laiyuan Tao
- Digital Health China Technologies Co., LTD, Beijing, China
- Wenzhao Shi
- Digital Health China Technologies Co., LTD, Beijing, China
- Jinsheng Cheng
- Secondary Standard Dosimetry Laboratory, National Institute for Radiological Protection, Chinese Center for Disease Control and Prevention (CDC), Beijing, China
9. Chi M, An H, Jin X, Nie Z. An N-Shaped Lightweight Network with a Feature Pyramid and Hybrid Attention for Brain Tumor Segmentation. Entropy (Basel) 2024; 26:166. PMID: 38392421; PMCID: PMC10888052; DOI: 10.3390/e26020166.
Abstract
Brain tumor segmentation using neural networks presents challenges in accurately capturing diverse tumor shapes and sizes while maintaining real-time performance. Additionally, addressing class imbalance is crucial for achieving accurate clinical results. To tackle these issues, this study proposes a novel N-shaped lightweight network that combines multiple feature pyramid paths and U-Net architectures. Furthermore, we integrate hybrid attention mechanisms at various locations within the depth-wise separable convolution module to improve efficiency, with channel attention found to be the most effective for skip connections in the proposed network. Moreover, we introduce a combination loss function that incorporates a newly designed weighted cross-entropy loss and Dice loss to effectively tackle the issue of class imbalance. Extensive experiments are conducted on four publicly available datasets, i.e., UCSF-PDGM, BraTS 2021, BraTS 2019, and MSD Task 01, to evaluate the performance of different methods. The results demonstrate that the proposed network achieves superior segmentation accuracy compared to state-of-the-art methods. The proposed network not only improves overall segmentation performance but also provides favorable computational efficiency, making it a promising approach for clinical applications.
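A sketch of a combined weighted cross-entropy plus soft Dice loss of the kind described above; the class weights and the mixing coefficient alpha are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, class_weights, alpha=0.5, eps=1e-6):
    """Weighted cross-entropy plus soft Dice, averaged over classes."""
    ce = F.cross_entropy(logits, target, weight=class_weights)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)
    return alpha * ce + (1.0 - alpha) * dice.mean()

logits = torch.randn(2, 4, 64, 64, requires_grad=True)  # 4 tumor sub-region classes
target = torch.randint(0, 4, (2, 64, 64))
w = torch.tensor([0.1, 1.0, 1.0, 1.0])                  # down-weight background
loss = combined_loss(logits, target, w)
loss.backward()
```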
Affiliation(s)
- Mengxian Chi
- School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Hong An
- School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Xu Jin
- School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
- Zhenguo Nie
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing 100084, China
- Beijing Key Laboratory of Precision/Ultra-Precision Manufacturing Equipments and Control, Tsinghua University, Beijing 100084, China
10. Wolf D, Payer T, Lisson CS, Lisson CG, Beer M, Götz M, Ropinski T. Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging. Sci Rep 2023; 13:20260. PMID: 37985685; PMCID: PMC10662445; DOI: 10.1038/s41598-023-46433-0.
Abstract
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. To this end, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we propose the SparK pre-training for medical imaging tasks with only small annotated datasets.
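The heart of masked-autoencoder pre-training is reconstructing randomly masked patches; a minimal sketch of the masking step follows (patch size and mask ratio are typical defaults, assumed here, and SparK's sparse-convolution machinery is omitted):

```python
import torch

def mask_patches(img: torch.Tensor, patch: int = 16, ratio: float = 0.6):
    """Zero out a random subset of non-overlapping patches; return the mask too."""
    b, c, h, w = img.shape                       # h and w must be divisible by patch
    keep = torch.rand(b, h // patch, w // patch) > ratio   # True = patch kept
    mask = keep.repeat_interleave(patch, 1).repeat_interleave(patch, 2)
    return img * mask.unsqueeze(1), mask

x = torch.randn(4, 1, 224, 224)      # batch of CT slices
x_masked, mask = mask_patches(x)     # encoder sees x_masked; the loss targets ~mask
```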
Affiliation(s)
- Daniel Wolf
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Tristan Payer
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
- Catharina Silvia Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Christoph Gerhard Lisson
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Meinrad Beer
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Michael Götz
- Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Ulm, Germany
- Timo Ropinski
- Visual Computing Research Group, Institute of Media Informatics, Ulm University, Ulm, Germany
11. Yoo YS, Kim D, Yang S, Kang SR, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. Comparison of 2D, 2.5D, and 3D segmentation networks for maxillary sinuses and lesions in CBCT images. BMC Oral Health 2023; 23:866. PMID: 37964229; PMCID: PMC10647072; DOI: 10.1186/s12903-023-03607-6.
Abstract
BACKGROUND The purpose of this study was to compare the segmentation performance of 2D, 2.5D, and 3D networks for maxillary sinuses (MSs) and lesions inside the maxillary sinus (MSLs) with variations in size, shape, and location in cone-beam CT (CBCT) images under the same constraint of memory capacity. METHODS The 2D, 2.5D, and 3D networks were compared comprehensively for the segmentation of the MS and MSL in CBCT images under the same constraint of memory capacity. MSLs were obtained by subtracting the prediction of the air region of the maxillary sinus (MSA) from that of the MS. RESULTS The 2.5D network showed the highest segmentation performance for the MS and MSA compared to the 2D and 3D networks. The Jaccard coefficient, Dice similarity coefficient, precision, and recall of the 2.5D U-Net++ network reached 0.947, 0.973, 0.974, and 0.971 for the MS, respectively, and 0.787, 0.875, 0.897, and 0.858 for the MSL, respectively. CONCLUSIONS The 2.5D segmentation network demonstrated superior segmentation performance for various MSLs with an ensemble learning approach combining the predictions from three orthogonal planes.
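A sketch of the orthogonal-plane ensemble and the MS-minus-MSA subtraction described above; averaging the three probability maps as the fusion rule is an assumption:

```python
import numpy as np

def ensemble_orthogonal(p_ax, p_co, p_sa, thr: float = 0.5) -> np.ndarray:
    """Fuse per-plane probability volumes by averaging, then threshold."""
    return (p_ax + p_co + p_sa) / 3.0 > thr

shape = (64, 96, 96)                 # toy CBCT probability volumes per 2D network
p_ax, p_co, p_sa = (np.random.rand(*shape) for _ in range(3))
ms = ensemble_orthogonal(p_ax, p_co, p_sa)     # maxillary sinus (MS) prediction
msa = ms & (np.random.rand(*shape) > 0.5)      # stand-in for the predicted air region
msl = ms & ~msa                                # lesion = MS minus its air region
```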
Affiliation(s)
- Yeon-Sun Yoo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- DaEl Kim
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, Seoul, Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
12. Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023; 165:107437. PMID: 37717526; DOI: 10.1016/j.compbiomed.2023.107437.
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, indefatigable diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD system pipeline. The literature on lung segmentation and lung nodule detection mostly comprises techniques that process 3D volumes or 2D slices, together with surveys of those techniques. However, surveys highlighting 2.5D techniques for lung segmentation and lung nodule detection have been lacking. This paper presents a background and discussion on 2.5D methods to fill this gap. It also gives a taxonomy of 2.5D approaches and a detailed description of each. Based on the taxonomy, various 2.5D techniques for lung segmentation and lung nodule detection are clustered into these 2.5D approaches, followed by possible directions for future work.
13. Duan S, Cao G, Hua Y, Hu J, Zheng Y, Wu F, Xu S, Rong T, Liu B. Identification of Origin for Spinal Metastases from MR Images: Comparison Between Radiomics and Deep Learning Methods. World Neurosurg 2023; 175:e823-e831. PMID: 37059360; DOI: 10.1016/j.wneu.2023.04.029.
Abstract
OBJECTIVE To determine whether spinal metastatic lesions originated from lung cancer or from other cancers based on spinal contrast-enhanced T1 (CET1) magnetic resonance (MR) images analyzed using radiomics (RAD) and deep learning (DL) methods. METHODS We recruited and retrospectively reviewed 173 patients diagnosed with spinal metastases at two different centers between July 2018 and June 2021. Of these, 68 involved lung cancer and 105 other types of cancer. They were assigned to an internal cohort of 149 patients, randomly divided into a training set and a validation set, and to an external cohort of 24 patients. All patients underwent CET1-MR imaging before surgery or biopsy. We developed two predictive algorithms: a DL model and a RAD model. We compared performance between the models, and against human radiological assessment, via accuracy (ACC) and receiver operating characteristic (ROC) analyses. Furthermore, we analyzed the correlation between RAD and DL features. RESULTS The DL model outperformed the RAD model across the board, with ACC/area under the ROC curve (AUC) values of 0.93/0.94 (DL) versus 0.84/0.93 (RAD) on the training set from the internal cohort, 0.74/0.76 versus 0.72/0.75 on the validation set, and 0.72/0.76 versus 0.69/0.72 on the external test cohort. On the validation set, it also outperformed expert radiological assessment (ACC: 0.65, AUC: 0.68). We found only weak correlations between DL and RAD features. CONCLUSION The DL algorithm successfully identified the origin of spinal metastases from preoperative CET1-MR images, outperforming both the RAD model and expert assessment by trained radiologists.
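For reference, a minimal sketch of the ACC/AUC comparison between two models' prediction scores, using scikit-learn metrics on toy labels and scores:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # 1 = lung-cancer origin (toy)
dl_scores = np.array([0.9, 0.2, 0.7, 0.8, 0.3, 0.4, 0.6, 0.1])
rad_scores = np.array([0.8, 0.4, 0.5, 0.7, 0.2, 0.6, 0.4, 0.3])

for name, s in [("DL", dl_scores), ("RAD", rad_scores)]:
    print(name, accuracy_score(y_true, (s > 0.5).astype(int)),
          roc_auc_score(y_true, s))
```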
Affiliation(s)
- Shuo Duan
- Department of Orthopaedic Surgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Guanmei Cao
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yichun Hua
- Department of Medical Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Junnan Hu
- Department of Orthopaedic Surgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yali Zheng
- Department of Respiratory, Critical Care, and Sleep Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fangfang Wu
- Department of Respiratory, Critical Care, and Sleep Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Shuai Xu
- Department of Spinal Surgery, Peking University People's Hospital, Peking University, Beijing, China
- Tianhua Rong
- Department of Orthopaedic Surgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Baoge Liu
- Department of Orthopaedic Surgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
14. Avesta A, Hui Y, Aboian M, Duncan J, Krumholz HM, Aneja S. 3D Capsule Networks for Brain Image Segmentation. AJNR Am J Neuroradiol 2023; 44:562-568. PMID: 37080721; PMCID: PMC10171390; DOI: 10.3174/ajnr.a7845.
Abstract
BACKGROUND AND PURPOSE Current autosegmentation models such as UNets and nnUNets have limitations, including the inability to segment images that are not represented during training and lack of computational efficiency. 3D capsule networks have the potential to address these limitations. MATERIALS AND METHODS We used 3430 brain MRIs, acquired in a multi-institutional study, to train and validate our models. We compared our capsule network with standard alternatives, UNets and nnUNets, on the basis of segmentation efficacy (Dice scores), segmentation performance when the image is not well-represented in the training data, performance when the training data are limited, and computational efficiency including required memory and computational speed. RESULTS The capsule network segmented the third ventricle, thalamus, and hippocampus with Dice scores of 95%, 94%, and 92%, respectively, which were within 1% of the Dice scores of UNets and nnUNets. The capsule network significantly outperformed UNets in segmenting images that were not well-represented in the training data, with Dice scores 30% higher. The computational memory required for the capsule network is less than one-tenth of the memory required for UNets or nnUNets. The capsule network is also >25% faster to train compared with UNet and nnUNet. CONCLUSIONS We developed and validated a capsule network that is effective in segmenting brain images, can segment images that are not well-represented in the training data, and is computationally efficient compared with alternatives.
Affiliation(s)
- A Avesta
- From the Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Y Hui
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- M Aboian
- From the Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- J Duncan
- From the Department of Radiology and Biomedical Imaging (A.A., M.A., J.D.)
- Departments of Statistics and Data Science (J.D.)
- Biomedical Engineering (J.D., S.A.), Yale University, New Haven, Connecticut
- H M Krumholz
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Division of Cardiovascular Medicine (H.M.K.), Yale School of Medicine, New Haven, Connecticut
- S Aneja
- Department of Therapeutic Radiology (A.A., Y.H., S.A.)
- Center for Outcomes Research and Evaluation (A.A., Y.H., H.M.K., S.A.)
- Biomedical Engineering (J.D., S.A.), Yale University, New Haven, Connecticut
15. Joel MZ, Avesta A, Yang DX, Zhou JG, Omuro A, Herbst RS, Krumholz HM, Aneja S. Comparing Detection Schemes for Adversarial Images against Deep Learning Models for Cancer Imaging. Cancers (Basel) 2023; 15:1548. PMID: 36900339; PMCID: PMC10000732; DOI: 10.3390/cancers15051548.
Abstract
Deep learning (DL) models have demonstrated state-of-the-art performance in the classification of diagnostic imaging in oncology. However, DL models for medical images can be compromised by adversarial images, where pixel values of input images are manipulated to deceive the DL model. To address this limitation, our study investigates the detectability of adversarial images in oncology using multiple detection schemes. Experiments were conducted on thoracic computed tomography (CT) scans, mammography, and brain magnetic resonance imaging (MRI). For each dataset we trained a convolutional neural network to classify the presence or absence of malignancy. We trained five DL and machine learning (ML)-based detection models and tested their performance in detecting adversarial images. Adversarial images generated using projected gradient descent (PGD) with a perturbation size of 0.004 were detected by the ResNet detection model with an accuracy of 100% for CT, 100% for mammogram, and 90.0% for MRI. Overall, adversarial images were detected with high accuracy in settings where adversarial perturbation was above set thresholds. Adversarial detection should be considered alongside adversarial training as a defense technique to protect DL models for cancer imaging classification from the threat of adversarial images.
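A sketch of the PGD attack used to generate adversarial images; ε = 0.004 comes from the abstract, while the step size, iteration count, and toy model are assumptions:

```python
import torch

def pgd_attack(model, x, y, eps=0.004, alpha=0.001, steps=10):
    """Projected gradient descent within the L-infinity ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()        # ascend the loss
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)   # project into the ball
        x_adv = x_adv.clamp(0.0, 1.0)                       # keep valid pixel range
    return x_adv.detach()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
x = torch.rand(2, 1, 64, 64)                # toy 'scans' in [0, 1]
x_adv = pgd_attack(model, x, torch.tensor([0, 1]))
```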
Affiliation(s)
- Marina Z. Joel
- Department of Dermatology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Arman Avesta
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Daniel X. Yang
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Jian-Ge Zhou
- Department of Chemistry, Physics and Atmospheric Science, Jackson State University, Jackson, MS 39217, USA
- Antonio Omuro
- Department of Neurology, Yale School of Medicine, New Haven, CT 06510, USA
- Roy S. Herbst
- Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz
- Department of Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation (CORE), Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT 06510, USA
- Center for Outcomes Research and Evaluation (CORE), Yale School of Medicine, New Haven, CT 06510, USA