1. Boneš E, Gergolet M, Bohak C, Lesar Ž, Marolt M. Automatic Segmentation and Alignment of Uterine Shapes from 3D Ultrasound Data. Comput Biol Med 2024; 178:108794. [PMID: 38941903] [DOI: 10.1016/j.compbiomed.2024.108794]
Abstract
BACKGROUND The uterus is the most important organ in the female reproductive system. Its shape plays a critical role in fertility and pregnancy outcomes. Advances in medical imaging, such as 3D ultrasound, have significantly improved the exploration of the female genital tract, thereby enhancing gynecological healthcare. Despite well-documented data for organs like the liver and heart, large-scale studies on the uterus are lacking. Existing classifications, such as VCUAM and ESHRE/ESGE, provide different definitions for normal uterine shapes but are not based on real-world measurements. Moreover, the lack of comprehensive datasets significantly hinders research in this area. Our research, part of the larger NURSE study, aims to fill this gap by establishing the shape of a normal uterus using real-world 3D vaginal ultrasound scans. This will facilitate research into uterine shape abnormalities associated with infertility and recurrent miscarriages. METHODS We developed an automated system for the segmentation and alignment of uterine shapes from 3D ultrasound data, which consists of two steps: automatic segmentation of the uteri in 3D ultrasound scans using deep learning techniques, and alignment of the resulting shapes with standard geometrical approaches, enabling the extraction of the normal shape for future analysis. The system was trained and validated on a comprehensive dataset of 3D ultrasound images from multiple medical centers. Its performance was evaluated by comparing the automated results with manual annotations provided by expert clinicians. RESULTS The presented approach demonstrated high accuracy in segmenting and aligning uterine shapes from 3D ultrasound data. The segmentation achieved an average Dice similarity coefficient (DSC) of 0.90. 
Our method for aligning uterine shapes showed minimal translation and rotation errors compared to traditional methods, with the preliminary average shape exhibiting characteristics consistent with expert findings of a normal uterus. CONCLUSION We have presented an approach to automatically segment and align uterine shapes from 3D ultrasound data. We trained a deep learning nnU-Net model that achieved high accuracy and proposed an alignment method using a combination of standard geometrical techniques. Additionally, we have created a publicly available dataset of 3D transvaginal ultrasound volumes with manual annotations of uterine cavities to support further research and development in this field. The dataset and the trained models are available at https://github.com/UL-FRI-LGM/UterUS.
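The Dice similarity coefficient (DSC) reported above can be illustrated with a minimal sketch. This is not the authors' code; the function name and the toy voxel masks are invented for illustration, with masks represented as sets of voxel coordinates.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    Masks are sets of voxel coordinates; DSC = 2|A ∩ B| / (|A| + |B|).
    """
    if not mask_a and not mask_b:
        return 1.0  # both empty: perfect agreement by convention
    overlap = len(mask_a & mask_b)
    return 2.0 * overlap / (len(mask_a) + len(mask_b))

# Toy example: two 5-voxel masks sharing 4 voxels -> DSC = 8/10 = 0.8
auto = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}
manual = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (2, 2, 2)}
print(dice_coefficient(auto, manual))  # 0.8
```

A DSC of 0.90, as reported, means the automatic and manual masks overlap on 90% of their combined volume in this sense.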
Affiliation(s)
- Eva Boneš
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
- Marco Gergolet
- University of Ljubljana, Faculty of Medicine, Vrazov trg 2, Ljubljana, 1000, Slovenia.
- Ciril Bohak
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia; King Abdullah University of Science and Technology, Visual Computing Center, Thuwal, 23955-6900, Saudi Arabia.
- Žiga Lesar
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
- Matija Marolt
- University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, Ljubljana, 1000, Slovenia.
2. Gao C, Wang H. Intelligent Stroke Disease Prediction Model Using Deep Learning Approaches. Stroke Res Treat 2024; 2024:4523388. [PMID: 38817540] [PMCID: PMC11139533] [DOI: 10.1155/2024/4523388]
Abstract
Stroke is a disease with high morbidity and mortality that poses a serious threat to people's health. Early recognition of the various warning signs of stroke is necessary so that timely clinical intervention can help reduce its severity. Deep neural networks have powerful feature representation capabilities and can automatically learn discriminative features from large amounts of data. This paper uses a range of physiological characteristic parameters together with deep neural networks, namely the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) and a regression network, to construct a stroke prediction model. First, to address the imbalance between positive and negative samples in the public stroke dataset, we performed positive-sample data augmentation, using WGAN-GP to generate high-fidelity stroke data for training the prediction network. Then, the relationship between observable physiological characteristic parameters and the predicted risk of suffering a stroke was modeled as a nonlinear mapping, and a stroke prediction model based on a deep regression network was designed. Finally, the proposed method was compared with commonly used machine learning classification algorithms such as decision trees, random forests, support vector machines, and artificial neural networks. The prediction results of the proposed method are optimal in the comprehensive F-measure. Further ablation experiments show that the designed prediction model is robust and can effectively predict stroke.
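The F-measure used to compare the classifiers above combines precision and recall into one score, which is why it is a sensible summary metric for imbalanced data. A minimal sketch (the confusion-matrix counts below are hypothetical, not from the paper):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-measure from confusion-matrix counts; beta=1 gives the F1 score."""
    precision = tp / (tp + fp)  # fraction of predicted strokes that are real
    recall = tp / (tp + fn)     # fraction of real strokes that were caught
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical imbalanced result: 40 true positives, 10 false positives, 20 false negatives
print(round(f_measure(40, 10, 20), 3))  # precision 0.8, recall 2/3 -> F1 = 0.727
```

Unlike plain accuracy, this score stays low when the model simply predicts the majority (non-stroke) class.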
Affiliation(s)
- Chunhua Gao
- School of Tourism and Physical Health, Hezhou University, Hezhou 542899, China
- Hui Wang
- School of Artificial Intelligence, Hezhou University, Hezhou 542899, China
3. Chato L, Regentova E. Survey of Transfer Learning Approaches in the Machine Learning of Digital Health Sensing Data. J Pers Med 2023; 13:1703. [PMID: 38138930] [PMCID: PMC10744730] [DOI: 10.3390/jpm13121703]
Abstract
Machine learning and digital health sensing data have led to numerous research achievements aimed at improving digital health technology. However, using machine learning in digital health poses challenges related to data availability, such as incomplete, unstructured, and fragmented data, as well as issues related to data privacy, security, and data format standardization. Furthermore, there is a risk of bias and discrimination in machine learning models. Thus, developing an accurate prediction model from scratch can be an expensive and complicated task that often requires extensive experiments and complex computations. Transfer learning methods have emerged as a feasible solution to address these issues by transferring knowledge from a previously trained task to develop high-performance prediction models for a new task. This survey paper provides a comprehensive study of the effectiveness of transfer learning for digital health applications to enhance the accuracy and efficiency of diagnoses and prognoses, as well as to improve healthcare services. The first part of this survey paper presents and discusses the most common digital health sensing technologies as valuable data resources for machine learning applications, including transfer learning. The second part discusses the meaning of transfer learning, clarifying the categories and types of knowledge transfer. It also explains transfer learning methods and strategies, and their role in addressing the challenges in developing accurate machine learning models, specifically on digital health sensing data. These methods include feature extraction, fine-tuning, domain adaptation, multitask learning, federated learning, and few-/single-/zero-shot learning. This survey paper highlights the key features of each transfer learning method and strategy, and discusses the limitations and challenges of using transfer learning for digital health applications. 
Overall, this paper is a comprehensive survey of transfer learning methods for digital health sensing data. It aims to inspire researchers to gain knowledge of transfer learning approaches and their applications in digital health, to enhance the current transfer learning approaches, to develop new transfer learning strategies that overcome the current limitations, and to apply them to a variety of digital health technologies.
Affiliation(s)
- Lina Chato
- Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
4. Yang C, Zhou Q, Li M, Xu L, Zeng Y, Liu J, Wei Y, Shi F, Chen J, Li P, Shu Y, Yang L, Shu J. MRI-based automatic identification and segmentation of extrahepatic cholangiocarcinoma using deep learning network. BMC Cancer 2023; 23:1089. [PMID: 37950207] [PMCID: PMC10636947] [DOI: 10.1186/s12885-023-11575-x]
Abstract
BACKGROUND Accurate identification of extrahepatic cholangiocarcinoma (ECC) from an image is challenging because of its small size and complex background structure. Therefore, considering the limitations of manual delineation, it is necessary to develop automated identification and segmentation methods for ECC. The aim of this study was to develop a deep learning approach for automatic identification and segmentation of ECC using MRI. METHODS We recruited 137 ECC patients from our hospital as the main dataset (C1) and an additional 40 patients from other hospitals as the external validation set (C2). All patients underwent axial T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion-weighted imaging (DWI). Manual delineations were performed and served as the ground truth. Next, we used 3D VB-Net to establish single-modality automatic identification and segmentation models based on T1WI (model 1), T2WI (model 2), and DWI (model 3) in the training cohort (80% of C1), and compared them with a combined model (model 4). Subsequently, the generalization capability of the best models was evaluated using the testing set (20% of C1) and the external validation set (C2). Finally, the performance of the developed models was further evaluated. RESULTS Model 3 showed the best identification performance in the training, testing, and external validation cohorts, with success rates of 0.980, 0.786, and 0.725, respectively. Furthermore, model 3 yielded average Dice similarity coefficients (DSC) of 0.922, 0.495, and 0.466 for automatic ECC segmentation in the training, testing, and external validation cohorts, respectively. CONCLUSION The DWI-based model performed better in automatically identifying and segmenting ECC compared to T1WI and T2WI, which may guide clinical decisions and help determine prognosis.
Affiliation(s)
- Chunmei Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Qin Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Mingdong Li
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lulu Xu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yanyan Zeng
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jiong Liu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Chen
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Pinxiong Li
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Yue Shu
- Department of Oncology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Lu Yang
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China.
5. Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. Front Radiol 2023; 3:1241651. [PMID: 37614529] [PMCID: PMC10442705] [DOI: 10.3389/fradi.2023.1241651]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography/CT (PET/CT). Methods The literature search for deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as of papers utilizing 3-dimensional vs. 2-dimensional data. Many papers utilize custom-built models as a modification or variation of U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.90. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Strategies commonly applied to improve performance include data augmentation, utilization of large public datasets, preprocessing such as denoising and cropping, and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
6. Cho K, Kim KD, Nam Y, Jeong J, Kim J, Choi C, Lee S, Lee JS, Woo S, Hong GS, Seo JB, Kim N. CheSS: Chest X-Ray Pre-trained Model via Self-supervised Contrastive Learning. J Digit Imaging 2023; 36:902-910. [PMID: 36702988] [PMCID: PMC10287612] [DOI: 10.1007/s10278-023-00782-4]
Abstract
Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images, labels, and even the models themselves are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, a chest X-ray pre-trained model via self-supervised contrastive learning (CheSS) is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible model pretrained on a 4.8M CXR dataset with self-supervised contrastive learning, and its validation on various downstream tasks, including 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared to a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase using only 1% of the data in a stress-test setting. On bone suppression with perceptual loss, compared to an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed the decent transferability of CheSS weights, which can help researchers overcome data imbalance, data shortage, and inaccessibility of medical image datasets. The CheSS weights are available at https://github.com/mi2rl/CheSS.
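The peak signal-to-noise ratio (PSNR) improvement cited for bone suppression (34.99 to 37.77 dB) is the standard image-fidelity metric, computable directly from the mean squared error. A minimal sketch, not tied to the paper's pipeline; the pixel sequences are invented toy data:

```python
import math

def psnr(reference, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: unbounded PSNR
    return 10.0 * math.log10(max_value ** 2 / mse)

# Toy 8-pixel patches; small pixel errors -> MSE = 1.375 -> PSNR around 46.7 dB
ref = [52, 55, 61, 59, 79, 61, 76, 61]
out = [50, 56, 60, 60, 80, 60, 75, 62]
print(psnr(ref, out))
```

Higher is better: a roughly 3 dB gain, as reported, corresponds to halving the mean squared error.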
Affiliation(s)
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Yujin Nam
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jiheon Jeong
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jeeyoung Kim
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Changyong Choi
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Soyoung Lee
- Department of Biomedical Engineering, Asan Medical Center, College of Medicine, Asan Medical Institute of Convergence Science and Technology, University of Ulsan, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea
- Jun Soo Lee
- Department of Industrial Engineering, Seoul National University, Seoul, Republic of Korea
- Seoyeon Woo
- Department of Biomedical Engineering, University of Waterloo, Waterloo, ON, Canada
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, Republic of Korea.
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
7. Medical Images Segmentation for Lung Cancer Diagnosis Based on Deep Learning Architectures. Diagnostics (Basel) 2023; 13:546. [PMID: 36766655] [PMCID: PMC9914913] [DOI: 10.3390/diagnostics13030546]
Abstract
Lung cancer is one of the leading causes of mortality worldwide. Lung image analysis and segmentation are among the primary steps used for early diagnosis of cancer. Manual medical image segmentation is a very time-consuming task for radiation oncologists. To address this problem, we propose a complete system for early diagnosis of lung cancer from 3D CT scan imaging. The proposed lung cancer diagnosis system is composed of two main parts: a segmentation part developed on top of the UNETR network, and a classification part that classifies the segmentation output as either benign or malignant, developed on top of a self-supervised network. Extensive experiments were performed to obtain better segmentation and classification results, with training and testing conducted on the Decathlon dataset. The experiments achieved new state-of-the-art performance: a segmentation accuracy of 97.83% and a classification accuracy of 98.77%. The proposed system presents a powerful tool for early diagnosis and combatting of lung cancer using 3D-input CT scan data.
8. Huang L, Zhu E, Chen L, Wang Z, Chai S, Zhang B. A transformer-based generative adversarial network for brain tumor segmentation. Front Neurosci 2022; 16:1054948. [PMID: 36532274] [PMCID: PMC9750177] [DOI: 10.3389/fnins.2022.1054948]
Abstract
Brain tumor segmentation remains a challenge among medical image segmentation tasks. With the application of transformers to various computer vision tasks, transformer blocks have shown the capability of learning long-distance dependencies in global space, which is complementary to CNNs. In this paper, we propose a novel transformer-based generative adversarial network to automatically segment brain tumors from multi-modality MRI. Our architecture consists of a generator and a discriminator trained in a min-max game. The generator is based on a typical "U-shaped" encoder-decoder architecture whose bottom layer is composed of transformer blocks with ResNet, and it is trained with deep supervision. The discriminator we designed is a CNN-based network with a multi-scale L1 loss, which has proven effective for medical semantic image segmentation. To validate the effectiveness of our method, we conducted extensive experiments on the BRATS2015 dataset, achieving comparable or better performance than previous state-of-the-art methods. On additional datasets, including BRATS2018 and BRATS2020, experimental results show that our technique generalizes successfully.
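The multi-scale L1 loss mentioned for the discriminator is, in essence, a mean absolute error evaluated at several image resolutions and averaged. The sketch below assumes factor-2 average pooling between scales and operates on plain nested lists; it is an illustration of the idea, not the authors' implementation:

```python
def downsample(img, factor=2):
    """Average-pool a 2D image (list of rows) by the given factor."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [[sum(img[i * factor + di][j * factor + dj]
                 for di in range(factor) for dj in range(factor)) / factor ** 2
             for j in range(w)] for i in range(h)]

def multiscale_l1(pred, target, scales=3):
    """Mean absolute error between pred and target, averaged over `scales` resolutions."""
    total = 0.0
    for s in range(scales):
        n = len(pred) * len(pred[0])
        total += sum(abs(p - t) for rp, rt in zip(pred, target)
                     for p, t in zip(rp, rt)) / n
        if s < scales - 1:  # pool both maps before the next, coarser scale
            pred, target = downsample(pred), downsample(target)
    return total / scales

# Toy 4x4 maps: all-ones prediction vs. all-zeros target disagrees at every scale
ones = [[1.0] * 4 for _ in range(4)]
zeros = [[0.0] * 4 for _ in range(4)]
print(multiscale_l1(ones, zeros))  # 1.0
```

Penalizing errors at coarse scales as well as fine ones pushes the generator to match both the global shape and the local boundary of the tumor mask.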
Affiliation(s)
- Liqun Huang
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Enjun Zhu
- Department of Cardiac Surgery, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Long Chen
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Zhaoyang Wang
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Senchun Chai
- The School of Automation, Beijing Institute of Technology, Beijing, China
- Baihai Zhang
- The School of Automation, Beijing Institute of Technology, Beijing, China
9. Intelligent tuberculosis activity assessment system based on an ensemble of neural networks. Comput Biol Med 2022; 147:105800. [PMID: 35809407] [DOI: 10.1016/j.compbiomed.2022.105800]
Abstract
This article proposes a novel approach to assessing the degree of activity of pulmonary tuberculosis from active tuberculoma foci. It includes a new method for processing lung CT images with an ensemble of deep convolutional neural networks and several special algorithms: an optimized algorithm for preliminary segmentation and selection of informative scans, a new algorithm for refining segmented masks to improve the final accuracy, and an efficient fuzzy inference system for a more weighted activity assessment. The approach also uses a medical classification of disease activity based on densitometric measures of tuberculomas. The selection and markup of the training images were performed manually by qualified pulmonologists from a base of approximately 9,000 CT lung scans of patients enrolled in the dispensary over 15 years. The first basic step of the proposed approach is the developed algorithm for preprocessing CT lung scans. It segments intrapulmonary regions that contain vessels, bronchi, and lung walls in order to detect complex cases of ingrown tuberculomas. To minimize computational cost, the approach includes a new method for selecting informative lung scans, i.e., those that potentially contain tuberculomas. The main processing step is binary segmentation of tuberculomas, performed optimally by an ensemble of neural networks. The size and composition of the ensemble are optimized with an algorithm that calculates individual contributions; a modification of this algorithm using new, effective heuristic metrics improves its performance on this problem. A special algorithm was developed for post-processing the tuberculoma masks obtained during segmentation. The goal of this step is to refine the calculated mask around the physical placement of the tuberculoma. The algorithm cleans the mask of noisy formations on the scan and expands the mask area to maximize capture of the tuberculoma location. A simplified fuzzy inference system was developed to provide a more accurate final calculation of the degree of disease activity, reflecting data from current medical studies. The accuracy of the system was tested on a sample of independent patients, with more than 96% correct calculations of disease activity, confirming the effectiveness and feasibility of introducing the system into clinical practice.
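A fuzzy inference system of the kind described, mapping a densitometric measure to an activity degree, can be sketched with triangular membership functions and weighted (Sugeno-style) defuzzification. All membership ranges, rule outputs, and function names below are invented for illustration; the paper does not publish its rule base:

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a to the peak at b, falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def activity_score(density):
    """Map a tuberculoma density reading to an activity degree in [0, 1].

    Hypothetical rule base: lower density -> more active lesion.
    """
    rules = [  # (membership of this density, activity level of the rule)
        (triangular(density, -100, 0, 100), 0.9),   # "low density": highly active
        (triangular(density, 50, 150, 250), 0.5),   # "medium density": moderately active
        (triangular(density, 200, 300, 400), 0.1),  # "high density": calcified, inactive
    ]
    weight = sum(m for m, _ in rules)
    return sum(m * out for m, out in rules) / weight if weight else 0.0

print(round(activity_score(75), 3))  # partially "low" and "medium" -> 0.7
```

Because a density reading can belong partially to several classes, the output varies smoothly instead of jumping at hard class boundaries, which is the point of using fuzzy inference for the final activity assessment.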
10. DETECT-LC: A 3D Deep Learning and Textural Radiomics Computational Model for Lung Cancer Staging and Tumor Phenotyping Based on Computed Tomography Volumes. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136318]
Abstract
Lung cancer is one of the primary causes of cancer-related deaths worldwide. Timely diagnosis and precise staging are pivotal for treatment planning and can lead to increased survival rates. Advanced machine learning techniques help with effective diagnosis and staging. In this study, a multistage neural computational model, DETECT-LC, is proposed. DETECT-LC handles the challenge of choosing discriminative CT slices for constructing 3D volumes using Haralick features, histogram-based radiomics, and unsupervised clustering. The ALT-CNN-DENSE Net architecture is introduced as part of DETECT-LC for voxel-based classification. DETECT-LC offers an automatic threshold-based segmentation approach instead of the manual procedure, helping to mitigate this burden for radiologists and clinicians. It also presents a slice selection approach and a relatively lightweight 3D CNN architecture to improve on the performance of existing studies. The proposed pipeline is employed for tumor phenotyping and staging. DETECT-LC's performance is assessed through a range of experiments, in which it attains outstanding performance, surpassing its counterparts in accuracy, sensitivity, F1-score, and area under the curve (AUC). For histopathology classification, DETECT-LC's average performance achieved an improvement of 20% in overall accuracy, 0.19 in sensitivity, 0.16 in F1-score, and 0.16 in AUC over the state of the art. A similar enhancement is reached for staging, where higher overall accuracy, sensitivity, and F1-score are attained, with differences of 8%, 0.08, and 0.14.
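The abstract does not state which automatic thresholding algorithm DETECT-LC uses; a common choice for this kind of histogram-driven segmentation is Otsu's method, which picks the threshold that maximizes between-class variance. A generic sketch on toy intensities (not the paper's code):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's automatic threshold: maximize between-class variance over the histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(levels - 1):
        w0 += hist[t]            # pixels at or below candidate threshold t
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mean0, mean1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy intensities: background near 10, lesion near 200
pixels = [10, 12, 11, 9, 10, 200, 198, 201, 199, 202]
print(otsu_threshold(pixels))
```

On clearly bimodal data like this, any threshold between the two modes separates the classes, and the maximization lands inside that gap, replacing a manually chosen cutoff.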
|
11
|
Wang Y, Cai H, Pu Y, Li J, Yang F, Yang C, Chen L, Hu Z. The value of AI in the Diagnosis, Treatment, and Prognosis of Malignant Lung Cancer. Front Radiol 2022; 2:810731. [PMID: 37492685 PMCID: PMC10365105 DOI: 10.3389/fradi.2022.810731] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/07/2021] [Accepted: 03/30/2022] [Indexed: 07/27/2023]
Abstract
Malignant tumors are a serious public health threat. Among them, lung cancer, which has the highest fatality rate globally, significantly endangers human health. With the development of artificial intelligence (AI) and its integration with medicine, AI research on malignant lung tumors has become critical. This article reviews the value of computer-aided diagnosis (CAD), deep learning with neural networks, radiomics, molecular biomarkers, and digital pathology for the diagnosis, treatment, and prognosis of malignant lung tumors.
Affiliation(s)
- Yue Wang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Haihua Cai
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yongzhu Pu
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Jindan Li
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Fake Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Conghui Yang
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Long Chen
- Department of PET/CT Center, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
|
12
|
Zhou Z, Sun J, Yu J, Liu K, Duan J, Chen L, Chen CLP. An Image-Based Benchmark Dataset and a Novel Object Detector for Water Surface Object Detection. Front Neurorobot 2021; 15:723336. [PMID: 34630064 PMCID: PMC8497741 DOI: 10.3389/fnbot.2021.723336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 08/24/2021] [Indexed: 11/13/2022] Open
Abstract
Water surface object detection is one of the most significant tasks in autonomous driving and water surface vision applications. To date, existing public large-scale datasets collected from websites do not focus on specific scenarios, and the number of images and instances they contain remains low. To accelerate the development of water surface autonomous driving, this paper proposes a large-scale, high-quality annotated benchmark dataset, the Water Surface Object Detection Dataset (WSODD), for benchmarking water surface object detection algorithms. The dataset consists of 7,467 water surface images captured in different water environments, climate conditions, and shooting times, comprising 14 common object categories and 21,911 instances, and it focuses on more specific scenarios than prior datasets. To provide a straightforward architecture with good performance on WSODD, a new object detector named CRB-Net is proposed as a baseline. In experiments, CRB-Net was compared with 16 state-of-the-art object detection methods and outperformed all of them in detection precision. The paper further discusses the effects of dataset diversity (e.g., instance size, lighting conditions), training set size, and dataset details (e.g., method of categorization). Cross-dataset validation shows that WSODD yields significantly better results than other relevant datasets and that CRB-Net adapts well across them.
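Detection precision of the kind used to compare CRB-Net against other detectors is conventionally computed by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). The abstract does not specify the matching protocol; this is a minimal greedy-matching sketch under the common IoU >= 0.5 criterion, not the WSODD evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_at_iou(preds, gts, thresh=0.5):
    """Fraction of predictions that match a distinct ground-truth box at IoU >= thresh."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, 0.0
        for idx, g in enumerate(gts):
            if idx in matched:
                continue  # each ground-truth box may be matched at most once
            v = iou(p, g)
            if v > best_iou:
                best, best_iou = idx, v
        if best is not None and best_iou >= thresh:
            matched.add(best)
            tp += 1
    return tp / len(preds) if preds else 0.0
```

Full benchmarks extend this per-image count to precision-recall curves and mean average precision over confidence thresholds; the greedy one-to-one matching above is the core step.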
Affiliation(s)
- Zhiguo Zhou
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Jiaen Sun
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Jiabao Yu
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Kaiyuan Liu
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
| | - Junwei Duan
- College of Information Science and Technology, Jinan University, Guangzhou, China
| | - Long Chen
- Faculty of Science and Technology, University of Macau, Taipa, Macau, SAR China
| | - C L Philip Chen
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
|
13
|
Zoetmulder R, Konduri PR, Obdeijn IV, Gavves E, Išgum I, Majoie CB, Dippel DW, Roos YB, Goyal M, Mitchell PJ, Campbell BCV, Lopes DK, Reimann G, Jovin TG, Saver JL, Muir KW, White P, Bracard S, Chen B, Brown S, Schonewille WJ, van der Hoeven E, Puetz V, Marquering HA. Automated Final Lesion Segmentation in Posterior Circulation Acute Ischemic Stroke Using Deep Learning. Diagnostics (Basel) 2021; 11:1621. [PMID: 34573963 PMCID: PMC8466415 DOI: 10.3390/diagnostics11091621] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 08/25/2021] [Accepted: 08/30/2021] [Indexed: 11/17/2022] Open
Abstract
Final lesion volume (FLV) is a surrogate outcome measure in anterior circulation stroke (ACS). In posterior circulation stroke (PCS), this relation is understudied, plausibly due to a lack of methods that automatically quantify FLV. The applicability of deep learning approaches to PCS is limited by its lower incidence compared to ACS. We evaluated strategies for developing a convolutional neural network (CNN) for PCS lesion segmentation using image data from both ACS and PCS patients. We included follow-up non-contrast computed tomography scans of 1018 patients with ACS and 107 patients with PCS. First, to assess whether an ACS lesion segmentation model generalizes to PCS, a CNN was trained on ACS data (ACS-CNN). Second, to evaluate the performance of including only PCS patients, a CNN was trained on PCS data. Third, to evaluate the performance of combining the datasets, a CNN was trained on both. Finally, to evaluate transfer learning, the ACS-CNN was fine-tuned on PCS patients. The transfer learning strategy outperformed the other strategies in volume agreement, with an intra-class correlation of 0.88 (95% CI: 0.83-0.92) vs. 0.55-0.83, and in lesion detection rate, 87% vs. 41-77%. Hence, transfer learning improved FLV quantification and the detection rate of PCS lesions compared to the other strategies.
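The winning strategy above starts from weights learned on the larger ACS cohort and takes additional gradient steps on the small PCS set. The authors fine-tune a CNN; as a stand-in, the following NumPy sketch fine-tunes a logistic classifier from imperfect "pretrained" weights on a small target set, illustrating the mechanism only (all data and dimensions here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, X, y):
    """Binary cross-entropy of a logistic model with weights w."""
    p = sigmoid(X @ w)
    eps = 1e-9  # guard against log(0)
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def fine_tune(w_pretrained, X, y, lr=0.2, steps=100):
    """Start from pretrained weights; take a few gradient steps on the small target set."""
    w = w_pretrained.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)  # gradient of the BCE loss
        w -= lr * grad
    return w
```

In the real setting the source task supplies a far better initialization than random weights would, which is why fine-tuning can succeed with only 107 target patients; deep-learning fine-tuning additionally often freezes early layers, a detail this linear sketch cannot show.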
Affiliation(s)
- Riaan Zoetmulder
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands; (R.Z.); (P.R.K.); (I.V.O.); (I.I.)
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
- Informatics Institute, University of Amsterdam, 1097 Amsterdam, The Netherlands;
| | - Praneeta R. Konduri
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands; (R.Z.); (P.R.K.); (I.V.O.); (I.I.)
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
| | - Iris V. Obdeijn
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands; (R.Z.); (P.R.K.); (I.V.O.); (I.I.)
| | - Efstratios Gavves
- Informatics Institute, University of Amsterdam, 1097 Amsterdam, The Netherlands;
| | - Ivana Išgum
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands; (R.Z.); (P.R.K.); (I.V.O.); (I.I.)
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
- Informatics Institute, University of Amsterdam, 1097 Amsterdam, The Netherlands;
| | - Charles B.L.M. Majoie
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
| | - Diederik W.J. Dippel
- Department of Neurology, Erasmus MC University Medical Center, 3015 Rotterdam, The Netherlands;
| | - Yvo B.W.E.M. Roos
- Department of Neurology, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
| | - Mayank Goyal
- Radiology, Foothills Medical Centre, University of Calgary, Calgary, AB T2N 2T9, Canada;
- Department of Clinical Neurosciences, Hotchkiss Brain Institute, University of Calgary, Calgary, AB T2N 4N1, Canada
| | - Peter J. Mitchell
- Department of Radiology, The University of Melbourne & The Royal Melbourne Hospital, Melbourne, VIC 3050, Australia;
| | - Bruce C. V. Campbell
- Department of Medicine and Neurology, Melbourne Brain Centre at the Royal Melbourne Hospital, University of Melbourne, Melbourne, VIC 3052, Australia;
| | - Demetrius K. Lopes
- Department of Neurological Surgery, Rush University Medical Center, Chicago, IL 60612, USA;
| | - Gernot Reimann
- Department of Neurology, Community Hospital Klinikum Dortmund, 44137 Dortmund, Germany;
| | - Tudor G. Jovin
- Cooper Neurological Institute, Cooper University Medical Center, Camden, NJ 08103, USA;
| | - Jeffrey L. Saver
- Department of Neurology and Comprehensive Stroke Center, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA;
| | - Keith W. Muir
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK;
| | - Phil White
- Translational and Clinical Research Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK;
- Department of Neuroradiology, Newcastle upon Tyne Hospitals, Newcastle upon Tyne NE1 4LP, UK
| | - Serge Bracard
- INSERM U1254, IADI, University Hospital, Neuroradiology, 54511 Nancy, France;
| | - Bailiang Chen
- INSERM CIC-IT 1433, University Hospital, 54511 Nancy, France;
| | - Scott Brown
- Altair Biostatistics, St Louis Park, MN 55416, USA;
| | - Erik van der Hoeven
- Department of Radiology, St. Antonius Hospital, P.O. Box 2500, 3430 Nieuwegein, The Netherlands;
| | - Volker Puetz
- Department of Neurology, Dresden University Stroke Centre, Technical University Dresden, Fetscherstraße 74, 01307 Dresden, Germany;
| | - Henk A. Marquering
- Department of Biomedical Engineering and Physics, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands; (R.Z.); (P.R.K.); (I.V.O.); (I.I.)
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, 1105 Amsterdam, The Netherlands;
|