1. Cai L, Chen L, Huang J, Wang Y, Zhang Y. Know your orientation: A viewpoint-aware framework for polyp segmentation. Med Image Anal 2024;97:103288. [PMID: 39096844] [DOI: 10.1016/j.media.2024.103288]
Abstract
Automatic polyp segmentation in endoscopic images is critical for the early diagnosis of colorectal cancer. Despite the availability of powerful segmentation models, two challenges still impede the accuracy of polyp segmentation algorithms. Firstly, during a colonoscopy, physicians frequently adjust the orientation of the colonoscope tip to capture underlying lesions, resulting in viewpoint changes in the colonoscopy images. These variations increase the diversity of polyp visual appearance, posing a challenge for learning robust polyp features. Secondly, polyps often exhibit properties similar to the surrounding tissues, leading to indistinct polyp boundaries. To address these problems, we propose a viewpoint-aware framework named VANet for precise polyp segmentation. In VANet, polyps are emphasized as a discriminative feature and thus can be localized by class activation maps in a viewpoint classification process. With these polyp locations, we design a viewpoint-aware Transformer (VAFormer) to alleviate the erosion of attention by the surrounding tissues, thereby inducing better polyp representations. Additionally, to enhance the polyp boundary perception of the network, we develop a boundary-aware Transformer (BAFormer) to encourage self-attention towards uncertain regions. As a consequence, the combination of the two modules is capable of calibrating predictions and significantly improving polyp segmentation performance. Extensive experiments on seven public datasets across six metrics demonstrate the state-of-the-art results of our method, and VANet can handle colonoscopy images in real-world scenarios effectively. The source code is available at https://github.com/1024803482/Viewpoint-Aware-Network.
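The core idea, using a polyp-location prior from viewpoint classification to keep surrounding tissue from eroding attention, can be sketched as follows. This is an illustrative reading, not the authors' VAFormer: the module name, head count, and additive log-prior bias are assumptions.

```python
# Hypothetical sketch: biasing self-attention with a CAM-derived polyp prior.
import torch
import torch.nn as nn

class PriorBiasedAttention(nn.Module):
    """Self-attention whose logits are biased by a spatial prior (e.g. a CAM)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, prior):
        # x: (B, N, C) flattened feature tokens; prior: (B, N), values in [0, 1]
        B, N, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.heads, -1).transpose(1, 2)
        k = k.view(B, N, self.heads, -1).transpose(1, 2)
        v = v.view(B, N, self.heads, -1).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        # Add the log-prior so tokens inside the suspected polyp region
        # receive more attention mass; eps avoids log(0) on empty regions.
        attn = attn + torch.log(prior + 1e-6)[:, None, None, :]
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

tokens = torch.randn(2, 196, 64)  # a 14x14 feature map as 196 tokens
cam = torch.rand(2, 196)          # localization map from a viewpoint classifier
print(PriorBiasedAttention(64)(tokens, cam).shape)  # torch.Size([2, 196, 64])
```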
Affiliation(s)
- Linghan Cai
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China; Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China
- Lijiang Chen
- Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China
- Jianhao Huang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yifeng Wang
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
2. Xu W, Xu R, Wang C, Li X, Xu S, Guo L. PSTNet: Enhanced Polyp Segmentation With Multi-Scale Alignment and Frequency Domain Integration. IEEE J Biomed Health Inform 2024;28:6042-6053. [PMID: 38954569] [DOI: 10.1109/jbhi.2024.3421550]
Abstract
Accurate segmentation of colorectal polyps in colonoscopy images is crucial for effective diagnosis and management of colorectal cancer (CRC). However, current deep learning-based methods primarily rely on fusing RGB information across multiple scales, leading to limitations in accurately identifying polyps due to restricted RGB domain information and challenges in feature misalignment during multi-scale aggregation. To address these limitations, we propose the Polyp Segmentation Network with Shunted Transformer (PSTNet), a novel approach that integrates both RGB and frequency domain cues present in the images. PSTNet comprises three key modules: the Frequency Characterization Attention Module (FCAM) for extracting frequency cues and capturing polyp characteristics, the Feature Supplementary Alignment Module (FSAM) for aligning semantic information and reducing misalignment noise, and the Cross Perception Localization Module (CPM) for synergizing frequency cues with high-level semantics to achieve efficient polyp segmentation. Extensive experiments on challenging datasets demonstrate PSTNet's significant improvement in polyp segmentation accuracy across various metrics, consistently outperforming state-of-the-art methods. The integration of frequency domain cues and the novel architectural design of PSTNet contribute to advancing computer-assisted polyp segmentation, facilitating more accurate diagnosis and management of CRC.
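As a minimal sketch of how frequency-domain cues can complement RGB features, the block below derives a gating map from the Fourier amplitude spectrum of a feature map. It illustrates the general idea only; the module name and design are assumptions, not the paper's FCAM.

```python
# Illustrative frequency-cue gating (an assumption, not the paper's FCAM).
import torch
import torch.nn as nn

class FrequencyAttention(nn.Module):
    """Gates spatial features with a map derived from their Fourier amplitude."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x):
        # Amplitude spectrum over the spatial dimensions of each channel.
        amp = torch.abs(torch.fft.fft2(x, norm="ortho"))
        # Project the (phase-free) amplitude back to the spatial domain.
        freq_cue = torch.fft.ifft2(amp, norm="ortho").real
        return x * self.gate(freq_cue)

feats = torch.randn(2, 32, 64, 64)
print(FrequencyAttention(32)(feats).shape)  # torch.Size([2, 32, 64, 64])
```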
3. Tudela Y, Majó M, de la Fuente N, Galdran A, Krenzer A, Puppe F, Yamlahi A, Tran TN, Matuszewski BJ, Fitzgerald K, Bian C, Pan J, Liu S, Fernández-Esparrach G, Histace A, Bernal J. A complete benchmark for polyp detection, segmentation and classification in colonoscopy images. Front Oncol 2024;14:1417862. [PMID: 39381041] [PMCID: PMC11458519] [DOI: 10.3389/fonc.2024.1417862]
Abstract
Introduction: Colorectal cancer (CRC) is one of the main causes of death worldwide. Early detection and diagnosis of its precursor lesion, the polyp, is key to reducing its mortality and improving procedure efficiency. During the last two decades, several computational methods have been proposed to assist clinicians in detection, segmentation and classification tasks, but the lack of a common public validation framework makes it difficult to determine which of them is ready to be deployed in the procedure room. Methods: This study presents a complete validation framework and compares several methodologies for each of the polyp characterization tasks. Results: The majority of the approaches provide good performance for the detection and segmentation tasks, but there is room for improvement in polyp classification. Discussion: While the studied methods show promising results for assisting with polyp detection and segmentation, further research on the classification task is needed before they can reliably assist clinicians during the procedure. The presented framework provides a standardized method for evaluating and comparing different approaches, which could facilitate the identification of clinically ready assistance methods.
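For reference, the two segmentation metrics such benchmarks typically report, Dice and IoU, reduce to a few lines; this is a generic sketch, not the benchmark's own evaluation code.

```python
# Generic Dice / IoU computation on binary masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    return (inter + eps) / (np.logical_or(pred, gt).sum() + eps)

pred = np.zeros((256, 256), bool); pred[50:150, 50:150] = True
gt = np.zeros((256, 256), bool); gt[60:160, 60:160] = True
print(f"Dice={dice(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```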
Affiliation(s)
- Yael Tudela
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Mireia Majó
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Neil de la Fuente
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
- Adrian Galdran
- Department of Information and Communication Technologies, SymBioSys Research Group, BCNMedTech, Barcelona, Spain
- Adrian Krenzer
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Frank Puppe
- Artificial Intelligence and Knowledge Systems, Institute for Computer Science, Julius-Maximilians University of Würzburg, Würzburg, Germany
- Amine Yamlahi
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thuy Nuong Tran
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Bogdan J. Matuszewski
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Kerr Fitzgerald
- Computer Vision and Machine Learning (CVML) Research Group, University of Central Lancashire (UCLan), Preston, United Kingdom
- Cheng Bian
- Hebei University of Technology, Baoding, China
- Shijie Liu
- Hebei University of Technology, Baoding, China
- Aymeric Histace
- ETIS UMR 8051, École Nationale Supérieure de l'Électronique et de ses Applications (ENSEA), Centre National de la Recherche Scientifique (CNRS), CY Cergy Paris University, Cergy, France
- Jorge Bernal
- Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona, Cerdanyola del Vallès, Spain
4. Lin M, Lan Q, Huang C, Yang B, Yu Y. Wavelet-based U-shape network for bioabsorbable vascular stents segmentation in IVOCT images. Front Physiol 2024;15:1454835. [PMID: 39210969] [PMCID: PMC11358552] [DOI: 10.3389/fphys.2024.1454835]
Abstract
Background and Objective: Coronary artery disease remains a leading cause of mortality among individuals with cardiovascular conditions. The therapeutic use of bioresorbable vascular scaffolds (BVSs) through stent implantation is common, yet current techniques for segmenting BVSs from Intravascular Optical Coherence Tomography (IVOCT) images remain inadequate. Methods: This paper introduces a novel Wavelet-based U-shape network that incorporates an Attention Gate (AG) and an Atrous Multi-scale Field Module (AMFM), designed to improve segmentation accuracy by sharpening the differentiation between stent struts and the surrounding tissue. A unique wavelet fusion module mitigates the semantic gaps between different feature-map branches, facilitating more effective feature integration. Results: Extensive experiments demonstrate that our model surpasses existing techniques in key metrics such as Dice coefficient, accuracy, sensitivity, and Intersection over Union (IoU), achieving scores of 85.10%, 99.77%, 86.93%, and 73.81%, respectively. The integration of the AG, the AMFM, and the fusion module played a crucial role in these outcomes, indicating a significant enhancement in capturing detailed contextual information. Conclusion: The Wavelet-based U-shape network marks a substantial improvement in the segmentation of BVSs in IVOCT images, suggesting potential benefits for clinical practice in coronary artery disease treatment. The approach may also be applicable to other intricate medical imaging segmentation tasks, indicating a broad scope for future research.
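As background for the wavelet components, a single-level 2D Haar DWT can be written directly with tensor slicing, splitting a feature map into one low-frequency and three high-frequency half-resolution sub-bands. This generic sketch is not the paper's wavelet fusion module.

```python
# Single-level 2D Haar discrete wavelet transform (generic sketch).
import torch

def haar_dwt(x: torch.Tensor):
    """Returns (LL, LH, HL, HH) sub-bands, each at half spatial resolution."""
    a = x[..., 0::2, 0::2]  # even rows, even cols
    b = x[..., 0::2, 1::2]  # even rows, odd cols
    c = x[..., 1::2, 0::2]  # odd rows, even cols
    d = x[..., 1::2, 1::2]  # odd rows, odd cols
    ll = (a + b + c + d) / 2  # low-frequency approximation
    lh = (a + b - c - d) / 2  # horizontal detail
    hl = (a - b + c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return ll, lh, hl, hh

img = torch.randn(1, 3, 128, 128)
ll, lh, hl, hh = haar_dwt(img)
print(ll.shape)  # torch.Size([1, 3, 64, 64])
```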
Affiliation(s)
- Mingfeng Lin
- Henan Key Laboratory of Cardiac Remodeling and Transplantation, Zhengzhou Seventh People's Hospital, Zhengzhou, China; School of Informatics, Xiamen University, Xiamen, China
- Quan Lan
- Department of Neurology and Department of Neuroscience, The First Affiliated Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Fujian Key Laboratory of Brain Tumors Diagnosis and Precision Treatment, Xiamen, China
- Chenxi Huang
- School of Informatics, Xiamen University, Xiamen, China
- Bin Yang
- Henan Key Laboratory of Cardiac Remodeling and Transplantation, Zhengzhou Seventh People's Hospital, Zhengzhou, China
- Yuexin Yu
- Henan Key Laboratory of Cardiac Remodeling and Transplantation, Zhengzhou Seventh People's Hospital, Zhengzhou, China
5. Hussein A, Youssef S, Ahmed MA, Ghatwary N. MGB-Unet: An improved multiscale Unet with bottleneck transformer for myositis segmentation from ultrasound images. J Imaging Inform Med 2024. [PMID: 39037670] [DOI: 10.1007/s10278-024-01168-w]
Abstract
Myositis is inflammation of the muscles; it can arise from various sources, presents with diverse symptoms, and requires different treatments. For treatment to achieve optimal results, a prompt and accurate diagnosis is essential. This paper presents a new supervised segmentation architecture that can perform precise segmentation and classification of myositis from ultrasound images with modest computational resources. The architecture includes a unique encoder-decoder structure that integrates the Bottleneck Transformer (BOT) with a newly developed residual block named the Multi-Conv Ghost switchable bottleneck Residual Block (MCG-RB). This block effectively captures and analyzes the ultrasound input at several resolutions inside the encoder segment. The BOT module is a transformer-style attention module designed to bridge the feature gap between the encoding and decoding stages. Furthermore, multi-level features are retrieved using the MCG-RB module, which combines multi-convolution with ghost switchable residual connections of convolutions for both the encoding and decoding stages. The suggested method attains state-of-the-art performance on a benchmark set of myositis ultrasound images across all metrics, including accuracy, precision, recall, Dice coefficient, and Jaccard index. Despite its limited training data, the suggested approach demonstrates remarkable generalizability. Compared with state-of-the-art segmentation methods, the proposed model improved the Dice coefficient and Jaccard index by up to 3%, 6%, and 7% over Unet++, DeepLabV3, and Duck-Net, respectively.
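The "ghost" ingredient of the MCG-RB block can be sketched in the spirit of GhostNet: half the output channels come from a regular convolution, the rest from a cheap depthwise convolution on those primary features. The layer sizes here are illustrative assumptions, not the authors' design.

```python
# Ghost-style convolution sketch (GhostNet recipe; not the exact MCG-RB block).
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Sequential(  # regular conv for half the channels
            nn.Conv2d(in_ch, primary, 3, padding=1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(    # cheap depthwise conv for the rest
            nn.Conv2d(primary, out_ch - primary, 3, padding=1,
                      groups=primary, bias=False),
            nn.BatchNorm2d(out_ch - primary), nn.ReLU(inplace=True))

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

print(GhostConv(32, 64)(torch.randn(1, 32, 56, 56)).shape)  # (1, 64, 56, 56)
```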
Affiliation(s)
- Allaa Hussein
- Computer Engineering, Pharos University, Alexandria, Egypt
- Sherin Youssef
- Computer Engineering, Arab Academy for Science and Technology, Alexandria, Egypt
- Magdy A Ahmed
- Computer Engineering, Faculty of Engineering, Alexandria University, Alexandria, Egypt
- Noha Ghatwary
- Computer Engineering, Arab Academy for Science and Technology, Alexandria, Egypt
6. Li C, Mao Y, Liang S, Li J, Wang Y, Guo Y. Deep causal learning for pancreatic cancer segmentation in CT sequences. Neural Netw 2024;175:106294. [PMID: 38657562] [DOI: 10.1016/j.neunet.2024.106294]
Abstract
Segmenting the irregular pancreas and its inconspicuous tumor simultaneously is an essential but challenging step in diagnosing pancreatic cancer. Current deep-learning (DL) methods usually segment the pancreas or tumor independently using mixed image features, which are disrupted by surrounding complex and low-contrast background tissues. Here, we propose a deep causal learning framework named CausegNet for pancreas and tumor co-segmentation in 3D CT sequences. Specifically, a causality-aware module and a counterfactual loss are employed to enhance the DL network's comprehension of the anatomical causal relationship between the foreground elements (pancreas and tumor) and the background. By integrating causality into CausegNet, the network focuses solely on extracting intrinsic foreground causal features while effectively learning the potential causality between the pancreas and the tumor. Then, based on the extracted causal features, CausegNet applies counterfactual inference to significantly reduce background interference and sequentially searches for the pancreas and tumor in the foreground. Consequently, our approach can handle deformable pancreases and obscure tumors, resulting in superior co-segmentation performance on both public and real clinical datasets, achieving the highest pancreas/tumor Dice coefficients of 86.67%/84.28%. The visualized features and anti-noise experiments further demonstrate the causal interpretability and stability of our method. Furthermore, our approach improves the accuracy and sensitivity of the downstream pancreatic cancer risk assessment task by 12.50% and 50.00%, respectively, compared to experienced clinicians, indicating promising clinical applications.
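The counterfactual idea can be illustrated with a toy penalty: ablate the foreground and require the prediction to change, so that the foreground, not the background, is the causal driver of the output. This is a hedged reading of the general principle, not CausegNet's actual counterfactual loss.

```python
# Toy counterfactual-style penalty (illustrative; not CausegNet's loss).
import torch
import torch.nn.functional as F

def counterfactual_penalty(model, image, fg_mask):
    """Penalize the model if removing the foreground does NOT change its
    prediction, i.e. reward outputs that causally depend on the foreground."""
    factual = torch.sigmoid(model(image))
    counterfactual = torch.sigmoid(model(image * (1 - fg_mask)))  # foreground ablated
    # Maximizing the factual/counterfactual gap = minimizing its negative.
    return -F.l1_loss(factual, counterfactual)

model = torch.nn.Conv2d(1, 1, 3, padding=1)      # stand-in segmenter
img = torch.randn(2, 1, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.8).float()  # hypothetical foreground mask
print(counterfactual_penalty(model, img, mask).item())
```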
Affiliation(s)
- Chengkang Li
- School of Information Science and Technology, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yishen Mao
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Shuyu Liang
- School of Information Science and Technology, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Ji Li
- Department of Pancreatic Surgery, Pancreatic Disease Institute, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- Yuanyuan Wang
- School of Information Science and Technology, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
- Yi Guo
- School of Information Science and Technology, Fudan University, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Shanghai 200032, China
7. Schäfer R, Nicke T, Höfener H, Lange A, Merhof D, Feuerhake F, Schulz V, Lotz J, Kiessling F. Overcoming data scarcity in biomedical imaging with a foundational multi-task model. Nat Comput Sci 2024;4:495-509. [PMID: 39030386] [PMCID: PMC11288886] [DOI: 10.1038/s43588-024-00662-z]
Abstract
Foundational models, pretrained on a large scale, have demonstrated substantial success across non-medical domains. However, training these models typically requires large, comprehensive datasets, which contrasts with the smaller and more specialized datasets common in biomedical imaging. Here we propose a multi-task learning strategy that decouples the number of training tasks from memory requirements. We trained a universal biomedical pretrained model (UMedPT) on a multi-task database including tomographic, microscopic and X-ray images, with various labeling strategies such as classification, segmentation and object detection. The UMedPT foundational model outperformed ImageNet pretraining and previous state-of-the-art models. For classification tasks related to the pretraining database, it maintained its performance with only 1% of the original training data and without fine-tuning. For out-of-domain tasks it required only 50% of the original training data. In an external independent validation, imaging features extracted using UMedPT proved to set a new standard for cross-center transferability.
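A common way to decouple the number of training tasks from memory requirements, as the abstract describes, is to process tasks sequentially and accumulate gradients before a single optimizer step, so that peak memory scales with one task's batch rather than with the task count. The sketch below shows this generic pattern with toy heads; it is not the UMedPT training code.

```python
# Task-sequential gradient accumulation (generic multi-task pattern).
import torch

def multitask_step(encoder, heads, batches, optimizer):
    """One shared-encoder update over several tasks, one task in memory at a time."""
    optimizer.zero_grad()
    for task, (x, y) in batches.items():
        loss = torch.nn.functional.mse_loss(heads[task](encoder(x)), y)
        (loss / len(batches)).backward()  # frees this task's graph immediately
    optimizer.step()

encoder = torch.nn.Linear(16, 8)  # stand-in shared encoder
heads = {"seg": torch.nn.Linear(8, 4), "cls": torch.nn.Linear(8, 2)}
params = list(encoder.parameters()) + [p for h in heads.values() for p in h.parameters()]
batches = {"seg": (torch.randn(4, 16), torch.randn(4, 4)),
           "cls": (torch.randn(4, 16), torch.randn(4, 2))}
multitask_step(encoder, heads, batches, torch.optim.Adam(params, lr=1e-3))
```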
Affiliation(s)
- Raphael Schäfer
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Till Nicke
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Henning Höfener
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Annkristin Lange
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Dorit Merhof
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Institute of Image Analysis and Computer Vision, Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
- Friedrich Feuerhake
- Institute for Pathology, Hannover Medical School, Hanover, Germany; Institute for Neuropathology, Medical Center, University of Freiburg, Freiburg, Germany
- Volkmar Schulz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
- Johannes Lotz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Fabian Kiessling
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
8. Wang H, Hu T, Zhang Y, Zhang H, Qi Y, Wang L, Ma J, Du M. Unveiling camouflaged and partially occluded colorectal polyps: Introducing CPSNet for accurate colon polyp segmentation. Comput Biol Med 2024;171:108186. [PMID: 38394804] [DOI: 10.1016/j.compbiomed.2024.108186]
Abstract
BACKGROUND: Segmenting colorectal polyps presents a significant challenge due to the diverse variations in their size, shape, texture, and intricate backgrounds. Particularly demanding are the so-called "camouflaged" polyps, which are partially concealed by surrounding tissues or fluids, adding complexity to their detection. METHODS: We present CPSNet, an innovative model designed for camouflaged polyp segmentation. CPSNet incorporates three key modules: the Deep Multi-Scale Feature Fusion Module, the Camouflaged Object Detection Module, and the Multi-Scale Feature Enhancement Module. These modules work collaboratively to improve the segmentation process, enhancing both robustness and accuracy. RESULTS: Our experiments confirm the effectiveness of CPSNet. When compared to state-of-the-art methods in colon polyp segmentation, CPSNet consistently outperforms the competition. Particularly noteworthy is its performance on the ETIS-LaribPolypDB dataset, where it achieved a 2.3% increase in the Dice coefficient over the Polyp-PVT model. CONCLUSION: CPSNet marks a significant advancement in colorectal polyp segmentation. Its combination of multi-scale feature fusion, camouflaged object detection, and feature enhancement holds considerable promise for clinical applications.
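Generic multi-scale feature fusion, the backbone idea behind the Deep Multi-Scale Feature Fusion Module, can be sketched as projecting each pyramid level to a common width, upsampling to the finest resolution, and fusing. The channel sizes and fusion scheme below are assumptions, not CPSNet's implementation.

```python
# Generic multi-scale feature fusion sketch (not CPSNet's exact module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    def __init__(self, level_channels, out_ch):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in level_channels)
        self.fuse = nn.Conv2d(out_ch * len(level_channels), out_ch, 3, padding=1)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # finest spatial size
        ups = [F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
               for p, f in zip(self.proj, feats)]
        return self.fuse(torch.cat(ups, dim=1))

feats = [torch.randn(1, 64, 88, 88),   # fine level
         torch.randn(1, 128, 44, 44),  # middle level
         torch.randn(1, 320, 22, 22)]  # coarse level
print(MultiScaleFusion([64, 128, 320], 64)(feats).shape)  # (1, 64, 88, 88)
```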
Affiliation(s)
- Huafeng Wang
- School of Information Technology, North China University of Technology, Beijing 100041, China
- Tianyu Hu
- School of Information Technology, North China University of Technology, Beijing 100041, China
- Yanan Zhang
- School of Information Technology, North China University of Technology, Beijing 100041, China
- Haodu Zhang
- School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510335, China
- Yong Qi
- School of Information Technology, North China University of Technology, Beijing 100041, China
- Longzhen Wang
- Department of Gastroenterology, Second People's Hospital, Changzhi, Shanxi 046000, China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510335, China
- Minghua Du
- Department of Emergency, PLA General Hospital, Beijing 100853, China
9. Kato S, Hotta K. Adaptive t-vMF Dice loss: An effective expansion of Dice loss for medical image segmentation. Comput Biol Med 2024;168:107695. [PMID: 38061152] [DOI: 10.1016/j.compbiomed.2023.107695]
Abstract
Dice loss is widely used for medical image segmentation, and many improved loss functions have been proposed. However, further improvements to Dice loss are still possible. In this study, we reconsidered Dice loss and discovered that, through a simple algebraic transformation, it can be rewritten as a loss function based on cosine similarity. Using this insight, we present a novel t-vMF Dice loss based on the t-vMF similarity instead of cosine similarity. Built on the t-vMF similarity, our proposed loss is formulated as a more compact similarity loss than the original Dice loss. Furthermore, we present an effective algorithm, called Adaptive t-vMF Dice loss, that automatically determines the parameter κ of the t-vMF similarity from validation accuracy. With this algorithm, more compact similarities can be applied to easy classes and wider similarities to difficult classes, enabling adaptive training based on the accuracy of each class. We evaluated the binary segmentation datasets CVC-ClinicDB and Kvasir-SEG, and the multi-class segmentation datasets of the Automated Cardiac Diagnosis Challenge and Synapse multi-organ segmentation. Through experiments on these four datasets using five-fold cross-validation, we confirmed that the Dice similarity coefficient (DSC) improved in comparison with the original Dice loss and other loss functions.
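Reading the abstract, the loss can be sketched as follows: flatten the soft masks, compute their cosine similarity, replace it with the t-vMF similarity φκ(cos θ) = (1 + cos θ) / (1 + κ(1 − cos θ)) − 1 (which reduces to cos θ at κ = 0 and becomes more compact as κ grows), and turn the result into a loss. The (1 − φ)² composition and the fixed κ below are assumptions; the paper's adaptive algorithm instead tunes κ per class from validation accuracy.

```python
# Sketch of a t-vMF Dice-style loss (composition and kappa are assumptions).
import torch

def tvmf_dice_loss(pred, target, kappa=16.0, eps=1e-7):
    p = pred.flatten(1)    # (B, N) soft prediction
    t = target.flatten(1)  # (B, N) ground-truth mask
    cos = (p * t).sum(1) / (p.norm(dim=1) * t.norm(dim=1) + eps)
    tvmf = (1 + cos) / (1 + kappa * (1 - cos)) - 1  # compact similarity in [-1, 1]
    return ((1 - tvmf) ** 2).mean()

pred = torch.rand(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(tvmf_dice_loss(pred, target).item())
```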
Affiliation(s)
- Sota Kato
- Department of Electrical, Information and Materials Engineering, Meijo University, Tempaku-ku, Nagoya, 468-8502, Aichi, Japan
- Kazuhiro Hotta
- Department of Electrical and Electronic Engineering, Meijo University, Nagoya, Japan
10. Fang K, Zheng X, Lin X, Dai Z. Unveiling osteoporosis through radiomics analysis of hip CT imaging. Acad Radiol 2023:S1076-6332(23)00544-5. [PMID: 39492007] [DOI: 10.1016/j.acra.2023.10.009]
Abstract
RATIONALE AND OBJECTIVES: This study investigates the use of radiomics analysis of hip CT imaging to unveil osteoporosis. MATERIALS AND METHODS: We analyzed hip CT scans from a cohort including both osteoporotic and healthy individuals. Radiomics techniques were employed to extract a comprehensive array of features from these images, encompassing texture, shape, and intensity alterations. The ten most commonly used machine learning models were trained on the screened radiomics features to detect osteoporosis. In addition to radiomics features, basic patient information was also used as training data, and the recognition efficiency of the radiomics features was compared with that of the basic patient information. The best-performing machine learning model was then chosen to integrate patient information and radiomics features for the development of a clinical nomogram. RESULTS: After a thorough screening process, 16 radiomics features were selected as input parameters for the machine learning models. In the test group, the highest accuracy achieved using radiomics features was 0.849, with an area under the curve (AUC) of 0.919. Evaluation of clinical features identified age and gender as closely associated with osteoporosis; among the models using these features, the KNN model exhibited the highest accuracy of 0.731 and an AUC of 0.658 in the test group. Comparing the two feature sets, radiomics features demonstrated superior AUC values across the machine learning models. Ultimately, the XGBoost model, utilizing both radiomics and clinical features, was selected as the final nomogram prediction model; in the test group, it achieved an accuracy of 0.882 and an AUC of 0.886 in screening for osteoporosis. CONCLUSION: Radiomics features derived from hip CT scans exhibit strong screening capability for osteoporosis. Furthermore, when combined with easily obtainable clinical features such as patient age and gender, effective screening for osteoporosis can be achieved.
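The described pipeline, selected radiomics features concatenated with age and gender and fed to a boosted-tree classifier, can be illustrated as below. scikit-learn's GradientBoostingClassifier stands in for XGBoost, and all data are synthetic, generated purely for demonstration.

```python
# Toy radiomics + clinical-feature classifier on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
radiomics = rng.normal(size=(n, 16))  # 16 selected radiomics features
age = rng.integers(40, 90, size=(n, 1)).astype(float)
sex = rng.integers(0, 2, size=(n, 1)).astype(float)
X = np.hstack([radiomics, age, sex])
# Synthetic label loosely driven by one radiomics feature and age.
y = (radiomics[:, 0] + 0.03 * age[:, 0] + rng.normal(size=n) > 2.2).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
print("test AUC:", round(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]), 3))
```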
Affiliation(s)
- Kaibin Fang
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, No. 34, Zhongshanbeilu, Quanzhou, 362000, China
- Xiaoling Zheng
- Liming Vocational University, Quanzhou, 362000, China
- Xiaocong Lin
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, No. 34, Zhongshanbeilu, Quanzhou, 362000, China
- Zhangsheng Dai
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, No. 34, Zhongshanbeilu, Quanzhou, 362000, China
11. Ahamed MF, Syfullah MK, Sarkar O, Islam MT, Nahiduzzaman M, Islam MR, Khandakar A, Ayari MA, Chowdhury MEH. IRv2-Net: A deep learning framework for enhanced polyp segmentation performance integrating InceptionResNetV2 and UNet architecture with test time augmentation techniques. Sensors (Basel) 2023;23:7724. [PMID: 37765780] [PMCID: PMC10534485] [DOI: 10.3390/s23187724]
Abstract
Colorectal polyps in the colon or rectum are precancerous growths that can lead to a more severe disease called colorectal cancer. Accurate segmentation of polyps using medical imaging data is essential for effective diagnosis. However, manual segmentation by endoscopists can be time-consuming, error-prone, and expensive, leading to a high rate of missed anomalies. To solve this problem, an automated diagnostic system based on deep learning algorithms is proposed to find polyps. The proposed IRv2-Net model is developed using the UNet architecture with a pre-trained InceptionResNetV2 encoder to extract most features from the input samples. The Test Time Augmentation (TTA) technique, which utilizes the original image together with its horizontal and vertical flips, is used to gain precise boundary information and multi-scale image features. The performance of numerous state-of-the-art (SOTA) models is compared using several metrics, such as accuracy, Dice Similarity Coefficient (DSC), Intersection over Union (IoU), precision, and recall. The proposed model is tested on the Kvasir-SEG and CVC-ClinicDB datasets, demonstrating superior performance in handling unseen real-time data. It achieves the highest area under the Receiver Operating Characteristic (ROC-AUC) and Precision-Recall (AUC-PR) curves. The model exhibits excellent qualitative results across different types of polyps, including larger, smaller, over-saturated, sessile, and flat polyps, both within the same dataset and across different datasets. Our approach can significantly reduce the number of missed anomalies. Lastly, a graphical interface is developed for producing the mask in real time. The findings of this study have potential applications in clinical colonoscopy procedures and can serve as a basis for further research and development.
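Flip-based TTA as described, averaging predictions over the original image and its horizontal and vertical flips after un-flipping each output, fits in a few lines; the one-layer model below is a stand-in, not IRv2-Net.

```python
# Flip-based test-time augmentation sketch (model is a stand-in).
import torch

@torch.no_grad()
def tta_predict(model, image):
    """Average sigmoid outputs over identity, horizontal and vertical flips."""
    flips = [
        (lambda t: t,                        lambda t: t),
        (lambda t: torch.flip(t, dims=[-1]), lambda t: torch.flip(t, dims=[-1])),
        (lambda t: torch.flip(t, dims=[-2]), lambda t: torch.flip(t, dims=[-2])),
    ]
    preds = [undo(torch.sigmoid(model(aug(image)))) for aug, undo in flips]
    return torch.stack(preds).mean(dim=0)

model = torch.nn.Conv2d(3, 1, 3, padding=1)  # stand-in for the real network
img = torch.randn(1, 3, 256, 256)
print(tta_predict(model, img).shape)  # torch.Size([1, 1, 256, 256])
```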
Affiliation(s)
- Md. Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md. Khalid Syfullah
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Ovi Sarkar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md. Tohidul Islam
- Department of Information & Communication Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Md. Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh; Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Md. Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Mohamed Arselene Ayari
- Department of Civil and Environmental Engineering, Qatar University, Doha 2713, Qatar; Technology Innovation and Engineering Education Unit (TIEE), Qatar University, Doha 2713, Qatar