1. Cai L, Chen L, Huang J, Wang Y, Zhang Y. Know your orientation: A viewpoint-aware framework for polyp segmentation. Med Image Anal 2024;97:103288. PMID: 39096844. DOI: 10.1016/j.media.2024.103288.
Abstract
Automatic polyp segmentation in endoscopic images is critical for the early diagnosis of colorectal cancer. Despite the availability of powerful segmentation models, two challenges still impede the accuracy of polyp segmentation algorithms. Firstly, during a colonoscopy, physicians frequently adjust the orientation of the colonoscope tip to capture underlying lesions, resulting in viewpoint changes in the colonoscopy images. These variations increase the diversity of polyp visual appearance, posing a challenge for learning robust polyp features. Secondly, polyps often exhibit properties similar to the surrounding tissues, leading to indistinct polyp boundaries. To address these problems, we propose a viewpoint-aware framework named VANet for precise polyp segmentation. In VANet, polyps are emphasized as a discriminative feature and thus can be localized by class activation maps in a viewpoint classification process. With these polyp locations, we design a viewpoint-aware Transformer (VAFormer) to alleviate the erosion of attention by the surrounding tissues, thereby inducing better polyp representations. Additionally, to enhance the polyp boundary perception of the network, we develop a boundary-aware Transformer (BAFormer) to encourage self-attention towards uncertain regions. As a consequence, the combination of the two modules is capable of calibrating predictions and significantly improving polyp segmentation performance. Extensive experiments on seven public datasets across six metrics demonstrate the state-of-the-art results of our method, and VANet can handle colonoscopy images in real-world scenarios effectively. The source code is available at https://github.com/1024803482/Viewpoint-Aware-Network.
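The boundary-aware idea above, steering attention toward uncertain regions, can be illustrated with a minimal sketch: a binary-entropy map that scores ambiguous pixels (probabilities near 0.5) highest. This is illustrative Python, not the VANet implementation; `uncertainty_map` and its inputs are invented for the example.

```python
import math

def uncertainty_map(probs):
    """Binary-entropy uncertainty for a grid of foreground probabilities.

    Ambiguous pixels (p near 0.5, typically at polyp boundaries) score
    close to 1.0; confident pixels (p near 0 or 1) score close to 0.0.
    """
    def entropy(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return [[entropy(p) for p in row] for row in probs]

# Toy 2x3 probability map: confident background, boundary, confident polyp.
probs = [[0.05, 0.50, 0.95],
         [0.10, 0.45, 0.90]]
u = uncertainty_map(probs)
```

A module like BAFormer could then use such a map to re-weight attention toward the high-uncertainty pixels.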
Affiliation(s)
- Linghan Cai
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China; Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China.
- Lijiang Chen
- Department of Electronic Information Engineering, Beihang University, Beijing, 100191, China
- Jianhao Huang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yifeng Wang
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
2. Wang Z, Liu M, Jiang J, Qu X. Colorectal polyp segmentation with denoising diffusion probabilistic models. Comput Biol Med 2024;180:108981. PMID: 39146839. DOI: 10.1016/j.compbiomed.2024.108981.
Abstract
Early detection of polyps is essential to decrease colorectal cancer (CRC) incidence. Therefore, developing an efficient and accurate polyp segmentation technique is crucial for clinical CRC prevention. In this paper, we propose an end-to-end training approach for polyp segmentation that employs a diffusion model. The images are treated as priors, and segmentation is formulated as a mask generation process. In the sampling process, multiple predictions are generated for each input image using the trained model, and significant performance enhancements are achieved through a majority-vote strategy. Four public datasets and one in-house dataset are used to train and test the model. The proposed method achieves mDice scores of 0.934 and 0.967 on Kvasir-SEG and CVC-ClinicDB, respectively. Furthermore, cross-validation is applied to test the generalization of the proposed model, which, to the best of our knowledge, outperforms previous state-of-the-art (SOTA) models. The proposed method thus significantly improves segmentation accuracy and has strong generalization capability.
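The majority-vote step described above can be sketched as follows (illustrative Python, not the authors' code; ties are resolved to background here as an arbitrary choice):

```python
def majority_vote(masks):
    """Pixel-wise majority vote over several binary masks, e.g. multiple
    samples drawn from a trained diffusion model for the same image.
    A pixel is foreground only if a strict majority of masks agree."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            votes = sum(m[i][j] for m in masks)
            out[i][j] = 1 if votes * 2 > n else 0
    return out

# Three toy 1x3 mask samples for one input image.
samples = [
    [[1, 1, 0]],
    [[1, 0, 0]],
    [[1, 1, 1]],
]
fused = majority_vote(samples)
```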
Affiliation(s)
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China.
- Ming Liu
- Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY, United States
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
3. Jia Y, Feng G, Yang T, Chen S, Dai F. Self-Adaptive Teacher-Student framework for colon polyp segmentation from unannotated private data with public annotated datasets. PLoS One 2024;19:e0307777. PMID: 39196967. PMCID: PMC11356409. DOI: 10.1371/journal.pone.0307777.
Abstract
Colon polyps have become a focal point of research due to their potential to develop into colorectal cancer, which has one of the highest mortality rates globally. Although numerous colon polyp segmentation methods have been developed using public polyp datasets, they tend to underperform on private datasets because of inconsistencies in data distribution and the difficulty of fine-tuning without annotations. In this paper, we propose a Self-Adaptive Teacher-Student (SATS) framework to segment colon polyps from unannotated private data by utilizing multiple publicly annotated datasets. The SATS trains multiple teacher networks on public datasets and then generates pseudo-labels on private data to assist in training a student network. To enhance the reliability of the pseudo-labels from the teacher networks, the SATS includes a newly proposed Uncertainty and Distance Fusion (UDFusion) strategy. UDFusion dynamically adjusts the pseudo-label weights based on a novel reconstruction similarity measure, bridging the gap between private and public data distributions. To ensure accurate identification and segmentation of colon polyps, the SATS also incorporates a Granular Attention Network (GANet) architecture for both teacher and student networks. GANet first identifies polyps roughly from a global perspective by encoding long-range anatomical dependencies and then refines this identification to remove false-positive areas through multi-scale background-foreground attention. The SATS framework was validated on three public datasets and one private dataset, achieving 76.30% IoU, 86.00% Recall, and a Hausdorff distance (HD) of 7.01 pixels. These results outperform five existing methods, indicating the effectiveness of this approach for colon polyp segmentation.
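The pseudo-label fusion idea can be illustrated with a simplified sketch. The abstract does not give UDFusion's formula, so the per-teacher weights below (standing in for its uncertainty/similarity scores) and the weighted-average fusion rule are assumptions for illustration only:

```python
def fuse_pseudo_labels(teacher_probs, weights, threshold=0.5):
    """Weighted fusion of per-teacher foreground-probability maps into one
    binary pseudo-label mask. `weights` are hypothetical per-teacher
    reliability scores; higher weight means more trusted teacher."""
    total = sum(weights)
    norm = [w / total for w in weights]
    h, w_ = len(teacher_probs[0]), len(teacher_probs[0][0])
    fused = [[sum(n * t[i][j] for n, t in zip(norm, teacher_probs))
              for j in range(w_)] for i in range(h)]
    # Threshold the fused probabilities into a hard pseudo-label.
    return [[1 if p > threshold else 0 for p in row] for row in fused]

# Two toy teachers disagree on the second pixel; teacher 1 is trusted more.
t1 = [[0.9, 0.2]]
t2 = [[0.6, 0.8]]
mask = fuse_pseudo_labels([t1, t2], weights=[0.8, 0.2])
```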
Affiliation(s)
- Yiwen Jia
- Department of Gastroenterology, The Third Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Guangming Feng
- Department of Gastroenterology, The Third Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Tang Yang
- Department of Gastroenterology, The Third Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Siyuan Chen
- Department of Gastroenterology, The Third Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Fu Dai
- Department of Gastroenterology, The Third Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
4. Hussain MS, Asgher U, Nisar S, Socha V, Shaukat A, Wang J, Feng T, Paracha RZ, Khan MA. Enhanced accuracy with segmentation of colorectal polyp using NanoNetB, and Conditional Random Field Test-Time Augmentation. Front Robot AI 2024;11:1387491. PMID: 39184863. PMCID: PMC11341306. DOI: 10.3389/frobt.2024.1387491.
Abstract
Colonoscopy is a reliable diagnostic method to detect colorectal polyps early on and prevent colorectal cancer. Current examination techniques face a significant challenge of high miss rates, resulting in numerous undetected polyps and irregularities. Automated, real-time segmentation methods can help endoscopists delineate the shape and location of polyps in colonoscopy images, facilitating timely diagnosis and intervention. Factors such as varied shapes, small polyp sizes, and close resemblance to surrounding tissues make this task challenging. Furthermore, the demand for high-definition image quality and reliance on the operator make real-time, accurate endoscopic image segmentation harder still. Deep learning models used for segmenting polyps, designed to capture diverse patterns, are becoming progressively complex, which poses challenges for real-time medical operations. In clinical settings, automated methods require accurate, lightweight models with minimal latency that integrate seamlessly with endoscopic hardware. To address these challenges, this study proposes a novel lightweight and more generalized Enhanced NanoNet model, an improved version of NanoNet built on NanoNetB, for real-time and precise colonoscopy image segmentation. The proposed model enhances NanoNetB's overall prediction scheme by applying data augmentation, Conditional Random Field (CRF) post-processing, and Test-Time Augmentation (TTA). Six publicly available datasets are utilized for thorough evaluation, generalizability assessment, and validation of the improvements: Kvasir-SEG, Endotect Challenge 2020, Kvasir-instrument, CVC-ClinicDB, CVC-ColonDB, and CVC-300.
Through extensive experimentation on the Kvasir-SEG dataset, the model achieves a mIoU score of 0.8188 and a Dice coefficient of 0.8060 with only 132,049 parameters and minimal computational resources. A thorough cross-dataset evaluation assessed the generalization capability of the Enhanced NanoNet model across publicly available polyp datasets for potential real-world applications. The results show that CRF and TTA enhance performance both within the same dataset and across diverse datasets with a model size of just 132,049 parameters. The proposed method also shows improved results in detecting smaller and sessile (flat) polyps, which are significant contributors to high miss rates.
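Test-time augmentation as used above can be sketched minimally: predict on augmented copies of the input, invert the augmentation on each prediction, and average. This flip-only Python example is illustrative; the paper's full TTA pipeline and its CRF post-processing are not reproduced here:

```python
def hflip(img):
    """Mirror a 2-D image (list of rows) left-right."""
    return [row[::-1] for row in img]

def tta_predict(img, predict):
    """Horizontal-flip TTA: predict on the original and the flipped image,
    undo the flip on the second prediction, and average the two
    probability maps. `predict` stands in for any trained model."""
    p0 = predict(img)
    p1 = hflip(predict(hflip(img)))
    return [[(a + b) / 2 for a, b in zip(r0, r1)]
            for r0, r1 in zip(p0, p1)]

# Toy "model": predicted probability equals the pixel intensity.
def identity_model(img):
    return [row[:] for row in img]

avg = tta_predict([[0.2, 0.8]], identity_model)
```

For this flip-invariant toy model the averaged prediction matches the plain one; for a real network the average smooths out orientation-dependent errors.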
Affiliation(s)
- Muhammad Sajjad Hussain
- Department of Computer Science, Sir Syed (CASE) Institute of Technology, Islamabad, Pakistan
- Umer Asgher
- Laboratory of Human Factors and Automation in Aviation, Department of Air Transport, Faculty of Transportation Sciences, Czech Technical University in Prague (CTU), Prague, Czechia
- School of Interdisciplinary Engineering and Sciences (SINES), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Sajid Nisar
- Department of Mechanical and Electrical Systems Engineering, Faculty of Engineering, Kyoto University of Advanced Science, Kyoto, Japan
- Vladimir Socha
- Laboratory of Human Factors and Automation in Aviation, Department of Air Transport, Faculty of Transportation Sciences, Czech Technical University in Prague (CTU), Prague, Czechia
- Department of Information and Communication Technology in Medicine, Faculty of Biomedical Engineering, Czech Technical University in Prague, Prague, Czechia
- Arslan Shaukat
- Department of Computer and Software Engineering, College of Electrical and Mechanical Engineering (CoEME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Jinhui Wang
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Tian Feng
- Department of Physical Education, Physical Education College of Zhengzhou University, Zhengzhou, China
- Rehan Zafar Paracha
- School of Interdisciplinary Engineering and Sciences (SINES), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Muhammad Ali Khan
- Department of Mechanical Engineering, College of Electrical and Mechanical Engineering (CoEME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
- School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, Pakistan
5. Lin Q, Tan W, Cai S, Yan B, Li J, Zhong Y. Lesion-Decoupling-Based Segmentation With Large-Scale Colon and Esophageal Datasets for Early Cancer Diagnosis. IEEE Trans Neural Netw Learn Syst 2024;35:11142-11156. PMID: 37028330. DOI: 10.1109/TNNLS.2023.3248804.
Abstract
Lesions of early cancers often appear flat, small, and isochromatic in medical endoscopy images, making them difficult to capture. By analyzing the differences between the internal and external features of the lesion area, we propose a lesion-decoupling-based segmentation (LDS) network for assisting early cancer diagnosis. We introduce a plug-and-play module called the self-sampling similar feature disentangling module (FDM) to obtain accurate lesion boundaries. We then propose a feature separation loss (FSL) function to separate pathological features from normal ones. Moreover, since physicians make diagnoses with multimodal data, we propose a multimodal cooperative segmentation network that takes two modalities as input: white-light images (WLIs) and narrowband images (NBIs). Our FDM and FSL perform well for both single-modal and multimodal segmentation. Extensive experiments on five backbones show that FDM and FSL can be easily applied to different backbones for a significant improvement in lesion segmentation accuracy, with a maximum mean Intersection over Union (mIoU) gain of 4.58. For colonoscopy, the method achieves an mIoU of up to 91.49 on our Dataset A and 84.41 on the three public datasets. For esophagoscopy, the best mIoU is 64.32 on the WLI dataset and 66.31 on the NBI dataset.
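The feature separation idea, pushing pathological and normal features apart, can be illustrated with a hinge-style sketch on feature centroids. The abstract does not give the actual FSL formulation, so this margin loss is an assumption for illustration only:

```python
import math

def separation_loss(lesion_feats, normal_feats, margin=1.0):
    """Hypothetical separation loss: penalize lesion and normal feature
    centroids that sit closer together than `margin` in Euclidean
    distance; zero loss once they are well separated."""
    def centroid(vecs):
        n = len(vecs)
        return [sum(v[d] for v in vecs) / n for d in range(len(vecs[0]))]
    c1, c2 = centroid(lesion_feats), centroid(normal_feats)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))
    return max(0.0, margin - dist)

# Well-separated feature clusters incur no loss; overlapping ones do.
far = separation_loss([[0.0, 0.0]], [[3.0, 4.0]])   # distance 5 > margin
near = separation_loss([[0.0, 0.0]], [[0.3, 0.4]])  # distance 0.5 < margin
```

Minimizing such a loss during training would push the two feature groups apart, which is the stated goal of FSL.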
6. Hizukuri A, Nakayama R, Goto M, Sakai K. Computerized Segmentation Method for Nonmasses on Breast DCE-MRI Images Using ResUNet++ with Slice Sequence Learning and Cross-Phase Convolution. J Imaging Inform Med 2024;37:1567-1578. PMID: 38441702. PMCID: PMC11300778. DOI: 10.1007/s10278-024-01053-6.
Abstract
The purpose of this study was to develop a computerized segmentation method for nonmasses using ResUNet++ with slice sequence learning and cross-phase convolution to analyze temporal information in breast dynamic contrast material-enhanced magnetic resonance imaging (DCE-MRI) images. The dataset consisted of DCE-MRI examinations from 54 patients, each containing three-phase images: one acquired before contrast injection and two acquired after. In the proposed method, region of interest (ROI) slice images are first extracted from each phase image. The slice images at the same position in each ROI are stacked to generate a three-dimensional (3D) tensor. A cross-phase convolution generates feature maps from the 3D tensor to incorporate the temporal information. These feature maps are then used as the input layers for ResUNet++. New feature maps are extracted from the input data by the ResUNet++ encoders, after which the nonmass regions are segmented by a decoder. A convolutional long short-term memory layer is introduced into the decoder to analyze the sequence of slice images. With the proposed method, the average detection accuracy of nonmasses, number of false positives, Jaccard coefficient, Dice similarity coefficient, positive predictive value, and sensitivity were 90.5%, 1.91, 0.563, 0.712, 0.714, and 0.727, respectively, outperforming 3D U-Net, V-Net, and nnFormer. The proposed method achieves high detection and shape accuracies and will be useful in differential diagnosis of nonmasses.
Affiliation(s)
- Akiyoshi Hizukuri
- Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga, 525-8577, Japan.
- Ryohei Nakayama
- Department of Electronic and Computer Engineering, Ritsumeikan University, 1-1-1 Noji-Higashi, Kusatsu, Shiga, 525-8577, Japan
- Mariko Goto
- Department of Radiology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, 465 Kajiicho, Kawaramachi Hirokoji, Kamigyoku, Kyoto, 602-8566, Japan
- Koji Sakai
- Department of Radiology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, 465 Kajiicho, Kawaramachi Hirokoji, Kamigyoku, Kyoto, 602-8566, Japan
7. Chen Y, Zhang X, Peng L, He Y, Sun F, Sun H. Medical image segmentation network based on multi-scale frequency domain filter. Neural Netw 2024;175:106280. PMID: 38579574. DOI: 10.1016/j.neunet.2024.106280.
Abstract
With the development of deep learning, medical image segmentation in computer-aided diagnosis has become a research hotspot. Recently, UNet and its variants have become the most powerful medical image segmentation methods. However, these methods suffer from (1) insufficient receptive field and network depth; (2) computational nonlinearity and redundancy of channel features; and (3) neglect of the interrelationships among feature channels. These problems lead to poor segmentation performance and weak generalization ability. We therefore first propose an effective replacement for the UNet base block, the double residual depthwise atrous convolution (DRDAC) block, to remedy the deficiencies in receptive field and depth. Secondly, a new linear module, the multi-scale frequency domain filter (MFDF), is designed to capture global information in the frequency domain; high-order multi-scale relationships are extracted by combining depthwise atrous separable convolution with the frequency domain filter. Finally, a channel attention mechanism called axial selection channel attention (ASCA) is designed to enhance the network's ability to model feature channel interrelationships. Building on these modules, we design a novel frequency-domain medical image segmentation baseline, FDFUNet. Extensive experiments on five publicly available medical image datasets demonstrate that the method has stronger segmentation performance and generalization ability than other state-of-the-art baseline methods.
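A frequency-domain filter of the kind MFDF describes can be illustrated in one dimension: transform the signal, attenuate selected frequency bins, and transform back. This pure-Python DFT low-pass is a stand-in for the paper's learnable multi-scale filter (the function names and fixed keep-lowest-bins rule are assumptions for the sketch):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

def lowpass(signal, keep):
    """Zero all but the `keep` lowest-frequency bins (and their conjugate
    partners at the top of the spectrum), then transform back."""
    X = dft(signal)
    N = len(X)
    filtered = [X[k] if (k < keep or k > N - keep) else 0
                for k in range(N)]
    return idft(filtered)

# A constant signal has only a DC component, so it survives unchanged.
out = lowpass([1.0, 1.0, 1.0, 1.0], keep=1)
```

In a network, the filter coefficients would be learned per frequency bin rather than fixed, and applied to 2-D feature maps.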
Affiliation(s)
- Yufeng Chen
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, PR China.
- Xiaoqian Zhang
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, PR China
- Lifan Peng
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, PR China
- Youdong He
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, PR China
- Feng Sun
- Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology of China, Mianyang 621010, China; NHC Key Laboratory of Nuclear Technology Medical Transformation, Mianyang Central Hospital, Mianyang 621010, PR China
- Huaijiang Sun
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, PR China
8. Xu Z, Miao Y, Chen G, Liu S, Chen H. GLGFormer: Global Local Guidance Network for Mucosal Lesion Segmentation in Gastrointestinal Endoscopy Images. J Imaging Inform Med 2024. PMID: 38940891. DOI: 10.1007/s10278-024-01162-2.
Abstract
Automatic mucosal lesion segmentation is a critical component in computer-aided clinical support systems for endoscopic image analysis. Image segmentation networks currently rely mainly on convolutional neural networks (CNNs) and Transformers, which have demonstrated strong performance in various applications. However, they cannot cope with blurred lesion boundaries and lesions of different scales in gastrointestinal endoscopy images. To address these challenges, we propose a new Transformer-based network, named GLGFormer, for the task of mucosal lesion segmentation. Specifically, we design the global guidance module to guide single-scale features patch-wise, enabling them to incorporate global information from the global map without information loss. Furthermore, a partial decoder is employed to fuse these enhanced single-scale features, achieving single-scale to multi-scale enhancement. Additionally, the local guidance module is designed to refocus attention on the neighboring patch, thus enhancing local features and refining lesion boundary segmentation. We conduct experiments on a private atrophic gastritis segmentation dataset and four public gastrointestinal polyp segmentation datasets. Compared to the current lesion segmentation networks, our proposed GLGFormer demonstrates outstanding learning and generalization capabilities. On the public dataset ClinicDB, GLGFormer achieved a mean intersection over union (mIoU) of 91.0% and a mean dice coefficient (mDice) of 95.0%. On the private dataset Gastritis-Seg, GLGFormer achieved an mIoU of 90.6% and an mDice of 94.6%.
Affiliation(s)
- Zhiyang Xu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, School of Information and Control Engineering, Advanced Robotics Research Center, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, P. R. China
- Yanzi Miao
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, School of Information and Control Engineering, Advanced Robotics Research Center, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, P. R. China
- Guangxia Chen
- Department of Gastroenterology, Xuzhou Municipal Hospital Affiliated to Xuzhou Medical University, Xuzhou, Jiangsu, 221002, P. R. China
- Shiyu Liu
- Department of Gastroenterology, Xuzhou Municipal Hospital Affiliated to Xuzhou Medical University, Xuzhou, Jiangsu, 221002, P. R. China
- Hu Chen
- The First Clinical Medical School of Xuzhou Medical University, Xuzhou, Jiangsu, 221002, P. R. China
9. Wang Z, Liu Z, Yu J, Gao Y, Liu M. Multi-scale nested UNet with transformer for colorectal polyp segmentation. J Appl Clin Med Phys 2024;25:e14351. PMID: 38551396. PMCID: PMC11163511. DOI: 10.1002/acm2.14351.
Abstract
BACKGROUND: Polyp detection and localization are essential tasks for colonoscopy. U-shaped convolutional neural networks have achieved remarkable segmentation performance for biomedical images, but their lack of long-range dependency modeling limits their receptive fields.
PURPOSE: Our goal was to develop and test a novel architecture for polyp segmentation that learns local information while modeling long-range dependencies.
METHODS: A novel architecture combining a multi-scale nested UNet structure with an integrated transformer was developed for polyp segmentation. The proposed network takes advantage of both CNNs and transformers to extract distinct feature information. The transformer layer is embedded between the encoder and decoder of a U-shaped net to learn explicit global context and long-range semantic information. To address the challenge of varying polyp sizes, an MSFF unit is proposed to fuse features at multiple resolutions.
RESULTS: Four public datasets and one in-house dataset were used to train and test the model. An ablation study was conducted to verify each component of the model. On Kvasir-SEG and CVC-ClinicDB, the proposed model achieved mean Dice scores of 0.942 and 0.950, respectively, more accurate than the other methods. To show the generalization of different methods, two cross-dataset validations were performed, in which the proposed model achieved the highest mean Dice score. The results demonstrate that the proposed network has powerful learning and generalization capability, significantly improving segmentation accuracy and outperforming state-of-the-art methods.
CONCLUSIONS: The proposed model produced more accurate polyp segmentation than current methods on four public datasets and one in-house dataset. Its ability to segment polyps of different sizes shows potential for clinical application.
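The multi-scale fusion performed by an MSFF-style unit can be sketched as bringing a coarse feature map to the fine resolution and combining element-wise. The fixed nearest-neighbour upsample and plain sum below are illustrative assumptions; the actual unit's learned combination is not specified in the abstract:

```python
def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a 2-D feature map."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(2)]  # repeat each column
        out.append(wide)
        out.append(wide[:])                        # repeat each row
    return out

def fuse_two_scales(fine, coarse):
    """Bring the coarse map to the fine resolution and add element-wise;
    a learned fusion would weight or convolve instead of summing."""
    up = upsample2x(coarse)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(fine, up)]

fine = [[1.0, 1.0], [1.0, 1.0]]  # 2x2 high-resolution features
coarse = [[0.5]]                 # 1x1 low-resolution features
fused = fuse_two_scales(fine, coarse)
```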
Affiliation(s)
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Zhen Liu
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Jianfeng Yu
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Yingxin Gao
- Department of Gastroenterology, Beijing Chaoyang Hospital, the Third Clinical Medical College of Capital Medical University, Beijing, China
- Ming Liu
- Hunan Key Laboratory of Nonferrous Resources and Geological Hazard Exploration, Changsha, China
10. Ou Z, Bai J, Chen Z, Lu Y, Wang H, Long S, Chen G. RTSeg-net: A lightweight network for real-time segmentation of fetal head and pubic symphysis from intrapartum ultrasound images. Comput Biol Med 2024;175:108501. PMID: 38703545. DOI: 10.1016/j.compbiomed.2024.108501.
Abstract
The segmentation of the fetal head (FH) and pubic symphysis (PS) from intrapartum ultrasound images plays a pivotal role in monitoring labor progression and informing crucial clinical decisions. Achieving real-time segmentation with high accuracy on systems with limited hardware capabilities presents significant challenges. To address these challenges, we propose the real-time segmentation network (RTSeg-Net), a lightweight deep learning model that incorporates distribution-shifting convolutional blocks, tokenized multilayer perceptron blocks, and efficient feature fusion blocks. Designed for computational efficiency, RTSeg-Net minimizes resource demand while significantly enhancing segmentation performance. Our evaluation on two distinct intrapartum ultrasound image datasets shows that RTSeg-Net achieves segmentation accuracy on par with more complex state-of-the-art networks while using merely 1.86 M parameters (about 6% of the parameters of those networks) and operating seven times faster, at 31.13 frames per second on a Jetson Nano, a device with limited computing capacity. These results underscore RTSeg-Net's potential to provide accurate, real-time segmentation on low-power devices, broadening the scope for its application across various stages of labor. By facilitating real-time, accurate ultrasound image analysis on portable, low-cost devices, RTSeg-Net promises to improve intrapartum monitoring, making sophisticated diagnostic tools accessible to a wider range of healthcare settings.
Affiliation(s)
- Zhanhong Ou
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Jieyun Bai
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China; Auckland Bioengineering Institute, University of Auckland, Auckland, 1010, New Zealand
- Zhide Chen
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Yaosheng Lu
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Huijin Wang
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Shun Long
- Guangdong Provincial Key Laboratory of Traditional Chinese Medicine Information Technology, Jinan University, Guangzhou, 510632, China
- Gaowen Chen
- Obstetrics and Gynecology Center, Zhujiang Hospital, Southern Medical University, Guangzhou, 510280, China
11. Xu Z, Guo X, Wang J. Enhancing skin lesion segmentation with a fusion of convolutional neural networks and transformer models. Heliyon 2024;10:e31395. PMID: 38807881. PMCID: PMC11130697. DOI: 10.1016/j.heliyon.2024.e31395.
Abstract
Accurate segmentation is crucial in diagnosing and analyzing skin lesions. However, automatic segmentation of skin lesions is extremely challenging because of their variable sizes, uneven color distributions, irregular shapes, hair occlusions, and blurred boundaries. Owing to the limited range of convolutional networks' receptive fields, shallow convolution cannot extract the global features of images and thus has limited segmentation performance. Because medical image datasets are small in scale, the use of excessively deep networks could cause overfitting and increase computational complexity. Although transformer networks can focus on extracting global information, they cannot extract sufficient local information and accurately segment detailed lesion features. In this study, we designed a dual-branch encoder that combines a convolutional neural network (CNN) and a transformer. The CNN branch of the encoder comprises four layers, which learn the local features of images through layer-wise downsampling. The transformer branch also comprises four layers, enabling the learning of global image information through attention mechanisms. The feature fusion module in the network integrates local features and global information, emphasizes important channel features through the channel attention mechanism, and filters irrelevant feature expressions. The information exchange between the decoder and encoder is finally achieved through skip connections to supplement the information lost during the sampling process, thereby enhancing segmentation accuracy. The data used in this paper are from four public datasets, including images of melanoma, basal cell tumor, fibroma, and benign nevus. Because of the limited size of the image data, we enhanced them using methods such as random horizontal flipping, random vertical flipping, random brightness enhancement, random contrast enhancement, and rotation.
The segmentation accuracy is evaluated through intersection over union (IoU) and Dice indicators, reaching 87.7% and 93.21%, 82.05% and 89.19%, 86.81% and 92.72%, and 92.79% and 96.21% on the ISIC 2016, ISIC 2017, ISIC 2018, and PH2 datasets, respectively (code: https://github.com/hyjane/CCT-Net).
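For reference, the paired IoU and Dice scores reported above can be computed from binary masks as in the following minimal sketch (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, target: np.ndarray) -> tuple:
    """Compute intersection over union and Dice for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0        # empty masks count as perfect overlap
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)

# Example: two overlapping 4x4 masks
a = np.zeros((4, 4)); a[:2, :2] = 1   # 4 foreground pixels
b = np.zeros((4, 4)); b[:2, :] = 1    # 8 foreground pixels
iou, dice = iou_and_dice(a, b)        # inter = 4, union = 8 -> IoU = 0.5, Dice = 2/3
```

Dice is always at least as large as IoU for the same pair of masks, which is why each Dice value above exceeds its paired IoU value.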
Affiliation(s)
- Zhijian Xu
- School of Electronic Information Engineering, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China
- Xingyue Guo
- School of Computer Science, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China
- Juan Wang
- School of Computer Science, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China

12
Shu X, Wang J, Zhang A, Shi J, Wu XJ. CSCA U-Net: A channel and space compound attention CNN for medical image segmentation. Artif Intell Med 2024; 150:102800. [PMID: 38553146 DOI: 10.1016/j.artmed.2024.102800] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 12/10/2023] [Accepted: 02/03/2024] [Indexed: 04/02/2024]
Abstract
Image segmentation is one of the vital steps in medical image analysis. A large number of methods based on convolutional neural networks have emerged, which can extract abstract features from multiple-modality medical images, learn valuable information that is difficult to recognize by humans, and obtain more reliable results than traditional image segmentation approaches. U-Net, due to its simple structure and excellent performance, is widely used in medical image segmentation. In this paper, to further improve the performance of U-Net, we propose a channel and space compound attention (CSCA) convolutional neural network, CSCA U-Net in abbreviation, which increases the network depth and employs a double squeeze-and-excitation (DSE) block in the bottleneck layer to enhance feature extraction and obtain more high-level semantic features. Moreover, the characteristics of the proposed method are three-fold: (1) channel and space compound attention (CSCA) block, (2) cross-layer feature fusion (CLFF), and (3) deep supervision (DS). Extensive experiments on several available medical image datasets, including Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, ETIS, CVC-T, 2018 Data Science Bowl (2018 DSB), ISIC 2018, and JSUAH-Cerebellum, show that CSCA U-Net achieves competitive results and significantly improves generalization performance. The codes and trained models are available at https://github.com/xiaolanshu/CSCA-U-Net.
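A generic squeeze-and-excitation channel-attention step, of the kind the double squeeze-and-excitation (DSE) block above builds on, can be sketched in NumPy as follows (shapes and weights are illustrative; this is not the paper's implementation):

```python
import numpy as np

def squeeze_excite(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Channel attention: squeeze (global average pool), excite (two FC layers),
    then rescale each channel by its learned gate.
    x: feature map of shape (C, H, W); w1: (C // r, C); w2: (C, C // r)."""
    s = x.mean(axis=(1, 2))                   # squeeze: per-channel statistic, (C,)
    z = np.maximum(w1 @ s, 0.0)               # excite, part 1: FC + ReLU, (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))    # excite, part 2: FC + sigmoid, (C,)
    return x * gate[:, None, None]            # rescale channels

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))            # toy feature map: 8 channels
w1 = rng.standard_normal((4, 8)) * 0.1        # reduction ratio r = 2
w2 = rng.standard_normal((8, 4)) * 0.1
y = squeeze_excite(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated rather than amplified, which lets the network suppress irrelevant channel features while keeping informative ones.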
Affiliation(s)
- Xin Shu
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, Jiangsu, China; Development and Related Diseases of Women and Children Key Laboratory of Sichuan Province, Chengdu, 610041, Sichuan, China.
- Jiashu Wang
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, Jiangsu, China
- Aoping Zhang
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, Jiangsu, China
- Jinlong Shi
- School of Computer Science, Jiangsu University of Science and Technology, Zhenjiang, 212100, Jiangsu, China
- Xiao-Jun Wu
- School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214122, Jiangsu, China

13
Yang C, Zhang Z. PFD-Net: Pyramid Fourier Deformable Network for medical image segmentation. Comput Biol Med 2024; 172:108302. [PMID: 38503092 DOI: 10.1016/j.compbiomed.2024.108302] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Revised: 02/26/2024] [Accepted: 03/12/2024] [Indexed: 03/21/2024]
Abstract
Medical image segmentation is crucial for accurately locating lesion regions and assisting doctors in diagnosis. However, most existing methods fail to effectively utilize both local details and global semantic information in medical image segmentation, resulting in the inability to effectively capture fine-grained content such as small targets and irregular boundaries. To address this issue, we propose a novel Pyramid Fourier Deformable Network (PFD-Net) for medical image segmentation, which leverages the strengths of CNN and Transformer. The PFD-Net first utilizes a PVTv2-based Transformer as the primary encoder to capture global information and further enhances both local and global feature representations with the Fast Fourier Convolution Residual (FFCR) module. Moreover, PFD-Net further proposes the Dilated Deformable Refinement (DDR) module to enhance the model's capacity to comprehend global semantic structures of shape-diverse targets and their irregular boundaries. Lastly, a Cross-Level Fusion Block with deformable convolution (CLFB) is proposed to combine the decoded feature maps from the final Residual Decoder Block (DDR) with local features from the CNN auxiliary encoder branch, improving the network's ability to perceive targets resembling the surrounding structures. Extensive experiments were conducted on nine public medical image datasets for five types of segmentation tasks, including polyp, abdominal, cardiac, gland cell, and nucleus segmentation. The qualitative and quantitative results demonstrate that PFD-Net outperforms existing state-of-the-art methods in various evaluation metrics, and achieves the highest performance of mDice with the value of 0.826 on the most challenging dataset (ETIS), which is a 1.8% improvement compared to the previous best-performing HSNet and a 3.6% improvement compared to the next-best PVT-CASCADE. Codes are available at https://github.com/ChaorongYang/PFD-Net.
Affiliation(s)
- Chaorong Yang
- College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China; Hebei Provincial Engineering Research Center for Supply Chain Big Data Analytics & Data Security, Shijiazhuang 050024, China; Hebei Provincial Key Laboratory of Network & Information Security, Hebei Normal University, Shijiazhuang 050024, China.
- Zhaohui Zhang
- College of Computer and Cyber Security, Hebei Normal University, Shijiazhuang 050024, China; Hebei Provincial Engineering Research Center for Supply Chain Big Data Analytics & Data Security, Shijiazhuang 050024, China; Hebei Provincial Key Laboratory of Network & Information Security, Hebei Normal University, Shijiazhuang 050024, China.

14
Chen TH, Wang YT, Wu CH, Kuo CF, Cheng HT, Huang SW, Lee C. A colonial serrated polyp classification model using white-light ordinary endoscopy images with an artificial intelligence model and TensorFlow chart. BMC Gastroenterol 2024; 24:99. [PMID: 38443794 PMCID: PMC10913269 DOI: 10.1186/s12876-024-03181-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Accepted: 02/19/2024] [Indexed: 03/07/2024] Open
Abstract
In this study, we implemented a combination of data augmentation and an artificial intelligence (AI) model, a convolutional neural network (CNN), to help physicians classify colonic polyps into traditional adenoma (TA), sessile serrated adenoma (SSA), and hyperplastic polyp (HP). We collected ordinary endoscopy images under both white and NBI lights. Under white light, we collected 257 images of HP, 423 images of SSA, and 60 images of TA. Under NBI light, we collected 238 images of HP, 284 images of SSA, and 71 images of TA. We implemented the CNN-based artificial intelligence model Inception V4 to build a classification model for the types of colon polyps. Our final AI classification model with the data augmentation process was constructed using only white-light images. Our classification prediction accuracy for colon polyp type was 94%, and the discriminability of the model (area under the curve) was 98%. Thus, we can conclude that our model can help physicians distinguish between TA, SSA, and HPs and correctly identify precancerous lesions such as TA and SSA.
Affiliation(s)
- Tsung-Hsing Chen
- Department of Gastroenterology and Hepatology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chi-Huan Wu
- Department of Gastroenterology and Hepatology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Chang-Fu Kuo
- Division of Rheumatology, Allergy, and Immunology, Chang Gung Memorial Hospital- Linkou and Chang Gung University College of Medicine, Taoyuan, Taiwan, ROC
- Center for Artificial Intelligence in Medicine, Chang Gung Memorial Hospital, Taoyuan, Taiwan, ROC
- Hao-Tsai Cheng
- Department of Gastroenterology and Hepatology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, New Taipei Municipal TuCheng Hospital, New Taipei City, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, Chang Gung University, Taoyuan City, Taiwan
- Shu-Wei Huang
- Department of Gastroenterology and Hepatology, Linkou Medical Center, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, New Taipei Municipal TuCheng Hospital, New Taipei City, Taiwan
- Chieh Lee
- Department of Information and Management, College of Business, National Sun Yat-sen University, Kaohsiung city, Taiwan.

15
Gangrade S, Sharma PC, Sharma AK, Singh YP. Modified DeeplabV3+ with multi-level context attention mechanism for colonoscopy polyp segmentation. Comput Biol Med 2024; 170:108096. [PMID: 38320340 DOI: 10.1016/j.compbiomed.2024.108096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Revised: 01/31/2024] [Accepted: 02/01/2024] [Indexed: 02/08/2024]
Abstract
The development of automated methods for analyzing medical images of colon cancer is one of the main research fields. A colonoscopy is a medical procedure that enables a doctor to look for any abnormalities like polyps, cancer, or inflammatory tissue inside the colon and rectum. Colon cancer falls under the category of gastrointestinal illnesses, and it claims the lives of almost two million people worldwide. Video endoscopy is an advanced medical imaging approach to diagnose gastrointestinal disorders such as inflammatory bowel disease, ulcerative colitis, esophagitis, and polyps. Medical video endoscopy generates numerous images, which must be reviewed by specialists. The difficulty of manual diagnosis has sparked research towards computer-aided techniques that can quickly and reliably diagnose all generated images. The proposed methodology establishes a framework for diagnosing colonoscopy diseases. Endoscopists can lower the risk of polyps turning into cancer during colonoscopies by using more accurate computer-assisted polyp detection and segmentation. With the aim of creating a model that can automatically distinguish polyps from images, we presented a modified DeeplabV3+ model in this study to carry out segmentation tasks successfully and efficiently. The framework's encoder uses a pre-trained dilated convolutional residual network for optimal feature map resolution. The robustness of the modified model is tested against state-of-the-art segmentation approaches. In this work, we employed two publicly available datasets, CVC-ClinicDB and Kvasir-SEG, and obtained Dice similarity coefficients of 0.97 and 0.95, respectively. The results show that the improved DeeplabV3+ model improves segmentation efficiency and effectiveness in both software and hardware with only minor changes.
Affiliation(s)
- Shweta Gangrade
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Prakash Chandra Sharma
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Akhilesh Kumar Sharma
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Yadvendra Pratap Singh
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India.

16
Li C, Liu J, Tang J. Simultaneous segmentation and classification of colon cancer polyp images using a dual branch multi-task learning network. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:2024-2049. [PMID: 38454673 DOI: 10.3934/mbe.2024090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/09/2024]
Abstract
Accurate classification and segmentation of polyps are two important tasks in the diagnosis and treatment of colorectal cancers. Existing models perform segmentation and classification separately and do not fully make use of the correlation between the two tasks. Furthermore, polyps exhibit random regions and varying shapes and sizes, and they often share similar boundaries and backgrounds. However, existing models fail to consider these factors and thus are not robust because of their inherent limitations. To address these issues, we developed a multi-task network that performs both segmentation and classification simultaneously and can cope with the aforementioned factors effectively. Our proposed network possesses a dual-branch structure, comprising a transformer branch and a convolutional neural network (CNN) branch. This approach enhances local details within the global representation, improving both local feature awareness and global contextual understanding, thus contributing to the improved preservation of polyp-related information. Additionally, we have designed a feature interaction module (FIM) aimed at bridging the semantic gap between the two branches and facilitating the integration of diverse semantic information from both branches. This integration enables the full capture of global context information and local details related to polyps. To prevent the loss of edge detail information crucial for polyp identification, we have introduced a reverse attention boundary enhancement (RABE) module to gradually enhance edge structures and detailed information within polyp regions. Finally, we conducted extensive experiments on five publicly available datasets to evaluate the performance of our method in both polyp segmentation and classification tasks. The experimental results confirm that our proposed method outperforms other state-of-the-art methods.
Affiliation(s)
- Chenqian Li
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Jun Liu
- School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan 430065, China
- Jinshan Tang
- Department of Health Administration and Policy, College of Public Health, George Mason University, Fairfax, VA 22030, USA

17
Zhao Y, Li J, Ren L, Chen Z. DTAN: Diffusion-based Text Attention Network for medical image segmentation. Comput Biol Med 2024; 168:107728. [PMID: 37984203 DOI: 10.1016/j.compbiomed.2023.107728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Revised: 11/09/2023] [Accepted: 11/15/2023] [Indexed: 11/22/2023]
Abstract
In the current era, diffusion models have emerged as a groundbreaking force in the realm of medical image segmentation. Against this backdrop, we introduce the Diffusion Text-Attention Network (DTAN), a pioneering segmentation framework that amalgamates the principles of text attention with diffusion models to enhance the precision and integrity of medical image segmentation. Our proposed DTAN architecture is designed to steer the segmentation process towards areas of interest by leveraging a text attention mechanism. This mechanism is adept at identifying and zeroing in on the regions of significance, thus improving the accuracy and robustness of the segmentation. In parallel, the integration of a diffusion model serves to diminish the influence of noise and irrelevant background data in medical images, thereby improving the quality of the segmentation results. The diffusion model is instrumental in filtering out extraneous factors, allowing the network to more effectively capture the nuances and characteristics of the target regions, which in turn enhances segmentation precision. We have subjected DTAN to rigorous evaluation across three datasets: Kvasir-Sessile, Kvasir-SEG, and GlaS. Our focus was particularly drawn to the Kvasir-Sessile dataset due to its relevance to clinical applications. When benchmarked against other state-of-the-art methods, our approach demonstrated significant improvements on the Kvasir-Sessile dataset, with a 2.77% increase in mean Intersection over Union (mIoU) and a 3.06% increase in mean Dice Similarity Coefficient (mDSC). These results provide strong evidence of the DTAN's generalizability and robustness, and its distinct advantages in the task of medical image segmentation.
Affiliation(s)
- Yiyang Zhao
- School of Information and electronic engineering, Shandong Technology and Business University, Yantai, China
- Jinjiang Li
- School of Information and electronic engineering, Shandong Technology and Business University, Yantai, China
- Lu Ren
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China
- Zheng Chen
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, China.

18
Zhang W, Lu F, Su H, Hu Y. Dual-branch multi-information aggregation network with transformer and convolution for polyp segmentation. Comput Biol Med 2024; 168:107760. [PMID: 38064849 DOI: 10.1016/j.compbiomed.2023.107760] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 10/21/2023] [Accepted: 11/21/2023] [Indexed: 01/10/2024]
Abstract
Computer-Aided Diagnosis (CAD) for polyp detection offers one of the most notable showcases. By using deep learning technologies, the accuracy of polyp segmentation is surpassing that of human experts. In such a CAD process, a critical step is segmenting colorectal polyps from colonoscopy images. Despite remarkable successes attained by recent deep learning related works, much improvement is still anticipated to tackle challenging cases. For instance, the effects of motion blur and light reflection can introduce significant noise into the image. The same type of polyp has a diversity of size, color, and texture. To address such challenges, this paper proposes a novel dual-branch multi-information aggregation network (DBMIA-Net) for polyp segmentation, which is able to accurately and reliably segment a variety of colorectal polyps with efficiency. Specifically, a dual-branch encoder with transformer and convolutional neural networks (CNN) is employed to extract polyp features, and two multi-information aggregation modules are applied in the decoder to fuse multi-scale features adaptively. The two multi-information aggregation modules are the global information aggregation (GIA) module and the edge information aggregation (EIA) module. In addition, to enhance the representation learning capability of the potential channel feature association, this paper also proposes a novel adaptive channel graph convolution (ACGC). To validate the effectiveness and advantages of the proposed network, we compare it with several state-of-the-art (SOTA) methods on five public datasets. Experimental results consistently demonstrate that the proposed DBMIA-Net obtains significantly superior segmentation performance across six popularly used evaluation metrics. Especially, we achieve 94.12% mean Dice on the CVC-ClinicDB dataset, which is a 4.22% improvement compared to the previous state-of-the-art method PraNet.
Compared with SOTA algorithms, DBMIA-Net has a better fitting ability and stronger generalization ability.
Affiliation(s)
- Wenyu Zhang
- School of Information Science and Engineering, Lanzhou University, China
- Fuxiang Lu
- School of Information Science and Engineering, Lanzhou University, China.
- Hongjing Su
- School of Information Science and Engineering, Lanzhou University, China
- Yawen Hu
- School of Information Science and Engineering, Lanzhou University, China

19
Ma T, Wang K, Hu F. LMU-Net: lightweight U-shaped network for medical image segmentation. Med Biol Eng Comput 2024; 62:61-70. [PMID: 37615845 DOI: 10.1007/s11517-023-02908-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 08/08/2023] [Indexed: 08/25/2023]
Abstract
Deep learning technology has been employed for precise medical image segmentation in recent years. However, due to the limited available datasets and real-time processing requirement, the inherently complicated structure of deep learning models restricts their application in the field of medical image processing. In this work, we present a novel lightweight LMU-Net network with improved accuracy for medical image segmentation. The multilayer perceptron (MLP) and depth-wise separable convolutions are adopted in both encoder and decoder of the LMU-Net to reduce feature loss and the number of training parameters. In addition, a lightweight channel attention mechanism and convolution operation with a larger kernel are introduced in the proposed architecture to further improve the segmentation performance. Furthermore, we employ batch normalization (BN) and group normalization (GN) interchangeably in our module to minimize the estimation shift in the network. Finally, the proposed network is evaluated and compared to other architectures on publicly accessible ISIC and BUSI datasets by carrying out robust experiments with sufficient ablation considerations. The experimental results show that the proposed LMU-Net can achieve a better overall performance than existing techniques by adopting fewer parameters.
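The parameter savings that motivate the depth-wise separable convolutions mentioned above can be checked with simple arithmetic (a sketch with illustrative layer sizes, not LMU-Net's actual configuration):

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weight count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise k x k conv (one filter per input channel)
    followed by a 1x1 point-wise conv."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 -> 128 channels, 3x3 kernel
std = conv_params(64, 128, 3)        # 64 * 128 * 9 = 73728
sep = separable_params(64, 128, 3)   # 64 * 9 + 64 * 128 = 8768
ratio = std / sep                    # roughly 8.4x fewer parameters
```

The reduction factor approaches k * k for wide layers, which is why stacking separable convolutions keeps a U-shaped network lightweight without shrinking its receptive field.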
Affiliation(s)
- Ting Ma
- Southwest Petroleum University, Chengdu, China
- Ke Wang
- Southwest Petroleum University, Chengdu, China
- Feng Hu
- Jiangsu Citron Biotech Company Limited, Nantong, China.

20
He W, Zhang C, Dai J, Liu L, Wang T, Liu X, Jiang Y, Li N, Xiong J, Wang L, Xie Y, Liang X. A statistical deformation model-based data augmentation method for volumetric medical image segmentation. Med Image Anal 2024; 91:102984. [PMID: 37837690 DOI: 10.1016/j.media.2023.102984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2022] [Revised: 07/15/2023] [Accepted: 09/28/2023] [Indexed: 10/16/2023]
Abstract
The accurate delineation of organs-at-risk (OARs) is a crucial step in treatment planning during radiotherapy, as it minimizes the potential adverse effects of radiation on surrounding healthy organs. However, manual contouring of OARs in computed tomography (CT) images is labor-intensive and susceptible to errors, particularly for low-contrast soft tissue. Deep learning-based artificial intelligence algorithms surpass traditional methods but require large datasets. Obtaining annotated medical images is both time-consuming and expensive, hindering the collection of extensive training sets. To enhance the performance of medical image segmentation, augmentation strategies such as rotation and Gaussian smoothing are employed during preprocessing. However, these conventional data augmentation techniques cannot generate more realistic deformations, limiting improvements in accuracy. To address this issue, this study introduces a statistical deformation model-based data augmentation method for volumetric medical image segmentation. By applying diverse and realistic data augmentation to CT images from a limited patient cohort, our method significantly improves the fully automated segmentation of OARs across various body parts. We evaluate our framework on three datasets containing tumor OARs from the head, neck, chest, and abdomen. Test results demonstrate that the proposed method achieves state-of-the-art performance in numerous OARs segmentation challenges. This innovative approach holds considerable potential as a powerful tool for various medical imaging-related sub-fields, effectively addressing the challenge of limited data access.
Affiliation(s)
- Wenfeng He
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Chulong Zhang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jingjing Dai
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Lin Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Tangsheng Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xuan Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yuming Jiang
- Department of Radiation Oncology, Wake Forest University School of Medicine, Winston Salem, North Carolina 27157, USA
- Na Li
- Department of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, China
- Jing Xiong
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Lei Wang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xiaokun Liang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.

21
Jain S, Atale R, Gupta A, Mishra U, Seal A, Ojha A, Jaworek-Korjakowska J, Krejcar O. CoInNet: A Convolution-Involution Network With a Novel Statistical Attention for Automatic Polyp Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3987-4000. [PMID: 37768798 DOI: 10.1109/tmi.2023.3320151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/30/2023]
Abstract
Polyps are very common abnormalities in human gastrointestinal regions. Their early diagnosis may help in reducing the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Due to their varying shape, color, size, texture, and unclear boundaries, polyp segmentation in images is a challenging problem. Existing deep learning segmentation models mostly rely on convolutional neural networks that have certain limitations in learning the diversity in visual patterns at different spatial locations. Further, they fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation due to their powerful global feature extraction capabilities. But they too are supplemented by convolution layers for learning contextual local information. In the present paper, a polyp segmentation model CoInNet is proposed with a novel feature extraction mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions in images by considering the relationship between different feature maps through a statistical feature attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. It is indeed remarkable that even tiny-sized polyps with only 0.01% of an image area can be precisely segmented by CoInNet. It is crucial for clinical applications, as small polyps can be easily overlooked even in the manual examination due to the voluminous size of wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark polyp segmentation datasets.
22
Wang L, Ye M, Lu Y, Qiu Q, Niu Z, Shi H, Wang J. A combined encoder-transformer-decoder network for volumetric segmentation of adrenal tumors. Biomed Eng Online 2023; 22:106. [PMID: 37940921 PMCID: PMC10631161 DOI: 10.1186/s12938-023-01160-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 09/25/2023] [Indexed: 11/10/2023] Open
Abstract
BACKGROUND The morphology of the adrenal tumor and the clinical statistics of the adrenal tumor area are two crucial diagnostic and differential diagnostic features, indicating that precise tumor segmentation is essential. Therefore, we build a CT image segmentation method based on an encoder-decoder structure combined with a Transformer for volumetric segmentation of adrenal tumors. METHODS This study included a total of 182 patients with adrenal metastases, and an adrenal tumor volumetric segmentation method combining an encoder-decoder structure and a Transformer was constructed. The Dice Score coefficient (DSC), Hausdorff distance, Intersection over union (IOU), Average surface distance (ASD) and Mean average error (MAE) were calculated to evaluate the performance of the segmentation method. RESULTS Comparisons were made between our proposed method and other CNN-based and transformer-based methods. The results showed excellent segmentation performance, with a mean DSC of 0.858, a mean Hausdorff distance of 10.996, a mean IOU of 0.814, a mean MAE of 0.0005, and a mean ASD of 0.509. The boxplot of all test samples' segmentation performance implies that the proposed method has the lowest skewness and the highest average prediction performance. CONCLUSIONS Our proposed method can directly generate 3D lesion maps and showed excellent segmentation performance. The comparison of segmentation metrics and visualization results showed that our proposed method performed very well in segmentation.
Affiliation(s)
- Liping Wang: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Mingtao Ye: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Yanjie Lu: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
- Qicang Qiu: Zhejiang Lab, No. 1818, Western Road of Wenyi, Hangzhou, Zhejiang, China
- Zhongfeng Niu: Department of Radiology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Hengfeng Shi: Department of Radiology, Anqing Municipal Hospital, Anqing, Anhui, China
- Jian Wang: Department of Radiology, Tongde Hospital of Zhejiang Province, No. 234, Gucui Road, Hangzhou, Zhejiang, China
23
Yao T, Wang C, Wang X, Li X, Jiang Z, Qi P. Enhancing percutaneous coronary intervention with heuristic path planning and deep-learning-based vascular segmentation. Comput Biol Med 2023; 166:107540. [PMID: 37806060 DOI: 10.1016/j.compbiomed.2023.107540] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2023] [Revised: 09/21/2023] [Accepted: 09/28/2023] [Indexed: 10/10/2023]
Abstract
Percutaneous coronary intervention (PCI) is a minimally invasive technique for treating vascular diseases. PCI requires precise, real-time visualization and guidance to ensure surgical safety and efficiency. Existing mainstream guidance methods rely on hemodynamic parameters; however, these are less intuitive than images and complicate cardiologists' decision-making. This paper proposes a novel PCI guidance assistance system that combines a novel vascular segmentation network with a heuristic intervention path planning algorithm, providing cardiologists with clear, visualized information. A dataset of 1077 DSA images from 288 patients was collected in clinical practice, and a Likert scale was designed to evaluate system performance in user experiments. Results of the user experiments demonstrate that the system can generate satisfactory and reasonable paths for PCI. Our proposed method outperformed state-of-the-art baselines on three metrics (Jaccard: 0.4091, F1: 0.5626, Accuracy: 0.9583). The proposed system can effectively assist cardiologists in PCI by providing a clear segmentation of vascular structures and optimal real-time intervention paths, demonstrating great potential for robotic PCI autonomy.
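The heuristic path planning mentioned above can be sketched with a textbook A* search over a grid: below is a toy version on a 4-connected occupancy grid (1 = traversable vessel, 0 = blocked) with a Manhattan heuristic. This is a generic illustration of heuristic planning, not the paper's actual algorithm, and the grid representation is our own assumption.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 4-connected grid of 0/1 cells.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]                # (f, g, node, path)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 1:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None
```

With unit step costs and an admissible heuristic, the first time the goal is popped the path is optimal; in the PCI setting the cost function would additionally encode vessel geometry.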
Affiliation(s)
- Tianliang Yao: College of Electronics and Information Engineering, Tongji University, Shanghai, 200092, China
- Chengjia Wang: School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AP, United Kingdom; BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom
- Xinyi Wang: School of Medicine, Tongji University, Shanghai, 200092, China
- Xiang Li: Departments of Cardiology and Nursing, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai, 200072, China
- Zhaolei Jiang: Department of Cardiothoracic Surgery, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Peng Qi: College of Electronics and Information Engineering, Tongji University, Shanghai, 200092, China
24
Lee GE, Cho J, Choi SI. Shallow and reverse attention network for colon polyp segmentation. Sci Rep 2023; 13:15243. [PMID: 37709828 PMCID: PMC10502036 DOI: 10.1038/s41598-023-42436-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Accepted: 09/10/2023] [Indexed: 09/16/2023] Open
Abstract
Polyp segmentation is challenging because the boundary between polyps and mucosa is ambiguous. Several models have used attention mechanisms to address this problem, but they rely on the limited information obtained from a single type of attention. We propose a new dual-attention network for colon polyp segmentation, called SRaNet, based on shallow and reverse attention modules. The shallow attention mechanism removes background noise while emphasizing locality by focusing on the foreground. In contrast, reverse attention helps distinguish the boundary between polyps and mucous membranes more clearly by focusing on the background. The two attention mechanisms are adaptively fused using a "Softmax Gate". Combining the two types of attention enables the model to capture complementary foreground and boundary features, so the proposed model predicts polyp boundaries more accurately than other models. We present extensive experiments on polyp benchmarks showing that the proposed method outperforms existing models on both seen and unseen data, and that the proposed dual attention module increases the explainability of the model.
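The "Softmax Gate" fusion of the two attention branches can be illustrated with a per-position two-way softmax over scalar scores; this is a loose reading of the idea with hypothetical names, since SRaNet actually gates learned feature maps rather than raw scalars.

```python
import math

def softmax_gate(shallow, reverse):
    """Fuse two attention score maps elementwise: a two-way softmax over the
    pair of scores yields gate weights, and the output is the gate-weighted sum.
    Minimal sketch of a 'softmax gate', not the paper's exact module."""
    fused = []
    for s, r in zip(shallow, reverse):
        m = max(s, r)                            # subtract max for numerical stability
        es, er = math.exp(s - m), math.exp(r - m)
        gs, gr = es / (es + er), er / (es + er)  # gate weights sum to 1 per position
        fused.append(gs * s + gr * r)
    return fused
```

Because the gate weights are a convex combination, each fused value always lies between the two branch scores, so neither branch can be fully discarded.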
Affiliation(s)
- Go-Eun Lee: Department of Computer Science and Engineering, Dankook University, Yongin, 16890, South Korea
- Jungchan Cho: School of Computing, Gachon University, Seongnam, 13120, South Korea
- Sang-Il Choi: Department of Computer Science and Engineering, Dankook University, Yongin, 16890, South Korea
25
Yang L, Zhai C, Liu Y, Yu H. CFHA-Net: A polyp segmentation method with cross-scale fusion strategy and hybrid attention. Comput Biol Med 2023; 164:107301. [PMID: 37573723 DOI: 10.1016/j.compbiomed.2023.107301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Revised: 07/10/2023] [Accepted: 07/28/2023] [Indexed: 08/15/2023]
Abstract
Colorectal cancer is a prevalent disease in modern times, with most cases arising from polyps, so polyp segmentation has garnered significant attention in the field of medical image segmentation. In recent years, variants of the U-Net network have demonstrated good results on polyp segmentation challenges. In this paper, a polyp segmentation model called CFHA-Net is proposed that combines a cross-scale feature fusion strategy with a hybrid attention mechanism. Inspired by feature learning, the encoder unit incorporates a cross-scale context fusion (CCF) module that performs cross-layer feature fusion and enhances feature information at different scales. The skip connection is optimized by the proposed triple hybrid attention (THA) module, which aggregates spatial and channel attention features from three directions to improve long-range dependence between features and help identify polyp lesion boundaries. Additionally, a dense-receptive feature fusion (DFF) module, combining dense connections with multi-receptive-field fusion, is added at the bottleneck layer to capture more comprehensive context information. Furthermore, a hybrid pooling (HP) module and a hybrid upsampling (HU) module are proposed to help the segmentation network acquire more contextual features. Experiments on three typical polyp segmentation datasets (CVC-ClinicDB, Kvasir-SEG, EndoTect) demonstrate the validity and generalization of the proposed method, with many performance metrics surpassing those of related advanced segmentation networks. CFHA-Net therefore presents a promising solution to the challenges of polyp segmentation in medical image analysis. The source code of CFHA-Net is available at https://github.com/CXzhai/CFHA-Net.git.
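One common way to realize "hybrid pooling" is to blend max pooling (which preserves strong activations) with average pooling (which preserves context) over the same window. The sketch below is our illustrative reading of the idea; CFHA-Net's exact HP module may differ, and the blending weight `alpha` is a hypothetical parameter.

```python
def hybrid_pool(window, alpha=0.5):
    """Blend max and average pooling over one flattened window:
    alpha * max + (1 - alpha) * mean.
    alpha = 1 recovers max pooling; alpha = 0 recovers average pooling."""
    return alpha * max(window) + (1 - alpha) * (sum(window) / len(window))
```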
Affiliation(s)
- Lei Yang: School of Electrical and Information Engineering, Zhengzhou University, Henan Province, 450001, China; Robot Perception and Control Engineering Laboratory of Henan Province, 450001, China
- Chenxu Zhai: School of Electrical and Information Engineering, Zhengzhou University, Henan Province, 450001, China; Robot Perception and Control Engineering Laboratory of Henan Province, 450001, China
- Yanhong Liu: School of Electrical and Information Engineering, Zhengzhou University, Henan Province, 450001, China; Robot Perception and Control Engineering Laboratory of Henan Province, 450001, China
- Hongnian Yu: School of Electrical and Information Engineering, Zhengzhou University, Henan Province, 450001, China; Built Environment, Edinburgh Napier University, Edinburgh EH10 5DT, UK
26
Zhang Q, Cheng J, Zhou C, Jiang X, Zhang Y, Zeng J, Liu L. PDC-Net: parallel dilated convolutional network with channel attention mechanism for pituitary adenoma segmentation. Front Physiol 2023; 14:1259877. [PMID: 37711463 PMCID: PMC10498772 DOI: 10.3389/fphys.2023.1259877] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 08/16/2023] [Indexed: 09/16/2023] Open
Abstract
Accurate segmentation of medical images is the basis and premise of intelligent diagnosis and treatment, with a wide range of clinical applications. However, the robustness and effectiveness of medical image segmentation algorithms remain challenging due to unbalanced categories, blurred boundaries, highly variable anatomical structures and a lack of training samples. We therefore present a parallel dilated convolutional network (PDC-Net) for pituitary adenoma segmentation in magnetic resonance images. First, the standard convolution block in U-Net is replaced by a basic convolution operation and a parallel dilated convolutional module (PDCM) to extract multi-level feature information at different dilations. Furthermore, a channel attention mechanism (CAM) is integrated to enhance the network's ability to distinguish between lesions and non-lesions in pituitary adenoma. We then introduce residual connections at each layer of the encoder-decoder, which address the gradient vanishing and performance degradation caused by network deepening. Finally, we employ the Dice loss to handle the class imbalance in the samples. On a self-established patient dataset from Quzhou People's Hospital, the method achieves 90.92% sensitivity, 99.68% specificity, 88.45% Dice and 79.43% intersection over union (IoU).
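Dilated convolution, the building block of the parallel branches above, samples the input with gaps of size `dilation`, enlarging the receptive field without adding weights. Below is a simplified 1-D, "valid"-mode stand-in (cross-correlation form) for the 2-D operation; the function name is ours.

```python
def dilated_conv1d(x, w, dilation=1):
    """1-D dilated convolution, 'valid' padding, cross-correlation form.
    out[i] = sum_k w[k] * x[i + k * dilation].
    With dilation d and kernel size K the receptive field is (K - 1) * d + 1."""
    span = (len(w) - 1) * dilation  # receptive field minus one
    return [sum(w[k] * x[i + k * dilation] for k in range(len(w)))
            for i in range(len(x) - span)]
```

Running several such branches with different dilations in parallel and concatenating their outputs is the essence of a parallel dilated module.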
Affiliation(s)
- Qile Zhang: Department of Rehabilitation, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People’s Hospital, Quzhou, China
- Jianzhen Cheng: Department of Rehabilitation, Quzhou Third Hospital, Quzhou, China
- Chun Zhou: Department of Rehabilitation, The Quzhou Affiliated Hospital of Wenzhou Medical University, Quzhou People’s Hospital, Quzhou, China
- Xiaoliang Jiang: College of Mechanical Engineering, Quzhou University, Quzhou, China
- Yuanxiang Zhang: College of Mechanical Engineering, Quzhou University, Quzhou, China
- Jiantao Zeng: College of Mechanical Engineering, Quzhou University, Quzhou, China
- Li Liu: Department of Thyroid and Breast Surgery, Kecheng District People’s Hospital, Quzhou, China
27
Huang ZH, Liu YY, Wu WJ, Huang KW. Design and Validation of a Deep Learning Model for Renal Stone Detection and Segmentation on Kidney-Ureter-Bladder Images. Bioengineering (Basel) 2023; 10:970. [PMID: 37627855 PMCID: PMC10452034 DOI: 10.3390/bioengineering10080970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 08/11/2023] [Accepted: 08/13/2023] [Indexed: 08/27/2023] Open
Abstract
Kidney-ureter-bladder (KUB) imaging is used as a frontline investigation for patients with suspected renal stones. In this study, we designed a computer-aided diagnostic system for KUB imaging to assist clinicians in accurately diagnosing urinary tract stones. The image dataset used for training and testing the model comprised 485 images provided by Kaohsiung Chang Gung Memorial Hospital. The proposed system was divided into two subsystems. Subsystem 1 used Inception-ResNetV2 to train a deep learning model on preprocessed KUB images to verify the improvement in diagnostic accuracy achieved by image preprocessing. Subsystem 2 trained an image segmentation model using a ResNet/U-Net hybrid to accurately identify the contours of renal stones. Classification performance was evaluated using a confusion matrix. We conclude that the model can assist clinicians in accurately diagnosing renal stones from KUB imaging, thereby aiding diagnosis, reducing patients' waiting time for CT scans, and minimizing the radiation dose absorbed by the body.
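The confusion-matrix evaluation mentioned above reduces, in the binary case, to four counts from which the standard classification metrics follow. A small sketch (function name and counts are illustrative):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard metrics from a 2x2 confusion matrix:
    tp/fp/fn/tn = true/false positives and negatives."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }
```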
Affiliation(s)
- Zih-Hao Huang: Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
- Yi-Yang Liu: Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan; Department of Urology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung City 83301, Taiwan
- Wei-Juei Wu: Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
- Ko-Wei Huang: Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung City 807618, Taiwan
28
Feng Y, Cong Y, Xing S, Wang H, Zhao C, Zhang X, Yao Q. Distance Matters: A Distance-Aware Medical Image Segmentation Algorithm. Entropy (Basel) 2023; 25:1169. [PMID: 37628199 PMCID: PMC10453236 DOI: 10.3390/e25081169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 08/01/2023] [Accepted: 08/03/2023] [Indexed: 08/27/2023]
Abstract
The transformer-based U-Net network structure has gained popularity in medical image segmentation. However, most networks overlook the impact of the distance between patches on the encoding process. This paper proposes a novel GC-TransUnet for medical image segmentation. The key innovation is that it accounts for the relationships between patch blocks based on their distances, optimizing the encoding process of traditional transformer networks; this improves encoding efficiency and reduces computational costs. The proposed GC-TransUnet is combined with U-Net to accomplish the segmentation task. In the encoder, the traditional vision transformer is replaced by the global context vision transformer (GC-ViT), eliminating the need for a CNN while retaining skip connections for the subsequent decoder. Experimental results demonstrate that the proposed algorithm achieves superior segmentation results compared to other algorithms when applied to medical images.
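One simple way to make attention distance-aware, in the spirit of the idea above, is to subtract a distance penalty from each attention logit before the softmax. This toy 1-D sketch is our own illustration (names and the `decay` parameter are hypothetical); GC-ViT's actual mechanism is a learned global-context design, not this bias.

```python
import math

def distance_biased_attention(scores, positions, decay=0.1):
    """Row-wise softmax attention where each logit is penalized by the
    distance between the query and key patch positions (1-D toy version)."""
    out = []
    for qi, q in enumerate(positions):
        logits = [s - decay * abs(q - k) for s, k in zip(scores[qi], positions)]
        m = max(logits)                       # stabilize the softmax
        exps = [math.exp(l - m) for l in logits]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out
```

With uniform raw scores, nearby patches receive strictly higher attention weight, which is the qualitative behavior the distance-aware encoding aims for.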
Affiliation(s)
- Yuncong Feng: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China; Artificial Intelligence Research Institute, Changchun University of Technology, Changchun 130012, China; Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Yeming Cong: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
- Shuaijie Xing: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
- Hairui Wang: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
- Cuixing Zhao: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
- Xiaoli Zhang: Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun 130012, China
- Qingan Yao: College of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
29
Liu Y, Wang J, Wu C, Liu L, Zhang Z, Yu H. Fovea-UNet: detection and segmentation of lymph node metastases in colorectal cancer with deep learning. Biomed Eng Online 2023; 22:74. [PMID: 37479991 PMCID: PMC10362618 DOI: 10.1186/s12938-023-01137-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Accepted: 07/11/2023] [Indexed: 07/23/2023] Open
Abstract
BACKGROUND Colorectal cancer is one of the most serious malignant tumors, and lymph node metastasis (LNM) from colorectal cancer is a major factor in patient management and prognosis. Accurate image detection of LNM is therefore an important task in cancer diagnosis. Recently, the U-Net architecture based on convolutional neural networks (CNNs) has been widely used to segment images for more precise cancer diagnosis. However, accurate segmentation of important regions with high diagnostic value remains a great challenge due to the limited capability of CNNs and encoder-decoder structures in aggregating detailed and non-local contextual information. In this work, we propose a high-performance, low-computation solution. METHODS Inspired by the fovea in visual neuroscience, a novel U-Net-based framework for cancer segmentation, named Fovea-UNet, is proposed; it adaptively adjusts resolution according to the importance of the information and selectively focuses on the region most relevant to colorectal LNM. Specifically, we design an adaptively optimized pooling operation called Fovea Pooling (FP), which dynamically aggregates detailed and non-local contextual information according to pixel-level feature importance. In addition, an improved lightweight backbone based on GhostNet is adopted to reduce the computational cost introduced by FP. RESULTS Experimental results show that the proposed framework achieves higher performance than other state-of-the-art segmentation networks, with 79.38% IoU, 88.51% DSC, 92.82% sensitivity and 84.57% precision on the LNM dataset, while the parameter size is reduced to 23.23 MB. CONCLUSIONS The proposed framework provides a valid tool for cancer diagnosis, especially for LNM of colorectal cancer.
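Importance-aware pooling of the kind described above can be caricatured as a softmax-weighted mean over a window: pixels with higher importance scores dominate the pooled value. This is a loose sketch of the idea behind Fovea Pooling, not the paper's exact operator; all names are ours.

```python
import math

def fovea_pool(values, importance):
    """Importance-weighted pooling of one window: softmax the per-pixel
    importance scores, then take the weighted mean of the values."""
    m = max(importance)                       # stabilize the softmax
    w = [math.exp(s - m) for s in importance]
    z = sum(w)
    return sum(v * wi / z for v, wi in zip(values, w))
```

Uniform importance recovers plain average pooling, while sharply peaked importance approaches max-like behavior at the salient pixel.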
Affiliation(s)
- Yajiao Liu: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Chenpeng Wu: Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Liyun Liu: Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Zhiyong Zhang: Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Haitao Yu: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
30
NST: A nuclei segmentation method based on transformer for gastrointestinal cancer pathological images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
31
Hu K, Chen W, Sun Y, Hu X, Zhou Q, Zheng Z. PPNet: Pyramid pooling based network for polyp segmentation. Comput Biol Med 2023; 160:107028. [PMID: 37201273 DOI: 10.1016/j.compbiomed.2023.107028] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 04/24/2023] [Accepted: 05/09/2023] [Indexed: 05/20/2023]
Abstract
Colonoscopy is the gold standard method for investigating the gastrointestinal tract. Localizing polyps in colonoscopy images plays a vital role in colonoscopy screening and in subsequent treatment, e.g., polyp resection. Many deep learning-based methods have been applied to polyp segmentation, yet precise polyp segmentation remains an open issue. Considering the effectiveness of the Pyramid Pooling Transformer (P2T) in modeling long-range dependencies and capturing robust contextual features, as well as the power of pyramid pooling for feature extraction, we propose a pyramid pooling based network for polyp segmentation, namely PPNet. We first adopt P2T as the encoder to extract more powerful features. Next, a pyramid feature fusion module (PFFM) combined with a channel attention scheme learns a global contextual feature to guide information transfer in the decoder branch. To enhance feature extraction during the decoder stage layer by layer, we introduce a memory-keeping pyramid pooling module (MPPM) into each side branch of the encoder and transmit the corresponding feature to each lower-level side branch. Experimental results on five public colorectal polyp segmentation datasets show that our method performs better than several state-of-the-art polyp extraction networks, demonstrating the effectiveness of pyramid pooling for colorectal polyp segmentation.
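The pyramid pooling principle underlying PPNet can be shown in miniature: average-pool the same feature map into several grid resolutions and concatenate the results, so coarse levels summarize global context and fine levels keep locality. A pure-Python sketch under simplifying assumptions (2-D list input, bin counts dividing the map size; names are ours):

```python
def adaptive_avg_pool2d(img, bins):
    """Average-pool a 2-D grid into bins x bins equal cells
    (assumes the grid dimensions are divisible by `bins`)."""
    h, w = len(img), len(img[0])
    out = []
    for bi in range(bins):
        r0, r1 = bi * h // bins, (bi + 1) * h // bins
        row = []
        for bj in range(bins):
            c0, c1 = bj * w // bins, (bj + 1) * w // bins
            cell = [img[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(cell) / len(cell))
        out.append(row)
    return out

def pyramid_pooling(img, levels=(1, 2)):
    """Concatenate the flattened pooled maps from each pyramid level."""
    return [v for n in levels for row in adaptive_avg_pool2d(img, n) for v in row]
```

In a real network each level would additionally pass through a 1x1 convolution and be upsampled before fusion.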
Affiliation(s)
- Keli Hu: Department of Computer Science and Engineering, Shaoxing University, Shaoxing, 312000, PR China; Cancer Center, Department of Gastroenterology, Zhejiang Provincial People's Hospital (Affiliated People's Hospital, Hangzhou Medical College), Hangzhou, 310014, PR China; Information Technology R&D Innovation Center of Peking University, Shaoxing, 312000, PR China
- Wenping Chen: Department of Computer Science and Engineering, Shaoxing University, Shaoxing, 312000, PR China
- YuanZe Sun: Department of Computer Science and Engineering, Shaoxing University, Shaoxing, 312000, PR China
- Xiaozhao Hu: Shaoxing People's Hospital, Shaoxing, 312000, PR China
- Qianwei Zhou: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, PR China
- Zirui Zheng: Department of Computer Science and Engineering, Shaoxing University, Shaoxing, 312000, PR China
32
Semantic Segmentation of Digestive Abnormalities from WCE Images by Using AttResU-Net Architecture. Life (Basel) 2023; 13:life13030719. [PMID: 36983874 PMCID: PMC10051085 DOI: 10.3390/life13030719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 02/04/2023] [Accepted: 03/03/2023] [Indexed: 03/09/2023] Open
Abstract
Colorectal cancer is one of the most common malignancies and the leading cause of cancer death worldwide. Wireless capsule endoscopy is currently the most frequent method for detecting precancerous digestive diseases; thus, precise and early polyp segmentation has significant clinical value in reducing the probability of cancer development. However, manual examination is a time-consuming and tedious task for doctors, so many computational techniques have been proposed to automatically segment anomalies from endoscopic images. In this paper, we present an end-to-end 2D attention residual U-Net architecture (AttResU-Net), which integrates an attention mechanism and residual units into U-Net to further enhance polyp and bleeding segmentation performance. To suppress irrelevant areas of an input image while emphasizing salient features, AttResU-Net inserts a sequence of attention units between corresponding downsampling and upsampling steps. In turn, the residual blocks propagate information across layers, allowing a deeper network that avoids the vanishing gradient problem in each encoder, improving channel interdependencies while lowering computational cost. Multiple publicly available datasets were employed to evaluate and verify the proposed method. Our highest-performing model, AttResU-Net, achieved an accuracy of 99.16%, a Dice coefficient of 94.91%, and a Jaccard index of 90.32% on the MICCAI 2017 WCE dataset. The findings show that the proposed AttResU-Net surpasses its baselines and performs comparably to existing polyp segmentation approaches.
33
Shamrat FJM, Azam S, Karim A, Ahmed K, Bui FM, De Boer F. High-precision multiclass classification of lung disease through customized MobileNetV2 from chest X-ray images. Comput Biol Med 2023; 155:106646. [PMID: 36805218 DOI: 10.1016/j.compbiomed.2023.106646] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 01/30/2023] [Accepted: 02/06/2023] [Indexed: 02/12/2023]
Abstract
In this study, multiple lung diseases are diagnosed with the help of neural networks: Emphysema, Infiltration, Mass, Pleural Thickening, Pneumonia, Pneumothorax, Atelectasis, Edema, Effusion, Hernia, Cardiomegaly, Pulmonary Fibrosis, Nodule, and Consolidation, studied from the ChestX-ray14 dataset. A proposed fine-tuned MobileLungNetV2 model is employed for the analysis. Initially, the X-ray images are pre-processed with CLAHE to increase image contrast, a Gaussian filter to denoise the images, and data augmentation methods. The pre-processed images are fed into several transfer learning models, including InceptionV3, AlexNet, DenseNet121, VGG19, and MobileNetV2. Among these, MobileNetV2 achieved the highest overall accuracy of 91.6% in classifying lesions on chest X-ray images. This model was then fine-tuned to obtain the MobileLungNetV2 model, which achieves an extraordinary classification accuracy of 96.97% on the pre-processed data. From a confusion matrix over all classes, the model shows overall high precision, recall, and specificity of 96.71%, 96.83% and 99.78%, respectively. The study employs Grad-CAM output to produce heatmaps of disease detection. The proposed model shows promising results in classifying multiple lesions on chest X-ray images.
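The contrast-enhancement step above builds on histogram equalization; CLAHE applies the same CDF-remapping idea per tile with contrast clipping. As a hedged illustration, here is plain global histogram equalization on a flat grayscale image (a simplified cousin of CLAHE, not the study's pipeline; names are ours).

```python
def equalize_histogram(img, levels=256):
    """Global histogram equalization of a flat list of integer gray values:
    map each value through the normalized cumulative histogram."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c)   # first nonzero CDF value
    n = len(img)
    scale = (levels - 1) / (n - cdf_min) if n > cdf_min else 0
    return [round((cdf[v] - cdf_min) * scale) for v in img]
```

The remapping stretches frequently occurring gray levels across the full range, which is why equalized X-ray images show more visible soft-tissue detail.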
Affiliation(s)
- FM Javed Mehedi Shamrat: Department of Software Engineering, Daffodil International University, Birulia, 1216, Dhaka, Bangladesh
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
- Asif Karim: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
- Kawsar Ahmed: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada; Group of Bio-photomatiχ, Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Francis M Bui: Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Friso De Boer: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
34
Dual parallel net: A novel deep learning model for rectal tumor segmentation via CNN and transformer with Gaussian Mixture prior. J Biomed Inform 2023; 139:104304. [PMID: 36736447 DOI: 10.1016/j.jbi.2023.104304] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 12/27/2022] [Accepted: 01/29/2023] [Indexed: 02/05/2023]
Abstract
Segmentation of rectal cancerous regions from Magnetic Resonance (MR) images helps doctors define the extent of rectal cancer and judge its severity, so rectal tumor segmentation is crucial for improving the accuracy of rectal cancer diagnosis. However, accurate segmentation remains challenging because the shape of rectal tumors varies significantly and the tumor is often indistinguishable from the surrounding tissue. In addition, early work on rectal tumor segmentation mostly used convolutional neural networks (CNNs), whose small receptive fields capture only local information while ignoring the global information of the image. Since global information plays a crucial role in rectal tumor segmentation, traditional CNN-based methods usually cannot achieve satisfactory results. In this paper, we propose an encoder-decoder network named Dual Parallel Net (DuPNet), which fuses a transformer with a classical CNN to capture both global and local information. Meanwhile, to capture features at different scales while avoiding accuracy loss and reducing parameters, we design a feature adaptive block (FAB) in the skip connection between encoder and decoder. Furthermore, to effectively exploit prior information about rectal tumor shape, we design a Gaussian mixture prior and embed it in the self-attention mechanism of the transformer, leading to robust feature representations and accurate segmentation results. Extensive ablation experiments on the dataset from the Shanxi Cancer Hospital verify the effectiveness of the proposed dual parallel encoder, FAB and Gaussian mixture prior. In comparison with state-of-the-art methods, our method achieved a Mean Intersection over Union (MIoU) of 89.34% on the test set. We also evaluated the generalizability of our method on the dataset from Xinhua Hospital; the promising results verify its superiority.
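A Gaussian mixture prior of the kind embedded in the self-attention above evaluates, for some scalar quantity (e.g. a normalized distance or coordinate), a weighted sum of Gaussian densities; that density can then bias attention logits toward plausible tumor shapes. The 1-D sketch below is our reading of the general construction, not DuPNet's implementation; all names are ours.

```python
import math

def gaussian_mixture_pdf(x, weights, means, sigmas):
    """Density of a 1-D Gaussian mixture sum_k w_k * N(x; mu_k, sigma_k)."""
    total = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        total += w * math.exp(-((x - mu) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
    return total
```

In an attention layer, one would add `log` of such a prior (or the prior itself) to the pre-softmax scores, so positions consistent with the learned shape prior receive larger weights.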
35
DBCGN: dual branch cascade graph network for skin lesion segmentation. Int J Mach Learn Cyb 2023. [DOI: 10.1007/s13042-023-01802-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/27/2023]
36
Al-Battal AF, Lerman IR, Nguyen TQ. Multi-path decoder U-Net: A weakly trained real-time segmentation network for object detection and localization in ultrasound scans. Comput Med Imaging Graph 2023; 107:102205. [PMID: 37030216 DOI: 10.1016/j.compmedimag.2023.102205] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 02/19/2023] [Accepted: 02/19/2023] [Indexed: 04/10/2023]
Abstract
Detecting and localizing an anatomical structure of interest within the field of view of an ultrasound scan is an essential step in many diagnostic and therapeutic procedures. However, ultrasound scans suffer from high levels of variability across sonographers and patients, making it challenging for sonographers to accurately identify and locate these structures without extensive experience. Segmentation-based convolutional neural networks (CNNs) have been proposed to assist sonographers in this task. Despite their accuracy, these networks require pixel-wise annotations for training, an expensive and labor-intensive operation that requires an experienced practitioner to identify the precise outline of the structures of interest. This complicates, delays, and increases the cost of network training and deployment. To address this problem, we propose a multi-path decoder U-Net architecture that is trained on bounding box segmentation maps and does not require pixel-wise annotations. We show that the network can be trained on small training sets, as is typical of medical imaging datasets, reducing the cost and time needed for deployment and use in clinical settings. The multi-path decoder design allows for better training of deeper layers and earlier attention to the target anatomical structures of interest. This architecture offers up to a 7% relative improvement over the U-Net architecture in localization and detection performance, with an increase of only 0.75% in the number of parameters. Its performance is on par with, or slightly better than, the more computationally expensive U-Net++, which has 20% more parameters, making the proposed architecture a more computationally efficient alternative for real-time object detection and localization in ultrasound scans.
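The key idea above is weak supervision: training on bounding-box segmentation maps instead of pixel-wise outlines. A minimal sketch of that label-preparation step is below; the function name and the end-exclusive box convention are assumptions, not taken from the paper.

```python
import numpy as np

def bboxes_to_mask(shape, boxes):
    """Rasterize (row0, col0, row1, col1) boxes, end-exclusive, into a
    binary segmentation map usable as a weak training target."""
    mask = np.zeros(shape, dtype=np.uint8)
    for r0, c0, r1, c1 in boxes:
        mask[r0:r1, c0:c1] = 1          # fill the box interior
    return mask
```

Such box-derived masks are much cheaper to produce than pixel-wise outlines, at the cost of labeling some background pixels inside each box as foreground.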
Affiliation(s)
- Abdullah F Al-Battal
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA; Electrical Engineering Department, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia.
- Imanuel R Lerman
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA; UC San Diego Health, University of California, San Diego, CA 92093, USA
- Truong Q Nguyen
- Electrical and Computer Engineering Department, University of California, San Diego, CA 92093, USA
|
37
|
Rajaraman S, Yang F, Zamzmi G, Xue Z, Antani S. Assessing the Impact of Image Resolution on Deep Learning for TB Lesion Segmentation on Frontal Chest X-rays. Diagnostics (Basel) 2023; 13:diagnostics13040747. [PMID: 36832235 PMCID: PMC9955202 DOI: 10.3390/diagnostics13040747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Revised: 02/10/2023] [Accepted: 02/15/2023] [Indexed: 02/18/2023] Open
Abstract
Deep learning (DL) models are state-of-the-art in segmenting anatomical and disease regions of interest (ROIs) in medical images. Particularly, a large number of DL-based techniques have been reported using chest X-rays (CXRs). However, these models are reportedly trained on reduced image resolutions for reasons related to the lack of computational resources. Literature is sparse in discussing the optimal image resolution to train these models for segmenting the tuberculosis (TB)-consistent lesions in CXRs. In this study, we investigated the performance variations with an Inception-V3 UNet model using various image resolutions with/without lung ROI cropping and aspect ratio adjustments and identified the optimal image resolution through extensive empirical evaluations to improve TB-consistent lesion segmentation performance. We used the Shenzhen CXR dataset for the study, which includes 326 normal patients and 336 TB patients. We proposed a combinatorial approach consisting of storing model snapshots, optimizing segmentation threshold and test-time augmentation (TTA), and averaging the snapshot predictions, to further improve performance with the optimal resolution. Our experimental results demonstrate that higher image resolutions are not always necessary; however, identifying the optimal image resolution is critical to achieving superior performance.
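The combinatorial approach above (snapshot averaging, test-time augmentation, and an optimized segmentation threshold) can be sketched as follows. The specific flip set, function names, and how the threshold folds in are illustrative assumptions, since the abstract does not spell out the exact augmentations used.

```python
import numpy as np

def tta_predict(predict, image):
    """Average a model's probability maps over flip-based test-time
    augmentations, inverting each flip before averaging."""
    flips = [
        (lambda x: x,          lambda y: y),           # identity
        (lambda x: x[:, ::-1], lambda y: y[:, ::-1]),  # horizontal flip
        (lambda x: x[::-1, :], lambda y: y[::-1, :]),  # vertical flip
    ]
    preds = [inv(predict(fwd(image))) for fwd, inv in flips]
    return np.mean(preds, axis=0)

def ensemble_predict(snapshots, image, threshold=0.5):
    """Average TTA predictions over model snapshots, then binarize
    with an (optimized) segmentation threshold."""
    prob = np.mean([tta_predict(p, image) for p in snapshots], axis=0)
    return (prob >= threshold).astype(np.uint8)
```

Here `snapshots` is any list of callables mapping an image to a probability map, so stored training snapshots of the same network slot in directly.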
|
38
|
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720 DOI: 10.1016/j.gie.2022.08.043] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 01/28/2023]
Abstract
BACKGROUND AND AIMS Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess accessibility, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS We identified 22 databases with open access, 3 databases with open access with barriers, and 15 databases with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were under-reported in almost all databases.
CONCLUSIONS This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
|
39
|
Nachmani R, Nidal I, Robinson D, Yassin M, Abookasis D. Segmentation of polyps based on pyramid vision transformers and residual block for real-time endoscopy imaging. J Pathol Inform 2023; 14:100197. [PMID: 36844703 PMCID: PMC9945716 DOI: 10.1016/j.jpi.2023.100197] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 01/22/2023] [Accepted: 01/22/2023] [Indexed: 01/27/2023] Open
Abstract
Polyp segmentation is an important task in the early identification of colon polyps for the prevention of colorectal cancer. Numerous machine learning methods have been utilized in attempts to solve this task, with varying levels of success. A polyp segmentation method that is both accurate and fast could make a huge impact on colonoscopy exams, aiding real-time detection as well as enabling faster and cheaper offline analysis. Thus, recent studies have worked to produce networks that are more accurate and faster than the previous generation of networks (e.g., NanoNet). Here, we propose the ResPVT architecture for polyp segmentation. This platform uses transformers as a backbone and far surpasses all previous networks, not only in accuracy but also in frame rate, which may drastically reduce costs in both real-time and offline analysis and enable the widespread application of this technology.
Affiliation(s)
- Roi Nachmani
- Department of Electrical and Electronics Engineering, Ariel University, Ariel 407000, Israel
- Issa Nidal
- Department of Surgery, Hasharon Hospital, Rabin Medical Center, affiliated with Tel Aviv University School of Medicine, Petah Tikva, Israel
- Dror Robinson
- Department of Orthopedics, Hasharon Hospital, Rabin Medical Center, affiliated with Tel Aviv University School of Medicine, Petah Tikva, Israel
- Mustafa Yassin
- Department of Orthopedics, Hasharon Hospital, Rabin Medical Center, affiliated with Tel Aviv University School of Medicine, Petah Tikva, Israel
- David Abookasis
- Department of Electrical and Electronics Engineering, Ariel University, Ariel 407000, Israel
- Ariel Photonics Center, Ariel University, Ariel 407000, Israel
- Corresponding author.
|
40
|
ELKarazle K, Raman V, Then P, Chua C. Detection of Colorectal Polyps from Colonoscopy Using Machine Learning: A Survey on Modern Techniques. SENSORS (BASEL, SWITZERLAND) 2023; 23:1225. [PMID: 36772263 PMCID: PMC9953705 DOI: 10.3390/s23031225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 01/08/2023] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
Given the increased interest in utilizing artificial intelligence as an assistive tool in the medical sector, colorectal polyp detection and classification using deep learning techniques has been an active area of research in recent years. The motivation for researching this topic is that physicians miss polyps from time to time due to fatigue and lack of experience in carrying out the procedure. Unidentified polyps can cause further complications and ultimately lead to colorectal cancer (CRC), one of the leading causes of cancer mortality. Although various techniques have been presented recently, several key issues, such as the lack of sufficient training data, white-light reflection, and blur, affect the performance of such methods. This paper presents a survey of recently proposed methods for detecting polyps from colonoscopy. The survey covers benchmark dataset analysis, evaluation metrics, common challenges, standard methods of building polyp detectors, and a review of the latest work in the literature. We conclude this paper by providing a precise analysis of the gaps and trends discovered in the reviewed literature for future work.
Affiliation(s)
- Khaled ELKarazle
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Valliappan Raman
- Department of Artificial Intelligence and Data Science, Coimbatore Institute of Technology, Coimbatore 641014, India
- Patrick Then
- School of Information and Communication Technologies, Swinburne University of Technology, Sarawak Campus, Kuching 93350, Malaysia
- Caslon Chua
- Department of Computer Science and Software Engineering, Swinburne University of Technology, Melbourne 3122, Australia
|
41
|
Ren H, Ren C, Guo Z, Zhang G, Luo X, Ren Z, Tian H, Li W, Yuan H, Hao L, Wang J, Zhang M. A novel approach for automatic segmentation of prostate and its lesion regions on magnetic resonance imaging. Front Oncol 2023; 13:1095353. [PMID: 37152013 PMCID: PMC10154598 DOI: 10.3389/fonc.2023.1095353] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 03/30/2023] [Indexed: 05/09/2023] Open
Abstract
Objective To develop an accurate and automatic segmentation model based on a convolutional neural network to segment the prostate and its lesion regions. Methods Of 180 subjects, 122 were healthy individuals and 58 were patients with prostate cancer. For each subject, all slices of the prostate were included in the diffusion-weighted images (DWIs). A novel deep convolutional neural network (DCNN) is proposed to automatically segment the prostate and its lesion regions. This model is inspired by the U-Net model, with the encoding-decoding path as the backbone, and incorporates dense blocks, attention mechanisms, and group normalization with Atrous Spatial Pyramid Pooling. Data augmentation was used to avoid overfitting in training. In the experimental phase, the data set was randomly divided into a training set (70%) and a testing set (30%), and four-fold cross-validation was used to obtain results for each metric. Results In terms of IoU, Dice score, accuracy, sensitivity, and 95% Hausdorff Distance, the proposed model achieved 86.82%, 93.90%, 94.11%, 93.8%, and 7.84 for the prostate, and 79.2%, 89.51%, 88.43%, 89.31%, and 8.39 for the lesion region. Compared to the state-of-the-art models FCN, U-Net, U-Net++, and ResU-Net, the proposed segmentation model achieved more promising results. Conclusion The proposed model yielded excellent performance in accurate and automatic segmentation of the prostate and lesion regions, revealing that the novel deep convolutional neural network could be used in clinical disease treatment and diagnosis.
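Among the metrics reported above, the 95% Hausdorff Distance is less familiar than Dice or IoU. A minimal NumPy sketch for binary masks is below; it builds the full pairwise distance matrix, so it is only suitable for small masks, and the function name is an assumption.

```python
import numpy as np

def hd95(mask_a, mask_b):
    """95th-percentile symmetric Hausdorff distance between two binary masks.
    Uses the full pairwise distance matrix, so keep the masks small."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    # pairwise Euclidean distances via broadcasting: (n_a, n_b)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    forward = d.min(axis=1)    # each point in A to its nearest point in B
    backward = d.min(axis=0)   # each point in B to its nearest point in A
    return max(np.percentile(forward, 95), np.percentile(backward, 95))
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier boundary pixels, which is why it is preferred over the plain Hausdorff distance in segmentation papers.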
Affiliation(s)
- Huipeng Ren
- Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Chengjuan Ren
- Department of Language Intelligence, Sichuan International Studies University, Chongqing, China
- Ziyu Guo
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Guangnan Zhang
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Xiaohui Luo
- Department of Urology, Baoji Central Hospital, Baoji, China
- Zhuanqin Ren
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Hongzhe Tian
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Wei Li
- Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Hao Yuan
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Lele Hao
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Jiacheng Wang
- Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Ming Zhang
- Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
- *Correspondence: Ming Zhang,
|
42
|
Peng Y, Yu D, Guo Y. MShNet: Multi-scale feature combined with h-network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
43
|
Baban NS, Saha S, Orozaliev A, Kim J, Bhattacharjee S, Song YA, Karri R, Chakrabarty K. Structural Attacks and Defenses for Flow-Based Microfluidic Biochips. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:1261-1275. [PMID: 36350866 DOI: 10.1109/tbcas.2022.3220758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Flow-based microfluidic biochips (FMBs) have seen rapid commercialization and deployment in recent years for point-of-care and clinical diagnostics. However, the outsourcing of FMB design and manufacturing makes them susceptible to malicious physical-level and intellectual property (IP)-theft attacks. This work demonstrates the first structure-based (SB) attack on representative commercial FMBs. The SB attack maliciously decreases the heights of the FMB reaction chambers to produce false-negative results. We validate this attack experimentally using fluorescence microscopy, which showed a high correlation (R2 = 0.987) between chamber height and the fluorescence intensity of DNA amplified by polymerase chain reaction. To detect SB attacks, we adopt two existing deep learning-based anomaly detection algorithms, which achieve ~96% validation accuracy in recognizing such deliberately introduced microstructural anomalies. To safeguard FMBs against IP theft, we propose a novel device-level watermarking scheme for FMBs using the intensity-height correlation. These countermeasures can be used to proactively safeguard FMBs against SB and IP-theft attacks in the era of global pandemics and personalized medicine.
|
44
|
Jiang S, Li J. TransCUNet: UNet cross fused transformer for medical image segmentation. Comput Biol Med 2022; 150:106207. [PMID: 37859294 DOI: 10.1016/j.compbiomed.2022.106207] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 09/20/2022] [Accepted: 10/09/2022] [Indexed: 11/21/2022]
Abstract
Accurate segmentation of medical images is crucial for clinical diagnosis and evaluation. However, medical images have complex shapes, the structures of different objects differ greatly, and most medical datasets are small in scale, making effective training difficult. These problems increase the difficulty of automatic segmentation. To further improve segmentation performance, we propose a multi-branch network model, called TransCUNet, for segmenting medical images of different modalities. The model contains three structures: a cross residual fusion block (CRFB), a pyramid pooling module (PPM), and gated axial-attention, which together achieve effective extraction of high-level and low-level image features while remaining highly robust to segmentation objects of different sizes and datasets of different scales. In our experiments, we used four datasets to train, validate, and test the models. The experimental results show that TransCUNet achieves better segmentation performance than current mainstream segmentation methods, with a smaller model size and fewer parameters, giving it great potential for clinical applications.
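Of the three structures named above, the pyramid pooling module has a well-known general form: pool the feature map onto several coarser grids, upsample the pooled maps back, and stack them with the input to inject multi-scale context. A single-channel NumPy sketch follows; the scales, nearest-neighbor upsampling, and stacking are illustrative assumptions, since the abstract does not give TransCUNet's exact configuration.

```python
import numpy as np

def avg_pool_grid(feat, bins):
    """Average-pool an (H, W) map onto a bins x bins grid (assumes bins <= H, W)."""
    h, w = feat.shape
    rows = np.linspace(0, h, bins + 1).astype(int)
    cols = np.linspace(0, w, bins + 1).astype(int)
    out = np.empty((bins, bins))
    for i in range(bins):
        for j in range(bins):
            out[i, j] = feat[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
    return out

def upsample_nearest(feat, h, w):
    """Nearest-neighbor upsampling of a small map to (h, w)."""
    ri = np.arange(h) * feat.shape[0] // h
    ci = np.arange(w) * feat.shape[1] // w
    return feat[np.ix_(ri, ci)]

def pyramid_pooling(feat, scales=(1, 2, 4)):
    """Stack the input with pooled-and-upsampled context maps along a new axis."""
    h, w = feat.shape
    maps = [feat] + [upsample_nearest(avg_pool_grid(feat, s), h, w) for s in scales]
    return np.stack(maps, axis=0)   # (1 + len(scales), H, W)
```

The scale-1 branch is simply the global average broadcast everywhere, which is what gives the module its long-range context.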
Affiliation(s)
- Shen Jiang
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Jinjiang Li
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China.
|
45
|
Zhang W, Fu C, Zheng Y, Zhang F, Zhao Y, Sham CW. HSNet: A hybrid semantic network for polyp segmentation. Comput Biol Med 2022; 150:106173. [PMID: 36257278 DOI: 10.1016/j.compbiomed.2022.106173] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 09/18/2022] [Accepted: 10/01/2022] [Indexed: 11/29/2022]
Abstract
Automatic polyp segmentation can help physicians effectively locate polyps (a.k.a. regions of interest) in clinical practice by screening colonoscopy images with the assistance of neural networks (NN). However, two significant bottlenecks hinder its effectiveness, disappointing physicians' expectations. (1) Polyps vary in scale, orientation, and illumination, which makes accurate segmentation difficult. (2) Current works built on the dominant encoder-decoder design tend to overlook appearance details (e.g., textures) of tiny polyps, degrading the accuracy of polyp differentiation. To alleviate these bottlenecks, we investigate a hybrid semantic network (HSNet) that adopts the advantages of both Transformers and convolutional neural networks (CNN), aiming at improving polyp segmentation. Our HSNet contains a cross-semantic attention module (CSA), a hybrid semantic complementary module (HSC), and a multi-scale prediction module (MSP). Unlike previous works on segmenting polyps, we newly insert the CSA module, which can fill the gap between low-level and high-level features via an interactive mechanism that exchanges two types of semantics from different NN attentions. With a dual-branch structure of Transformer and CNN, we newly design the HSC module to capture both long-range dependencies and local details of appearance. Besides, the MSP module can learn weights for fusing the stage-level prediction masks of the decoder. Experimentally, we compared our work with 10 state-of-the-art works, including both recent and classical ones, showing improved accuracy (across 7 evaluation metrics) on 5 benchmark datasets: it achieves 0.926/0.877 mDic/mIoU on Kvasir-SEG, 0.948/0.905 mDic/mIoU on ClinicDB, 0.810/0.735 mDic/mIoU on ColonDB, 0.808/0.74 mDic/mIoU on ETIS, and 0.903/0.839 mDic/mIoU on Endoscene. The proposed model is available at (https://github.com/baiboat/HSNet).
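The mDic/mIoU figures quoted above are dataset means of the per-image Dice coefficient and intersection-over-union. For reference, the per-image quantities for binary masks can be computed as:

```python
import numpy as np

def dice_iou(pred, target, eps=1e-8):
    """Dice coefficient and IoU for a pair of binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou
```

The `eps` term keeps both scores defined (and equal to 1) when prediction and ground truth are both empty; averaging `dice_iou` over a test set yields the mDic/mIoU style of number reported above.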
Affiliation(s)
- Wenchao Zhang
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China.
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China; Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China.
- Yu Zheng
- Department of Information Engineering, The Chinese University of Hong Kong, Sha Tin, Hong Kong Special Administrative Region.
- Fangyuan Zhang
- Department of General Surgery, Shengjing Hospital of China Medical University, Shenyang, China.
- Yanli Zhao
- School of Electrical Information Engineering, Ningxia Institute of Science and Technology, Shizuishan, 753000, China.
- Chiu-Wing Sham
- School of Computer Science, The University of Auckland, New Zealand.
|
46
|
TGANet: Text-guided attention for improved polyp segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13433:151-160. [PMID: 36780239 PMCID: PMC9912908 DOI: 10.1007/978-3-031-16437-8_15] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Colonoscopy is a gold-standard procedure but is highly operator-dependent. Automated segmentation of polyps, a precancerous precursor of colon cancer, can minimize miss rates and support timely treatment of colon cancer at an early stage. Even though deep learning methods have been developed for this task, variability in polyp size can impact model training, thereby limiting the model to the size attribute of the majority of samples in the training dataset and providing sub-optimal results for differently sized polyps. In this work, we exploit size-related and polyp-number-related features in the form of text attention during training. We introduce an auxiliary classification task to weight the text-based embedding, which allows the network to learn additional feature representations that can distinctly adapt to differently sized polyps and to cases with multiple polyps. Our experimental results demonstrate that these added text embeddings improve the overall performance of the model compared to state-of-the-art segmentation methods. We explore four different datasets and provide insights for size-specific improvements. Our proposed text-guided attention network (TGANet) can generalize well to variable-sized polyps in different datasets. Codes are available at https://github.com/nikhilroxtomar/TGANet.
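The abstract describes an auxiliary classification task that weights text-based attribute embeddings (polyp size, polyp count). A toy sketch of one plausible reading is below: classifier probabilities weight an embedding bank, and the result gates the image features. All names and the gating form are assumptions, not TGANet's actual design.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def text_guided_features(image_feat, aux_logits, text_bank):
    """Weight attribute (text) embeddings by auxiliary-classifier
    probabilities, then use the result to gate the image features."""
    probs = softmax(aux_logits)           # (k,) attribute probabilities
    text_emb = probs @ text_bank          # (d,) probability-weighted embedding
    return image_feat * (1.0 + text_emb)  # (d,) gated features
```

A confident auxiliary prediction (e.g. "large polyp") thus amplifies the feature dimensions associated with that attribute's embedding, which is one way text attention could specialize the network per polyp size.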
|
47
|
UPolySeg: A U-Net-Based Polyp Segmentation Network Using Colonoscopy Images. GASTROENTEROLOGY INSIGHTS 2022. [DOI: 10.3390/gastroent13030027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Colonoscopy is a gold-standard procedure for examining the lower gastrointestinal region. Colorectal polyps are one condition detected through colonoscopy. Even though technical advancements have improved the early detection of colorectal polyps, there is still a high percentage of misses due to various factors. Polyp segmentation can play a significant role in detecting polyps at an early stage and can thus help reduce the severity of the disease. In this work, the authors implemented several image pre-processing techniques, such as coherence transport and contrast limited adaptive histogram equalization (CLAHE), to handle different challenges in colonoscopy images. The processed image was then segmented into polyp and normal pixels using a U-Net-based deep learning segmentation model named UPolySeg. The main framework of UPolySeg is an encoder–decoder structure with same-level feature concatenation between the encoder and decoder, along with the use of dilated convolutions. The model was experimentally verified using the publicly available Kvasir-SEG dataset, giving a global accuracy of 96.77%, a Dice coefficient of 96.86%, an IoU of 87.91%, a recall of 95.57%, and a precision of 92.29%. The new UPolySeg framework for polyp segmentation improved performance by 1.93% compared with prior work.
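The dilated convolution mentioned above enlarges the receptive field without adding parameters: a kernel of size k with dilation d spans (k-1)·d+1 inputs. A 1-D NumPy sketch (the function name is illustrative; UPolySeg of course uses 2-D convolutions):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with a gap of `dilation` between kernel taps."""
    k = len(kernel)
    span = (k - 1) * dilation + 1      # effective receptive field
    n = len(x) - span + 1              # number of valid output positions
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(n)
    ])
```

With dilation 2, a 3-tap kernel sees 5 consecutive inputs instead of 3, which is why stacking dilated layers is a cheap way to gather wider context in segmentation decoders.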
|
48
|
Xie X, Huang Y, Ning W, Wu D, Li Z, Yang H. RDAD: A reconstructive and discriminative anomaly detection model based on transformer. INT J INTELL SYST 2022. [DOI: 10.1002/int.22974] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Xin Xie
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Yuhui Huang
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Weiye Ning
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Dengquan Wu
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Zixi Li
- School of Information Engineering, East China Jiaotong University, Nanchang, China
- Hao Yang
- State Grid Jiangxi Electric Power Co. Ltd., Electric Power Research Institute, Nanchang, China
|
49
|
Itoh H, Misawa M, Mori Y, Kudo SE, Oda M, Mori K. Positive-gradient-weighted object activation mapping: visual explanation of object detector towards precise colorectal-polyp localisation. Int J Comput Assist Radiol Surg 2022; 17:2051-2063. [PMID: 35939251 DOI: 10.1007/s11548-022-02696-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 05/31/2022] [Indexed: 11/29/2022]
Abstract
PURPOSE Precise polyp detection and localisation are essential for colonoscopy diagnosis. Statistical machine learning with a large-scale data set can contribute to the construction of a computer-aided diagnosis system that prevents polyps from being overlooked or mis-localised in colonoscopy. We propose new visual explanation methods for a well-trained object detector that achieves fast and accurate polyp detection with a bounding box, towards precise automated polyp localisation. METHOD We refine gradient-weighted class activation mapping for more accurate highlighting of important patterns in the processing of a convolutional neural network. Extending the refined mapping to multiscale processing, we define object activation mapping, which highlights important object patterns in an image for a detection task. Finally, we define polyp activation mapping to achieve precise polyp localisation by integrating adaptive local thresholding into object activation mapping. We experimentally evaluate the proposed visual explanation methods with four publicly available databases. RESULTS The refined mapping visualises important patterns in each convolutional layer more accurately than the original gradient-weighted class activation mapping. The object activation mapping clearly visualises important patterns in colonoscopic images for polyp detection. The polyp activation mapping localises the detected polyps in the ETIS-Larib, CVC-Clinic and Kvasir-SEG databases with mean Dice scores of 0.76, 0.72 and 0.72, respectively. CONCLUSIONS We developed new visual explanation methods for a convolutional neural network by refining and extending gradient-weighted class activation mapping. Experimental results demonstrated the validity of the proposed methods, showing accurate visualisation of important patterns and localisation of polyps in colonoscopic images. The proposed visual explanation methods are useful for interpreting and applying a trained polyp detector.
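The final step above, thresholding an activation map to localise a polyp, can be illustrated with a global mean-plus-k·std threshold. The paper integrates adaptive local thresholding, so the sketch below is a deliberate simplification, and the function name is an assumption.

```python
import numpy as np

def localise_from_cam(cam, k=1.0):
    """Threshold an activation map at mean + k*std and return the
    bounding box (row0, col0, row1, col1), end-exclusive, of the
    above-threshold region, or None if nothing passes."""
    thresh = cam.mean() + k * cam.std()
    mask = cam > thresh
    if not mask.any():
        return None
    rows, cols = np.where(mask)
    return int(rows.min()), int(cols.min()), int(rows.max()) + 1, int(cols.max()) + 1
```

A statistics-based threshold adapts to the overall activation level of each map, which is the basic reason adaptive thresholds localise better than a fixed cut-off across images.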
Collapse
Affiliation(s)
- Hayato Itoh
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan.
- Masashi Misawa
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Yuichi Mori
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan; Clinical Effectiveness Research Group, University of Oslo, Gaustad Sykehus, Bygg 20, Sognsvannsveien 21, 0372, Oslo, Norway
- Shin-Ei Kudo
- Digestive Disease Center, Showa University Northern Yokohama Hospital, Chigasaki-chuo 35-1, Tsuzuki-ku, Yokohama, 224-8503, Japan
- Masahiro Oda
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Kensaku Mori
- Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
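As a rough illustration of the localisation step described in the abstract above, the sketch below binarises an activation map with a simple windowed-mean adaptive local threshold and scores the result with a Dice coefficient. The windowed-mean rule, window size, and offset are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def adaptive_local_threshold(cam, win=7, offset=0.0):
    """Binarise an activation map by comparing each pixel to the mean of
    its local window (a stand-in for the adaptive local thresholding step
    that turns an object activation map into a polyp mask)."""
    h, w = cam.shape
    pad = win // 2
    padded = np.pad(cam, pad, mode="edge")
    mask = np.zeros_like(cam, dtype=bool)
    for i in range(h):
        for j in range(w):
            local_mean = padded[i:i + win, j:j + win].mean()
            mask[i, j] = cam[i, j] > local_mean + offset
    return mask

def dice_score(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```

On a smooth activation map with a single peak, the strict comparison against the local mean keeps the concave region around the peak and discards the flat background, which is what a localisation mask needs before Dice evaluation against a ground-truth segmentation.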
|
50
|
Cervical Cell Segmentation Method Based on Global Dependency and Local Attention. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157742] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
The refined segmentation of nuclei and cytoplasm is the most challenging task in the automation of cervical cell screening. The U-shaped network structure has demonstrated great superiority in the field of biomedical imaging. However, the classical U-Net cannot effectively exploit mixed-domain information and contextual information, and fails to achieve satisfactory results in this task. To address these problems, this study proposes a module based on global dependency and local attention (GDLA) for contextual information modeling and feature refinement. It consists of three components computed in parallel: a global dependency module, a spatial attention module, and a channel attention module. The global dependency module models global contextual information to capture a priori knowledge of cervical cells, such as the positional dependence between the nuclei and cytoplasm and the closure and uniqueness of the nuclei. The spatial attention module combines contextual information to extract cell boundary information and refine target boundaries. The channel and spatial attention modules adapt the input information, making it easier to identify subtle but dominant differences between similar objects. Comparative and ablation experiments on the Herlev dataset demonstrate the effectiveness of the proposed method, which surpasses the most popular existing channel attention, hybrid attention, and context networks on the nuclei and cytoplasm segmentation metrics, achieving better segmentation performance than most previous advanced methods.
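To make the three parallel branches concrete, here is a minimal NumPy sketch of a GDLA-style block. The particular gating functions (softmax over positions, sigmoid gates) and the additive fusion of the branches are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gdla_block(feat):
    """Toy sketch of three parallel branches over a (C, H, W) feature map:
    a global-dependency branch (non-local-style global context),
    a channel-attention branch (squeeze-and-excitation style), and
    a spatial-attention branch (CBAM style). Outputs are summed."""
    c, h, w = feat.shape
    flat = feat.reshape(c, h * w)                 # (C, HW)

    # Global dependency: pool a global descriptor with a softmax over positions
    context = softmax(flat.mean(axis=0))          # (HW,) attention over positions
    global_vec = flat @ context                   # (C,) global context vector
    global_branch = feat + global_vec[:, None, None]

    # Channel attention: gate each channel by its global average response
    ch_gate = 1.0 / (1.0 + np.exp(-flat.mean(axis=1)))   # sigmoid of channel means
    channel_branch = feat * ch_gate[:, None, None]

    # Spatial attention: gate each position by its cross-channel mean response
    sp_gate = 1.0 / (1.0 + np.exp(-feat.mean(axis=0)))   # (H, W)
    spatial_branch = feat * sp_gate[None, :, :]

    return global_branch + channel_branch + spatial_branch
```

Computing the branches in parallel and fusing them, rather than stacking them sequentially, lets the global positional prior and the local boundary-refining gates act on the same input features, which matches the abstract's description of the module's design.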
|