1. Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085] [DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
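The abstract reports improvements on five commonly used segmentation metrics without naming them; two metrics that appear in virtually every cardiac segmentation study, Dice and IoU (Jaccard), can be sketched in a few lines of numpy (an illustrative sketch, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks (1.0 = perfect overlap)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection-over-union (Jaccard index) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0
```

Both scores range from 0 to 1; since Dice counts the overlap twice in the numerator, it is always at least as large as IoU on the same pair of masks.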
Affiliation(s)
- Gang Wang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou
- School of Computer Science, Chongqing University, Chongqing, China.
- Xin Ning
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari
- School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang
- Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, London, UK
2. Manh V, Jia X, Xue W, Xu W, Mei Z, Dong Y, Zhou J, Huang R, Ni D. An efficient framework for lesion segmentation in ultrasound images using global adversarial learning and region-invariant loss. Comput Biol Med 2024; 171:108137. [PMID: 38447499] [DOI: 10.1016/j.compbiomed.2024.108137]
Abstract
Lesion segmentation in ultrasound images is an essential yet challenging step for early evaluation and diagnosis of cancers. In recent years, many automatic CNN-based methods have been proposed to assist this task. However, most modern approaches fail to capture long-range dependencies and prior information, making it difficult to identify lesions with variable shapes, sizes, locations, and textures. To address this, we present a novel lesion segmentation framework that guides the model to learn global information about lesion characteristics and invariant features (e.g., morphological features) of lesions to improve segmentation in ultrasound images. Specifically, the segmentation model is guided to learn the characteristics of lesions from global maps using an adversarial learning scheme with a self-attention-based discriminator. We argue that under such a lesion-characteristics-based guidance mechanism, the segmentation model receives more cues about the boundaries, shapes, sizes, and positions of lesions and can produce reliable predictions. In addition, as ultrasound lesions have different textures, we embed this prior knowledge into a novel region-invariant loss to constrain the model to focus on invariant features for robust segmentation. We demonstrate our method on one in-house breast ultrasound (BUS) dataset and two public datasets (i.e., breast lesion (BUS B) and thyroid nodule from TNSCUI2020). Experimental results show that our method is particularly suitable for lesion segmentation in ultrasound images and outperforms state-of-the-art approaches with Dice scores of 0.931, 0.906, and 0.876, respectively. The proposed method provides more information about the characteristics of lesions, especially lesions with irregular shapes and small sizes, and can help current lesion segmentation models better suit clinical needs.
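The abstract does not give the exact form of the region-invariant loss; a hypothetical way such a constraint is often realized is to penalize feature variability inside the predicted lesion region, as in this sketch (the function name and formulation are illustrative assumptions, not the paper's definition):

```python
import numpy as np

def region_variance_penalty(features: np.ndarray, mask: np.ndarray) -> float:
    """Hypothetical region-level invariance penalty: mean per-channel feature
    variance inside the lesion region. Low values mean the features are
    homogeneous (invariant) across the region. Not the paper's exact loss.

    features: (H, W, C) feature map; mask: (H, W) binary lesion mask.
    """
    region = features[mask.astype(bool)]      # (N, C) features inside the lesion
    if region.shape[0] == 0:
        return 0.0
    return float(region.var(axis=0).mean())   # average variance over channels
```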
Affiliation(s)
- Van Manh
- Medical Ultrasound Image Computing (MUSIC) lab, School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China
- Xiaohong Jia
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, 200240, China
- Wufeng Xue
- Medical Ultrasound Image Computing (MUSIC) lab, School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China
- Wenwen Xu
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, 200240, China
- Zihan Mei
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, 200240, China
- Yijie Dong
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, 200240, China
- Jianqiao Zhou
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, 200240, China.
- Ruobing Huang
- Medical Ultrasound Image Computing (MUSIC) lab, School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China.
- Dong Ni
- Medical Ultrasound Image Computing (MUSIC) lab, School of Biomedical Engineering, Shenzhen University, Shenzhen, 518060, China.
3. Guo Q, Fang X, Wang L, Zhang E, Liu Z. Robust fusion for skin lesion segmentation of dermoscopic images. Front Bioeng Biotechnol 2023; 11:1057866. [PMID: 37020509] [PMCID: PMC10069440] [DOI: 10.3389/fbioe.2023.1057866]
Abstract
Robust skin lesion segmentation of dermoscopic images is still very difficult. Recent methods often combine a CNN and a Transformer for feature abstraction and use multi-scale features for further classification. Both types of combination generally rely on some form of feature fusion. This paper considers these fusions from two novel points of view. For abstraction, the Transformer is viewed as the affinity exploration of different patch tokens and can be applied to attend CNN features at multiple scales. Consequently, a new fusion module, the Attention-based Transformer-And-CNN fusion module (ATAC), is proposed. ATAC augments the CNN features with more global contexts. For further classification, the information from multiple scales should be combined adaptively according to its contribution to object recognition. Accordingly, a new fusion module, the GAting-based Multi-Scale fusion module (GAMS), is also introduced, which adaptively weights the information from multiple scales through a lightweight gating mechanism. Combining ATAC and GAMS leads to a new encoder-decoder-based framework. In this method, ATAC acts as an encoder block to progressively abstract strong CNN features with rich global contexts attended by long-range relations, while GAMS works as an enhancement of the decoder to generate discriminative features through adaptive fusion of multi-scale ones. This framework is especially effective for lesions of varying sizes and shapes and of low contrast, and its performance is demonstrated with extensive experiments on public skin lesion segmentation datasets.
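The gating idea behind GAMS can be illustrated with a minimal numpy sketch, assuming a softmax gate over globally pooled scale descriptors and feature maps already resampled to a common size (the real module's internals are the authors' own):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_multiscale_fusion(feature_maps, gate_weights):
    """Illustrative gated multi-scale fusion: each (H, W, C) feature map is
    summarized by global average pooling, scored by a linear gate, and the
    maps are combined as a softmax-weighted sum. gate_weights: (C,) vector."""
    scores = np.array([f.mean(axis=(0, 1)) @ gate_weights for f in feature_maps])
    weights = softmax(scores)                      # contribution of each scale
    return sum(w * f for w, f in zip(weights, feature_maps)), weights
```

With identical inputs the gate splits evenly; in practice a learned `gate_weights` vector lets the more informative scales dominate the fused map.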
Affiliation(s)
- Qingqing Guo
- School of Computer Science and Technology, Anhui University, Hefei, China
- Xianyong Fang
- School of Computer Science and Technology, Anhui University, Hefei, China
- *Correspondence: Xianyong Fang
- Linbo Wang
- School of Computer Science and Technology, Anhui University, Hefei, China
- Enming Zhang
- Islet Pathophysiology, Department of Clinical Science, Lund University Diabetes Centre, Malmö, Sweden
- Zhengyi Liu
- School of Computer Science and Technology, Anhui University, Hefei, China
4. Jadhav S, Torkaman M, Tannenbaum A, Nadeem S, Kaufman AE. Volume Exploration Using Multidimensional Bhattacharyya Flow. IEEE Transactions on Visualization and Computer Graphics 2023; 29:1651-1663. [PMID: 34780328] [PMCID: PMC9594946] [DOI: 10.1109/tvcg.2021.3127918]
Abstract
We present a novel approach for volume exploration that is versatile yet effective in isolating semantic structures in both noisy and clean data. Specifically, we describe a hierarchical active contours approach based on Bhattacharyya gradient flow which is easier to control, robust to noise, and can incorporate various types of statistical information to drive an edge-agnostic exploration process. To facilitate a time-bound user-driven volume exploration process that is applicable to a wide variety of data sources, we present an efficient multi-GPU implementation that (1) is approximately 400 times faster than a single-threaded CPU implementation, (2) allows hierarchical exploration of 2D and 3D images, (3) supports customization through multidimensional attribute spaces, and (4) is applicable to a variety of data sources and semantic structures. The exploration system follows a 2-step process. It first applies active contours to isolate semantically meaningful subsets of the volume. It then applies transfer functions to the isolated regions locally to produce clear and clutter-free visualizations. We show the effectiveness of our approach in isolating and visualizing structures of interest without needing any specialized segmentation methods on a variety of data sources, including 3D optical microscopy, multi-channel optical volumes, abdominal and chest CT, micro-CT, MRI, simulation, and synthetic data. We also gathered feedback from a medical trainee regarding the usefulness of our approach and discuss potential applications in clinical workflows.
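The quantity driving the flow is the Bhattacharyya coefficient between the statistical distributions inside and outside the evolving contour; for discrete histograms it has a one-line definition (a standard formula, not the paper's multi-GPU implementation):

```python
import numpy as np

def bhattacharyya_coefficient(p: np.ndarray, q: np.ndarray) -> float:
    """Overlap between two discrete distributions: BC = sum_i sqrt(p_i * q_i).
    1.0 for identical distributions, 0.0 for disjoint supports."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()    # normalize to probability mass
    return float(np.sqrt(p * q).sum())

def bhattacharyya_distance(p, q):
    """Distance form often used to drive active-contour flows: -ln(BC)."""
    return -np.log(bhattacharyya_coefficient(p, q))
```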
5. Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890] [DOI: 10.1016/j.compbiomed.2023.106624]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges encountered or associated with manual inspection. This article aims to provide a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the methods for the development of CAD systems. These ways include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentations, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We intend to investigate a variety of performance-enhancing approaches, including ensemble and post-processing. We also discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as the potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK.
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK.
6. Namburu A, Mohan S, Chakkaravarthy S, Selvaraj P. Skin Cancer Segmentation Based on Triangular Intuitionistic Fuzzy Sets. SN Computer Science 2023; 4:228. [DOI: 10.1007/s42979-023-01701-8]
7. Liu S, Xin J, Wu J, Deng Y, Su R, Niessen WJ, Zheng N, van Walsum T. Multi-view Contour-constrained Transformer Network for Thin-cap Fibroatheroma Identification. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.041]
8. A Framework for Interactive Medical Image Segmentation Using Optimized Swarm Intelligence with Convolutional Neural Networks. Computational Intelligence and Neuroscience 2022; 2022:7935346. [PMID: 36059415] [PMCID: PMC9433214] [DOI: 10.1155/2022/7935346]
Abstract
Recent improvements in current technology have had a significant impact on a wide range of image processing applications, including medical imaging. Classification, detection, and segmentation are all important aspects of medical imaging technology. An enormous need exists for the segmentation of diagnostic images, which can be applied to a wide variety of medical research applications. It is important to develop an effective segmentation technique based on deep learning algorithms for optimal identification of regions of interest and rapid segmentation. To fill this gap, a pipeline for image segmentation using a traditional Convolutional Neural Network (CNN) together with Swarm Intelligence (SI) for optimal identification of the desired area has been proposed. Fuzzy C-means (FCM), K-means, FCM with Particle Swarm Optimization (PSO), K-means with PSO, FCM with CNN, and K-means with CNN are the six modules examined and evaluated. Experiments are carried out on various types of images, such as Magnetic Resonance Imaging (MRI) for brain data analysis, dermoscopic images for skin, microscopic images for blood leukemia, and computed tomography (CT) scans for lungs. After combining all of the datasets, five subsets of data were constructed, each with a different number of images: 50, 100, 500, 1000, and 2000. Each of the models was trained and executed on the selected subsets of the datasets. The experimental analysis shows that K-means with CNN performs better than the others, achieving 96.45% segmentation accuracy with an average time of 9.09 seconds.
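The K-means stage of such a pipeline, applied to raw pixel intensities, can be sketched with Lloyd's algorithm (a generic implementation; the paper's exact configuration and initialization are not specified in the abstract):

```python
import numpy as np

def kmeans_segment(image: np.ndarray, k: int = 2, iters: int = 20, seed: int = 0):
    """Segment a grayscale image by clustering pixel intensities with Lloyd's
    K-means, returning an (H, W) label map. Generic sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # initialize centers from randomly chosen pixel values
    centers = rng.choice(pixels.ravel(), size=k, replace=False).reshape(k, 1)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels - centers.T), axis=1)  # assign step
        for j in range(k):                                      # update step
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)
```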
9. Alahmadi MD. Medical Image Segmentation with Learning Semantic and Global Contextual Representation. Diagnostics (Basel) 2022; 12:1548. [PMID: 35885454] [PMCID: PMC9319384] [DOI: 10.3390/diagnostics12071548]
Abstract
Automatic medical image segmentation is an essential step toward accurate disease diagnosis and designing a follow-up treatment. This assistive method facilitates the cancer detection process and provides a benchmark to highlight the affected area. The U-Net model has become the standard design choice. Although the symmetrical structure of the U-Net model enables the network to encode rich semantic representations, the intrinsic locality of its CNN layers limits its capability to model long-range contextual dependency. On the other hand, sequence-to-sequence Transformer models with a multi-head attention mechanism can effectively model global contextual dependency. However, the lack of low-level information stemming from the Transformer architecture limits its ability to capture local representations. In this paper, we propose a model with two parallel encoders: in the first path a CNN module captures the local semantic representation, whereas the second path deploys a Transformer module to extract the long-range contextual representation. Next, by adaptively fusing these two feature maps, we encode both representations into a single representative tensor to be further processed by the decoder block. An experimental study demonstrates that our design provides rich and generic representation features which are highly efficient for a fine-grained semantic segmentation task.
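The global receptive field contributed by the Transformer path comes from scaled dot-product self-attention; a single-head numpy sketch (illustrative only, not the proposed architecture):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a token sequence.
    x: (n_tokens, d) inputs; wq/wk/wv: (d, d) projection matrices.
    Every output token is a weighted mix of ALL input tokens, which is what
    gives the Transformer path its global receptive field."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])         # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ v
```

Because each output row is a convex combination of the value rows, every token can draw on information from anywhere in the image, unlike a convolution with a fixed local kernel.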
Affiliation(s)
- Mohammad D Alahmadi
- Department of Software Engineering, College of Computer Science and Engineering, University of Jeddah, Jeddah 23890, Saudi Arabia
10. An Effective Skin Disease Segmentation Model based on Deep Convolutional Neural Network. International Journal of Intelligent Information Technologies 2022. [DOI: 10.4018/ijiit.298695]
Abstract
Automated segmentation of skin lesions from digitally recorded images is a crucial procedure for diagnosing skin diseases accurately. This paper proposes a segmentation model for skin lesions centered on a Deep Convolutional Neural Network (DCNN) for melanoma, squamous, basal, keratosis, dermatofibroma, and vascular types of skin disease. The DCNN is trained from scratch instead of using pre-trained networks, with different layers and variations in pooling and activation functions. The proposed model is compared with the winner of the ISIC 2018 challenge task 1 (skin lesion segmentation) and other methods. The experiments are performed on the challenge datasets and show better segmentation results. The main contribution is developing an automated segmentation model, evaluating its performance, and comparing it with other state-of-the-art methods. The essence of the proposed work is its simple network architecture and excellent results. It outperforms by obtaining a Jaccard index of 87%, a Dice similarity coefficient of 91%, an accuracy of 94%, a recall of 94%, and a precision of 89%.
11. Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review. Diagnostics (Basel) 2021; 11:1390. [PMID: 34441324] [PMCID: PMC8391467] [DOI: 10.3390/diagnostics11081390]
Abstract
Computer-aided systems for skin lesion diagnosis are a growing area of research, and researchers have shown an increasing interest in developing such diagnosis systems. This paper aims to review, synthesize, and evaluate the quality of evidence for the diagnostic accuracy of computer-aided systems. This study discusses the papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases. It includes 53 articles using traditional machine learning methods and 49 articles using deep learning methods. The studies are compared based on their contributions, the methods used, and the achieved results. The work identifies the main challenges of evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection, and racial bias.
12. Liu L, Tsui YY, Mandal M. Skin Lesion Segmentation Using Deep Learning with Auxiliary Task. J Imaging 2021; 7:67. [PMID: 34460517] [PMCID: PMC8321325] [DOI: 10.3390/jimaging7040067]
Abstract
Skin lesion segmentation is a primary step for skin lesion analysis, which can benefit the subsequent classification task. It is a challenging task since the boundaries of pigment regions may be fuzzy and the entire lesion may share a similar color. Prevalent deep learning methods for skin lesion segmentation make predictions by ensembling different convolutional neural networks (CNN), aggregating multi-scale information, or using a multi-task learning framework. The main purpose is to exploit as much information as possible to make robust predictions. A multi-task learning framework has been proven beneficial for the skin lesion segmentation task, usually incorporating the skin lesion classification task. However, multi-task learning requires extra labeling information which may not be available for skin lesion images. In this paper, a novel CNN architecture using auxiliary information is proposed. Edge prediction, as an auxiliary task, is performed simultaneously with the segmentation task. A cross-connection layer module is proposed, where the intermediate feature maps of each task are fed into the subblocks of the other task, which implicitly guides the neural network to focus on the boundary region of the segmentation task. In addition, a multi-scale feature aggregation module is proposed, which makes use of features at different scales and enhances the performance of the proposed method. Experimental results show that the proposed method obtains a better performance compared with the state-of-the-art methods, with a Jaccard Index (JA) of 79.46, Accuracy (ACC) of 94.32, and Sensitivity (SEN) of 88.76, with only one integrated model, which can be learned in an end-to-end manner.
Affiliation(s)
- Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G1H9, Canada; (L.L.); (Y.Y.T.)
13. Nikesh P, Raju G. Automatic Skin Lesion Segmentation—A Novel Approach of Lesion Filling through Pixel Path. Pattern Recognition and Image Analysis 2021. [DOI: 10.1134/s1054661820040215]
14. Automatic skin lesion classification based on mid-level feature learning. Comput Med Imaging Graph 2020; 84:101765. [PMID: 32810817] [DOI: 10.1016/j.compmedimag.2020.101765]
Abstract
Dermoscopic images are widely used for melanoma detection. Many existing works based on traditional classification methods and deep learning models have been proposed for automatic skin lesion analysis. The traditional classification methods use hand-crafted features as input. However, due to the strong visual similarity between different classes of skin lesions and complex skin conditions, the hand-crafted features are not discriminative enough and fail in many cases. Recently, deep convolutional neural networks (CNN) have gained popularity since they can automatically learn optimal features during the training phase. Different from existing works, a novel mid-level feature learning method for the skin lesion classification task is proposed in this paper. In this method, skin lesion segmentation is first performed to detect the regions of interest (ROI) of skin lesion images. Next, pretrained neural networks including ResNet and DenseNet are used as the feature extractors for the ROI images. Instead of using the extracted features directly as input to classifiers, the proposed method obtains mid-level feature representations by utilizing the relationships among different image samples based on distance metric learning. The learned feature representation is a soft discriminative descriptor, having more tolerance to hard samples and hence more robustness to the large intra-class difference and inter-class similarity. Experimental results demonstrate the advantages of the proposed mid-level features, and the proposed method obtains state-of-the-art performance compared with existing CNN-based methods.
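The idea of re-encoding a sample by its relationships to other samples can be illustrated with a distance-based descriptor against a bank of reference images (a hypothetical simplification; the paper learns the metric, whereas this sketch uses a fixed Euclidean one):

```python
import numpy as np

def midlevel_descriptor(feature: np.ndarray, references: np.ndarray) -> np.ndarray:
    """Hypothetical mid-level descriptor: a sample is re-encoded by its softmax
    similarity to a bank of reference samples, so the representation reflects
    relationships among images rather than raw CNN features.
    feature: (d,); references: (m, d) -> (m,) descriptor summing to 1."""
    dists = np.linalg.norm(references - feature, axis=1)  # distance to each anchor
    sims = np.exp(-dists)                                 # closer anchors score higher
    return sims / sims.sum()
```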
15. Pereira PM, Fonseca-Pinto R, Paiva RP, Assuncao PA, Tavora LM, Thomaz LA, Faria SM. Dermoscopic skin lesion image segmentation based on Local Binary Pattern Clustering: Comparative study. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101924]
16. Skin Lesion Segmentation Using Image Bit-Plane Multilayer Approach. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10093045]
Abstract
Automatic diagnostic systems able to detect and classify skin lesions at an early stage are becoming increasingly relevant and effective in supporting medical personnel during clinical assessment. Image segmentation plays a determinant role in the computer-aided skin lesion diagnosis pipeline because it makes it possible to extract and highlight information on lesion contour texture, for example, skewness and area unevenness. However, artifacts, low contrast, indistinct boundaries, and variable shapes and areas make skin lesion segmentation a challenging task. In this paper, a fully automatic computer-aided system for skin lesion segmentation in dermoscopic images is presented. In this method, noise and artifacts are first reduced by singular value decomposition; afterward, the lesion is decomposed into a stack of bit-plane layers. A specific procedure is implemented to reduce redundant data using simple Boolean operators. Since lesion and background are rarely homogeneous regions, the obtained segmentation region could contain some disjoint areas classified as lesion. To obtain a single zone classified as lesion, avoiding spurious pixels or holes in the image under test, mathematical morphological techniques are applied. The performance obtained confirms the validity of the method.
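The bit-plane decomposition step has a direct numpy expression: an 8-bit image splits into eight binary layers, with the most significant planes carrying most of the lesion structure (a standard decomposition, independent of the paper's full pipeline):

```python
import numpy as np

def bit_planes(image: np.ndarray) -> np.ndarray:
    """Decompose an 8-bit grayscale image into 8 binary bit-plane layers.
    Returns (8, H, W); plane k holds bit k (k = 7 is the most significant)."""
    image = image.astype(np.uint8)
    return np.stack([(image >> k) & 1 for k in range(8)])
```

The decomposition is lossless: weighting plane k by 2^k and summing reconstructs the original image exactly.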
17. Chen H, Lu W, Chen M, Zhou L, Timmerman R, Tu D, Nedzi L, Wardak Z, Jiang S, Zhen X, Gu X. A recursive ensemble organ segmentation (REOS) framework: application in brain radiotherapy. Phys Med Biol 2019; 64:025015. [DOI: 10.1088/1361-6560/aaf83c]