1. Microfeature Segmentation Algorithm for Biological Images Using Improved Density Peak Clustering. Comput Math Methods Med 2022; 2022:8630449. [PMID: 36035280] [PMCID: PMC9410864] [DOI: 10.1155/2022/8630449]
Abstract
To address the low precision of feature segmentation in biological images with heavy noise, a microfeature segmentation algorithm using improved density peak clustering is proposed. First, the center pixel and edge information of a biological image are obtained to remove redundant information; a three-dimensional space of the image is constructed, and a coordinate system is used to describe every superpixel of the biological image. Second, the symmetry and reversibility of the image are used to obtain the stopping position of pixels, adjacent points supply the current color and shape information, and additional vectors express the density, completing the image preprocessing. Finally, the improved density peak clustering method is used to cluster the image, and the clustered pixels together with the remaining pixels are distributed evenly into the space to segment the image. The results show that the proposed algorithm improves segmentation efficiency, segmentation integrity rate, and segmentation accuracy: the time consumed is always less than 2 minutes, and the segmentation integrity rate exceeds 90%. Furthermore, the proposed algorithm reduces missing regions and noise in the segmented image, improving the overall feature segmentation effect.
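For readers unfamiliar with density peak clustering, the two quantities the method is built on can be sketched in a few lines. This is an illustrative stdlib-only sketch of the classic Rodriguez-Laio formulation, not the paper's improved variant; the cutoff `dc` and the toy points are arbitrary choices.

```python
import math

def density_peaks(points, dc):
    """Density-peak clustering quantities: local density rho_i = number of
    points within cutoff dc, and delta_i = distance to the nearest point of
    higher density (for the densest point, the maximum distance).  Cluster
    centers are the points where both rho and delta are large."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta

# Toy example: a dense cross of points around the origin and a smaller
# group near (5, 5); the cross centre (0, 0) has both high rho and high delta.
pts = [(0, 0), (0.1, 0), (0, 0.1), (-0.1, 0), (0, -0.1),
       (5, 5), (5.1, 5), (5, 5.1)]
rho, delta = density_peaks(pts, dc=0.15)
```

Points with high density but small delta sit inside a cluster; high delta with low density flags outliers; high values of both mark cluster centers, which is the selection rule the improved algorithm refines.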
2. Cammarasana S, Nicolardi P, Patanè G. Real-time denoising of ultrasound images based on deep learning. Med Biol Eng Comput 2022; 60:2229-2244. [PMID: 35672630] [PMCID: PMC9293842] [DOI: 10.1007/s11517-022-02573-5]
Abstract
Ultrasound images are widespread in medical diagnosis of musculoskeletal, cardiac, and obstetric diseases, due to the efficiency and non-invasiveness of the acquisition methodology. However, ultrasound acquisition introduces noise into the signal, which corrupts the resulting image and affects further processing steps, e.g. segmentation and quantitative analysis. We define a novel deep learning framework for the real-time denoising of ultrasound images. Firstly, we compare state-of-the-art denoising methods (e.g. spectral and low-rank methods) and select WNNM (Weighted Nuclear Norm Minimisation) as the best denoiser in terms of accuracy, preservation of anatomical features, and edge enhancement. Then, we propose a tuned version of WNNM (tuned-WNNM) that improves the quality of the denoised images and extends its applicability to ultrasound images. Through a deep learning framework, the tuned-WNNM qualitatively and quantitatively replicates WNNM results in real time. Finally, our approach is general in its building blocks and in the parameters of the deep learning and high-performance computing framework; in fact, different denoising algorithms and deep learning architectures can be selected.
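The core of WNNM is a weighted shrinkage of the singular values of a matrix of grouped similar patches: weights are inversely proportional to the singular values, so noise-dominated components are suppressed while signal-dominated ones survive. The sketch below shows only that weighting rule as a one-step simplification (the patch grouping, SVD, and iterative refinement of the full algorithm are omitted); the constant `C` and `eps` are illustrative.

```python
import math

def wnnm_shrink(singular_values, n_patches, C=2.8, eps=1e-8):
    """Weighted soft-thresholding of singular values, the core of WNNM:
    each value s_i is shrunk by a weight w_i = C * sqrt(n) / (s_i + eps),
    so small (noise-dominated) values are driven to zero while large
    (signal-dominated) values are largely preserved."""
    root_n = math.sqrt(n_patches)
    return [max(s - C * root_n / (s + eps), 0.0) for s in singular_values]

# Singular values of a matrix built from 16 grouped patches: the dominant
# value survives almost intact, the small ones are zeroed out.
shrunk = wnnm_shrink([10.0, 1.0, 0.1], n_patches=16)
```

This inverse-magnitude weighting is what distinguishes WNNM from plain nuclear-norm minimisation, which would shrink every singular value by the same amount.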
3. Zhuang S, Li F, Raj ANJ, Ding W, Zhou W, Zhuang Z. Automatic segmentation for ultrasound image of carotid intimal-media based on improved superpixel generation algorithm and fractal theory. Comput Methods Programs Biomed 2021; 205:106084. [PMID: 33887633] [DOI: 10.1016/j.cmpb.2021.106084]
Abstract
OBJECTIVE: Carotid atherosclerosis (CAS) is the main cause of cardiovascular conditions such as coronary heart disease and cerebrovascular diseases. In carotid ultrasound images, the carotid intima-media structure appears as an annular narrow strip whose inner contour corresponds to the carotid intima and whose outer contour corresponds to the carotid extima. As carotid atherosclerosis develops, the carotid intima-media gradually thickens, so doctors can observe it to assess pathological changes in the internal structure of the patient's carotid arteries. However, artifacts and noise degrade the quality of ultrasound images, making it difficult to obtain accurate carotid intima-media structures. This article presents a novel self-adaptive method to obtain the carotid intima-media through carotid intima/extima segmentation. METHOD: After preprocessing the ultrasound images with homomorphic filtering and median filtering, we propose an improved superpixel generation algorithm that fuses gray-level and luminosity-based information to decompose the image into numerous superpixels and then extract the carotid intima. Meanwhile, based on the features of the carotid artery, the initial position of the carotid extima is located by the normalized cut algorithm, and fractal theory is then employed to segment the carotid extima. RESULTS: The proposed method for segmenting the carotid intima obtained mean DICE, true positive ratio (TPR), false positive ratio (FPR), and precision scores of 97.797%, 99.126%, 0.540%, and 97.202%, respectively. For the carotid extima segmentation, the mean DICE, TPR, accuracy, and F-score obtained were 95.00%, 92.265%, 97.689%, and 94.997%, respectively. CONCLUSION: Compared with traditional methods, the proposed method performed better. The experimental results indicated that the proposed method obtained the carotid intima-media both automatically and accurately, and can thus effectively assist doctors in the diagnosis of CAS.
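The fractal-theory step above relies on estimating a fractal dimension of a contour; the standard estimator is box counting, sketched here stdlib-only for illustration (not the authors' exact procedure): count the boxes N(s) a point set occupies at each box size s, then take the slope of log N(s) against log(1/s).

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting fractal dimension of a 2-D point set (e.g. contour
    pixels): count occupied boxes N(s) at each box size s, then fit the
    slope of log N(s) versus log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A straight line has fractal dimension 1; rougher contours score higher,
# which is what makes the dimension useful as a boundary descriptor.
line = [(i, i) for i in range(1024)]
d = box_count_dimension(line, sizes=[1, 2, 4, 8, 16])
```

A filled region scores close to 2, a smooth curve close to 1, so the dimension separates smooth from irregular arterial boundaries.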
Affiliation(s)
- Shuxin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou University, Shantou, Guangdong, China; Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Fenlan Li
- Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou University, Shantou, Guangdong, China; Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Wanli Ding
- Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Wang Zhou
- Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Shantou University, Shantou, Guangdong, China
4. Multi-Features-Based Automated Breast Tumor Diagnosis Using Ultrasound Image and Support Vector Machine. Comput Intell Neurosci 2021; 2021:9980326. [PMID: 34113378] [PMCID: PMC8154287] [DOI: 10.1155/2021/9980326]
Abstract
Breast ultrasound examination is a routine, fast, and safe method for the clinical diagnosis of breast tumors. In this paper, a classification method based on multi-features and support vector machines is proposed for breast tumor diagnosis. The multi-features are composed of characteristic features and deep learning features of breast tumor images. Initially, an improved level-set algorithm was used to segment the lesion in breast ultrasound images, which enabled an accurate calculation of characteristic features such as orientation, edge indistinctness, characteristics of the posterior shadowing region, and shape complexity. Simultaneously, we used transfer learning to construct a pretrained model as a feature extractor for the deep learning features of breast ultrasound images. Finally, the multi-features were fused and fed to a support vector machine for the classification of breast ultrasound images. The proposed model, when tested on unknown samples, provided a classification accuracy of 92.5% for cancerous and noncancerous tumors.
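The fusion step is simply the concatenation of the handcrafted and deep feature vectors before classification. A toy stdlib-only illustration follows, with a hand-rolled hinge-loss linear SVM standing in for the library SVM the paper presumably uses; all names, data, and hyperparameters here are illustrative.

```python
import random

def fuse(handcrafted, deep):
    """Multi-feature fusion: concatenation of the two feature vectors."""
    return handcrafted + deep

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1, seed=0):
    """Tiny linear SVM trained by hinge-loss subgradient descent
    (Pegasos-style).  Labels y must be in {-1, +1}; returns (w, b)."""
    rnd = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rnd.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # inside margin: hinge subgradient step
                w = [wj - lr * (lam * wj - y[i] * xj)
                     for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:           # outside margin: only weight decay
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Fused vector = handcrafted features followed by deep features.
fused = fuse([0.21, 0.05], [0.9, 0.1, 0.3])
```

In practice the deep feature vector is much longer than the handcrafted one, which is why the SVM (robust in high dimensions with few samples) is a natural classifier for the fused representation.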
5. Raj ANJ, Nersisson R, Mahesh VGV, Zhuang Z. Nipple Localization in Automated Whole Breast Ultrasound Coronal Scans Using Ensemble Learning. Ultrason Imaging 2021; 43:29-45. [PMID: 33355518] [DOI: 10.1177/0161734620974273]
Abstract
The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Here, Hu moments and the Gray-Level Co-occurrence Matrix (GLCM) were calculated through an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain probable NSAs. Later, contour features, such as shape complexity through fractal dimension, edge distance from the periphery, and contour area, were computed and passed into a Support Vector Machine (SVM) to identify the accurate NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. The test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity, and an 88% F-score on our dataset.
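The GLCM texture features mentioned above are derived from a co-occurrence matrix of gray levels at a fixed pixel offset. A minimal stdlib-only sketch of the matrix and one classic feature (contrast) follows; the sliding window, Hu moments, and the other GLCM features are omitted for brevity.

```python
def glcm(image, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix: P[i][j] is the normalised frequency
    with which a pixel of level i has a (dx, dy)-neighbour of level j."""
    h, w = len(image), len(image[0])
    P = [[0.0] * levels for _ in range(levels)]
    count = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                P[image[y][x]][image[ny][nx]] += 1
                count += 1
    for i in range(levels):
        for j in range(levels):
            P[i][j] /= count
    return P

def glcm_contrast(P):
    """Contrast = sum_ij (i - j)^2 * P[i][j]; zero for a uniform patch."""
    return sum((i - j) ** 2 * pij
               for i, row in enumerate(P) for j, pij in enumerate(row))

flat = [[1, 1], [1, 1]]        # uniform patch: contrast is zero
stripes = [[0, 1], [0, 1]]     # alternating columns: high contrast
```

Because the nipple shadow is darker and more homogeneous than surrounding tissue, such texture statistics computed per window give the ANN a discriminative signal.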
Affiliation(s)
- Zhemin Zhuang
- Shantou University, Shantou, Guangdong Province, China
6. Bharti P, Mittal D. An Ultrasound Image Enhancement Method Using Neutrosophic Similarity Score. Ultrason Imaging 2020; 42:271-283. [PMID: 33019917] [DOI: 10.1177/0161734620961005]
Abstract
Ultrasound images, having low contrast and noise, adversely impact the detection of abnormalities. In view of this, an enhancement method is proposed in this work to reduce noise and improve the contrast of ultrasound images. The proposed method is based on scaling with a neutrosophic similarity score (NSS), where an image is represented in the neutrosophic domain through three membership subsets T, I, and F, denoting the degrees of truth, indeterminacy, and falseness, respectively. The NSS measures the degree to which a pixel belongs to a texture using multiple criteria based on intensity, local mean intensity, and edge detection. The NSS is then utilized to extract an enhancement coefficient, which is applied to scale the input image; this scaling yields contrast improvement and a denoising effect on ultrasound images. The performance of the proposed enhancement method is evaluated on clinical ultrasound images using both subjective and objective image quality measures. In the subjective evaluation, the proposed method obtained an overall best score of 4.3, which was 44% higher than the score of the original images; these results were also supported by the objective measures. The results demonstrated that the proposed method outperformed the other methods in terms of mean brightness preservation, edge preservation, structural similarity, and human-perception-based image quality assessment. Thus, the proposed method can be used in computer-aided diagnosis systems and to visually assist radiologists in their interactive decision-making tasks.
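The neutrosophic-domain mapping can be sketched with one common formulation (not necessarily identical to the authors'): T from the min-max normalised local mean, I from the pixel's deviation from its local mean, and F as the complement of T. Window size and normalisation here are illustrative assumptions.

```python
def neutrosophic_tif(image):
    """Map a grayscale image into neutrosophic subsets (one common
    formulation): T = normalised 3x3 local mean, I = normalised absolute
    deviation of a pixel from its local mean, F = 1 - T."""
    h, w = len(image), len(image[0])

    def local_mean(y, x):
        vals = [image[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))]
        return sum(vals) / len(vals)

    g = [[local_mean(y, x) for x in range(w)] for y in range(h)]
    gmin = min(min(r) for r in g)
    gmax = max(max(r) for r in g)
    span = (gmax - gmin) or 1.0
    d = [[abs(image[y][x] - g[y][x]) for x in range(w)] for y in range(h)]
    dmax = max(max(r) for r in d) or 1.0
    T = [[(g[y][x] - gmin) / span for x in range(w)] for y in range(h)]
    I = [[d[y][x] / dmax for x in range(w)] for y in range(h)]
    F = [[1.0 - T[y][x] for x in range(w)] for y in range(h)]
    return T, I, F

# The isolated bright pixel deviates most from its local mean, so its
# indeterminacy I is maximal -- exactly the pixels a speckle filter targets.
T, I, F = neutrosophic_tif([[0, 0, 0], [0, 255, 0], [0, 0, 0]])
```

High-indeterminacy pixels are the noisy ones; scaling them differently from high-truth pixels is what gives the method its simultaneous denoising and contrast effect.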
Affiliation(s)
- Puja Bharti
- Thapar Institute of Engineering and Technology, Patiala, India
- Deepti Mittal
- Thapar Institute of Engineering and Technology, Patiala, India
7. Zhuang Z, Kang Y, Joseph Raj AN, Yuan Y, Ding W, Qiu S. Breast ultrasound lesion classification based on image decomposition and transfer learning. Med Phys 2020; 47:6257-6269. [DOI: 10.1002/mp.14510]
Affiliation(s)
- Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Yuqiang Kang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Ye Yuan
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Wanli Ding
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Shunmin Qiu
- Imaging Department, First Hospital of Medical College of Shantou University, Shantou, Guangdong, China
8. Panigrahy C, Seal A, Kumar Mahato N, Krejcar O, Herrera-Viedma E. Multi-focus image fusion using fractal dimension. Appl Opt 2020; 59:5642-5655. [PMID: 32609685] [DOI: 10.1364/ao.391234]
Abstract
Multi-focus image fusion is defined as "the combination of a group of partially focused images of the same scene with the objective of producing a fully focused image." Normally, transform-domain-based image fusion methods preserve the textures and edges in the fused image, but many are translation-variant. Translation-invariant transforms produce approximation and detail images of the same size as the input, which makes it more convenient to devise fusion rules. In this work, a translation-invariant multi-focus image fusion approach using the à-trous wavelet transform is introduced, which uses fractal dimension as a clarity measure for the approximation coefficients and Otsu's threshold to fuse the detail coefficients. The subjective assessment of the proposed method is carried out against the fusion results of nine state-of-the-art methods, while eight fusion quality metrics are considered for the objective assessment. The results of subjective and objective assessment on grayscale and color multi-focus image pairs illustrate that the proposed method is competitive with, and even better than, some of the existing methods.
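The à-trous ("with holes") wavelet transform is translation-invariant because, instead of downsampling the signal, each level convolves with a kernel whose taps are spaced further apart; each detail layer is the difference of consecutive smoothings, so all layers keep the input size. A 1-D stdlib-only sketch (the paper works in 2-D; the B3-spline kernel and border clamping are standard choices, not taken from the paper):

```python
def atrous_decompose(signal, levels):
    """1-D à-trous wavelet decomposition with the B3-spline kernel.
    At level j the kernel taps are spaced 2**j apart (the "holes"); each
    detail layer is the difference of consecutive smoothings, so
    approximation + sum(details) reconstructs the signal exactly."""
    kernel = [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
    approx = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j
        smoothed = []
        for i in range(len(approx)):
            acc = 0.0
            for k, c in enumerate(kernel):
                idx = i + (k - 2) * step
                idx = min(max(idx, 0), len(approx) - 1)  # clamp at borders
                acc += c * approx[idx]
            smoothed.append(acc)
        details.append([a - s for a, s in zip(approx, smoothed)])
        approx = smoothed
    return approx, details

sig = [float(i % 7) for i in range(40)]
approx, details = atrous_decompose(sig, 3)
```

Since every layer has the input's size, per-position fusion rules (pick the clearer source's coefficient at each pixel) apply directly, which is the convenience the abstract refers to.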
9. Zhuang Z, Liu G, Ding W, Raj ANJ, Qiu S, Guo J, Yuan Y. Cardiac VFM visualization and analysis based on YOLO deep learning model and modified 2D continuity equation. Comput Med Imaging Graph 2020; 82:101732. [DOI: 10.1016/j.compmedimag.2020.101732]
10. Zhuang Z, Li N, Joseph Raj AN, Mahesh VGV, Qiu S. An RDAU-NET model for lesion segmentation in breast ultrasound images. PLoS One 2019; 14:e0221535. [PMID: 31442268] [PMCID: PMC6707567] [DOI: 10.1371/journal.pone.0221535]
Abstract
Breast cancer is a common gynecological disease that poses a great threat to women's health due to its high malignancy rate. Breast cancer screening tests are used to find warning signs or symptoms for early detection, and currently ultrasound screening is the preferred method for breast cancer diagnosis. The localization and segmentation of lesions in breast ultrasound (BUS) images are helpful for the clinical diagnosis of the disease. In this paper, an RDAU-NET (Residual-Dilated-Attention-Gate-UNet) model is proposed and employed to segment the tumors in BUS images. The model is based on the conventional U-Net, but the plain neural units are replaced with residual units to enhance edge information and overcome the performance degradation problem associated with deep networks. To increase the receptive field and acquire more characteristic information, dilated convolutions were used to process the feature maps obtained from the encoder stages. The traditional cropping and copying between the encoder-decoder pipelines were replaced by Attention Gate modules, which enhanced the learning capabilities through suppression of background information. The model, when tested on BUS images with benign and malignant tumors, presented excellent segmentation results compared with other deep networks: a variety of quantitative indicators, including accuracy, Dice coefficient, AUC (Area Under Curve), precision, sensitivity, specificity, recall, F1-score, and M-IOU (Mean Intersection Over Union), all provided performances above 80%. The experimental results illustrate that the proposed RDAU-NET model can accurately segment breast lesions compared with other deep learning models and thus has good prospects for clinical diagnosis.
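The receptive-field gain from dilated convolutions, which the abstract credits for capturing more context, is easy to make concrete: a size-k kernel with dilation d covers (k-1)*d + 1 inputs, and stacked stride-1 layers add these spans up. A 1-D stdlib-only illustration (not the RDAU-NET code):

```python
def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D dilated convolution: kernel taps are sampled `dilation`
    apart, so a size-k kernel covers (k - 1) * dilation + 1 inputs."""
    k = len(kernel)
    span = (k - 1) * dilation
    return [sum(kernel[t] * x[i + t * dilation] for t in range(k))
            for i in range(len(x) - span)]

def receptive_field(kernel_sizes, dilations):
    """Receptive field of stacked stride-1 dilated conv layers:
    rf = 1 + sum_l (k_l - 1) * d_l."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# Three stacked 3-tap layers with dilations 1, 2, 4 already see 15 input
# samples, with the same parameter count as three ordinary 3-tap layers.
rf = receptive_field([3, 3, 3], [1, 2, 4])
```

This is why dilating the encoder feature maps enlarges context without the parameter or resolution cost of extra pooling stages.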
Affiliation(s)
- Zhemin Zhuang
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Nan Li
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Alex Noel Joseph Raj
- Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
- Vijayalakshmi G. V. Mahesh
- Department of Electronics and Communication Engineering, BMS Institute of Technology and Management, Bengaluru, Karnataka, India
- Shunmin Qiu
- Imaging Department, First Hospital of Medical College of Shantou University, Shantou, Guangdong, China