1. Wang H, Wang T, Hao Y, Ding S, Feng J. Breast tumor segmentation via deep correlation analysis of multi-sequence MRI. Med Biol Eng Comput 2024; 62:3801-3814. [PMID: 39031329] [DOI: 10.1007/s11517-024-03166-0]
Abstract
Precise segmentation of breast tumors from MRI is crucial for breast cancer diagnosis, as it allows for detailed calculation of tumor characteristics such as shape, size, and edges. Current segmentation methodologies face significant challenges in accurately modeling the complex interrelationships inherent in multi-sequence MRI data. This paper presents a hybrid deep network framework with three interconnected modules, aimed at efficiently integrating and exploiting the spatial-temporal features among multiple MRI sequences for breast tumor segmentation. The first module involves an advanced multi-sequence encoder with a densely connected architecture, separating the encoding pathway into multiple streams for individual MRI sequences. To harness the intricate correlations between different sequence features, we propose a sequence-awareness and temporal-awareness method that adeptly fuses spatial-temporal features of MRI in the second multi-scale feature embedding module. Finally, the decoder module engages in the upsampling of feature maps, meticulously refining the resolution to achieve highly precise segmentation of breast tumors. In contrast to other popular methods, the proposed method learns the interrelationships inherent in multi-sequence MRI. We justify the proposed method through extensive experiments. It achieves notable improvements in segmentation performance, with Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Positive Predictive Value (PPV) scores of 80.57%, 74.08%, and 84.74% respectively.
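For readers who want to reproduce the three reported metrics, the sketch below computes DSC, IoU, and PPV from a pair of binary masks; it is an illustrative implementation of the usual definitions, not code from the paper, and the array names are placeholders.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute DSC, IoU, and PPV from two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true positives
    fp = np.logical_and(pred, ~gt).sum()         # false positives
    fn = np.logical_and(~pred, gt).sum()         # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn + 1e-8)     # Dice Similarity Coefficient
    iou = tp / (tp + fp + fn + 1e-8)             # Intersection over Union
    ppv = tp / (tp + fp + 1e-8)                  # Positive Predictive Value (precision)
    return dsc, iou, ppv

# Example with random masks
pred = np.random.rand(128, 128) > 0.5
gt = np.random.rand(128, 128) > 0.5
print(segmentation_metrics(pred, gt))
```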
Affiliation(s)
- Hongyu Wang
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Tonghui Wang
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China
- Yanfang Hao
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Songtao Ding
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 710127, China
2. Zhou J, Hou Z, Lu H, Wang W, Zhao W, Wang Z, Zheng D, Wang S, Tang W, Qu X. A deep supervised transformer U-shaped full-resolution residual network for the segmentation of breast ultrasound image. Med Phys 2023; 50:7513-7524. [PMID: 37816131] [DOI: 10.1002/mp.16765]
Abstract
PURPOSE Breast ultrasound (BUS) is an important breast imaging tool. Automatic BUS image segmentation can measure breast tumor size objectively and reduce doctors' workload. In this article, we proposed a deep supervised transformer U-shaped full-resolution residual network (DSTransUFRRN) to segment BUS images. METHODS In the proposed method, a full-resolution residual stream and a deep supervision mechanism were introduced into TransU-Net. The residual stream keeps full-resolution features from different levels and enhances feature fusion, the deep supervision suppresses gradient dispersion, and the transformer module suppresses irrelevant features and improves the feature extraction process. Two datasets were used for training and evaluation: Dataset A contained 980 BUS images and Dataset B contained 163. RESULTS Cross-validation was conducted. For Dataset A, the proposed DSTransUFRRN achieved significantly higher Dice (91.04 ± 0.86%) than all compared methods (p < 0.05). For Dataset B, the Dice was lower than for Dataset A because of the smaller number of samples, but the Dice of DSTransUFRRN (88.15 ± 2.11%) was still significantly higher than that of the compared methods (p < 0.05). CONCLUSIONS We proposed DSTransUFRRN for BUS image segmentation; it achieved significantly higher accuracy than the compared previous methods.
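Deep supervision of the kind described here is commonly implemented by attaching auxiliary losses to side outputs at several decoder resolutions. A minimal sketch follows, assuming binary masks and simple per-level weights; the actual DSTransUFRRN loss formulation may differ.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target, weights=None):
    """Sum of binary cross-entropy losses over multi-level side outputs.

    side_outputs: list of logit tensors (N, 1, h_i, w_i) from different decoder levels.
    target:       ground-truth mask (N, 1, H, W) with values in {0, 1}.
    """
    weights = weights or [1.0] * len(side_outputs)
    total = 0.0
    for w, logits in zip(weights, side_outputs):
        # resize the ground truth to the side output's resolution
        t = F.interpolate(target, size=logits.shape[2:], mode="nearest")
        total = total + w * F.binary_cross_entropy_with_logits(logits, t)
    return total
```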
Affiliation(s)
- Jiale Zhou
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zuoxun Hou
- Beijing Institute of Mechanics & Electricity, Beijing, China
- Hongyan Lu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Wenhan Wang
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Wanchen Zhao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zenan Wang
- Department of Gastroenterology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Dezhi Zheng
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Shuai Wang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Wenzhong Tang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
3. Qi W, Wu HC, Chan SC. MDF-Net: A Multi-Scale Dynamic Fusion Network for Breast Tumor Segmentation of Ultrasound Images. IEEE Trans Image Process 2023; 32:4842-4855. [PMID: 37639409] [DOI: 10.1109/tip.2023.3304518]
Abstract
Breast tumor segmentation of ultrasound images provides valuable information about tumors for early detection and diagnosis. Accurate segmentation is challenging due to low image contrast between areas of interest, speckle noise, and large inter-subject variations in tumor shape and size. This paper proposes a novel Multi-scale Dynamic Fusion Network (MDF-Net) for breast ultrasound tumor segmentation. It employs a two-stage end-to-end architecture with a trunk sub-network for multiscale feature selection and a structurally optimized refinement sub-network for mitigating impairments such as noise and inter-subject variation via better feature exploration and fusion. The trunk network is extended from UNet++ with a simplified skip-pathway structure to connect the features between adjacent scales. Moreover, deep supervision at all scales, instead of only at the finest scale as in UNet++, is proposed to extract more discriminative features and to mitigate errors from speckle noise via a hybrid loss function. Unlike previous works, the first stage is linked to a loss function of the second stage so that the preliminary segmentation and the refinement sub-network can be refined together during training. The refinement sub-network uses a structurally optimized MDF mechanism to integrate preliminary segmentation information (capturing general tumor shape and size) at coarse scales and to explore inter-subject variation at finer scales. Experimental results on two public datasets show that the proposed method achieves better Dice and other scores than state-of-the-art methods. Qualitative analysis also indicates that the proposed network is more robust to variations in tumor size and shape, speckle noise, and heavy posterior shadows along tumor boundaries. An optional post-processing step is also proposed to help users mitigate segmentation artifacts. The efficiency of the proposed network is further illustrated on the Electron Microscopy neural structures segmentation dataset, where it outperforms a state-of-the-art algorithm based on UNet-2022 with simpler settings. This indicates the advantages of MDF-Net in other challenging image segmentation tasks with small to medium data sizes.
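The hybrid loss mentioned above typically combines a region-level soft-Dice term with pixel-wise cross-entropy. The sketch below shows one common formulation, assuming binary segmentation and an equal default weighting; it is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def hybrid_dice_bce_loss(logits, target, smooth=1.0, alpha=0.5):
    """Weighted sum of soft-Dice loss and binary cross-entropy.

    logits: raw network outputs (N, 1, H, W); target: binary mask of the same shape.
    """
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice_loss = 1 - (2 * intersection + smooth) / (union + smooth)
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none").mean(dim=(1, 2, 3))
    return (alpha * dice_loss + (1 - alpha) * bce).mean()
```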
4. Zhang M, Huang A, Yang D, Xu R. Boundary-oriented Network for Automatic Breast Tumor Segmentation in Ultrasound Images. Ultrason Imaging 2023; 45:62-73. [PMID: 36951101] [DOI: 10.1177/01617346231162925]
Abstract
Breast cancer is considered the most prevalent cancer, and ultrasound imaging is an important clinical method for locating breast tumors. However, accurate segmentation of breast tumors remains an open problem due to ultrasound artifacts, low contrast, and complicated tumor shapes in ultrasound images. To address this issue, we proposed a boundary-oriented network (BO-Net) for boosting breast tumor segmentation in ultrasound images. The BO-Net boosts tumor segmentation performance from two perspectives. First, a boundary-oriented module (BOM) was designed to capture the weak boundaries of breast tumors by learning additional breast tumor boundary maps. Second, we focused on enhanced feature extraction, taking advantage of the Atrous Spatial Pyramid Pooling (ASPP) module and the Squeeze-and-Excitation (SE) block to obtain multi-scale and efficient feature information. We evaluated our network on two public datasets: Dataset B and BUSI. For Dataset B, our network achieves 0.8685 in Dice, 0.7846 in Jaccard, 0.8604 in Precision, 0.9078 in Recall, and 0.9928 in Specificity. For the BUSI dataset, our network achieves 0.7954 in Dice, 0.7033 in Jaccard, 0.8275 in Precision, 0.8251 in Recall, and 0.9814 in Specificity. Experimental results show that BO-Net outperforms state-of-the-art segmentation methods for breast tumor segmentation in ultrasound images, demonstrating that focusing on boundary and feature enhancement yields more efficient and robust breast tumor segmentation.
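For reference, a minimal PyTorch version of the Squeeze-and-Excitation (SE) block used for feature enhancement is sketched below; the channel count and reduction ratio are illustrative assumptions, not values from BO-Net.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pooling -> (N, C)
        w = self.fc(w).view(n, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # re-scale the feature maps

# x = torch.randn(2, 64, 32, 32); y = SEBlock(64)(x)
```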
Affiliation(s)
- Mengmeng Zhang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Aibin Huang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Debiao Yang
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
- Rui Xu
- School of Media and Design, Hangzhou Dianzi University, Hangzhou, China
5. Iqbal A, Sharif M. BTS-ST: Swin transformer network for segmentation and classification of multimodality breast cancer images. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110393]
6. Shao J, Zhou K, Cai YH, Geng DY. Application of an Improved U2-Net Model in Ultrasound Median Neural Image Segmentation. Ultrasound Med Biol 2022; 48:2512-2520. [PMID: 36167742] [DOI: 10.1016/j.ultrasmedbio.2022.08.003]
Abstract
To investigate whether an improved U2-Net model can segment the median nerve and improve segmentation performance, we performed a retrospective study with 402 nerve images from patients who visited Huashan Hospital from October 2018 to July 2020; 249 images were from patients with carpal tunnel syndrome, and 153 were from healthy volunteers. From these, 320 cases were selected as the training set and 82 cases as the test set. The improved U2-Net model was used to segment each image. The Dice coefficient (Dice), pixel accuracy (PA), mean intersection over union (MIoU), and average Hausdorff distance (AVD) were used to evaluate segmentation performance. The Dice, MIoU, PA, and AVD values of our improved U2-Net were 72.85%, 79.66%, 95.92%, and 51.37 mm, respectively, in good agreement with the ground truth labeled by clinicians. In comparison, the corresponding values were 43.19%, 65.57%, 86.22%, and 74.82 mm for U-Net, and 58.65%, 72.53%, 88.98%, and 57.30 mm for Res-U-Net. Overall, our data suggest that the improved U2-Net model might be used for segmentation of median nerve ultrasound images.
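The average Hausdorff distance (AVD) reported above can be computed from two binary masks roughly as follows; this sketch works in pixel units (converting to millimetres would require the image's pixel spacing) and is not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def average_hausdorff_distance(mask_a, mask_b):
    """Symmetric average Hausdorff distance between two binary masks (in pixels)."""
    pts_a = np.argwhere(mask_a > 0)
    pts_b = np.argwhere(mask_b > 0)
    if len(pts_a) == 0 or len(pts_b) == 0:
        return np.inf
    d = cdist(pts_a, pts_b)  # pairwise Euclidean distances between foreground pixels
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```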
Affiliation(s)
- Jie Shao
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
- Kun Zhou
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Ye-Hua Cai
- Department of Ultrasound, Huashan Hospital, Fudan University, Shanghai, China
- Dao-Ying Geng
- Department of Radiology, Huashan Hospital, Fudan University, Shanghai, China; Greater Bay Area Institute of Precision Medicine (Guangzhou), Fudan University, Guangzhou, China
7. Woon Cho S, Rae Baek N, Ryoung Park K. Deep Learning-based Multi-stage Segmentation Method Using Ultrasound Images for Breast Cancer Diagnosis. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2022.10.020]
8. Yuan D, Zhang D, Yang Y, Yang S. Automatic construction of filter tree by genetic programming for ultrasound guidance image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103641]
9. Ilesanmi AE, Chaumrattanakul U, Makhanov SS. Methods for the segmentation and classification of breast ultrasound images: a review. J Ultrasound 2021; 24:367-382. [PMID: 33428123] [PMCID: PMC8572242] [DOI: 10.1007/s40477-020-00557-5]
Abstract
PURPOSE Breast ultrasound (BUS) is one of the imaging modalities for the diagnosis and treatment of breast cancer. However, the segmentation and classification of BUS images is a challenging task. In recent years, several methods for segmenting and classifying BUS images have been studied; these methods use BUS datasets for evaluation. In addition, semantic segmentation algorithms have gained prominence for segmenting medical images. METHODS In this paper, we examined different methods for segmenting and classifying BUS images. Popular datasets used to evaluate BUS images and semantic segmentation algorithms were examined, and several segmentation and classification papers were selected for analysis and review. Both conventional and semantic methods for BUS segmentation were reviewed. RESULTS Commonly used BUS segmentation methods were summarized in a graphical representation, and other conventional segmentation methods were also described. CONCLUSIONS We presented a review of segmentation and classification methods for tumours detected in BUS images, covering both earlier and recent studies.
Affiliation(s)
- Ademola E. Ilesanmi
- School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000, Thailand
- Stanislav S. Makhanov
- School of ICT, Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, 12000, Thailand
10. Deep Vision for Breast Cancer Classification and Segmentation. Cancers (Basel) 2021; 13:5384. [PMID: 34771547] [PMCID: PMC8582536] [DOI: 10.3390/cancers13215384]
Abstract
Simple Summary: Breast cancer misdiagnoses increase individual and system stressors as well as costs, and result in increased morbidity and mortality. Digital mammography studies are typically about 80% sensitive and 90% specific. Classification of breast cancer imagery can be improved using deep vision methods, and these methods may further be used to autonomously identify the regions of interest most closely associated with anomalies to support clinician analysis. This research explores deep vision techniques for improving mammography classification and for identifying associated regions of interest. The findings contribute to the future of automated assistive diagnosis of breast cancer and the isolation of regions of interest.
Abstract: (1) Background: The odds of a female breast cancer diagnosis have increased from 11:1 in 1975 to 8:1 today. Mammography false positive rates (FPR) are associated with overdiagnosis and overtreatment, while false negative rates (FNR) increase morbidity and mortality. (2) Methods: Deep vision supervised learning classifies 299 × 299 pixel de-noised mammography images as negative or non-negative, using models built on 55,890 pre-processed training images and applied to 15,364 unseen test images. A small image representation from the fitted training model is returned to evaluate the portion of the loss-function gradient with respect to the image that maximizes the classification probability. This gradient is then re-mapped back to the original images, highlighting the areas of the original image that are most influential for classification (perhaps masses or boundary areas). (3) Results: Initial classification results were 97% accurate, 99% specific, and 83% sensitive. Gradient techniques for unsupervised region-of-interest mapping clearly identified the areas most associated with the classification results on positive mammograms and might be used to support clinician analysis. (4) Conclusions: Deep vision techniques hold promise for addressing overdiagnosis and overtreatment, underdiagnosis, and automated region-of-interest identification in mammography.
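The gradient-based region-of-interest mapping described in the Methods can be sketched as a plain input-gradient saliency map, assuming a PyTorch classifier; the exact procedure used in the paper may differ.

```python
import torch

def saliency_map(model, image):
    """Gradient of the predicted class score w.r.t. the input image.

    image: tensor of shape (1, C, H, W). Returns an (H, W) map highlighting
    the pixels that most influence the classification.
    """
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    score = model(image)        # assume a single logit or (1, num_classes) output
    score.max().backward()      # backpropagate from the top class score
    return image.grad.abs().amax(dim=1).squeeze(0)  # max over channels
```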
11. Iqbal A, Sharif M. MDA-Net: Multiscale dual attention-based network for breast lesion segmentation using ultrasound images. Journal of King Saud University - Computer and Information Sciences 2021. [DOI: 10.1016/j.jksuci.2021.10.002]
12.
13. An FP, Liu JE, Wang JR. Medical image segmentation algorithm based on positive scaling invariant-self encoding CCA. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102395]
14. Qu X, Shi Y, Hou Y, Jiang J. An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images. Med Phys 2020; 47:5702-5714. [PMID: 32964449] [DOI: 10.1002/mp.14470]
Abstract
PURPOSE Breast cancer is the most common cancer among women worldwide. Medical ultrasound imaging is one of the widely applied breast imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure the size of tumors objectively. However, various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors from BUS images. METHODS In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features at different levels with attention for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from UDIAT Centre Diagnostic, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. For evaluation, the Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated. RESULTS For Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, the proposed method also achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, the proposed method achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M). CONCLUSIONS We proposed ASFRRN, which combines FRRN, an attention mechanism, and deep supervision to segment tumors from BUS images. It achieved high segmentation accuracy with a reduced number of parameters.
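Global Attention Upsample (GAU) is usually implemented by gating low-level features with globally pooled high-level context before fusing the two levels. The sketch below follows that common formulation (Pyramid Attention Network style) and is an assumption about the general idea rather than the exact ASFRRN module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    """Use global context from high-level features to weight low-level features."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.conv_low = nn.Conv2d(low_ch, out_ch, kernel_size=3, padding=1)
        self.conv_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.gate = nn.Sequential(nn.Conv2d(high_ch, out_ch, kernel_size=1), nn.Sigmoid())

    def forward(self, low, high):
        low = self.conv_low(low)
        # global average pooling of the high-level features -> channel attention weights
        w = self.gate(F.adaptive_avg_pool2d(high, 1))
        high_up = F.interpolate(self.conv_high(high), size=low.shape[2:],
                                mode="bilinear", align_corners=False)
        return high_up + low * w
```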
Affiliation(s)
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
- Yao Shi
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China
- Yaxin Hou
- Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA
15. Al-Dulaimi K, Banks J, Nugyen K, Al-Sabaawi A, Tomeo-Reyes I, Chandran V. Segmentation of White Blood Cell, Nucleus and Cytoplasm in Digital Haematology Microscope Images: A Review-Challenges, Current and Future Potential Techniques. IEEE Rev Biomed Eng 2020; 14:290-306. [PMID: 32746365] [DOI: 10.1109/rbme.2020.3004639]
Abstract
Segmentation of white blood cells in digital haematology microscope images is one of the major tools in the diagnosis and evaluation of blood disorders. Pathological examination remains the gold standard in haematology and histopathology and plays a key role in the diagnosis of disease. In clinical diagnosis, white blood cells are analysed by pathologists from patients' peripheral blood smear samples. This analysis is based mainly on the morphological features and characteristics of the white blood cells and their nuclei and cytoplasm, including shape, size, colour, texture, maturity stage and staining process. Recently, computer-aided diagnosis techniques for the detection, segmentation and classification of white blood cells and their nuclei and cytoplasm have grown rapidly in digital haematology. In digital haematology image analysis, these techniques have played, and will continue to play, a vital role in providing traceable clinical information, consolidating pertinent second opinions, and minimizing human intervention. This study outlines, discusses, and introduces the major trends from a review of detection and segmentation methods for white blood cells and their nuclei and cytoplasm in digital haematology microscope images. The performance of existing methods is comprehensively compared, taking into account the databases used, the number of images and the limitations. This study also helps identify the remaining challenges in achieving a robust analysis of white blood cell microscope images, which could support the diagnosis of blood disorders and assist researchers and pathologists in the future. The impact of this work is to enhance the accuracy and efficiency of pathologists' decisions and, overall, to benefit patients through faster and more accurate diagnosis. Its significance for intelligent systems is that it identifies potential future techniques for solving overlapping white blood cell identification and other problems in microscopic images. Accurate segmentation and detection of white blood cells can increase the accuracy of cell-counting systems for diagnosing disease in the future.
16. Ramadan H, Lachqar C, Tairi H. Saliency-guided automatic detection and segmentation of tumor in breast ultrasound images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.101945]
17. Two-stage ultrasound image segmentation using U-Net and test time augmentation. Int J Comput Assist Radiol Surg 2020; 15:981-988. [PMID: 32350786] [DOI: 10.1007/s11548-020-02158-3]
Abstract
PURPOSE Detecting breast lesions using ultrasound imaging is an important application of computer-aided diagnosis systems. Several automatic methods have been proposed for breast lesion detection and segmentation; however, due to ultrasound artefacts and to the complexity of lesion shapes and locations, lesion or tumor segmentation from ultrasound breast images is still an open problem. In this paper, we propose using a lesion detection stage prior to the segmentation stage in order to improve the accuracy of the segmentation. METHODS We used a breast ultrasound imaging dataset that contained 163 images of the breast with either benign lesions or malignant tumors. First, we used a U-Net to detect the lesions and then used another U-Net to segment the detected region. We showed that when the lesion is precisely detected, the segmentation performance improves substantially; however, if the detection stage is not precise enough, the segmentation stage also fails. Therefore, we developed a test-time augmentation technique to assess the performance of the detection stage. RESULTS Using the proposed two-stage approach, we improved the average Dice score by 1.8% overall. The improvement was substantially larger for images whose original Dice score was below 70%, for which the average Dice score improved by 14.5%. CONCLUSIONS The proposed two-stage technique shows promising results for segmentation of breast US images and has a much smaller chance of failure.
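A generic flip-based test-time augmentation (TTA) routine is sketched below for reference; the paper uses TTA specifically to assess the reliability of the detection stage, so this averaging sketch is only an approximation of the general technique.

```python
import torch

def tta_predict(model, image):
    """Average sigmoid predictions over horizontal/vertical flips (test-time augmentation).

    image: tensor (1, C, H, W). Returns an averaged probability map of the same size.
    """
    model.eval()
    flips = [[], [3], [2], [2, 3]]  # no flip, horizontal, vertical, both
    with torch.no_grad():
        preds = []
        for dims in flips:
            x = torch.flip(image, dims) if dims else image
            p = torch.sigmoid(model(x))
            preds.append(torch.flip(p, dims) if dims else p)  # undo the flip
    return torch.stack(preds).mean(dim=0)
```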
18. Segmentation of breast ultrasound image with semantic classification of superpixels. Med Image Anal 2020; 61:101657. [PMID: 32032899] [DOI: 10.1016/j.media.2020.101657]
Abstract
Breast cancer is a major threat to women's health. Ultrasound imaging has been applied extensively in the diagnosis of breast cancer. Due to poor image quality, segmentation of breast ultrasound (BUS) images remains a very challenging task, and BUS image segmentation is a crucial step for further analysis. In this paper, we proposed a novel method to segment the breast tumor via semantic classification and merging of patches. The proposed method first selects two diagonal points to crop a region of interest (ROI) on the original image. Then, histogram equalization, a bilateral filter, and a pyramid mean-shift filter are adopted to enhance the image. The cropped image is divided into many superpixels using simple linear iterative clustering (SLIC). Furthermore, features are extracted from the superpixels and a bag-of-words model is created. An initial classification is obtained by a back-propagation neural network (BPNN). To refine the preliminary result, k-nearest neighbors (KNN) is used for reclassification, producing the final result. To verify the proposed method, we collected a BUS dataset containing 320 cases. The segmentation results of our method were compared with the corresponding results obtained by five existing approaches. The experimental results show that our method achieved competitive results compared to conventional methods in terms of TP and FP, and produced good approximations to the hand-labelled tumor contours with comprehensive consideration of all metrics (F1-score = 89.87% ± 4.05%, average radial error = 9.95% ± 4.42%).
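The superpixel step can be reproduced with scikit-image's SLIC implementation; the snippet below is a minimal sketch with illustrative parameter values, not the settings used in the paper.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import gray2rgb

# img: 2-D grayscale ultrasound ROI scaled to [0, 1]; parameters are illustrative
img = np.random.rand(256, 256)
segments = slic(gray2rgb(img), n_segments=200, compactness=10, start_label=1)
print(segments.shape, segments.max())  # label map with roughly 200 superpixels
```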
19. Adjusted Quick Shift Phase Preserving Dynamic Range Compression method for breast lesions segmentation. Informatics in Medicine Unlocked 2020. [DOI: 10.1016/j.imu.2020.100344]
20. Semi-supervised multi-view clustering based on constrained nonnegative matrix factorization. Knowl Based Syst 2019. [DOI: 10.1016/j.knosys.2019.06.006]
21. Keatmanee C, Chaumrattanakul U, Kotani K, Makhanov SS. Initialization of active contours for segmentation of breast cancer via fusion of ultrasound, Doppler, and elasticity images. Ultrasonics 2019; 94:438-453. [PMID: 29477236] [DOI: 10.1016/j.ultras.2017.12.008]
Abstract
Active contours (snakes) are an efficient method for segmentation of ultrasound (US) images of breast cancer. However, the method produces inaccurate results if the seeds are initialized improperly (far from the true boundaries and close to the false boundaries). Therefore, we propose a novel initialization method based on the fusion of a conventional US image with elasticity and Doppler images. The proposed fusion method (FM) has been tested against four state-of-the-art initialization methods on 90 ultrasound images from a database collected by the Thammasat University Hospital of Thailand. The ground truth was hand-drawn by three leading radiologists of the hospital. The reference methods are: center of divergence (CoD), force field segmentation (FFS), Poisson Inverse Gradient Vector Flow (PIG), and quasi-automated initialization (QAI). A variety of numerical tests proves the advantages of the FM. For the raw US images, the percentage of correctly initialized contours is: FM-94.2%, CoD-0%, FFS-0%, PIG-26.7%, QAI-42.2%. The percentage of correctly segmented tumors is FM-84.4%, CoD-0%, FFS-0%, PIG-16.67%, QAI-22.44%. For reduced field of view US images, the percentage of correctly initialized contours is: FM-94.2%, CoD-0%, FFS-0%, PIG-65.6%, QAI-67.8%. The correctly segmented tumors are FM-88.9%, CoD-0%, FFS-0%, PIG-48.9%, QAI-44.5%. The accuracy, in terms of the average Hausdorff distance, is respectively 2.29 pixels, 33.81, 34.71, 7.7, and 8.4, whereas in terms of the Jaccard index, it is 0.9, 0.18, 0.19, 0.63, and 0.48.
Affiliation(s)
- Chadaporn Keatmanee
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand; Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Kazunori Kotani
- Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Stanislav S Makhanov
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand
22. Xu Y, Wang Y, Yuan J, Cheng Q, Wang X, Carson PL. Medical breast ultrasound image segmentation by machine learning. Ultrasonics 2019; 91:1-9. [PMID: 30029074] [DOI: 10.1016/j.ultras.2018.07.006]
Abstract
Breast cancer is the most commonly diagnosed cancer, alone accounting for 30% of all new cancer diagnoses in women and posing a threat to women's health. Segmentation of breast ultrasound images into functional tissues can aid tumor localization, breast density measurement, and assessment of treatment response, which is important to the clinical diagnosis of breast cancer. However, manually segmenting the ultrasound images, which is skill- and experience-dependent, leads to subjective diagnoses; in addition, it is time-consuming for radiologists to review hundreds of clinical images. Therefore, automatic segmentation of breast ultrasound images into functional tissues has received attention in recent years, amidst the more numerous studies of detection and segmentation of masses. In this paper, we propose to use convolutional neural networks (CNNs) for segmenting three-dimensional (3D) breast ultrasound images into four major tissues: skin, fibroglandular tissue, mass, and fatty tissue. Quantitative metrics for evaluation of the segmentation results, including Accuracy, Precision, Recall, and F1 measure, all reached over 80%, which indicates that the proposed method has the capacity to distinguish functional tissues in breast ultrasound images. Another metric, the Jaccard similarity index (JSI), reached 85.1%, outperforming our previous study using the watershed algorithm (74.54% JSI). Thus, our proposed method might have the potential to provide the segmentations necessary to assist the clinical diagnosis of breast cancer and improve imaging in other modes of medical ultrasound.
Affiliation(s)
- Yuan Xu
- Department of Electronic Science and Engineering, Nanjing University, Nanjing 210093, China
- Yuxin Wang
- Department of Electronic Science and Engineering, Nanjing University, Nanjing 210093, China
- Jie Yuan
- Department of Electronic Science and Engineering, Nanjing University, Nanjing 210093, China
- Qian Cheng
- Department of Physics, Tongji University, Shanghai 200000, China
- Xueding Wang
- Department of Physics, Tongji University, Shanghai 200000, China; Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Paul L Carson
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
23. Zheng Q, Warner S, Tasian G, Fan Y. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images. Acad Radiol 2018; 25:1136-1145. [PMID: 29449144] [DOI: 10.1016/j.acra.2018.01.004]
Abstract
RATIONALE AND OBJECTIVES Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. MATERIALS AND METHODS We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. RESULTS Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). CONCLUSIONS The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease.
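Texture feature maps like the Gabor responses mentioned above can be generated with scikit-image; the following sketch builds a small filter bank with assumed frequencies and orientations rather than the paper's exact configuration.

```python
import numpy as np
from skimage.filters import gabor

def gabor_feature_maps(image, frequencies=(0.1, 0.2, 0.4), n_orientations=4):
    """Stack of Gabor magnitude responses used as texture feature maps."""
    maps = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            maps.append(np.hypot(real, imag))  # magnitude response
    return np.stack(maps, axis=0)              # shape: (n_filters, H, W)

# features = gabor_feature_maps(np.random.rand(128, 128))
```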
24. Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:331-342. [PMID: 29330658] [DOI: 10.1007/s11548-018-1703-0]
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task and still under improvement because of the low signal-to-noise ratio, and the success of automatic methods is limited by their high sensitivity to noise. Therefore, an alternative brain tumor segmentation method for 3D-iUS data, using a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration, is presented in this paper. The aim is to enhance the visualization of the brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities by using automatic thresholding techniques. Third, the registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values and rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data, and its contours are displayed. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches. The outcomes showed that our approach reached better results in terms of computational time and accuracy than the comparative methods. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could be useful for brain tumor resection, supporting neurosurgeons in improving tumor border visualization in the iUS volumes.
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
- Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
- Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
25. Liu L, Li K, Qin W, Wen T, Li L, Wu J, Gu J. Automated breast tumor detection and segmentation with a novel computational framework of whole ultrasound images. Med Biol Eng Comput 2018; 56:183-199. [PMID: 29292471] [DOI: 10.1007/s11517-017-1770-3]
Abstract
Due to the low contrast and ambiguous boundaries of tumors in breast ultrasound (BUS) images, it is still a challenging task to automatically segment breast tumors from ultrasound. In this paper, we proposed a novel computational framework that can detect and segment breast lesions fully automatically in whole ultrasound images. This framework includes several key components: pre-processing, contour initialization, and tumor segmentation. In the pre-processing step, we applied a non-local low-rank (NLLR) filter to reduce speckle noise. In the contour initialization step, we cascaded a two-step Otsu-based adaptive thresholding (OBAT) algorithm with morphologic operations to effectively locate the tumor regions and initialize the tumor contours. Finally, given the initial tumor contours, the improved Chan-Vese model based on the ratio of exponentially weighted averages (CV-ROEWA) was applied. This pipeline was tested on a set of 61 BUS images with diagnosed tumors. The experimental results on clinical ultrasound images demonstrate the high accuracy and robustness of the proposed framework, indicating its potential applications in clinical practice.
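A toy version of the initialization-plus-refinement idea (Otsu-based thresholding with morphological clean-up, followed by a Chan-Vese active contour) is sketched below using scikit-image; it is a simplified stand-in for the OBAT and CV-ROEWA steps, with placeholder parameters.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.segmentation import chan_vese

def rough_tumor_contour(img):
    """Toy pipeline: Otsu threshold + morphological opening to initialise, then Chan-Vese."""
    # Hypoechoic tumours appear dark, so keep pixels below the Otsu threshold
    init = img < threshold_otsu(img)
    init = binary_opening(init, disk(3))  # remove small speckle responses
    # Chan-Vese active-contour refinement starting from the rough mask as a level set
    level_set = np.where(init, 1.0, -1.0)
    return chan_vese(img, init_level_set=level_set, max_num_iter=200)

# seg = rough_tumor_contour(np.random.rand(128, 128))
```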
Affiliation(s)
- Lei Liu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
- Kai Li
- Department of Medical Ultrasonics, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510630, People's Republic of China
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China; University of Chinese Academy of Sciences, Beijing, 100049, People's Republic of China
- Tiexiang Wen
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
- Ling Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
- Jia Wu
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Jia Gu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, People's Republic of China
26.
Abstract
Machine learning techniques such as artificial neural networks (ANN), support vector machines (SVM), radial basis function networks (RBFN), random forests (RF), and naive Bayes classifiers have gained much attention in recent years due to their widespread applications in diverse fields. This chapter focuses on providing comprehensive insight into the various techniques employed for key areas of medical image processing and analysis. The applications covered in this chapter include feature extraction, feature selection, and cancer classification in medical images. The authors present current practices and evaluation measures used for objective evaluation of different machine learning methods in the context of the above-mentioned applications. Various factors associated with the acceptance or rejection of such automated systems by the medical research community are discussed. The authors also discuss how the interaction between automated analysis systems and medical professionals can be improved to foster acceptance in clinical practice. They conclude the chapter by presenting research gaps and future challenges.
27. Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. [PMID: 29247890] [DOI: 10.1016/j.compbiomed.2017.11.018]
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then insight into the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixels and classification, local pixel statistics, active contours, edge tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode-based segmentation, such as the integration of RF information, the employment of higher-frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data, are discussed.
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
28. Kuo JW, Mamou J, Wang Y, Saegusa-Beecroft E, Machi J, Feleppa EJ. Segmentation of 3-D High-Frequency Ultrasound Images of Human Lymph Nodes Using Graph Cut With Energy Functional Adapted to Local Intensity Distribution. IEEE Trans Ultrason Ferroelectr Freq Control 2017; 64:1514-1525. [PMID: 28796617] [PMCID: PMC5913754] [DOI: 10.1109/tuffc.2017.2737948]
Abstract
Previous studies by our group have shown that 3-D high-frequency quantitative ultrasound (QUS) methods have the potential to differentiate metastatic lymph nodes (LNs) from cancer-free LNs dissected from human cancer patients. To successfully perform these methods inside the LN parenchyma (LNP), an automatic segmentation method is highly desired to exclude the surrounding thin layer of fat from QUS processing and accurately correct for ultrasound attenuation. In high-frequency ultrasound images of LNs, the intensity distribution of LNP and fat varies spatially because of acoustic attenuation and focusing effects. Thus, the intensity contrast between two object regions (e.g., LNP and fat) is also spatially varying. In our previous work, nested graph cut (GC) demonstrated its ability to simultaneously segment LNP, fat, and the outer phosphate-buffered saline bath even when some boundaries are lost because of acoustic attenuation and focusing effects. This paper describes a novel approach called GC with locally adaptive energy to further deal with spatially varying distributions of LNP and fat caused by inhomogeneous acoustic attenuation. The proposed method achieved Dice similarity coefficients of 0.937±0.035 when compared with expert manual segmentation on a representative data set consisting of 115 3-D LN images obtained from colorectal cancer patients.
29. Sadeghi-Naini A, Sannachi L, Tadayyon H, Tran WT, Slodkowska E, Trudeau M, Gandhi S, Pritchard K, Kolios MC, Czarnota GJ. Chemotherapy-Response Monitoring of Breast Cancer Patients Using Quantitative Ultrasound-Based Intra-Tumour Heterogeneities. Sci Rep 2017; 7:10352. [PMID: 28871171] [PMCID: PMC5583340] [DOI: 10.1038/s41598-017-09678-0]
Abstract
Anti-cancer therapies including chemotherapy aim to induce tumour cell death. Cell death introduces alterations in cell morphology and tissue micro-structures that cause measurable changes in tissue echogenicity. This study investigated the effectiveness of quantitative ultrasound (QUS) parametric imaging to characterize intra-tumour heterogeneity and monitor the pathological response of breast cancer to chemotherapy in a large cohort of patients (n = 100). Results demonstrated that QUS imaging can non-invasively monitor pathological response and outcome of breast cancer patients to chemotherapy early following treatment initiation. Specifically, QUS biomarkers quantifying spatial heterogeneities in size, concentration and spacing of acoustic scatterers could predict treatment responses of patients with cross-validated accuracies of 82 ± 0.7%, 86 ± 0.7% and 85 ± 0.9% and areas under the receiver operating characteristic (ROC) curve of 0.75 ± 0.1, 0.80 ± 0.1 and 0.89 ± 0.1 at 1, 4 and 8 weeks after the start of treatment, respectively. The patients classified as responders and non-responders using QUS biomarkers demonstrated significantly different survivals, in good agreement with clinical and pathological endpoints. The results form a basis for using early predictive information on survival-linked patient response to facilitate adapting standard anti-cancer treatments on an individual patient basis.
Affiliation(s)
- Ali Sadeghi-Naini
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Lakshmanan Sannachi
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Hadi Tadayyon
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- William T Tran
- Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Centre for Health and Social Care Research, Sheffield Hallam University, Sheffield, UK
- Elzbieta Slodkowska
- Division of Anatomic Pathology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Maureen Trudeau
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Sonal Gandhi
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Kathleen Pritchard
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Gregory J Czarnota
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Physical Sciences, Sunnybrook Research Institute, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Radiation Oncology, Odette Cancer Centre, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
30. Feng Y, Dong F, Xia X, Hu CH, Fan Q, Hu Y, Gao M, Mutic S. An adaptive Fuzzy C-means method utilizing neighboring information for breast tumor segmentation in ultrasound images. Med Phys 2017; 44:3752-3760. [PMID: 28513858] [DOI: 10.1002/mp.12350]
Abstract
PURPOSE Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to automatic segmentation of the breast tumor. Therefore, the purpose of this study is to propose a segmentation algorithm that can contour the breast tumor in US images. METHODS To utilize the neighborhood information of each pixel, a Hausdorff-distance-based fuzzy c-means (FCM) method was adopted. The size of the neighborhood region was adaptively updated by comparing the mutual information between regions, and the objective function of the clustering process was updated using a combination of the Euclidean distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparison with three experts' manual segmentations, and were also compared with a kernel-induced-distance-based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. RESULTS Results from segmenting 30 patient images showed that the adaptive method achieved sensitivity, specificity, Jaccard similarity, and Dice coefficient values of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All metrics except sensitivity were better than those of the non-adaptive algorithm and conventional FCM; only the three region-based metrics were better than those of the kernel-induced-distance-based FCM with spatial constraints. CONCLUSION Adaptively including pixel neighborhood information when segmenting US images improved segmentation performance. The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures.
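For context, plain fuzzy c-means clustering on pixel intensities looks roughly like the following; the paper's method additionally uses adaptively sized neighborhoods and a Hausdorff-distance term, which this sketch omits.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain FCM on a 1-D feature vector x (e.g. flattened pixel intensities)."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_clusters, x.size))
    u /= u.sum(axis=0)                                    # fuzzy memberships, columns sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)               # weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9  # distances to centres (1-D case)
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1))).sum(axis=1)
    return centers, u

# intensities = image.reshape(-1); centers, u = fuzzy_c_means(intensities)
```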
Collapse
Affiliation(s)
- Yuan Feng
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China
- School of Mechanical and Electronic Engineering, Soochow University, Suzhou, Jiangsu, 215021, China
- School of Computer Science and Engineering, Soochow University, Suzhou, Jiangsu, 215021, China
| | - Fenglin Dong
- Department of Ultrasounds, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
| | - Xiaolong Xia
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China
| | - Chun-Hong Hu
- Department of Radiology, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
| | - Qianmin Fan
- Department of Ultrasounds, the First Affiliated Hospital of Soochow University, 188 Shizi Street, Suzhou, 215006, China
| | - Yanle Hu
- Department of Radiation Oncology, Mayo Clinic in Arizona, Phoenix, AZ, USA
| | - Mingyuan Gao
- Center for Molecular Imaging and Nuclear Medicine, School of Radiological and Interdisciplinary Sciences (RAD-X), Soochow University, Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Suzhou, Jiangsu, 215123, China
| | - Sasa Mutic
- Department of Radiation Oncology, Washington University, St. Louis, MO, USA
| |
Collapse
|
31
|
A Novel Segmentation Approach Combining Region- and Edge-Based Information for Ultrasound Images. BIOMED RESEARCH INTERNATIONAL 2017; 2017:9157341. [PMID: 28536703 PMCID: PMC5426079 DOI: 10.1155/2017/9157341] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 01/21/2017] [Accepted: 03/14/2017] [Indexed: 11/17/2022]
Abstract
Ultrasound imaging has become one of the most popular medical imaging modalities, with numerous diagnostic applications. However, ultrasound (US) image segmentation, an essential step for further analysis, is a challenging task due to poor image quality. In this paper, we propose a new segmentation scheme that combines both region- and edge-based information within the robust graph-based (RGB) segmentation method. The only interaction required is to select two diagonal points to determine a region of interest (ROI) on the original image. The ROI image is smoothed by a bilateral filter and then contrast-enhanced by histogram equalization. The enhanced image is then filtered by pyramid mean shift to improve homogeneity. With parameters optimized by the particle swarm optimization (PSO) algorithm, the RGB segmentation method is applied to segment the filtered image. The segmentation results of our method have been compared with those obtained by three existing approaches, and four metrics have been used to measure segmentation performance. The experimental results show that the method achieves the best overall performance, with the lowest ARE (10.77%), the second-highest TPVF (85.34%), and the second-lowest FPVF (4.48%).
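A rough sketch of the preprocessing chain described above (ROI crop, bilateral smoothing, histogram equalization, pyramid mean shift), using standard OpenCV calls. The robust graph-based segmentation step and the PSO parameter search are not included, and the filter parameters below are illustrative assumptions.

```python
# Sketch of the preprocessing chain only; the RGB segmentation and PSO tuning are omitted.
# Expects an 8-bit grayscale image; filter parameters are illustrative.
import cv2

def preprocess_roi(image_gray, p1, p2):
    x0, y0 = p1
    x1, y1 = p2
    roi = image_gray[min(y0, y1):max(y0, y1), min(x0, x1):max(x0, x1)]
    smoothed = cv2.bilateralFilter(roi, d=9, sigmaColor=75, sigmaSpace=75)  # edge-preserving smoothing
    enhanced = cv2.equalizeHist(smoothed)                                   # contrast enhancement
    color = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)                      # mean shift expects 3 channels
    homogeneous = cv2.pyrMeanShiftFiltering(color, sp=10, sr=20)            # improve regional homogeneity
    return homogeneous
```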
Collapse
|
32
|
Interactive Outlining of Pancreatic Cancer Liver Metastases in Ultrasound Images. Sci Rep 2017; 7:892. [PMID: 28420871 PMCID: PMC5429849 DOI: 10.1038/s41598-017-00940-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2016] [Accepted: 03/20/2017] [Indexed: 02/01/2023] Open
Abstract
Ultrasound (US) is the most commonly used liver imaging modality worldwide. Due to its low cost, it is increasingly used in the follow-up of cancer patients with metastases localized in the liver. In this contribution, we present the results of an interactive segmentation approach for liver metastases in US acquisitions. (Semi-)automatic segmentation remains very challenging because of the low image quality and the low contrast between the metastasis and the surrounding liver tissue. Thus, the state of the art in clinical practice is still manual measurement and outlining of the metastases in the US images. We tackle the problem with an interactive segmentation approach that provides real-time feedback of the segmentation results. The approach has been evaluated on typical US acquisitions from the clinical routine, with datasets consisting of pancreatic cancer metastases. Even for difficult cases, satisfactory segmentation results could be achieved because of the interactive real-time behavior of the approach. In total, 40 clinical images have been evaluated with our method by comparing the results against manual ground truth segmentations. This evaluation yielded an average Dice score of 85% and an average Hausdorff distance of 13 pixels.
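The two metrics reported above (Dice score and Hausdorff distance in pixels) can be computed from binary masks as in the following sketch; this is only the evaluation step, not the interactive segmentation method itself.

```python
# Sketch of the evaluation metrics: Dice score and Hausdorff distance (pixels)
# between a predicted mask and a manual ground-truth mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff_pixels(pred, truth):
    a = np.argwhere(pred.astype(bool))    # (row, col) coordinates of foreground pixels
    b = np.argwhere(truth.astype(bool))
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```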
Collapse
|
33
|
Xiong H, Sultan LR, Cary TW, Schultz SM, Bouzghar G, Sehgal CM. The diagnostic performance of leak-plugging automated segmentation versus manual tracing of breast lesions on ultrasound images. ULTRASOUND : JOURNAL OF THE BRITISH MEDICAL ULTRASOUND SOCIETY 2017; 25:98-106. [PMID: 28567104 DOI: 10.1177/1742271x17690425] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2016] [Accepted: 12/08/2016] [Indexed: 11/15/2022]
Abstract
PURPOSE To assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images.
MATERIALS AND METHODS Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentations was compared by the size of the lesion, the overlap area (Oa) between the margins, and the area under the ROC curve (Az).
RESULTS The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R2 of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between the three observers was 0.29% for leak plugging compared to 1.3% for manual tracings.
CONCLUSION The diagnostic performance, size measurements, and observer variability of the automated leak-plugging segmentations were comparable to or better than those of manual tracings.
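The downstream classification step (features per lesion, logistic regression, Az as area under the ROC curve) can be sketched with scikit-learn as below. The leak-plugging segmentation itself is not reproduced, and `features`/`labels` are placeholder inputs.

```python
# Sketch of the classification/evaluation step only: lesion features are classified as
# benign vs. malignant by logistic regression and summarized by Az (ROC AUC).
# `features` and `labels` are placeholders; the segmentation method is not reproduced.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def classify_and_score(features: np.ndarray, labels: np.ndarray) -> float:
    model = LogisticRegression(max_iter=1000).fit(features, labels)
    scores = model.predict_proba(features)[:, 1]   # predicted malignancy probability
    return roc_auc_score(labels, scores)           # Az, computed here on the training data (illustrative)
```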
Collapse
Affiliation(s)
- Hui Xiong
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - Laith R Sultan
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - Theodore W Cary
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - Susan M Schultz
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - Ghizlane Bouzghar
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| | - Chandra M Sehgal
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
| |
Collapse
|
34
|
Breast ultrasound image segmentation: a survey. Int J Comput Assist Radiol Surg 2017; 12:493-507. [DOI: 10.1007/s11548-016-1513-1] [Citation(s) in RCA: 69] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 12/15/2016] [Indexed: 10/20/2022]
|
35
|
Systematic Evaluation on Speckle Suppression Methods in Examination of Ultrasound Breast Images. APPLIED SCIENCES-BASEL 2016. [DOI: 10.3390/app7010037] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
36
|
Prabusankarlal KM, Thirumoorthy P, Manavalan R. Segmentation of Breast Lesions in Ultrasound Images through Multiresolution Analysis Using Undecimated Discrete Wavelet Transform. ULTRASONIC IMAGING 2016; 38:384-402. [PMID: 26586725 DOI: 10.1177/0161734615615838] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Early detection and diagnosis of breast cancer reduces patient mortality by increasing the treatment options. A novel method for the segmentation of breast ultrasound images is proposed in this work. The proposed method uses the undecimated discrete wavelet transform to perform multiresolution analysis of the input ultrasound image. As the resolution level increases, the effect of noise is reduced, but the details of the image are also diluted. The appropriate resolution level, which contains the essential details of the tumor, is automatically selected through mean structural similarity. The feature vector for each pixel is constructed by sampling intra-resolution and inter-resolution data of the image. The dimensionality of the feature vectors is reduced using principal component analysis. The reduced set of feature vectors is segmented into two disjoint clusters using a spatially regularized fuzzy c-means algorithm. The proposed algorithm is evaluated using four validation metrics on a breast ultrasound database of 150 images, including 90 benign and 60 malignant cases. The algorithm produced significantly better segmentation results (Dice coefficient = 0.8595, boundary displacement error = 9.796, dvi = 1.744, and global consistency error = 0.1835) than the other three state-of-the-art methods.
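The resolution-selection idea (keep the coarsest level that still preserves essential detail, judged by mean structural similarity against the original image) can be sketched as follows. Gaussian smoothing stands in here for the undecimated wavelet approximations used in the paper, and the PCA and spatially regularized FCM stages are omitted; the sigma schedule and SSIM threshold are illustrative assumptions.

```python
# Sketch of SSIM-based level selection; Gaussian smoothing is a stand-in for the
# undecimated wavelet approximations, and the threshold/sigmas are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

def select_resolution_level(image, sigmas=(1, 2, 4, 8), ssim_threshold=0.6):
    image = image.astype(float)
    best = image
    for sigma in sigmas:
        smoothed = gaussian_filter(image, sigma)
        score = structural_similarity(image, smoothed,
                                      data_range=image.max() - image.min())
        if score < ssim_threshold:   # too much detail lost; keep the previous level
            break
        best = smoothed              # noise reduced, essential detail still preserved
    return best
```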
Collapse
Affiliation(s)
- K M Prabusankarlal
- Research and Development Centre, Bharathiar University, Coimbatore, India
- Department of Electronics & Communication, K.S.R. College of Arts & Science, Tiruchengode, India
| | - P Thirumoorthy
- Department of Electronics & Communication, Government Arts College, Dharmapuri, India
| | - R Manavalan
- Department of Computer Applications, K.S.R. College of Arts & Science, Tiruchengode, India
| |
Collapse
|
37
|
Zhang D, Liu Y, Yang Y, Xu M, Yan Y, Qin Q. A region-based segmentation method for ultrasound images in HIFU therapy. Med Phys 2016; 43:2975-2989. [DOI: 10.1118/1.4950706] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
|
38
|
Wang K, Ma C. A robust statistics driven volume-scalable active contour for segmenting anatomical structures in volumetric medical images with complex conditions. Biomed Eng Online 2016; 15:39. [PMID: 27074891 PMCID: PMC4831199 DOI: 10.1186/s12938-016-0153-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2015] [Accepted: 04/01/2016] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Accurate segmentation of anatomical structures in medical images is a critical step in the development of computer-assisted intervention systems. However, complex image conditions, such as intensity inhomogeneity, noise, and weak object boundaries, often cause considerable difficulties in medical image segmentation. To cope with these difficulties, we propose a novel robust statistics driven volume-scalable active contour framework to extract the desired object boundary from magnetic resonance (MR) and computed tomography (CT) imagery in 3D.
METHODS We define an energy functional in terms of the initial seeded labels and two fitting functions derived from local robust statistics features of the object. This energy is then incorporated into a level set scheme that drives the active contour to evolve and converge at the desired position of the object boundary. Owing to the local robust statistics and the volume-scaling function in the energy fitting term, the object features in local volumes are learned adaptively to guide the motion of the contours, which guarantees the capability of our method to cope with intensity inhomogeneity, noise, and weak boundaries. In addition, the initialization of the active contour is simplified to selecting several seeds in the object and/or background, which eliminates the sensitivity to initialization.
RESULTS The proposed method was applied to extensive publicly available volumetric medical images with challenging image conditions. The segmentation results for various anatomical structures, such as white matter (WM), atrium, caudate nucleus, and brain tumor, were evaluated quantitatively by comparison with the corresponding ground truths. The proposed method achieved consistent and coherent segmentation accuracy of 0.9246 ± 0.0068 for WM, 0.9043 ± 0.0131 for liver tumors, 0.8725 ± 0.0374 for caudate nucleus, 0.8802 ± 0.0595 for brain tumors, etc., measured by the Dice similarity coefficient for the overlap between the algorithm's result and the ground truth. Further comparative experiments showed desirable performance of the proposed method over several well-known segmentation methods in terms of accuracy and robustness.
CONCLUSION We proposed an approach for accurate segmentation of volumetric medical images with complex conditions. The accuracy of segmentation and the robustness to noise and contour initialization were validated on extensive MR and CT volumes.
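As a point of reference only, the sketch below runs a generic region-based active contour on a volume using scikit-image's morphological Chan-Vese implementation. It does not implement the paper's robust-statistics fitting functions, volume-scaling term, or seed-based initialization; the checkerboard initialization and iteration count are arbitrary assumptions.

```python
# Illustrative region-based active contour on a 3D volume (morphological Chan-Vese),
# used here only as a stand-in; the paper's robust-statistics energy is NOT implemented.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def segment_volume(volume: np.ndarray, iterations: int = 100) -> np.ndarray:
    volume = volume.astype(float)
    # second positional argument is the number of iterations
    mask = morphological_chan_vese(volume, iterations,
                                   init_level_set="checkerboard", smoothing=2)
    return mask.astype(bool)
```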
Collapse
Affiliation(s)
- Kuanquan Wang
- School of Computer Science and Technology, Biocomputing Research Center, Harbin Institute of Technology, Harbin, China.
| | - Chao Ma
- School of Computer Science and Technology, Biocomputing Research Center, Harbin Institute of Technology, Harbin, China
| |
Collapse
|
39
|
Gómez W, Pereira W, Infantosi A. Evolutionary pulse-coupled neural network for segmenting breast lesions on ultrasonography. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.04.121] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
40
|
Wang B, Gao X, Li J, Li X, Tao D. A level set method with shape priors by using locality preserving projections. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.07.086] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
41
|
Huang Q, Yang F, Liu L, Li X. Automatic segmentation of breast lesions for interaction in ultrasonic computer-aided diagnosis. Inf Sci (N Y) 2015. [DOI: 10.1016/j.ins.2014.08.021] [Citation(s) in RCA: 72] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
42
|
Watershed based intelligent scissors. Comput Med Imaging Graph 2015; 43:122-9. [DOI: 10.1016/j.compmedimag.2015.01.003] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 11/26/2014] [Accepted: 01/09/2015] [Indexed: 11/17/2022]
|
43
|
Xu M, Zhang D, Yang Y, Liu Y, Yuan Z, Qin Q. A Split-and-Merge-Based Uterine Fibroid Ultrasound Image Segmentation Method in HIFU Therapy. PLoS One 2015; 10:e0125738. [PMID: 25973906 PMCID: PMC4431844 DOI: 10.1371/journal.pone.0125738] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2014] [Accepted: 03/26/2015] [Indexed: 11/19/2022] Open
Abstract
High-intensity focused ultrasound (HIFU) therapy has been used to treat uterine fibroids widely and successfully. Uterine fibroid segmentation plays an important role in positioning the target region for HIFU therapy. Presently, it is performed manually by physicians, reducing the efficiency of therapy. Computer-aided segmentation of uterine fibroids can therefore improve therapy efficiency. Recently, most computer-aided ultrasound segmentation methods have been based on the framework of contour evolution, such as snakes and level sets. These methods can achieve good performance, although they need an initial contour that influences the segmentation results. It is difficult to obtain the initial contour automatically; thus, the initial contour is usually obtained manually in many segmentation methods. A split-and-merge-based uterine fibroid segmentation method, which needs no initial contour and thus requires less manual intervention, is proposed in this paper. The method first splits the image into many small homogeneous regions called superpixels. A new feature representation method based on texture histograms is employed to characterize each superpixel. Next, the superpixels are merged according to their similarities, which are measured by integrating their Quadratic-Chi texture histogram distances with their spatial adjacency. Multi-way Ncut is used as the merging criterion, and an adaptive scheme is incorporated to further decrease manual intervention. The method is implemented using Matlab on a personal computer (PC) platform with an Intel Pentium Dual-Core E5700 CPU. The method is validated on forty-two ultrasound images acquired from HIFU therapy. The average running time is 9.54 s. Statistical results showed that SI reaches a value as high as 87.58%, and normHD is 5.18% on average. It has been demonstrated that the proposed method is appropriate for segmentation of uterine fibroids in HIFU pre-treatment imaging and planning.
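The "split" stage and the histogram-based region similarity can be sketched as below: the image is over-segmented into SLIC superpixels, each superpixel is described by a normalized intensity histogram, and a chi-square distance (a simplification of the paper's Quadratic-Chi texture distance) measures how similar two regions are. The multi-way Ncut merging criterion and the spatial-adjacency bookkeeping are omitted; the superpixel count and bin count are illustrative.

```python
# Sketch of the "split" stage and a simplified region similarity; the Ncut-based
# merging is omitted. channel_axis=None marks the input as a grayscale image
# (scikit-image >= 0.19); parameters are illustrative.
import numpy as np
from skimage.segmentation import slic

def split_into_superpixels(image, n_segments=300):
    return slic(image, n_segments=n_segments, compactness=0.1, channel_axis=None)

def region_histogram(image, labels, region_id, bins=32):
    values = image[labels == region_id]
    hist, _ = np.histogram(values, bins=bins,
                           range=(image.min(), image.max()), density=True)
    return hist / (hist.sum() + 1e-12)

def chi_square_distance(h1, h2):
    # simplification of the Quadratic-Chi histogram distance used in the paper
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))
```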
Collapse
Affiliation(s)
- Menglong Xu
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
| | - Dong Zhang
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
| | - Yan Yang
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
| | - Yu Liu
- School of Physics and Technology, Wuhan University, Wuhan, Hubei, China
| | - Zhiyong Yuan
- School of Computer, Wuhan University, Wuhan, Hubei, China
| | - Qianqing Qin
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, Hubei, China
| |
Collapse
|
44
|
Zhou Z, Wu S, Chang KJ, Chen WR, Chen YS, Kuo WH, Lin CC, Tsui PH. Classification of Benign and Malignant Breast Tumors in Ultrasound Images with Posterior Acoustic Shadowing Using Half-Contour Features. J Med Biol Eng 2015; 35:178-187. [PMID: 25960706 PMCID: PMC4414937 DOI: 10.1007/s40846-015-0031-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2014] [Accepted: 06/16/2014] [Indexed: 12/01/2022]
Abstract
Posterior acoustic shadowing (PAS) can bias breast tumor segmentation and classification in ultrasound images. In this paper, half-contour features are proposed to classify benign and malignant breast tumors with PAS, considering the fact that the upper half of the tumor contour is less affected by PAS. Adaptive thresholding and disk expansion are employed to detect tumor contours. Based on the detected full contour, the upper half contour is extracted. For breast tumor classification, six quantitative feature parameters are analyzed for both full contours and half contours, including standard deviation of degree (SDD), which is proposed to describe tumor irregularity. Fifty clinical cases (40 with PAS and 10 without PAS) were used. Tumor circularity (TC) and SDD were both effective full- and half-contour parameters in classifying images without PAS. Half-contour TC [74 % accuracy, 72 % sensitivity, 76 % specificity, 0.78 area under the receiver operating characteristic curve (AUC), p > 0.05] significantly improved the classification of breast tumors with PAS compared to that with full-contour TC (54 % accuracy, 56 % sensitivity, 52 % specificity, 0.52 AUC, p > 0.05). Half-contour SDD (72 % accuracy, 76 % sensitivity, 68 % specificity, 0.81 AUC, p < 0.05) improved the classification of breast tumors with PAS compared to that with full-contour SDD (62 % accuracy, 80 % sensitivity, 44 % specificity, 0.61 AUC, p > 0.05). The proposed half-contour TC and SDD may be useful in classifying benign and malignant breast tumors in ultrasound images affected by PAS.
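The tumor circularity (TC) feature is the standard shape measure TC = 4πA/P², and the half-contour idea amounts to keeping only the contour points above the centroid (the part least affected by shadowing). A minimal sketch using OpenCV follows; the SDD feature is omitted here because its exact definition is given in the paper, and the centroid-row split is an assumption about how the upper half is chosen.

```python
# Sketch: tumor circularity (4*pi*Area/Perimeter^2) and a simple upper-half-contour
# extraction (points above the centroid row). SDD is not implemented here.
import numpy as np
import cv2

def largest_contour(mask: np.ndarray) -> np.ndarray:
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    return max(contours, key=cv2.contourArea)

def tumor_circularity(mask: np.ndarray) -> float:
    contour = largest_contour(mask)
    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, closed=True)
    return 4.0 * np.pi * area / (perimeter ** 2 + 1e-12)

def upper_half_contour(mask: np.ndarray) -> np.ndarray:
    contour = largest_contour(mask).reshape(-1, 2)   # (x, y) points
    centroid_y = contour[:, 1].mean()
    return contour[contour[:, 1] < centroid_y]       # points above the centroid row
```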
Collapse
Affiliation(s)
- Zhuhuang Zhou
- Biomedical Engineering Center, College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124 China
| | - Shuicai Wu
- Biomedical Engineering Center, College of Life Science and Bioengineering, Beijing University of Technology, Beijing, 100124 China
| | - King-Jen Chang
- Department of Surgery, Cheng Ching General Hospital, Chung Kang Branch, Taichung, 407 Taiwan
- Department of Surgery, National Taiwan University Hospital, Taipei, 10048 Taiwan
| | - Wei-Ren Chen
- Department of Electrical Engineering, Yuan Ze University, Chung Li, 32003 Taiwan
| | - Yung-Sheng Chen
- Department of Electrical Engineering, Yuan Ze University, Chung Li, 32003 Taiwan
| | - Wen-Hung Kuo
- Department of Surgery, National Taiwan University Hospital, Taipei, 10048 Taiwan
| | - Chung-Chih Lin
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, 33302 Taiwan
| | - Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, 33302 Taiwan
- Institute of Radiological Research, Chang Gung University and Hospital, Taoyuan, 33302 Taiwan
| |
Collapse
|
45
|
Chang H, Chen Z, Huang Q, Shi J, Li X. Graph-based learning for segmentation of 3D ultrasound images. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.05.092] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
46
|
|
47
|
Ning H, Yang R, Ma A, Wu X. Interactive 3D medical data cutting using closed curve with arbitrary shape. Comput Med Imaging Graph 2014; 40:120-7. [PMID: 25456145 DOI: 10.1016/j.compmedimag.2014.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2013] [Revised: 08/31/2014] [Accepted: 10/06/2014] [Indexed: 10/24/2022]
Abstract
Interactive 3D cutting is widely used as a flexible manual segmentation tool to extract regions of interest from medical data. A novel method for clipping 3D medical data is proposed to reveal the interior of volumetric data. The 3D cutting method retains or clips away the voxels that project inside a closed curve of arbitrary shape; this curve is the clipping geometry, constructed with an interactive tool to make the cutting operation more flexible. The transformation between the world and screen coordinate frames is studied to project voxels of the medical data onto the screen frame and to avoid computing the intersection of the clipping geometry and the volumetric data in 3D space. To facilitate the decision on whether a voxel should be retained, all voxels are projected through this coordinate transformation onto a binary mask image in the screen frame, onto which the closed curve is also projected, so that the intersecting voxels can be obtained conveniently. The paper pays special attention to the optimization of the cutting process. An optimization algorithm that mixes octree with quad-tree decomposition is introduced to reduce computational complexity, save computation time, and achieve real-time performance. The paper presents results obtained from raw and segmented medical volume datasets, together with the processing time of the cutting operation.
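The projection-and-mask test described above can be sketched as follows: voxel centres are projected to screen coordinates with a combined view-projection matrix, the interactively drawn closed curve is rasterized into a binary mask, and each voxel is kept or clipped according to the mask value at its projected pixel. The octree/quad-tree optimization is omitted, and the sketch assumes the matrix maps world coordinates directly to pixel coordinates after the perspective divide; the matrix and curve are placeholders.

```python
# Sketch of the voxel-selection step only; the octree/quad-tree acceleration is omitted.
# Assumes view_proj maps world coordinates to pixel coordinates after perspective divide.
import numpy as np
import cv2

def select_voxels(voxel_xyz: np.ndarray, view_proj: np.ndarray, curve_xy: np.ndarray,
                  screen_w: int, screen_h: int) -> np.ndarray:
    # rasterize the closed curve into a binary mask on the screen frame
    mask = np.zeros((screen_h, screen_w), dtype=np.uint8)
    cv2.fillPoly(mask, [curve_xy.astype(np.int32)], 1)

    # project voxel centres (homogeneous coordinates) onto the screen
    ones = np.ones((voxel_xyz.shape[0], 1))
    proj = np.hstack([voxel_xyz, ones]) @ view_proj.T
    proj = proj[:, :2] / proj[:, 3:4]                   # perspective divide
    px = np.clip(proj[:, 0].round().astype(int), 0, screen_w - 1)
    py = np.clip(proj[:, 1].round().astype(int), 0, screen_h - 1)

    return mask[py, px].astype(bool)                    # True = projects inside the curve
```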
Collapse
Affiliation(s)
- Hai Ning
- Department of Biomedical Engineering, South China University of Technology, 510006 Guangzhou, China.
| | - Rongqian Yang
- Department of Biomedical Engineering, South China University of Technology, 510006 Guangzhou, China.
| | - Amin Ma
- Department of Biomedical Engineering, South China University of Technology, 510006 Guangzhou, China.
| | - Xiaoming Wu
- Department of Biomedical Engineering, South China University of Technology, 510006 Guangzhou, China.
| |
Collapse
|
48
|
Zhou Z, Wu W, Wu S, Tsui PH, Lin CC, Zhang L, Wang T. Semi-automatic breast ultrasound image segmentation based on mean shift and graph cuts. ULTRASONIC IMAGING 2014; 36:256-276. [PMID: 24759696 DOI: 10.1177/0161734614524735] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
Computerized tumor segmentation on breast ultrasound (BUS) images remains a challenging task. In this paper, we proposed a new method for semi-automatic tumor segmentation on BUS images using Gaussian filtering, histogram equalization, mean shift, and graph cuts. The only interaction required was to select two diagonal points to determine a region of interest (ROI) on an input image. The ROI image was shrunken by a factor of 2 using bicubic interpolation to reduce computation time. The shrunken image was smoothed by a Gaussian filter and then contrast-enhanced by histogram equalization. Next, the enhanced image was filtered by pyramid mean shift to improve homogeneity. The object and background seeds for graph cuts were automatically generated on the filtered image. Using these seeds, the filtered image was then segmented by graph cuts into a binary image containing the object and background. Finally, the binary image was expanded by a factor of 2 using bicubic interpolation, and the expanded image was processed by morphological opening and closing to refine the tumor contour. The method was implemented with OpenCV 2.4.3 and Visual Studio 2010 and tested for 38 BUS images with benign tumors and 31 BUS images with malignant tumors from different ultrasound scanners. Experimental results showed that our method had a true positive rate (TP) of 91.7%, a false positive (FP) rate of 11.9%, and a similarity (SI) rate of 85.6%. The mean run time on Intel Core 2.66 GHz CPU and 4 GB RAM was 0.49 ± 0.36 s. The experimental results indicate that the proposed method may be useful in BUS image segmentation.
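The overall pipeline described above can be approximated with standard OpenCV calls, as in the sketch below. OpenCV's GrabCut (a graph-cuts formulation) stands in for the paper's seeded graph-cuts step, a rectangle inside the ROI replaces the automatically generated seeds, and all parameters are illustrative assumptions rather than the authors' settings.

```python
# Sketch of the BUS pipeline: shrink, smooth, equalize, pyramid mean shift, graph cuts
# (GrabCut as a stand-in for the seeded formulation), expand, morphological refinement.
import numpy as np
import cv2

def segment_bus_roi(image_gray: np.ndarray, roi_rect: tuple) -> np.ndarray:
    x, y, w, h = roi_rect
    roi = image_gray[y:y + h, x:x + w]
    small = cv2.resize(roi, (w // 2, h // 2), interpolation=cv2.INTER_CUBIC)  # shrink by 2
    smoothed = cv2.GaussianBlur(small, (5, 5), 0)
    enhanced = cv2.equalizeHist(smoothed)
    color = cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)
    filtered = cv2.pyrMeanShiftFiltering(color, sp=10, sr=20)                 # improve homogeneity

    # GrabCut initialized from a rectangle just inside the ROI (stand-in for seeded graph cuts)
    mask = np.zeros(filtered.shape[:2], dtype=np.uint8)
    rect = (5, 5, filtered.shape[1] - 10, filtered.shape[0] - 10)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(filtered, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)

    # expand back by 2 and refine the contour with morphological opening and closing
    binary = cv2.resize(binary, (w, h), interpolation=cv2.INTER_CUBIC)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    return binary > 0
```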
Collapse
Affiliation(s)
- Zhuhuang Zhou
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
| | - Weiwei Wu
- College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing, China
| | - Shuicai Wu
- College of Life Science and Bioengineering, Beijing University of Technology, Beijing, China
| | - Po-Hsiang Tsui
- Department of Medical Imaging and Radiological Sciences, College of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Chung-Chih Lin
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
| | - Ling Zhang
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, Guangdong, China
| | - Tianfu Wang
- Department of Biomedical Engineering, Shenzhen University, Shenzhen, Guangdong, China
| |
Collapse
|
49
|
Li Y, Liu W, Li X, Huang Q, Li X. GA-SIFT: A new scale invariant feature transform for multispectral image using geometric algebra. Inf Sci (N Y) 2014. [DOI: 10.1016/j.ins.2013.12.022] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
50
|
Fu J, Wei G, Huang Q, Ji F, Feng Y. Barker coded excitation with linear frequency modulated carrier for ultrasonic imaging. Biomed Signal Process Control 2014. [DOI: 10.1016/j.bspc.2014.06.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|